
Dropbox API Support & Feedback

Find help with the Dropbox API from other developers.


The list_folder API is slow even when there are only 1500 folders

ethanfang
Explorer | Level 4

Hi,

In our testing, when we created 6,000 team folders (e.g. under the team space), the list_folder_continue API would always time out due to the long cursor.

 

However, even after we deleted folders down to 4,000, 3,500, ..., 1,500, it still almost always times out (e.g. the 504 Gateway Timeout comes back after 90 seconds).

 

In fact, even with only 1,000 folders, there is still a chance of a timeout when using list_folder_continue.
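For context, our listing loop looks roughly like the following (a simplified sketch using the official Dropbox Python SDK; the access token is a placeholder, and the team-space path-root setup we actually use is omitted):

import time
import dropbox
from dropbox.exceptions import HttpError

# Placeholder token; our real client also sets the Dropbox-API-Path-Root
# header for the team space (e.g. via Dropbox.with_path_root), omitted here.
dbx = dropbox.Dropbox("ACCESS_TOKEN", timeout=90)

def list_all_entries(path=""):
    """Page through list_folder / list_folder_continue, retrying on 5xx responses."""
    result = dbx.files_list_folder(path, recursive=False)
    entries = list(result.entries)
    while result.has_more:
        for attempt in range(3):
            try:
                result = dbx.files_list_folder_continue(result.cursor)
                break
            except HttpError as err:
                # e.g. 504 Gateway Timeout after ~90s; back off and retry the same cursor
                print("HTTP %s on list_folder_continue (attempt %d)" % (err.status_code, attempt + 1))
                time.sleep(2 ** attempt)
        else:
            raise RuntimeError("list_folder_continue kept timing out")
        entries.extend(result.entries)
    return entries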

 

We've seen in the documentation (e.g. https://help.dropbox.com/zh-tw/teams-admins/admin/member-space-limits) that 5,000 shared folders should be the limit, but according to our tests, even 1,000 shared folders can often time out.

 

Could you please provide a specific limit on the team folder count that is guaranteed not to time out when using list_folder_continue with a cursor?

 

Also, what leads to such a long cursor? We found that when the cursor is long (e.g. more than 50,000 bytes), a timeout is highly likely. We also found that longer folder names produce longer cursors; does this affect the speed?

 

We've also found that if we delete folders, start over with the list_folder API, and then keep pulling data with list_folder_continue, the cursor seems to be longer than it would be if we had simply created the same number of folders.
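This is roughly how we observed it, logging the cursor length and latency of each call (a simplified, illustrative helper around the same SDK client as in the sketch above):

import time

def measure_listing(dbx, path=""):
    """Print latency, cursor length, and entry count for each paging call."""
    start = time.monotonic()
    result = dbx.files_list_folder(path, recursive=False)
    print("list_folder: %.1fs, cursor %d chars, %d entries"
          % (time.monotonic() - start, len(result.cursor), len(result.entries)))
    while result.has_more:
        start = time.monotonic()
        result = dbx.files_list_folder_continue(result.cursor)
        print("list_folder_continue: %.1fs, cursor %d chars, %d entries"
              % (time.monotonic() - start, len(result.cursor), len(result.entries)))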

[Screenshot attachment: ethanfang_0-1657271461268.png]


Thank you,

Best regards,

Ethan Fang

3 Replies

Greg-DB
Dropbox Staff

Thanks for the feedback. It looks like this is related to your earlier thread.

 

As discussed there, this will be related to the number of files, folders, and namespaces, as well as the amount of activity. Cursor length will also depend somewhat on the length of folder names/paths. Unfortunately, there isn't a specific limit that will guarantee the performance of this functionality, as it depends on that variety of factors. I've sent this along to the team to hopefully improve this in the future, but I can't make any guarantees on that.

ethanfang
Explorer | Level 4

Hi Greg,

https://help.dropbox.com/zh-tw/teams-admins/admin/member-space-limits

We want to know a specific limit because the document explicitly states a "5000" limit.

 

We want to know how the "5000" limit was obtained, and what has been done to guarantee that "5000" folders work without performance issues.

 

According to our tests, "1000" folders can still have performance issues, causing nine consecutive timeouts (of 90 seconds each) for the list_folder_continue API.

 

We really want the document to state a "reasonable" limit that is guaranteed to work, not "5000", even if that means going lower than "1000". Then, if a performance issue does happen, we can refer our customers to this Dropbox team limits document.

 

I understand it may require some experiments on your side to make the document well-grounded and trustworthy. But if "5000" folders doesn't work, then the document simply isn't trustworthy, and we can't confidently refer our customers to it. If that's the case, I'm afraid the "5000" figure in the Dropbox document will need to be changed to a lower number.

 

Thank you.

Best regards,

Ethan Fang

Greg-DB
Dropbox Staff

The 5,000 shared folder number mentioned in that help article is a more general guideline, not specifically about the API or list_folder functionality in particular. The performance of any particular API endpoint may vary, and as discussed the list_folder performance will depend on a variety of factors, which can vary over time, so I can't offer a specific limit or guarantee. I've sent this feedback along to the team to see if we can improve this, but I can't make any promises myself.
