lekn
5 years ago · Explorer | Level 3
Downloading zip files from Dropbox via API returns size zero
Hi,
I was trying to download compressed files within a folder from my Dropbox.
For example, there are 7z files named 0.7z, 1.7z, ..., 5.7z under a folder named 'extra'.
I could download the 7z files v...
lekn
5 years ago · Explorer | Level 3
Hi,
1. The new code that I ran was just without the try/except.
I just changed the latter part as the following:
for entry in files.values():
    new_file_path = directory + entry.name
    # download the file; returns the File Metadata and File Content
    #file_metadata, file_content = dbx.files_download(entry.path_lower)
    file_metadata, res = dbx.files_download(entry.path_lower)
    # write the file to the directory
    with open(new_file_path, 'w') as new_file:
        new_file.write(res.content)
2. The full error is here:
Traceback (most recent call last):
File "./dropbox_load.py", line 50, in <module>
new_file.write(res.content)
File "/opt/apps/apps/binapps/anaconda3/2019.03/lib/python3.7/site-packages/requests/models.py", line 828, in content
self._content = b''.join(self.iter_content(CONTENT_CHUNK_SIZE)) or b''
MemoryError
When I tried the try/except with print(e) that you advised, I also got a MemoryError.
3. Could you elaborate more on how to download the compressed files in pieces? Also, the link that you shared does not work. Could you share the address instead?
Thank you.
Greg-DB
Dropbox Staff
5 years ago
Apologies, I had the wrong link in that post. I just fixed it. It should be this post.
Anyway, it looks like the issue is that you're running out of memory when downloading the file, since you're accessing the whole thing, and the file is very large.
You can read off the download result in pieces like this:
with open(new_file_path, 'wb') as new_file:
    for chunk in res.iter_content(chunk_size=4*1024*1024):  # or whatever chunk size you want
        new_file.write(chunk)
(Note I also changed the file open mode to binary mode.)
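To show the chunked-write pattern end to end without needing Dropbox credentials, here is a minimal runnable sketch. The FakeResponse class below is a stand-in I made up to mimic the iter_content() method of the requests response that dbx.files_download returns; in real code you would use the res object from the SDK call instead.

```python
import io

# Stand-in for the requests-style response returned by dbx.files_download;
# only iter_content() is mimicked here.
class FakeResponse:
    def __init__(self, data):
        self._data = data

    def iter_content(self, chunk_size):
        stream = io.BytesIO(self._data)
        while True:
            chunk = stream.read(chunk_size)
            if not chunk:
                break
            yield chunk

res = FakeResponse(b"x" * (10 * 1024 * 1024))  # pretend this is a 10 MB download

with open("0.7z", "wb") as new_file:  # binary mode, as in the fix above
    for chunk in res.iter_content(chunk_size=4 * 1024 * 1024):
        new_file.write(chunk)  # at most ~4 MB held in memory at a time
```

The point is that only one chunk is ever materialized in memory, instead of the whole file as with res.content.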
lekn
5 years ago · Explorer | Level 3
This works well. Thank you.
The link still doesn't work. Could you post the link again?
I was wondering what the chunk size means and how to choose the size.
Greg-DB
Dropbox Staff
5 years ago
I'm glad to hear that works well.
The chunk size is how much data to read from the Dropbox API and then write to the local file at a time. I used 4 MB in this sample, but you can change that as desired, with the constraint being how much memory you can use.
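As a rough illustration of that tradeoff (the 100 MB file size here is a made-up example): peak buffer memory is about one chunk, and a larger chunk means fewer read/write iterations.

```python
import math

total_size = 100 * 1024 * 1024  # e.g. a hypothetical 100 MB 7z file
for chunk_size in (1 * 1024 * 1024, 4 * 1024 * 1024, 16 * 1024 * 1024):
    n_chunks = math.ceil(total_size / chunk_size)
    # larger chunks -> fewer iterations, but more memory held at once
    print(f"{chunk_size // (1024 * 1024)} MB chunks -> {n_chunks} iterations")
```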
I'm not sure why that link got broken again, but I fixed it again and confirmed it's working now.