lekn
Explorer | Level 3
5 years ago
Downloading zip files from Dropbox via API returns size zero
Hi,
I was trying to download zip files within a folder from my dropbox.
For example, there are 7z files named 0.7z, 1.7z, ... 5.7z under folder name 'extra'.
I could download the 7z files v...
- 5 years ago
Apologies, I had the wrong link in that post. I just fixed it. It should be this post.
Anyway, it looks like the issue is that you're running out of memory when downloading the file, since you're accessing the whole thing, and the file is very large.
You can read off the download result in pieces like this:
with open(new_file_path, 'wb') as new_file:
    for chunk in res.iter_content(chunk_size=4*1024*1024):  # or whatever chunk size you want
        new_file.write(chunk)
(Note I also changed the file open mode to binary mode.)
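For reference, here is a minimal end-to-end sketch of that approach. The folder name, access token, and local directory below are illustrative placeholders, not values from the original post:

import dropbox

dbx = dropbox.Dropbox("YOUR_ACCESS_TOKEN")  # placeholder access token
directory = "/tmp/extra/"                   # assumed local target directory

for entry in dbx.files_list_folder("/extra").entries:
    # Skip anything that isn't a file (e.g. subfolders).
    if not isinstance(entry, dropbox.files.FileMetadata):
        continue
    file_metadata, res = dbx.files_download(entry.path_lower)
    new_file_path = directory + entry.name
    # Stream the response to disk in 4 MB pieces instead of loading
    # the whole download into memory via res.content.
    with open(new_file_path, 'wb') as new_file:
        for chunk in res.iter_content(chunk_size=4 * 1024 * 1024):
            new_file.write(chunk)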
Greg-DB
Dropbox Staff
5 years ago
It looks like you may be silently dropping exceptions, based on this code:
except:
    continue
Add some error handling there to see what may be going wrong with writing the file. At least something like:
except Exception as e:
    print(e)
    continue
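In context, that might look something like the following sketch, assuming the files dict, dbx client, and directory variable from the script shared later in this thread:

for entry in files.values():
    new_file_path = directory + entry.name
    try:
        # Download the file; res is a requests Response object.
        file_metadata, res = dbx.files_download(entry.path_lower)
        with open(new_file_path, 'wb') as new_file:
            new_file.write(res.content)
    except Exception as e:
        # Report the failure instead of silently swallowing it.
        print('Failed on %s: %s' % (entry.path_lower, e))
        continue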
- lekn
Explorer | Level 3
5 years ago
Hi Greg,
Thank you for your reply.
Do you have any idea about the following issue?
self._content = b''.join(self.iter_content(CONTENT_CHUNK_SIZE)) or b''
MemoryError
- Greg-DB
Dropbox Staff
5 years ago
In Python, a MemoryError is "Raised when an operation runs out of memory". That certainly seems possible here given the file sizes you mentioned.
This new line of code seems to reference several things not included in the original code you shared, and doesn't seem to mention Dropbox. Is this new code of yours, or is it from something else?
Is something still not working as expected on the Dropbox side of things? If so, please share the latest relevant code and full error output so we can take a look.
If you need to download the file in pieces due to memory constraints, you may want to try using res.iter_content directly. I have some information on that in this post.
- lekn
Explorer | Level 3
5 years ago
Hi,
1. The new code that I ran was just without the try and except.
I just changed the latter part to the following:
for entry in files.values():
    new_file_path = directory + entry.name
    # download the file, returns the File Metadata and File Content.
    #file_metadata, file_content = dbx.files_download(entry.path_lower)
    file_metadata, res = dbx.files_download(entry.path_lower)
    # write the file to the directory
    with open(new_file_path, 'w') as new_file:
        new_file.write(res.content)
2. The full error is here:
Traceback (most recent call last):
File "./dropbox_load.py", line 50, in <MODULE>
new_file.write(res.content)
File "/opt/apps/apps/binapps/anaconda3/2019.03/lib/python3.7/site-packages/requests/models.py", line 828, in content
self._content = b''.join(self.iter_content(CONTENT_CHUNK_SIZE)) or b''
MemoryError
When I tried the try/except with print(e) that you advised, I also got a MemoryError.
3. Could you elaborate more on how to download the compressed files in pieces? Also, the link that you shared does not work. Could you share the address instead?
Thank you.