Forum Discussion
lekn
5 years ago
Explorer | Level 3
Downloading zip files from Dropbox via API returns size zero
Hi,
I am trying to download the archive files in a folder in my Dropbox.
For example, there are 7z files named 0.7z, 1.7z, ... 5.7z under a folder named 'extra'.
I can download the 7z files via the API, but each downloaded 7z file ends up with a size of zero, even though they are around 11 GB each.
Other text files under the same folder are downloaded correctly.
Could you let me know how I can download the archive files correctly?
Here is the Python 3 code that I am using:
import dropbox

def process_folder_entries(current_state, entries):
    for entry in entries:
        if isinstance(entry, dropbox.files.FileMetadata):
            current_state[entry.path_lower] = entry
        elif isinstance(entry, dropbox.files.DeletedMetadata):
            current_state.pop(entry.path_lower, None)  # ignore KeyError if missing
    return current_state

# Initializing the Dropbox API client
dbx = dropbox.Dropbox("token")

# Scanning for files
path = r"/download"
result = dbx.files_list_folder(path)
files = process_folder_entries({}, result.entries)

# Check for and collect any additional entries
while result.has_more:
    result = dbx.files_list_folder_continue(result.cursor)
    files = process_folder_entries(files, result.entries)

# Define the download location
directory = directoryname
for entry in files.values():
    new_file_path = directory + entry.name
    # Download the file; returns the FileMetadata and file content
    file_metadata, res = dbx.files_download(entry.path_lower)
    try:
        # Write the file to the directory
        with open(new_file_path, 'w') as new_file:
            new_file.write(res.content)
    except:
        continue
Apologies, I had the wrong link in that post. I just fixed it. It should be this post.
Anyway, it looks like the issue is that you're running out of memory when downloading the file: accessing res.content loads the entire file into memory at once, and the file is very large.
You can read off the download result in pieces like this:
with open(new_file_path, 'wb') as new_file:
    for chunk in res.iter_content(chunk_size=4*1024*1024):  # or whatever chunk size you want
        new_file.write(chunk)
(Note I also changed the file open mode to binary mode.)
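Put together, a minimal sketch of that chunked-write pattern. FakeResponse here is a hypothetical stand-in for the requests.Response object that files_download returns (it exposes the same iter_content interface), so the pattern can be exercised without a Dropbox token:

```python
import io
import os
import tempfile

# Hypothetical stand-in for the `res` object returned by
# dbx.files_download(); the real object is a requests.Response,
# which exposes the same iter_content() interface.
class FakeResponse:
    def __init__(self, payload: bytes):
        self._buf = io.BytesIO(payload)

    def iter_content(self, chunk_size=1):
        while True:
            chunk = self._buf.read(chunk_size)
            if not chunk:
                return
            yield chunk

def save_streamed(res, dest_path, chunk_size=4 * 1024 * 1024):
    # Write the download in fixed-size pieces instead of loading
    # the whole file into memory via res.content.
    with open(dest_path, 'wb') as f:
        for chunk in res.iter_content(chunk_size=chunk_size):
            f.write(chunk)

payload = os.urandom(1 << 20)  # 1 MiB of dummy data
dest = os.path.join(tempfile.mkdtemp(), 'sample.7z')
save_streamed(FakeResponse(payload), dest, chunk_size=64 * 1024)
assert os.path.getsize(dest) == len(payload)
```

With the real API, you would pass the second value returned by dbx.files_download(...) in place of FakeResponse; memory use then stays bounded by the chunk size rather than the file size.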
- Greg-DB
Dropbox Staff
It looks like you may be silently dropping exceptions, based on this code:
except:
    continue
Add some error handling there to see what may be going wrong with writing the file. At least something like:
except Exception as e:
    print(e)
    continue
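As a small illustration of why this matters (flaky_write is a hypothetical stand-in for the failing file write): a bare except swallows every error, including MemoryError, so the loop moves on as if nothing happened, leaving an empty file behind.

```python
# Hypothetical stand-in for a file write that fails mid-download.
def flaky_write():
    raise MemoryError("simulated out-of-memory during write")

# Bare except: nothing is logged, the failure is invisible.
try:
    flaky_write()
except:
    pass

# Naming the exception at least surfaces what went wrong.
caught = None
try:
    flaky_write()
except Exception as e:
    caught = type(e).__name__
print(caught)  # prints: MemoryError
```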
- lekn
Explorer | Level 3
Hi Greg,
Thank you for your reply.
Do you have any idea about the following issue?
self._content = b''.join(self.iter_content(CONTENT_CHUNK_SIZE)) or b''
MemoryError

- Greg-DB
Dropbox Staff
In Python, a MemoryError is "Raised when an operation runs out of memory". That certainly seems possible here given the file sizes you mentioned.
This new line of code seems to reference several things not included in the original code you shared, and doesn't seem to mention Dropbox. Is this new code of yours, or is it from something else?
Is something still not working as expected on the Dropbox side of things? If so, please share the latest relevant code and full error output so we can take a look.
If you need to download the file in pieces due to memory constraints, you may want to try using res.iter_content directly. I have some information on that in this post.