Dropbox API Support & Feedback
I am using an Elixir wrapper for the Dropbox API to upload files, which does a simple upload using the `files/upload` route.
```elixir
def upload(client, path, file, mode \\ "add", autorename \\ true, mute \\ false) do
  dropbox_headers = %{
    :path => path,
    :mode => mode,
    :autorename => autorename,
    :mute => mute
  }

  headers = %{
    "Dropbox-API-Arg" => Poison.encode!(dropbox_headers),
    "Content-Type" => "application/octet-stream"
  }

  upload_request(
    client,
    Application.get_env(:elixir_dropbox, :upload_url),
    "files/upload",
    file,
    headers
  )
end
```
But lately I have been running into an issue while uploading many files in parallel: I get a `too many write operations` error.
I have been reading and looking into https://www.dropbox.com/developers/documentation/http/documentation#files-upload_session-start
But I am not following it entirely. From what I read, I can upload multiple files in one session, but when I start a session while uploading one file, I think it only works for sending chunks of a single file, not different files.
The documentation says: "This route helps you commit many files at once into a user's Dropbox."
But how? How can I send multiple files to a Dropbox account, not just pieces of a single file?
Hello,
The data ingress guide should be of some help here. Specifically, there are some instructions towards the bottom that explain how to handle many files in parallel.
I'll paraphrase a bit here:
Hope that helps!
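The flow the guide describes can be sketched roughly like this (a minimal Python sketch; `start_upload_session` and `finish_batch` here are hypothetical helpers standing in for the `/2/files/upload_session/start` and `/2/files/upload_session/finish_batch` HTTP calls, not real library functions):

```python
# Sketch of the parallel-upload flow: one upload session per file,
# then commit the sessions in batches of up to 1000 per finish_batch call.
# `start_upload_session` and `finish_batch` are hypothetical callables
# that would wrap the actual HTTP requests.

def chunk(items, size):
    """Split a list into consecutive chunks of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def upload_many(files, start_upload_session, finish_batch):
    # Step 1: start one upload session per file (these may run in parallel).
    entries = []
    for path, data in files:
        session_id = start_upload_session(data, close=True)
        entries.append({
            "cursor": {"session_id": session_id, "offset": len(data)},
            "commit": {"path": path, "mode": "add", "autorename": True},
        })
    # Step 2: commit up to 1000 sessions per finish_batch call.
    for batch in chunk(entries, 1000):
        finish_batch(batch)
```

The key point is that the per-file `upload` calls are replaced by session starts, and the actual writes to the account happen together in the batch commit, which is what avoids the per-file write contention.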
Okay, I have a few questions:
How do I organize files into batches of under 1000? Is there some structure for it that isn't written in the documentation?
And how can you pass that batch of files to the endpoint?
First, I made a new client with the access token:
```elixir
iex(3)> client = ElixirDropbox.Client.new("PWK08cePo_kAAAAAAAB2oRsQSyG")
%ElixirDropbox.Client{access_token: "PWK08cePo_kAAAAAAAB2oRsQSyG"}
```
and uploaded a file:
```elixir
iex(4)> ElixirDropbox.Files.UploadSession.start(client, false, "mix.lock")
%{"session_id" => "AAAAAAAYclutwUwG5odRew"}
```
with `close: false`.
```elixir
iex(5)> ElixirDropbox.Files.UploadSession.append(client, "AAAAAAAYclutwUwG5odRew", false, "mix.exs")
{{:status_code, 409},
 {:ok,
  %{
    "error" => %{".tag" => "incorrect_offset", "correct_offset" => 18597},
    "error_summary" => "incorrect_offset/."
  }}}
```
Then I uploaded another file with that session ID, and it gave me the error above.
What is the proper way of sending the batches of files?
I don't see any such information at https://www.dropbox.com/developers/documentation/http/documentation#files-upload_session-start.
To clarify a bit, you need one "upload session" per file. Each upload session gets its own session ID. When adding data to a single upload session, you need to identify how much you've already added to that upload session by using the "offset" value. This helps make sure you're uploading the right pieces. If you specify the wrong current offset for an upload session, you'll get that "incorrect_offset" error.
You can then "finish" up to 1000 files together (i.e., up to 1000 upload sessions) using /2/files/upload_session/finish_batch.
Check the /2/files/upload_session/finish_batch documentation for information on how to format the parameters for it. Be sure to click on "UploadSessionFinishArg", and so on, to expand the documentation for the nested fields.
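As an illustration of the parameter shape only (the session IDs and the second path below are made-up placeholders), the `finish_batch` argument is a list of entries, each pairing a session cursor with a commit:

```python
import json

# Sketch of the /2/files/upload_session/finish_batch argument: the
# "entries" list holds one UploadSessionFinishArg per file/upload session.
# Session IDs and the second path are placeholder values for illustration.
finish_batch_arg = {
    "entries": [
        {
            "cursor": {"session_id": "SESSION_ID_FILE_1", "offset": 37},
            "commit": {
                "path": "/Homework/math/Matrices.txt",
                "mode": "add",
                "autorename": True,
                "mute": False,
            },
        },
        {
            "cursor": {"session_id": "SESSION_ID_FILE_2", "offset": 18597},
            "commit": {"path": "/Homework/math/mix.exs", "mode": "add"},
        },
    ]
}

# This dict serializes to the JSON body you would POST to finish_batch.
print(json.dumps(finish_batch_arg, indent=2))
```

Each cursor's `offset` is the total number of bytes uploaded to that session, i.e. the file's size at commit time.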
I also recommend using the API v2 Explorer to help guide you in understanding how to build the parameters. Click the "Add" button to add an entry and fill in the fields, for instance, and then click "Show Code" to see what the resulting code would be for the call.
I don't believe we have a sample for using upload sessions with /2/files/upload_session/finish_batch, but there are some examples of just using upload sessions (with some of the SDKs) here, which should be a good starting point for the logic:
So first, I ran this on the command line:
```shell
curl -X POST https://content.dropboxapi.com/2/files/upload_session/start \
  --header "Authorization: Bearer PWK08cePo_kAAAAAAAB2oRsQSyGQbQ2qy-SC69zC7G" \
  --header "Dropbox-API-Arg: {\"close\": false}" \
  --header "Content-Type: application/octet-stream" \
  --data-binary @local_file.txt
```
It returned:
{"session_id": "AAAAAAAbQhipmPJOosGS-Q"}
I don't see any offset value here, but let's look at the next request, the append:
```shell
curl -X POST https://content.dropboxapi.com/2/files/upload_session/append_v2 \
  --header "Authorization: Bearer C69zip6zi9bwHgqhYHNYQbrpZ5C7G" \
  --header "Dropbox-API-Arg: {\"cursor\": {\"session_id\": \"AAAAAAAbQhipmPJOosGS-Q\",\"offset\": 0},\"close\": false}" \
  --header "Content-Type: application/octet-stream" \
  --data-binary @local_file.txt
```
It returned:
{"error_summary": "incorrect_offset/.", "error": {".tag": "incorrect_offset", "correct_offset": 37}}
Now I sent the offset value 37:
```shell
curl -X POST https://content.dropboxapi.com/2/files/upload_session/append_v2 \
  --header "Authorization: Bearer SC69zip6zi9bwHgqhYHNYQbrpZ5C7G" \
  --header "Dropbox-API-Arg: {\"cursor\": {\"session_id\": \"AAAAAAAbQhipmPJOosGS-Q\",\"offset\": 37},\"close\": false}" \
  --header "Content-Type: application/octet-stream" \
  --data-binary @local_file.txt
```
it returned: `null`
Then I did finish_batch:
```shell
curl -X POST https://api.dropboxapi.com/2/files/upload_session/finish_batch \
  --header "Authorization: Bearer SC69zip6zi9bwHgqhYHNYQbrpZ5C7G" \
  --header "Content-Type: application/json" \
  --data "{\"entries\": [{\"cursor\": {\"session_id\": \"AAAAAAAbQhipmPJOosGS-Q\",\"offset\": 0},\"commit\": {\"path\": \"/Homework/math/Matrices.txt\",\"mode\": \"add\",\"autorename\": true,\"mute\": false,\"strict_conflict\": false}}]}"
```
It returned an async_job_id:
{".tag": "async_job_id", "async_job_id": "dbjid:AACkTPVL8VvaGyrUdEPui0uHg-mzJuroOoZ5yQvP3HB-4JxhpBGNlRrt8XOH6XiOJIpUKlNjUbHtJgUPGWKhQnXm"}
Then on the batch check:
```shell
curl -X POST https://api.dropboxapi.com/2/files/upload_session/finish_batch/check \
  --header "Authorization: Bearer SC69zip6zi9bwHgqhYHNYQbrpZ5C7G" \
  --header "Content-Type: application/json" \
  --data "{\"async_job_id\": \"dbjid:AACkTPVL8VvaGyrUdEPui0uHg-mzJuroOoZ5yQvP3HB-4JxhpBGNlRrt8XOH6XiOJIpUKlNjUbHtJgUPGWKhQnXm\"}"
```
It returned:
{".tag": "complete", "entries": [{".tag": "failure", "failure": {".tag": "lookup_failed", "lookup_failed": {".tag": "incorrect_offset", "correct_offset": 74}}}]}
My question is: where am I supposed to get the offset value from? It's nowhere to be found from the very start.
I followed the complete documentation guide, and I am not able to upload even one file with an upload session.
The 'offset' value is "The amount of data that has been uploaded so far. We use this to make sure upload data isn't lost or duplicated in the event of a network error.".
That is to say, it should be the number of bytes that you have already uploaded so far for that particular upload session. You should keep track of this client-side as you send the file data up to Dropbox.
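A minimal sketch of that client-side bookkeeping, assuming a hypothetical `append_chunk` callable that wraps `/2/files/upload_session/append_v2` (the chunk size here is an arbitrary example value):

```python
# Track the upload offset client-side: after each successful append, the
# offset is simply the total number of bytes already sent in that session.
# `append_chunk(piece, offset)` is a hypothetical wrapper around append_v2.

def upload_in_chunks(data, append_chunk, chunk_size=4 * 1024 * 1024):
    offset = 0
    while offset < len(data):
        piece = data[offset:offset + chunk_size]
        append_chunk(piece, offset)  # cursor.offset = bytes uploaded so far
        offset += len(piece)
    return offset  # final offset equals the file size; use it at finish time
```

The first append always uses offset 0 if the start call carried no data; if the start call already uploaded bytes (as with `--data-binary @local_file.txt`), the first append's offset must be the number of bytes that call sent.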
Okay, thanks. Now, with upload_session/start followed by upload_session/finish, my file is getting uploaded.
The file was 37 KB, which I set as the offset in upload_session/finish. But
how can upload_session/finish_batch cover all the open sessions when it only takes one session ID and one file name?
I can now upload a file to my account using only
upload_session/start and upload_session/finish.
Would that be enough for uploading multiple files at once to Dropbox?
I am actually in a loop which is being spawned and sending files to a Dropbox account.
So now I only need to do the session start and, in the finish, add the file size, and that's it? Would that stop the too many write operations messages?
And if I call append_v2 with close: true, it gives me an error saying the offset should be double what it is: the file is 37 KB, and the error says the offset should be 74.
If you want to commit multiple file uploads at once, and avoid the `too many write operations` error, you should use /2/files/upload_session/finish_batch. That endpoint does not take only a single file name/upload session ID. Its UploadSessionFinishBatchArg.entries parameter takes a list of multiple UploadSessionFinishArg. You can find information on that in the documentation here:
It says "List of (UploadSessionFinishArg, max_items=1000)", indicating that you can supply up to 1000 per call. Each UploadSessionFinishArg should contain the information for a single file and upload session.
When appending data to an upload session, you shouldn't repeatedly send the same data if it's already been successfully sent.
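This is also where the earlier doubled-offset error came from: the start call had already uploaded the file's 37 bytes, so re-sending the same 37 bytes via append_v2 left the session holding 74 bytes, making the next expected offset 74. A quick illustration of the arithmetic:

```python
# Why the session reported correct_offset: 74 for a 37-byte upload:
# the start call uploaded the whole file, then append_v2 sent the same
# bytes again, so the session now holds 74 bytes and any further append
# must start at offset 74.
file_size = 37
uploaded_by_start = file_size    # start call sent the whole file
uploaded_by_append = file_size   # append re-sent the same bytes
next_expected_offset = uploaded_by_start + uploaded_by_append
print(next_expected_offset)  # 74
```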
When am I going to need this append option anyway?
You mean if the file is bigger, I first send some of its kilobytes and then the rest?
In case I have a 2 MB file, can I send it all at once in the first upload_session/start request, or do I still need to do the append?