Just to check, the right way to paginate all the log events would be:
1) make an initial request to get_events
2) keep calling get_events/continue with the cursor from the previous get_events or get_events/continue response until has_more = false. Once has_more = false, go back to step 1 with the max event timestamp I've seen so far (or should I keep polling the existing cursor, and it will pick up new events as they come in?)
3) at any point in step 2 I can get a bad_cursor or reset error, in which case I should go back to step 1 with the max event timestamp I've seen so far (plus 1 second to avoid duplicates?). The docs also say that in the reset case the API response will include a timestamp to use. Is there an example response showing that, or a way for me to induce that behavior myself? Since I have to track the max event timestamp for the bad_cursor case anyway, maybe I should just always use that and ignore the value returned in the reset case?
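To make sure I've got the flow right, here's a minimal sketch of the paging loop I have in mind, run against a toy in-memory stand-in for the API (the endpoint names get_events / get_events/continue and the cursor / has_more fields are from the docs; the FakeEventsApi class and everything inside it are made up purely to exercise the logic):

```python
# Toy stand-in for the events API, just to exercise the pagination logic.
# A real cursor would be an opaque token; here it is simply a list index.
class FakeEventsApi:
    def __init__(self, events, page_size=2):
        self.events = events          # list of (timestamp, payload) tuples
        self.page_size = page_size

    def get_events(self, since=0):
        # Find the first event at or after the requested timestamp.
        start = next((i for i, (ts, _) in enumerate(self.events) if ts >= since),
                     len(self.events))
        return self._page(start)

    def get_events_continue(self, cursor):
        return self._page(cursor)

    def _page(self, start):
        end = start + self.page_size
        return {
            "events": self.events[start:end],
            "cursor": min(end, len(self.events)),
            "has_more": end < len(self.events),
        }

def fetch_all(api, since=0):
    """Page through every event at or after `since`, following cursors."""
    collected = []
    resp = api.get_events(since=since)
    while True:
        collected.extend(resp["events"])
        if not resp["has_more"]:
            # Keep the last cursor so we can poll for new events later.
            return collected, resp["cursor"]
        resp = api.get_events_continue(resp["cursor"])

api = FakeEventsApi([(1, "a"), (2, "b"), (3, "c"), (4, "d"), (5, "e")])
events, cursor = fetch_all(api)
```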
If you want to paginate through all of the events, page through using the cursors as you describe, except that you don't need to start over with the current timestamp once you get has_more = false. Instead, keep the latest cursor you received and call get_events/continue with it again later to check for new events.
And yes, if you then get a 'bad_cursor' or 'reset' error, you'll need to start over (optionally specifying a particular timestamp) to get a new cursor.
In the 'reset' case, the error will include a timestamp that you should use. Here's an example of what that would look like, for reference:
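(This is an illustrative sketch of the shape only, not a verbatim response; the exact field names and timestamp format may differ, so check the actual error body you receive:)

```json
{
    "error_summary": "reset/...",
    "error": {
        ".tag": "reset",
        "reset": "2023-01-15T12:00:00Z"
    }
}
```

In other words, the timestamp to resume from is carried inside the error payload itself, so you can feed it straight back into your initial get_events call.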