This is a pretty big change that will render existing clients unable
to modify the database, but it's important that we use POST or PUT
instead of GET for anything that may change state, in case this
is ever put behind a cache.
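As a rough sketch of what this means on the client side (the URL and
parameters here are made up for illustration, not the actual API):

    import requests

    # Before: a state-changing operation issued as a GET, which a cache
    # or proxy could legitimately answer from a stale copy:
    # requests.get("http://localhost/stream/create",
    #              params={"path": "/test/data", "layout": "float32_8"})

    # After: use PUT (or POST) so the request is never treated as cacheable
    requests.put("http://localhost/stream/create",
                 params={"path": "/test/data", "layout": "float32_8"})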
PUTs generate an "HTTP/1.1 100 Continue" response before the
"HTTP/1.1 200 OK" response, and so we were mistakenly picking up
the 100 status code and not returning any data. Improve the
header callback to correctly process any number of status codes.
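The fix amounts to parsing every status line the callback sees and
keeping only the latest one. A minimal sketch, assuming a pycurl-style
HEADERFUNCTION callback that receives one raw header line at a time:

    status = {"code": None}

    def header_callback(line):
        # Each response in the exchange, including the interim
        # "HTTP/1.1 100 Continue", begins with its own status line.
        # Remember only the most recent code, so the final one wins.
        if isinstance(line, bytes):
            line = line.decode("ascii", errors="ignore")
        if line.upper().startswith("HTTP/"):
            try:
                status["code"] = int(line.split()[1])
            except (IndexError, ValueError):
                pass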
Various places suggest that this is needed for better thread-safety,
and the only drawback is that some systems cannot time out properly on
DNS lookups.
These functions can now take an object or a type (class).
If given an object, they will wrap subsequent calls to that object.
If given a type, they will return a wrapped type that can be
instantiated to create new objects, and all calls, including
__init__, will be covered by the serialization or thread verification.
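For example, with a hypothetical wrapper named serializer_proxy and a
hypothetical Database class (the thread-safety verifier follows the
same pattern):

    # Given an object: later calls on the returned proxy are serialized
    db = serializer_proxy(Database("/path/to/db"))
    db.query()

    # Given a type: the result can itself be instantiated, and the
    # __init__ call is covered along with everything after it
    SerializedDatabase = serializer_proxy(Database)
    db = SerializedDatabase("/path/to/db")
    db.query()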
Occasional segfaults may be the result of performing thread-unsafe
operations. This class decorator verifies that all of the decorated
class's methods are called in a thread-safe manner.
It can separately warn about:
- two different threads calling methods of the same object (the kind
of thing sqlite doesn't like)
- recursion
- concurrency (two different threads running methods at the same time)
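A minimal sketch of how such a verification decorator might look
(illustrative only, not the actual implementation):

    import functools
    import threading
    import warnings

    def verify_thread_safety(cls):
        """Warn when methods of the decorated class are used unsafely."""
        # Per-class state for brevity; the real thing would track per-object.
        lock = threading.Lock()
        state = {"active": 0, "owner": None}

        def wrap(name, method):
            @functools.wraps(method)
            def wrapper(self, *args, **kwargs):
                me = threading.current_thread()
                with lock:
                    if state["owner"] not in (None, me):
                        warnings.warn("%s called from multiple threads" % name)
                    if state["active"] > 0:
                        kind = ("recursion" if state["owner"] is me
                                else "concurrency")
                        warnings.warn("%s detected in %s" % (kind, name))
                    state["owner"] = me
                    state["active"] += 1
                try:
                    return method(self, *args, **kwargs)
                finally:
                    with lock:
                        state["active"] -= 1
            return wrapper

        for name, value in list(vars(cls).items()):
            if callable(value) and not name.startswith("__"):
                setattr(cls, name, wrap(name, value))
        return cls

It would be applied with @verify_thread_safety above the class
definition.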
This decorator makes a class always be serialized, including its
instantiation, in a separate thread. This is an improvement over
the old Serializer() object wrapper, which didn't put the
instantiation into the new thread.
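A rough sketch of the idea, with hypothetical names: calls are pushed
onto a queue and executed by one dedicated thread, so even __init__
runs there.

    import queue
    import threading

    def serialize_class(cls):
        class Proxy(object):
            def __init__(self, *args, **kwargs):
                self.__calls = queue.Queue()
                threading.Thread(target=self.__run, daemon=True).start()
                # Even construction happens inside the worker thread
                self.__obj = self.__do(cls, args, kwargs)

            def __run(self):
                # Single worker thread: executes queued calls one at a time
                while True:
                    func, args, kwargs, done = self.__calls.get()
                    try:
                        done.put((True, func(*args, **kwargs)))
                    except Exception as e:
                        done.put((False, e))

            def __do(self, func, args, kwargs):
                done = queue.Queue()
                self.__calls.put((func, args, kwargs, done))
                ok, value = done.get()
                if ok:
                    return value
                raise value

            def __getattr__(self, name):
                attr = getattr(self.__obj, name)
                if not callable(attr):
                    return attr
                return lambda *a, **kw: self.__do(attr, a, kw)

        return Proxy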
Previously, we could get empty intervals anyway by having a non-empty
interval and removing a smaller interval around each piece of data.
It turns out that empty intervals are OK and needed in some
situations, so explicitly allow and test for them.
Now supports these operations:
ctx.insert_line()
ctx.insert_iter()
ctx.finalize() (end the current contiguous interval, so a new one
can be started with a gap)
ctx.update_end() (update ending timestamp before finalizing interval)
ctx.update_start() (update starting timestamp for new interval)
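Assuming the context-manager usage that the name suggests (the client,
path, and timestamps below are placeholders), a typical sequence looks
like:

    with client.stream_insert_context("/test/data", start, end) as ctx:
        for line in first_chunk:
            ctx.insert_line(line)
        # Close off the current contiguous interval at a known timestamp,
        # then start a new interval after the gap
        ctx.update_end(gap_start)
        ctx.finalize()
        ctx.update_start(gap_end)
        ctx.insert_iter(second_chunk)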
block_data += "string" is fast when block_data is a local variable,
but slow when it lives in another namespace (for example, as an object
attribute), since the in-place concatenation optimization no longer
applies. Instead, build a list of strings and join them once at the
end. This fixes the slowdown that resulted from the
stream_insert_context cleanup.
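The change boils down to this pattern (names are illustrative):

    lines = ["line1\n", "line2\n", "line3\n"]

    # Slow when the target lives on an object or module: each += copies
    # the whole string again, because CPython can only resize a string
    # in place when it is held by a single plain local variable.
    #     self.block_data += line

    # Fast: collect the pieces and join them once at the end.
    parts = []
    for line in lines:
        parts.append(line)
    block_data = "".join(parts)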
It should still be closed, but the warning each time was mostly for
debugging, and it's kind of annoying when writing one-off programs
where it's OK to just let things get torn down as they're completed.
Not closing is not fatal in terms of data integrity.