Compare commits

..

124 Commits

SHA1 Message Date
3b90318f83 Merge remote-tracking branch 'origin/packaging' 2013-01-31 21:54:41 -05:00
1fb37604d3 Rearrange documentation, clean up Makefile, README 2013-01-31 19:06:32 -05:00
018ecab310 Make setup.py executable 2013-01-31 17:26:55 -05:00
6a1d6017e2 Include datetime_tz module 2013-01-31 17:25:14 -05:00
e7406f8147 Add metadata 2013-01-31 17:14:47 -05:00
f316026592 Move datetime_tz package under nilmdb.utils
datetime_tz isn't readily available, so it's a lot easier to just
package it within the nilmdb tree.
2013-01-30 19:03:42 -05:00
a8db747768 More work on setup.py; fixed issues in setup.cfg
Adjusted setup.cfg so "python setup.py nosetests" now works correctly.
Also added a "test" alias so that "python setup.py test" works.
2013-01-30 18:35:12 -05:00
727af94722 Start working on setup.py 2013-01-29 20:21:03 -05:00
6c89659df7 Cleanup cmdline "create" help text 2013-01-28 19:07:48 -05:00
58c7c8f6ff Support "now" as a timestamp argument 2013-01-28 19:07:45 -05:00
225003f412 Huge cleanup of namespaces, modules, packages, imports.
Now nilmdb.client, nilmdb.server, nilmdb.cmdline, and nilmdb.utils
are each their own modules, and there is a little bit more of a
logical separation between them.  Various changes scattered throughout
to fix naming (for example, nilmdb.nilmdb.NilmDBError is now
nilmdb.server.errors.NilmDBError).

Reduced usage of "from __future__ import absolute_import" as much
as possible.  It's still needed for the functions in the nilmdb/server
directory to be able to import the nilmdb module rather than the
nilmdb.py script.

This should hopefully ease future packaging a bit.
2013-01-28 19:04:52 -05:00
40b966aef2 Add pycurl-specific hack to Iteratorizer
Inside the pycurl callback, we can't raise exceptions, because the
pycurl extension module will unconditionally print the exception
itself, and not pass it up to the caller.  Instead, we have the
callback return a value that tells curl to abort.  (-1 would be best,
in case we were given 0 bytes, but the extension doesn't support
that either).

This resolves the 'Exception("should die")' problem when interrupting
a streaming generator like stream_extract.
2013-01-24 19:06:20 -05:00
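For illustration, the workaround looks roughly like this (a sketch with
hypothetical names; only the pycurl WRITEFUNCTION semantics are fixed):

    import pycurl

    def make_write_callback(consume, should_abort):
        def callback(data):
            # pycurl calls this with each chunk of response data.  We
            # cannot raise here; pycurl would print and swallow it.
            if should_abort():
                # Returning a count != len(data) makes curl abort the
                # transfer.  0 is ambiguous if a 0-byte chunk arrives;
                # -1 would avoid that, but the extension rejects it.
                return 0
            consume(data)
            return None   # None (or len(data)) means "all consumed"
        return callback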
294ec6988b Rewrite Iteratorizer as a context manager
Relying on __del__ to clean up the thread isn't as reliable.
2013-01-24 19:04:25 -05:00
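The shape of that change, as a minimal sketch (Python 2 era, hence the
Queue module; the real nilmdb.utils.Iteratorizer handles more, such as
the curl hack above, and this sketch assumes the callback never
produces None):

    import threading
    import Queue

    class Iteratorizer(object):
        """Turn func(callback) into an iterator via a producer thread."""
        def __init__(self, func):
            self.queue = Queue.Queue(maxsize = 1)
            self.thread = threading.Thread(target = self._run,
                                           args = (func,))
            self.thread.daemon = True

        def _run(self, func):
            func(self.queue.put)        # producer pushes each chunk
            self.queue.put(None)        # sentinel: producer is done

        def __enter__(self):
            self.thread.start()
            return iter(self.queue.get, None)

        def __exit__(self, exc_type, exc_value, traceback):
            # Drain so a blocked producer can finish, then join; this
            # is deterministic, unlike waiting for __del__ to fire.
            while self.thread.is_alive():
                try:
                    self.queue.get(timeout = 0.1)
                except Queue.Empty:
                    pass
            self.thread.join()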
fad23ebb22 Add --timestamp-raw option to extract and list 2013-01-24 16:03:38 -05:00
b226dc4337 Properly handle test case where server doesn't start 2013-01-24 16:03:38 -05:00
e7af863017 httpclient: make sure we error out quickly if nested calls are made
Curl will give an error if we call .setopt() while a .perform() is
in progress, for example if we try to do a stream_insert() while
in the middle of a stream_extract().  Move the setopt() to the
beginning of the get/put functions to ensure that we hit this
error before we mess with the URLs or anything else.
2013-01-24 15:36:10 -05:00
af6ce5b79c Remove superfluous from iteratorizer callback exception 2013-01-23 15:42:27 -05:00
0a6fc943e2 Add some better documentation of layout parameter to create.py 2013-01-22 18:47:39 -05:00
67c6e178e1 Documentation updates 2013-01-22 18:36:05 -05:00
9bf213707c Properly return an error if two timestamps are equal 2013-01-22 18:35:18 -05:00
5cd7899e98 Send a Access-Control-Allow-Origin (CORS) header with all responses 2013-01-22 14:42:03 -05:00
ceec5fb9b3 Force /stream/interval and /stream/extract responses to be text/plain 2013-01-22 12:47:06 -05:00
85be497edb Fix README 2013-01-21 17:30:01 -05:00
bd1b7107af Update TODO, clean up bulkdata error message 2013-01-21 11:43:28 -05:00
b8275f108d Make error message more helpful 2013-01-18 17:27:57 -05:00
2820ff9758 More fixes to mustclose decorator and argspecs 2013-01-18 17:21:30 -05:00
a015de893d Cleanup 2013-01-18 17:14:26 -05:00
b7f746e66d Fix lrucache decorator argspecs 2013-01-18 17:13:50 -05:00
40cf4941f0 Test that argspecs are maintained in lrucache 2013-01-18 17:01:46 -05:00
8a418ceb3e Fix issue where mustclose decorator doesn't maintain argspec 2013-01-18 16:57:15 -05:00
0312b6eb07 Test for issue where mustclose decorator didn't maintain argspec 2013-01-18 16:55:51 -05:00
077f197d24 Fix server returning 500 for bad HTTP parameters 2013-01-18 16:54:49 -05:00
62354b4dce Add test for bad-parameters-give-500-error 2013-01-17 19:58:48 -05:00
5970cd85cf Disable "ie-friendly" error message padding in CherryPy 2013-01-16 17:57:45 -05:00
4f6a742e6c Fix test failure 2013-01-16 17:31:31 -05:00
87b43e5d04 Command line errors cleaned up and made more consistent 2013-01-16 16:52:43 -05:00
f0c2a64ae3 Update doc formatting, .gitignore 2013-01-09 23:36:23 -05:00
e5d3deb6fe Removal support is complete.
`nrows` may change if you restart the server; documented why this is
the case in the design.md file.  It's not a problem.
2013-01-09 23:26:59 -05:00
d321058b48 Add basic versioning to bulkdata table format file. 2013-01-09 19:26:24 -05:00
cea83140c0 More work towards correctly removing rows. 2013-01-09 19:25:45 -05:00
7807d6caf0 Progress and tests for bulkdata.remove
Passes tests, but doesn't really handle nrows (and removing partially
full files) correctly, when deleting near the end of the data.
2013-01-09 17:39:29 -05:00
3d0fad3c2a Move some helper functions around 2013-01-09 17:39:29 -05:00
fe3b087435 Remove implemented in nilmdb; still needs bulkdata changes. 2013-01-08 21:07:52 -05:00
bcefe52298 nilmdb: Bring out range manipulating SQL so we can reuse it 2013-01-08 18:45:03 -05:00
f88c148ccc Interval removal work in progress. Needs NilmDB and BulkData work. 2013-01-08 18:37:01 -05:00
4a47b1d04a remove support: command line, client 2013-01-06 20:13:57 -05:00
80da937cb7 cmdline: return error when start > end (extract, list, remove) 2013-01-06 20:13:28 -05:00
c81972e66e Minor testsuite and commandline fixes.
Now supports "list /foo/bar" in addition to the older "list --path /foo/bar"
2013-01-06 19:25:07 -05:00
b09362fde1 Full coverage of nilmdb.utils.mustclose 2013-01-05 18:02:53 -05:00
b7688844fa Add a Nosetests plugin that lets me specify a test order within a directory. 2013-01-05 18:02:37 -05:00
3d212e7592 Move test helpers into subdirectory 2013-01-05 15:00:34 -05:00
7aedfdf9c3 Add lower level bulkdata test 2013-01-05 14:55:22 -05:00
ebd4f74959 Remove "pragma: no cover" from things that should get tested 2013-01-05 14:52:06 -05:00
ebe2fbab92 Add wrap_verify option to nilmdb.utils.must_close decorator 2013-01-05 14:51:41 -05:00
4831a0cae1 Small doc updates 2013-01-04 17:27:04 -05:00
07192c6ffb nilmdb.BulkData: Switch to nested subdir/filename layout
Use numbered subdirectories to avoid having too many files in one dir.
Add appropriate tests.

Also fix an issue where the mmap_open LRU cache could inappropriately
open a file twice because it was using the optional "newsize"
parameter as a key -- now lrucache can be given a slice object that
describes which arguments are important.
2013-01-04 16:51:05 -05:00
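The slice-keyed cache might look roughly like this (a sketch; the
parameter names are illustrative, not the actual nilmdb.utils.lrucache
API):

    import collections
    import functools

    def lrucache(size = 10, keys = slice(None)):
        """Memoize on args[keys] only, so e.g. mmap_open(filename,
        newsize) can be cached on filename alone via keys = slice(0, 1)."""
        def decorator(func):
            cache = collections.OrderedDict()
            @functools.wraps(func)
            def wrapper(*args):
                key = args[keys]
                if key in cache:
                    value = cache.pop(key)    # re-insert as most recent
                else:
                    value = func(*args)
                    if len(cache) >= size:
                        cache.popitem(last = False)   # evict oldest
                cache[key] = value
                return value
            return wrapper
        return decorator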
09d325e8ab Rename format -> _format in data dirs 2013-01-03 20:46:15 -05:00
11b0293d5f Clean up BulkData file size calculations, test more thoroughly
Now the goal is 128 MiB files, rather than a specific length.
2013-01-03 20:19:01 -05:00
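A worked example of the sizing (the 20-byte row is an assumption, e.g.
a double timestamp plus six uint16 columns):

    target = 128 * 1024 * 1024            # 128 MiB goal per file
    row_size = 8 + 6 * 2                  # 20 bytes per row, for example
    rows_per_file = target // row_size    # 6710886 rows
    print(rows_per_file * row_size)       # 134217720, just under 128 MiB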
493bbed82c More coverage and tests 2013-01-03 19:21:12 -05:00
3bc25daaab Trim urllib to get full coverage of the features in use 2013-01-03 17:10:07 -05:00
40a3bc4bc3 Update README with Python 2.7 requirement 2013-01-03 17:09:51 -05:00
c083d63c96 Tests for Unicode compliance 2013-01-03 17:03:52 -05:00
0221e3ea21 Update commandline test helpers to better handle Unicode
We replace cStringIO with a StringIO subclass that forces UTF-8
encoding, and explicitly convert commandlines to UTF-8 before
shlex.  These changes will only affect tests, not normal commandline
operation.
2013-01-03 17:03:52 -05:00
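Such a wrapper is only a few lines (a sketch under Python 2, where this
applies; cStringIO can't be subclassed, hence plain StringIO):

    import StringIO

    class UTF8StringIO(StringIO.StringIO):
        def write(self, s):
            if isinstance(s, unicode):
                s = s.encode("utf-8")    # act like a UTF-8 terminal
            StringIO.StringIO.write(self, s)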
f5fd2b064e Replace urllib.encode() with a version that encodes Unicode as UTF-8 instead 2013-01-03 17:02:38 -05:00
06e91a6a98 Always use function version of print() 2013-01-03 17:02:38 -05:00
41b3f3c018 Always use UTF-8 for filenames in nilmdb.bulkdata 2013-01-03 17:02:38 -05:00
842076fef4 Cleanup server error handling with decorator 2013-01-03 17:02:38 -05:00
10d58f6a47 More test coverage 2013-01-02 00:00:05 -05:00
e2464efc12 Test everything; remove debugging 2013-01-01 23:46:54 -05:00
1beae5024e Bulkdata extract works now. 2013-01-01 23:44:52 -05:00
c7c65b6542 Work around CherryPy bug #1200; related cleanups
Spent way too long trying to track down a cryptic error that turned
out to be a CherryPy bug.  Now we catch this using a decorator in the
'extract' and 'intervals' generators that transforms exceptions that
trigger the bug into ones that do not.  Fun!
2013-01-01 23:03:53 -05:00
f41ff0a6e8 Inserting bulk data is essentially done, not tested 2013-01-01 21:04:35 -05:00
389c1d189f Make the option to turn off chunked encoding for debugging clearer. 2013-01-01 21:03:33 -05:00
487298986e More work towards bulkdata 2012-12-31 18:44:57 -05:00
d4cd045c48 Fix path stuff, build packer in bulkdata.Table 2012-12-31 17:22:30 -05:00
3816645313 More work on BulkData 2012-12-31 17:22:30 -05:00
83b937c720 More Pytables -> bulkdata conversion 2012-12-31 17:22:30 -05:00
b3e6e8976f More work towards flat bulk data storage.
Cleaned up OS-specific path handling in nilmdb, bulkdata.
2012-12-31 17:22:30 -05:00
c890ea93cb WIP switching away from PyTables 2012-12-31 17:22:29 -05:00
84c68c6913 Better documentation, cache Tables 2012-12-31 17:22:29 -05:00
6f1e6fe232 Isolate all PyTables stuff to a single file.
This will make migrating to my own data storage engine easier.
2012-12-31 17:22:29 -05:00
b0d76312d1 Add must_close() decorator, use it in nilmdb
Warns at runtime if a class's close() method wasn't called before the
object was destroyed.
2012-12-31 17:21:19 -05:00
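A stripped-down sketch of the idea (the real nilmdb.utils.must_close is
a decorator factory and, per the commit above, later grows a
wrap_verify option):

    import warnings

    def must_close(cls):
        orig_init = cls.__init__
        orig_close = cls.close

        def __init__(self, *args, **kwargs):
            self._closed = False
            orig_init(self, *args, **kwargs)

        def close(self, *args, **kwargs):
            self._closed = True
            return orig_close(self, *args, **kwargs)

        def __del__(self):
            # Warn if the object is collected without being closed.
            if not getattr(self, "_closed", True):
                warnings.warn("%s.close() was never called" % cls.__name__)

        cls.__init__ = __init__
        cls.close = close
        cls.__del__ = __del__
        return cls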
19c846c71c Remove outdated files 2012-12-31 15:55:43 -05:00
f355c73209 Refactor utility classes into nilmdb.utils subdir/namespace
There's some bug with the testing harness where placing e.g.
  from du import du
in nilmdb/utils/__init__.py doesn't quite work -- sometimes the
module "du" replaces the function "du".  Not exactly sure why;
we work around that by just renaming files so they don't match
the imported names directly.
2012-12-31 15:55:36 -05:00
173014ba19 Use nilmdb.lrucache for caching interval sets 2012-12-31 14:52:46 -05:00
24d4752bc3 Add LRU cache memoizing decorator for functions 2012-12-31 14:39:16 -05:00
a85b273e2e Remove compression.
Messes up extraction, since we need random access for the timestamp binary
search.  In the future, maybe switching to multiple tables (one for
timestamp, one for compressed data) would be smart.
2012-12-14 17:19:23 -05:00
7f73b4b304 Use compression in pytables 2012-12-14 17:17:52 -05:00
f3eb6d1b79 Time it! 2012-12-14 16:57:02 -05:00
9082cc9f44 Merging adjacent intervals is working now!
Adjust test expectations accordingly, since the number of intervals
they print out will now be smaller.
2012-12-12 19:25:27 -05:00
bf64a40472 Some misc test additions, interval optimizations. Still need adjacency test 2012-12-11 23:31:55 -05:00
32dbeebc09 More insertion checks. Need to get interval concatenation working. 2012-12-11 18:08:00 -05:00
66ddc79b15 Inserting works again, with proper end/start for paired blocks.
timeit.sh script works too!
2012-12-07 20:30:39 -05:00
7a8bd0bf41 Don't include layout on client side 2012-12-07 16:24:15 -05:00
ee552de740 Start reworking/fixing insert timestamps 2012-12-06 20:25:24 -05:00
6d1fb61573 Use 'repr' instead of 'str' in Interval string representation.
Otherwise timestamps get truncated to 2 decimal places.
2012-12-05 17:47:48 -05:00
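The difference, under Python 2 where str() rounds floats to 12
significant digits:

    t = 1354742867.123456
    print(str(t))     # 1354742867.12      -- microseconds lost
    print(repr(t))    # 1354742867.123456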
f094529e66 TODO update 2012-12-04 22:15:53 -05:00
5fecec2a4c Support deleting streams with new 'destroy' command 2012-12-04 22:15:00 -05:00
85bb46f45c Use pytable's createparents flag to avoid having to create group
structure manually.
2012-12-04 18:57:36 -05:00
17c329fd6d Start to be a little more strict about how intervals are half-open. 2012-11-29 15:35:11 -05:00
437e1b425a More speed tests, some whitespace cleanups 2012-11-29 15:22:47 -05:00
c0f87db3c1 Converted rbtree, interval to Cython. Serious speedups! 2012-11-29 15:13:09 -05:00
a9c5c19e30 Start converting interval.py to Cython. 2012-11-29 12:42:38 -05:00
f39567b2bc Speed updates 2012-11-29 01:35:01 -05:00
99ec0f4946 Converted rbtree.py to Cython
About 3x faster
2012-11-29 01:25:51 -05:00
f5c60f68dc Speed tests.
test_interval_speed is about O(n * log n), which is good -- but the
constants are high and it hits swap on a 4G machine for the 2**21
test.  Hopefully cython helps!
2012-11-29 01:00:54 -05:00
bdef0986d6 rbtree and interval tests fully pass now.
On to benchmarking...
2012-11-29 00:42:50 -05:00
c396c4dac8 rbtree tests complete 2012-11-29 00:07:49 -05:00
0b443f510b Filling out rbtree tests, search routines 2012-11-28 20:57:23 -05:00
66fa6f3824 Add rendering test 2012-11-28 18:34:51 -05:00
875fbe969f Some documentation and other cleanups in rbtree.py 2012-11-28 18:30:21 -05:00
e35e85886e add .gitignore 2012-11-28 17:21:51 -05:00
7211217f40 Working on getting the RBTree working. Intersections are busted.
git-svn-id: https://bucket.mit.edu/svn/nilm/nilmdb@11380 ddd99763-3ecb-0310-9145-efcb8ce7c51f
2012-11-15 18:55:56 +00:00
d34b980516 RBTree seems generally OK now
git-svn-id: https://bucket.mit.edu/svn/nilm/nilmdb@11379 ddd99763-3ecb-0310-9145-efcb8ce7c51f
2012-11-14 20:10:43 +00:00
6aee52d980 Deletion is still broken. F.
git-svn-id: https://bucket.mit.edu/svn/nilm/nilmdb@11378 ddd99763-3ecb-0310-9145-efcb8ce7c51f
2012-11-14 04:23:53 +00:00
090c8d5315 More progress
git-svn-id: https://bucket.mit.edu/svn/nilm/nilmdb@11377 ddd99763-3ecb-0310-9145-efcb8ce7c51f
2012-11-14 04:12:15 +00:00
1042ff9f4b add RBtree C++ example that I based this on; update tests
git-svn-id: https://bucket.mit.edu/svn/nilm/nilmdb@11376 ddd99763-3ecb-0310-9145-efcb8ce7c51f
2012-11-14 03:55:37 +00:00
bc687969c1 Work in progress switching to my own RBTree. Currently creates loops
somewhere, need to figure out what's going on.


git-svn-id: https://bucket.mit.edu/svn/nilm/nilmdb@11375 ddd99763-3ecb-0310-9145-efcb8ce7c51f
2012-11-14 03:48:04 +00:00
de27bd3f41 Attempt at using a sentinel instead of class instances for the leaf node; doesn't quite work for deletion
git-svn-id: https://bucket.mit.edu/svn/nilm/nilmdb@11361 ddd99763-3ecb-0310-9145-efcb8ce7c51f
2012-11-10 02:12:01 +00:00
4dcf713d0e Attempts at speeding up the RbTree implementation
with cython.  Still quite a bit slower than the bxinterval
implementation, though.


git-svn-id: https://bucket.mit.edu/svn/nilm/nilmdb@11360 ddd99763-3ecb-0310-9145-efcb8ce7c51f
2012-11-09 21:06:04 +00:00
f9dea53c24 Randomize order for the insertion test
git-svn-id: https://bucket.mit.edu/svn/nilm/nilmdb@11358 ddd99763-3ecb-0310-9145-efcb8ce7c51f
2012-11-08 23:50:23 +00:00
6cedd7c327 fix
git-svn-id: https://bucket.mit.edu/svn/nilm/nilmdb@11357 ddd99763-3ecb-0310-9145-efcb8ce7c51f
2012-11-08 23:44:21 +00:00
6278d32f7d Passes tests, but is slow
git-svn-id: https://bucket.mit.edu/svn/nilm/nilmdb@11356 ddd99763-3ecb-0310-9145-efcb8ce7c51f
2012-11-08 23:08:01 +00:00
991039903c Partial implementation of Interval and IntervalSet with a red-black
tree.

This is currently hitting an issue where it's considering the
intersection of [0,1] and [1,2] to be [1,1].  It matches the 
fully-closed definition of intervals, unlike before -- but might
cause issues.  Need to consider whether test case is correct.


git-svn-id: https://bucket.mit.edu/svn/nilm/nilmdb@11355 ddd99763-3ecb-0310-9145-efcb8ce7c51f
2012-11-08 22:56:05 +00:00
77 changed files with 3941 additions and 1434 deletions


@@ -7,3 +7,4 @@
exclude_lines =
pragma: no cover
if 0:
omit = nilmdb/utils/datetime_tz*

.gitignore

@@ -1,2 +1,23 @@
# Tests
tests/*testdb/
.coverage
db/
# Compiled / cythonized files
docs/*.html
build/
*.pyc
nilmdb/server/interval.c
nilmdb/server/interval.so
nilmdb/server/layout.c
nilmdb/server/layout.so
nilmdb/server/rbtree.c
nilmdb/server/rbtree.so
# Setup junk
dist/
nilmdb.egg-info/
# Misc
timeit*out


@@ -1,18 +1,10 @@
all: test
tool:
python nilmtool.py --help
python nilmtool.py list --help
python nilmtool.py -u asfdadsf list
lint:
pylint -f parseable nilmdb
test:
nosetests
profile:
nosetests --with-profile
python runtests.py
clean::
find . -name '*pyc' | xargs rm -f


@@ -1,2 +1,10 @@
sudo apt-get install python-nose python-coverage
sudo apt-get install python-tables cython python-cherrypy3
nilmdb: Non-Intrusive Load Monitor Database
by Jim Paris <jim@jtan.com>
Prerequisites:
sudo apt-get install python2.7 python-cherrypy3 python-decorator python-nose python-coverage python-setuptools
Install:
python setup.py install

TODO

@@ -1,5 +0,0 @@
- Merge adjacent intervals on insert (maybe with client help?)
- Better testing:
- see about getting coverage on layout.pyx
- layout.pyx performance tests, before and after generalization

design.md

@@ -1,181 +0,0 @@
Structure
---------
nilmdb.nilmdb is the NILM database interface. A PyTables database
holds actual rows of data, and a SQL database tracks metadata
and ranges.
Access to the nilmdb must be single-threaded. This is handled with
the nilmdb.serializer class.
nilmdb.server is an HTTP server that provides an interface to talk,
through the serialization layer, to the nilmdb object.
nilmdb.client is an HTTP client that connects to this.
Sqlite performance
------------------
Committing a transaction in the default sync mode (PRAGMA synchronous=FULL)
takes about 125 msec. sqlite3 will commit transactions at three points:
1: explicit con.commit()
2: between a series of DML commands and non-DML commands, e.g.
after a series of INSERT, SELECT, but before a CREATE TABLE or
PRAGMA.
3: at the end of an explicit transaction, e.g. "with self.con as con:"
To speed up testing, or if this transaction speed becomes an issue,
the sync=False option to NilmDB will set PRAGMA synchronous=OFF.
Inserting streams
-----------------
We need to send the contents of "data" as POST. Do we need chunked
transfer?
- Don't know the size in advance, so we would need to use chunked if
we send the entire thing in one request.
- But we shouldn't send one chunk per line, so we need to buffer some
anyway; why not just make new requests?
- Consider the infinite-streaming case, we might want to send it
immediately? Not really -- server still should do explicit inserts
of fixed-size chunks.
- Even chunked encoding needs the size of each chunk beforehand, so
everything still gets buffered. Just a tradeoff of buffer size.
Before timestamps are added:
- Raw data is about 440 kB/s (9 channels)
- Prep data is about 12.5 kB/s (1 phase)
- How do we know how much data to send?
- Remember that we can only do maybe 8-50 transactions per second on
the sqlite database. So if one block of inserted data is one
transaction, we'd need the raw case to be around 64kB per request,
ideally more.
- Maybe use a range, based on how long it's taking to read the data
- If no more data, send it
- If data > 1 MB, send it
- If more than 10 seconds have elapsed, send it
- Should those numbers come from the server?
Converting from ASCII to PyTables:
- For each row getting added, we need to set attributes on a PyTables
Row object and call table.append(). This means that there isn't a
particularly efficient way of converting from ascii.
- Could create a function like nilmdb.layout.Layout("foo").fillRow(asciiline)
- But this means we're doing parsing on the serialized side
- Let's keep parsing on the threaded server side so we can detect
errors better, and not block the serialized nilmdb for a slow
parsing process.
- Client sends ASCII data
- Server converts this ASCII data to a list of values
- Maybe:
# threaded side creates this object
parser = nilmdb.layout.Parser("layout_name")
# threaded side parses and fills it with data
parser.parse(textdata)
# serialized side pulls out rows
for n in xrange(parser.nrows):
parser.fill_row(rowinstance, n)
table.append()
Inserting streams, inside nilmdb
--------------------------------
- First check that the new stream doesn't overlap.
- Get minimum timestamp, maximum timestamp from data parser.
- (extend parser to verify monotonicity and track extents)
- Get all intervals for this stream in the database
- See if new interval overlaps any existing ones
- If so, bail
- Question: should we cache intervals inside NilmDB?
- Assume database is fast for now, and always rebuild from DB.
- Can add a caching layer later if we need to.
- `stream_get_ranges(path)` -> return IntervalSet?
Speed
-----
- First approach was quadratic. Adding four hours of data:
$ time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s 20110513-110000 /bpnilm/1/raw
real 24m31.093s
$ time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s 20110513-120001 /bpnilm/1/raw
real 43m44.528s
$ time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s 20110513-130002 /bpnilm/1/raw
real 93m29.713s
$ time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s 20110513-140003 /bpnilm/1/raw
real 166m53.007s
- Disabling pytables indexing didn't help:
real 31m21.492s
real 52m51.963s
real 102m8.151s
real 176m12.469s
- Server RAM usage is constant.
- Speed problems were due to IntervalSet speed: parsing intervals
from the database and adding the new one each time.
- First optimization is to cache result of `nilmdb:_get_intervals`,
which gives the best speedup.
- Also switched to internally using bxInterval from bx-python package.
Speed of `tests/test_interval:TestIntervalSpeed` is pretty decent
and seems to be growing logarithmically now. About 85μs per insertion
for inserting 131k entries.
- Storing the interval data in SQL might be better, with a scheme like:
http://www.logarithmic.net/pfh/blog/01235197474
- Next slowdown target is nilmdb.layout.Parser.parse().
- Rewrote parsers using cython and sscanf
- Stats (rev 10831), with _add_interval disabled
layout.pyx.Parser.parse:128 6303 sec, 262k calls
layout.pyx.parse:63 13913 sec, 5.1g calls
numpy:records.py.fromrecords:569 7410 sec, 262k calls
- Probably OK for now.
IntervalSet speed
-----------------
- Initial implementation was pretty slow, even with binary search in
sorted list
- Replaced with bxInterval; now takes about log n time for an insertion
- TestIntervalSpeed with range(17,18) and profiling
- 85 μs each
- 131072 calls to `__iadd__`
- 131072 to bx.insert_interval
- 131072 to bx.insert:395
- 2355835 to bx.insert:106 (18x as many?)
- Tried blist too, worse than bxinterval.
- Might be algorithmic improvements to be made in Interval.py,
like in `__and__`
Layouts
-------
Current/old design has specific layouts: RawData, PrepData, RawNotchedData.
Let's get rid of this entirely and switch to simpler data types that are
just collections and counts of a single type. We'll still use strings
to describe them, with format:
type_count
where type is "uint16", "float32", or "float64", and count is an integer.
nilmdb.layout.named() will parse these strings into the appropriate
handlers. For compatibility:
"RawData" == "uint16_6"
"RawNotchedData" == "uint16_9"
"PrepData" == "float32_8"

docs/Makefile

@@ -0,0 +1,9 @@
ALL_DOCS = $(wildcard *.md)
all: $(ALL_DOCS:.md=.html)
%.html: %.md
pandoc -s $< > $@
clean:
rm -f *.html

docs/TODO.md

@@ -0,0 +1,5 @@
- Documentation
- Machine-readable information in OverflowError, parser errors.
Maybe subclass `cherrypy.HTTPError` and override `set_response`
to add another JSON field?

docs/design.md

@@ -0,0 +1,268 @@
Structure
---------
nilmdb.nilmdb is the NILM database interface. A nilmdb.BulkData
interface stores data in flat files, and a SQL database tracks
metadata and ranges.
Access to the nilmdb must be single-threaded. This is handled with
the nilmdb.serializer class. In the future this could probably
be turned into a per-path serialization.
nilmdb.server is an HTTP server that provides an interface to talk,
through the serialization layer, to the nilmdb object.
nilmdb.client is an HTTP client that connects to this.
Sqlite performance
------------------
Committing a transaction in the default sync mode (PRAGMA synchronous=FULL)
takes about 125 msec. sqlite3 will commit transactions at three points:
1. explicit con.commit()
2. between a series of DML commands and non-DML commands, e.g.
after a series of INSERT, SELECT, but before a CREATE TABLE or
PRAGMA.
3. at the end of an explicit transaction, e.g. "with self.con as con:"
To speed up testing, or if this transaction speed becomes an issue,
the sync=False option to NilmDB will set PRAGMA synchronous=OFF.
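For example (a standalone sqlite3 illustration, not nilmdb code):

    import sqlite3

    con = sqlite3.connect("test.db")
    con.execute("PRAGMA synchronous=OFF")   # what sync=False selects
    con.execute("CREATE TABLE IF NOT EXISTS t (v INTEGER)")
    con.execute("INSERT INTO t VALUES (1)")
    con.commit()                            # commit point 1: explicit
    with con:                               # commit point 3: transaction
        con.execute("INSERT INTO t VALUES (2)")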
Inserting streams
-----------------
We need to send the contents of "data" as POST. Do we need chunked
transfer?
- Don't know the size in advance, so we would need to use chunked if
we send the entire thing in one request.
- But we shouldn't send one chunk per line, so we need to buffer some
anyway; why not just make new requests?
- Consider the infinite-streaming case, we might want to send it
immediately? Not really -- server still should do explicit inserts
of fixed-size chunks.
- Even chunked encoding needs the size of each chunk beforehand, so
everything still gets buffered. Just a tradeoff of buffer size.
Before timestamps are added:
- Raw data is about 440 kB/s (9 channels)
- Prep data is about 12.5 kB/s (1 phase)
- How do we know how much data to send?
- Remember that we can only do maybe 8-50 transactions per second on
the sqlite database. So if one block of inserted data is one
transaction, we'd need the raw case to be around 64kB per request,
ideally more.
- Maybe use a range, based on how long it's taking to read the data
- If no more data, send it
- If data > 1 MB, send it
- If more than 10 seconds have elapsed, send it
- Should those numbers come from the server?
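In other words, the client-side send decision reduces to something like
(thresholds taken from the list above; they could become server-provided):

    def should_send(buffered, elapsed_seconds, no_more_data):
        return (no_more_data
                or len(buffered) > 1024 * 1024
                or elapsed_seconds > 10)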
Converting from ASCII to PyTables:
- For each row getting added, we need to set attributes on a PyTables
Row object and call table.append(). This means that there isn't a
particularly efficient way of converting from ascii.
- Could create a function like nilmdb.layout.Layout("foo").fillRow(asciiline)
- But this means we're doing parsing on the serialized side
- Let's keep parsing on the threaded server side so we can detect
errors better, and not block the serialized nilmdb for a slow
parsing process.
- Client sends ASCII data
- Server converts this ASCII data to a list of values
- Maybe:
# threaded side creates this object
parser = nilmdb.layout.Parser("layout_name")
# threaded side parses and fills it with data
parser.parse(textdata)
# serialized side pulls out rows
for n in xrange(parser.nrows):
parser.fill_row(rowinstance, n)
table.append()
Inserting streams, inside nilmdb
--------------------------------
- First check that the new stream doesn't overlap.
- Get minimum timestamp, maximum timestamp from data parser.
- (extend parser to verify monotonicity and track extents)
- Get all intervals for this stream in the database
- See if new interval overlaps any existing ones
- If so, bail
- Question: should we cache intervals inside NilmDB?
- Assume database is fast for now, and always rebuild from DB.
- Can add a caching layer later if we need to.
- `stream_get_ranges(path)` -> return IntervalSet?
Speed
-----
- First approach was quadratic. Adding four hours of data:
$ time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s 20110513-110000 /bpnilm/1/raw
real 24m31.093s
$ time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s 20110513-120001 /bpnilm/1/raw
real 43m44.528s
$ time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s 20110513-130002 /bpnilm/1/raw
real 93m29.713s
$ time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s 20110513-140003 /bpnilm/1/raw
real 166m53.007s
- Disabling pytables indexing didn't help:
real 31m21.492s
real 52m51.963s
real 102m8.151s
real 176m12.469s
- Server RAM usage is constant.
- Speed problems were due to IntervalSet speed: parsing intervals
from the database and adding the new one each time.
- First optimization is to cache result of `nilmdb:_get_intervals`,
which gives the best speedup.
- Also switched to internally using bxInterval from bx-python package.
Speed of `tests/test_interval:TestIntervalSpeed` is pretty decent
and seems to be growing logarithmically now. About 85μs per insertion
for inserting 131k entries.
- Storing the interval data in SQL might be better, with a scheme like:
http://www.logarithmic.net/pfh/blog/01235197474
- Next slowdown target is nilmdb.layout.Parser.parse().
- Rewrote parsers using cython and sscanf
- Stats (rev 10831), with _add_interval disabled
layout.pyx.Parser.parse:128 6303 sec, 262k calls
layout.pyx.parse:63 13913 sec, 5.1g calls
numpy:records.py.fromrecords:569 7410 sec, 262k calls
- Probably OK for now.
- After all updates, now takes about 8.5 minutes to insert an hour of
data, constant after adding 171 hours (4.9 billion data points)
- Data set size: 98 gigs = 20 bytes per data point.
6 uint16 data + 1 uint32 timestamp = 16 bytes per point
So compression must be off -- will retry with compression forced on.
IntervalSet speed
-----------------
- Initial implementation was pretty slow, even with binary search in
sorted list
- Replaced with bxInterval; now takes about log n time for an insertion
- TestIntervalSpeed with range(17,18) and profiling
- 85 μs each
- 131072 calls to `__iadd__`
- 131072 to bx.insert_interval
- 131072 to bx.insert:395
- 2355835 to bx.insert:106 (18x as many?)
- Tried blist too, worse than bxinterval.
- Might be algorithmic improvements to be made in Interval.py,
like in `__and__`
- Replaced again with rbtree. Seems decent. Numbers are time per
insert for 2**17 insertions, followed by total wall time and RAM
usage for running "make test" with `test_rbtree` and `test_interval`
with range(5,20):
- old values with bxinterval:
20.2 μS, total 20 s, 177 MB RAM
- rbtree, plain python:
97 μS, total 105 s, 846 MB RAM
- rbtree converted to cython:
26 μS, total 29 s, 320 MB RAM
- rbtree and interval converted to cython:
8.4 μS, total 12 s, 134 MB RAM
Layouts
-------
Current/old design has specific layouts: RawData, PrepData, RawNotchedData.
Let's get rid of this entirely and switch to simpler data types that are
just collections and counts of a single type. We'll still use strings
to describe them, with format:
type_count
where type is "uint16", "float32", or "float64", and count is an integer.
nilmdb.layout.named() will parse these strings into the appropriate
handlers. For compatibility:
"RawData" == "uint16_6"
"RawNotchedData" == "uint16_9"
"PrepData" == "float32_8"
BulkData design
---------------
BulkData is a custom bulk data storage system that was written to
replace PyTables. The general structure is a `data` subdirectory in
the main NilmDB directory. Within `data`, paths are created for each
created stream. These locations are called tables. For example,
tables might be located at
nilmdb/data/newton/raw/
nilmdb/data/newton/prep/
nilmdb/data/cottage/raw/
Each table contains:
- An unchanging `_format` file (Python pickle format) that describes
parameters of how the data is broken up, like files per directory,
rows per file, and the binary data format
- Hex named subdirectories `("%04x", although more than 65536 can exist)`
- Hex named files within those subdirectories, like:
/nilmdb/data/newton/raw/000b/010a
The data format of these files is raw binary, interpreted by the
Python `struct` module according to the format string in the
`_format` file.
- An optional file with the same name plus a `.removed` suffix (Python
pickle format), containing a list of row numbers that have been
logically removed from the data file. If this covers the entire
file, the entire file will be removed.
- Note that the `bulkdata.nrows` variable is calculated once in
`BulkData.__init__()`, and only ever incremented during use. Thus,
even if all data is removed, `nrows` can remain high. However, if
the server is restarted, the newly calculated `nrows` may be lower
than in a previous run due to deleted data. To be specific, this
sequence of events:
- insert data
- remove all data
- insert data
will result in having different row numbers in the database, and
differently numbered files on the filesystem, than the sequence:
- insert data
- remove all data
- restart server
- insert data
This is okay! Everything should remain consistent both in the
`BulkData` and `NilmDB`. Not attempting to readjust `nrows` during
deletion makes the code quite a bit simpler.
- Similarly, data files are never truncated shorter. Removing data
from the end of the file will not shorten it; it will only be
deleted when it has been fully filled and all of the data has been
subsequently removed.
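Putting the pieces together, one plausible mapping from row number to
file location (the parameters here are assumptions; real values come
from each table's `_format` file):

    def row_to_location(row, rows_per_file = 6710886, files_per_dir = 4096):
        filenum, offset = divmod(row, rows_per_file)
        dirnum, filenum = divmod(filenum, files_per_dir)
        return ("%04x" % dirnum, "%04x" % filenum, offset)

    # (subdir, filename, row within file):
    print(row_to_location(0))              # ('0000', '0000', 0)
    print(row_to_location(7 * 10 ** 9))    # ('0000', '0413', 545902)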


@@ -1,16 +1,4 @@
"""Main NilmDB import"""
from .nilmdb import NilmDB
from .server import Server
from .client import Client
from .timer import Timer
import cmdline
import pyximport; pyximport.install()
import layout
import serializer
import timestamper
import interval
import du
from server import NilmDB, Server
from client import Client


@@ -1,502 +0,0 @@
# cython: profile=False
# This is from bx-python 554:07aca5a9f6fc (BSD licensed), modified to
# store interval ranges as doubles rather than 32-bit integers.
"""
Data structure for performing intersect queries on a set of intervals which
preserves all information about the intervals (unlike bitset projection methods).
:Authors: James Taylor (james@jamestaylor.org),
Ian Schenk (ian.schenck@gmail.com),
Brent Pedersen (bpederse@gmail.com)
"""
# Historical note:
# This module originally contained an implementation based on sorted endpoints
# and a binary search, using an idea from Scott Schwartz and Piotr Berman.
# Later an interval tree implementation was implemented by Ian for Galaxy's
# join tool (see `bx.intervals.operations.quicksect.py`). This was then
# converted to Cython by Brent, who also added support for
# upstream/downstream/neighbor queries. This was modified by James to
# handle half-open intervals strictly, to maintain sort order, and to
# implement the same interface as the original Intersecter.
#cython: cdivision=True
import operator
cdef extern from "stdlib.h":
int ceil(float f)
float log(float f)
int RAND_MAX
int rand()
int strlen(char *)
int iabs(int)
cdef inline double dmax2(double a, double b):
if b > a: return b
return a
cdef inline double dmax3(double a, double b, double c):
if b > a:
if c > b:
return c
return b
if a > c:
return a
return c
cdef inline double dmin3(double a, double b, double c):
if b < a:
if c < b:
return c
return b
if a < c:
return a
return c
cdef inline double dmin2(double a, double b):
if b < a: return b
return a
cdef float nlog = -1.0 / log(0.5)
cdef class IntervalNode:
"""
A single node of an `IntervalTree`.
NOTE: Unless you really know what you are doing, you probably should use
`IntervalTree` rather than using this directly.
"""
cdef float priority
cdef public object interval
cdef public double start, end
cdef double minend, maxend, minstart
cdef public IntervalNode cleft, cright, croot
property left_node:
def __get__(self):
return self.cleft if self.cleft is not EmptyNode else None
property right_node:
def __get__(self):
return self.cright if self.cright is not EmptyNode else None
property root_node:
def __get__(self):
return self.croot if self.croot is not EmptyNode else None
def __repr__(self):
return "IntervalNode(%g, %g)" % (self.start, self.end)
def __cinit__(IntervalNode self, double start, double end, object interval):
# Python lacks the binomial distribution, so we convert a
# uniform into a binomial because it naturally scales with
# tree size. Also, python's uniform is perfect since the
# upper limit is not inclusive, which gives us undefined here.
self.priority = ceil(nlog * log(-1.0/(1.0 * rand()/RAND_MAX - 1)))
self.start = start
self.end = end
self.interval = interval
self.maxend = end
self.minstart = start
self.minend = end
self.cleft = EmptyNode
self.cright = EmptyNode
self.croot = EmptyNode
cpdef IntervalNode insert(IntervalNode self, double start, double end, object interval):
"""
Insert a new IntervalNode into the tree of which this node is
currently the root. The return value is the new root of the tree (which
may or may not be this node!)
"""
cdef IntervalNode croot = self
# If starts are the same, decide which to add interval to based on
# end, thus maintaining sortedness relative to start/end
cdef double decision_endpoint = start
if start == self.start:
decision_endpoint = end
if decision_endpoint > self.start:
# insert to cright tree
if self.cright is not EmptyNode:
self.cright = self.cright.insert( start, end, interval )
else:
self.cright = IntervalNode( start, end, interval )
# rebalance tree
if self.priority < self.cright.priority:
croot = self.rotate_left()
else:
# insert to cleft tree
if self.cleft is not EmptyNode:
self.cleft = self.cleft.insert( start, end, interval)
else:
self.cleft = IntervalNode( start, end, interval)
# rebalance tree
if self.priority < self.cleft.priority:
croot = self.rotate_right()
croot.set_ends()
self.cleft.croot = croot
self.cright.croot = croot
return croot
cdef IntervalNode rotate_right(IntervalNode self):
cdef IntervalNode croot = self.cleft
self.cleft = self.cleft.cright
croot.cright = self
self.set_ends()
return croot
cdef IntervalNode rotate_left(IntervalNode self):
cdef IntervalNode croot = self.cright
self.cright = self.cright.cleft
croot.cleft = self
self.set_ends()
return croot
cdef inline void set_ends(IntervalNode self):
if self.cright is not EmptyNode and self.cleft is not EmptyNode:
self.maxend = dmax3(self.end, self.cright.maxend, self.cleft.maxend)
self.minend = dmin3(self.end, self.cright.minend, self.cleft.minend)
self.minstart = dmin3(self.start, self.cright.minstart, self.cleft.minstart)
elif self.cright is not EmptyNode:
self.maxend = dmax2(self.end, self.cright.maxend)
self.minend = dmin2(self.end, self.cright.minend)
self.minstart = dmin2(self.start, self.cright.minstart)
elif self.cleft is not EmptyNode:
self.maxend = dmax2(self.end, self.cleft.maxend)
self.minend = dmin2(self.end, self.cleft.minend)
self.minstart = dmin2(self.start, self.cleft.minstart)
def intersect( self, double start, double end, sort=True ):
"""
given a start and a end, return a list of features
falling within that range
"""
cdef list results = []
self._intersect( start, end, results )
if sort:
results = sorted(results)
return results
find = intersect
cdef void _intersect( IntervalNode self, double start, double end, list results):
# Left subtree
if self.cleft is not EmptyNode and self.cleft.maxend > start:
self.cleft._intersect( start, end, results )
# This interval
if ( self.end > start ) and ( self.start < end ):
results.append( self.interval )
# Right subtree
if self.cright is not EmptyNode and self.start < end:
self.cright._intersect( start, end, results )
cdef void _seek_left(IntervalNode self, double position, list results, int n, double max_dist):
# we know we can bail in these 2 cases.
if self.maxend + max_dist < position:
return
if self.minstart > position:
return
# the ordering of these 3 blocks makes it so the results are
# ordered nearest to farthest from the query position
if self.cright is not EmptyNode:
self.cright._seek_left(position, results, n, max_dist)
if -1 < position - self.end < max_dist:
results.append(self.interval)
# TODO: can these conditionals be more stringent?
if self.cleft is not EmptyNode:
self.cleft._seek_left(position, results, n, max_dist)
cdef void _seek_right(IntervalNode self, double position, list results, int n, double max_dist):
# we know we can bail in these 2 cases.
if self.maxend < position: return
if self.minstart - max_dist > position: return
#print "SEEK_RIGHT:",self, self.cleft, self.maxend, self.minstart, position
# the ordering of these 3 blocks makes it so the results are
# ordered nearest to farthest from the query position
if self.cleft is not EmptyNode:
self.cleft._seek_right(position, results, n, max_dist)
if -1 < self.start - position < max_dist:
results.append(self.interval)
if self.cright is not EmptyNode:
self.cright._seek_right(position, results, n, max_dist)
cpdef left(self, position, int n=1, double max_dist=2500):
"""
find n features with a start > than `position`
f: a Interval object (or anything with an `end` attribute)
n: the number of features to return
max_dist: the maximum distance to look before giving up.
"""
cdef list results = []
# use start - 1 because .left() assumes strictly left-of
self._seek_left( position - 1, results, n, max_dist )
if len(results) == n: return results
r = results
r.sort(key=operator.attrgetter('end'), reverse=True)
return r[:n]
cpdef right(self, position, int n=1, double max_dist=2500):
"""
find n features with a end < than position
f: a Interval object (or anything with a `start` attribute)
n: the number of features to return
max_dist: the maximum distance to look before giving up.
"""
cdef list results = []
# use end + 1 because .right() assumes strictly right-of
self._seek_right(position + 1, results, n, max_dist)
if len(results) == n: return results
r = results
r.sort(key=operator.attrgetter('start'))
return r[:n]
def traverse(self):
if self.cleft is not EmptyNode:
for node in self.cleft.traverse():
yield node
yield self.interval
if self.cright is not EmptyNode:
for node in self.cright.traverse():
yield node
cdef IntervalNode EmptyNode = IntervalNode( 0, 0, Interval(0, 0))
## ---- Wrappers that retain the old interface -------------------------------
cdef class Interval:
"""
Basic feature, with required integer start and end properties.
Also accepts optional strand as +1 or -1 (used for up/downstream queries),
a name, and any arbitrary data is sent in on the info keyword argument
>>> from bx.intervals.intersection import Interval
>>> f1 = Interval(23, 36)
>>> f2 = Interval(34, 48, value={'chr':12, 'anno':'transposon'})
>>> f2
Interval(34, 48, value={'anno': 'transposon', 'chr': 12})
"""
cdef public double start, end
cdef public object value, chrom, strand
def __init__(self, double start, double end, object value=None, object chrom=None, object strand=None ):
assert start <= end, "start must be less than end"
self.start = start
self.end = end
self.value = value
self.chrom = chrom
self.strand = strand
def __repr__(self):
fstr = "Interval(%g, %g" % (self.start, self.end)
if not self.value is None:
fstr += ", value=" + str(self.value)
fstr += ")"
return fstr
def __richcmp__(self, other, op):
if op == 0:
# <
return self.start < other.start or self.end < other.end
elif op == 1:
# <=
return self == other or self < other
elif op == 2:
# ==
return self.start == other.start and self.end == other.end
elif op == 3:
# !=
return self.start != other.start or self.end != other.end
elif op == 4:
# >
return self.start > other.start or self.end > other.end
elif op == 5:
# >=
return self == other or self > other
cdef class IntervalTree:
"""
Data structure for performing window intersect queries on a set
of possibly overlapping 1d intervals.
Usage
=====
Create an empty IntervalTree
>>> from bx.intervals.intersection import Interval, IntervalTree
>>> intersecter = IntervalTree()
An interval is a start and end position and a value (possibly None).
You can add any object as an interval:
>>> intersecter.insert( 0, 10, "food" )
>>> intersecter.insert( 3, 7, dict(foo='bar') )
>>> intersecter.find( 2, 5 )
['food', {'foo': 'bar'}]
If the object has start and end attributes (like the Interval class) there
are some shortcuts:
>>> intersecter = IntervalTree()
>>> intersecter.insert_interval( Interval( 0, 10 ) )
>>> intersecter.insert_interval( Interval( 3, 7 ) )
>>> intersecter.insert_interval( Interval( 3, 40 ) )
>>> intersecter.insert_interval( Interval( 13, 50 ) )
>>> intersecter.find( 30, 50 )
[Interval(3, 40), Interval(13, 50)]
>>> intersecter.find( 100, 200 )
[]
Before/after for intervals
>>> intersecter.before_interval( Interval( 10, 20 ) )
[Interval(3, 7)]
>>> intersecter.before_interval( Interval( 5, 20 ) )
[]
Upstream/downstream
>>> intersecter.upstream_of_interval(Interval(11, 12))
[Interval(0, 10)]
>>> intersecter.upstream_of_interval(Interval(11, 12, strand="-"))
[Interval(13, 50)]
>>> intersecter.upstream_of_interval(Interval(1, 2, strand="-"), num_intervals=3)
[Interval(3, 7), Interval(3, 40), Interval(13, 50)]
"""
cdef IntervalNode root
def __cinit__( self ):
root = None
# Helper for plots
def emptynode( self ):
return EmptyNode
def rootnode( self ):
return self.root
# ---- Position based interfaces -----------------------------------------
def insert( self, double start, double end, object value=None ):
"""
Insert the interval [start,end) associated with value `value`.
"""
if self.root is None:
self.root = IntervalNode( start, end, value )
else:
self.root = self.root.insert( start, end, value )
add = insert
def find( self, start, end ):
"""
Return a sorted list of all intervals overlapping [start,end).
"""
if self.root is None:
return []
return self.root.find( start, end )
def before( self, position, num_intervals=1, max_dist=2500 ):
"""
Find `num_intervals` intervals that lie before `position` and are no
further than `max_dist` positions away
"""
if self.root is None:
return []
return self.root.left( position, num_intervals, max_dist )
def after( self, position, num_intervals=1, max_dist=2500 ):
"""
Find `num_intervals` intervals that lie after `position` and are no
further than `max_dist` positions away
"""
if self.root is None:
return []
return self.root.right( position, num_intervals, max_dist )
# ---- Interval-like object based interfaces -----------------------------
def insert_interval( self, interval ):
"""
Insert an "interval" like object (one with at least start and end
attributes)
"""
self.insert( interval.start, interval.end, interval )
add_interval = insert_interval
def before_interval( self, interval, num_intervals=1, max_dist=2500 ):
"""
Find `num_intervals` intervals that lie completely before `interval`
and are no further than `max_dist` positions away
"""
if self.root is None:
return []
return self.root.left( interval.start, num_intervals, max_dist )
def after_interval( self, interval, num_intervals=1, max_dist=2500 ):
"""
Find `num_intervals` intervals that lie completely after `interval` and
are no further than `max_dist` positions away
"""
if self.root is None:
return []
return self.root.right( interval.end, num_intervals, max_dist )
def upstream_of_interval( self, interval, num_intervals=1, max_dist=2500 ):
"""
Find `num_intervals` intervals that lie completely upstream of
`interval` and are no further than `max_dist` positions away
"""
if self.root is None:
return []
if interval.strand == -1 or interval.strand == "-":
return self.root.right( interval.end, num_intervals, max_dist )
else:
return self.root.left( interval.start, num_intervals, max_dist )
def downstream_of_interval( self, interval, num_intervals=1, max_dist=2500 ):
"""
Find `num_intervals` intervals that lie completely downstream of
`interval` and are no further than `max_dist` positions away
"""
if self.root is None:
return []
if interval.strand == -1 or interval.strand == "-":
return self.root.left( interval.start, num_intervals, max_dist )
else:
return self.root.right( interval.end, num_intervals, max_dist )
def traverse(self):
"""
iterator that traverses the tree
"""
if self.root is None:
return iter([])
return self.root.traverse()
# For backward compatibility
Intersecter = IntervalTree


@@ -0,0 +1,4 @@
"""nilmdb.client"""
from .client import Client
from .errors import *


@@ -1,28 +1,32 @@
# -*- coding: utf-8 -*-
"""Class for performing HTTP client requests via libcurl"""
from __future__ import absolute_import
from nilmdb.printf import *
import nilmdb
import nilmdb.utils
import nilmdb.client.httpclient
from nilmdb.utils.printf import *
import time
import sys
import re
import os
import simplejson as json
import nilmdb.httpclient
# Other functions expect to see these in the nilmdb.client namespace
from nilmdb.httpclient import ClientError, ServerError, Error
import itertools
version = "1.0"
def float_to_string(f):
# Use repr to maintain full precision in the string output.
return repr(float(f))
class Client(object):
"""Main client interface to the Nilm database."""
client_version = version
def __init__(self, url):
self.http = nilmdb.httpclient.HTTPClient(url)
self.http = nilmdb.client.httpclient.HTTPClient(url)
def _json_param(self, data):
"""Return compact json-encoded version of parameter"""
@@ -84,33 +88,88 @@ class Client(object):
"layout" : layout }
return self.http.get("stream/create", params)
def stream_insert(self, path, data):
def stream_destroy(self, path):
"""Delete stream and its contents"""
params = { "path": path }
return self.http.get("stream/destroy", params)
def stream_remove(self, path, start = None, end = None):
"""Remove data from the specified time range"""
params = {
"path": path
}
if start is not None:
params["start"] = float_to_string(start)
if end is not None:
params["end"] = float_to_string(end)
return self.http.get("stream/remove", params)
def stream_insert(self, path, data, start = None, end = None):
"""Insert data into a stream. data should be a file-like object
that provides ASCII data that matches the database layout for path."""
that provides ASCII data that matches the database layout for path.
start and end are the starting and ending timestamp of this
stream; all timestamps t in the data must satisfy 'start <= t
< end'. If left unspecified, 'start' is the timestamp of the
first line of data, and 'end' is the timestamp on the last line
of data, plus a small delta of 1μs.
"""
params = { "path": path }
# See design.md for a discussion of how much data to send.
# These are soft limits -- actual data might be rounded up.
max_data = 1048576
max_time = 30
end_epsilon = 1e-6
def extract_timestamp(line):
return float(line.split()[0])
def sendit():
result = self.http.put("stream/insert", send_data, params)
params["old_timestamp"] = result[1]
return result
# If we have more data after this, use the timestamp of
# the next line as the end. Otherwise, use the given
# overall end time, or add end_epsilon to the last data
# point.
if nextline:
block_end = extract_timestamp(nextline)
if end and block_end > end:
# This is unexpected, but we'll defer to the server
# to return an error in this case.
block_end = end
elif end:
block_end = end
else:
block_end = extract_timestamp(line) + end_epsilon
# Send it
params["start"] = float_to_string(block_start)
params["end"] = float_to_string(block_end)
return self.http.put("stream/insert", block_data, params)
clock_start = time.time()
block_data = ""
block_start = start
result = None
start = time.time()
send_data = ""
for line in data:
elapsed = time.time() - start
send_data += line
for (line, nextline) in nilmdb.utils.misc.pairwise(data):
# If we don't have a starting time, extract it from the first line
if block_start is None:
block_start = extract_timestamp(line)
if (len(send_data) > max_data) or (elapsed > max_time):
clock_elapsed = time.time() - clock_start
block_data += line
# If we have enough data, or enough time has elapsed,
# send this block to the server, and empty things out
# for the next block.
if (len(block_data) > max_data) or (clock_elapsed > max_time):
result = sendit()
send_data = ""
start = time.time()
if len(send_data):
block_start = None
block_data = ""
clock_start = time.time()
# One last block?
if len(block_data):
result = sendit()
# Return the most recent JSON result we got back, or None if
@@ -125,9 +184,9 @@ class Client(object):
"path": path
}
if start is not None:
params["start"] = repr(start) # use repr to keep precision
params["start"] = float_to_string(start)
if end is not None:
params["end"] = repr(end)
params["end"] = float_to_string(end)
return self.http.get_gen("stream/intervals", params, retjson = True)
def stream_extract(self, path, start = None, end = None, count = False):
@@ -143,9 +202,9 @@ class Client(object):
"path": path,
}
if start is not None:
params["start"] = repr(start) # use repr to keep precision
params["start"] = float_to_string(start)
if end is not None:
params["end"] = repr(end)
params["end"] = float_to_string(end)
if count:
params["count"] = 1

nilmdb/client/errors.py

@@ -0,0 +1,33 @@
"""HTTP client errors"""
from nilmdb.utils.printf import *
class Error(Exception):
"""Base exception for both ClientError and ServerError responses"""
def __init__(self,
status = "Unspecified error",
message = None,
url = None,
traceback = None):
Exception.__init__(self, status)
self.status = status # e.g. "400 Bad Request"
self.message = message # textual message from the server
self.url = url # URL we were requesting
self.traceback = traceback # server traceback, if available
def _format_error(self, show_url):
s = sprintf("[%s]", self.status)
if self.message:
s += sprintf(" %s", self.message)
if show_url and self.url: # pragma: no cover
s += sprintf(" (%s)", self.url)
if self.traceback: # pragma: no cover
s += sprintf("\nServer traceback:\n%s", self.traceback)
return s
def __str__(self):
return self._format_error(show_url = False)
def __repr__(self): # pragma: no cover
return self._format_error(show_url = True)
class ClientError(Error):
pass
class ServerError(Error):
pass


@@ -1,7 +1,9 @@
"""HTTP client library"""
from __future__ import absolute_import
from nilmdb.printf import *
import nilmdb
import nilmdb.utils
from nilmdb.utils.printf import *
from nilmdb.client.errors import *
import time
import sys
@@ -9,38 +11,9 @@ import re
import os
import simplejson as json
import urlparse
import urllib
import pycurl
import cStringIO
import nilmdb.iteratorizer
class Error(Exception):
"""Base exception for both ClientError and ServerError responses"""
def __init__(self,
status = "Unspecified error",
message = None,
url = None,
traceback = None):
Exception.__init__(self, status)
self.status = status # e.g. "400 Bad Request"
self.message = message # textual message from the server
self.url = url # URL we were requesting
self.traceback = traceback # server traceback, if available
def __str__(self):
s = sprintf("[%s]", self.status)
if self.message:
s += sprintf(" %s", self.message)
if self.url:
s += sprintf(" (%s)", self.url)
if self.traceback: # pragma: no cover
s += sprintf("\nServer traceback:\n%s", self.traceback)
return s
class ClientError(Error):
pass
class ServerError(Error):
pass
class HTTPClient(object):
"""Class to manage and perform HTTP requests from the client"""
def __init__(self, baseurl = ""):
@@ -60,7 +33,8 @@ class HTTPClient(object):
def _setup_url(self, url = "", params = ""):
url = urlparse.urljoin(self.baseurl, url)
if params:
url = urlparse.urljoin(url, "?" + urllib.urlencode(params, True))
url = urlparse.urljoin(
url, "?" + nilmdb.utils.urllib.urlencode(params))
self.curl.setopt(pycurl.URL, url)
self.url = url
@@ -85,6 +59,10 @@ class HTTPClient(object):
raise ClientError(**args)
else: # pragma: no cover
if code >= 500 and code <= 599:
if args["message"] is None:
args["message"] = ("(no message; try disabling " +
"response.stream option in " +
"nilmdb.server for better debugging)")
raise ServerError(**args)
else:
raise Error(**args)
@@ -109,13 +87,14 @@ class HTTPClient(object):
self.curl.setopt(pycurl.WRITEFUNCTION, callback)
self.curl.perform()
try:
for i in nilmdb.iteratorizer.Iteratorizer(func):
if self._status == 200:
# If we had a 200 response, yield the data to the caller.
yield i
else:
# Otherwise, collect it into an error string.
error_body += i
with nilmdb.utils.Iteratorizer(func, curl_hack = True) as it:
for i in it:
if self._status == 200:
# If we had a 200 response, yield the data to caller.
yield i
else:
# Otherwise, collect it into an error string.
error_body += i
except pycurl.error as e:
raise ServerError(status = "502 Error",
url = self.url,
@@ -185,9 +164,9 @@ class HTTPClient(object):
def put(self, url, postdata, params = None, retjson = True):
"""Simple PUT"""
self.curl.setopt(pycurl.UPLOAD, 1)
self._setup_url(url, params)
data = cStringIO.StringIO(postdata)
self.curl.setopt(pycurl.UPLOAD, 1)
self.curl.setopt(pycurl.READFUNCTION, data.read)
return self._doreq(url, params, retjson)
@@ -213,8 +192,8 @@ class HTTPClient(object):
def put_gen(self, url, postdata, params = None, retjson = True):
"""Simple PUT, returning a generator"""
self.curl.setopt(pycurl.UPLOAD, 1)
self._setup_url(url, params)
data = cStringIO.StringIO(postdata)
self.curl.setopt(pycurl.UPLOAD, 1)
self.curl.setopt(pycurl.READFUNCTION, data.read)
return self._doreq_gen(url, params, retjson)


@@ -1 +1,3 @@
"""nilmdb.cmdline"""
from .cmdline import Cmdline

View File

@@ -1,21 +1,21 @@
"""Command line client functionality"""
from __future__ import absolute_import
from nilmdb.printf import *
import nilmdb.client
import nilmdb
from nilmdb.utils.printf import *
from nilmdb.utils import datetime_tz
import datetime_tz
import dateutil.parser
import sys
import re
import argparse
from argparse import ArgumentDefaultsHelpFormatter as def_form
version = "0.1"
version = "1.0"
# Valid subcommands. Defined in separate files just to break
# things up -- they're still called with Cmdline as self.
subcommands = [ "info", "create", "list", "metadata", "insert", "extract" ]
subcommands = [ "info", "create", "list", "metadata", "insert", "extract",
"remove", "destroy" ]
# Import the subcommand modules. Equivalent way of doing this would be
# from . import info as cmd_info
@@ -23,10 +23,16 @@ subcmd_mods = {}
for cmd in subcommands:
subcmd_mods[cmd] = __import__("nilmdb.cmdline." + cmd, fromlist = [ cmd ])
class JimArgumentParser(argparse.ArgumentParser):
def error(self, message):
self.print_usage(sys.stderr)
self.exit(2, sprintf("error: %s\n", message))
class Cmdline(object):
def __init__(self, argv):
self.argv = argv
self.client = None
def arg_time(self, toparse):
"""Parse a time string argument"""
@@ -42,10 +48,10 @@ class Cmdline(object):
If the string doesn't contain a timestamp, the current local
timezone is assumed (e.g. from the TZ env var).
"""
# If string doesn't contain at least 6 digits, consider it
# invalid. smartparse might otherwise accept empty strings
# and strings with just separators.
if len(re.findall(r"\d", toparse)) < 6:
# If string isn't "now" and doesn't contain at least 4 digits,
# consider it invalid. smartparse might otherwise accept
# empty strings and strings with just separators.
if toparse != "now" and len(re.findall(r"\d", toparse)) < 4:
raise ValueError("not enough digits for a timestamp")
# Try to just parse the time as given
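A standalone sketch of the new validity check above ('plausible' is a hypothetical name, not part of the diff):

    import re
    def plausible(toparse):
        # mirrors the check: "now" is special-cased; otherwise at least
        # 4 digits must appear somewhere in the string
        return toparse == "now" or len(re.findall(r"\d", toparse)) >= 4
    plausible("now")          # True
    plausible("2013-01-28")   # True  (eight digits)
    plausible("::--")         # False (separators only)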
@@ -92,8 +98,8 @@ class Cmdline(object):
version_string = sprintf("nilmtool %s, client library %s",
version, nilmdb.Client.client_version)
self.parser = argparse.ArgumentParser(add_help = False,
formatter_class = def_form)
self.parser = JimArgumentParser(add_help = False,
formatter_class = def_form)
group = self.parser.add_argument_group("General options")
group.add_argument("-h", "--help", action='help',
@@ -118,7 +124,8 @@ class Cmdline(object):
def die(self, formatstr, *args):
fprintf(sys.stderr, formatstr + "\n", *args)
self.client.close()
if self.client:
self.client.close()
sys.exit(-1)
def run(self):
@@ -130,13 +137,17 @@ class Cmdline(object):
self.parser_setup()
self.args = self.parser.parse_args(self.argv)
# Run arg verify handler if there is one
if "verify" in self.args:
self.args.verify(self)
self.client = nilmdb.Client(self.args.url)
# Make a test connection to make sure things work
try:
server_version = self.client.version()
except nilmdb.client.Error as e:
self.die("Error connecting to server: %s", str(e))
self.die("error connecting to server: %s", str(e))
# Now dispatch client request to appropriate function. Parser
# should have ensured that we don't have any unknown commands

View File

@@ -1,17 +1,27 @@
from __future__ import absolute_import
from nilmdb.printf import *
from nilmdb.utils.printf import *
import nilmdb
import nilmdb.client
import textwrap
from argparse import ArgumentDefaultsHelpFormatter as def_form
from argparse import RawDescriptionHelpFormatter as raw_form
def setup(self, sub):
cmd = sub.add_parser("create", help="Create a new stream",
formatter_class = def_form,
formatter_class = raw_form,
description="""
Create a new empty stream at the
specified path and with the specifed
layout type.
""")
Create a new empty stream at the specified path and with the specified
layout type.
Layout types are of the format: type_count
'type' is a data type like 'float32', 'float64', 'uint16', 'int32', etc.
'count' is the number of columns of this type.
For example, 'float32_8' means the data for this stream has 8 columns of
32-bit floating point values.
""")
cmd.set_defaults(handler = cmd_create)
group = cmd.add_argument_group("Required arguments")
group.add_argument("path",
@@ -24,4 +34,4 @@ def cmd_create(self):
try:
self.client.stream_create(self.args.path, self.args.layout)
except nilmdb.client.ClientError as e:
self.die("Error creating stream: %s", str(e))
self.die("error creating stream: %s", str(e))

25
nilmdb/cmdline/destroy.py Normal file
View File

@@ -0,0 +1,25 @@
from nilmdb.utils.printf import *
import nilmdb
import nilmdb.client
from argparse import ArgumentDefaultsHelpFormatter as def_form
def setup(self, sub):
cmd = sub.add_parser("destroy", help="Delete a stream and all data",
formatter_class = def_form,
description="""
Destroy the stream at the specified path. All
data and metadata related to the stream is
permanently deleted.
""")
cmd.set_defaults(handler = cmd_destroy)
group = cmd.add_argument_group("Required arguments")
group.add_argument("path",
help="Path of the stream to delete, e.g. /foo/bar")
def cmd_destroy(self):
"""Destroy stream"""
try:
self.client.stream_destroy(self.args.path)
except nilmdb.client.ClientError as e:
self.die("error destroying stream: %s", str(e))

View File

@@ -1,7 +1,7 @@
from __future__ import absolute_import
from nilmdb.printf import *
from __future__ import print_function
from nilmdb.utils.printf import *
import nilmdb
import nilmdb.client
import nilmdb.layout
import sys
def setup(self, sub):
@@ -9,17 +9,18 @@ def setup(self, sub):
description="""
Extract data from a stream.
""")
cmd.set_defaults(handler = cmd_extract)
cmd.set_defaults(verify = cmd_extract_verify,
handler = cmd_extract)
group = cmd.add_argument_group("Data selection")
group.add_argument("path",
help="Path of stream, e.g. /foo/bar")
group.add_argument("-s", "--start", required=True,
metavar="TIME", type=self.arg_time,
help="Starting timestamp (free-form)")
help="Starting timestamp (free-form, inclusive)")
group.add_argument("-e", "--end", required=True,
metavar="TIME", type=self.arg_time,
help="Ending timestamp (free-form)")
help="Ending timestamp (free-form, noninclusive)")
group = cmd.add_argument_group("Output format")
group.add_argument("-b", "--bare", action="store_true",
@@ -27,20 +28,32 @@ def setup(self, sub):
group.add_argument("-a", "--annotate", action="store_true",
help="Include comments with some information "
"about the stream")
group.add_argument("-T", "--timestamp-raw", action="store_true",
help="Show raw timestamps in annotated information")
group.add_argument("-c", "--count", action="store_true",
help="Just output a count of matched data points")
def cmd_extract_verify(self):
if self.args.start is not None and self.args.end is not None:
if self.args.start > self.args.end:
self.parser.error("start is after end")
def cmd_extract(self):
streams = self.client.stream_list(self.args.path)
if len(streams) != 1:
self.die("Error getting stream info for path %s", self.args.path)
self.die("error getting stream info for path %s", self.args.path)
layout = streams[0][1]
if self.args.timestamp_raw:
time_string = repr
else:
time_string = self.time_string
if self.args.annotate:
printf("# path: %s\n", self.args.path)
printf("# layout: %s\n", layout)
printf("# start: %s\n", self.time_string(self.args.start))
printf("# end: %s\n", self.time_string(self.args.end))
printf("# start: %s\n", time_string(self.args.start))
printf("# end: %s\n", time_string(self.args.end))
printed = False
for dataline in self.client.stream_extract(self.args.path,
@@ -51,7 +64,7 @@ def cmd_extract(self):
# Strip timestamp (first element). Doesn't make sense
# if we are only returning a count.
dataline = ' '.join(dataline.split(' ')[1:])
print dataline
print(dataline)
printed = True
if not printed:
if self.args.annotate:

View File

@@ -1,5 +1,4 @@
from __future__ import absolute_import
from nilmdb.printf import *
from nilmdb.utils.printf import *
from argparse import ArgumentDefaultsHelpFormatter as def_form

View File

@@ -1,8 +1,7 @@
from __future__ import absolute_import
from nilmdb.printf import *
from nilmdb.utils.printf import *
import nilmdb
import nilmdb.client
import nilmdb.layout
import nilmdb.timestamper
import nilmdb.utils.timestamper as timestamper
import sys
@@ -52,12 +51,12 @@ def cmd_insert(self):
# Find requested stream
streams = self.client.stream_list(self.args.path)
if len(streams) != 1:
self.die("Error getting stream info for path %s", self.args.path)
self.die("error getting stream info for path %s", self.args.path)
layout = streams[0][1]
if self.args.start and len(self.args.file) != 1:
self.die("--start can only be used with one input file, for now")
self.die("error: --start can only be used with one input file")
for filename in self.args.file:
if filename == '-':
@@ -66,11 +65,11 @@ def cmd_insert(self):
try:
infile = open(filename, "r")
except IOError:
self.die("Error opening input file %s", filename)
self.die("error opening input file %s", filename)
# Build a timestamper for this file
if self.args.none:
ts = nilmdb.timestamper.TimestamperNull(infile)
ts = timestamper.TimestamperNull(infile)
else:
if self.args.start:
start = self.args.start
@@ -78,14 +77,14 @@ def cmd_insert(self):
try:
start = self.parse_time(filename)
except ValueError:
self.die("Error extracting time from filename '%s'",
self.die("error extracting time from filename '%s'",
filename)
if not self.args.rate:
self.die("Need to specify --rate")
self.die("error: --rate is needed, but was not specified")
rate = self.args.rate
ts = nilmdb.timestamper.TimestamperRate(infile, start, rate)
ts = timestamper.TimestamperRate(infile, start, rate)
# Print info
if not self.args.quiet:
@@ -101,6 +100,6 @@ def cmd_insert(self):
# ugly bracketed ranges of 16-digit numbers and a mangled URL.
# Need to consider adding something like e.prettyprint()
# that is smarter about the contents of the error.
self.die("Error inserting data: %s", str(e))
self.die("error inserting data: %s", str(e))
return

View File

@@ -1,8 +1,9 @@
from __future__ import absolute_import
from nilmdb.printf import *
from nilmdb.utils.printf import *
import nilmdb
import nilmdb.client
import fnmatch
import argparse
from argparse import ArgumentDefaultsHelpFormatter as def_form
def setup(self, sub):
@@ -13,27 +14,53 @@ def setup(self, sub):
optionally filtering by layout or path. Wildcards
are accepted.
""")
cmd.set_defaults(handler = cmd_list)
cmd.set_defaults(verify = cmd_list_verify,
handler = cmd_list)
group = cmd.add_argument_group("Stream filtering")
group.add_argument("-p", "--path", metavar="PATH", default="*",
help="Match only this path (-p can be omitted)")
group.add_argument("path_positional", default="*",
nargs="?", help=argparse.SUPPRESS)
group.add_argument("-l", "--layout", default="*",
help="Match only this stream layout")
group.add_argument("-p", "--path", default="*",
help="Match only this path")
group = cmd.add_argument_group("Interval details")
group.add_argument("-d", "--detail", action="store_true",
help="Show available data time intervals")
group.add_argument("-T", "--timestamp-raw", action="store_true",
help="Show raw timestamps in time intervals")
group.add_argument("-s", "--start",
metavar="TIME", type=self.arg_time,
help="Starting timestamp (free-form)")
help="Starting timestamp (free-form, inclusive)")
group.add_argument("-e", "--end",
metavar="TIME", type=self.arg_time,
help="Ending timestamp (free-form)")
help="Ending timestamp (free-form, noninclusive)")
def cmd_list_verify(self):
# A hidden "path_positional" argument lets the user leave off the
# "-p" when specifying the path. Handle it here.
got_opt = self.args.path != "*"
got_pos = self.args.path_positional != "*"
if got_pos:
if got_opt:
self.parser.error("too many paths specified")
else:
self.args.path = self.args.path_positional
if self.args.start is not None and self.args.end is not None:
if self.args.start > self.args.end:
self.parser.error("start is after end")
def cmd_list(self):
"""List available streams"""
streams = self.client.stream_list()
if self.args.timestamp_raw:
time_string = repr
else:
time_string = self.time_string
for (path, layout) in streams:
if not (fnmatch.fnmatch(path, self.args.path) and
fnmatch.fnmatch(layout, self.args.layout)):
@@ -46,9 +73,7 @@ def cmd_list(self):
printed = False
for (start, end) in self.client.stream_intervals(path, self.args.start,
self.args.end):
printf(" [ %s -> %s ]\n",
self.time_string(start),
self.time_string(end))
printf(" [ %s -> %s ]\n", time_string(start), time_string(end))
printed = True
if not printed:
printf(" (no intervals)\n")

View File

@@ -1,5 +1,5 @@
from __future__ import absolute_import
from nilmdb.printf import *
from nilmdb.utils.printf import *
import nilmdb
import nilmdb.client
def setup(self, sub):
@@ -43,21 +43,21 @@ def cmd_metadata(self):
for keyval in keyvals:
kv = keyval.split('=')
if len(kv) != 2 or kv[0] == "":
self.die("Error parsing key=value argument '%s'", keyval)
self.die("error parsing key=value argument '%s'", keyval)
data[kv[0]] = kv[1]
# Make the call
try:
handler(self.args.path, data)
except nilmdb.client.ClientError as e:
self.die("Error setting/updating metadata: %s", str(e))
self.die("error setting/updating metadata: %s", str(e))
else:
# Get (or unspecified)
keys = self.args.get or None
try:
data = self.client.stream_get_metadata(self.args.path, keys)
except nilmdb.client.ClientError as e:
self.die("Error getting metadata: %s", str(e))
self.die("error getting metadata: %s", str(e))
for key, value in sorted(data.items()):
# Omit nonexistant keys
if value is None:

44
nilmdb/cmdline/remove.py Normal file
View File

@@ -0,0 +1,44 @@
from nilmdb.utils.printf import *
import nilmdb
import nilmdb.client
import sys
def setup(self, sub):
cmd = sub.add_parser("remove", help="Remove data",
description="""
Remove all data from a specified time range within a
stream.
""")
cmd.set_defaults(verify = cmd_remove_verify,
handler = cmd_remove)
group = cmd.add_argument_group("Data selection")
group.add_argument("path",
help="Path of stream, e.g. /foo/bar")
group.add_argument("-s", "--start", required=True,
metavar="TIME", type=self.arg_time,
help="Starting timestamp (free-form, inclusive)")
group.add_argument("-e", "--end", required=True,
metavar="TIME", type=self.arg_time,
help="Ending timestamp (free-form, noninclusive)")
group = cmd.add_argument_group("Output format")
group.add_argument("-c", "--count", action="store_true",
help="Output number of data points removed")
def cmd_remove_verify(self):
if self.args.start is not None and self.args.end is not None:
if self.args.start > self.args.end:
self.parser.error("start is after end")
def cmd_remove(self):
try:
count = self.client.stream_remove(self.args.path,
self.args.start, self.args.end)
except nilmdb.client.ClientError as e:
self.die("error removing data: %s", str(e))
if self.args.count:
printf("%d\n", count)
return 0
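A hypothetical invocation of the new subcommand, matching the arguments defined above (-c prints the number of points removed; the range is half-open):

    nilmtool remove /newton/prep -s "2013-01-20" -e "2013-01-21" -c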

View File

@@ -1,72 +0,0 @@
import Queue
import threading
import sys
# This file provides a class that will convert a function that
# takes a callback into a generator that returns an iterator.
# Based partially on http://stackoverflow.com/questions/9968592/
class IteratorizerThread(threading.Thread):
def __init__(self, queue, function):
"""
function: function to execute, which takes the
callback (provided by this class) as an argument
"""
threading.Thread.__init__(self)
self.function = function
self.queue = queue
self.die = False
def callback(self, data):
if self.die:
raise Exception("should die")
self.queue.put((1, data))
def run(self):
try:
result = self.function(self.callback)
except:
if sys is not None: # can be None during unclean shutdown
self.queue.put((2, sys.exc_info()))
else:
self.queue.put((0, result))
class Iteratorizer(object):
def __init__(self, function):
"""
function: function to execute, which takes the
callback (provided by this class) as an argument
"""
self.function = function
self.queue = Queue.Queue(maxsize = 1)
self.thread = IteratorizerThread(self.queue, self.function)
self.thread.daemon = True
self.thread.start()
def __del__(self):
# If we get garbage collected, try to get rid of the
# thread too by asking it to raise an exception, then
# draining the queue until it's gone.
self.thread.die = True
while self.thread.isAlive():
try:
self.queue.get(True, 0.01)
except: # pragma: no cover
pass
def __iter__(self):
return self
def next(self):
(typ, data) = self.queue.get()
if typ == 0:
# function returned
self.retval = data
raise StopIteration
elif typ == 1:
# data available
return data
else:
# exception
raise data[0], data[1], data[2]
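For contrast with the deleted module above, a minimal sketch of the replacement context-manager form used in httpclient.py; the producer function here is hypothetical:

    import nilmdb.utils

    def producer(callback):
        # stands in for a pycurl perform() that drives a write callback
        for chunk in ["a", "b", "c"]:
            callback(chunk)

    with nilmdb.utils.Iteratorizer(producer) as it:
        for chunk in it:
            pass   # data arrives here as the callback produces it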

15
nilmdb/server/__init__.py Normal file
View File

@@ -0,0 +1,15 @@
"""nilmdb.server"""
# Try to set up pyximport to automatically rebuild Cython modules. If
# this doesn't work, it's OK, as long as the modules were built externally.
# (e.g. python setup.py build_ext --inplace)
try:
import pyximport
pyximport.install()
import layout
except: # pragma: no cover
pass
from .nilmdb import NilmDB
from .server import Server
from .errors import *

462
nilmdb/server/bulkdata.py Normal file
View File

@@ -0,0 +1,462 @@
# Fixed record size bulk data storage
# Need absolute_import so that "import nilmdb" won't pull in
# nilmdb.py, but will pull the parent nilmdb module instead.
from __future__ import absolute_import
from __future__ import division
import nilmdb
from nilmdb.utils.printf import *
import os
import sys
import cPickle as pickle
import struct
import fnmatch
import mmap
import re
# Up to 256 open file descriptors at any given time.
# These variables are global so they can be used in the decorator arguments.
table_cache_size = 16
fd_cache_size = 16
@nilmdb.utils.must_close(wrap_verify = True)
class BulkData(object):
def __init__(self, basepath, **kwargs):
self.basepath = basepath
self.root = os.path.join(self.basepath, "data")
# Tuneables
if "file_size" in kwargs:
self.file_size = kwargs["file_size"]
else:
# Default to approximately 128 MiB per file
self.file_size = 128 * 1024 * 1024
if "files_per_dir" in kwargs:
self.files_per_dir = kwargs["files_per_dir"]
else:
# 32768 files per dir should work even on FAT32
self.files_per_dir = 32768
# Make root path
if not os.path.isdir(self.root):
os.mkdir(self.root)
def close(self):
self.getnode.cache_remove_all()
def _encode_filename(self, path):
# Encode all paths to UTF-8, regardless of sys.getfilesystemencoding(),
# because we want to be able to represent all code points and the user
# will never be directly exposed to filenames. We can then do path
# manipulations on the UTF-8 directly.
if isinstance(path, unicode):
return path.encode('utf-8')
return path
def create(self, unicodepath, layout_name):
"""
unicodepath: path to the data (e.g. u'/newton/prep').
Paths must contain at least two elements, e.g.:
/newton/prep
/newton/raw
/newton/upstairs/prep
/newton/upstairs/raw
layout_name: string for nilmdb.layout.get_named(), e.g. 'float32_8'
"""
path = self._encode_filename(unicodepath)
if path[0] != '/':
raise ValueError("paths must start with /")
[ group, node ] = path.rsplit("/", 1)
if group == '':
raise ValueError("invalid path; path must contain at least one "
"folder")
# Get layout, and build format string for struct module
try:
layout = nilmdb.server.layout.get_named(layout_name)
struct_fmt = '<d' # Little endian, double timestamp
struct_mapping = {
"int8": 'b',
"uint8": 'B',
"int16": 'h',
"uint16": 'H',
"int32": 'i',
"uint32": 'I',
"int64": 'q',
"uint64": 'Q',
"float32": 'f',
"float64": 'd',
}
for n in range(layout.count):
struct_fmt += struct_mapping[layout.datatype]
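# e.g. layout 'float32_8' yields struct_fmt '<dffffffff': one
# little-endian float64 timestamp followed by 8 float32 columns,
# 40 bytes per packed row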
except KeyError:
raise ValueError("no such layout, or bad data types")
# Create the table. Note that we make a distinction here
# between NilmDB paths (always Unix style, split apart
# manually) and OS paths (built up with os.path.join)
# Make directories leading up to this one
elements = path.lstrip('/').split('/')
for i in range(len(elements)):
ospath = os.path.join(self.root, *elements[0:i])
if Table.exists(ospath):
raise ValueError("path is subdir of existing node")
if not os.path.isdir(ospath):
os.mkdir(ospath)
# Make the final dir
ospath = os.path.join(self.root, *elements)
if os.path.isdir(ospath):
raise ValueError("subdirs of this path already exist")
os.mkdir(ospath)
# Write format string to file
Table.create(ospath, struct_fmt, self.file_size, self.files_per_dir)
# Open and cache it
self.getnode(unicodepath)
# Success
return
def destroy(self, unicodepath):
"""Fully remove all data at a particular path. No way to undo
it! The group/path structure is removed, too."""
path = self._encode_filename(unicodepath)
# Get OS path
elements = path.lstrip('/').split('/')
ospath = os.path.join(self.root, *elements)
# Remove Table object from cache
self.getnode.cache_remove(self, unicodepath)
# Remove the contents of the target directory
if not Table.exists(ospath):
raise ValueError("nothing at that path")
for (root, dirs, files) in os.walk(ospath, topdown = False):
for name in files:
os.remove(os.path.join(root, name))
for name in dirs:
os.rmdir(os.path.join(root, name))
# Remove empty parent directories
for i in reversed(range(len(elements))):
ospath = os.path.join(self.root, *elements[0:i+1])
try:
os.rmdir(ospath)
except OSError:
break
# Cache open tables
@nilmdb.utils.lru_cache(size = table_cache_size,
onremove = lambda x: x.close())
def getnode(self, unicodepath):
"""Return a Table object corresponding to the given database
path, which must exist."""
path = self._encode_filename(unicodepath)
elements = path.lstrip('/').split('/')
ospath = os.path.join(self.root, *elements)
return Table(ospath)
@nilmdb.utils.must_close(wrap_verify = True)
class Table(object):
"""Tools to help access a single table (data at a specific OS path)."""
# See design.md for design details
# Class methods, to help keep format details in this class.
@classmethod
def exists(cls, root):
"""Return True if a table appears to exist at this OS path"""
return os.path.isfile(os.path.join(root, "_format"))
@classmethod
def create(cls, root, struct_fmt, file_size, files_per_dir):
"""Initialize a table at the given OS path.
'struct_fmt' is a Struct module format description"""
# Calculate rows per file so that each file is approximately
# file_size bytes.
packer = struct.Struct(struct_fmt)
rows_per_file = max(file_size // packer.size, 1)
format = { "rows_per_file": rows_per_file,
"files_per_dir": files_per_dir,
"struct_fmt": struct_fmt,
"version": 1 }
with open(os.path.join(root, "_format"), "wb") as f:
pickle.dump(format, f, 2)
# Normal methods
def __init__(self, root):
"""'root' is the full OS path to the directory of this table"""
self.root = root
# Load the format and build packer
with open(os.path.join(self.root, "_format"), "rb") as f:
format = pickle.load(f)
if format["version"] != 1: # pragma: no cover (just future proofing)
raise NotImplementedError("version " + str(format["version"]) +
" bulk data store not supported")
self.rows_per_file = format["rows_per_file"]
self.files_per_dir = format["files_per_dir"]
self.packer = struct.Struct(format["struct_fmt"])
self.file_size = self.packer.size * self.rows_per_file
# Find nrows
self.nrows = self._get_nrows()
def close(self):
self.mmap_open.cache_remove_all()
# Internal helpers
def _get_nrows(self):
"""Find nrows by locating the lexicographically last filename
and using its size"""
# Note that this just finds a 'nrows' that is guaranteed to be
# greater than the row number of any piece of data that
# currently exists, not necessarily all data that _ever_
# existed.
regex = re.compile("^[0-9a-f]{4,}$")
# Find the last directory. We sort and loop through all of them,
# starting with the numerically greatest, because the dirs could be
# empty if something was deleted.
subdirs = sorted(filter(regex.search, os.listdir(self.root)),
key = lambda x: int(x, 16), reverse = True)
for subdir in subdirs:
# Now find the last file in that dir
path = os.path.join(self.root, subdir)
files = filter(regex.search, os.listdir(path))
if not files: # pragma: no cover (shouldn't occur)
# Empty dir: try the next one
continue
# Find the numerical max
filename = max(files, key = lambda x: int(x, 16))
offset = os.path.getsize(os.path.join(self.root, subdir, filename))
# Convert to row number
return self._row_from_offset(subdir, filename, offset)
# No files, so no data
return 0
def _offset_from_row(self, row):
"""Return a (subdir, filename, offset, count) tuple:
subdir: subdirectory for the file
filename: the filename that contains the specified row
offset: byte offset of the specified row within the file
count: number of rows (starting at offset) that fit in the file
"""
filenum = row // self.rows_per_file
# It's OK if these format specifiers are too short; the filenames
# will just get longer but will still sort correctly.
dirname = sprintf("%04x", filenum // self.files_per_dir)
filename = sprintf("%04x", filenum % self.files_per_dir)
offset = (row % self.rows_per_file) * self.packer.size
count = self.rows_per_file - (row % self.rows_per_file)
return (dirname, filename, offset, count)
def _row_from_offset(self, subdir, filename, offset):
"""Return the row number that corresponds to the given
'subdir/filename' and byte-offset within that file."""
if (offset % self.packer.size) != 0: # pragma: no cover; shouldn't occur
raise ValueError("file offset is not a multiple of data size")
filenum = int(subdir, 16) * self.files_per_dir + int(filename, 16)
row = (filenum * self.rows_per_file) + (offset // self.packer.size)
return row
# Cache open files
@nilmdb.utils.lru_cache(size = fd_cache_size,
keys = slice(0,3), # exclude newsize
onremove = lambda x: x.close())
def mmap_open(self, subdir, filename, newsize = None):
"""Open and map a given 'subdir/filename' (relative to self.root).
Will be automatically closed when evicted from the cache.
If 'newsize' is provided, the file is truncated to the given
size before the mapping is returned. (Note that the LRU cache
on this function means the truncate will only happen if the
object isn't already cached; mmap.resize should be used too.)"""
try:
os.mkdir(os.path.join(self.root, subdir))
except OSError:
pass
f = open(os.path.join(self.root, subdir, filename), "a+", 0)
if newsize is not None:
# mmap can't map a zero-length file, so this allows the
# caller to set the filesize between file creation and
# mmap.
f.truncate(newsize)
mm = mmap.mmap(f.fileno(), 0)
return mm
def mmap_open_resize(self, subdir, filename, newsize):
"""Open and map a given 'subdir/filename' (relative to self.root).
The file is resized to the given size."""
# Pass new size to mmap_open
mm = self.mmap_open(subdir, filename, newsize)
# In case we got a cached copy, need to call mm.resize too.
mm.resize(newsize)
return mm
def append(self, data):
"""Append the data and flush it to disk.
data is a nested Python list [[row],[row],[...]]"""
remaining = len(data)
dataiter = iter(data)
while remaining:
# See how many rows we can fit into the current file, and open it
(subdir, fname, offset, count) = self._offset_from_row(self.nrows)
if count > remaining:
count = remaining
newsize = offset + count * self.packer.size
mm = self.mmap_open_resize(subdir, fname, newsize)
mm.seek(offset)
# Write the data
for i in xrange(count):
row = dataiter.next()
mm.write(self.packer.pack(*row))
remaining -= count
self.nrows += count
def __getitem__(self, key):
"""Extract data and return it. Supports simple indexing
(table[n]) and range slices (table[n:m]). Returns a nested
Python list [[row],[row],[...]]"""
# Handle simple slices
if isinstance(key, slice):
# Fall back to brute force if the slice isn't simple
if ((key.step is not None and key.step != 1) or
key.start is None or
key.stop is None or
key.start >= key.stop or
key.start < 0 or
key.stop > self.nrows):
return [ self[x] for x in xrange(*key.indices(self.nrows)) ]
ret = []
row = key.start
remaining = key.stop - key.start
while remaining:
(subdir, filename, offset, count) = self._offset_from_row(row)
if count > remaining:
count = remaining
mm = self.mmap_open(subdir, filename)
for i in xrange(count):
ret.append(list(self.packer.unpack_from(mm, offset)))
offset += self.packer.size
remaining -= count
row += count
return ret
# Handle single points
if key < 0 or key >= self.nrows:
raise IndexError("Index out of range")
(subdir, filename, offset, count) = self._offset_from_row(key)
mm = self.mmap_open(subdir, filename)
# unpack_from ignores the mmap object's current seek position
return list(self.packer.unpack_from(mm, offset))
def _remove_rows(self, subdir, filename, start, stop):
"""Helper to mark specific rows as being removed from a
file, and potentially removing or truncating the file itself."""
# Import an existing list of deleted rows for this file
datafile = os.path.join(self.root, subdir, filename)
cachefile = datafile + ".removed"
try:
with open(cachefile, "rb") as f:
ranges = pickle.load(f)
cachefile_present = True
except:
ranges = []
cachefile_present = False
# Append our new range and sort
ranges.append((start, stop))
ranges.sort()
# Merge adjacent ranges into "out"
merged = []
prev = None
for new in ranges:
if prev is None:
# No previous range, so remember this one
prev = new
elif prev[1] == new[0]:
# Previous range connected to this new one; extend prev
prev = (prev[0], new[1])
else:
# Not connected; append previous and start again
merged.append(prev)
prev = new
if prev is not None:
merged.append(prev)
# If the range covered the whole file, we can delete it now.
# Note that the last file in a table may be only partially
# full (smaller than self.rows_per_file). We purposely leave
# those files around rather than deleting them, because the
# remainder will be filled on a subsequent append(), and things
# are generally easier if we don't have to special-case that.
if (len(merged) == 1 and
merged[0][0] == 0 and merged[0][1] == self.rows_per_file):
# Close potentially open file in mmap_open LRU cache
self.mmap_open.cache_remove(self, subdir, filename)
# Delete files
os.remove(datafile)
if cachefile_present:
os.remove(cachefile)
# Try deleting subdir, too
try:
os.rmdir(os.path.join(self.root, subdir))
except:
pass
else:
# Update cache. Try to do it atomically.
nilmdb.utils.atomic.replace_file(cachefile,
pickle.dumps(merged, 2))
def remove(self, start, stop):
"""Remove specified rows [start, stop) from this table.
If a file is left empty, it is fully removed. Otherwise, a
parallel data file is used to remember which rows have been
removed, and the file is otherwise untouched."""
if start < 0 or start > stop or stop > self.nrows:
raise IndexError("Index out of range")
row = start
remaining = stop - start
while remaining:
# Loop through each file that we need to touch
(subdir, filename, offset, count) = self._offset_from_row(row)
if count > remaining:
count = remaining
row_offset = offset // self.packer.size
# Mark the rows as being removed
self._remove_rows(subdir, filename, row_offset, row_offset + count)
remaining -= count
row += count
class TimestampOnlyTable(object):
"""Helper that lets us pass a Tables object into bisect, by
returning only the timestamp when a particular row is requested."""
def __init__(self, table):
self.table = table
def __getitem__(self, index):
return self.table[index][0]
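A worked example of the _offset_from_row() arithmetic above, assuming the default tunables (128 MiB files, 32768 files per dir) and the 40-byte float32_8 rows from create():

    rows_per_file = (128 * 1024 * 1024) // 40      # 3355443
    files_per_dir = 32768
    row = 10000000
    filenum = row // rows_per_file                 # 2
    subdir   = "%04x" % (filenum // files_per_dir) # '0000'
    filename = "%04x" % (filenum %  files_per_dir) # '0002'
    offset = (row % rows_per_file) * 40            # 131564560 bytes
    count  = rows_per_file - (row % rows_per_file) # 66329 rows fit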

12
nilmdb/server/errors.py Normal file
View File

@@ -0,0 +1,12 @@
"""Exceptions"""
class NilmDBError(Exception):
"""Base exception for NilmDB errors"""
def __init__(self, message = "Unspecified error"):
Exception.__init__(self, message)
class StreamError(NilmDBError):
pass
class OverlapError(NilmDBError):
pass

View File

@@ -1,58 +1,82 @@
"""Interval and IntervalSet
"""Interval, IntervalSet
Represents an interval of time, and a set of such intervals.
Intervals are closed, ie. they include timestamps [start, end]
Intervals are half-open, ie. they include data points with timestamps
[start, end)
"""
# First implementation kept a sorted list of intervals and used
# bisect() to optimize some operations, but this was too slow.
# This version is based on the quicksect implementation from python-bx,
# modified slightly to handle floating point intervals.
# Second version was based on the quicksect implementation from
# python-bx, modified slightly to handle floating point intervals.
# This didn't support deletion.
import pyximport
pyximport.install()
import bxintersect
# Third version is more similar to the first version, using a rb-tree
# instead of a simple sorted list to maintain O(log n) operations.
import bisect
# Fourth version is an optimized rb-tree that stores interval starts
# and ends directly in the tree, like bxinterval did.
cimport rbtree
cdef extern from "stdint.h":
ctypedef unsigned long long uint64_t
class IntervalError(Exception):
"""Error due to interval overlap, etc"""
pass
class Interval(bxintersect.Interval):
cdef class Interval:
"""Represents an interval of time."""
def __init__(self, start, end):
cdef public double start, end
def __init__(self, double start, double end):
"""
'start' and 'end' are arbitrary floats that represent time
"""
if start > end:
if start >= end:
# Explicitly disallow zero-width intervals (since they're half-open)
raise IntervalError("start %s must precede end %s" % (start, end))
bxintersect.Interval.__init__(self, start, end)
self.start = float(start)
self.end = float(end)
def __repr__(self):
s = repr(self.start) + ", " + repr(self.end)
return self.__class__.__name__ + "(" + s + ")"
def __str__(self):
return "[" + str(self.start) + " -> " + str(self.end) + "]"
return "[" + repr(self.start) + " -> " + repr(self.end) + ")"
def intersects(self, other):
def __cmp__(self, Interval other):
"""Compare two intervals. If non-equal, order by start then end"""
if not isinstance(other, Interval):
raise TypeError("bad type")
if self.start == other.start:
if self.end < other.end:
return -1
if self.end > other.end:
return 1
return 0
if self.start < other.start:
return -1
return 1
cpdef intersects(self, Interval other):
"""Return True if two Interval objects intersect"""
if (self.end <= other.start or self.start >= other.end):
return False
return True
def subset(self, start, end):
cpdef subset(self, double start, double end):
"""Return a new Interval that is a subset of this one"""
# A subclass that tracks additional data might override this.
if start < self.start or end > self.end:
raise IntervalError("not a subset")
return Interval(start, end)
class DBInterval(Interval):
cdef class DBInterval(Interval):
"""
Like Interval, but also tracks corresponding start/end times and
positions within the database. These are not currently modified
@@ -66,6 +90,10 @@ class DBInterval(Interval):
end = 150
db_end = 200, db_endpos = 20000
"""
cpdef public double db_start, db_end
cpdef public uint64_t db_startpos, db_endpos
def __init__(self, start, end,
db_start, db_end,
db_startpos, db_endpos):
@@ -90,7 +118,7 @@ class DBInterval(Interval):
s += ", " + repr(self.db_startpos) + ", " + repr(self.db_endpos)
return self.__class__.__name__ + "(" + s + ")"
def subset(self, start, end):
cpdef subset(self, double start, double end):
"""
Return a new DBInterval that is a subset of this one
"""
@@ -100,21 +128,25 @@ class DBInterval(Interval):
self.db_start, self.db_end,
self.db_startpos, self.db_endpos)
class IntervalSet(object):
cdef class IntervalSet:
"""
A non-intersecting set of intervals.
"""
cdef public rbtree.RBTree tree
def __init__(self, source=None):
"""
'source' is an Interval or IntervalSet to add.
"""
self.tree = bxintersect.IntervalTree()
self.tree = rbtree.RBTree()
if source is not None:
self += source
def __iter__(self):
return self.tree.traverse()
for node in self.tree:
if node.obj:
yield node.obj
def __len__(self):
return sum(1 for x in self)
@@ -127,7 +159,7 @@ class IntervalSet(object):
descs = [ str(x) for x in self ]
return "[" + ", ".join(descs) + "]"
def __eq__(self, other):
def __match__(self, other):
# This isn't particularly efficient, but it shouldn't get used in the
# general case.
"""Test equality of two IntervalSets.
@@ -146,8 +178,8 @@ class IntervalSet(object):
else:
return False
this = [ x for x in self ]
that = [ x for x in other ]
this = list(self)
that = list(other)
try:
while True:
@@ -178,10 +210,20 @@ class IntervalSet(object):
except IndexError:
return False
def __ne__(self, other):
return not self.__eq__(other)
# Use __richcmp__ instead of __eq__, __ne__ for Cython.
def __richcmp__(self, other, int op):
if op == 2: # ==
return self.__match__(other)
elif op == 3: # !=
return not self.__match__(other)
return False
#def __eq__(self, other):
# return self.__match__(other)
#
#def __ne__(self, other):
# return not self.__match__(other)
def __iadd__(self, other):
def __iadd__(self, object other not None):
"""Inplace add -- modifies self
This throws an exception if the regions being added intersect."""
@@ -189,19 +231,36 @@ class IntervalSet(object):
if self.intersects(other):
raise IntervalError("Tried to add overlapping interval "
"to this set")
self.tree.insert_interval(other)
self.tree.insert(rbtree.RBNode(other.start, other.end, other))
else:
for x in other:
self.__iadd__(x)
return self
def __add__(self, other):
def iadd_nocheck(self, Interval other not None):
"""Inplace add -- modifies self.
'Optimized' version that doesn't check for intersection and
only inserts the new interval into the tree."""
self.tree.insert(rbtree.RBNode(other.start, other.end, other))
def __isub__(self, Interval other not None):
"""Inplace subtract -- modifies self
Removes an interval from the set. Must exist exactly
as provided -- cannot remove a subset of an existing interval."""
i = self.tree.find(other.start, other.end)
if i is None:
raise IntervalError("interval " + str(other) + " not in tree")
self.tree.delete(i)
return self
def __add__(self, other not None):
"""Add -- returns a new object"""
new = IntervalSet(self)
new += IntervalSet(other)
return new
def __and__(self, other):
def __and__(self, other not None):
"""
Compute a new IntervalSet from the intersection of two others
@@ -211,15 +270,16 @@ class IntervalSet(object):
out = IntervalSet()
if not isinstance(other, IntervalSet):
other = [ other ]
for x in other:
for i in self.intersection(x):
out.tree.insert_interval(i)
for i in self.intersection(other):
out.tree.insert(rbtree.RBNode(i.start, i.end, i))
else:
for x in other:
for i in self.intersection(x):
out.tree.insert(rbtree.RBNode(i.start, i.end, i))
return out
def intersection(self, interval):
def intersection(self, Interval interval not None, orig = False):
"""
Compute a sequence of intervals that correspond to the
intersection between `self` and the provided interval.
@@ -228,14 +288,42 @@ class IntervalSet(object):
Output intervals are built as subsets of the intervals in the
first argument (self).
"""
for i in self.tree.find(interval.start, interval.end):
if i.start > interval.start and i.end < interval.end:
yield i
else:
yield i.subset(max(i.start, interval.start),
min(i.end, interval.end))
def intersects(self, other):
If orig = True, also return the original interval that was
(potentially) subsetted to make the one that is being
returned.
"""
if not isinstance(interval, Interval):
raise TypeError("bad type")
for n in self.tree.intersect(interval.start, interval.end):
i = n.obj
if i:
if i.start >= interval.start and i.end <= interval.end:
if orig:
yield (i, i)
else:
yield i
else:
subset = i.subset(max(i.start, interval.start),
min(i.end, interval.end))
if orig:
yield (subset, i)
else:
yield subset
cpdef intersects(self, Interval other):
"""Return True if this IntervalSet intersects another interval"""
return len(self.tree.find(other.start, other.end)) > 0
for n in self.tree.intersect(other.start, other.end):
if n.obj.intersects(other):
return True
return False
def find_end(self, double t):
"""
Return an Interval from this tree that ends at time t, or
None if it doesn't exist.
"""
n = self.tree.find_left_end(t)
if n and n.obj.end == t:
return n.obj
return None
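A small sketch of the half-open semantics implemented above (the values are arbitrary):

    iset = IntervalSet()
    iset += Interval(100.0, 200.0)
    iset += Interval(200.0, 300.0)   # adjacent, not overlapping
    iset.intersects(Interval(150.0, 250.0))          # True
    list(iset.intersection(Interval(150.0, 250.0)))
    # -> subsets covering [150.0 -> 200.0) and [200.0 -> 250.0)
    iset.find_end(200.0)   # the interval ending exactly at t=200.0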

View File

@@ -0,0 +1 @@
rbtree.pxd

View File

@@ -1,6 +1,5 @@
# cython: profile=False
import tables
import time
import sys
import inspect
@@ -122,15 +121,6 @@ class Layout:
s += " %d" % d[i+1]
return s + "\n"
# PyTables description
def description(self):
"""Return the PyTables description of this layout"""
desc = {}
desc['timestamp'] = tables.Col.from_type('float64', pos=0)
for n in range(self.count):
desc['c' + str(n+1)] = tables.Col.from_type(self.datatype, pos=n+1)
return tables.Description(desc)
# Get a layout by name
def get_named(typestring):
try:
@@ -180,7 +170,7 @@ class Parser(object):
if line[0] == '#':
continue
(ts, row) = self.layout.parse(line)
if ts < last_ts:
if ts <= last_ts:
raise ValueError("timestamp is not "
"monotonically increasing")
last_ts = ts

View File

@@ -4,27 +4,26 @@
Object that represents a NILM database file.
Manages both the SQL database and the PyTables storage backend.
Manages both the SQL database and the table storage backend.
"""
# Need absolute_import so that "import nilmdb" won't pull in nilmdb.py,
# but will pull the nilmdb module instead.
# Need absolute_import so that "import nilmdb" won't pull in
# nilmdb.py, but will pull the parent nilmdb module instead.
from __future__ import absolute_import
import nilmdb
from nilmdb.printf import *
from nilmdb.utils.printf import *
from nilmdb.server.interval import (Interval, DBInterval,
IntervalSet, IntervalError)
from nilmdb.server import bulkdata
from nilmdb.server.errors import *
import sqlite3
import tables
import time
import sys
import os
import errno
import bisect
import pyximport
pyximport.install()
from nilmdb.interval import Interval, DBInterval, IntervalSet, IntervalError
# Note about performance and transactions:
#
# Committing a transaction in the default sync mode (PRAGMA synchronous=FULL)
@@ -76,30 +75,14 @@ _sql_schema_updates = {
""",
}
class NilmDBError(Exception):
"""Base exception for NilmDB errors"""
def __init__(self, message = "Unspecified error"):
Exception.__init__(self, self.__class__.__name__ + ": " + message)
class StreamError(NilmDBError):
pass
class OverlapError(NilmDBError):
pass
# Helper that lets us pass a Pytables table into bisect
class BisectableTable(object):
def __init__(self, table):
self.table = table
def __getitem__(self, index):
return self.table[index][0]
@nilmdb.utils.must_close()
class NilmDB(object):
verbose = 0
def __init__(self, basepath, sync=True, max_results=None):
def __init__(self, basepath, sync=True, max_results=None,
bulkdata_args={}):
# set up path
self.basepath = os.path.abspath(basepath.rstrip('/'))
self.basepath = os.path.abspath(basepath)
# Create the database path if it doesn't exist
try:
@@ -108,16 +91,16 @@ class NilmDB(object):
if e.errno != errno.EEXIST:
raise IOError("can't create tree " + self.basepath)
# Our HD5 file goes inside it
h5filename = os.path.abspath(self.basepath + "/data.h5")
self.h5file = tables.openFile(h5filename, "a", "NILM Database")
# Our data goes inside it
self.data = bulkdata.BulkData(self.basepath, **bulkdata_args)
# SQLite database too
sqlfilename = os.path.abspath(self.basepath + "/data.sql")
sqlfilename = os.path.join(self.basepath, "data.sql")
# We use check_same_thread = False, assuming that the rest
# of the code (e.g. Server) will be smart and not access this
# database from multiple threads simultaneously. That requirement
# may be relaxed later.
# database from multiple threads simultaneously. Otherwise
# false positives will occur when the database is only opened
# in one thread, and only accessed in another.
self.con = sqlite3.connect(sqlfilename, check_same_thread = False)
self._sql_schema_update()
@@ -134,17 +117,6 @@ class NilmDB(object):
else:
self.max_results = 16384
self.opened = True
# Cached intervals
self._cached_iset = {}
def __del__(self):
if "opened" in self.__dict__: # pragma: no cover
fprintf(sys.stderr,
"error: NilmDB.close() wasn't called, path %s",
self.basepath)
def get_basepath(self):
return self.basepath
@@ -152,8 +124,7 @@ class NilmDB(object):
if self.con:
self.con.commit()
self.con.close()
self.h5file.close()
del self.opened
self.data.close()
def _sql_schema_update(self):
cur = self.con.cursor()
@@ -170,60 +141,129 @@ class NilmDB(object):
with self.con:
cur.execute("PRAGMA user_version = {v:d}".format(v=version))
@nilmdb.utils.lru_cache(size = 16)
def _get_intervals(self, stream_id):
"""
Return a mutable IntervalSet corresponding to the given stream ID.
"""
# Load from database if not cached
if stream_id not in self._cached_iset:
iset = IntervalSet()
result = self.con.execute("SELECT start_time, end_time, "
"start_pos, end_pos "
"FROM ranges "
"WHERE stream_id=?", (stream_id,))
try:
for (start_time, end_time, start_pos, end_pos) in result:
iset += DBInterval(start_time, end_time,
start_time, end_time,
start_pos, end_pos)
except IntervalError as e: # pragma: no cover
raise NilmDBError("unexpected overlap in ranges table!")
self._cached_iset[stream_id] = iset
# Return cached value
return self._cached_iset[stream_id]
iset = IntervalSet()
result = self.con.execute("SELECT start_time, end_time, "
"start_pos, end_pos "
"FROM ranges "
"WHERE stream_id=?", (stream_id,))
try:
for (start_time, end_time, start_pos, end_pos) in result:
iset += DBInterval(start_time, end_time,
start_time, end_time,
start_pos, end_pos)
except IntervalError as e: # pragma: no cover
raise NilmDBError("unexpected overlap in ranges table!")
# TODO: Split add_interval into two pieces, one to add
# and one to flush to disk?
# Need to think about this. Basic problem is that we can't
# mess with intervals once they're in the IntervalSet,
# without mucking with bxinterval internals.
return iset
# Maybe add a separate optimization step?
# Join intervals that have a fairly small gap between them
def _sql_interval_insert(self, id, start, end, start_pos, end_pos):
"""Helper that adds interval to the SQL database only"""
self.con.execute("INSERT INTO ranges "
"(stream_id,start_time,end_time,start_pos,end_pos) "
"VALUES (?,?,?,?,?)",
(id, start, end, start_pos, end_pos))
def _sql_interval_delete(self, id, start, end, start_pos, end_pos):
"""Helper that removes interval from the SQL database only"""
self.con.execute("DELETE FROM ranges WHERE "
"stream_id=? AND start_time=? AND "
"end_time=? AND start_pos=? AND end_pos=?",
(id, start, end, start_pos, end_pos))
def _add_interval(self, stream_id, interval, start_pos, end_pos):
"""
Add interval to the internal interval cache, and to the database.
Note: arguments must be ints (not numpy.int64, etc)
"""
# Ensure this stream's intervals are cached, and add the new
# interval to that cache.
# Load this stream's intervals
iset = self._get_intervals(stream_id)
try:
iset += DBInterval(interval.start, interval.end,
interval.start, interval.end,
start_pos, end_pos)
except IntervalError as e: # pragma: no cover
# Check for overlap
if iset.intersects(interval): # pragma: no cover (gets caught earlier)
raise NilmDBError("new interval overlaps existing data")
# Check for adjacency. If there's a stream in the database
# that ends exactly when this one starts, and the database
# rows match up, we can make one interval that covers the
# time range [adjacent.start -> interval.end)
# and database rows [ adjacent.start_pos -> end_pos ].
# Only do this if the resulting interval isn't too large.
max_merged_rows = 8000 * 60 * 60 * 1.05 # 1.05 hours at 8 KHz
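# (8000 * 60 * 60 * 1.05 = 30,240,000 rows, so adjacent inserts keep
# coalescing until an interval spans roughly an hour of continuous
# 8 kHz data)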
adjacent = iset.find_end(interval.start)
if (adjacent is not None and
start_pos == adjacent.db_endpos and
(end_pos - adjacent.db_startpos) < max_merged_rows):
# First delete the old one, both from our iset and the
# database
iset -= adjacent
self._sql_interval_delete(stream_id,
adjacent.db_start, adjacent.db_end,
adjacent.db_startpos, adjacent.db_endpos)
# Now update our interval so the fallthrough add is
# correct.
interval.start = adjacent.start
start_pos = adjacent.db_startpos
# Add the new interval to the iset
iset.iadd_nocheck(DBInterval(interval.start, interval.end,
interval.start, interval.end,
start_pos, end_pos))
# Insert into the database
self.con.execute("INSERT INTO ranges "
"(stream_id,start_time,end_time,start_pos,end_pos) "
"VALUES (?,?,?,?,?)",
(stream_id, interval.start, interval.end,
int(start_pos), int(end_pos)))
self._sql_interval_insert(stream_id, interval.start, interval.end,
int(start_pos), int(end_pos))
self.con.commit()
def _remove_interval(self, stream_id, original, remove):
"""
Remove an interval from the internal cache and the database.
stream_id: id of stream
original: original DBInterval; must be already present in DB
to_remove: DBInterval to remove; must be subset of 'original'
"""
# Just return if we have nothing to remove
if remove.start == remove.end: # pragma: no cover
return
# Load this stream's intervals
iset = self._get_intervals(stream_id)
# Remove existing interval from the cached set and the database
iset -= original
self._sql_interval_delete(stream_id,
original.db_start, original.db_end,
original.db_startpos, original.db_endpos)
# Add back the intervals that would be left over if the
# requested interval is removed. There may be two of them, if
# the removed piece was in the middle.
def add(iset, start, end, start_pos, end_pos):
iset += DBInterval(start, end, start, end, start_pos, end_pos)
self._sql_interval_insert(stream_id, start, end, start_pos, end_pos)
if original.start != remove.start:
# Interval before the removed region
add(iset, original.start, remove.start,
original.db_startpos, remove.db_startpos)
if original.end != remove.end:
# Interval after the removed region
add(iset, remove.end, original.end,
remove.db_endpos, original.db_endpos)
# Commit SQL changes
self.con.commit()
return
def stream_list(self, path = None, layout = None):
"""Return list of [path, layout] lists of all streams
in the database.
@@ -285,38 +325,11 @@ class NilmDB(object):
layout_name: string for nilmdb.layout.get_named(), e.g. 'float32_8'
"""
if path[0] != '/':
raise ValueError("paths must start with /")
[ group, node ] = path.rsplit("/", 1)
if group == '':
raise ValueError("invalid path")
# Create the bulk storage. Raises ValueError on error, which we
# pass along.
self.data.create(path, layout_name)
# Make the group structure, one element at a time
group_path = group.lstrip('/').split("/")
for i in range(len(group_path)):
parent = "/" + "/".join(group_path[0:i])
child = group_path[i]
try:
self.h5file.createGroup(parent, child)
except tables.NodeError:
pass
# Get description
try:
desc = nilmdb.layout.get_named(layout_name).description()
except KeyError:
raise ValueError("no such layout")
# Estimated table size (for PyTables optimization purposes): assume
# 3 months worth of data at 8 KHz. It's OK if this is wrong.
exp_rows = 8000 * 60*60*24*30*3
# Create the table
table = self.h5file.createTable(group, node,
description = desc,
expectedrows = exp_rows)
# Insert into SQL database once the PyTables is happy
# Insert into SQL database once the bulk storage is happy
with self.con as con:
con.execute("INSERT INTO streams (path, layout) VALUES (?,?)",
(path, layout_name))
@@ -337,8 +350,7 @@ class NilmDB(object):
"""
stream_id = self._stream_id(path)
with self.con as con:
con.execute("DELETE FROM metadata "
"WHERE stream_id=?", (stream_id,))
con.execute("DELETE FROM metadata WHERE stream_id=?", (stream_id,))
for key in data:
if data[key] != '':
con.execute("INSERT INTO metadata VALUES (?, ?, ?)",
@@ -361,49 +373,52 @@ class NilmDB(object):
data.update(newdata)
self.stream_set_metadata(path, data)
def stream_insert(self, path, parser, old_timestamp = None):
def stream_destroy(self, path):
"""Fully remove a table and all of its data from the database.
No way to undo it! Metadata is removed."""
stream_id = self._stream_id(path)
# Delete the cached interval data (if it was cached)
self._get_intervals.cache_remove(self, stream_id)
# Delete the data
self.data.destroy(path)
# Delete metadata, stream, intervals
with self.con as con:
con.execute("DELETE FROM metadata WHERE stream_id=?", (stream_id,))
con.execute("DELETE FROM ranges WHERE stream_id=?", (stream_id,))
con.execute("DELETE FROM streams WHERE id=?", (stream_id,))
def stream_insert(self, path, start, end, data):
"""Insert new data into the database.
path: Path at which to add the data
parser: nilmdb.layout.Parser instance full of data to insert
start: Starting timestamp
end: Ending timestamp
data: Rows of data, to be passed to PyTable's table.append
method. E.g. nilmdb.layout.Parser.data
"""
if (not parser.min_timestamp or not parser.max_timestamp or
not len(parser.data)):
raise StreamError("no data provided")
# If we were provided with an old timestamp, the expectation
# is that the client has a contiguous block of time it is sending,
# but it's doing it over multiple calls to stream_insert.
# old_timestamp is the max_timestamp of the previous insert.
# To make things continuous, use that as our starting timestamp
# instead of what the parser found.
if old_timestamp:
min_timestamp = old_timestamp
else:
min_timestamp = parser.min_timestamp
# First check for basic overlap using timestamp info given.
stream_id = self._stream_id(path)
iset = self._get_intervals(stream_id)
interval = Interval(min_timestamp, parser.max_timestamp)
interval = Interval(start, end)
if iset.intersects(interval):
raise OverlapError("new data overlaps existing data: "
raise OverlapError("new data overlaps existing data at range: "
+ str(iset & interval))
# Insert the data into pytables
table = self.h5file.getNode(path)
# Insert the data
table = self.data.getnode(path)
row_start = table.nrows
table.append(parser.data)
table.append(data)
row_end = table.nrows
table.flush()
# Insert the record into the sql database.
# Casts are to convert from numpy.int64.
self._add_interval(stream_id, interval, int(row_start), int(row_end))
self._add_interval(stream_id, interval, row_start, row_end)
# And that's all
return "ok"
def _find_start(self, table, interval):
def _find_start(self, table, dbinterval):
"""
Given a DBInterval, find the row in the database that
corresponds to the start time. Return the first database
@@ -411,14 +426,14 @@ class NilmDB(object):
equal to 'start'.
"""
# Optimization for the common case where an interval wasn't truncated
if interval.start == interval.db_start:
return interval.db_startpos
return bisect.bisect_left(BisectableTable(table),
interval.start,
interval.db_startpos,
interval.db_endpos)
if dbinterval.start == dbinterval.db_start:
return dbinterval.db_startpos
return bisect.bisect_left(bulkdata.TimestampOnlyTable(table),
dbinterval.start,
dbinterval.db_startpos,
dbinterval.db_endpos)
def _find_end(self, table, interval):
def _find_end(self, table, dbinterval):
"""
Given a DBInterval, find the row in the database that follows
the end time. Return the first database position after the
@@ -426,16 +441,16 @@ class NilmDB(object):
to 'end'.
"""
# Optimization for the common case where an interval wasn't truncated
if interval.end == interval.db_end:
return interval.db_endpos
if dbinterval.end == dbinterval.db_end:
return dbinterval.db_endpos
# Note that we still use bisect_left here, because we don't
# want to include the given timestamp in the results. This is
# so that queries like 1:00 -> 2:00 and 2:00 -> 3:00 return
# non-overlapping data.
return bisect.bisect_left(BisectableTable(table),
interval.end,
interval.db_startpos,
interval.db_endpos)
return bisect.bisect_left(bulkdata.TimestampOnlyTable(table),
dbinterval.end,
dbinterval.db_startpos,
dbinterval.db_endpos)
def stream_extract(self, path, start = None, end = None, count = False):
"""
@@ -456,8 +471,8 @@ class NilmDB(object):
than actually fetching the data. It is not limited by
max_results.
"""
table = self.h5file.getNode(path)
stream_id = self._stream_id(path)
table = self.data.getnode(path)
intervals = self._get_intervals(stream_id)
requested = Interval(start or 0, end or 1e12)
result = []
@@ -494,3 +509,45 @@ class NilmDB(object):
if count:
return matched
return (result, restart)
def stream_remove(self, path, start = None, end = None):
"""
Remove data from the specified time interval within a stream.
Removes all data in the interval [start, end), and intervals
are truncated or split appropriately. Returns the number of
data points removed.
"""
stream_id = self._stream_id(path)
table = self.data.getnode(path)
intervals = self._get_intervals(stream_id)
to_remove = Interval(start or 0, end or 1e12)
removed = 0
if start == end:
return 0
# Can't remove intervals from within the iterator, so we need to
# remember what's currently in the intersection now.
all_candidates = list(intervals.intersection(to_remove, orig = True))
for (dbint, orig) in all_candidates:
# Find row start and end
row_start = self._find_start(table, dbint)
row_end = self._find_end(table, dbint)
# Adjust the DBInterval to match the newly found ends
dbint.db_start = dbint.start
dbint.db_end = dbint.end
dbint.db_startpos = row_start
dbint.db_endpos = row_end
# Remove interval from the database
self._remove_interval(stream_id, orig, dbint)
# Remove data from the underlying table storage
table.remove(row_start, row_end)
# Count how many were removed
removed += row_end - row_start
return removed

23 nilmdb/server/rbtree.pxd Normal file

@@ -0,0 +1,23 @@
cdef class RBNode:
cdef public object obj
cdef public double start, end
cdef public int red
cdef public RBNode left, right, parent
cdef class RBTree:
cdef public RBNode nil, root
cpdef getroot(RBTree self)
cdef void __rotate_left(RBTree self, RBNode x)
cdef void __rotate_right(RBTree self, RBNode y)
cdef RBNode __successor(RBTree self, RBNode x)
cpdef RBNode successor(RBTree self, RBNode x)
cdef RBNode __predecessor(RBTree self, RBNode x)
cpdef RBNode predecessor(RBTree self, RBNode x)
cpdef insert(RBTree self, RBNode z)
cdef void __insert_fixup(RBTree self, RBNode x)
cpdef delete(RBTree self, RBNode z)
cdef inline void __delete_fixup(RBTree self, RBNode x)
cpdef RBNode find(RBTree self, double start, double end)
cpdef RBNode find_left_end(RBTree self, double t)
cpdef RBNode find_right_start(RBTree self, double t)

377 nilmdb/server/rbtree.pyx Normal file

@@ -0,0 +1,377 @@
# cython: profile=False
# cython: cdivision=True
"""
Jim Paris <jim@jtan.com>
Red-black tree, where keys are stored as start/end timestamps.
This is a basic interval tree that holds half-open intervals:
[start, end)
Intervals must not overlap. Supporting overlap would involve making this
into an augmented interval tree as described in CLRS 14.3.
Code that assumes non-overlapping intervals is marked with the
string 'non-overlapping'.
"""
import sys
cimport rbtree
cdef class RBNode:
"""One node of the Red/Black tree, containing a key (start, end)
and value (obj)"""
def __init__(self, double start, double end, object obj = None):
self.obj = obj
self.start = start
self.end = end
self.red = False
self.left = None
self.right = None
def __str__(self):
if self.red:
color = "R"
else:
color = "B"
if self.start == sys.float_info.min:
return "[node nil]"
return ("[node ("
+ str(self.obj) + ") "
+ str(self.start) + " -> " + str(self.end) + " "
+ color + "]")
cdef class RBTree:
"""Red/Black tree"""
# Init
def __init__(self):
self.nil = RBNode(start = sys.float_info.min,
end = sys.float_info.min)
self.nil.left = self.nil
self.nil.right = self.nil
self.nil.parent = self.nil
self.root = RBNode(start = sys.float_info.max,
end = sys.float_info.max)
self.root.left = self.nil
self.root.right = self.nil
self.root.parent = self.nil
# We have a dummy root node to simplify operations, so from an
# external point of view, its left child is the real root.
cpdef getroot(self):
return self.root.left
# Rotations and basic operations
cdef void __rotate_left(self, RBNode x):
"""Rotate left:
# x y
# / \ --> / \
# z y x w
# / \ / \
# v w z v
"""
cdef RBNode y = x.right
x.right = y.left
if y.left is not self.nil:
y.left.parent = x
y.parent = x.parent
if x is x.parent.left:
x.parent.left = y
else:
x.parent.right = y
y.left = x
x.parent = y
cdef void __rotate_right(self, RBNode y):
"""Rotate right:
# y x
# / \ --> / \
# x w z y
# / \ / \
# z v v w
"""
cdef RBNode x = y.left
y.left = x.right
if x.right is not self.nil:
x.right.parent = y
x.parent = y.parent
if y is y.parent.left:
y.parent.left = x
else:
y.parent.right = x
x.right = y
y.parent = x
cdef RBNode __successor(self, RBNode x):
"""Returns the successor of RBNode x"""
cdef RBNode y = x.right
if y is not self.nil:
while y.left is not self.nil:
y = y.left
else:
y = x.parent
while x is y.right:
x = y
y = y.parent
if y is self.root:
return self.nil
return y
cpdef RBNode successor(self, RBNode x):
"""Returns the successor of RBNode x, or None"""
cdef RBNode y = self.__successor(x)
return y if y is not self.nil else None
cdef RBNode __predecessor(self, RBNode x):
"""Returns the predecessor of RBNode x"""
cdef RBNode y = x.left
if y is not self.nil:
while y.right is not self.nil:
y = y.right
else:
y = x.parent
while x is y.left:
if y is self.root:
y = self.nil
break
x = y
y = y.parent
return y
cpdef RBNode predecessor(self, RBNode x):
"""Returns the predecessor of RBNode x, or None"""
cdef RBNode y = self.__predecessor(x)
return y if y is not self.nil else None
# Insertion
cpdef insert(self, RBNode z):
"""Insert RBNode z into RBTree and rebalance as necessary"""
z.left = self.nil
z.right = self.nil
cdef RBNode y = self.root
cdef RBNode x = self.root.left
while x is not self.nil:
y = x
if (x.start > z.start or (x.start == z.start and x.end > z.end)):
x = x.left
else:
x = x.right
z.parent = y
if (y is self.root or
(y.start > z.start or (y.start == z.start and y.end > z.end))):
y.left = z
else:
y.right = z
# relabel/rebalance
self.__insert_fixup(z)
cdef void __insert_fixup(self, RBNode x):
"""Rebalance/fix RBTree after a simple insertion of RBNode x"""
x.red = True
while x.parent.red:
if x.parent is x.parent.parent.left:
y = x.parent.parent.right
if y.red:
x.parent.red = False
y.red = False
x.parent.parent.red = True
x = x.parent.parent
else:
if x is x.parent.right:
x = x.parent
self.__rotate_left(x)
x.parent.red = False
x.parent.parent.red = True
self.__rotate_right(x.parent.parent)
else: # same as above, left/right switched
y = x.parent.parent.left
if y.red:
x.parent.red = False
y.red = False
x.parent.parent.red = True
x = x.parent.parent
else:
if x is x.parent.left:
x = x.parent
self.__rotate_right(x)
x.parent.red = False
x.parent.parent.red = True
self.__rotate_left(x.parent.parent)
self.root.left.red = False
# Deletion
cpdef delete(self, RBNode z):
if z.left is None or z.right is None:
raise AttributeError("you can only delete a node object "
+ "from the tree; use find() to get one")
cdef RBNode x, y
if z.left is self.nil or z.right is self.nil:
y = z
else:
y = self.__successor(z)
if y.left is self.nil:
x = y.right
else:
x = y.left
x.parent = y.parent
if x.parent is self.root:
self.root.left = x
else:
if y is y.parent.left:
y.parent.left = x
else:
y.parent.right = x
if y is not z:
# y is the node to splice out, x is its child
y.left = z.left
y.right = z.right
y.parent = z.parent
z.left.parent = y
z.right.parent = y
if z is z.parent.left:
z.parent.left = y
else:
z.parent.right = y
if not y.red:
y.red = z.red
self.__delete_fixup(x)
else:
y.red = z.red
else:
if not y.red:
self.__delete_fixup(x)
cdef void __delete_fixup(self, RBNode x):
"""Rebalance/fix RBTree after a deletion. RBNode x is the
child of the spliced out node."""
cdef RBNode rootLeft = self.root.left
while not x.red and x is not rootLeft:
if x is x.parent.left:
w = x.parent.right
if w.red:
w.red = False
x.parent.red = True
self.__rotate_left(x.parent)
w = x.parent.right
if not w.right.red and not w.left.red:
w.red = True
x = x.parent
else:
if not w.right.red:
w.left.red = False
w.red = True
self.__rotate_right(w)
w = x.parent.right
w.red = x.parent.red
x.parent.red = False
w.right.red = False
self.__rotate_left(x.parent)
x = rootLeft # exit loop
else: # same as above, left/right switched
w = x.parent.left
if w.red:
w.red = False
x.parent.red = True
self.__rotate_right(x.parent)
w = x.parent.left
if not w.left.red and not w.right.red:
w.red = True
x = x.parent
else:
if not w.left.red:
w.right.red = False
w.red = True
self.__rotate_left(w)
w = x.parent.left
w.red = x.parent.red
x.parent.red = False
w.left.red = False
self.__rotate_right(x.parent)
x = rootLeft # exit loop
x.red = False
# Walking, searching
def __iter__(self):
return self.inorder()
def inorder(self, RBNode x = None):
"""Generator that performs an inorder walk for the tree
rooted at RBNode x"""
if x is None:
x = self.getroot()
while x.left is not self.nil:
x = x.left
while x is not self.nil:
yield x
x = self.__successor(x)
cpdef RBNode find(self, double start, double end):
"""Return the node with exactly the given start and end."""
cdef RBNode x = self.getroot()
while x is not self.nil:
if start < x.start:
x = x.left
elif start == x.start:
if end == x.end:
break # found it
elif end < x.end:
x = x.left
else:
x = x.right
else:
x = x.right
return x if x is not self.nil else None
cpdef RBNode find_left_end(self, double t):
"""Find the leftmode node with end >= t. With non-overlapping
intervals, this is the first node that might overlap time t.
Note that this relies on non-overlapping intervals, since
it assumes that we can use the endpoints to traverse the
tree even though it was created using the start points."""
cdef RBNode x = self.getroot()
while x is not self.nil:
if t < x.end:
if x.left is self.nil:
break
x = x.left
elif t == x.end:
break
else:
if x.right is self.nil:
x = self.__successor(x)
break
x = x.right
return x if x is not self.nil else None
cpdef RBNode find_right_start(self, double t):
"""Find the rightmode node with start <= t. With non-overlapping
intervals, this is the last node that might overlap time t."""
cdef RBNode x = self.getroot()
while x is not self.nil:
if t < x.start:
if x.left is self.nil:
x = self.__predecessor(x)
break
x = x.left
elif t == x.start:
break
else:
if x.right is self.nil:
break
x = x.right
return x if x is not self.nil else None
# Intersections
def intersect(self, double start, double end):
"""Generator that returns nodes that overlap the given
(start,end) range. Assumes non-overlapping intervals."""
# Start with the leftmost node that ends after start
cdef RBNode n = self.find_left_end(start)
while n is not None:
if n.start >= end:
# this node starts after the requested end; we're done
break
if start < n.end:
# this node overlaps our requested area
yield n
n = self.successor(n)
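For reference, a minimal usage sketch of this tree (assuming the compiled extension is importable as nilmdb.server.rbtree; the string payloads here are arbitrary):

    from nilmdb.server.rbtree import RBTree, RBNode

    tree = RBTree()
    # Insert three non-overlapping half-open intervals [start, end).
    tree.insert(RBNode(0.0, 10.0, "a"))
    tree.insert(RBNode(10.0, 20.0, "b"))
    tree.insert(RBNode(20.0, 30.0, "c"))

    # find() matches exact (start, end) keys only.
    assert tree.find(10.0, 20.0).obj == "b"

    # intersect() yields every node overlapping the query range.
    print [n.obj for n in tree.intersect(5.0, 25.0)]   # ['a', 'b', 'c']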


@@ -0,0 +1 @@
rbtree.pxd


@@ -1,17 +1,19 @@
"""CherryPy-based server for accessing NILM database via HTTP"""
# Need absolute_import so that "import nilmdb" won't pull in nilmdb.py,
# but will pull the nilmdb module instead.
# Need absolute_import so that "import nilmdb" won't pull in
# nilmdb.py, but will pull the nilmdb module instead.
from __future__ import absolute_import
import nilmdb
from nilmdb.printf import *
from nilmdb.utils.printf import *
from nilmdb.server.errors import *
import cherrypy
import sys
import time
import os
import simplejson as json
import decorator
import traceback
try:
import cherrypy
@@ -24,8 +26,61 @@ class NilmApp(object):
def __init__(self, db):
self.db = db
version = "1.1"
version = "1.2"
# Decorators
def chunked_response(func):
"""Decorator to enable chunked responses."""
# Set this to False to get better tracebacks from some requests
# (/stream/extract, /stream/intervals).
func._cp_config = { 'response.stream': True }
return func
def response_type(content_type):
"""Return a decorator-generating function that sets the
response type to the specified string."""
def wrapper(func, *args, **kwargs):
cherrypy.response.headers['Content-Type'] = content_type
return func(*args, **kwargs)
return decorator.decorator(wrapper)
@decorator.decorator
def workaround_cp_bug_1200(func, *args, **kwargs): # pragma: no cover
"""Decorator to work around CherryPy bug #1200 in a response
generator.
Even if chunked responses are disabled, LookupError or
UnicodeError exceptions may still be swallowed by CherryPy due to
bug #1200. This throws them as generic Exceptions instead so that
they make it through.
"""
try:
for val in func(*args, **kwargs):
yield val
except (LookupError, UnicodeError) as e:
raise Exception("bug workaround; real exception is:\n" +
traceback.format_exc())
def exception_to_httperror(*expected):
"""Return a decorator-generating function that catches expected
errors and raises an HTTPError describing them instead.
@exception_to_httperror(NilmDBError, ValueError)
def foo():
pass
"""
def wrapper(func, *args, **kwargs):
try:
return func(*args, **kwargs)
except expected as e:
message = sprintf("%s", str(e))
raise cherrypy.HTTPError("400 Bad Request", message)
# We need to preserve the function's argspecs for CherryPy to
# handle argument errors correctly. Decorator.decorator takes
# care of that.
return decorator.decorator(wrapper)
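These decorators are stacked on the CherryPy handlers below; a hypothetical endpoint sketch showing the intended combination (the handler name and the DB call are made up for illustration):

    class Example(NilmApp):
        @cherrypy.expose
        @chunked_response                # stream the response in chunks
        @response_type("text/plain")     # force a plain-text content type
        def dump(self, path):
            @workaround_cp_bug_1200
            def content():
                # Hypothetical iterator over database rows for 'path'.
                for row in self.db.some_rows(path):
                    yield str(row) + "\n"
            return content()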
# CherryPy apps
class Root(NilmApp):
"""Root application for NILM database"""
@@ -59,7 +114,7 @@ class Root(NilmApp):
@cherrypy.expose
@cherrypy.tools.json_out()
def dbsize(self):
return nilmdb.du.du(self.db.get_basepath())
return nilmdb.utils.du(self.db.get_basepath())
class Stream(NilmApp):
"""Stream-specific operations"""
@@ -78,15 +133,20 @@ class Stream(NilmApp):
# /stream/create?path=/newton/prep&layout=PrepData
@cherrypy.expose
@cherrypy.tools.json_out()
@exception_to_httperror(NilmDBError, ValueError)
def create(self, path, layout):
"""Create a new stream in the database. Provide path
and one of the nilmdb.layout.layouts keys.
"""
try:
return self.db.stream_create(path, layout)
except Exception as e:
message = sprintf("%s: %s", type(e).__name__, e.message)
raise cherrypy.HTTPError("400 Bad Request", message)
return self.db.stream_create(path, layout)
# /stream/destroy?path=/newton/prep
@cherrypy.expose
@cherrypy.tools.json_out()
@exception_to_httperror(NilmDBError)
def destroy(self, path):
"""Delete a stream and its associated data."""
return self.db.stream_destroy(path)
# /stream/get_metadata?path=/newton/prep
# /stream/get_metadata?path=/newton/prep&key=foo&key=bar
@@ -98,7 +158,7 @@ class Stream(NilmApp):
matching the given keys."""
try:
data = self.db.stream_get_metadata(path)
except nilmdb.nilmdb.StreamError as e:
except nilmdb.server.nilmdb.StreamError as e:
raise cherrypy.HTTPError("404 Not Found", e.message)
if key is None: # If no keys specified, return them all
key = data.keys()
@@ -115,49 +175,35 @@ class Stream(NilmApp):
# /stream/set_metadata?path=/newton/prep&data=<json>
@cherrypy.expose
@cherrypy.tools.json_out()
@exception_to_httperror(NilmDBError, LookupError, TypeError)
def set_metadata(self, path, data):
"""Set metadata for the named stream, replacing any
existing metadata. Data should be a json-encoded
dictionary"""
try:
data_dict = json.loads(data)
self.db.stream_set_metadata(path, data_dict)
except Exception as e:
message = sprintf("%s: %s", type(e).__name__, e.message)
raise cherrypy.HTTPError("400 Bad Request", message)
data_dict = json.loads(data)
self.db.stream_set_metadata(path, data_dict)
return "ok"
# /stream/update_metadata?path=/newton/prep&data=<json>
@cherrypy.expose
@cherrypy.tools.json_out()
@exception_to_httperror(NilmDBError, LookupError, TypeError)
def update_metadata(self, path, data):
"""Update metadata for the named stream. Data
should be a json-encoded dictionary"""
try:
data_dict = json.loads(data)
self.db.stream_update_metadata(path, data_dict)
except Exception as e:
message = sprintf("%s: %s", type(e).__name__, e.message)
raise cherrypy.HTTPError("400 Bad Request", message)
data_dict = json.loads(data)
self.db.stream_update_metadata(path, data_dict)
return "ok"
# /stream/insert?path=/newton/prep
@cherrypy.expose
@cherrypy.tools.json_out()
#@cherrypy.tools.disable_prb()
def insert(self, path, old_timestamp = None):
def insert(self, path, start, end):
"""
Insert new data into the database. Provide textual data
(matching the path's layout) as a HTTP PUT.
old_timestamp is used when making multiple, split-up insertions
for a larger contiguous block of data. The first insert
will return the maximum timestamp that it saw, and the second
insert should provide this timestamp as an argument. This is
used to extend the previous database interval rather than
start a new one.
"""
# Important that we always read the input before throwing any
# errors, to keep lengths happy for persistent connections.
# However, CherryPy 3.2.2 has a bug where this fails for GET
@@ -175,35 +221,75 @@ class Stream(NilmApp):
# Parse the input data
try:
parser = nilmdb.layout.Parser(layout)
parser = nilmdb.server.layout.Parser(layout)
parser.parse(body)
except nilmdb.layout.ParserError as e:
except nilmdb.server.layout.ParserError as e:
raise cherrypy.HTTPError("400 Bad Request",
"Error parsing input data: " +
"error parsing input data: " +
e.message)
if (not parser.min_timestamp or not parser.max_timestamp or
not len(parser.data)):
raise cherrypy.HTTPError("400 Bad Request",
"no data provided")
# Check limits
start = float(start)
end = float(end)
if parser.min_timestamp < start:
raise cherrypy.HTTPError("400 Bad Request", "Data timestamp " +
repr(parser.min_timestamp) +
" < start time " + repr(start))
if parser.max_timestamp >= end:
raise cherrypy.HTTPError("400 Bad Request", "Data timestamp " +
repr(parser.max_timestamp) +
" >= end time " + repr(end))
# Now do the nilmdb insert, passing it the parser full of data.
try:
if old_timestamp:
old_timestamp = float(old_timestamp)
result = self.db.stream_insert(path, parser, old_timestamp)
except nilmdb.nilmdb.NilmDBError as e:
result = self.db.stream_insert(path, start, end, parser.data)
except NilmDBError as e:
raise cherrypy.HTTPError("400 Bad Request", e.message)
# Return the maximum timestamp that we saw. The client will
# return this back to us as the old_timestamp parameter, if
# it has more data to send.
return ("ok", parser.max_timestamp)
# Done
return "ok"
# /stream/remove?path=/newton/prep
# /stream/remove?path=/newton/prep&start=1234567890.0&end=1234567899.0
@cherrypy.expose
@cherrypy.tools.json_out()
@exception_to_httperror(NilmDBError)
def remove(self, path, start = None, end = None):
"""
Remove data from the backend database. Removes all data in
the interval [start, end). Returns the number of data points
removed.
"""
if start is not None:
start = float(start)
if end is not None:
end = float(end)
if start is not None and end is not None:
if end < start:
raise cherrypy.HTTPError("400 Bad Request",
"end before start")
return self.db.stream_remove(path, start, end)
# /stream/intervals?path=/newton/prep
# /stream/intervals?path=/newton/prep&start=1234567890.0&end=1234567899.0
@cherrypy.expose
@chunked_response
@response_type("text/plain")
def intervals(self, path, start = None, end = None):
"""
Get intervals from backend database. Streams the resulting
intervals as JSON strings separated by newlines. This may
make multiple requests to the nilmdb backend to avoid causing
it to block for too long.
Note that the response type is set to 'text/plain' even
though we're sending back JSON; this is because we're not
really returning a single JSON object.
"""
if start is not None:
start = float(start)
@@ -219,9 +305,9 @@ class Stream(NilmApp):
if len(streams) != 1:
raise cherrypy.HTTPError("404 Not Found", "No such stream")
@workaround_cp_bug_1200
def content(start, end):
# Note: disable response.stream below to get better debug info
# from tracebacks in this subfunction.
# Note: disable chunked responses to see tracebacks from here.
while True:
(intervals, restart) = self.db.stream_intervals(path,start,end)
response = ''.join([ json.dumps(i) + "\n" for i in intervals ])
@@ -230,10 +316,11 @@ class Stream(NilmApp):
break
start = restart
return content(start, end)
intervals._cp_config = { 'response.stream': True } # chunked HTTP response
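Consuming this endpoint amounts to reading lines and JSON-decoding each one; a minimal sketch using the standard library (host, port, and path illustrative):

    import httplib, urllib, json

    params = urllib.urlencode({ "path": "/newton/prep" })
    conn = httplib.HTTPConnection("localhost", 12380)
    conn.request("GET", "/stream/intervals?" + params)
    for line in conn.getresponse().read().splitlines():
        print json.loads(line)    # one [start, end] interval per line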
# /stream/extract?path=/newton/prep&start=1234567890.0&end=1234567899.0
@cherrypy.expose
@chunked_response
@response_type("text/plain")
def extract(self, path, start = None, end = None, count = False):
"""
Extract data from backend database. Streams the resulting
@@ -261,11 +348,11 @@ class Stream(NilmApp):
layout = streams[0][1]
# Get formatter
formatter = nilmdb.layout.Formatter(layout)
formatter = nilmdb.server.layout.Formatter(layout)
@workaround_cp_bug_1200
def content(start, end, count):
# Note: disable response.stream below to get better debug info
# from tracebacks in this subfunction.
# Note: disable chunked responses to see tracebacks from here.
if count:
matched = self.db.stream_extract(path, start, end, count)
yield sprintf("%d\n", matched)
@@ -281,8 +368,6 @@ class Stream(NilmApp):
return
start = restart
return content(start, end, count)
extract._cp_config = { 'response.stream': True } # chunked HTTP response
class Exiter(object):
"""App that exits the server, for testing"""
@@ -307,7 +392,7 @@ class Server(object):
# Need to wrap DB object in a serializer because we'll call
# into it from separate threads.
self.embedded = embedded
self.db = nilmdb.serializer.WrapObject(db)
self.db = nilmdb.utils.Serializer(db)
cherrypy.config.update({
'server.socket_host': host,
'server.socket_port': port,
@@ -318,11 +403,22 @@ class Server(object):
if self.embedded:
cherrypy.config.update({ 'environment': 'embedded' })
# Send a permissive Access-Control-Allow-Origin (CORS) header
# with all responses so that browsers can send cross-domain
# requests to this server.
cherrypy.config.update({ 'response.headers.Access-Control-Allow-Origin':
'*' })
# Send tracebacks in error responses. They're hidden by the
# error_page function for client errors (code 400-499).
cherrypy.config.update({ 'request.show_tracebacks' : True })
self.force_traceback = force_traceback
# Patch CherryPy error handler to never pad out error messages.
# This isn't necessary, but then again, neither is padding the
# error messages.
cherrypy._cperror._ie_friendly_error_sizes = {}
cherrypy.tree.apps = {}
cherrypy.tree.mount(Root(self.db, self.version), "/")
cherrypy.tree.mount(Stream(self.db), "/stream")
@@ -385,8 +481,10 @@ class Server(object):
cherrypy.engine.start()
os._exit = real_exit
# Signal that the engine has started successfully
if event is not None:
event.set()
if blocking:
try:
cherrypy.engine.wait(cherrypy.engine.states.EXITING,

11 nilmdb/utils/__init__.py Normal file

@@ -0,0 +1,11 @@
"""NilmDB utilities"""
from .timer import Timer
from .iteratorizer import Iteratorizer
from .serializer import Serializer
from .lrucache import lru_cache
from .diskusage import du
from .mustclose import must_close
from .urllib import urlencode
from . import misc
from . import atomic

26 nilmdb/utils/atomic.py Normal file

@@ -0,0 +1,26 @@
# Atomic file writing helper.
import os
def replace_file(filename, content):
"""Attempt to atomically and durably replace the filename with the
given contents. This is intended to be 'pretty good on most
OSes', but not necessarily bulletproof."""
newfilename = filename + ".new"
# Write to new file, flush it
with open(newfilename, "wb") as f:
f.write(content)
f.flush()
os.fsync(f.fileno())
# Move new file over old one
try:
os.rename(newfilename, filename)
except OSError: # pragma: no cover
# Some OSes might not support renaming over an existing file.
# This is definitely NOT atomic!
os.remove(filename)
os.rename(newfilename, filename)
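Usage is a one-liner; readers of the target file then see either the old or the new content, never a partial write (file name illustrative):

    from nilmdb.utils.atomic import replace_file

    replace_file("state.json", '{"version": 2}\n')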


@@ -1,4 +1,3 @@
import nilmdb
import os
from math import log


@@ -0,0 +1,99 @@
import Queue
import threading
import sys
import contextlib
# This file provides a context manager that converts a function
# that takes a callback into an iterable that yields the values
# passed to that callback. This is done by running the function
# in a new thread.
# Based partially on http://stackoverflow.com/questions/9968592/
class IteratorizerThread(threading.Thread):
def __init__(self, queue, function, curl_hack):
"""
function: function to execute, which takes the
callback (provided by this class) as an argument
"""
threading.Thread.__init__(self)
self.function = function
self.queue = queue
self.die = False
self.curl_hack = curl_hack
def callback(self, data):
try:
if self.die:
raise Exception() # trigger termination
self.queue.put((1, data))
except:
if self.curl_hack:
# We can't raise exceptions, because the pycurl
# extension module will unconditionally print the
# exception itself, and not pass it up to the caller.
# Instead, just return a value that tells curl to
# abort. (-1 would be best, in case we were given 0
# bytes, but the extension doesn't support that).
self.queue.put((2, sys.exc_info()))
return 0
raise
def run(self):
try:
result = self.function(self.callback)
except:
self.queue.put((2, sys.exc_info()))
else:
self.queue.put((0, result))
@contextlib.contextmanager
def Iteratorizer(function, curl_hack = False):
"""
Context manager that takes a function expecting a callback,
and provides an iterable that yields the values passed to that
callback instead.
function: function to execute, which takes a callback
(provided by this context manager) as an argument
with Iteratorizer(func) as it:
for i in it:
print 'callback was passed:', i
print 'function returned:', it.retval
"""
queue = Queue.Queue(maxsize = 1)
thread = IteratorizerThread(queue, function, curl_hack)
thread.daemon = True
thread.start()
class iteratorizer_gen(object):
def __init__(self, queue):
self.queue = queue
self.retval = None
def __iter__(self):
return self
def next(self):
(typ, data) = self.queue.get()
if typ == 0:
# function has returned
self.retval = data
raise StopIteration
elif typ == 1:
# data is available
return data
else:
# callback raised an exception
raise data[0], data[1], data[2]
try:
yield iteratorizer_gen(queue)
finally:
# Ask the thread to die, if it's still running.
thread.die = True
while thread.isAlive():
try:
queue.get(True, 0.01)
except:
pass
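A concrete, self-contained example of the pattern the docstring above describes; walk() stands in for any callback-style API such as pycurl's WRITEFUNCTION:

    def walk(callback):
        # Callback-style API: pushes values at the caller, then returns.
        for i in range(3):
            callback(i * 10)
        return "done"

    with Iteratorizer(walk) as it:
        for value in it:
            print 'callback was passed:', value   # 0, then 10, then 20
        print 'function returned:', it.retval     # 'done'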

77 nilmdb/utils/lrucache.py Normal file

@@ -0,0 +1,77 @@
# Memoize a function's return value with a least-recently-used cache
# Based on:
# http://code.activestate.com/recipes/498245-lru-and-lfu-cache-decorators/
# with added 'destructor' functionality.
import collections
import decorator
import warnings
def lru_cache(size = 10, onremove = None, keys = slice(None)):
"""Least-recently-used cache decorator.
@lru_cache(size = 10, onremove = None)
def f(...):
pass
Given a function and arguments, memoize its return value. Up to
'size' elements are cached. 'keys' is a slice object that
represents which arguments are used as the cache key.
When evicting a value from the cache, call the function
'onremove' with the value that's being evicted.
Call f.cache_remove(...) to evict the cache entry with the given
arguments. Call f.cache_remove_all() to evict all entries.
f.cache_hits and f.cache_misses give statistics.
"""
def decorate(func):
cache = collections.OrderedDict() # order: least- to most-recent
def evict(value):
if onremove:
onremove(value)
def wrapper(orig, *args, **kwargs):
if kwargs:
raise NotImplementedError("kwargs not supported")
key = args[keys]
try:
value = cache.pop(key)
orig.cache_hits += 1
except KeyError:
value = orig(*args)
orig.cache_misses += 1
if len(cache) >= size:
evict(cache.popitem(0)[1]) # evict LRU cache entry
cache[key] = value # (re-)insert this key at end
return value
def cache_remove(*args):
"""Remove the described key from this cache, if present."""
key = args
if key in cache:
evict(cache.pop(key))
else:
if len(cache) > 0 and len(args) != len(cache.iterkeys().next()):
raise KeyError("trying to remove from LRU cache, but "
"number of arguments doesn't match the "
"cache key length")
def cache_remove_all():
# Iterate over a snapshot of the keys, since evicting
# entries mutates the dict as we go.
for key in list(cache):
evict(cache.pop(key))
def cache_info():
return (func.cache_hits, func.cache_misses)
new = decorator.decorator(wrapper, func)
func.cache_hits = 0
func.cache_misses = 0
new.cache_info = cache_info
new.cache_remove = cache_remove
new.cache_remove_all = cache_remove_all
return new
return decorate
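As a usage sketch, here is how a cache of open file objects might look, with the destructor closing evicted files (paths illustrative):

    @lru_cache(size = 2, onremove = lambda f: f.close())
    def open_file(path):
        return open(path)

    f1 = open_file("/etc/hostname")
    f2 = open_file("/etc/hostname")    # cache hit: same object returned
    assert f1 is f2
    print open_file.cache_info()       # (hits, misses) == (1, 1)
    open_file.cache_remove_all()       # evicts and closes the cached file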

8 nilmdb/utils/misc.py Normal file

@@ -0,0 +1,8 @@
import itertools
def pairwise(iterable):
"s -> (s0,s1), (s1,s2), ..., (sn,None)"
a, b = itertools.tee(iterable)
next(b, None)
return itertools.izip_longest(a, b)
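For example:

    >>> list(pairwise("abc"))
    [('a', 'b'), ('b', 'c'), ('c', None)]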

63 nilmdb/utils/mustclose.py Normal file

@@ -0,0 +1,63 @@
from nilmdb.utils.printf import *
import sys
import inspect
import decorator
def must_close(errorfile = sys.stderr, wrap_verify = False):
"""Class decorator that warns on 'errorfile' at deletion time if
the class's close() member wasn't called.
If 'wrap_verify' is True, every class method is wrapped with a
verifier that will raise AssertionError if the .close() method has
already been called."""
def class_decorator(cls):
# Helper to replace a class method with a wrapper function,
# while maintaining argument specs etc.
def wrap_class_method(wrapper_func):
method = wrapper_func.__name__
if method in cls.__dict__:
orig = getattr(cls, method).im_func
else:
orig = lambda self: None
setattr(cls, method, decorator.decorator(wrapper_func, orig))
@wrap_class_method
def __init__(orig, self, *args, **kwargs):
ret = orig(self, *args, **kwargs)
self.__dict__["_must_close"] = True
self.__dict__["_must_close_initialized"] = True
return ret
@wrap_class_method
def __del__(orig, self, *args, **kwargs):
if "_must_close" in self.__dict__:
fprintf(errorfile, "error: %s.close() wasn't called!\n",
self.__class__.__name__)
return orig(self, *args, **kwargs)
@wrap_class_method
def close(orig, self, *args, **kwargs):
del self._must_close
return orig(self, *args, **kwargs)
# Optionally wrap all other functions
def verifier(orig, self, *args, **kwargs):
if ("_must_close" not in self.__dict__ and
"_must_close_initialized" in self.__dict__):
raise AssertionError("called " + str(orig) + " after close")
return orig(self, *args, **kwargs)
if wrap_verify:
for (name, method) in inspect.getmembers(cls, inspect.ismethod):
# Skip class methods
if method.__self__ is not None:
continue
# Skip some methods
if name in [ "__del__", "__init__" ]:
continue
# Set up wrapper
setattr(cls, name, decorator.decorator(verifier,
method.im_func))
return cls
return class_decorator
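A usage sketch (class and method names made up): forgetting close() prints the warning at garbage-collection time, and with wrap_verify enabled any use after close() raises:

    @must_close(wrap_verify = True)
    class Resource(object):
        def close(self):
            pass
        def work(self):
            return 42

    r = Resource()
    r.work()       # fine
    r.close()
    r.work()       # raises AssertionError: called ... after close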


@@ -67,3 +67,6 @@ class WrapObject(object):
def __del__(self):
self.__wrap_call_queue.put((None, None, None, None))
self.__wrap_serializer.join()
# Just an alias
Serializer = WrapObject


@@ -5,6 +5,7 @@
# with nilmdb.Timer("flush"):
# foo.flush()
from __future__ import print_function
import contextlib
import time
@@ -18,4 +19,4 @@ def Timer(name = None, tosyslog = False):
import syslog
syslog.syslog(msg)
else:
print msg
print(msg)


@@ -1,11 +1,10 @@
"""File-like objects that add timestamps to the input lines"""
from __future__ import absolute_import
from nilmdb.printf import *
from nilmdb.utils.printf import *
from nilmdb.utils import datetime_tz
import time
import os
import datetime_tz
class Timestamper(object):
"""A file-like object that adds timestamps to lines of an input file."""

37 nilmdb/utils/urllib.py Normal file

@@ -0,0 +1,37 @@
from __future__ import absolute_import
from urllib import quote_plus, _is_unicode
# urllib.urlencode insists on encoding Unicode as ASCII. This is based
# on that function, except we always encode it as UTF-8 instead.
def urlencode(query):
"""Encode a dictionary into a URL query string.
If any values in the query arg are sequences, each sequence
element is converted to a separate parameter.
"""
query = query.items()
l = []
for k, v in query:
k = quote_plus(str(k))
if isinstance(v, str):
v = quote_plus(v)
l.append(k + '=' + v)
elif _is_unicode(v):
v = quote_plus(v.encode("utf-8","strict"))
l.append(k + '=' + v)
else:
try:
# is this a sufficient test for sequence-ness?
len(v)
except TypeError:
# not a sequence
v = quote_plus(str(v))
l.append(k + '=' + v)
else:
# loop over the sequence
for elt in v:
l.append(k + '=' + quote_plus(str(elt)))
return '&'.join(l)


@@ -3,14 +3,17 @@
import nilmdb
import argparse
parser = argparse.ArgumentParser(description='Run the NILM server')
formatter = argparse.ArgumentDefaultsHelpFormatter
parser = argparse.ArgumentParser(description='Run the NILM server',
formatter_class = formatter)
parser.add_argument('-p', '--port', help='Port number', type=int, default=12380)
parser.add_argument('-d', '--database', help='Database directory', default="db")
parser.add_argument('-y', '--yappi', help='Run with yappi profiler',
action='store_true')
args = parser.parse_args()
# Start web app on a custom port
db = nilmdb.NilmDB("db")
db = nilmdb.NilmDB(args.database)
server = nilmdb.Server(db, host = "127.0.0.1",
port = args.port,
embedded = False)

46 runtests.py Executable file

@@ -0,0 +1,46 @@
#!/usr/bin/python
import nose
import os
import sys
import glob
from collections import OrderedDict
class JimOrderPlugin(nose.plugins.Plugin):
"""When searching for tests and encountering a directory that
contains a 'test.order' file, run tests listed in that file, in the
order that they're listed. Globs are OK in that file and duplicates
are removed."""
name = 'jimorder'
score = 10000
def prepareTestLoader(self, loader):
def wrap(func):
def wrapper(name, *args, **kwargs):
addr = nose.selector.TestAddress(
name, workingDir=loader.workingDir)
try:
order = os.path.join(addr.filename, "test.order")
except:
order = None
if order and os.path.exists(order):
files = []
for line in open(order):
line = line.split('#')[0].strip()
if not line:
continue
fn = os.path.join(addr.filename, line.strip())
files.extend(sorted(glob.glob(fn)) or [fn])
files = list(OrderedDict.fromkeys(files))
tests = [ wrapper(fn, *args, **kwargs) for fn in files ]
return loader.suiteClass(tests)
return func(name, *args, **kwargs)
return wrapper
loader.loadTestsFromName = wrap(loader.loadTestsFromName)
return loader
# Use setup.cfg for most of the test configuration. Adding
# --with-jimorder here means that a normal "nosetests" run will
# still work, it just won't support test.order.
nose.main(addplugins = [ JimOrderPlugin() ],
argv = sys.argv + ["--with-jimorder"])


@@ -1,23 +1,40 @@
[aliases]
test = nosetests
[nosetests]
# note: the value doesn't matter, that's why they're empty here
nocapture=
nologcapture= # comment to see cherrypy logs on failure
with-coverage=
cover-inclusive=
# Note: values must be set to 1, and have no comments on the same line,
# for "python setup.py nosetests" to work correctly.
nocapture=1
# Comment this out to see CherryPy logs on failure:
nologcapture=1
with-coverage=1
cover-inclusive=1
cover-package=nilmdb
cover-erase=
##cover-html= # this works, puts html output in cover/ dir
##cover-branches= # need nose 1.1.3 for this
stop=
cover-erase=1
# this works, puts html output in cover/ dir:
# cover-html=1
# need nose 1.1.3 for this:
# cover-branches=1
#debug=nose
#debug-log=nose.log
stop=1
verbosity=2
tests=tests
#tests=tests/test_bulkdata.py
#tests=tests/test_mustclose.py
#tests=tests/test_lrucache.py
#tests=tests/test_cmdline.py
#tests=tests/test_layout.py
tests=tests/test_interval.py
#tests=tests/test_rbtree.py
#tests=tests/test_interval.py
#tests=tests/test_rbtree.py,tests/test_interval.py
#tests=tests/test_interval.py
#tests=tests/test_client.py
#tests=tests/test_timestamper.py
#tests=tests/test_serializer.py
#tests=tests/test_iteratorizer.py
#tests=tests/test_client.py:TestClient.test_client_nilmdb
#with-profile=
#tests=tests/test_nilmdb.py
#with-profile=1
#profile-sort=time
##profile-restrict=10 # doesn't work right, treated as string or something

48 setup.py Executable file

@@ -0,0 +1,48 @@
#!/usr/bin/python
# This is supposed to be using Distribute:
#
# distutils provides a "setup" method.
# setuptools is a set of monkeypatches on top of that.
# distribute is a particular version/implementation of setuptools.
#
# So we don't really know if this is using the old setuptools or the
# Distribute-provided version of setuptools.
from setuptools import setup, find_packages
from distutils.extension import Extension
from Cython.Build import cythonize
# Hack to workaround logging/multiprocessing issue:
# https://groups.google.com/d/msg/nose-users/fnJ-kAUbYHQ/_UsLN786ygcJ
try: import multiprocessing
except: pass
# Build cython modules.
cython_modules = cythonize("**/*.pyx")
# Run setup
setup(name='nilmdb',
version = '1.0',
url = 'https://git.jim.sh/jim/lees/nilmdb.git',
author = 'Jim Paris',
author_email = 'jim@jtan.com',
tests_require = [ 'nose',
'coverage',
],
setup_requires = [ 'cython',
],
install_requires = [ 'distribute',
'decorator',
],
packages = [ 'nilmdb',
'nilmdb.utils',
'nilmdb.utils.datetime_tz',
'nilmdb.server',
'nilmdb.client',
'nilmdb.cmdline',
],
ext_modules = cython_modules,
zip_safe = False,
)

124 tests/data/extract-7 Normal file

@@ -0,0 +1,124 @@
# path: /newton/prep
# layout: PrepData
# start: 1332496830.0
# end: 1332496830.999
1332496830.000000 251774.000000 224241.000000 5688.100098 1915.530029 9329.219727 4183.709961 1212.349976 2641.790039
1332496830.008333 259567.000000 222698.000000 6207.600098 678.671997 9380.230469 4575.580078 2830.610107 2688.629883
1332496830.016667 263073.000000 223304.000000 4961.640137 2197.120117 7687.310059 4861.859863 2732.780029 3008.540039
1332496830.025000 257614.000000 223323.000000 5003.660156 3525.139893 7165.310059 4685.620117 1715.380005 3440.479980
1332496830.033333 255780.000000 221915.000000 6357.310059 2145.290039 8426.969727 3775.350098 1475.390015 3797.239990
1332496830.041667 260166.000000 223008.000000 6702.589844 1484.959961 9288.099609 3330.830078 1228.500000 3214.320068
1332496830.050000 261231.000000 226426.000000 4980.060059 2982.379883 8499.629883 4267.669922 994.088989 2292.889893
1332496830.058333 255117.000000 226642.000000 4584.410156 4656.439941 7860.149902 5317.310059 1473.599976 2111.689941
1332496830.066667 253300.000000 223554.000000 6455.089844 3036.649902 8869.750000 4986.310059 2607.360107 2839.590088
1332496830.075000 261061.000000 221263.000000 6951.979980 1500.239990 9386.099609 3791.679932 2677.010010 3980.629883
1332496830.083333 266503.000000 223198.000000 5189.609863 2594.560059 8571.530273 3175.000000 919.840027 3792.010010
1332496830.091667 260692.000000 225184.000000 3782.479980 4642.879883 7662.959961 3917.790039 -251.097000 2907.060059
1332496830.100000 253963.000000 225081.000000 5123.529785 3839.550049 8669.030273 4877.819824 943.723999 2527.449951
1332496830.108333 256555.000000 224169.000000 5930.600098 2298.540039 8906.709961 5331.680176 2549.909912 3053.560059
1332496830.116667 260889.000000 225010.000000 4681.129883 2971.870117 7900.040039 4874.080078 2322.429932 3649.120117
1332496830.125000 257944.000000 224923.000000 3291.139893 4357.089844 7131.589844 4385.560059 1077.050049 3664.040039
1332496830.133333 255009.000000 223018.000000 4584.819824 2864.000000 8469.490234 3625.580078 985.557007 3504.229980
1332496830.141667 260114.000000 221947.000000 5676.189941 1210.339966 9393.780273 3390.239990 1654.020020 3018.699951
1332496830.150000 264277.000000 224438.000000 4446.620117 2176.719971 8142.089844 4584.879883 2327.830078 2615.800049
1332496830.158333 259221.000000 226471.000000 2734.439941 4182.759766 6389.549805 5540.520020 1958.880005 2720.120117
1332496830.166667 252650.000000 224831.000000 4163.640137 2989.989990 7179.200195 5213.060059 1929.550049 3457.659912
1332496830.175000 257083.000000 222048.000000 5759.040039 702.440979 8566.549805 3552.020020 1832.939941 3956.189941
1332496830.183333 263130.000000 222967.000000 5141.140137 1166.119995 8666.959961 2720.370117 971.374023 3479.729980
1332496830.191667 260236.000000 225265.000000 3425.139893 3339.080078 7853.609863 3674.949951 525.908020 2443.310059
1332496830.200000 253503.000000 224527.000000 4398.129883 2927.429932 8110.279785 4842.470215 1513.869995 2467.100098
1332496830.208333 256126.000000 222693.000000 6043.529785 656.223999 8797.559570 4832.410156 2832.370117 3426.139893
1332496830.216667 261677.000000 223608.000000 5830.459961 1033.910034 8123.939941 3980.689941 1927.959961 4092.719971
1332496830.225000 259457.000000 225536.000000 4015.570068 2995.989990 7135.439941 3713.550049 307.220001 3849.429932
1332496830.233333 253352.000000 224216.000000 4650.560059 3196.620117 8131.279785 3586.159912 70.832298 3074.179932
1332496830.241667 256124.000000 221513.000000 6100.479980 821.979980 9757.540039 3474.510010 1647.520020 2559.860107
1332496830.250000 263024.000000 221559.000000 5789.959961 699.416992 9129.740234 4153.080078 2829.250000 2677.270020
1332496830.258333 261720.000000 224015.000000 4358.500000 2645.360107 7414.109863 4810.669922 2225.989990 3185.989990
1332496830.266667 254756.000000 224240.000000 4857.379883 3229.679932 7539.310059 4769.140137 1507.130005 3668.260010
1332496830.275000 256889.000000 222658.000000 6473.419922 1214.109985 9010.759766 3848.729980 1303.839966 3778.500000
1332496830.283333 264208.000000 223316.000000 5700.450195 1116.560059 9087.610352 3846.679932 1293.589966 2891.560059
1332496830.291667 263310.000000 225719.000000 3936.120117 3252.360107 7552.850098 4897.859863 1156.630005 2037.160034
1332496830.300000 255079.000000 225086.000000 4536.450195 3960.110107 7454.589844 5479.069824 1596.359985 2190.800049
1332496830.308333 254487.000000 222508.000000 6635.859863 1758.849976 8732.969727 4466.970215 2650.360107 3139.310059
1332496830.316667 261241.000000 222432.000000 6702.270020 1085.130005 8989.230469 3112.989990 1933.560059 3828.409912
1332496830.325000 262119.000000 225587.000000 4714.950195 2892.360107 8107.819824 2961.310059 239.977997 3273.719971
1332496830.333333 254999.000000 226514.000000 4532.089844 4126.899902 8200.129883 3872.590088 56.089001 2370.580078
1332496830.341667 254289.000000 224033.000000 6538.810059 2251.439941 9419.429688 4564.450195 2077.810059 2508.169922
1332496830.350000 261890.000000 221960.000000 6846.089844 1475.270020 9125.589844 4598.290039 3299.219971 3475.419922
1332496830.358333 264502.000000 223085.000000 5066.379883 3270.560059 7933.169922 4173.709961 1908.910034 3867.459961
1332496830.366667 257889.000000 223656.000000 4201.660156 4473.640137 7688.339844 4161.580078 687.578979 3653.689941
1332496830.375000 254270.000000 223151.000000 5715.140137 2752.139893 9273.320312 3772.949951 896.403992 3256.060059
1332496830.383333 258257.000000 224217.000000 6114.310059 1856.859985 9604.320312 4200.490234 1764.380005 2939.219971
1332496830.391667 260020.000000 226868.000000 4237.529785 3605.879883 8066.220215 5430.250000 2138.580078 2696.709961
1332496830.400000 255083.000000 225924.000000 3350.310059 4853.069824 7045.819824 5925.200195 1893.609985 2897.340088
1332496830.408333 254453.000000 222127.000000 5271.330078 2491.500000 8436.679688 5032.080078 2436.050049 3724.590088
1332496830.416667 262588.000000 219950.000000 5994.620117 789.273987 9029.650391 3515.739990 1953.569946 4014.520020
1332496830.425000 265610.000000 223333.000000 4391.410156 2400.959961 8146.459961 3536.959961 530.231995 3133.919922
1332496830.433333 257470.000000 226977.000000 2975.320068 4633.529785 7278.560059 4640.100098 -50.150200 2024.959961
1332496830.441667 250687.000000 226331.000000 4517.859863 3183.800049 8072.600098 5281.660156 1605.140015 2335.139893
1332496830.450000 255563.000000 224495.000000 5551.000000 1101.300049 8461.490234 4725.700195 2726.669922 3480.540039
1332496830.458333 261335.000000 224645.000000 4764.680176 1557.020020 7833.350098 3524.810059 1577.410034 4038.620117
1332496830.466667 260269.000000 224008.000000 3558.030029 2987.610107 7362.439941 3279.229980 562.442017 3786.550049
1332496830.475000 257435.000000 221777.000000 4972.600098 2166.879883 8481.440430 3328.719971 1037.130005 3271.370117
1332496830.483333 261046.000000 221550.000000 5816.180176 590.216980 9120.929688 3895.399902 2382.669922 2824.169922
1332496830.491667 262766.000000 224473.000000 4835.049805 1785.770020 7880.759766 4745.620117 2443.659912 3229.550049
1332496830.500000 256509.000000 226413.000000 3758.870117 3461.199951 6743.770020 4928.959961 1536.619995 3546.689941
1332496830.508333 250793.000000 224372.000000 5218.490234 2865.260010 7803.959961 4351.089844 1333.819946 3680.489990
1332496830.516667 256319.000000 222066.000000 6403.970215 732.344971 9627.759766 3089.300049 1516.780029 3653.689941
1332496830.525000 263343.000000 223235.000000 5200.430176 1388.579956 9372.849609 3371.229980 1450.390015 2678.909912
1332496830.533333 260903.000000 225110.000000 3722.580078 3246.659912 7876.540039 4716.810059 1498.439941 2116.520020
1332496830.541667 254416.000000 223769.000000 4841.649902 2956.399902 8115.919922 5392.359863 2142.810059 2652.320068
1332496830.550000 256698.000000 222172.000000 6471.229980 970.395996 8834.980469 4816.839844 2376.629883 3605.860107
1332496830.558333 261841.000000 223537.000000 5500.740234 1189.660034 8365.730469 4016.469971 1042.270020 3821.199951
1332496830.566667 259503.000000 225840.000000 3827.929932 3088.840088 7676.140137 3978.310059 -357.006989 3016.419922
1332496830.575000 253457.000000 224636.000000 4914.609863 3097.449951 8224.900391 4321.439941 171.373993 2412.360107
1332496830.583333 256029.000000 222221.000000 6841.799805 1028.500000 9252.299805 4387.569824 2418.139893 2510.100098
1332496830.591667 262840.000000 222550.000000 6210.250000 1410.729980 8538.900391 4152.580078 3009.300049 3219.760010
1332496830.600000 261633.000000 225065.000000 4284.529785 3357.209961 7282.169922 3823.590088 1402.839966 3644.669922
1332496830.608333 254591.000000 225109.000000 4693.160156 3647.739990 7745.160156 3686.379883 490.161011 3448.860107
1332496830.616667 254780.000000 223599.000000 6527.379883 1569.869995 9438.429688 3456.580078 1162.520020 3252.010010
1332496830.625000 260639.000000 224107.000000 6531.049805 1633.050049 9283.719727 4174.020020 2089.550049 2775.750000
1332496830.633333 261108.000000 225472.000000 4968.259766 3527.850098 7692.870117 5137.100098 2207.389893 2436.659912
1332496830.641667 255775.000000 223708.000000 4963.450195 4017.370117 7701.419922 5269.649902 2284.399902 2842.080078
1332496830.650000 257398.000000 220947.000000 6767.500000 1645.709961 9107.070312 4000.179932 2548.860107 3624.770020
1332496830.658333 264924.000000 221559.000000 6471.459961 1110.329956 9459.650391 3108.169922 1696.969971 3893.439941
1332496830.666667 265339.000000 225733.000000 4348.799805 3459.510010 8475.299805 4031.239990 573.346985 2910.270020
1332496830.675000 256814.000000 226995.000000 3479.540039 4949.790039 7499.910156 5624.709961 751.656006 2347.709961
1332496830.683333 253316.000000 225161.000000 5147.060059 3218.429932 8460.160156 5869.299805 2336.320068 2987.959961
1332496830.691667 259360.000000 223101.000000 5549.120117 1869.949951 8740.759766 4668.939941 2457.909912 3758.820068
1332496830.700000 262012.000000 224016.000000 4173.609863 3004.129883 8157.040039 3704.729980 987.963989 3652.750000
1332496830.708333 257176.000000 224420.000000 3517.300049 4118.750000 7822.240234 3718.229980 37.264900 2953.679932
1332496830.716667 255146.000000 223322.000000 4923.979980 2330.679932 9095.910156 3792.399902 1013.070007 2711.239990
1332496830.725000 260524.000000 223651.000000 5413.629883 1146.209961 8817.169922 4419.649902 2446.649902 2832.050049
1332496830.733333 262098.000000 225752.000000 4262.979980 2270.969971 7135.479980 5067.120117 2294.679932 3376.620117
1332496830.741667 256889.000000 225379.000000 3606.459961 3568.189941 6552.649902 4970.270020 1516.380005 3662.570068
1332496830.750000 253948.000000 222631.000000 5511.700195 2066.300049 7952.660156 4019.909912 1513.140015 3752.629883
1332496830.758333 259799.000000 222067.000000 5873.500000 608.583984 9253.780273 2870.739990 1348.239990 3344.199951
1332496830.766667 262547.000000 224901.000000 4346.080078 1928.099976 8590.969727 3455.459961 904.390991 2379.270020
1332496830.775000 256137.000000 226761.000000 3423.560059 3379.080078 7471.149902 4894.169922 1153.540039 2031.410034
1332496830.783333 250326.000000 225013.000000 5519.979980 2423.969971 7991.759766 5117.950195 2098.790039 3099.239990
1332496830.791667 255454.000000 222992.000000 6547.950195 496.496002 8751.339844 3900.560059 2132.290039 4076.810059
1332496830.800000 261286.000000 223489.000000 5152.850098 1501.510010 8425.610352 2888.030029 776.114014 3786.360107
1332496830.808333 258969.000000 224069.000000 3832.610107 3001.979980 7979.259766 3182.310059 52.716000 2874.800049
1332496830.816667 254946.000000 222035.000000 5317.879883 2139.800049 9103.139648 3955.610107 1235.170044 2394.149902
1332496830.825000 258676.000000 221205.000000 6594.910156 505.343994 9423.360352 4562.470215 2913.739990 2892.350098
1332496830.833333 262125.000000 223566.000000 5116.750000 1773.599976 8082.200195 4776.370117 2386.389893 3659.729980
1332496830.841667 257835.000000 225918.000000 3714.300049 3477.080078 7205.370117 4554.609863 711.539001 3878.419922
1332496830.850000 253660.000000 224371.000000 5022.450195 2592.429932 8277.200195 4119.370117 486.507996 3666.739990
1332496830.858333 259503.000000 222061.000000 6589.950195 659.935974 9596.919922 3598.100098 1702.489990 3036.600098
1332496830.866667 265495.000000 222843.000000 5541.850098 1728.430054 8459.959961 4492.000000 2231.969971 2430.620117
1332496830.875000 260929.000000 224996.000000 4000.949951 3745.989990 6983.790039 5430.859863 1855.260010 2533.379883
1332496830.883333 252716.000000 224335.000000 5086.560059 3401.149902 7597.970215 5196.120117 1755.719971 3079.760010
1332496830.891667 254110.000000 223111.000000 6822.189941 1229.079956 9164.339844 3761.229980 1679.390015 3584.879883
1332496830.900000 259969.000000 224693.000000 6183.950195 1538.500000 9222.080078 3139.169922 949.901978 3180.800049
1332496830.908333 259078.000000 226913.000000 4388.890137 3694.820068 8195.019531 3933.000000 426.079987 2388.449951
1332496830.916667 254563.000000 224760.000000 5168.439941 4020.939941 8450.269531 4758.910156 1458.900024 2286.429932
1332496830.925000 258059.000000 221217.000000 6883.459961 1649.530029 9232.780273 4457.649902 3057.820068 3031.949951
1332496830.933333 264667.000000 221177.000000 6218.509766 1645.729980 8657.179688 3663.500000 2528.280029 3978.340088
1332496830.941667 262925.000000 224382.000000 4627.500000 3635.929932 7892.799805 3431.320068 604.508972 3901.370117
1332496830.950000 254708.000000 225448.000000 4408.250000 4461.040039 8197.169922 3953.750000 -44.534599 3154.870117
1332496830.958333 253702.000000 224635.000000 5825.770020 2577.050049 9590.049805 4569.250000 1460.270020 2785.169922
1332496830.966667 260206.000000 224140.000000 5387.979980 1951.160034 8789.509766 5131.660156 2706.379883 2972.479980
1332496830.975000 261240.000000 224737.000000 3860.810059 3418.310059 7414.529785 5284.520020 2271.379883 3183.149902
1332496830.983333 256140.000000 223252.000000 3850.010010 3957.139893 7262.649902 4964.640137 1499.510010 3453.129883
1332496830.991667 256116.000000 221349.000000 5594.479980 2054.399902 8835.129883 3662.010010 1485.510010 3613.010010


@@ -0,0 +1,19 @@
2.56437e+05 2.24430e+05 4.01161e+03 3.47534e+03 7.49589e+03 3.38894e+03 2.61397e+02 3.73126e+03
2.53963e+05 2.24167e+05 5.62107e+03 1.54801e+03 9.16517e+03 3.52293e+03 1.05893e+03 2.99696e+03
2.58508e+05 2.24930e+05 6.01140e+03 8.18866e+02 9.03995e+03 4.48244e+03 2.49039e+03 2.67934e+03
2.59627e+05 2.26022e+05 4.47450e+03 2.42302e+03 7.41419e+03 5.07197e+03 2.43938e+03 2.96296e+03
2.55187e+05 2.24632e+05 4.73857e+03 3.39804e+03 7.39512e+03 4.72645e+03 1.83903e+03 3.39353e+03
2.57102e+05 2.21623e+05 6.14413e+03 1.44109e+03 8.75648e+03 3.49532e+03 1.86994e+03 3.75253e+03
2.63653e+05 2.21770e+05 6.22177e+03 7.38962e+02 9.54760e+03 2.66682e+03 1.46266e+03 3.33257e+03
2.63613e+05 2.25256e+05 4.47712e+03 2.43745e+03 8.51021e+03 3.85563e+03 9.59442e+02 2.38718e+03
2.55350e+05 2.26264e+05 4.28372e+03 3.92394e+03 7.91247e+03 5.46652e+03 1.28499e+03 2.09372e+03
2.52727e+05 2.24609e+05 5.85193e+03 2.49198e+03 8.54063e+03 5.62305e+03 2.33978e+03 3.00714e+03
2.58475e+05 2.23578e+05 5.92487e+03 1.39448e+03 8.77962e+03 4.54418e+03 2.13203e+03 3.84976e+03
2.61563e+05 2.24609e+05 4.33614e+03 2.45575e+03 8.05538e+03 3.46911e+03 6.27873e+02 3.66420e+03
2.56401e+05 2.24441e+05 4.18715e+03 3.45717e+03 7.90669e+03 3.53355e+03 -5.84482e+00 2.96687e+03
2.54745e+05 2.22644e+05 6.02005e+03 1.94721e+03 9.28939e+03 3.80020e+03 1.34820e+03 2.37785e+03
2.60723e+05 2.22660e+05 6.69719e+03 1.03048e+03 9.26124e+03 4.34917e+03 2.84530e+03 2.73619e+03
2.63089e+05 2.25711e+05 4.77887e+03 2.60417e+03 7.39660e+03 4.59811e+03 2.17472e+03 3.40729e+03
2.55843e+05 2.27128e+05 4.02413e+03 4.39323e+03 6.79336e+03 4.62535e+03 7.52009e+02 3.44647e+03
2.51904e+05 2.24868e+05 5.82289e+03 3.02127e+03 8.46160e+03 3.80298e+03 8.07212e+02 3.53468e+03
2.57670e+05 2.22974e+05 6.73436e+03 1.60956e+03 9.92960e+03 2.98028e+03 1.44168e+03 3.05351e+03


@@ -0,0 +1,11 @@
1332497040.000000 2.56439e+05 2.24775e+05 2.92897e+03 4.66646e+03 7.58491e+03 3.57351e+03 -4.34171e+02 2.98819e+03
1332497040.010000 2.51903e+05 2.23202e+05 4.23696e+03 3.49363e+03 8.53493e+03 4.29416e+03 8.49573e+02 2.38189e+03
1332497040.020000 2.57625e+05 2.20247e+05 5.47017e+03 1.35872e+03 9.18903e+03 4.56136e+03 2.65599e+03 2.60912e+03
1332497040.030000 2.63375e+05 2.20706e+05 4.51842e+03 1.80758e+03 8.17208e+03 4.17463e+03 2.57884e+03 3.32848e+03
1332497040.040000 2.59221e+05 2.22346e+05 2.98879e+03 3.66264e+03 6.87274e+03 3.94223e+03 1.25928e+03 3.51786e+03
1332497040.050000 2.51918e+05 2.22281e+05 4.22677e+03 2.84764e+03 7.78323e+03 3.81659e+03 8.04944e+02 3.46314e+03
1332497040.050000 2.54478e+05 2.21701e+05 5.61366e+03 1.02262e+03 9.26581e+03 3.50152e+03 1.29331e+03 3.07271e+03
1332497040.060000 2.59568e+05 2.22945e+05 4.97190e+03 1.28250e+03 8.62081e+03 4.06316e+03 1.85717e+03 2.61990e+03
1332497040.070000 2.57269e+05 2.23697e+05 3.60527e+03 3.05749e+03 7.22363e+03 4.90330e+03 1.93736e+03 2.35357e+03
1332497040.080000 2.52274e+05 2.21438e+05 5.01228e+03 2.86309e+03 7.87115e+03 4.80448e+03 2.18291e+03 2.93397e+03
1332497040.090000 2.56468e+05 2.19205e+05 6.29804e+03 8.09467e+02 9.12895e+03 3.52055e+03 2.16980e+03 3.88739e+03

18 tests/test.order Normal file

@@ -0,0 +1,18 @@
test_printf.py
test_lrucache.py
test_mustclose.py
test_serializer.py
test_iteratorizer.py
test_timestamper.py
test_layout.py
test_rbtree.py
test_interval.py
test_bulkdata.py
test_nilmdb.py
test_client.py
test_cmdline.py
test_*.py

102 tests/test_bulkdata.py Normal file

@@ -0,0 +1,102 @@
# -*- coding: utf-8 -*-
import nilmdb
from nilmdb.utils.printf import *
from nose.tools import *
from nose.tools import assert_raises
import itertools
from testutil.helpers import *
testdb = "tests/bulkdata-testdb"
import nilmdb.server.bulkdata
from nilmdb.server.bulkdata import BulkData
class TestBulkData(object):
def test_bulkdata(self):
for (size, files, db) in [ ( 0, 0, testdb ),
( 25, 1000, testdb ),
( 1000, 3, testdb.decode("utf-8") ) ]:
recursive_unlink(db)
os.mkdir(db)
self.do_basic(db, size, files)
def do_basic(self, db, size, files):
"""Do the basic test with variable file_size and files_per_dir"""
if not size or not files:
data = BulkData(db)
else:
data = BulkData(db, file_size = size, files_per_dir = files)
# create empty
with assert_raises(ValueError):
data.create("/foo", "uint16_8")
with assert_raises(ValueError):
data.create("foo/bar", "uint16_8")
with assert_raises(ValueError):
data.create("/foo/bar", "uint8_8")
data.create("/foo/bar", "uint16_8")
data.create(u"/foo/baz/quux", "float64_16")
with assert_raises(ValueError):
data.create("/foo/bar/baz", "uint16_8")
with assert_raises(ValueError):
data.create("/foo/baz", "float64_16")
# get node -- see if caching works
nodes = []
for i in range(5000):
nodes.append(data.getnode("/foo/bar"))
nodes.append(data.getnode("/foo/baz/quux"))
del nodes
# Test node
node = data.getnode("/foo/bar")
with assert_raises(IndexError):
x = node[0]
raw = []
for i in range(1000):
raw.append([10000+i, 1, 2, 3, 4, 5, 6, 7, 8 ])
node.append(raw[0:1])
node.append(raw[1:100])
node.append(raw[100:])
misc_slices = [ 0, 100, slice(None), slice(0), slice(10),
slice(5,10), slice(3,None), slice(3,-3),
slice(20,10), slice(200,100,-1), slice(None,0,-1),
slice(100,500,5) ]
# Extract slices
for s in misc_slices:
eq_(node[s], raw[s])
# Get some coverage of remove; remove is more fully tested
# in cmdline
with assert_raises(IndexError):
node.remove(9999,9998)
# close, reopen
# reopen
data.close()
if not size or not files:
data = BulkData(db)
else:
data = BulkData(db, file_size = size, files_per_dir = files)
node = data.getnode("/foo/bar")
# Extract slices
for s in misc_slices:
eq_(node[s], raw[s])
# destroy
with assert_raises(ValueError):
data.destroy("/foo")
with assert_raises(ValueError):
data.destroy("/foo/baz")
with assert_raises(ValueError):
data.destroy("/foo/qwerty")
data.destroy("/foo/baz/quux")
data.destroy("/foo/bar")
# close
data.close()


@@ -1,8 +1,10 @@
import nilmdb
from nilmdb.printf import *
from nilmdb.client import ClientError, ServerError
# -*- coding: utf-8 -*-
import datetime_tz
import nilmdb
from nilmdb.utils.printf import *
from nilmdb.utils import timestamper
from nilmdb.client import ClientError, ServerError
from nilmdb.utils import datetime_tz
from nose.tools import *
from nose.tools import assert_raises
@@ -15,8 +17,9 @@ import cStringIO
import simplejson as json
import unittest
import warnings
import resource
from test_helpers import *
from testutil.helpers import *
testdb = "tests/client-testdb"
@@ -67,7 +70,11 @@ class TestClient(object):
eq_(distutils.version.StrictVersion(version),
distutils.version.StrictVersion(test_server.version))
def test_client_2_nilmdb(self):
# Bad URLs should give 404, not 500
with assert_raises(ClientError):
client.http.get("/stream/create")
def test_client_2_createlist(self):
# Basic stream tests, like those in test_nilmdb:test_stream
client = nilmdb.Client(url = "http://localhost:12380/")
@@ -82,6 +89,8 @@ class TestClient(object):
# Bad layout type
with assert_raises(ClientError):
client.stream_create("/newton/prep", "NoSuchLayout")
# Create three streams
client.stream_create("/newton/prep", "PrepData")
client.stream_create("/newton/raw", "RawData")
client.stream_create("/newton/zzz/rawnotch", "RawNotchedData")
@@ -95,6 +104,20 @@ class TestClient(object):
eq_(client.stream_list(layout="RawData"), [ ["/newton/raw", "RawData"] ])
eq_(client.stream_list(path="/newton/raw"), [ ["/newton/raw", "RawData"] ])
# Try messing with resource limits to trigger errors and get
# more coverage. Here, make it so we can only create files 1
# byte in size, which will trigger an IOError in the server when
# we create a table.
limit = resource.getrlimit(resource.RLIMIT_FSIZE)
resource.setrlimit(resource.RLIMIT_FSIZE, (1, limit[1]))
with assert_raises(ServerError) as e:
client.stream_create("/newton/hello", "RawData")
resource.setrlimit(resource.RLIMIT_FSIZE, limit)
def test_client_3_metadata(self):
client = nilmdb.Client(url = "http://localhost:12380/")
# Set / get metadata
eq_(client.stream_get_metadata("/newton/prep"), {})
eq_(client.stream_get_metadata("/newton/raw"), {})
@@ -124,23 +147,24 @@ class TestClient(object):
with assert_raises(ClientError):
client.stream_update_metadata("/newton/prep", [1,2,3])
def test_client_3_insert(self):
def test_client_4_insert(self):
client = nilmdb.Client(url = "http://localhost:12380/")
datetime_tz.localtz_set("America/New_York")
testfile = "tests/data/prep-20120323T1000"
start = datetime_tz.datetime_tz.smartparse("20120323T1000")
start = start.totimestamp()
rate = 120
# First try a nonexistent path
data = nilmdb.timestamper.TimestamperRate(testfile, start, 120)
data = timestamper.TimestamperRate(testfile, start, 120)
with assert_raises(ClientError) as e:
result = client.stream_insert("/newton/no-such-path", data)
in_("404 Not Found", str(e.exception))
# Now try reversed timestamps
data = nilmdb.timestamper.TimestamperRate(testfile, start, 120)
data = timestamper.TimestamperRate(testfile, start, 120)
data = reversed(list(data))
with assert_raises(ClientError) as e:
result = client.stream_insert("/newton/prep", data)
@@ -149,36 +173,66 @@ class TestClient(object):
# Now try empty data (no server request made)
empty = cStringIO.StringIO("")
data = nilmdb.timestamper.TimestamperRate(empty, start, 120)
data = timestamper.TimestamperRate(empty, start, 120)
result = client.stream_insert("/newton/prep", data)
eq_(result, None)
# Try forcing a server request with empty data
with assert_raises(ClientError) as e:
client.http.put("stream/insert", "", { "path": "/newton/prep" })
client.http.put("stream/insert", "", { "path": "/newton/prep",
"start": 0, "end": 0 })
in_("400 Bad Request", str(e.exception))
in_("no data provided", str(e.exception))
# Specify start/end (starts too late)
data = timestamper.TimestamperRate(testfile, start, 120)
with assert_raises(ClientError) as e:
result = client.stream_insert("/newton/prep", data,
start + 5, start + 120)
in_("400 Bad Request", str(e.exception))
in_("Data timestamp 1332511200.0 < start time 1332511205.0",
str(e.exception))
# Specify start/end (ends too early)
data = timestamper.TimestamperRate(testfile, start, 120)
with assert_raises(ClientError) as e:
result = client.stream_insert("/newton/prep", data,
start, start + 1)
in_("400 Bad Request", str(e.exception))
# Client chunks the input, so the exact timestamp here might change
# if the chunk positions change.
in_("Data timestamp 1332511271.016667 >= end time 1332511201.0",
str(e.exception))
# Now do the real load
data = nilmdb.timestamper.TimestamperRate(testfile, start, 120)
result = client.stream_insert("/newton/prep", data)
eq_(result[0], "ok")
data = timestamper.TimestamperRate(testfile, start, 120)
result = client.stream_insert("/newton/prep", data,
start, start + 119.999777)
eq_(result, "ok")
# Verify the intervals. Should be just one, even if the data
# was inserted in chunks, due to nilmdb interval concatenation.
intervals = list(client.stream_intervals("/newton/prep"))
eq_(intervals, [[start, start + 119.999777]])
# Try some overlapping data -- just insert it again
data = nilmdb.timestamper.TimestamperRate(testfile, start, 120)
data = timestamper.TimestamperRate(testfile, start, 120)
with assert_raises(ClientError) as e:
result = client.stream_insert("/newton/prep", data)
in_("400 Bad Request", str(e.exception))
in_("OverlapError", str(e.exception))
in_("verlap", str(e.exception))
def test_client_4_extract(self):
# Misc tests for extract. Most of them are in test_cmdline.
def test_client_5_extractremove(self):
# Misc tests for extract and remove. Most of them are in test_cmdline.
client = nilmdb.Client(url = "http://localhost:12380/")
for x in client.stream_extract("/newton/prep", 123, 123):
raise Exception("shouldn't be any data for this request")
raise AssertionError("shouldn't be any data for this request")
def test_client_5_generators(self):
with assert_raises(ClientError) as e:
client.stream_remove("/newton/prep", 123, 120)
def test_client_6_generators(self):
# A lot of the client functionality is already tested by test_cmdline,
# but this gets a bit more coverage that cmdline misses.
client = nilmdb.Client(url = "http://localhost:12380/")
@@ -215,7 +269,8 @@ class TestClient(object):
# Check PUT with generator out
with assert_raises(ClientError) as e:
client.http.put_gen("stream/insert", "",
{ "path": "/newton/prep" }).next()
{ "path": "/newton/prep",
"start": 0, "end": 0 }).next()
in_("400 Bad Request", str(e.exception))
in_("no data provided", str(e.exception))
@@ -226,25 +281,78 @@ class TestClient(object):
in_("404 Not Found", str(e.exception))
in_("No such stream", str(e.exception))
def test_client_6_chunked(self):
def test_client_7_headers(self):
# Make sure that /stream/intervals and /stream/extract
# properly return streaming, chunked response. Pokes around
# in client.http internals a bit to look at the response
# headers.
# properly return streaming, chunked, text/plain response.
# Pokes around in client.http internals a bit to look at the
# response headers.
client = nilmdb.Client(url = "http://localhost:12380/")
http = client.http
# Use a warning rather than returning a test failure, so that we can
# still disable chunked responses for debugging.
x = client.http.get("stream/intervals", { "path": "/newton/prep" },
retjson=False)
eq_(x.count('\n'), 2)
if "transfer-encoding: chunked" not in client.http._headers.lower():
warnings.warn("Non-chunked HTTP response for /stream/intervals")
x = client.http.get("stream/extract",
# Intervals
x = http.get("stream/intervals", { "path": "/newton/prep" },
retjson=False)
lines_(x, 1)
if "Transfer-Encoding: chunked" not in http._headers:
warnings.warn("Non-chunked HTTP response for /stream/intervals")
if "Content-Type: text/plain;charset=utf-8" not in http._headers:
raise AssertionError("/stream/intervals is not text/plain:\n" +
http._headers)
# Extract
x = http.get("stream/extract",
{ "path": "/newton/prep",
"start": "123",
"end": "123" }, retjson=False)
if "transfer-encoding: chunked" not in client.http._headers.lower():
if "Transfer-Encoding: chunked" not in http._headers:
warnings.warn("Non-chunked HTTP response for /stream/extract")
if "Content-Type: text/plain;charset=utf-8" not in http._headers:
raise AssertionError("/stream/extract is not text/plain:\n" +
http._headers)
# Make sure Access-Control-Allow-Origin gets set
if "Access-Control-Allow-Origin: " not in http._headers:
raise AssertionError("No Access-Control-Allow-Origin (CORS) "
"header in /stream/extract response:\n" +
http._headers)
def test_client_8_unicode(self):
# Basic Unicode tests
client = nilmdb.Client(url = "http://localhost:12380/")
# Delete streams that exist
for stream in client.stream_list():
client.stream_destroy(stream[0])
# Database is empty
eq_(client.stream_list(), [])
# Create Unicode stream, match it
raw = [ u"/düsseldorf/raw", u"uint16_6" ]
prep = [ u"/düsseldorf/prep", u"uint16_6" ]
client.stream_create(*raw)
eq_(client.stream_list(), [raw])
eq_(client.stream_list(layout=raw[1]), [raw])
eq_(client.stream_list(path=raw[0]), [raw])
client.stream_create(*prep)
eq_(client.stream_list(), [prep, raw])
# Set / get metadata with Unicode keys and values
eq_(client.stream_get_metadata(raw[0]), {})
eq_(client.stream_get_metadata(prep[0]), {})
meta1 = { u"alpha": u"α",
u"β": u"beta" }
meta2 = { u"alpha": u"α" }
meta3 = { u"β": u"beta" }
client.stream_set_metadata(prep[0], meta1)
client.stream_update_metadata(prep[0], {})
client.stream_update_metadata(raw[0], meta2)
client.stream_update_metadata(raw[0], meta3)
eq_(client.stream_get_metadata(prep[0]), meta1)
eq_(client.stream_get_metadata(raw[0]), meta1)
eq_(client.stream_get_metadata(raw[0], [ "alpha" ]), meta2)
eq_(client.stream_get_metadata(raw[0], [ "alpha", "β" ]), meta1)


@@ -1,29 +1,35 @@
import nilmdb
from nilmdb.printf import *
import nilmdb.cmdline
# -*- coding: utf-8 -*-
import nilmdb
from nilmdb.utils.printf import *
import nilmdb.cmdline
from nilmdb.utils import datetime_tz
import unittest
from nose.tools import *
from nose.tools import assert_raises
import itertools
import datetime_tz
import os
import re
import shutil
import sys
import threading
import urllib2
from urllib2 import urlopen, HTTPError
import Queue
import cStringIO
import StringIO
import shlex
from test_helpers import *
from testutil.helpers import *
testdb = "tests/cmdline-testdb"
def server_start(max_results = None):
def server_start(max_results = None, bulkdata_args = {}):
global test_server, test_db
# Start web app on a custom port
test_db = nilmdb.NilmDB(testdb, sync = False, max_results = max_results)
test_db = nilmdb.NilmDB(testdb, sync = False,
max_results = max_results,
bulkdata_args = bulkdata_args)
test_server = nilmdb.Server(test_db, host = "127.0.0.1",
port = 12380, stoppable = False,
fast_shutdown = True,
@@ -45,12 +51,18 @@ def setup_module():
def teardown_module():
server_stop()
# Add an encoding property to StringIO so Python will convert Unicode
# properly when writing or reading.
class UTF8StringIO(StringIO.StringIO):
encoding = 'utf-8'
class TestCmdline(object):
def run(self, arg_string, infile=None, outfile=None):
"""Run a cmdline client with the specified argument string,
passing the given input. Returns a tuple with the output and
exit code"""
# printf("TZ=UTC ./nilmtool.py %s\n", arg_string)
class stdio_wrapper:
def __init__(self, stdin, stdout, stderr):
self.io = (stdin, stdout, stderr)
@@ -61,15 +73,18 @@ class TestCmdline(object):
( sys.stdin, sys.stdout, sys.stderr ) = self.saved
# Empty input if none provided
if infile is None:
infile = cStringIO.StringIO("")
infile = UTF8StringIO("")
# Capture stderr
errfile = cStringIO.StringIO()
errfile = UTF8StringIO()
if outfile is None:
# If no output file, capture stdout with stderr
outfile = errfile
with stdio_wrapper(infile, outfile, errfile) as s:
try:
nilmdb.cmdline.Cmdline(shlex.split(arg_string)).run()
# shlex doesn't support Unicode very well. Encode the
# string as UTF-8 explicitly before splitting.
args = shlex.split(arg_string.encode('utf-8'))
nilmdb.cmdline.Cmdline(args).run()
sys.exit(0)
except SystemExit as e:
exitcode = e.code
@@ -83,14 +98,24 @@ class TestCmdline(object):
self.dump()
eq_(self.exitcode, 0)
def fail(self, arg_string, infile = None, exitcode = None):
def fail(self, arg_string, infile = None,
exitcode = None, require_error = True):
self.run(arg_string, infile)
if exitcode is not None and self.exitcode != exitcode:
# Wrong exit code
self.dump()
eq_(self.exitcode, exitcode)
if self.exitcode == 0:
# Success, when we wanted failure
self.dump()
ne_(self.exitcode, 0)
# Make sure the output contains the word "error" at the
# beginning of a line, unless the caller explicitly waived
# that check with require_error = False.
if require_error and not re.search("^error",
self.captured, re.MULTILINE):
raise AssertionError("command failed, but output doesn't "
"contain the string 'error'")
def contain(self, checkstring):
in_(checkstring, self.captured)
@@ -103,8 +128,8 @@ class TestCmdline(object):
with open(file) as f:
contents = f.read()
if contents != self.captured:
#print contents[1:1000] + "\n"
#print self.captured[1:1000] + "\n"
print contents[1:1000] + "\n"
print self.captured[1:1000] + "\n"
raise AssertionError("captured data doesn't match " + file)
def matchfilecount(self, file):
@@ -120,7 +145,7 @@ class TestCmdline(object):
def dump(self):
printf("-----dump start-----\n%s-----dump end-----\n", self.captured)
def test_cmdline_01_basic(self):
def test_01_basic(self):
# help
self.ok("--help")
@@ -166,14 +191,14 @@ class TestCmdline(object):
self.fail("extract --start 2000-01-01 --start 2001-01-02")
self.contain("duplicated argument")
def test_cmdline_02_info(self):
def test_02_info(self):
self.ok("info")
self.contain("Server URL: http://localhost:12380/")
self.contain("Server version: " + test_server.version)
self.contain("Server database path")
self.contain("Server database size")
def test_cmdline_03_createlist(self):
def test_03_createlist(self):
# Basic stream tests, like those in test_client.
# No streams
@@ -190,22 +215,44 @@ class TestCmdline(object):
# Bad layout type
self.fail("create /newton/prep NoSuchLayout")
self.contain("no such layout")
self.fail("create /newton/prep float32_0")
self.contain("no such layout")
self.fail("create /newton/prep float33_1")
self.contain("no such layout")
# Create a few streams
self.ok("create /newton/zzz/rawnotch RawNotchedData")
self.ok("create /newton/prep PrepData")
self.ok("create /newton/raw RawData")
self.ok("create /newton/zzz/rawnotch RawNotchedData")
# Verify we got those 3 streams
# Should not be able to create a stream with another stream as
# its parent
self.fail("create /newton/prep/blah PrepData")
self.contain("path is subdir of existing node")
# Should not be able to create a stream at a location that
# has other nodes as children
self.fail("create /newton/zzz PrepData")
self.contain("subdirs of this path already exist")
# Verify we got those 3 streams and they're returned in
# alphabetical order.
self.ok("list")
self.match("/newton/prep PrepData\n"
"/newton/raw RawData\n"
"/newton/zzz/rawnotch RawNotchedData\n")
# Match just one type or one path
# Match just one type or one path. Also check
# that --path is optional
self.ok("list --path /newton/raw")
self.match("/newton/raw RawData\n")
self.ok("list /newton/raw")
self.match("/newton/raw RawData\n")
self.fail("list -p /newton/raw /newton/raw")
self.contain("too many paths")
self.ok("list --layout RawData")
self.match("/newton/raw RawData\n")
@@ -217,10 +264,17 @@ class TestCmdline(object):
self.ok("list --path *zzz* --layout Raw*")
self.match("/newton/zzz/rawnotch RawNotchedData\n")
self.ok("list *zzz* --layout Raw*")
self.match("/newton/zzz/rawnotch RawNotchedData\n")
self.ok("list --path *zzz* --layout Prep*")
self.match("")
def test_cmdline_04_metadata(self):
# reversed range
self.fail("list /newton/prep --start 2020-01-01 --end 2000-01-01")
self.contain("start is after end")
def test_04_metadata(self):
# Set / get metadata
self.fail("metadata")
self.fail("metadata --get")
@@ -277,7 +331,7 @@ class TestCmdline(object):
self.fail("metadata /newton/nosuchpath")
self.contain("No stream at path /newton/nosuchpath")
def test_cmdline_05_parsetime(self):
def test_05_parsetime(self):
os.environ['TZ'] = "America/New_York"
cmd = nilmdb.cmdline.Cmdline(None)
test = datetime_tz.datetime_tz.now()
@@ -286,30 +340,24 @@ class TestCmdline(object):
eq_(cmd.parse_time("hi there 20120405 1400-0400 testing! 123"), test)
eq_(cmd.parse_time("20120405 1800 UTC"), test)
eq_(cmd.parse_time("20120405 1400-0400 UTC"), test)
with assert_raises(ValueError):
print cmd.parse_time("20120405 1400-9999")
with assert_raises(ValueError):
print cmd.parse_time("hello")
with assert_raises(ValueError):
print cmd.parse_time("-")
with assert_raises(ValueError):
print cmd.parse_time("")
with assert_raises(ValueError):
print cmd.parse_time("14:00")
for badtime in [ "20120405 1400-9999", "hello", "-", "", "4:00" ]:
with assert_raises(ValueError):
x = cmd.parse_time(badtime)
x = cmd.parse_time("now")
eq_(cmd.parse_time("snapshot-20120405-140000.raw.gz"), test)
eq_(cmd.parse_time("prep-20120405T1400"), test)
def test_cmdline_06_insert(self):
def test_06_insert(self):
self.ok("insert --help")
self.fail("insert /foo/bar baz qwer")
self.contain("Error getting stream info")
self.contain("error getting stream info")
self.fail("insert /newton/prep baz qwer")
self.match("Error opening input file baz\n")
self.match("error opening input file baz\n")
self.fail("insert /newton/prep")
self.contain("Error extracting time")
self.contain("error extracting time")
self.fail("insert --start 19801205 /newton/prep 1 2 3 4")
self.contain("--start can only be used with one input file")
@@ -322,6 +370,14 @@ class TestCmdline(object):
with open("tests/data/prep-20120323T1004-timestamped") as input:
self.ok("insert --none /newton/prep", input)
# insert pre-timestamped data, with bad times (non-monotonic)
os.environ['TZ'] = "UTC"
with open("tests/data/prep-20120323T1004-badtimes") as input:
self.fail("insert --none /newton/prep", input)
self.contain("error parsing input data")
self.contain("line 7:")
self.contain("timestamp is not monotonically increasing")
# insert data with normal timestamper from filename
os.environ['TZ'] = "UTC"
self.ok("insert --rate 120 /newton/prep "
@@ -350,7 +406,7 @@ class TestCmdline(object):
os.environ['TZ'] = "UTC"
self.fail("insert --rate 120 /newton/raw "
"tests/data/prep-20120323T1004")
self.contain("Error parsing input data")
self.contain("error parsing input data")
# empty data does nothing
self.ok("insert --rate 120 --start '03/23/2012 06:05:00' /newton/prep "
@@ -359,57 +415,75 @@ class TestCmdline(object):
# bad start time
self.fail("insert --rate 120 --start 'whatever' /newton/prep /dev/null")
def test_cmdline_07_detail(self):
def test_07_detail(self):
# Just count the number of lines, it's probably fine
self.ok("list --detail")
eq_(self.captured.count('\n'), 11)
lines_(self.captured, 8)
self.ok("list --detail --path *prep")
eq_(self.captured.count('\n'), 7)
lines_(self.captured, 4)
self.ok("list --detail --path *prep --start='23 Mar 2012 10:02'")
eq_(self.captured.count('\n'), 5)
lines_(self.captured, 3)
self.ok("list --detail --path *prep --start='23 Mar 2012 10:05'")
eq_(self.captured.count('\n'), 3)
lines_(self.captured, 2)
self.ok("list --detail --path *prep --start='23 Mar 2012 10:05:15'")
eq_(self.captured.count('\n'), 2)
lines_(self.captured, 2)
self.contain("10:05:15.000")
self.ok("list --detail --path *prep --start='23 Mar 2012 10:05:15.50'")
eq_(self.captured.count('\n'), 2)
lines_(self.captured, 2)
self.contain("10:05:15.500")
self.ok("list --detail --path *prep --start='23 Mar 2012 19:05:15.50'")
eq_(self.captured.count('\n'), 2)
lines_(self.captured, 2)
self.contain("no intervals")
self.ok("list --detail --path *prep --start='23 Mar 2012 10:05:15.50'"
+ " --end='23 Mar 2012 10:05:15.50'")
eq_(self.captured.count('\n'), 2)
lines_(self.captured, 2)
self.contain("10:05:15.500")
self.ok("list --detail")
eq_(self.captured.count('\n'), 11)
lines_(self.captured, 8)
def test_cmdline_08_extract(self):
# Verify the "raw timestamp" output
self.ok("list --detail --path *prep --timestamp-raw "
"--start='23 Mar 2012 10:05:15.50'")
lines_(self.captured, 2)
self.contain("[ 1332497115.5 -> 1332497159.991668 ]")
self.ok("list --detail --path *prep -T "
"--start='23 Mar 2012 10:05:15.612'")
lines_(self.captured, 2)
self.contain("[ 1332497115.612 -> 1332497159.991668 ]")
def test_08_extract(self):
# nonexistent stream
self.fail("extract /no/such/foo --start 2000-01-01 --end 2020-01-01")
self.contain("Error getting stream info")
self.contain("error getting stream info")
# empty ranges return an error
# reversed range
self.fail("extract -a /newton/prep --start 2020-01-01 --end 2000-01-01")
self.contain("start is after end")
# empty ranges return error 2
self.fail("extract -a /newton/prep " +
"--start '23 Mar 2012 10:00:30' " +
"--end '23 Mar 2012 10:00:30'", exitcode = 2)
"--end '23 Mar 2012 10:00:30'",
exitcode = 2, require_error = False)
self.contain("no data")
self.fail("extract -a /newton/prep " +
"--start '23 Mar 2012 10:00:30.000001' " +
"--end '23 Mar 2012 10:00:30.000001'", exitcode = 2)
"--end '23 Mar 2012 10:00:30.000001'",
exitcode = 2, require_error = False)
self.contain("no data")
self.fail("extract -a /newton/prep " +
"--start '23 Mar 2022 10:00:30' " +
"--end '23 Mar 2022 10:00:30'", exitcode = 2)
"--end '23 Mar 2022 10:00:30'",
exitcode = 2, require_error = False)
self.contain("no data")
# but are ok if we're just counting results
@@ -441,18 +515,330 @@ class TestCmdline(object):
test(4, "10:00:30.008333", "10:00:30.025")
test(5, "10:00:30", "10:00:31", extra="--annotate --bare")
test(6, "10:00:30", "10:00:31", extra="-b")
test(7, "10:00:30", "10:00:30.999", extra="-a -T")
test(7, "10:00:30", "10:00:30.999", extra="-a --timestamp-raw")
# all data put in by tests
self.ok("extract -a /newton/prep --start 2000-01-01 --end 2020-01-01")
eq_(self.captured.count('\n'), 43204)
lines_(self.captured, 43204)
self.ok("extract -c /newton/prep --start 2000-01-01 --end 2020-01-01")
self.match("43200\n")
def test_cmdline_09_truncated(self):
def test_09_truncated(self):
# Test truncated responses by overriding the nilmdb max_results
server_stop()
server_start(max_results = 2)
self.ok("list --detail")
eq_(self.captured.count('\n'), 11)
lines_(self.captured, 8)
server_stop()
server_start()
def test_10_remove(self):
# Removing data
# Try nonexistent stream
self.fail("remove /no/such/foo --start 2000-01-01 --end 2020-01-01")
self.contain("No stream at path")
self.fail("remove /newton/prep --start 2020-01-01 --end 2000-01-01")
self.contain("start is after end")
# empty ranges return success, backwards ranges return error
self.ok("remove /newton/prep " +
"--start '23 Mar 2012 10:00:30' " +
"--end '23 Mar 2012 10:00:30'")
self.match("")
self.ok("remove /newton/prep " +
"--start '23 Mar 2012 10:00:30.000001' " +
"--end '23 Mar 2012 10:00:30.000001'")
self.match("")
self.ok("remove /newton/prep " +
"--start '23 Mar 2022 10:00:30' " +
"--end '23 Mar 2022 10:00:30'")
self.match("")
# Same empty ranges again, but with -c / --count so the number
# of removed rows is printed
self.ok("remove -c /newton/prep " +
"--start '23 Mar 2012 10:00:30' " +
"--end '23 Mar 2012 10:00:30'")
self.match("0\n")
self.ok("remove --count /newton/prep " +
"--start '23 Mar 2012 10:00:30' " +
"--end '23 Mar 2012 10:00:30'")
self.match("0\n")
# Make sure we have the data we expect
self.ok("list --detail /newton/prep")
self.match("/newton/prep PrepData\n" +
" [ Fri, 23 Mar 2012 10:00:00.000000 +0000"
" -> Fri, 23 Mar 2012 10:01:59.991668 +0000 ]\n"
" [ Fri, 23 Mar 2012 10:02:00.000000 +0000"
" -> Fri, 23 Mar 2012 10:03:59.991668 +0000 ]\n"
" [ Fri, 23 Mar 2012 10:04:00.000000 +0000"
" -> Fri, 23 Mar 2012 10:05:59.991668 +0000 ]\n")
# Remove various chunks of prep data and make sure
# they're gone.
self.ok("remove -c /newton/prep " +
"--start '23 Mar 2012 10:00:30' " +
"--end '23 Mar 2012 10:00:40'")
self.match("1200\n")
self.ok("remove -c /newton/prep " +
"--start '23 Mar 2012 10:00:10' " +
"--end '23 Mar 2012 10:00:20'")
self.match("1200\n")
self.ok("remove -c /newton/prep " +
"--start '23 Mar 2012 10:00:05' " +
"--end '23 Mar 2012 10:00:25'")
self.match("1200\n")
self.ok("remove -c /newton/prep " +
"--start '23 Mar 2012 10:03:50' " +
"--end '23 Mar 2012 10:06:50'")
self.match("15600\n")
self.ok("extract -c /newton/prep --start 2000-01-01 --end 2020-01-01")
self.match("24000\n")
# See the missing chunks in list output
self.ok("list --detail /newton/prep")
self.match("/newton/prep PrepData\n" +
" [ Fri, 23 Mar 2012 10:00:00.000000 +0000"
" -> Fri, 23 Mar 2012 10:00:05.000000 +0000 ]\n"
" [ Fri, 23 Mar 2012 10:00:25.000000 +0000"
" -> Fri, 23 Mar 2012 10:00:30.000000 +0000 ]\n"
" [ Fri, 23 Mar 2012 10:00:40.000000 +0000"
" -> Fri, 23 Mar 2012 10:01:59.991668 +0000 ]\n"
" [ Fri, 23 Mar 2012 10:02:00.000000 +0000"
" -> Fri, 23 Mar 2012 10:03:50.000000 +0000 ]\n")
# Remove all data, verify it's missing
self.ok("remove /newton/prep --start 2000-01-01 --end 2020-01-01")
self.match("") # no count requested this time
self.ok("list --detail /newton/prep")
self.match("/newton/prep PrepData\n" +
" (no intervals)\n")
# Reinsert some data, to verify that no overlaps with deleted
# data are reported
os.environ['TZ'] = "UTC"
self.ok("insert --rate 120 /newton/prep "
"tests/data/prep-20120323T1000 "
"tests/data/prep-20120323T1002")
def test_11_destroy(self):
# Delete records
self.ok("destroy --help")
self.fail("destroy")
self.contain("too few arguments")
self.fail("destroy /no/such/stream")
self.contain("No stream at path")
self.fail("destroy asdfasdf")
self.contain("No stream at path")
# From previous tests, we have:
self.ok("list")
self.match("/newton/prep PrepData\n"
"/newton/raw RawData\n"
"/newton/zzz/rawnotch RawNotchedData\n")
# Notice how they're not empty
self.ok("list --detail")
lines_(self.captured, 7)
# Delete some
self.ok("destroy /newton/prep")
self.ok("list")
self.match("/newton/raw RawData\n"
"/newton/zzz/rawnotch RawNotchedData\n")
self.ok("destroy /newton/zzz/rawnotch")
self.ok("list")
self.match("/newton/raw RawData\n")
self.ok("destroy /newton/raw")
self.ok("create /newton/raw RawData")
self.ok("destroy /newton/raw")
self.ok("list")
self.match("")
# Re-create a previously deleted location, and some new ones
rebuild = [ "/newton/prep", "/newton/zzz",
"/newton/raw", "/newton/asdf/qwer" ]
for path in rebuild:
# Create the path
self.ok("create " + path + " PrepData")
self.ok("list")
self.contain(path)
# Make sure it was created empty
self.ok("list --detail --path " + path)
self.contain("(no intervals)")
def test_12_unicode(self):
# Unicode paths.
self.ok("destroy /newton/asdf/qwer")
self.ok("destroy /newton/prep")
self.ok("destroy /newton/raw")
self.ok("destroy /newton/zzz")
self.ok(u"create /düsseldorf/raw uint16_6")
self.ok("list --detail")
self.contain(u"/düsseldorf/raw uint16_6")
self.contain("(no intervals)")
# Unicode metadata
self.ok(u"metadata /düsseldorf/raw --set α=beta 'γ'")
self.ok(u"metadata /düsseldorf/raw --update 'α=β ε τ α'")
self.ok(u"metadata /düsseldorf/raw")
self.match(u"α=β ε τ α\nγ\n")
self.ok(u"destroy /düsseldorf/raw")
def test_13_files(self):
# Test BulkData's ability to split into multiple files,
# by forcing the file size to be really small.
server_stop()
server_start(bulkdata_args = { "file_size" : 920, # 23 rows per file
"files_per_dir" : 3 })
# Fill data
self.ok("create /newton/prep float32_8")
os.environ['TZ'] = "UTC"
with open("tests/data/prep-20120323T1004-timestamped") as input:
self.ok("insert --none /newton/prep", input)
# Extract it
self.ok("extract /newton/prep --start '2000-01-01' " +
"--end '2012-03-23 10:04:01'")
lines_(self.captured, 120)
self.ok("extract /newton/prep --start '2000-01-01' " +
"--end '2022-03-23 10:04:01'")
lines_(self.captured, 14400)
# Make sure there were lots of files generated in the database
# dir
nfiles = 0
for (dirpath, dirnames, filenames) in os.walk(testdb):
nfiles += len(filenames)
assert(nfiles > 500)
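# (Rough expectation, not exact: 14400 rows at 23 rows per file is
# about 626 data files, plus directory entries and interval
# bookkeeping, so requiring "> 500" leaves some slack.)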
# Make sure we can restart the server with a different file
# size and have it still work
server_stop()
server_start()
self.ok("extract /newton/prep --start '2000-01-01' " +
"--end '2022-03-23 10:04:01'")
lines_(self.captured, 14400)
# Now recreate the data one more time and make sure there are
# fewer files.
self.ok("destroy /newton/prep")
self.fail("destroy /newton/prep") # already destroyed
self.ok("create /newton/prep float32_8")
os.environ['TZ'] = "UTC"
with open("tests/data/prep-20120323T1004-timestamped") as input:
self.ok("insert --none /newton/prep", input)
nfiles = 0
for (dirpath, dirnames, filenames) in os.walk(testdb):
nfiles += len(filenames)
lt_(nfiles, 50)
self.ok("destroy /newton/prep") # destroy again
def test_14_remove_files(self):
# Test BulkData's ability to remove when data is split into
# multiple files. Should be a fairly comprehensive test of
# remove functionality.
server_stop()
server_start(bulkdata_args = { "file_size" : 920, # 23 rows per file
"files_per_dir" : 3 })
# Insert data. Just for fun, insert out of order
self.ok("create /newton/prep PrepData")
os.environ['TZ'] = "UTC"
self.ok("insert --rate 120 /newton/prep "
"tests/data/prep-20120323T1002 "
"tests/data/prep-20120323T1000")
# Should take up about 2.8 MB here (including directory entries)
du_before = nilmdb.utils.diskusage.du_bytes(testdb)
# Make sure we have the data we expect
self.ok("list --detail")
self.match("/newton/prep PrepData\n" +
" [ Fri, 23 Mar 2012 10:00:00.000000 +0000"
" -> Fri, 23 Mar 2012 10:01:59.991668 +0000 ]\n"
" [ Fri, 23 Mar 2012 10:02:00.000000 +0000"
" -> Fri, 23 Mar 2012 10:03:59.991668 +0000 ]\n")
# Remove various chunks of prep data and make sure
# they're gone.
self.ok("extract -c /newton/prep --start 2000-01-01 --end 2020-01-01")
self.match("28800\n")
self.ok("remove -c /newton/prep " +
"--start '23 Mar 2012 10:00:30' " +
"--end '23 Mar 2012 10:03:30'")
self.match("21600\n")
self.ok("remove -c /newton/prep " +
"--start '23 Mar 2012 10:00:10' " +
"--end '23 Mar 2012 10:00:20'")
self.match("1200\n")
self.ok("remove -c /newton/prep " +
"--start '23 Mar 2012 10:00:05' " +
"--end '23 Mar 2012 10:00:25'")
self.match("1200\n")
self.ok("remove -c /newton/prep " +
"--start '23 Mar 2012 10:03:50' " +
"--end '23 Mar 2012 10:06:50'")
self.match("1200\n")
self.ok("extract -c /newton/prep --start 2000-01-01 --end 2020-01-01")
self.match("3600\n")
# See the missing chunks in list output
self.ok("list --detail")
self.match("/newton/prep PrepData\n" +
" [ Fri, 23 Mar 2012 10:00:00.000000 +0000"
" -> Fri, 23 Mar 2012 10:00:05.000000 +0000 ]\n"
" [ Fri, 23 Mar 2012 10:00:25.000000 +0000"
" -> Fri, 23 Mar 2012 10:00:30.000000 +0000 ]\n"
" [ Fri, 23 Mar 2012 10:03:30.000000 +0000"
" -> Fri, 23 Mar 2012 10:03:50.000000 +0000 ]\n")
# We have 1/8 of the data that we had before, so the file size
# should have dropped below 1/4 of what it used to be
du_after = nilmdb.utils.diskusage.du_bytes(testdb)
lt_(du_after, (du_before / 4))
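# (Bulk files can only be deleted once every row in them is gone, and
# directory entries remain, so the check allows up to 1/4 rather than
# demanding a full 1/8.)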
# Remove anything that came from the 10:02 data file
self.ok("remove /newton/prep " +
"--start '23 Mar 2012 10:02:00' --end '2020-01-01'")
# Re-insert 19 lines from that file, then remove them again.
# With the specific file_size above, this will cause the last
# file in the bulk data storage to be exactly file_size large,
# so removing the data should also remove that last file.
self.ok("insert --rate 120 /newton/prep " +
"tests/data/prep-20120323T1002-first19lines")
self.ok("remove /newton/prep " +
"--start '23 Mar 2012 10:02:00' --end '2020-01-01'")
# Shut down and restart server, to force nrows to get refreshed.
server_stop()
server_start()
# Re-add the full 10:02 data file. This tests adding new data once
# we removed data near the end.
self.ok("insert --rate 120 /newton/prep tests/data/prep-20120323T1002")
# See if we can extract it all
self.ok("extract /newton/prep --start 2000-01-01 --end 2020-01-01")
lines_(self.captured, 15600)


@@ -1,25 +1,34 @@
# -*- coding: utf-8 -*-
import nilmdb
from nilmdb.printf import *
import datetime_tz
from nilmdb.utils.printf import *
from nilmdb.utils import datetime_tz
from nose.tools import *
from nose.tools import assert_raises
import itertools
from nilmdb.interval import Interval, DBInterval, IntervalSet, IntervalError
from nilmdb.server.interval import (Interval, DBInterval,
IntervalSet, IntervalError)
from test_helpers import *
from testutil.helpers import *
import unittest
# set to False to skip live renders
do_live_renders = False
def render(iset, description = "", live = True):
import testutil.renderdot as renderdot
r = renderdot.RBTreeRenderer(iset.tree)
return r.render(description, live and do_live_renders)
def makeset(string):
"""Build an IntervalSet from a string, for testing purposes
Each character is 1 second
[ = interval start
| = interval end + adjacent start
| = interval end + next start
] = interval end
. = zero-width interval (identical start and end)
anything else is ignored
"""
iset = IntervalSet()
@@ -30,9 +39,11 @@ def makeset(string):
elif (c == "|"):
iset += Interval(start, day)
start = day
elif (c == "]"):
elif (c == ")"):
iset += Interval(start, day)
del start
elif (c == "."):
iset += Interval(day, day)
return iset
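# For example (hypothetical, one second per character as described in
# the docstring): makeset("  [--|-) .") yields the intervals [2,5)
# and [5,7), plus a zero-width interval at 9.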
class TestInterval:
@@ -68,24 +79,24 @@ class TestInterval:
assert(Interval(d1, d3) < Interval(d2, d3))
assert(Interval(d2, d2) > Interval(d1, d3))
assert(Interval(d3, d3) == Interval(d3, d3))
with assert_raises(AttributeError):
x = (i == 123)
#with assert_raises(TypeError): # was AttributeError, that's wrong
# x = (i == 123)
# subset
assert(Interval(d1, d3).subset(d1, d2) == Interval(d1, d2))
eq_(Interval(d1, d3).subset(d1, d2), Interval(d1, d2))
with assert_raises(IntervalError):
x = Interval(d2, d3).subset(d1, d2)
# big integers and floats
x = Interval(5000111222, 6000111222)
eq_(str(x), "[5000111222.0 -> 6000111222.0]")
eq_(str(x), "[5000111222.0 -> 6000111222.0)")
x = Interval(123.45, 234.56)
eq_(str(x), "[123.45 -> 234.56]")
eq_(str(x), "[123.45 -> 234.56)")
# misc
i = Interval(d1, d2)
eq_(repr(i), repr(eval(repr(i))))
eq_(str(i), "[1332561600.0 -> 1332648000.0]")
eq_(str(i), "[1332561600.0 -> 1332648000.0)")
def test_interval_intersect(self):
# Test Interval intersections
@@ -106,7 +117,7 @@ class TestInterval:
except IntervalError:
assert(i not in should_intersect[True] and
i not in should_intersect[False])
with assert_raises(AttributeError):
with assert_raises(TypeError):
x = i1.intersects(1234)
def test_intervalset_construct(self):
@@ -127,6 +138,15 @@ class TestInterval:
x = iseta != 3
ne_(IntervalSet(a), IntervalSet(b))
# Note that assignment makes a new reference (not a copy)
isetd = IntervalSet(isetb)
isete = isetd
eq_(isetd, isetb)
eq_(isetd, isete)
isetd -= a
ne_(isetd, isetb)
eq_(isetd, isete)
# test iterator
for interval in iseta:
pass
@@ -148,11 +168,18 @@ class TestInterval:
iset = IntervalSet(a)
iset += IntervalSet(b)
eq_(iset, IntervalSet([a, b]))
iset = IntervalSet(a)
iset += b
eq_(iset, IntervalSet([a, b]))
iset = IntervalSet(a)
iset.iadd_nocheck(b)
eq_(iset, IntervalSet([a, b]))
iset = IntervalSet(a) + IntervalSet(b)
eq_(iset, IntervalSet([a, b]))
iset = IntervalSet(b) + a
eq_(iset, IntervalSet([a, b]))
@@ -165,54 +192,81 @@ class TestInterval:
# misc
eq_(repr(iset), repr(eval(repr(iset))))
eq_(str(iset), "[[100.0 -> 200.0], [200.0 -> 300.0]]")
eq_(str(iset), "[[100.0 -> 200.0), [200.0 -> 300.0)]")
def test_intervalset_geniset(self):
# Test basic iset construction
assert(makeset(" [----] ") ==
makeset(" [-|--] "))
eq_(makeset(" [----) "),
makeset(" [-|--) "))
assert(makeset("[] [--] ") +
makeset(" [] [--]") ==
makeset("[|] [-----]"))
eq_(makeset("[) [--) ") +
makeset(" [) [--)"),
makeset("[|) [-----)"))
assert(makeset(" [-------]") ==
makeset(" [-|-----|"))
eq_(makeset(" [-------)"),
makeset(" [-|-----|"))
def test_intervalset_intersect(self):
# Test intersection (&)
with assert_raises(AttributeError):
x = makeset("[--]") & 1234
with assert_raises(TypeError): # was AttributeError
x = makeset("[--)") & 1234
assert(makeset("[---------]") &
makeset(" [---] ") ==
makeset(" [---] "))
# Intersection with interval
eq_(makeset("[---|---)[)") &
list(makeset(" [------) "))[0],
makeset(" [-----) "))
assert(makeset(" [---] ") &
makeset("[---------]") ==
makeset(" [---] "))
# Intersection with sets
eq_(makeset("[---------)") &
makeset(" [---) "),
makeset(" [---) "))
assert(makeset(" [-----]") &
makeset(" [-----] ") ==
makeset(" [--] "))
eq_(makeset(" [---) ") &
makeset("[---------)"),
makeset(" [---) "))
assert(makeset(" [---]") &
makeset(" [--] ") ==
makeset(" "))
eq_(makeset(" [-----)") &
makeset(" [-----) "),
makeset(" [--) "))
assert(makeset(" [-|---]") &
makeset(" [-----|-] ") ==
makeset(" [----] "))
eq_(makeset(" [--) [--)") &
makeset(" [------) "),
makeset(" [-) [-) "))
assert(makeset(" [-|-] ") &
makeset(" [-|--|--] ") ==
makeset(" [---] "))
eq_(makeset(" [---)") &
makeset(" [--) "),
makeset(" "))
assert(makeset(" [----][--]") &
makeset("[-] [--] []") ==
makeset(" [] [-] []"))
eq_(makeset(" [-|---)") &
makeset(" [-----|-) "),
makeset(" [----) "))
eq_(makeset(" [-|-) ") &
makeset(" [-|--|--) "),
makeset(" [---) "))
# Border cases -- will give different results if intervals are
# half open or fully closed. Right now, they are half open,
# although that's a little messy since the database intervals
# often contain a data point at the endpoint.
half_open = True
if half_open:
eq_(makeset(" [---)") &
makeset(" [----) "),
makeset(" "))
eq_(makeset(" [----)[--)") &
makeset("[-) [--) [)"),
makeset(" [) [-) [)"))
else:
eq_(makeset(" [---)") &
makeset(" [----) "),
makeset(" . "))
eq_(makeset(" [----)[--)") &
makeset("[-) [--) [)"),
makeset(" [) [-). [)"))
class TestIntervalDB:
def test_dbinterval(self):
# Test DBInterval class
i = DBInterval(100, 200, 100, 200, 10000, 20000)
@@ -255,66 +309,65 @@ class TestInterval:
for i in IntervalSet(iseta.intersection(Interval(125,250))):
assert(isinstance(i, DBInterval))
class TestIntervalShape:
def test_interval_shape(self):
class TestIntervalTree:
def test_interval_tree(self):
import random
random.seed(1234)
# make a set of 500 intervals
# make a set of 100 intervals
iset = IntervalSet()
j = 500
j = 100
for i in random.sample(xrange(j),j):
interval = Interval(i, i+1)
iset += interval
render(iset, "Random Insertion")
# Plot it
import renderdot
r = renderdot.Renderer(lambda node: node.cleft,
lambda node: node.cright,
lambda node: False,
lambda node: node.start,
lambda node: node.end,
iset.tree.emptynode())
r.render_dot_live(iset.tree.rootnode(), "Random")
# remove about half of them
for i in random.sample(xrange(j),j):
if random.randint(0,1):
iset -= Interval(i, i+1)
# make a set of 500 intervals, inserted in order
# try removing an interval that doesn't exist
with assert_raises(IntervalError):
iset -= Interval(1234,5678)
render(iset, "Random Insertion, deletion")
# make a set of 100 intervals, inserted in order
iset = IntervalSet()
j = 500
j = 100
for i in xrange(j):
interval = Interval(i, i+1)
iset += interval
# Plot it
import renderdot
r = renderdot.Renderer(lambda node: node.cleft,
lambda node: node.cright,
lambda node: False,
lambda node: node.start,
lambda node: node.end,
iset.tree.emptynode())
r.render_dot_live(iset.tree.rootnode(), "In-order")
assert(False)
render(iset, "In-order insertion")
class TestIntervalSpeed:
#@unittest.skip("this is slow")
@unittest.skip("this is slow")
def test_interval_speed(self):
import yappi
import time
import aplotter
import testutil.aplotter as aplotter
import random
import math
print
yappi.start()
speeds = {}
for j in [ 2**x for x in range(5,22) ]:
limit = 10 # was 20
for j in [ 2**x for x in range(5,limit) ]:
start = time.time()
iset = IntervalSet()
for i in xrange(j):
for i in random.sample(xrange(j),j):
interval = Interval(i, i+1)
iset += interval
speed = (time.time() - start) * 1000000.0
printf("%d: %g μs (%g μs each)\n", j, speed, speed/j)
printf("%d: %g μs (%g μs each, O(n log n) ratio %g)\n",
j,
speed,
speed/j,
speed / (j*math.log(j))) # should be constant
speeds[j] = speed
aplotter.plot(speeds.keys(), speeds.values(), plot_slope=True)
yappi.stop()
yappi.print_stats(sort_type=yappi.SORTTYPE_TTOT, limit=10)


@@ -1,5 +1,5 @@
import nilmdb
from nilmdb.printf import *
from nilmdb.utils.printf import *
import nose
from nose.tools import *
@@ -7,14 +7,13 @@ from nose.tools import assert_raises
import threading
import time
from test_helpers import *
import nilmdb.iteratorizer
from testutil.helpers import *
def func_with_callback(a, b, callback):
callback(a)
callback(b)
callback(a+b)
return "return value"
class TestIteratorizer(object):
def test(self):
@@ -27,20 +26,21 @@ class TestIteratorizer(object):
eq_(self.result, "123")
# Now make it an iterator
it = nilmdb.iteratorizer.Iteratorizer(lambda x:
func_with_callback(1, 2, x))
result = ""
for i in it:
result += str(i)
eq_(result, "123")
# Make sure things work when an exception occurs
it = nilmdb.iteratorizer.Iteratorizer(lambda x:
func_with_callback(1, "a", x))
result = ""
with assert_raises(TypeError) as e:
f = lambda x: func_with_callback(1, 2, x)
with nilmdb.utils.Iteratorizer(f) as it:
for i in it:
result += str(i)
eq_(result, "123")
eq_(it.retval, "return value")
# Make sure things work when an exception occurs
result = ""
with nilmdb.utils.Iteratorizer(
lambda x: func_with_callback(1, "a", x)) as it:
with assert_raises(TypeError) as e:
for i in it:
result += str(i)
eq_(result, "1a")
# Now try to trigger the case where we stop iterating
@@ -48,7 +48,14 @@ class TestIteratorizer(object):
# itself. This doesn't have a particular result in the test,
# but gains coverage.
def foo():
it = nilmdb.iteratorizer.Iteratorizer(lambda x:
func_with_callback(1, 2, x))
it.next()
with nilmdb.utils.Iteratorizer(f) as it:
it.next()
foo()
eq_(it.retval, None)
# Do the same thing when the curl hack is applied
def foo():
with nilmdb.utils.Iteratorizer(f, curl_hack = True) as it:
it.next()
foo()
eq_(it.retval, None)
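
For reference, the technique these tests exercise: a callback-style function
is adapted into an iterator by running it in a worker thread and shipping
each callback argument through a queue. A simplified sketch under that
assumption (the real nilmdb.utils.Iteratorizer is a context manager and
additionally handles early termination, exception propagation, and the curl
hack; none of that is shown here):

import threading
import Queue

class IteratorizerSketch(object):
    """Hypothetical minimal version: run func(callback) in a thread
    and yield everything the callback receives."""
    _SENTINEL = object()
    def __init__(self, func):
        self.retval = None
        self._queue = Queue.Queue()
        def worker():
            try:
                self.retval = func(self._queue.put)
            finally:
                # always wake the consumer, even if func raised
                self._queue.put(self._SENTINEL)
        self._thread = threading.Thread(target = worker)
        self._thread.daemon = True
        self._thread.start()
    def __iter__(self):
        return self
    def next(self):
        item = self._queue.get()
        if item is self._SENTINEL:
            raise StopIteration
        return item

With func_with_callback above, iterating
IteratorizerSketch(lambda cb: func_with_callback(1, 2, cb)) yields 1, 2, 3
and leaves retval set to "return value".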


@@ -2,7 +2,7 @@
import nilmdb
from nilmdb.printf import *
from nilmdb.utils.printf import *
from nose.tools import *
from nose.tools import assert_raises
@@ -20,17 +20,21 @@ import cStringIO
import random
import unittest
from test_helpers import *
from testutil.helpers import *
from nilmdb.layout import *
from nilmdb.server.layout import *
class TestLayouts(object):
# Some nilmdb.layout tests. Not complete, just fills in missing
# coverage.
def test_layouts(self):
x = nilmdb.layout.get_named("PrepData").description()
y = nilmdb.layout.get_named("float32_8").description()
eq_(repr(x), repr(y))
x = nilmdb.server.layout.get_named("PrepData")
y = nilmdb.server.layout.get_named("float32_8")
eq_(x.count, y.count)
eq_(x.datatype, y.datatype)
y = nilmdb.server.layout.get_named("float32_7")
ne_(x.count, y.count)
eq_(x.datatype, y.datatype)
def test_parsing(self):
self.real_t_parsing("PrepData", "RawData", "RawNotchedData")
@@ -85,11 +89,23 @@ class TestLayouts(object):
# non-monotonic
parser = Parser(name_raw)
data = ( "1234567890.100000 1 2 3 4 5 6\n" +
"1234567890.000000 1 2 3 4 5 6\n" )
"1234567890.099999 1 2 3 4 5 6\n" )
with assert_raises(ParserError) as e:
parser.parse(data)
in_("not monotonically increasing", str(e.exception))
parser = Parser(name_raw)
data = ( "1234567890.100000 1 2 3 4 5 6\n" +
"1234567890.100000 1 2 3 4 5 6\n" )
with assert_raises(ParserError) as e:
parser.parse(data)
in_("not monotonically increasing", str(e.exception))
parser = Parser(name_raw)
data = ( "1234567890.100000 1 2 3 4 5 6\n" +
"1234567890.100001 1 2 3 4 5 6\n" )
parser.parse(data)
# RawData with values out of bounds
parser = Parser(name_raw)
data = ( "1234567890.000000 1 2 3 4 500000 6\n" +

tests/test_lrucache.py (new file, 83 lines)

@@ -0,0 +1,83 @@
import nilmdb
from nilmdb.utils.printf import *
import nose
from nose.tools import *
from nose.tools import assert_raises
import threading
import time
import inspect
from testutil.helpers import *
@nilmdb.utils.lru_cache(size = 3)
def foo1(n):
return n
@nilmdb.utils.lru_cache(size = 5)
def foo2(n):
return n
def foo3d(n):
foo3d.destructed.append(n)
foo3d.destructed = []
@nilmdb.utils.lru_cache(size = 3, onremove = foo3d)
def foo3(n):
return n
class Foo:
def __init__(self):
self.calls = 0
@nilmdb.utils.lru_cache(size = 3, keys = slice(1, 2))
def foo(self, n, **kwargs):
self.calls += 1
class TestLRUCache(object):
def test(self):
[ foo1(n) for n in [ 1, 2, 3, 1, 2, 3, 1, 2, 3 ] ]
eq_(foo1.cache_info(), (6, 3))
[ foo1(n) for n in [ 1, 2, 3, 1, 2, 3, 1, 2, 3 ] ]
eq_(foo1.cache_info(), (15, 3))
[ foo1(n) for n in [ 4, 2, 1, 1, 4 ] ]
eq_(foo1.cache_info(), (18, 5))
[ foo2(n) for n in [ 1, 2, 3, 1, 2, 3, 1, 2, 3 ] ]
eq_(foo2.cache_info(), (6, 3))
[ foo2(n) for n in [ 1, 2, 3, 1, 2, 3, 1, 2, 3 ] ]
eq_(foo2.cache_info(), (15, 3))
[ foo2(n) for n in [ 4, 2, 1, 1, 4 ] ]
eq_(foo2.cache_info(), (19, 4))
[ foo3(n) for n in [ 1, 2, 3, 1, 2, 3, 1, 2, 3 ] ]
eq_(foo3.cache_info(), (6, 3))
[ foo3(n) for n in [ 1, 2, 3, 1, 2, 3, 1, 2, 3 ] ]
eq_(foo3.cache_info(), (15, 3))
[ foo3(n) for n in [ 4, 2, 1, 1, 4 ] ]
eq_(foo3.cache_info(), (18, 5))
eq_(foo3d.destructed, [1, 3])
with assert_raises(KeyError):
foo3.cache_remove(1,2,3)
foo3.cache_remove(1)
eq_(foo3d.destructed, [1, 3, 1])
foo3.cache_remove_all()
eq_(foo3d.destructed, [1, 3, 1, 2, 4 ])
foo = Foo()
foo.foo(5)
foo.foo(6)
foo.foo(7)
foo.foo(5)
eq_(foo.calls, 3)
# Can't handle keyword arguments right now
with assert_raises(NotImplementedError):
foo.foo(3, asdf = 7)
# Verify that argspecs were maintained
eq_(inspect.getargspec(foo1),
inspect.ArgSpec(args=['n'],
varargs=None, keywords=None, defaults=None))
eq_(inspect.getargspec(foo.foo),
inspect.ArgSpec(args=['self', 'n'],
varargs=None, keywords="kwargs", defaults=None))
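
The tests above pin down the decorator's interface: a bounded cache,
(hits, misses) from cache_info(), and an onremove hook fired at eviction.
A minimal sketch of that core under the same assumptions (positional
arguments only; cache_remove, the "keys" slice, and argspec preservation
are left out):

import collections
import functools

def lru_cache_sketch(size = 3, onremove = None):
    """Hypothetical size-bounded LRU memoization decorator."""
    def decorator(func):
        cache = collections.OrderedDict()
        stats = [0, 0]          # [hits, misses]
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if kwargs:
                raise NotImplementedError("keyword arguments unsupported")
            if args in cache:
                stats[0] += 1
                value = cache.pop(args)
                cache[args] = value          # re-insert as most recent
                return value
            stats[1] += 1
            value = cache[args] = func(*args)
            if len(cache) > size:
                (k, v) = cache.popitem(last = False)   # evict oldest
                if onremove:
                    onremove(v)
            return value
        wrapper.cache_info = lambda: (stats[0], stats[1])
        return wrapper
    return decorator

Tracing foo1's first sequence through this sketch gives (6, 3): three
misses fill the cache, and the remaining six calls hit.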

tests/test_mustclose.py (new file, 110 lines)

@@ -0,0 +1,110 @@
import nilmdb
from nilmdb.utils.printf import *
import nose
from nose.tools import *
from nose.tools import assert_raises
from testutil.helpers import *
import sys
import cStringIO
import gc
import inspect
err = cStringIO.StringIO()
@nilmdb.utils.must_close(errorfile = err)
class Foo:
def __init__(self, arg):
fprintf(err, "Init %s\n", arg)
def __del__(self):
fprintf(err, "Deleting\n")
def close(self):
fprintf(err, "Closing\n")
@nilmdb.utils.must_close(errorfile = err, wrap_verify = True)
class Bar:
def __init__(self):
fprintf(err, "Init\n")
def __del__(self):
fprintf(err, "Deleting\n")
def close(self):
fprintf(err, "Closing\n")
def blah(self, arg):
fprintf(err, "Blah %s\n", arg)
@nilmdb.utils.must_close(errorfile = err)
class Baz:
pass
class TestMustClose(object):
def test(self):
# Note: this test might fail if the Python interpreter doesn't
# garbage collect the object (and call its __del__ function)
# right after a "del x".
# Trigger error
err.truncate()
x = Foo("hi")
# Verify that the arg spec was maintained
eq_(inspect.getargspec(x.__init__),
inspect.ArgSpec(args = ['self', 'arg'],
varargs = None, keywords = None, defaults = None))
del x
gc.collect()
eq_(err.getvalue(),
"Init hi\n"
"error: Foo.close() wasn't called!\n"
"Deleting\n")
# No error
err.truncate(0)
y = Foo("bye")
y.close()
del y
gc.collect()
eq_(err.getvalue(),
"Init bye\n"
"Closing\n"
"Deleting\n")
# Verify function calls when wrap_verify is True
err.truncate(0)
z = Bar()
eq_(inspect.getargspec(z.blah),
inspect.ArgSpec(args = ['self', 'arg'],
varargs = None, keywords = None, defaults = None))
z.blah("boo")
z.close()
with assert_raises(AssertionError) as e:
z.blah("hello")
in_("called <function blah at 0x", str(e.exception))
in_("> after close", str(e.exception))
# Since the most recent assertion references 'z',
# we need to raise another assertion here so that
# 'z' will get properly deleted.
with assert_raises(AssertionError):
raise AssertionError()
del z
gc.collect()
eq_(err.getvalue(),
"Init\n"
"Blah boo\n"
"Closing\n"
"Deleting\n")
# Class with missing methods
err.truncate(0)
w = Baz()
w.close()
del w
eq_(err.getvalue(), "")


@@ -14,6 +14,7 @@ import urllib2
from urllib2 import urlopen, HTTPError
import Queue
import cStringIO
import time
testdb = "tests/testdb"
@@ -21,7 +22,7 @@ testdb = "tests/testdb"
#def cleanup():
# os.unlink(testdb)
from test_helpers import *
from testutil.helpers import *
class Test00Nilmdb(object): # named 00 so it runs first
def test_NilmDB(self):
@@ -39,8 +40,8 @@ class Test00Nilmdb(object): # named 00 so it runs first
capture = cStringIO.StringIO()
old = sys.stdout
sys.stdout = capture
with nilmdb.Timer("test"):
nilmdb.timer.time.sleep(0.01)
with nilmdb.utils.Timer("test"):
time.sleep(0.01)
sys.stdout = old
in_("test: ", capture.getvalue())
@@ -69,12 +70,14 @@ class Test00Nilmdb(object): # named 00 so it runs first
eq_(db.stream_list(layout="RawData"), [ ["/newton/raw", "RawData"] ])
eq_(db.stream_list(path="/newton/raw"), [ ["/newton/raw", "RawData"] ])
# Verify that columns were made right
eq_(len(db.h5file.getNode("/newton/prep").cols), 9)
eq_(len(db.h5file.getNode("/newton/raw").cols), 7)
eq_(len(db.h5file.getNode("/newton/zzz/rawnotch").cols), 10)
assert(not db.h5file.getNode("/newton/prep").colindexed["timestamp"])
assert(not db.h5file.getNode("/newton/prep").colindexed["c1"])
# Verify that columns were made right (pytables specific)
if "h5file" in db.data.__dict__:
h5file = db.data.h5file
eq_(len(h5file.getNode("/newton/prep").cols), 9)
eq_(len(h5file.getNode("/newton/raw").cols), 7)
eq_(len(h5file.getNode("/newton/zzz/rawnotch").cols), 10)
assert(not h5file.getNode("/newton/prep").colindexed["timestamp"])
assert(not h5file.getNode("/newton/prep").colindexed["c1"])
# Set / get metadata
eq_(db.stream_get_metadata("/newton/prep"), {})
@@ -110,7 +113,8 @@ class TestBlockingServer(object):
self.server.start(blocking = True, event = event)
thread = threading.Thread(target = run_server)
thread.start()
event.wait(timeout = 2)
if not event.wait(timeout = 10):
raise AssertionError("server didn't start in 10 seconds")
# Send request to exit.
req = urlopen("http://127.0.0.1:12380/exit/", timeout = 1)
@@ -196,6 +200,6 @@ class TestServer(object):
# GET instead of POST (no body)
# (actual POST test is done by client code)
with assert_raises(HTTPError) as e:
getjson("/stream/insert?path=/newton/prep")
getjson("/stream/insert?path=/newton/prep&start=0&end=0")
eq_(e.exception.code, 400)


@@ -1,12 +1,12 @@
import nilmdb
from nilmdb.printf import *
from nilmdb.utils.printf import *
from nose.tools import *
from nose.tools import assert_raises
from cStringIO import StringIO
import sys
from test_helpers import *
from testutil.helpers import *
class TestPrintf(object):
def test_printf(self):

tests/test_rbtree.py (new file, 159 lines)

@@ -0,0 +1,159 @@
# -*- coding: utf-8 -*-
import nilmdb
from nilmdb.utils.printf import *
from nose.tools import *
from nose.tools import assert_raises
from nilmdb.server.rbtree import RBTree, RBNode
from testutil.helpers import *
import unittest
# set to False to skip live renders
do_live_renders = False
def render(tree, description = "", live = True):
import testutil.renderdot as renderdot
r = renderdot.RBTreeRenderer(tree)
return r.render(description, live and do_live_renders)
class TestRBTree:
def test_rbtree(self):
rb = RBTree()
rb.insert(RBNode(10000, 10001))
rb.insert(RBNode(10004, 10007))
rb.insert(RBNode(10001, 10002))
# There was a typo that gave the RBTree a loop in this case.
# Verify that the dot isn't too big.
s = render(rb, live = False)
assert(len(s.splitlines()) < 30)
def test_rbtree_big(self):
import random
random.seed(1234)
# make a set of 100 intervals, inserted in order
rb = RBTree()
j = 100
for i in xrange(j):
rb.insert(RBNode(i, i+1))
render(rb, "in-order insert")
# remove about half of them
for i in random.sample(xrange(j),j):
if random.randint(0,1):
rb.delete(rb.find(i, i+1))
render(rb, "in-order insert, random delete")
# make a set of 100 intervals, inserted at random
rb = RBTree()
j = 100
for i in random.sample(xrange(j),j):
rb.insert(RBNode(i, i+1))
render(rb, "random insert")
# remove about half of them
for i in random.sample(xrange(j),j):
if random.randint(0,1):
rb.delete(rb.find(i, i+1))
render(rb, "random insert, random delete")
# in-order insert of 50 more
for i in xrange(50):
rb.insert(RBNode(i+500, i+501))
render(rb, "random insert, random delete, in-order insert")
def test_rbtree_basics(self):
rb = RBTree()
vals = [ 7, 14, 1, 2, 8, 11, 5, 15, 4]
for n in vals:
rb.insert(RBNode(n, n))
# stringify
s = ""
for node in rb:
s += str(node)
in_("[node (None) 1", s)
eq_(str(rb.nil), "[node nil]")
# inorder traversal, successor and predecessor
last = 0
for node in rb:
assert(node.start > last)
last = node.start
successor = rb.successor(node)
if successor:
assert(rb.predecessor(successor) is node)
predecessor = rb.predecessor(node)
if predecessor:
assert(rb.successor(predecessor) is node)
# Delete node not in the tree
with assert_raises(AttributeError):
rb.delete(RBNode(1,2))
# Delete all nodes!
for node in rb:
rb.delete(node)
# Build it up again, make sure it matches
for n in vals:
rb.insert(RBNode(n, n))
s2 = ""
for node in rb:
s2 += str(node)
assert(s == s2)
def test_rbtree_find(self):
# Get a little bit of coverage for some overlapping cases,
# even though the class doesn't fully support it.
rb = RBTree()
nodes = [ RBNode(1, 5), RBNode(1, 10), RBNode(1, 15) ]
for n in nodes:
rb.insert(n)
assert(rb.find(1, 5) is nodes[0])
assert(rb.find(1, 10) is nodes[1])
assert(rb.find(1, 15) is nodes[2])
def test_rbtree_find_leftright(self):
# Now let's get some ranges in there
rb = RBTree()
vals = [ 7, 14, 1, 2, 8, 11, 5, 15, 4]
for n in vals:
rb.insert(RBNode(n*10, n*10+5))
# Check find_left_end, find_right_start
for i in range(160):
left = rb.find_left_end(i)
right = rb.find_right_start(i)
if left:
# endpoint should be at least i
assert(left.end >= i)
# all earlier nodes should have a lower endpoint
for node in rb:
if node is left:
break
assert(node.end < i)
if right:
# startpoint should be at most i
assert(right.start <= i)
# all later nodes should have a higher startpoint
for node in reversed(list(rb)):
if node is right:
break
assert(node.start > i)
def test_rbtree_intersect(self):
# Fill with some ranges
rb = RBTree()
rb.insert(RBNode(10,20))
rb.insert(RBNode(20,25))
rb.insert(RBNode(30,40))
# Just a quick test; test_interval will do better.
eq_(len(list(rb.intersect(1,100))), 3)
eq_(len(list(rb.intersect(10,20))), 1)
eq_(len(list(rb.intersect(5,15))), 1)
eq_(len(list(rb.intersect(15,15))), 1)
eq_(len(list(rb.intersect(20,21))), 1)
eq_(len(list(rb.intersect(19,21))), 2)
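
These counts are consistent with a classic augmented-tree stabbing query.
A sketch of how intersect could walk the tree, assuming each node caches
the maximum end time found in its subtree ("maxend" is an assumed field
name, not necessarily what nilmdb uses):

def intersect_sketch(tree, start, end):
    """Hypothetical query: collect nodes overlapping [start, end)."""
    found = []
    def visit(node):
        if node is tree.nil or node.maxend <= start:
            return                # whole subtree ends before the query
        visit(node.left)
        if node.start < end and node.end > start:
            found.append(node)    # half-open overlap test
        if node.start < end:
            visit(node.right)     # right subtree may still qualify
    visit(tree.getroot())
    return found

The maxend pruning is what keeps such a query near O(log n + k) instead
of a full traversal.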


@@ -1,5 +1,5 @@
import nilmdb
from nilmdb.printf import *
from nilmdb.utils.printf import *
import nose
from nose.tools import *
@@ -7,7 +7,7 @@ from nose.tools import assert_raises
import threading
import time
from test_helpers import *
from testutil.helpers import *
#raise nose.exc.SkipTest("Skip these")
@@ -57,7 +57,7 @@ class TestUnserialized(Base):
class TestSerialized(Base):
def setUp(self):
self.realfoo = Foo()
self.foo = nilmdb.serializer.WrapObject(self.realfoo)
self.foo = nilmdb.utils.Serializer(self.realfoo)
def tearDown(self):
del self.foo


@@ -1,7 +1,6 @@
import nilmdb
from nilmdb.printf import *
import datetime_tz
from nilmdb.utils.printf import *
from nilmdb.utils import datetime_tz
from nose.tools import *
from nose.tools import assert_raises
@@ -9,7 +8,9 @@ import os
import sys
import cStringIO
from test_helpers import *
from testutil.helpers import *
from nilmdb.utils import timestamper
class TestTimestamper(object):
@@ -27,20 +28,20 @@ class TestTimestamper(object):
# full
input = cStringIO.StringIO(join(lines_in))
ts = nilmdb.timestamper.TimestamperRate(input, start, 8000)
ts = timestamper.TimestamperRate(input, start, 8000)
foo = ts.readlines()
eq_(foo, join(lines_out))
in_("TimestamperRate(..., start=", str(ts))
# first 30 or so bytes means the first 2 lines
input = cStringIO.StringIO(join(lines_in))
ts = nilmdb.timestamper.TimestamperRate(input, start, 8000)
ts = timestamper.TimestamperRate(input, start, 8000)
foo = ts.readlines(30)
eq_(foo, join(lines_out[0:2]))
# stop iteration early
input = cStringIO.StringIO(join(lines_in))
ts = nilmdb.timestamper.TimestamperRate(input, start, 8000,
ts = timestamper.TimestamperRate(input, start, 8000,
1332561600.000200)
foo = ""
for line in ts:
@@ -49,21 +50,21 @@ class TestTimestamper(object):
# stop iteration early (readlines)
input = cStringIO.StringIO(join(lines_in))
ts = nilmdb.timestamper.TimestamperRate(input, start, 8000,
ts = timestamper.TimestamperRate(input, start, 8000,
1332561600.000200)
foo = ts.readlines()
eq_(foo, join(lines_out[0:2]))
# stop iteration really early
input = cStringIO.StringIO(join(lines_in))
ts = nilmdb.timestamper.TimestamperRate(input, start, 8000,
ts = timestamper.TimestamperRate(input, start, 8000,
1332561600.000000)
foo = ts.readlines()
eq_(foo, "")
# use iterator
input = cStringIO.StringIO(join(lines_in))
ts = nilmdb.timestamper.TimestamperRate(input, start, 8000)
ts = timestamper.TimestamperRate(input, start, 8000)
foo = ""
for line in ts:
foo += line
@@ -71,21 +72,21 @@ class TestTimestamper(object):
# check that TimestamperNow gives similar result
input = cStringIO.StringIO(join(lines_in))
ts = nilmdb.timestamper.TimestamperNow(input)
ts = timestamper.TimestamperNow(input)
foo = ts.readlines()
ne_(foo, join(lines_out))
eq_(len(foo), len(join(lines_out)))
eq_(str(ts), "TimestamperNow(...)")
# Test passing a file (should be empty)
ts = nilmdb.timestamper.TimestamperNow("/dev/null")
ts = timestamper.TimestamperNow("/dev/null")
for line in ts:
raise AssertionError
ts.close()
# Test the null timestamper
input = cStringIO.StringIO(join(lines_out)) # note: lines_out
ts = nilmdb.timestamper.TimestamperNull(input)
ts = timestamper.TimestamperNull(input)
foo = ts.readlines()
eq_(foo, join(lines_out))
eq_(str(ts), "TimestamperNull(...)")


@@ -0,0 +1 @@
# empty


@@ -12,6 +12,10 @@ def eq_(a, b):
if not a == b:
raise AssertionError("%s != %s" % (myrepr(a), myrepr(b)))
def lt_(a, b):
if not a < b:
raise AssertionError("%s is not less than %s" % (myrepr(a), myrepr(b)))
def in_(a, b):
if a not in b:
raise AssertionError("%s not in %s" % (myrepr(a), myrepr(b)))
@@ -20,6 +24,14 @@ def ne_(a, b):
if not a != b:
raise AssertionError("unexpected %s == %s" % (myrepr(a), myrepr(b)))
def lines_(a, n):
l = a.count('\n')
if not l == n:
if len(a) > 5000:
a = a[0:5000] + " ... truncated"
raise AssertionError("wanted %d lines, got %d in output: '%s'"
% (n, l, a))
def recursive_unlink(path):
try:
shutil.rmtree(path)


@@ -13,7 +13,7 @@ class Renderer(object):
# Rendering
def __render_dot_node(self, node, max_depth = 20):
from nilmdb.printf import sprintf
from nilmdb.utils.printf import sprintf
"""Render a single node and its children into a dot graph fragment"""
if max_depth == 0:
return ""
@@ -71,3 +71,20 @@ class Renderer(object):
gtk.main_quit()
window.widget.connect('key-press-event', quit)
gtk.main()
class RBTreeRenderer(Renderer):
def __init__(self, tree):
Renderer.__init__(self,
lambda node: node.left,
lambda node: node.right,
lambda node: node.red,
lambda node: node.start,
lambda node: node.end,
tree.nil)
self.tree = tree
def render(self, title = "RBTree", live = True):
if live:
return Renderer.render_dot_live(self, self.tree.getroot(), title)
else:
return Renderer.render_dot(self, self.tree.getroot(), title)


@@ -1,20 +0,0 @@
./nilmtool.py create /bpnilm/2/raw RawData
if true; then
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s 20110513-110000 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s 20110513-120001 /bpnilm/2/raw
else
for i in $(seq 2000 2050); do
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-010001 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-020002 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-030003 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-040004 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-050005 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-060006 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-070007 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-080008 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-090009 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-100010 /bpnilm/2/raw
done
fi