Compare commits: replace-py ... nilmdb-0.2

67 commits:

3b90318f83
1fb37604d3
018ecab310
6a1d6017e2
e7406f8147
f316026592
a8db747768
727af94722
6c89659df7
58c7c8f6ff
225003f412
40b966aef2
294ec6988b
fad23ebb22
b226dc4337
e7af863017
af6ce5b79c
0a6fc943e2
67c6e178e1
9bf213707c
5cd7899e98
ceec5fb9b3
85be497edb
bd1b7107af
b8275f108d
2820ff9758
a015de893d
b7f746e66d
40cf4941f0
8a418ceb3e
0312b6eb07
077f197d24
62354b4dce
5970cd85cf
4f6a742e6c
87b43e5d04
f0c2a64ae3
e5d3deb6fe
d321058b48
cea83140c0
7807d6caf0
3d0fad3c2a
fe3b087435
bcefe52298
f88c148ccc
4a47b1d04a
80da937cb7
c81972e66e
b09362fde1
b7688844fa
3d212e7592
7aedfdf9c3
ebd4f74959
ebe2fbab92
4831a0cae1
07192c6ffb
09d325e8ab
11b0293d5f
493bbed82c
3bc25daaab
40a3bc4bc3
c083d63c96
0221e3ea21
f5fd2b064e
06e91a6a98
41b3f3c018
842076fef4
@@ -7,3 +7,4 @@
exclude_lines =
    pragma: no cover
    if 0:
omit = nilmdb/utils/datetime_tz*
21  .gitignore (vendored)
@@ -1,4 +1,23 @@
db/
# Tests
tests/*testdb/
.coverage
db/

# Compiled / cythonized files
docs/*.html
build/
*.pyc
nilmdb/server/interval.c
nilmdb/server/interval.so
nilmdb/server/layout.c
nilmdb/server/layout.so
nilmdb/server/rbtree.c
nilmdb/server/rbtree.so

# Setup junk
dist/
nilmdb.egg-info/

# Misc
timeit*out
10  Makefile
@@ -1,18 +1,10 @@
all: test

tool:
	python nilmtool.py --help
	python nilmtool.py list --help
	python nilmtool.py -u asfdadsf list

lint:
	pylint -f parseable nilmdb

test:
	nosetests

profile:
	nosetests --with-profile
	python runtests.py

clean::
	find . -name '*pyc' | xargs rm -f
12  README.txt
@@ -1,4 +1,10 @@
sudo apt-get install python-nose python-coverage
sudo apt-get install python-tables python-cherrypy3
sudo apt-get install cython   # 0.17.1-1 or newer
nilmdb: Non-Intrusive Load Monitor Database
by Jim Paris <jim@jtan.com>

Prerequisites:

  sudo apt-get install python2.7 python-cherrypy3 python-decorator python-nose python-coverage python-setuptools

Install:

  python setup.py install
200  design.md
@@ -1,200 +0,0 @@
Structure
---------
nilmdb.nilmdb is the NILM database interface.  A PyTables database
holds actual rows of data, and a SQL database tracks metadata and
ranges.

Access to the nilmdb must be single-threaded.  This is handled with
the nilmdb.serializer class.

nilmdb.server is an HTTP server that provides an interface to talk,
through the serialization layer, to the nilmdb object.

nilmdb.client is an HTTP client that connects to this.


Sqlite performance
------------------

Committing a transaction in the default sync mode (PRAGMA synchronous=FULL)
takes about 125 msec.  sqlite3 will commit transactions at 3 times:

1: explicit con.commit()

2: between a series of DML commands and non-DML commands, e.g.
   after a series of INSERT, SELECT, but before a CREATE TABLE or
   PRAGMA.

3: at the end of an explicit transaction, e.g. "with self.con as con:"

To speed up testing, or if this transaction speed becomes an issue,
the sync=False option to NilmDB will set PRAGMA synchronous=OFF.


Inserting streams
-----------------

We need to send the contents of "data" as POST.  Do we need chunked
transfer?

  - Don't know the size in advance, so we would need to use chunked if
    we send the entire thing in one request.
  - But we shouldn't send one chunk per line, so we need to buffer some
    anyway; why not just make new requests?
  - Consider the infinite-streaming case: we might want to send it
    immediately?  Not really -- the server should still do explicit inserts
    of fixed-size chunks.
  - Even chunked encoding needs the size of each chunk beforehand, so
    everything still gets buffered.  Just a tradeoff of buffer size.

Before timestamps are added:
  - Raw data is about 440 kB/s (9 channels)
  - Prep data is about 12.5 kB/s (1 phase)
  - How do we know how much data to send?

  - Remember that we can only do maybe 8-50 transactions per second on
    the sqlite database.  So if one block of inserted data is one
    transaction, we'd need the raw case to be around 64 kB per request,
    ideally more.
  - Maybe use a range, based on how long it's taking to read the data:
      - If no more data, send it
      - If data > 1 MB, send it
      - If more than 10 seconds have elapsed, send it
  - Should those numbers come from the server?

Converting from ASCII to PyTables:
  - For each row getting added, we need to set attributes on a PyTables
    Row object and call table.append().  This means that there isn't a
    particularly efficient way of converting from ASCII.
  - Could create a function like nilmdb.layout.Layout("foo").fillRow(asciiline)
      - But this means we're doing parsing on the serialized side
  - Let's keep parsing on the threaded server side so we can detect
    errors better, and not block the serialized nilmdb for a slow
    parsing process.
  - Client sends ASCII data
  - Server converts this ASCII data to a list of values
  - Maybe:

        # threaded side creates this object
        parser = nilmdb.layout.Parser("layout_name")
        # threaded side parses and fills it with data
        parser.parse(textdata)
        # serialized side pulls out rows
        for n in xrange(parser.nrows):
            parser.fill_row(rowinstance, n)
            table.append()


Inserting streams, inside nilmdb
--------------------------------

  - First check that the new stream doesn't overlap.
      - Get minimum timestamp, maximum timestamp from data parser.
          - (extend parser to verify monotonicity and track extents)
      - Get all intervals for this stream in the database
      - See if new interval overlaps any existing ones
          - If so, bail
      - Question: should we cache intervals inside NilmDB?
          - Assume database is fast for now, and always rebuild from DB.
          - Can add a caching layer later if we need to.
      - `stream_get_ranges(path)` -> return IntervalSet?

Speed
-----

  - First approach was quadratic.  Adding four hours of data:

        $ time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s 20110513-110000 /bpnilm/1/raw
        real    24m31.093s
        $ time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s 20110513-120001 /bpnilm/1/raw
        real    43m44.528s
        $ time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s 20110513-130002 /bpnilm/1/raw
        real    93m29.713s
        $ time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s 20110513-140003 /bpnilm/1/raw
        real    166m53.007s

  - Disabling pytables indexing didn't help:

        real    31m21.492s
        real    52m51.963s
        real    102m8.151s
        real    176m12.469s

  - Server RAM usage is constant.

  - Speed problems were due to IntervalSet speed, of parsing intervals
    from the database and adding the new one each time.

      - First optimization is to cache the result of `nilmdb:_get_intervals`,
        which gives the best speedup.

      - Also switched to internally using bxInterval from the bx-python
        package.  Speed of `tests/test_interval:TestIntervalSpeed` is
        pretty decent and seems to be growing logarithmically now.
        About 85 μs per insertion for inserting 131k entries.

      - Storing the interval data in SQL might be better, with a scheme like:
        http://www.logarithmic.net/pfh/blog/01235197474

  - Next slowdown target is nilmdb.layout.Parser.parse().
      - Rewrote parsers using cython and sscanf
      - Stats (rev 10831), with _add_interval disabled:

            layout.pyx.Parser.parse:128          6303 sec, 262k calls
            layout.pyx.parse:63                 13913 sec, 5.1g calls
            numpy:records.py.fromrecords:569     7410 sec, 262k calls

      - Probably OK for now.

  - After all updates, now takes about 8.5 minutes to insert an hour of
    data, constant after adding 171 hours (4.9 billion data points)

  - Data set size: 98 gigs = 20 bytes per data point.
    6 uint16 data + 1 uint32 timestamp = 16 bytes per point.
    So compression must be off -- will retry with compression forced on.

IntervalSet speed
-----------------

  - Initial implementation was pretty slow, even with binary search in
    a sorted list

  - Replaced with bxInterval; now takes about log n time for an insertion
      - TestIntervalSpeed with range(17,18) and profiling:
          - 85 μs each
          - 131072 calls to `__iadd__`
          - 131072 to bx.insert_interval
          - 131072 to bx.insert:395
          - 2355835 to bx.insert:106 (18x as many?)

  - Tried blist too; worse than bxinterval.

  - Might be algorithmic improvements to be made in Interval.py,
    like in `__and__`

  - Replaced again with rbtree.  Seems decent.  Numbers are time per
    insert for 2**17 insertions, followed by total wall time and RAM
    usage for running "make test" with `test_rbtree` and `test_interval`
    with range(5,20):
      - old values with bxinterval:
        20.2 μs, total 20 s, 177 MB RAM
      - rbtree, plain python:
        97 μs, total 105 s, 846 MB RAM
      - rbtree converted to cython:
        26 μs, total 29 s, 320 MB RAM
      - rbtree and interval converted to cython:
        8.4 μs, total 12 s, 134 MB RAM

Layouts
-------
Current/old design has specific layouts: RawData, PrepData, RawNotchedData.
Let's get rid of this entirely and switch to simpler data types that are
just collections and counts of a single type.  We'll still use strings
to describe them, with the format:

    type_count

where type is "uint16", "float32", or "float64", and count is an integer.

nilmdb.layout.named() will parse these strings into the appropriate
handlers.  For compatibility:

    "RawData" == "uint16_6"
    "RawNotchedData" == "uint16_9"
    "PrepData" == "float32_8"
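As a minimal sketch of the "type_count" convention above (the helper name
and regex here are hypothetical; the real lookup is nilmdb.layout.named()):

    import re

    def parse_layout(name):
        """Split a layout string like 'float32_8' into (datatype, count)."""
        m = re.match(r"^(uint16|float32|float64)_(\d+)$", name)
        if m is None:
            raise ValueError("bad layout: " + name)
        return (m.group(1), int(m.group(2)))

    print(parse_layout("float32_8"))   # ('float32', 8), aka "PrepData"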
9  docs/Makefile (new file)
@@ -0,0 +1,9 @@
ALL_DOCS = $(wildcard *.md)

all: $(ALL_DOCS:.md=.html)

%.html: %.md
	pandoc -s $< > $@

clean:
	rm -f *.html
5  docs/TODO.md (new file)
@@ -0,0 +1,5 @@
- Documentation

- Machine-readable information in OverflowError, parser errors.
  Maybe subclass `cherrypy.HTTPError` and override `set_response`
  to add another JSON field?
268  docs/design.md (new file)
@@ -0,0 +1,268 @@
Structure
---------
nilmdb.nilmdb is the NILM database interface.  A nilmdb.BulkData
interface stores data in flat files, and a SQL database tracks
metadata and ranges.

Access to the nilmdb must be single-threaded.  This is handled with
the nilmdb.serializer class.  In the future this could probably
be turned into a per-path serialization.

nilmdb.server is an HTTP server that provides an interface to talk,
through the serialization layer, to the nilmdb object.

nilmdb.client is an HTTP client that connects to this.


Sqlite performance
------------------

Committing a transaction in the default sync mode (PRAGMA synchronous=FULL)
takes about 125 msec.  sqlite3 will commit transactions at 3 times:

1. explicit con.commit()

2. between a series of DML commands and non-DML commands, e.g.
   after a series of INSERT, SELECT, but before a CREATE TABLE or
   PRAGMA.

3. at the end of an explicit transaction, e.g. "with self.con as con:"

To speed up testing, or if this transaction speed becomes an issue,
the sync=False option to NilmDB will set PRAGMA synchronous=OFF.


Inserting streams
-----------------

We need to send the contents of "data" as POST.  Do we need chunked
transfer?

  - Don't know the size in advance, so we would need to use chunked if
    we send the entire thing in one request.
  - But we shouldn't send one chunk per line, so we need to buffer some
    anyway; why not just make new requests?
  - Consider the infinite-streaming case: we might want to send it
    immediately?  Not really -- the server should still do explicit inserts
    of fixed-size chunks.
  - Even chunked encoding needs the size of each chunk beforehand, so
    everything still gets buffered.  Just a tradeoff of buffer size.

Before timestamps are added:

  - Raw data is about 440 kB/s (9 channels)
  - Prep data is about 12.5 kB/s (1 phase)
  - How do we know how much data to send?

  - Remember that we can only do maybe 8-50 transactions per second on
    the sqlite database.  So if one block of inserted data is one
    transaction, we'd need the raw case to be around 64 kB per request,
    ideally more.
  - Maybe use a range, based on how long it's taking to read the data:
      - If no more data, send it
      - If data > 1 MB, send it
      - If more than 10 seconds have elapsed, send it
  - Should those numbers come from the server?

Converting from ASCII to PyTables:

  - For each row getting added, we need to set attributes on a PyTables
    Row object and call table.append().  This means that there isn't a
    particularly efficient way of converting from ASCII.
  - Could create a function like nilmdb.layout.Layout("foo").fillRow(asciiline)
      - But this means we're doing parsing on the serialized side
  - Let's keep parsing on the threaded server side so we can detect
    errors better, and not block the serialized nilmdb for a slow
    parsing process.
  - Client sends ASCII data
  - Server converts this ASCII data to a list of values
  - Maybe:

        # threaded side creates this object
        parser = nilmdb.layout.Parser("layout_name")
        # threaded side parses and fills it with data
        parser.parse(textdata)
        # serialized side pulls out rows
        for n in xrange(parser.nrows):
            parser.fill_row(rowinstance, n)
            table.append()


Inserting streams, inside nilmdb
--------------------------------

  - First check that the new stream doesn't overlap.
      - Get minimum timestamp, maximum timestamp from data parser.
          - (extend parser to verify monotonicity and track extents)
      - Get all intervals for this stream in the database
      - See if new interval overlaps any existing ones
          - If so, bail
      - Question: should we cache intervals inside NilmDB?
          - Assume database is fast for now, and always rebuild from DB.
          - Can add a caching layer later if we need to.
      - `stream_get_ranges(path)` -> return IntervalSet?

Speed
-----

  - First approach was quadratic.  Adding four hours of data:

        $ time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s 20110513-110000 /bpnilm/1/raw
        real    24m31.093s
        $ time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s 20110513-120001 /bpnilm/1/raw
        real    43m44.528s
        $ time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s 20110513-130002 /bpnilm/1/raw
        real    93m29.713s
        $ time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s 20110513-140003 /bpnilm/1/raw
        real    166m53.007s

  - Disabling pytables indexing didn't help:

        real    31m21.492s
        real    52m51.963s
        real    102m8.151s
        real    176m12.469s

  - Server RAM usage is constant.

  - Speed problems were due to IntervalSet speed, of parsing intervals
    from the database and adding the new one each time.

      - First optimization is to cache the result of `nilmdb:_get_intervals`,
        which gives the best speedup.

      - Also switched to internally using bxInterval from the bx-python
        package.  Speed of `tests/test_interval:TestIntervalSpeed` is
        pretty decent and seems to be growing logarithmically now.
        About 85 μs per insertion for inserting 131k entries.

      - Storing the interval data in SQL might be better, with a scheme like:
        http://www.logarithmic.net/pfh/blog/01235197474

  - Next slowdown target is nilmdb.layout.Parser.parse().
      - Rewrote parsers using cython and sscanf
      - Stats (rev 10831), with _add_interval disabled:

            layout.pyx.Parser.parse:128          6303 sec, 262k calls
            layout.pyx.parse:63                 13913 sec, 5.1g calls
            numpy:records.py.fromrecords:569     7410 sec, 262k calls

      - Probably OK for now.

  - After all updates, now takes about 8.5 minutes to insert an hour of
    data, constant after adding 171 hours (4.9 billion data points)

  - Data set size: 98 gigs = 20 bytes per data point.
    6 uint16 data + 1 uint32 timestamp = 16 bytes per point.
    So compression must be off -- will retry with compression forced on.

IntervalSet speed
-----------------

  - Initial implementation was pretty slow, even with binary search in
    a sorted list

  - Replaced with bxInterval; now takes about log n time for an insertion
      - TestIntervalSpeed with range(17,18) and profiling:
          - 85 μs each
          - 131072 calls to `__iadd__`
          - 131072 to bx.insert_interval
          - 131072 to bx.insert:395
          - 2355835 to bx.insert:106 (18x as many?)

  - Tried blist too; worse than bxinterval.

  - Might be algorithmic improvements to be made in Interval.py,
    like in `__and__`

  - Replaced again with rbtree.  Seems decent.  Numbers are time per
    insert for 2**17 insertions, followed by total wall time and RAM
    usage for running "make test" with `test_rbtree` and `test_interval`
    with range(5,20):
      - old values with bxinterval:
        20.2 μs, total 20 s, 177 MB RAM
      - rbtree, plain python:
        97 μs, total 105 s, 846 MB RAM
      - rbtree converted to cython:
        26 μs, total 29 s, 320 MB RAM
      - rbtree and interval converted to cython:
        8.4 μs, total 12 s, 134 MB RAM

Layouts
-------
Current/old design has specific layouts: RawData, PrepData, RawNotchedData.
Let's get rid of this entirely and switch to simpler data types that are
just collections and counts of a single type.  We'll still use strings
to describe them, with the format:

    type_count

where type is "uint16", "float32", or "float64", and count is an integer.

nilmdb.layout.named() will parse these strings into the appropriate
handlers.  For compatibility:

    "RawData" == "uint16_6"
    "RawNotchedData" == "uint16_9"
    "PrepData" == "float32_8"


BulkData design
---------------

BulkData is a custom bulk data storage system that was written to
replace PyTables.  The general structure is a `data` subdirectory in
the main NilmDB directory.  Within `data`, paths are created for each
created stream.  These locations are called tables.  For example,
tables might be located at

    nilmdb/data/newton/raw/
    nilmdb/data/newton/prep/
    nilmdb/data/cottage/raw/

Each table contains:

  - An unchanging `_format` file (Python pickle format) that describes
    parameters of how the data is broken up, like files per directory,
    rows per file, and the binary data format

  - Hex-named subdirectories (`"%04x"`, although more than 65536 can exist)

  - Hex-named files within those subdirectories, like:

        /nilmdb/data/newton/raw/000b/010a

    The data format of these files is raw binary, interpreted by the
    Python `struct` module according to the format string in the
    `_format` file.

  - An optional file with the same name plus a `.removed` suffix
    (Python pickle format) containing a list of row numbers that have
    been logically removed from the file.  If this range covers the
    entire file, the entire file will be removed.

  - Note that the `bulkdata.nrows` variable is calculated once in
    `BulkData.__init__()`, and only ever incremented during use.  Thus,
    even if all data is removed, `nrows` can remain high.  However, if
    the server is restarted, the newly calculated `nrows` may be lower
    than in a previous run due to deleted data.  To be specific, this
    sequence of events:

      - insert data
      - remove all data
      - insert data

    will result in having different row numbers in the database, and
    differently numbered files on the filesystem, than the sequence:

      - insert data
      - remove all data
      - restart server
      - insert data

    This is okay!  Everything should remain consistent both in the
    `BulkData` and `NilmDB`.  Not attempting to readjust `nrows` during
    deletion makes the code quite a bit simpler.

  - Similarly, data files are never truncated shorter.  Removing data
    from the end of the file will not shorten it; it will only be
    deleted when it has been fully filled and all of the data has been
    subsequently removed.
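To make the row-to-file arithmetic above concrete, here is a small sketch
built from the `_format` parameters (rows_per_file and the packed row size;
the numbers are illustrative and the helper name is hypothetical):

    import struct

    rows_per_file = 4 * 1024 * 1024          # from the _format pickle
    packer = struct.Struct('<d' + 'H' * 6)   # double timestamp + uint16_6

    def fnoffset_from_row(row):
        """Map an absolute row number to (filename, byte offset)."""
        filenum = row // rows_per_file
        filename = "%08x" % filenum          # hex-named file in the table dir
        offset = (row % rows_per_file) * packer.size
        return (filename, offset)

    print(fnoffset_from_row(5000000))        # ('00000001', 16113920)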
@@ -1,12 +1,4 @@
"""Main NilmDB import"""

from .nilmdb import NilmDB
from .server import Server
from .client import Client

import pyximport; pyximport.install()
import layout
import interval

import cmdline

from server import NilmDB, Server
from client import Client

@@ -1,297 +0,0 @@
# Fixed record size bulk data storage

from __future__ import absolute_import
from __future__ import division
import nilmdb
from nilmdb.utils.printf import *

import os
import sys
import cPickle as pickle
import struct
import fnmatch
import mmap

# Up to 256 open file descriptors at any given time
table_cache_size = 16
fd_cache_size = 16

@nilmdb.utils.must_close()
class BulkData(object):
    def __init__(self, basepath):
        self.basepath = basepath
        self.root = os.path.join(self.basepath, "data")

        # Make root path
        if not os.path.isdir(self.root):
            os.mkdir(self.root)

    def close(self):
        self.getnode.cache_remove_all()

    def create(self, path, layout_name):
        """
        path: path to the data (e.g. '/newton/prep').
        Paths must contain at least two elements, e.g.:
            /newton/prep
            /newton/raw
            /newton/upstairs/prep
            /newton/upstairs/raw

        layout_name: string for nilmdb.layout.get_named(), e.g. 'float32_8'
        """
        if path[0] != '/':
            raise ValueError("paths must start with /")
        [ group, node ] = path.rsplit("/", 1)
        if group == '':
            raise ValueError("invalid path")

        # Get layout, and build format string for struct module
        try:
            layout = nilmdb.layout.get_named(layout_name)
            struct_fmt = '<d'  # Little endian, double timestamp
            struct_mapping = {
                "int8":    'b',
                "uint8":   'B',
                "int16":   'h',
                "uint16":  'H',
                "int32":   'i',
                "uint32":  'I',
                "int64":   'q',
                "uint64":  'Q',
                "float32": 'f',
                "float64": 'd',
                }
            for n in range(layout.count):
                struct_fmt += struct_mapping[layout.datatype]
        except KeyError:
            raise ValueError("no such layout, or bad data types")

        # Create the table.  Note that we make a distinction here
        # between NilmDB paths (always Unix style, split apart
        # manually) and OS paths (built up with os.path.join)
        try:
            # Make directories leading up to this one
            elements = path.lstrip('/').split('/')
            for i in range(len(elements)):
                ospath = os.path.join(self.root, *elements[0:i])
                if Table.exists(ospath):
                    raise ValueError("path is subdir of existing node")
                if not os.path.isdir(ospath):
                    os.mkdir(ospath)

            # Make the final dir
            ospath = os.path.join(self.root, *elements)
            if os.path.isdir(ospath):
                raise ValueError("subdirs of this path already exist")
            os.mkdir(ospath)

            # Write format string to file
            Table.create(ospath, struct_fmt)
        except OSError as e:
            raise ValueError("error creating table at that path: " + e.strerror)

        # Open and cache it
        self.getnode(path)

        # Success
        return

    def destroy(self, path):
        """Fully remove all data at a particular path.  No way to undo
        it!  The group/path structure is removed, too."""

        # Get OS path
        elements = path.lstrip('/').split('/')
        ospath = os.path.join(self.root, *elements)

        # Remove Table object from cache
        self.getnode.cache_remove(self, ospath)

        # Remove the contents of the target directory
        if not os.path.isfile(os.path.join(ospath, "format")):
            raise ValueError("nothing at that path")
        for file in os.listdir(ospath):
            os.remove(os.path.join(ospath, file))

        # Remove empty parent directories
        for i in reversed(range(len(elements))):
            ospath = os.path.join(self.root, *elements[0:i+1])
            try:
                os.rmdir(ospath)
            except OSError:
                break

    # Cache open tables
    @nilmdb.utils.lru_cache(size = table_cache_size,
                            onremove = lambda x: x.close())
    def getnode(self, path):
        """Return a Table object corresponding to the given database
        path, which must exist."""
        elements = path.lstrip('/').split('/')
        ospath = os.path.join(self.root, *elements)
        return Table(ospath)

@nilmdb.utils.must_close()
class Table(object):
    """Tools to help access a single table (data at a specific OS path)"""

    # Class methods, to help keep format details in this class.
    @classmethod
    def exists(cls, root):
        """Return True if a table appears to exist at this OS path"""
        return os.path.isfile(os.path.join(root, "format"))

    @classmethod
    def create(cls, root, struct_fmt):
        """Initialize a table at the given OS path.
        'struct_fmt' is a Struct module format description"""
        format = { "rows_per_file": 4 * 1024 * 1024,
                   "struct_fmt": struct_fmt }
        with open(os.path.join(root, "format"), "wb") as f:
            pickle.dump(format, f, 2)

    # Normal methods
    def __init__(self, root):
        """'root' is the full OS path to the directory of this table"""
        self.root = root

        # Load the format and build packer
        with open(self._fullpath("format"), "rb") as f:
            format = pickle.load(f)
        self.rows_per_file = format["rows_per_file"]
        self.packer = struct.Struct(format["struct_fmt"])
        self.file_size = self.packer.size * self.rows_per_file

        # Find nrows by locating the lexicographically last filename
        # and using its size.
        pattern = '[0-9a-f]' * 8
        allfiles = fnmatch.filter(os.listdir(self.root), pattern)
        if allfiles:
            filename = max(allfiles)
            offset = os.path.getsize(self._fullpath(filename))
            self.nrows = self._row_from_fnoffset(filename, offset)
        else:
            self.nrows = 0

    def close(self):
        self.mmap_open.cache_remove_all()

    # Internal helpers
    def _fullpath(self, filename):
        return os.path.join(self.root, filename)

    def _fnoffset_from_row(self, row):
        """Return a (filename, offset, count) tuple:

        filename: the filename that contains the specified row
        offset: byte offset of the specified row within the file
        count: number of rows (starting at offset) that fit in the file
        """
        filenum = row // self.rows_per_file
        filename = sprintf("%08x", filenum)
        offset = (row % self.rows_per_file) * self.packer.size
        count = self.rows_per_file - (row % self.rows_per_file)
        return (filename, offset, count)

    def _row_from_fnoffset(self, filename, offset):
        """Return the row number that corresponds to the given
        filename and byte-offset within that file."""
        filenum = int(filename, 16)
        if (offset % self.packer.size) != 0:
            raise ValueError("file offset is not a multiple of data size")
        row = (filenum * self.rows_per_file) + (offset // self.packer.size)
        return row

    # Cache open files
    @nilmdb.utils.lru_cache(size = fd_cache_size,
                            onremove = lambda x: x.close())
    def mmap_open(self, file, newsize = None):
        """Open and map a given filename (relative to self.root).
        Will be automatically closed when evicted from the cache.

        If 'newsize' is provided, the file is truncated to the given
        size before the mapping is returned.  (Note that the LRU cache
        on this function means the truncate will only happen if the
        object isn't already cached; mmap.resize should be used too)"""
        f = open(os.path.join(self.root, file), "a+", 0)
        if newsize is not None:
            # mmap can't map a zero-length file, so this allows the
            # caller to set the filesize between file creation and
            # mmap.
            f.truncate(newsize)
        mm = mmap.mmap(f.fileno(), 0)
        return mm

    def append(self, data):
        """Append the data and flush it to disk.
        data is a nested Python list [[row],[row],[...]]"""
        remaining = len(data)
        dataiter = iter(data)
        while remaining:
            # See how many rows we can fit into the current file, and open it
            (filename, offset, count) = self._fnoffset_from_row(self.nrows)
            if count > remaining:
                count = remaining
            newsize = offset + count * self.packer.size
            mm = self.mmap_open(filename, newsize)
            mm.seek(offset)

            # Extend the file to the target length.  We specified
            # newsize when opening, but that may have been ignored if
            # the mmap_open returned a cached object.
            mm.resize(newsize)

            # Write the data
            for i in xrange(count):
                row = dataiter.next()
                mm.write(self.packer.pack(*row))
            remaining -= count
            self.nrows += count

    def __getitem__(self, key):
        """Extract data and return it.  Supports simple indexing
        (table[n]) and range slices (table[n:m]).  Returns a nested
        Python list [[row],[row],[...]]"""

        # Handle simple slices
        if isinstance(key, slice):
            # Fall back to brute force if the slice isn't simple
            if ((key.step is not None and key.step != 1) or
                key.start is None or
                key.stop is None or
                key.start >= key.stop or
                key.start < 0 or
                key.stop > self.nrows):
                return [ self[x] for x in xrange(*key.indices(self.nrows)) ]

            ret = []
            row = key.start
            remaining = key.stop - key.start
            while remaining:
                (filename, offset, count) = self._fnoffset_from_row(row)
                if count > remaining:
                    count = remaining
                mm = self.mmap_open(filename)
                for i in xrange(count):
                    ret.append(list(self.packer.unpack_from(mm, offset)))
                    offset += self.packer.size
                remaining -= count
                row += count
            return ret

        # Handle single points
        if key < 0 or key >= self.nrows:
            raise IndexError("Index out of range")
        (filename, offset, count) = self._fnoffset_from_row(key)
        mm = self.mmap_open(filename)
        # unpack_from ignores the mmap object's current seek position
        return self.packer.unpack_from(mm, offset)

class TimestampOnlyTable(object):
    """Helper that lets us pass a Table object into bisect, by
    returning only the timestamp when a particular row is requested."""
    def __init__(self, table):
        self.table = table
    def __getitem__(self, index):
        return self.table[index][0]
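The TimestampOnlyTable wrapper above exists so a table can be handed
directly to the bisect module for timestamp searches.  A usage sketch
(`table` and `t` are placeholders for a Table instance and a timestamp):

    import bisect

    ts_only = TimestampOnlyTable(table)

    # Index of the first row with timestamp >= t, restricted to the valid
    # range [0, table.nrows); bisect never touches the other columns.
    idx = bisect.bisect_left(ts_only, t, 0, table.nrows)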
4  nilmdb/client/__init__.py (new file)
@@ -0,0 +1,4 @@
"""nilmdb.client"""

from .client import Client
from .errors import *
@@ -2,7 +2,9 @@

"""Class for performing HTTP client requests via libcurl"""

from __future__ import absolute_import
import nilmdb
import nilmdb.utils
import nilmdb.client.httpclient
from nilmdb.utils.printf import *

import time
@@ -12,11 +14,6 @@ import os
import simplejson as json
import itertools

import nilmdb.httpclient

# Other functions expect to see these in the nilmdb.client namespace
from nilmdb.httpclient import ClientError, ServerError, Error

version = "1.0"

def float_to_string(f):
@@ -29,7 +26,7 @@ class Client(object):
    client_version = version

    def __init__(self, url):
        self.http = nilmdb.httpclient.HTTPClient(url)
        self.http = nilmdb.client.httpclient.HTTPClient(url)

    def _json_param(self, data):
        """Return compact json-encoded version of parameter"""
@@ -96,6 +93,17 @@ class Client(object):
        params = { "path": path }
        return self.http.get("stream/destroy", params)

    def stream_remove(self, path, start = None, end = None):
        """Remove data from the specified time range"""
        params = {
            "path": path
        }
        if start is not None:
            params["start"] = float_to_string(start)
        if end is not None:
            params["end"] = float_to_string(end)
        return self.http.get("stream/remove", params)

    def stream_insert(self, path, data, start = None, end = None):
        """Insert data into a stream.  data should be a file-like object
        that provides ASCII data that matches the database layout for path.
@@ -114,11 +122,6 @@ class Client(object):
        max_time = 30
        end_epsilon = 1e-6

        def pairwise(iterable):
            "s -> (s0,s1), (s1,s2), ..., (sn,None)"
            a, b = itertools.tee(iterable)
            next(b, None)
            return itertools.izip_longest(a, b)

        def extract_timestamp(line):
            return float(line.split()[0])
@@ -148,7 +151,7 @@ class Client(object):
        block_data = ""
        block_start = start
        result = None
        for (line, nextline) in pairwise(data):
        for (line, nextline) in nilmdb.utils.misc.pairwise(data):
            # If we don't have a starting time, extract it from the first line
            if block_start is None:
                block_start = extract_timestamp(line)
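The pairwise() helper removed above (now nilmdb.utils.misc.pairwise) yields
each item together with its successor, ending with None, so the insert loop
can detect the final line.  For example:

    import itertools

    def pairwise(iterable):
        "s -> (s0,s1), (s1,s2), ..., (sn,None)"
        a, b = itertools.tee(iterable)
        next(b, None)
        return itertools.izip_longest(a, b)

    print(list(pairwise(["a", "b", "c"])))
    # [('a', 'b'), ('b', 'c'), ('c', None)]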
33  nilmdb/client/errors.py (new file)
@@ -0,0 +1,33 @@
"""HTTP client errors"""

from nilmdb.utils.printf import *

class Error(Exception):
    """Base exception for both ClientError and ServerError responses"""
    def __init__(self,
                 status = "Unspecified error",
                 message = None,
                 url = None,
                 traceback = None):
        Exception.__init__(self, status)
        self.status = status       # e.g. "400 Bad Request"
        self.message = message     # textual message from the server
        self.url = url             # URL we were requesting
        self.traceback = traceback # server traceback, if available
    def _format_error(self, show_url):
        s = sprintf("[%s]", self.status)
        if self.message:
            s += sprintf(" %s", self.message)
        if show_url and self.url: # pragma: no cover
            s += sprintf(" (%s)", self.url)
        if self.traceback: # pragma: no cover
            s += sprintf("\nServer traceback:\n%s", self.traceback)
        return s
    def __str__(self):
        return self._format_error(show_url = False)
    def __repr__(self): # pragma: no cover
        return self._format_error(show_url = True)

class ClientError(Error):
    pass
class ServerError(Error):
    pass
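For illustration, this is how the new exception classes render once raised
(a sketch; the status, message, and URL values here are made up):

    from nilmdb.client.errors import ClientError

    try:
        raise ClientError(status = "400 Bad Request",
                          message = "no such stream",
                          url = "http://localhost/stream/extract")
    except ClientError as e:
        print(str(e))    # [400 Bad Request] no such stream
        print(repr(e))   # same, plus the URL, via _format_error(show_url = True)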
@@ -1,8 +1,9 @@
"""HTTP client library"""

from __future__ import absolute_import
from nilmdb.utils.printf import *
import nilmdb
import nilmdb.utils
from nilmdb.utils.printf import *
from nilmdb.client.errors import *

import time
import sys
@@ -10,36 +11,9 @@ import re
import os
import simplejson as json
import urlparse
import urllib
import pycurl
import cStringIO

class Error(Exception):
    """Base exception for both ClientError and ServerError responses"""
    def __init__(self,
                 status = "Unspecified error",
                 message = None,
                 url = None,
                 traceback = None):
        Exception.__init__(self, status)
        self.status = status       # e.g. "400 Bad Request"
        self.message = message     # textual message from the server
        self.url = url             # URL we were requesting
        self.traceback = traceback # server traceback, if available
    def __str__(self):
        s = sprintf("[%s]", self.status)
        if self.message:
            s += sprintf(" %s", self.message)
        if self.url:
            s += sprintf(" (%s)", self.url)
        if self.traceback: # pragma: no cover
            s += sprintf("\nServer traceback:\n%s", self.traceback)
        return s
class ClientError(Error):
    pass
class ServerError(Error):
    pass

class HTTPClient(object):
    """Class to manage and perform HTTP requests from the client"""
    def __init__(self, baseurl = ""):
@@ -59,7 +33,8 @@ class HTTPClient(object):
    def _setup_url(self, url = "", params = ""):
        url = urlparse.urljoin(self.baseurl, url)
        if params:
            url = urlparse.urljoin(url, "?" + urllib.urlencode(params, True))
            url = urlparse.urljoin(
                url, "?" + nilmdb.utils.urllib.urlencode(params))
        self.curl.setopt(pycurl.URL, url)
        self.url = url

@@ -112,13 +87,14 @@ class HTTPClient(object):
        self.curl.setopt(pycurl.WRITEFUNCTION, callback)
        self.curl.perform()
        try:
            for i in nilmdb.utils.Iteratorizer(func):
                if self._status == 200:
                    # If we had a 200 response, yield the data to the caller.
                    yield i
                else:
                    # Otherwise, collect it into an error string.
                    error_body += i
            with nilmdb.utils.Iteratorizer(func, curl_hack = True) as it:
                for i in it:
                    if self._status == 200:
                        # If we had a 200 response, yield the data to caller.
                        yield i
                    else:
                        # Otherwise, collect it into an error string.
                        error_body += i
        except pycurl.error as e:
            raise ServerError(status = "502 Error",
                              url = self.url,
@@ -188,9 +164,9 @@ class HTTPClient(object):

    def put(self, url, postdata, params = None, retjson = True):
        """Simple PUT"""
        self.curl.setopt(pycurl.UPLOAD, 1)
        self._setup_url(url, params)
        data = cStringIO.StringIO(postdata)
        self.curl.setopt(pycurl.UPLOAD, 1)
        self.curl.setopt(pycurl.READFUNCTION, data.read)
        return self._doreq(url, params, retjson)

@@ -216,8 +192,8 @@ class HTTPClient(object):

    def put_gen(self, url, postdata, params = None, retjson = True):
        """Simple PUT, returning a generator"""
        self.curl.setopt(pycurl.UPLOAD, 1)
        self._setup_url(url, params)
        data = cStringIO.StringIO(postdata)
        self.curl.setopt(pycurl.UPLOAD, 1)
        self.curl.setopt(pycurl.READFUNCTION, data.read)
        return self._doreq_gen(url, params, retjson)
@@ -1 +1,3 @@
"""nilmdb.cmdline"""

from .cmdline import Cmdline
@@ -1,22 +1,21 @@
"""Command line client functionality"""

from __future__ import absolute_import
import nilmdb
from nilmdb.utils.printf import *
import nilmdb.client
from nilmdb.utils import datetime_tz

import datetime_tz
import dateutil.parser
import sys
import re
import argparse
from argparse import ArgumentDefaultsHelpFormatter as def_form

version = "0.1"
version = "1.0"

# Valid subcommands.  Defined in separate files just to break
# things up -- they're still called with Cmdline as self.
subcommands = [ "info", "create", "list", "metadata", "insert", "extract",
                "destroy" ]
                "remove", "destroy" ]

# Import the subcommand modules.  Equivalent way of doing this would be
#   from . import info as cmd_info
@@ -24,10 +23,16 @@ subcmd_mods = {}
for cmd in subcommands:
    subcmd_mods[cmd] = __import__("nilmdb.cmdline." + cmd, fromlist = [ cmd ])

class JimArgumentParser(argparse.ArgumentParser):
    def error(self, message):
        self.print_usage(sys.stderr)
        self.exit(2, sprintf("error: %s\n", message))

class Cmdline(object):

    def __init__(self, argv):
        self.argv = argv
        self.client = None

    def arg_time(self, toparse):
        """Parse a time string argument"""
@@ -43,10 +48,10 @@ class Cmdline(object):
        If the string doesn't contain a timestamp, the current local
        timezone is assumed (e.g. from the TZ env var).
        """
        # If string doesn't contain at least 6 digits, consider it
        # invalid.  smartparse might otherwise accept empty strings
        # and strings with just separators.
        if len(re.findall(r"\d", toparse)) < 6:
        # If string isn't "now" and doesn't contain at least 4 digits,
        # consider it invalid.  smartparse might otherwise accept
        # empty strings and strings with just separators.
        if toparse != "now" and len(re.findall(r"\d", toparse)) < 4:
            raise ValueError("not enough digits for a timestamp")

        # Try to just parse the time as given
@@ -93,8 +98,8 @@ class Cmdline(object):
        version_string = sprintf("nilmtool %s, client library %s",
                                 version, nilmdb.Client.client_version)

        self.parser = argparse.ArgumentParser(add_help = False,
                                              formatter_class = def_form)
        self.parser = JimArgumentParser(add_help = False,
                                        formatter_class = def_form)

        group = self.parser.add_argument_group("General options")
        group.add_argument("-h", "--help", action='help',
@@ -119,7 +124,8 @@ class Cmdline(object):

    def die(self, formatstr, *args):
        fprintf(sys.stderr, formatstr + "\n", *args)
        self.client.close()
        if self.client:
            self.client.close()
        sys.exit(-1)

    def run(self):
@@ -131,13 +137,17 @@ class Cmdline(object):
        self.parser_setup()
        self.args = self.parser.parse_args(self.argv)

        # Run arg verify handler if there is one
        if "verify" in self.args:
            self.args.verify(self)

        self.client = nilmdb.Client(self.args.url)

        # Make a test connection to make sure things work
        try:
            server_version = self.client.version()
        except nilmdb.client.Error as e:
            self.die("Error connecting to server: %s", str(e))
            self.die("error connecting to server: %s", str(e))

        # Now dispatch client request to appropriate function.  Parser
        # should have ensured that we don't have any unknown commands
@@ -1,17 +1,27 @@
from __future__ import absolute_import
from nilmdb.utils.printf import *
import nilmdb
import nilmdb.client
import textwrap

from argparse import ArgumentDefaultsHelpFormatter as def_form
from argparse import RawDescriptionHelpFormatter as raw_form

def setup(self, sub):
    cmd = sub.add_parser("create", help="Create a new stream",
                         formatter_class = def_form,
                         formatter_class = raw_form,
                         description="""
                         Create a new empty stream at the
                         specified path and with the specifed
                         layout type.
                         """)
Create a new empty stream at the specified path and with the specified
layout type.

Layout types are of the format: type_count

'type' is a data type like 'float32', 'float64', 'uint16', 'int32', etc.

'count' is the number of columns of this type.

For example, 'float32_8' means the data for this stream has 8 columns of
32-bit floating point values.
""")
    cmd.set_defaults(handler = cmd_create)
    group = cmd.add_argument_group("Required arguments")
    group.add_argument("path",
@@ -24,4 +34,4 @@ def cmd_create(self):
    try:
        self.client.stream_create(self.args.path, self.args.layout)
    except nilmdb.client.ClientError as e:
        self.die("Error creating stream: %s", str(e))
        self.die("error creating stream: %s", str(e))
@@ -1,5 +1,5 @@
from __future__ import absolute_import
from nilmdb.utils.printf import *
import nilmdb
import nilmdb.client

from argparse import ArgumentDefaultsHelpFormatter as def_form
@@ -22,4 +22,4 @@ def cmd_destroy(self):
    try:
        self.client.stream_destroy(self.args.path)
    except nilmdb.client.ClientError as e:
        self.die("Error deleting stream: %s", str(e))
        self.die("error destroying stream: %s", str(e))
@@ -1,5 +1,6 @@
from __future__ import absolute_import
from __future__ import print_function
from nilmdb.utils.printf import *
import nilmdb
import nilmdb.client
import sys

@@ -8,17 +9,18 @@ def setup(self, sub):
                         description="""
                         Extract data from a stream.
                         """)
    cmd.set_defaults(handler = cmd_extract)
    cmd.set_defaults(verify = cmd_extract_verify,
                     handler = cmd_extract)

    group = cmd.add_argument_group("Data selection")
    group.add_argument("path",
                       help="Path of stream, e.g. /foo/bar")
    group.add_argument("-s", "--start", required=True,
                       metavar="TIME", type=self.arg_time,
                       help="Starting timestamp (free-form)")
                       help="Starting timestamp (free-form, inclusive)")
    group.add_argument("-e", "--end", required=True,
                       metavar="TIME", type=self.arg_time,
                       help="Ending timestamp (free-form)")
                       help="Ending timestamp (free-form, noninclusive)")

    group = cmd.add_argument_group("Output format")
    group.add_argument("-b", "--bare", action="store_true",
@@ -26,20 +28,32 @@ def setup(self, sub):
    group.add_argument("-a", "--annotate", action="store_true",
                       help="Include comments with some information "
                       "about the stream")
    group.add_argument("-T", "--timestamp-raw", action="store_true",
                       help="Show raw timestamps in annotated information")
    group.add_argument("-c", "--count", action="store_true",
                       help="Just output a count of matched data points")

def cmd_extract_verify(self):
    if self.args.start is not None and self.args.end is not None:
        if self.args.start > self.args.end:
            self.parser.error("start is after end")

def cmd_extract(self):
    streams = self.client.stream_list(self.args.path)
    if len(streams) != 1:
        self.die("Error getting stream info for path %s", self.args.path)
        self.die("error getting stream info for path %s", self.args.path)
    layout = streams[0][1]

    if self.args.timestamp_raw:
        time_string = repr
    else:
        time_string = self.time_string

    if self.args.annotate:
        printf("# path: %s\n", self.args.path)
        printf("# layout: %s\n", layout)
        printf("# start: %s\n", self.time_string(self.args.start))
        printf("# end: %s\n", self.time_string(self.args.end))
        printf("# start: %s\n", time_string(self.args.start))
        printf("# end: %s\n", time_string(self.args.end))

    printed = False
    for dataline in self.client.stream_extract(self.args.path,
@@ -50,7 +64,7 @@ def cmd_extract(self):
            # Strip timestamp (first element).  Doesn't make sense
            # if we are only returning a count.
            dataline = ' '.join(dataline.split(' ')[1:])
            print dataline
            print(dataline)
            printed = True
    if not printed:
        if self.args.annotate:
@@ -1,4 +1,3 @@
from __future__ import absolute_import
from nilmdb.utils.printf import *

from argparse import ArgumentDefaultsHelpFormatter as def_form
@@ -1,7 +1,7 @@
from __future__ import absolute_import
from nilmdb.utils.printf import *
import nilmdb
import nilmdb.client
import nilmdb.timestamper
import nilmdb.utils.timestamper as timestamper

import sys

@@ -51,12 +51,12 @@ def cmd_insert(self):
    # Find requested stream
    streams = self.client.stream_list(self.args.path)
    if len(streams) != 1:
        self.die("Error getting stream info for path %s", self.args.path)
        self.die("error getting stream info for path %s", self.args.path)

    layout = streams[0][1]

    if self.args.start and len(self.args.file) != 1:
        self.die("--start can only be used with one input file, for now")
        self.die("error: --start can only be used with one input file")

    for filename in self.args.file:
        if filename == '-':
@@ -65,11 +65,11 @@ def cmd_insert(self):
        try:
            infile = open(filename, "r")
        except IOError:
            self.die("Error opening input file %s", filename)
            self.die("error opening input file %s", filename)

        # Build a timestamper for this file
        if self.args.none:
            ts = nilmdb.timestamper.TimestamperNull(infile)
            ts = timestamper.TimestamperNull(infile)
        else:
            if self.args.start:
                start = self.args.start
@@ -77,14 +77,14 @@ def cmd_insert(self):
            try:
                start = self.parse_time(filename)
            except ValueError:
                self.die("Error extracting time from filename '%s'",
                self.die("error extracting time from filename '%s'",
                         filename)

            if not self.args.rate:
                self.die("Need to specify --rate")
                self.die("error: --rate is needed, but was not specified")
            rate = self.args.rate

            ts = nilmdb.timestamper.TimestamperRate(infile, start, rate)
            ts = timestamper.TimestamperRate(infile, start, rate)

        # Print info
        if not self.args.quiet:
@@ -100,6 +100,6 @@ def cmd_insert(self):
        # ugly bracketed ranges of 16-digit numbers and a mangled URL.
        # Need to consider adding something like e.prettyprint()
        # that is smarter about the contents of the error.
        self.die("Error inserting data: %s", str(e))
        self.die("error inserting data: %s", str(e))

        return
@@ -1,8 +1,9 @@
from __future__ import absolute_import
from nilmdb.utils.printf import *
import nilmdb
import nilmdb.client

import fnmatch
import argparse
from argparse import ArgumentDefaultsHelpFormatter as def_form

def setup(self, sub):
@@ -13,27 +14,53 @@ def setup(self, sub):
                         optionally filtering by layout or path.  Wildcards
                         are accepted.
                         """)
    cmd.set_defaults(handler = cmd_list)
    cmd.set_defaults(verify = cmd_list_verify,
                     handler = cmd_list)

    group = cmd.add_argument_group("Stream filtering")
    group.add_argument("-p", "--path", metavar="PATH", default="*",
                       help="Match only this path (-p can be omitted)")
    group.add_argument("path_positional", default="*",
                       nargs="?", help=argparse.SUPPRESS)
    group.add_argument("-l", "--layout", default="*",
                       help="Match only this stream layout")
    group.add_argument("-p", "--path", default="*",
                       help="Match only this path")

    group = cmd.add_argument_group("Interval details")
    group.add_argument("-d", "--detail", action="store_true",
                       help="Show available data time intervals")
    group.add_argument("-T", "--timestamp-raw", action="store_true",
                       help="Show raw timestamps in time intervals")
    group.add_argument("-s", "--start",
                       metavar="TIME", type=self.arg_time,
                       help="Starting timestamp (free-form)")
                       help="Starting timestamp (free-form, inclusive)")
    group.add_argument("-e", "--end",
                       metavar="TIME", type=self.arg_time,
                       help="Ending timestamp (free-form)")
                       help="Ending timestamp (free-form, noninclusive)")

def cmd_list_verify(self):
    # A hidden "path_positional" argument lets the user leave off the
    # "-p" when specifying the path.  Handle it here.
    got_opt = self.args.path != "*"
    got_pos = self.args.path_positional != "*"
    if got_pos:
        if got_opt:
            self.parser.error("too many paths specified")
        else:
            self.args.path = self.args.path_positional

    if self.args.start is not None and self.args.end is not None:
        if self.args.start > self.args.end:
            self.parser.error("start is after end")

def cmd_list(self):
    """List available streams"""
    streams = self.client.stream_list()

    if self.args.timestamp_raw:
        time_string = repr
    else:
        time_string = self.time_string

    for (path, layout) in streams:
        if not (fnmatch.fnmatch(path, self.args.path) and
                fnmatch.fnmatch(layout, self.args.layout)):
@@ -46,9 +73,7 @@ def cmd_list(self):
        printed = False
        for (start, end) in self.client.stream_intervals(path, self.args.start,
                                                         self.args.end):
            printf("  [ %s -> %s ]\n",
                   self.time_string(start),
                   self.time_string(end))
            printf("  [ %s -> %s ]\n", time_string(start), time_string(end))
            printed = True
        if not printed:
            printf("  (no intervals)\n")
@@ -1,5 +1,5 @@
|
||||
from __future__ import absolute_import
|
||||
from nilmdb.utils.printf import *
|
||||
import nilmdb
|
||||
import nilmdb.client
|
||||
|
||||
def setup(self, sub):
|
||||
@@ -43,21 +43,21 @@ def cmd_metadata(self):
|
||||
for keyval in keyvals:
|
||||
kv = keyval.split('=')
|
||||
if len(kv) != 2 or kv[0] == "":
|
||||
self.die("Error parsing key=value argument '%s'", keyval)
|
||||
self.die("error parsing key=value argument '%s'", keyval)
|
||||
data[kv[0]] = kv[1]
|
||||
|
||||
# Make the call
|
||||
try:
|
||||
handler(self.args.path, data)
|
||||
except nilmdb.client.ClientError as e:
|
||||
self.die("Error setting/updating metadata: %s", str(e))
|
||||
self.die("error setting/updating metadata: %s", str(e))
|
||||
else:
|
||||
# Get (or unspecified)
|
||||
keys = self.args.get or None
|
||||
try:
|
||||
data = self.client.stream_get_metadata(self.args.path, keys)
|
||||
except nilmdb.client.ClientError as e:
|
||||
self.die("Error getting metadata: %s", str(e))
|
||||
self.die("error getting metadata: %s", str(e))
|
||||
for key, value in sorted(data.items()):
|
||||
# Omit nonexistant keys
|
||||
if value is None:
|
||||
|
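The round trip these commands drive can also be scripted against the client directly; a rough sketch, assuming a Client class whose methods match the calls made above (stream_get_metadata and friends) and a placeholder server URL:

    import nilmdb.client

    client = nilmdb.client.Client("http://localhost:12380/")   # placeholder URL

    client.stream_set_metadata("/newton/prep", {"description": "prep data"})
    client.stream_update_metadata("/newton/prep", {"scale": "120"})
    print client.stream_get_metadata("/newton/prep", ["scale"])
    # {u'scale': u'120'}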
nilmdb/cmdline/remove.py (new file, 44 lines)
@@ -0,0 +1,44 @@
from nilmdb.utils.printf import *
import nilmdb
import nilmdb.client

import sys

def setup(self, sub):
    cmd = sub.add_parser("remove", help="Remove data",
                         description="""
                         Remove all data from a specified time range within a
                         stream.
                         """)
    cmd.set_defaults(verify = cmd_remove_verify,
                     handler = cmd_remove)

    group = cmd.add_argument_group("Data selection")
    group.add_argument("path",
                       help="Path of stream, e.g. /foo/bar")
    group.add_argument("-s", "--start", required=True,
                       metavar="TIME", type=self.arg_time,
                       help="Starting timestamp (free-form, inclusive)")
    group.add_argument("-e", "--end", required=True,
                       metavar="TIME", type=self.arg_time,
                       help="Ending timestamp (free-form, noninclusive)")

    group = cmd.add_argument_group("Output format")
    group.add_argument("-c", "--count", action="store_true",
                       help="Output number of data points removed")

def cmd_remove_verify(self):
    if self.args.start is not None and self.args.end is not None:
        if self.args.start > self.args.end:
            self.parser.error("start is after end")

def cmd_remove(self):
    try:
        count = self.client.stream_remove(self.args.path,
                                          self.args.start, self.args.end)
    except nilmdb.client.ClientError as e:
        self.die("error removing data: %s", str(e))

    if self.args.count:
        printf("%d\n", count)

    return 0
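The inclusive start / noninclusive end convention means back-to-back removals cover a range exactly once; a small plain-Python sketch (hypothetical timestamps) of why half-open intervals compose cleanly:

    def covered(t, intervals):
        # Half-open test: [start, end) includes start but excludes end.
        return any(start <= t < end for (start, end) in intervals)

    removals = [(1000.0, 2000.0), (2000.0, 3000.0)]   # adjacent removals
    print covered(1999.999, removals)  # True
    print covered(2000.0, removals)    # True (exactly once, via the second range)
    print covered(3000.0, removals)    # False (ends are noninclusive)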
nilmdb/server/__init__.py (new file, 15 lines)
@@ -0,0 +1,15 @@
"""nilmdb.server"""

# Try to set up pyximport to automatically rebuild Cython modules.  If
# this doesn't work, it's OK, as long as the modules were built externally.
# (e.g. python setup.py build_ext --inplace)
try:
    import pyximport
    pyximport.install()
    import layout
except: # pragma: no cover
    pass

from .nilmdb import NilmDB
from .server import Server
from .errors import *
nilmdb/server/bulkdata.py (new file, 462 lines)
@@ -0,0 +1,462 @@
# Fixed record size bulk data storage

# Need absolute_import so that "import nilmdb" won't pull in
# nilmdb.py, but will pull the parent nilmdb module instead.
from __future__ import absolute_import
from __future__ import division
import nilmdb
from nilmdb.utils.printf import *

import os
import sys
import cPickle as pickle
import struct
import fnmatch
import mmap
import re

# Up to 256 open file descriptors at any given time.
# These variables are global so they can be used in the decorator arguments.
table_cache_size = 16
fd_cache_size = 16

@nilmdb.utils.must_close(wrap_verify = True)
class BulkData(object):
    def __init__(self, basepath, **kwargs):
        self.basepath = basepath
        self.root = os.path.join(self.basepath, "data")

        # Tuneables
        if "file_size" in kwargs:
            self.file_size = kwargs["file_size"]
        else:
            # Default to approximately 128 MiB per file
            self.file_size = 128 * 1024 * 1024

        if "files_per_dir" in kwargs:
            self.files_per_dir = kwargs["files_per_dir"]
        else:
            # 32768 files per dir should work even on FAT32
            self.files_per_dir = 32768

        # Make root path
        if not os.path.isdir(self.root):
            os.mkdir(self.root)

    def close(self):
        self.getnode.cache_remove_all()

    def _encode_filename(self, path):
        # Encode all paths to UTF-8, regardless of sys.getfilesystemencoding(),
        # because we want to be able to represent all code points and the user
        # will never be directly exposed to filenames.  We can then do path
        # manipulations on the UTF-8 directly.
        if isinstance(path, unicode):
            return path.encode('utf-8')
        return path

    def create(self, unicodepath, layout_name):
        """
        unicodepath: path to the data (e.g. u'/newton/prep').
        Paths must contain at least two elements, e.g.:
           /newton/prep
           /newton/raw
           /newton/upstairs/prep
           /newton/upstairs/raw

        layout_name: string for nilmdb.layout.get_named(), e.g. 'float32_8'
        """
        path = self._encode_filename(unicodepath)

        if path[0] != '/':
            raise ValueError("paths must start with /")
        [ group, node ] = path.rsplit("/", 1)
        if group == '':
            raise ValueError("invalid path; path must contain at least one "
                             "folder")

        # Get layout, and build format string for struct module
        try:
            layout = nilmdb.server.layout.get_named(layout_name)
            struct_fmt = '<d'  # Little endian, double timestamp
            struct_mapping = {
                "int8":    'b',
                "uint8":   'B',
                "int16":   'h',
                "uint16":  'H',
                "int32":   'i',
                "uint32":  'I',
                "int64":   'q',
                "uint64":  'Q',
                "float32": 'f',
                "float64": 'd',
                }
            for n in range(layout.count):
                struct_fmt += struct_mapping[layout.datatype]
        except KeyError:
            raise ValueError("no such layout, or bad data types")

        # Create the table.  Note that we make a distinction here
        # between NilmDB paths (always Unix style, split apart
        # manually) and OS paths (built up with os.path.join)

        # Make directories leading up to this one
        elements = path.lstrip('/').split('/')
        for i in range(len(elements)):
            ospath = os.path.join(self.root, *elements[0:i])
            if Table.exists(ospath):
                raise ValueError("path is subdir of existing node")
            if not os.path.isdir(ospath):
                os.mkdir(ospath)

        # Make the final dir
        ospath = os.path.join(self.root, *elements)
        if os.path.isdir(ospath):
            raise ValueError("subdirs of this path already exist")
        os.mkdir(ospath)

        # Write format string to file
        Table.create(ospath, struct_fmt, self.file_size, self.files_per_dir)

        # Open and cache it
        self.getnode(unicodepath)

        # Success
        return

    def destroy(self, unicodepath):
        """Fully remove all data at a particular path.  No way to undo
        it!  The group/path structure is removed, too."""
        path = self._encode_filename(unicodepath)

        # Get OS path
        elements = path.lstrip('/').split('/')
        ospath = os.path.join(self.root, *elements)

        # Remove Table object from cache
        self.getnode.cache_remove(self, unicodepath)

        # Remove the contents of the target directory
        if not Table.exists(ospath):
            raise ValueError("nothing at that path")
        for (root, dirs, files) in os.walk(ospath, topdown = False):
            for name in files:
                os.remove(os.path.join(root, name))
            for name in dirs:
                os.rmdir(os.path.join(root, name))

        # Remove empty parent directories
        for i in reversed(range(len(elements))):
            ospath = os.path.join(self.root, *elements[0:i+1])
            try:
                os.rmdir(ospath)
            except OSError:
                break

    # Cache open tables
    @nilmdb.utils.lru_cache(size = table_cache_size,
                            onremove = lambda x: x.close())
    def getnode(self, unicodepath):
        """Return a Table object corresponding to the given database
        path, which must exist."""
        path = self._encode_filename(unicodepath)
        elements = path.lstrip('/').split('/')
        ospath = os.path.join(self.root, *elements)
        return Table(ospath)

@nilmdb.utils.must_close(wrap_verify = True)
class Table(object):
    """Tools to help access a single table (data at a specific OS path)."""
    # See design.md for design details

    # Class methods, to help keep format details in this class.
    @classmethod
    def exists(cls, root):
        """Return True if a table appears to exist at this OS path"""
        return os.path.isfile(os.path.join(root, "_format"))

    @classmethod
    def create(cls, root, struct_fmt, file_size, files_per_dir):
        """Initialize a table at the given OS path.
        'struct_fmt' is a Struct module format description"""

        # Calculate rows per file so that each file is approximately
        # file_size bytes.
        packer = struct.Struct(struct_fmt)
        rows_per_file = max(file_size // packer.size, 1)

        format = { "rows_per_file": rows_per_file,
                   "files_per_dir": files_per_dir,
                   "struct_fmt": struct_fmt,
                   "version": 1 }
        with open(os.path.join(root, "_format"), "wb") as f:
            pickle.dump(format, f, 2)

    # Normal methods
    def __init__(self, root):
        """'root' is the full OS path to the directory of this table"""
        self.root = root

        # Load the format and build packer
        with open(os.path.join(self.root, "_format"), "rb") as f:
            format = pickle.load(f)

        if format["version"] != 1: # pragma: no cover (just future proofing)
            raise NotImplementedError("version " + format["version"] +
                                      " bulk data store not supported")

        self.rows_per_file = format["rows_per_file"]
        self.files_per_dir = format["files_per_dir"]
        self.packer = struct.Struct(format["struct_fmt"])
        self.file_size = self.packer.size * self.rows_per_file

        # Find nrows
        self.nrows = self._get_nrows()

    def close(self):
        self.mmap_open.cache_remove_all()

    # Internal helpers
    def _get_nrows(self):
        """Find nrows by locating the lexicographically last filename
        and using its size"""
        # Note that this just finds a 'nrows' that is guaranteed to be
        # greater than the row number of any piece of data that
        # currently exists, not necessarily all data that _ever_
        # existed.
        regex = re.compile("^[0-9a-f]{4,}$")

        # Find the last directory.  We sort and loop through all of them,
        # starting with the numerically greatest, because the dirs could be
        # empty if something was deleted.
        subdirs = sorted(filter(regex.search, os.listdir(self.root)),
                         key = lambda x: int(x, 16), reverse = True)

        for subdir in subdirs:
            # Now find the last file in that dir
            path = os.path.join(self.root, subdir)
            files = filter(regex.search, os.listdir(path))
            if not files: # pragma: no cover (shouldn't occur)
                # Empty dir: try the next one
                continue

            # Find the numerical max
            filename = max(files, key = lambda x: int(x, 16))
            offset = os.path.getsize(os.path.join(self.root, subdir, filename))

            # Convert to row number
            return self._row_from_offset(subdir, filename, offset)

        # No files, so no data
        return 0

    def _offset_from_row(self, row):
        """Return a (subdir, filename, offset, count) tuple:

        subdir: subdirectory for the file
        filename: the filename that contains the specified row
        offset: byte offset of the specified row within the file
        count: number of rows (starting at offset) that fit in the file
        """
        filenum = row // self.rows_per_file
        # It's OK if these format specifiers are too short; the filenames
        # will just get longer but will still sort correctly.
        dirname = sprintf("%04x", filenum // self.files_per_dir)
        filename = sprintf("%04x", filenum % self.files_per_dir)
        offset = (row % self.rows_per_file) * self.packer.size
        count = self.rows_per_file - (row % self.rows_per_file)
        return (dirname, filename, offset, count)

    def _row_from_offset(self, subdir, filename, offset):
        """Return the row number that corresponds to the given
        'subdir/filename' and byte-offset within that file."""
        if (offset % self.packer.size) != 0: # pragma: no cover; shouldn't occur
            raise ValueError("file offset is not a multiple of data size")
        filenum = int(subdir, 16) * self.files_per_dir + int(filename, 16)
        row = (filenum * self.rows_per_file) + (offset // self.packer.size)
        return row
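The row/file arithmetic above is easy to check by hand; a minimal standalone sketch using tiny hypothetical sizes rather than the real 128 MiB / 32768-file defaults:

    # float32_8 layout: little-endian double timestamp + 8 float32 values
    row_size = 8 + 8 * 4    # 40 bytes per row
    rows_per_file = 100     # tiny, for illustration
    files_per_dir = 16

    def offset_from_row(row):
        filenum = row // rows_per_file
        subdir = "%04x" % (filenum // files_per_dir)
        filename = "%04x" % (filenum % files_per_dir)
        offset = (row % rows_per_file) * row_size
        return (subdir, filename, offset)

    print offset_from_row(0)      # ('0000', '0000', 0)
    print offset_from_row(1234)   # ('0000', '000c', 1360): file 12, row 34 in it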
    # Cache open files
    @nilmdb.utils.lru_cache(size = fd_cache_size,
                            keys = slice(0,3), # exclude newsize
                            onremove = lambda x: x.close())
    def mmap_open(self, subdir, filename, newsize = None):
        """Open and map a given 'subdir/filename' (relative to self.root).
        Will be automatically closed when evicted from the cache.

        If 'newsize' is provided, the file is truncated to the given
        size before the mapping is returned.  (Note that the LRU cache
        on this function means the truncate will only happen if the
        object isn't already cached; mmap.resize should be used too.)"""
        try:
            os.mkdir(os.path.join(self.root, subdir))
        except OSError:
            pass
        f = open(os.path.join(self.root, subdir, filename), "a+", 0)
        if newsize is not None:
            # mmap can't map a zero-length file, so this allows the
            # caller to set the filesize between file creation and
            # mmap.
            f.truncate(newsize)
        mm = mmap.mmap(f.fileno(), 0)
        return mm

    def mmap_open_resize(self, subdir, filename, newsize):
        """Open and map a given 'subdir/filename' (relative to self.root).
        The file is resized to the given size."""
        # Pass new size to mmap_open
        mm = self.mmap_open(subdir, filename, newsize)
        # In case we got a cached copy, need to call mm.resize too.
        mm.resize(newsize)
        return mm

    def append(self, data):
        """Append the data and flush it to disk.
        data is a nested Python list [[row],[row],[...]]"""
        remaining = len(data)
        dataiter = iter(data)
        while remaining:
            # See how many rows we can fit into the current file, and open it
            (subdir, fname, offset, count) = self._offset_from_row(self.nrows)
            if count > remaining:
                count = remaining
            newsize = offset + count * self.packer.size
            mm = self.mmap_open_resize(subdir, fname, newsize)
            mm.seek(offset)

            # Write the data
            for i in xrange(count):
                row = dataiter.next()
                mm.write(self.packer.pack(*row))
            remaining -= count
            self.nrows += count

    def __getitem__(self, key):
        """Extract data and return it.  Supports simple indexing
        (table[n]) and range slices (table[n:m]).  Returns a nested
        Python list [[row],[row],[...]]"""

        # Handle simple slices
        if isinstance(key, slice):
            # Fall back to brute force if the slice isn't simple
            if ((key.step is not None and key.step != 1) or
                key.start is None or
                key.stop is None or
                key.start >= key.stop or
                key.start < 0 or
                key.stop > self.nrows):
                return [ self[x] for x in xrange(*key.indices(self.nrows)) ]

            ret = []
            row = key.start
            remaining = key.stop - key.start
            while remaining:
                (subdir, filename, offset, count) = self._offset_from_row(row)
                if count > remaining:
                    count = remaining
                mm = self.mmap_open(subdir, filename)
                for i in xrange(count):
                    ret.append(list(self.packer.unpack_from(mm, offset)))
                    offset += self.packer.size
                remaining -= count
                row += count
            return ret

        # Handle single points
        if key < 0 or key >= self.nrows:
            raise IndexError("Index out of range")
        (subdir, filename, offset, count) = self._offset_from_row(key)
        mm = self.mmap_open(subdir, filename)
        # unpack_from ignores the mmap object's current seek position
        return list(self.packer.unpack_from(mm, offset))
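The packer round trip used by append() and __getitem__ can be demonstrated directly with the struct module; a small sketch using the same '<d'-prefixed format the store would build for a hypothetical two-column float32 layout:

    import struct

    packer = struct.Struct('<dff')          # timestamp + two float32 columns
    buf = bytearray(packer.size * 2)

    # Pack two rows end-to-end, the way append() writes into the mmap.
    packer.pack_into(buf, 0, 1234567890.0, 1.5, -2.5)
    packer.pack_into(buf, packer.size, 1234567890.125, 1.25, -2.25)

    # unpack_from reads by absolute offset, like __getitem__ does.
    print list(packer.unpack_from(buf, packer.size))
    # [1234567890.125, 1.25, -2.25]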
    def _remove_rows(self, subdir, filename, start, stop):
        """Helper to mark specific rows as being removed from a
        file, and potentially removing or truncating the file itself."""
        # Import an existing list of deleted rows for this file
        datafile = os.path.join(self.root, subdir, filename)
        cachefile = datafile + ".removed"
        try:
            with open(cachefile, "rb") as f:
                ranges = pickle.load(f)
            cachefile_present = True
        except:
            ranges = []
            cachefile_present = False

        # Append our new range and sort
        ranges.append((start, stop))
        ranges.sort()

        # Merge adjacent ranges into "out"
        merged = []
        prev = None
        for new in ranges:
            if prev is None:
                # No previous range, so remember this one
                prev = new
            elif prev[1] == new[0]:
                # Previous range connected to this new one; extend prev
                prev = (prev[0], new[1])
            else:
                # Not connected; append previous and start again
                merged.append(prev)
                prev = new
        if prev is not None:
            merged.append(prev)

        # If the range covered the whole file, we can delete it now.
        # Note that the last file in a table may be only partially
        # full (smaller than self.rows_per_file).  We purposely leave
        # those files around rather than deleting them, because the
        # remainder will be filled on a subsequent append(), and things
        # are generally easier if we don't have to special-case that.
        if (len(merged) == 1 and
            merged[0][0] == 0 and merged[0][1] == self.rows_per_file):
            # Close potentially open file in mmap_open LRU cache
            self.mmap_open.cache_remove(self, subdir, filename)

            # Delete files
            os.remove(datafile)
            if cachefile_present:
                os.remove(cachefile)

            # Try deleting subdir, too
            try:
                os.rmdir(os.path.join(self.root, subdir))
            except:
                pass
        else:
            # Update cache.  Try to do it atomically.
            nilmdb.utils.atomic.replace_file(cachefile,
                                             pickle.dumps(merged, 2))
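The merge loop above only coalesces ranges whose endpoints meet exactly; a quick standalone check with hypothetical row ranges:

    def merge_ranges(ranges):
        # Same algorithm as _remove_rows: sort, then join half-open
        # [start, stop) pairs whose endpoints touch.
        merged = []
        prev = None
        for new in sorted(ranges):
            if prev is None:
                prev = new
            elif prev[1] == new[0]:
                prev = (prev[0], new[1])
            else:
                merged.append(prev)
                prev = new
        if prev is not None:
            merged.append(prev)
        return merged

    print merge_ranges([(40, 60), (0, 40), (80, 100)])
    # [(0, 60), (80, 100)]: rows 60..79 are still present, so the
    # second range stays separate and the file cannot be deleted yet.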
    def remove(self, start, stop):
        """Remove specified rows [start, stop) from this table.

        If a file is left empty, it is fully removed.  Otherwise, a
        parallel data file is used to remember which rows have been
        removed, and the file is otherwise untouched."""
        if start < 0 or start > stop or stop > self.nrows:
            raise IndexError("Index out of range")

        row = start
        remaining = stop - start
        while remaining:
            # Loop through each file that we need to touch
            (subdir, filename, offset, count) = self._offset_from_row(row)
            if count > remaining:
                count = remaining
            row_offset = offset // self.packer.size
            # Mark the rows as being removed
            self._remove_rows(subdir, filename, row_offset, row_offset + count)
            remaining -= count
            row += count

class TimestampOnlyTable(object):
    """Helper that lets us pass a Table object into bisect, by
    returning only the timestamp when a particular row is requested."""
    def __init__(self, table):
        self.table = table
    def __getitem__(self, index):
        return self.table[index][0]
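This wrapper is what makes the stdlib bisect module usable for time searches later in nilmdb.py; a minimal sketch with an in-memory stand-in for a real Table:

    import bisect

    class FakeTable(object):
        # Stand-in for bulkdata.Table: row -> [timestamp, values...]
        def __init__(self, rows):
            self.rows = rows
        def __getitem__(self, index):
            return self.rows[index]

    class TimestampOnlyTable(object):
        def __init__(self, table):
            self.table = table
        def __getitem__(self, index):
            return self.table[index][0]

    table = FakeTable([[100.0, 1], [200.0, 2], [300.0, 3], [400.0, 4]])
    # First row whose timestamp is >= 250.0, searched within rows [0, 4):
    print bisect.bisect_left(TimestampOnlyTable(table), 250.0, 0, 4)   # 2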
nilmdb/server/errors.py (new file, 12 lines)
@@ -0,0 +1,12 @@
"""Exceptions"""

class NilmDBError(Exception):
    """Base exception for NilmDB errors"""
    def __init__(self, message = "Unspecified error"):
        Exception.__init__(self, message)

class StreamError(NilmDBError):
    pass

class OverlapError(NilmDBError):
    pass
nilmdb/server/interval.pyx:
@@ -37,6 +37,7 @@ cdef class Interval:
         'start' and 'end' are arbitrary floats that represent time
         """
         if start > end:
+            # Explicitly disallow zero-width intervals (since they're half-open)
             raise IntervalError("start %s must precede end %s" % (start, end))
         self.start = float(start)
         self.end = float(end)
@@ -278,7 +279,7 @@ cdef class IntervalSet:

         return out

-    def intersection(self, Interval interval not None):
+    def intersection(self, Interval interval not None, orig = False):
         """
         Compute a sequence of intervals that correspond to the
         intersection between `self` and the provided interval.
@@ -287,6 +288,10 @@ cdef class IntervalSet:

         Output intervals are built as subsets of the intervals in the
         first argument (self).
+
+        If orig = True, also return the original interval that was
+        (potentially) subsetted to make the one that is being
+        returned.
         """
         if not isinstance(interval, Interval):
             raise TypeError("bad type")
@@ -294,11 +299,17 @@ cdef class IntervalSet:
             i = n.obj
             if i:
                 if i.start >= interval.start and i.end <= interval.end:
-                    yield i
+                    if orig:
+                        yield (i, i)
+                    else:
+                        yield i
                 else:
                     subset = i.subset(max(i.start, interval.start),
                                       min(i.end, interval.end))
-                    yield subset
+                    if orig:
+                        yield (subset, i)
+                    else:
+                        yield subset

     cpdef intersects(self, Interval other):
         """Return True if this IntervalSet intersects another interval"""
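The orig=True variant exists so that stream_remove (below) can receive both the clipped piece and the stored interval it came from; a rough pure-Python sketch of the same idea, using plain (start, end) tuples instead of the Cython Interval type:

    def intersection(intervals, bound, orig = False):
        # intervals: sorted (start, end) pairs; bound: range to clip against
        for (s, e) in intervals:
            if e <= bound[0] or s >= bound[1]:
                continue                          # no overlap at all
            clipped = (max(s, bound[0]), min(e, bound[1]))
            if orig:
                yield (clipped, (s, e))
            else:
                yield clipped

    stored = [(0, 10), (20, 30)]
    for (clipped, original) in intersection(stored, (5, 25), orig = True):
        print "%s came from %s" % (clipped, original)
    # (5, 10) came from (0, 10)
    # (20, 25) came from (20, 30)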
nilmdb/server/layout.pyx:
@@ -170,7 +170,7 @@ class Parser(object):
             if line[0] == '#':
                 continue
             (ts, row) = self.layout.parse(line)
-            if ts < last_ts:
+            if ts <= last_ts:
                 raise ValueError("timestamp is not "
                                  "monotonically increasing")
             last_ts = ts
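Changing < to <= makes the check strict, so duplicate timestamps are now rejected along with decreasing ones; a tiny sketch of the behavior:

    def check_monotonic(timestamps):
        # Strictly increasing: equal neighbors fail, just like ts <= last_ts.
        last_ts = float("-inf")
        for ts in timestamps:
            if ts <= last_ts:
                raise ValueError("timestamp is not monotonically increasing")
            last_ts = ts

    check_monotonic([1.0, 2.0, 3.0])   # fine
    check_monotonic([1.0, 2.0, 2.0])   # raises ValueError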
nilmdb/server/nilmdb.py:
@@ -7,11 +7,15 @@ Object that represents a NILM database file.
 Manages both the SQL database and the table storage backend.
 """

-# Need absolute_import so that "import nilmdb" won't pull in nilmdb.py,
-# but will pull the nilmdb module instead.
+# Need absolute_import so that "import nilmdb" won't pull in
+# nilmdb.py, but will pull the parent nilmdb module instead.
 from __future__ import absolute_import
 import nilmdb
 from nilmdb.utils.printf import *
+from nilmdb.server.interval import (Interval, DBInterval,
+                                    IntervalSet, IntervalError)
+from nilmdb.server import bulkdata
+from nilmdb.server.errors import *

 import sqlite3
 import time
@@ -20,12 +24,6 @@ import os
 import errno
 import bisect

-import pyximport
-pyximport.install()
-from nilmdb.interval import Interval, DBInterval, IntervalSet, IntervalError
-
-from . import bulkdata
-
 # Note about performance and transactions:
 #
 # Committing a transaction in the default sync mode (PRAGMA synchronous=FULL)
@@ -77,22 +75,12 @@ _sql_schema_updates = {
     """,
 }

-class NilmDBError(Exception):
-    """Base exception for NilmDB errors"""
-    def __init__(self, message = "Unspecified error"):
-        Exception.__init__(self, self.__class__.__name__ + ": " + message)
-
-class StreamError(NilmDBError):
-    pass
-
-class OverlapError(NilmDBError):
-    pass
-
 @nilmdb.utils.must_close()
 class NilmDB(object):
     verbose = 0

-    def __init__(self, basepath, sync=True, max_results=None):
+    def __init__(self, basepath, sync=True, max_results=None,
+                 bulkdata_args={}):
         # set up path
         self.basepath = os.path.abspath(basepath)

@@ -104,7 +92,7 @@ class NilmDB(object):
             raise IOError("can't create tree " + self.basepath)

         # Our data goes inside it
-        self.data = bulkdata.BulkData(self.basepath)
+        self.data = bulkdata.BulkData(self.basepath, **bulkdata_args)

         # SQLite database too
         sqlfilename = os.path.join(self.basepath, "data.sql")
@@ -173,6 +161,20 @@ class NilmDB(object):

         return iset

+    def _sql_interval_insert(self, id, start, end, start_pos, end_pos):
+        """Helper that adds interval to the SQL database only"""
+        self.con.execute("INSERT INTO ranges "
+                         "(stream_id,start_time,end_time,start_pos,end_pos) "
+                         "VALUES (?,?,?,?,?)",
+                         (id, start, end, start_pos, end_pos))
+
+    def _sql_interval_delete(self, id, start, end, start_pos, end_pos):
+        """Helper that removes interval from the SQL database only"""
+        self.con.execute("DELETE FROM ranges WHERE "
+                         "stream_id=? AND start_time=? AND "
+                         "end_time=? AND start_pos=? AND end_pos=?",
+                         (id, start, end, start_pos, end_pos))
+
     def _add_interval(self, stream_id, interval, start_pos, end_pos):
         """
         Add interval to the internal interval cache, and to the database.
@@ -191,7 +193,7 @@ class NilmDB(object):
         # time range [adjacent.start -> interval.end)
         # and database rows [ adjacent.start_pos -> end_pos ].
         # Only do this if the resulting interval isn't too large.
-        max_merged_rows = 30000000 # a bit more than 1 hour at 8 KHz
+        max_merged_rows = 8000 * 60 * 60 * 1.05 # 1.05 hours at 8 KHz
         adjacent = iset.find_end(interval.start)
         if (adjacent is not None and
             start_pos == adjacent.db_endpos and
@@ -199,14 +201,9 @@ class NilmDB(object):
             # First delete the old one, both from our iset and the
             # database
             iset -= adjacent
-            self.con.execute("DELETE FROM ranges WHERE "
-                             "stream_id=? AND start_time=? AND "
-                             "end_time=? AND start_pos=? AND "
-                             "end_pos=?", (stream_id,
-                                           adjacent.db_start,
-                                           adjacent.db_end,
-                                           adjacent.db_startpos,
-                                           adjacent.db_endpos))
+            self._sql_interval_delete(stream_id,
+                                      adjacent.db_start, adjacent.db_end,
+                                      adjacent.db_startpos, adjacent.db_endpos)

             # Now update our interval so the fallthrough add is
             # correct.
@@ -219,14 +216,54 @@ class NilmDB(object):
                                  start_pos, end_pos))

         # Insert into the database
-        self.con.execute("INSERT INTO ranges "
-                         "(stream_id,start_time,end_time,start_pos,end_pos) "
-                         "VALUES (?,?,?,?,?)",
-                         (stream_id, interval.start, interval.end,
-                          int(start_pos), int(end_pos)))
+        self._sql_interval_insert(stream_id, interval.start, interval.end,
+                                  int(start_pos), int(end_pos))

         self.con.commit()

+    def _remove_interval(self, stream_id, original, remove):
+        """
+        Remove an interval from the internal cache and the database.
+
+        stream_id: id of stream
+        original: original DBInterval; must be already present in DB
+        to_remove: DBInterval to remove; must be subset of 'original'
+        """
+        # Just return if we have nothing to remove
+        if remove.start == remove.end: # pragma: no cover
+            return
+
+        # Load this stream's intervals
+        iset = self._get_intervals(stream_id)
+
+        # Remove existing interval from the cached set and the database
+        iset -= original
+        self._sql_interval_delete(stream_id,
+                                  original.db_start, original.db_end,
+                                  original.db_startpos, original.db_endpos)
+
+        # Add back the intervals that would be left over if the
+        # requested interval is removed.  There may be two of them, if
+        # the removed piece was in the middle.
+        def add(iset, start, end, start_pos, end_pos):
+            iset += DBInterval(start, end, start, end, start_pos, end_pos)
+            self._sql_interval_insert(stream_id, start, end, start_pos, end_pos)
+
+        if original.start != remove.start:
+            # Interval before the removed region
+            add(iset, original.start, remove.start,
+                original.db_startpos, remove.db_startpos)
+
+        if original.end != remove.end:
+            # Interval after the removed region
+            add(iset, remove.end, original.end,
+                remove.db_endpos, original.db_endpos)
+
+        # Commit SQL changes
+        self.con.commit()
+
+        return
+
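The split logic in _remove_interval is easiest to see with concrete numbers; a small sketch with plain tuples standing in for DBInterval objects, removing a piece from the middle:

    def leftovers(original, remove):
        # original and remove are (start, end), with remove inside original.
        pieces = []
        if original[0] != remove[0]:
            pieces.append((original[0], remove[0]))   # piece before the removal
        if original[1] != remove[1]:
            pieces.append((remove[1], original[1]))   # piece after the removal
        return pieces

    print leftovers((0.0, 100.0), (40.0, 60.0))   # [(0.0, 40.0), (60.0, 100.0)]
    print leftovers((0.0, 100.0), (0.0, 60.0))    # [(60.0, 100.0)]
    print leftovers((0.0, 100.0), (0.0, 100.0))   # [] (interval fully removed)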
     def stream_list(self, path = None, layout = None):
         """Return list of [path, layout] lists of all streams
         in the database.
@@ -341,7 +378,7 @@ class NilmDB(object):
         No way to undo it!  Metadata is removed."""
         stream_id = self._stream_id(path)

-        # Delete the cached interval data
+        # Delete the cached interval data (if it was cached)
         self._get_intervals.cache_remove(self, stream_id)

         # Delete the data
@@ -381,7 +418,7 @@ class NilmDB(object):
         # And that's all
         return "ok"

-    def _find_start(self, table, interval):
+    def _find_start(self, table, dbinterval):
         """
         Given a DBInterval, find the row in the database that
         corresponds to the start time.  Return the first database
@@ -389,14 +426,14 @@ class NilmDB(object):
         equal to 'start'.
         """
         # Optimization for the common case where an interval wasn't truncated
-        if interval.start == interval.db_start:
-            return interval.db_startpos
+        if dbinterval.start == dbinterval.db_start:
+            return dbinterval.db_startpos
         return bisect.bisect_left(bulkdata.TimestampOnlyTable(table),
-                                  interval.start,
-                                  interval.db_startpos,
-                                  interval.db_endpos)
+                                  dbinterval.start,
+                                  dbinterval.db_startpos,
+                                  dbinterval.db_endpos)

-    def _find_end(self, table, interval):
+    def _find_end(self, table, dbinterval):
         """
         Given a DBInterval, find the row in the database that follows
         the end time.  Return the first database position after the
@@ -404,16 +441,16 @@ class NilmDB(object):
         to 'end'.
         """
         # Optimization for the common case where an interval wasn't truncated
-        if interval.end == interval.db_end:
-            return interval.db_endpos
+        if dbinterval.end == dbinterval.db_end:
+            return dbinterval.db_endpos
         # Note that we still use bisect_left here, because we don't
         # want to include the given timestamp in the results.  This is
         # so queries like 1:00 -> 2:00 and 2:00 -> 3:00 return
         # non-overlapping data.
         return bisect.bisect_left(bulkdata.TimestampOnlyTable(table),
-                                  interval.end,
-                                  interval.db_startpos,
-                                  interval.db_endpos)
+                                  dbinterval.end,
+                                  dbinterval.db_startpos,
+                                  dbinterval.db_endpos)

     def stream_extract(self, path, start = None, end = None, count = False):
         """
@@ -434,8 +471,8 @@ class NilmDB(object):
         than actually fetching the data.  It is not limited by
         max_results.
         """
-        table = self.data.getnode(path)
         stream_id = self._stream_id(path)
+        table = self.data.getnode(path)
         intervals = self._get_intervals(stream_id)
         requested = Interval(start or 0, end or 1e12)
         result = []
@@ -472,3 +509,45 @@ class NilmDB(object):
         if count:
             return matched
         return (result, restart)
+
+    def stream_remove(self, path, start = None, end = None):
+        """
+        Remove data from the specified time interval within a stream.
+        Removes all data in the interval [start, end), and intervals
+        are truncated or split appropriately.  Returns the number of
+        data points removed.
+        """
+        stream_id = self._stream_id(path)
+        table = self.data.getnode(path)
+        intervals = self._get_intervals(stream_id)
+        to_remove = Interval(start or 0, end or 1e12)
+        removed = 0
+
+        if start == end:
+            return 0
+
+        # Can't remove intervals from within the iterator, so we need to
+        # remember what's currently in the intersection now.
+        all_candidates = list(intervals.intersection(to_remove, orig = True))
+
+        for (dbint, orig) in all_candidates:
+            # Find row start and end
+            row_start = self._find_start(table, dbint)
+            row_end = self._find_end(table, dbint)
+
+            # Adjust the DBInterval to match the newly found ends
+            dbint.db_start = dbint.start
+            dbint.db_end = dbint.end
+            dbint.db_startpos = row_start
+            dbint.db_endpos = row_end
+
+            # Remove interval from the database
+            self._remove_interval(stream_id, orig, dbint)
+
+            # Remove data from the underlying table storage
+            table.remove(row_start, row_end)
+
+            # Count how many were removed
+            removed += row_end - row_start
+
+        return removed
nilmdb/server/server.py:
@@ -1,16 +1,19 @@
 """CherryPy-based server for accessing NILM database via HTTP"""

-# Need absolute_import so that "import nilmdb" won't pull in nilmdb.py,
-# but will pull the nilmdb module instead.
+# Need absolute_import so that "import nilmdb" won't pull in
+# nilmdb.py, but will pull the nilmdb module instead.
 from __future__ import absolute_import
-from nilmdb.utils.printf import *
 import nilmdb
+from nilmdb.utils.printf import *
+from nilmdb.server.errors import *

 import cherrypy
 import sys
 import time
 import os
 import simplejson as json
+import decorator
+import traceback

 try:
     import cherrypy
@@ -23,33 +26,59 @@ class NilmApp(object):
     def __init__(self, db):
         self.db = db

-version = "1.1"
+version = "1.2"

 # Decorators
 def chunked_response(func):
-    """Decorator to enable chunked responses"""
+    """Decorator to enable chunked responses."""
+    # Set this to False to get better tracebacks from some requests
+    # (/stream/extract, /stream/intervals).
     func._cp_config = { 'response.stream': True }
     return func

+def response_type(content_type):
+    """Return a decorator-generating function that sets the
+    response type to the specified string."""
+    def wrapper(func, *args, **kwargs):
+        cherrypy.response.headers['Content-Type'] = content_type
+        return func(*args, **kwargs)
+    return decorator.decorator(wrapper)
+
-def workaround_cp_bug_1200(func): # pragma: no cover (just a workaround)
+@decorator.decorator
+def workaround_cp_bug_1200(func, *args, **kwargs): # pragma: no cover
     """Decorator to work around CherryPy bug #1200 in a response
-    generator"""
-    # Even if chunked responses are disabled, you may still miss miss
-    # LookupError, or UnicodeError exceptions due to CherryPy bug
-    # #1200.  This throws them as generic Exceptions insteads.
-    import functools
-    import traceback
-    @functools.wraps(func)
-    def wrapper(*args, **kwargs):
-        try:
-            for val in func(*args, **kwargs):
-                yield val
-        except (LookupError, UnicodeError) as e:
-            raise Exception("bug workaround; real exception is:\n" +
-                            traceback.format_exc())
-    return wrapper
+    generator.
+
+    Even if chunked responses are disabled, LookupError or
+    UnicodeError exceptions may still be swallowed by CherryPy due to
+    bug #1200.  This throws them as generic Exceptions instead so that
+    they make it through.
+    """
+    try:
+        for val in func(*args, **kwargs):
+            yield val
+    except (LookupError, UnicodeError) as e:
+        raise Exception("bug workaround; real exception is:\n" +
+                        traceback.format_exc())
+
+def exception_to_httperror(*expected):
+    """Return a decorator-generating function that catches expected
+    errors and throws a HTTPError describing it instead.
+
+        @exception_to_httperror(NilmDBError, ValueError)
+        def foo():
+            pass
+    """
+    def wrapper(func, *args, **kwargs):
+        try:
+            return func(*args, **kwargs)
+        except expected as e:
+            message = sprintf("%s", str(e))
+            raise cherrypy.HTTPError("400 Bad Request", message)
+    # We need to preserve the function's argspecs for CherryPy to
+    # handle argument errors correctly.  Decorator.decorator takes
+    # care of that.
+    return decorator.decorator(wrapper)
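The reason for decorator.decorator here, rather than functools.wraps, is argspec preservation, which CherryPy relies on to map URL parameters to handler arguments; a small standalone sketch of the difference, assuming the third-party 'decorator' package is installed:

    import inspect
    import decorator

    def logged(func, *args, **kwargs):
        print "calling %s" % func.__name__
        return func(*args, **kwargs)

    def get_metadata(path, key = None):
        return (path, key)

    wrapped = decorator.decorator(logged, get_metadata)
    # The wrapper advertises the original signature, not (*args, **kwargs):
    print inspect.getargspec(wrapped).args   # ['path', 'key']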
 # CherryPy apps
 class Root(NilmApp):
@@ -104,26 +133,20 @@ class Stream(NilmApp):
     # /stream/create?path=/newton/prep&layout=PrepData
     @cherrypy.expose
     @cherrypy.tools.json_out()
+    @exception_to_httperror(NilmDBError, ValueError)
     def create(self, path, layout):
         """Create a new stream in the database.  Provide path
         and one of the nilmdb.layout.layouts keys.
         """
-        try:
-            return self.db.stream_create(path, layout)
-        except Exception as e:
-            message = sprintf("%s: %s", type(e).__name__, e.message)
-            raise cherrypy.HTTPError("400 Bad Request", message)
+        return self.db.stream_create(path, layout)

     # /stream/destroy?path=/newton/prep
     @cherrypy.expose
     @cherrypy.tools.json_out()
+    @exception_to_httperror(NilmDBError)
     def destroy(self, path):
         """Delete a stream and its associated data."""
-        try:
-            return self.db.stream_destroy(path)
-        except Exception as e:
-            message = sprintf("%s: %s", type(e).__name__, e.message)
-            raise cherrypy.HTTPError("400 Bad Request", message)
+        return self.db.stream_destroy(path)

     # /stream/get_metadata?path=/newton/prep
     # /stream/get_metadata?path=/newton/prep&key=foo&key=bar
@@ -135,7 +158,7 @@ class Stream(NilmApp):
         matching the given keys."""
         try:
             data = self.db.stream_get_metadata(path)
-        except nilmdb.nilmdb.StreamError as e:
+        except nilmdb.server.nilmdb.StreamError as e:
             raise cherrypy.HTTPError("404 Not Found", e.message)
         if key is None: # If no keys specified, return them all
             key = data.keys()
@@ -152,30 +175,24 @@ class Stream(NilmApp):
     # /stream/set_metadata?path=/newton/prep&data=<json>
     @cherrypy.expose
     @cherrypy.tools.json_out()
+    @exception_to_httperror(NilmDBError, LookupError, TypeError)
     def set_metadata(self, path, data):
         """Set metadata for the named stream, replacing any
         existing metadata.  Data should be a json-encoded
         dictionary"""
-        try:
-            data_dict = json.loads(data)
-            self.db.stream_set_metadata(path, data_dict)
-        except Exception as e:
-            message = sprintf("%s: %s", type(e).__name__, e.message)
-            raise cherrypy.HTTPError("400 Bad Request", message)
+        data_dict = json.loads(data)
+        self.db.stream_set_metadata(path, data_dict)
         return "ok"

     # /stream/update_metadata?path=/newton/prep&data=<json>
     @cherrypy.expose
     @cherrypy.tools.json_out()
+    @exception_to_httperror(NilmDBError, LookupError, TypeError)
     def update_metadata(self, path, data):
         """Update metadata for the named stream.  Data
         should be a json-encoded dictionary"""
-        try:
-            data_dict = json.loads(data)
-            self.db.stream_update_metadata(path, data_dict)
-        except Exception as e:
-            message = sprintf("%s: %s", type(e).__name__, e.message)
-            raise cherrypy.HTTPError("400 Bad Request", message)
+        data_dict = json.loads(data)
+        self.db.stream_update_metadata(path, data_dict)
         return "ok"

     # /stream/insert?path=/newton/prep
@@ -204,11 +221,11 @@ class Stream(NilmApp):

         # Parse the input data
         try:
-            parser = nilmdb.layout.Parser(layout)
+            parser = nilmdb.server.layout.Parser(layout)
             parser.parse(body)
-        except nilmdb.layout.ParserError as e:
+        except nilmdb.server.layout.ParserError as e:
             raise cherrypy.HTTPError("400 Bad Request",
-                                     "Error parsing input data: " +
+                                     "error parsing input data: " +
                                      e.message)

         if (not parser.min_timestamp or not parser.max_timestamp or
@@ -231,22 +248,48 @@ class Stream(NilmApp):
         # Now do the nilmdb insert, passing it the parser full of data.
         try:
             result = self.db.stream_insert(path, start, end, parser.data)
-        except nilmdb.nilmdb.NilmDBError as e:
+        except NilmDBError as e:
             raise cherrypy.HTTPError("400 Bad Request", e.message)

         # Done
         return "ok"

+    # /stream/remove?path=/newton/prep
+    # /stream/remove?path=/newton/prep&start=1234567890.0&end=1234567899.0
+    @cherrypy.expose
+    @cherrypy.tools.json_out()
+    @exception_to_httperror(NilmDBError)
+    def remove(self, path, start = None, end = None):
+        """
+        Remove data from the backend database.  Removes all data in
+        the interval [start, end).  Returns the number of data points
+        removed.
+        """
+        if start is not None:
+            start = float(start)
+        if end is not None:
+            end = float(end)
+        if start is not None and end is not None:
+            if end < start:
+                raise cherrypy.HTTPError("400 Bad Request",
+                                         "end before start")
+        return self.db.stream_remove(path, start, end)
+
     # /stream/intervals?path=/newton/prep
     # /stream/intervals?path=/newton/prep&start=1234567890.0&end=1234567899.0
     @cherrypy.expose
     @chunked_response
+    @response_type("text/plain")
     def intervals(self, path, start = None, end = None):
         """
         Get intervals from backend database.  Streams the resulting
         intervals as JSON strings separated by newlines.  This may
         make multiple requests to the nilmdb backend to avoid causing
         it to block for too long.
+
+        Note that the response type is set to 'text/plain' even
+        though we're sending back JSON; this is because we're not
+        really returning a single JSON object.
         """
         if start is not None:
             start = float(start)
@@ -277,6 +320,7 @@ class Stream(NilmApp):
     # /stream/extract?path=/newton/prep&start=1234567890.0&end=1234567899.0
     @cherrypy.expose
     @chunked_response
+    @response_type("text/plain")
     def extract(self, path, start = None, end = None, count = False):
         """
         Extract data from backend database.  Streams the resulting
@@ -304,7 +348,7 @@ class Stream(NilmApp):
             layout = streams[0][1]

         # Get formatter
-        formatter = nilmdb.layout.Formatter(layout)
+        formatter = nilmdb.server.layout.Formatter(layout)

         @workaround_cp_bug_1200
         def content(start, end, count):
@@ -359,11 +403,22 @@ class Server(object):
         if self.embedded:
             cherrypy.config.update({ 'environment': 'embedded' })

+        # Send a permissive Access-Control-Allow-Origin (CORS) header
+        # with all responses so that browsers can send cross-domain
+        # requests to this server.
+        cherrypy.config.update({ 'response.headers.Access-Control-Allow-Origin':
+                                 '*' })
+
+        # Send tracebacks in error responses.  They're hidden by the
+        # error_page function for client errors (code 400-499).
+        cherrypy.config.update({ 'request.show_tracebacks' : True })
+        self.force_traceback = force_traceback
+
+        # Patch CherryPy error handler to never pad out error messages.
+        # This isn't necessary, but then again, neither is padding the
+        # error messages.
+        cherrypy._cperror._ie_friendly_error_sizes = {}
+
         cherrypy.tree.apps = {}
         cherrypy.tree.mount(Root(self.db, self.version), "/")
         cherrypy.tree.mount(Stream(self.db), "/stream")
@@ -426,8 +481,10 @@ class Server(object):
         cherrypy.engine.start()
         os._exit = real_exit

+        # Signal that the engine has started successfully
+        if event is not None:
+            event.set()
+
         if blocking:
             try:
                 cherrypy.engine.wait(cherrypy.engine.states.EXITING,
nilmdb/utils/__init__.py:
@@ -6,3 +6,6 @@ from .serializer import Serializer
 from .lrucache import lru_cache
 from .diskusage import du
 from .mustclose import must_close
+from .urllib import urlencode
+from . import misc
+from . import atomic
nilmdb/utils/atomic.py (new file, 26 lines)
@@ -0,0 +1,26 @@
# Atomic file writing helper.

import os

def replace_file(filename, content):
    """Attempt to atomically and durably replace the filename with the
    given contents.  This is intended to be 'pretty good on most
    OSes', but not necessarily bulletproof."""

    newfilename = filename + ".new"

    # Write to new file, flush it
    with open(newfilename, "wb") as f:
        f.write(content)
        f.flush()
        os.fsync(f.fileno())

    # Move new file over old one
    try:
        os.rename(newfilename, filename)
    except OSError: # pragma: no cover
        # Some OSes might not support renaming over an existing file.
        # This is definitely NOT atomic!
        os.remove(filename)
        os.rename(newfilename, filename)
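Usage is a one-liner; a quick sketch of how bulkdata's ".removed" cache files use it (the /tmp path is a placeholder), relying on POSIX rename-over-existing-name being atomic so readers never see a half-written file:

    import pickle
    import nilmdb.utils.atomic

    ranges = [(0, 60), (80, 100)]
    # Readers see either the old file or the complete new one; a crash
    # mid-write leaves at worst a stale ".new" file beside it.
    nilmdb.utils.atomic.replace_file("/tmp/demo.removed", pickle.dumps(ranges, 2))
    print pickle.load(open("/tmp/demo.removed", "rb"))   # [(0, 60), (80, 100)]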
nilmdb/utils/diskusage.py:
@@ -1,4 +1,3 @@
-import nilmdb
 import os
 from math import log
nilmdb/utils/iteratorizer.py:
@@ -1,14 +1,16 @@
 import Queue
 import threading
 import sys
+import contextlib

-# This file provides a class that will convert a function that
-# takes a callback into a generator that returns an iterator.
+# This file provides a context manager that converts a function
+# that takes a callback into a generator that returns an iterable.
 # This is done by running the function in a new thread.

 # Based partially on http://stackoverflow.com/questions/9968592/

 class IteratorizerThread(threading.Thread):
-    def __init__(self, queue, function):
+    def __init__(self, queue, function, curl_hack):
         """
         function: function to execute, which takes the
         callback (provided by this class) as an argument
@@ -17,56 +19,81 @@ class IteratorizerThread(threading.Thread):
         self.function = function
         self.queue = queue
         self.die = False
+        self.curl_hack = curl_hack

     def callback(self, data):
-        if self.die:
-            raise Exception("should die")
-        self.queue.put((1, data))
+        try:
+            if self.die:
+                raise Exception() # trigger termination
+            self.queue.put((1, data))
+        except:
+            if self.curl_hack:
+                # We can't raise exceptions, because the pycurl
+                # extension module will unconditionally print the
+                # exception itself, and not pass it up to the caller.
+                # Instead, just return a value that tells curl to
+                # abort.  (-1 would be best, in case we were given 0
+                # bytes, but the extension doesn't support that).
+                self.queue.put((2, sys.exc_info()))
+                return 0
+            raise

     def run(self):
         try:
             result = self.function(self.callback)
         except:
-            if sys is not None: # can be None during unclean shutdown
-                self.queue.put((2, sys.exc_info()))
+            self.queue.put((2, sys.exc_info()))
         else:
             self.queue.put((0, result))

-class Iteratorizer(object):
-    def __init__(self, function):
-        """
-        function: function to execute, which takes the
-        callback (provided by this class) as an argument
-        """
-        self.function = function
-        self.queue = Queue.Queue(maxsize = 1)
-        self.thread = IteratorizerThread(self.queue, self.function)
-        self.thread.daemon = True
-        self.thread.start()
-
-    def __del__(self):
-        # If we get garbage collected, try to get rid of the
-        # thread too by asking it to raise an exception, then
-        # draining the queue until it's gone.
-        self.thread.die = True
-        while self.thread.isAlive():
-            try:
-                self.queue.get(True, 0.01)
-            except: # pragma: no cover
-                pass
-
-    def __iter__(self):
-        return self
-
-    def next(self):
-        (typ, data) = self.queue.get()
-        if typ == 0:
-            # function returned
-            self.retval = data
-            raise StopIteration
-        elif typ == 1:
-            # data available
-            return data
-        else:
-            # exception
-            raise data[0], data[1], data[2]
+@contextlib.contextmanager
+def Iteratorizer(function, curl_hack = False):
+    """
+    Context manager that takes a function expecting a callback,
+    and provides an iterable that yields the values passed to that
+    callback instead.
+
+    function: function to execute, which takes a callback
+    (provided by this context manager) as an argument
+
+        with iteratorizer(func) as it:
+            for i in it:
+                print 'callback was passed:', i
+            print 'function returned:', it.retval
+    """
+    queue = Queue.Queue(maxsize = 1)
+    thread = IteratorizerThread(queue, function, curl_hack)
+    thread.daemon = True
+    thread.start()
+
+    class iteratorizer_gen(object):
+        def __init__(self, queue):
+            self.queue = queue
+            self.retval = None
+
+        def __iter__(self):
+            return self
+
+        def next(self):
+            (typ, data) = self.queue.get()
+            if typ == 0:
+                # function has returned
+                self.retval = data
+                raise StopIteration
+            elif typ == 1:
+                # data is available
+                return data
+            else:
+                # callback raised an exception
+                raise data[0], data[1], data[2]
+
+    try:
+        yield iteratorizer_gen(queue)
+    finally:
+        # Ask the thread to die, if it's still running.
+        thread.die = True
+        while thread.isAlive():
+            try:
+                queue.get(True, 0.01)
+            except:
+                pass
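The docstring's usage pattern, fleshed out into a runnable sketch with a toy callback-style producer standing in for pycurl (the module path nilmdb.utils.iteratorizer is assumed from the file layout):

    from nilmdb.utils.iteratorizer import Iteratorizer

    def producer(callback):
        # Stand-in for a callback-driven API like pycurl's WRITEFUNCTION.
        for chunk in ["alpha", "beta", "gamma"]:
            callback(chunk)
        return "done"

    with Iteratorizer(producer) as it:
        for chunk in it:
            print 'callback was passed:', chunk
        print 'function returned:', it.retval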
nilmdb/utils/lrucache.py:
@@ -4,17 +4,19 @@
 # with added 'destructor' functionality.

 import collections
-import functools
+import decorator
+import warnings

-def lru_cache(size = 10, onremove = None):
+def lru_cache(size = 10, onremove = None, keys = slice(None)):
     """Least-recently-used cache decorator.

     @lru_cache(size = 10, onevict = None)
     def f(...):
         pass

-    Given a function and arguments, memoize its return value.
-    Up to 'size' elements are cached.
+    Given a function and arguments, memoize its return value.  Up to
+    'size' elements are cached.  'keys' is a slice object that
+    represents which arguments are used as the cache key.

     When evicting a value from the cache, call the function
     'onremove' with the value that's being evicted.
@@ -24,43 +26,52 @@ def lru_cache(size = 10, onremove = None):
     f.cache_hits and f.cache_misses give statistics.
     """

-    def decorator(func):
+    def decorate(func):
         cache = collections.OrderedDict() # order: least- to most-recent

         def evict(value):
             if onremove:
                 onremove(value)

-        @functools.wraps(func)
-        def wrapper(*args, **kwargs):
-            key = args + tuple(sorted(kwargs.items()))
+        def wrapper(orig, *args, **kwargs):
+            if kwargs:
+                raise NotImplementedError("kwargs not supported")
+            key = args[keys]
             try:
                 value = cache.pop(key)
-                wrapper.cache_hits += 1
+                orig.cache_hits += 1
             except KeyError:
-                value = func(*args, **kwargs)
-                wrapper.cache_misses += 1
+                value = orig(*args)
+                orig.cache_misses += 1
                 if len(cache) >= size:
                     evict(cache.popitem(0)[1]) # evict LRU cache entry
             cache[key] = value # (re-)insert this key at end
             return value

-        def cache_remove(*args, **kwargs):
-            """Remove the described key from this cache, if present.
-            Note that if the original wrapped function was implicitly
-            passed 'self', you need to pass it as an argument here too."""
-            key = args + tuple(sorted(kwargs.items()))
+        def cache_remove(*args):
+            """Remove the described key from this cache, if present."""
+            key = args
             if key in cache:
                 evict(cache.pop(key))
+            else:
+                if len(cache) > 0 and len(args) != len(cache.iterkeys().next()):
+                    raise KeyError("trying to remove from LRU cache, but "
+                                   "number of arguments doesn't match the "
+                                   "cache key length")

         def cache_remove_all():
             for key in cache:
                 evict(cache.pop(key))

-        wrapper.cache_hits = 0
-        wrapper.cache_misses = 0
-        wrapper.cache_remove = cache_remove
-        wrapper.cache_remove_all = cache_remove_all
+        def cache_info():
+            return (func.cache_hits, func.cache_misses)

-        return wrapper
-    return decorator
+        new = decorator.decorator(wrapper, func)
+        func.cache_hits = 0
+        func.cache_misses = 0
+        new.cache_info = cache_info
+        new.cache_remove = cache_remove
+        new.cache_remove_all = cache_remove_all
+        return new

+    return decorate
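The new 'keys' slice is what lets mmap_open in bulkdata.py cache on (self, subdir, filename) while ignoring its newsize argument; a minimal sketch of the same idea with a hypothetical function (positional arguments only, since the wrapper rejects kwargs):

    import nilmdb.utils

    @nilmdb.utils.lru_cache(size = 2, keys = slice(0, 2))  # key on first two args
    def fetch(subdir, filename, newsize = None):
        print "computing", (subdir, filename, newsize)
        return (subdir, filename)

    fetch("0000", "0001", 100)   # computed
    fetch("0000", "0001", 999)   # cache hit: newsize is excluded from the key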
nilmdb/utils/misc.py (new file, 8 lines)
@@ -0,0 +1,8 @@
import itertools

def pairwise(iterable):
    "s -> (s0,s1), (s1,s2), ..., (sn,None)"
    a, b = itertools.tee(iterable)
    next(b, None)
    return itertools.izip_longest(a, b)
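A quick check of the padding behavior at the end of the sequence, per the docstring:

    from nilmdb.utils.misc import pairwise

    print list(pairwise([10, 20, 30]))
    # [(10, 20), (20, 30), (30, None)]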
@@ -1,42 +1,63 @@
# Class decorator that warns on stderr at deletion time if the class's
# close() member wasn't called.

from nilmdb.utils.printf import *
import sys
import inspect
import decorator

def must_close(errorfile = sys.stderr):
def decorator(cls):
def dummy(*args, **kwargs):
pass
if "__init__" not in cls.__dict__:
cls.__init__ = dummy
if "__del__" not in cls.__dict__:
cls.__del__ = dummy
if "close" not in cls.__dict__:
cls.close = dummy
def must_close(errorfile = sys.stderr, wrap_verify = False):
"""Class decorator that warns on 'errorfile' at deletion time if
the class's close() member wasn't called.

orig_init = cls.__init__
orig_del = cls.__del__
orig_close = cls.close
If 'wrap_verify' is True, every class method is wrapped with a
verifier that will raise AssertionError if the .close() method has
already been called."""
def class_decorator(cls):

def __init__(self, *args, **kwargs):
ret = orig_init(self, *args, **kwargs)
# Helper to replace a class method with a wrapper function,
# while maintaining argument specs etc.
def wrap_class_method(wrapper_func):
method = wrapper_func.__name__
if method in cls.__dict__:
orig = getattr(cls, method).im_func
else:
orig = lambda self: None
setattr(cls, method, decorator.decorator(wrapper_func, orig))

@wrap_class_method
def __init__(orig, self, *args, **kwargs):
ret = orig(self, *args, **kwargs)
self.__dict__["_must_close"] = True
self.__dict__["_must_close_initialized"] = True
return ret

def __del__(self):
@wrap_class_method
def __del__(orig, self, *args, **kwargs):
if "_must_close" in self.__dict__:
fprintf(errorfile, "error: %s.close() wasn't called!\n",
self.__class__.__name__)
return orig_del(self)
return orig(self, *args, **kwargs)

def close(self, *args, **kwargs):
@wrap_class_method
def close(orig, self, *args, **kwargs):
del self._must_close
return orig_close(self)
return orig(self, *args, **kwargs)

cls.__init__ = __init__
cls.__del__ = __del__
cls.close = close
# Optionally wrap all other functions
def verifier(orig, self, *args, **kwargs):
if ("_must_close" not in self.__dict__ and
"_must_close_initialized" in self.__dict__):
raise AssertionError("called " + str(orig) + " after close")
return orig(self, *args, **kwargs)
if wrap_verify:
for (name, method) in inspect.getmembers(cls, inspect.ismethod):
# Skip class methods
if method.__self__ is not None:
continue
# Skip some methods
if name in [ "__del__", "__init__" ]:
continue
# Set up wrapper
setattr(cls, name, decorator.decorator(verifier,
method.im_func))

return cls
return decorator
return class_decorator
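To make the new wrap_verify behavior concrete, here is a minimal sketch (not part of the diff; the Example class is made up, and the import path assumes nilmdb.utils re-exports the decorator):

    from nilmdb.utils import must_close   # assumed export path

    @must_close(wrap_verify = True)
    class Example(object):
        def close(self):
            pass
        def fetch(self):
            return 123

    e = Example()
    e.fetch()     # fine
    e.close()
    e.fetch()     # raises AssertionError ("called ... after close")
    # An Example that gets garbage collected without close() instead
    # prints "error: Example.close() wasn't called!" to errorfile.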
@@ -5,6 +5,7 @@
# with nilmdb.Timer("flush"):
# foo.flush()

from __future__ import print_function
import contextlib
import time

@@ -18,4 +19,4 @@ def Timer(name = None, tosyslog = False):
import syslog
syslog.syslog(msg)
else:
print msg
print(msg)
@@ -1,11 +1,10 @@
"""File-like objects that add timestamps to the input lines"""

from __future__ import absolute_import
from nilmdb.utils.printf import *
from nilmdb.utils import datetime_tz

import time
import os
import datetime_tz

class Timestamper(object):
    """A file-like object that adds timestamps to lines of an input file."""
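For reference (not part of the diff): the client tests later in this changeset use this module by wrapping a raw input file in a TimestamperRate, which prefixes each line with a timestamp starting at 'start' and advancing at a fixed 120 rows per second; the result is file-like and can be streamed straight into an insert:

    from nilmdb.utils import timestamper

    # testfile, start, and client are as set up in test_client.py below
    data = timestamper.TimestamperRate(testfile, start, 120)
    client.stream_insert("/newton/prep", data)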
37
nilmdb/utils/urllib.py
Normal file
@@ -0,0 +1,37 @@
from __future__ import absolute_import
from urllib import quote_plus, _is_unicode

# urllib.urlencode insists on encoding Unicode as ASCII. This is based
# on that function, except we always encode it as UTF-8 instead.

def urlencode(query):
    """Encode a dictionary into a URL query string.

    If any values in the query arg are sequences, each sequence
    element is converted to a separate parameter.
    """

    query = query.items()

    l = []
    for k, v in query:
        k = quote_plus(str(k))
        if isinstance(v, str):
            v = quote_plus(v)
            l.append(k + '=' + v)
        elif _is_unicode(v):
            v = quote_plus(v.encode("utf-8","strict"))
            l.append(k + '=' + v)
        else:
            try:
                # is this a sufficient test for sequence-ness?
                len(v)
            except TypeError:
                # not a sequence
                v = quote_plus(str(v))
                l.append(k + '=' + v)
            else:
                # loop over the sequence
                for elt in v:
                    l.append(k + '=' + quote_plus(str(elt)))
    return '&'.join(l)
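A short example (not part of the diff) of the difference this makes: the stock urllib.urlencode would raise UnicodeEncodeError on the non-ASCII value below, while this version percent-encodes its UTF-8 bytes; sequence values still become repeated parameters:

    from nilmdb.utils.urllib import urlencode

    print urlencode({ "path": u"/düsseldorf/raw" })
    # prints: path=%2Fd%C3%BCsseldorf%2Fraw
    print urlencode({ "key": [ 1, 2 ] })
    # prints: key=1&key=2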
46
runtests.py
Executable file
@@ -0,0 +1,46 @@
#!/usr/bin/python

import nose
import os
import sys
import glob
from collections import OrderedDict

class JimOrderPlugin(nose.plugins.Plugin):
    """When searching for tests and encountering a directory that
    contains a 'test.order' file, run tests listed in that file, in the
    order that they're listed. Globs are OK in that file and duplicates
    are removed."""
    name = 'jimorder'
    score = 10000

    def prepareTestLoader(self, loader):
        def wrap(func):
            def wrapper(name, *args, **kwargs):
                addr = nose.selector.TestAddress(
                    name, workingDir=loader.workingDir)
                try:
                    order = os.path.join(addr.filename, "test.order")
                except:
                    order = None
                if order and os.path.exists(order):
                    files = []
                    for line in open(order):
                        line = line.split('#')[0].strip()
                        if not line:
                            continue
                        fn = os.path.join(addr.filename, line.strip())
                        files.extend(sorted(glob.glob(fn)) or [fn])
                    files = list(OrderedDict.fromkeys(files))
                    tests = [ wrapper(fn, *args, **kwargs) for fn in files ]
                    return loader.suiteClass(tests)
                return func(name, *args, **kwargs)
            return wrapper
        loader.loadTestsFromName = wrap(loader.loadTestsFromName)
        return loader

# Use setup.cfg for most of the test configuration. Adding
# --with-jimorder here means that a normal "nosetests" run will
# still work, it just won't support test.order.
nose.main(addplugins = [ JimOrderPlugin() ],
          argv = sys.argv + ["--with-jimorder"])
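As an illustration (not part of the diff) of the format the plugin reads: each line of a test.order file names a test file relative to that directory, '#' starts a comment, globs are expanded in sorted order, and duplicates are dropped. A hypothetical minimal file; the real tests/test.order added later in this changeset follows the same pattern:

    # test.order: run these first, in this order
    test_printf.py
    test_client.py
    # then everything else; duplicates of the above are removed
    test_*.py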
32
setup.cfg
@@ -1,15 +1,26 @@
[aliases]
test = nosetests

[nosetests]
# note: the value doesn't matter, that's why they're empty here
nocapture=
nologcapture= # comment to see cherrypy logs on failure
with-coverage=
cover-inclusive=
# Note: values must be set to 1, and have no comments on the same line,
# for "python setup.py nosetests" to work correctly.
nocapture=1
# Comment this out to see CherryPy logs on failure:
nologcapture=1
with-coverage=1
cover-inclusive=1
cover-package=nilmdb
cover-erase=
##cover-html= # this works, puts html output in cover/ dir
##cover-branches= # need nose 1.1.3 for this
stop=
cover-erase=1
# this works, puts html output in cover/ dir:
# cover-html=1
# need nose 1.1.3 for this:
# cover-branches=1
#debug=nose
#debug-log=nose.log
stop=1
verbosity=2
tests=tests
#tests=tests/test_bulkdata.py
#tests=tests/test_mustclose.py
#tests=tests/test_lrucache.py
#tests=tests/test_cmdline.py
@@ -23,6 +34,7 @@ verbosity=2
#tests=tests/test_serializer.py
#tests=tests/test_iteratorizer.py
#tests=tests/test_client.py:TestClient.test_client_nilmdb
#with-profile=
#tests=tests/test_nilmdb.py
#with-profile=1
#profile-sort=time
##profile-restrict=10 # doesn't work right, treated as string or something
48
setup.py
Executable file
@@ -0,0 +1,48 @@
#!/usr/bin/python

# This is supposed to be using Distribute:
#
# distutils provides a "setup" method.
# setuptools is a set of monkeypatches on top of that.
# distribute is a particular version/implementation of setuptools.
#
# So we don't really know if this is using the old setuptools or the
# Distribute-provided version of setuptools.

from setuptools import setup, find_packages
from distutils.extension import Extension

from Cython.Build import cythonize

# Hack to workaround logging/multiprocessing issue:
# https://groups.google.com/d/msg/nose-users/fnJ-kAUbYHQ/_UsLN786ygcJ
try: import multiprocessing
except: pass

# Build cython modules.
cython_modules = cythonize("**/*.pyx")

# Run setup
setup(name='nilmdb',
      version = '1.0',
      url = 'https://git.jim.sh/jim/lees/nilmdb.git',
      author = 'Jim Paris',
      author_email = 'jim@jtan.com',
      tests_require = [ 'nose',
                        'coverage',
                        ],
      setup_requires = [ 'cython',
                         ],
      install_requires = [ 'distribute',
                           'decorator',
                           ],
      packages = [ 'nilmdb',
                   'nilmdb.utils',
                   'nilmdb.utils.datetime_tz',
                   'nilmdb.server',
                   'nilmdb.client',
                   'nilmdb.cmdline',
                   ],
      ext_modules = cython_modules,
      zip_safe = False,
      )
124
tests/data/extract-7
Normal file
@@ -0,0 +1,124 @@
# path: /newton/prep
# layout: PrepData
# start: 1332496830.0
# end: 1332496830.999
1332496830.000000 251774.000000 224241.000000 5688.100098 1915.530029 9329.219727 4183.709961 1212.349976 2641.790039
1332496830.008333 259567.000000 222698.000000 6207.600098 678.671997 9380.230469 4575.580078 2830.610107 2688.629883
1332496830.016667 263073.000000 223304.000000 4961.640137 2197.120117 7687.310059 4861.859863 2732.780029 3008.540039
1332496830.025000 257614.000000 223323.000000 5003.660156 3525.139893 7165.310059 4685.620117 1715.380005 3440.479980
1332496830.033333 255780.000000 221915.000000 6357.310059 2145.290039 8426.969727 3775.350098 1475.390015 3797.239990
1332496830.041667 260166.000000 223008.000000 6702.589844 1484.959961 9288.099609 3330.830078 1228.500000 3214.320068
1332496830.050000 261231.000000 226426.000000 4980.060059 2982.379883 8499.629883 4267.669922 994.088989 2292.889893
1332496830.058333 255117.000000 226642.000000 4584.410156 4656.439941 7860.149902 5317.310059 1473.599976 2111.689941
1332496830.066667 253300.000000 223554.000000 6455.089844 3036.649902 8869.750000 4986.310059 2607.360107 2839.590088
1332496830.075000 261061.000000 221263.000000 6951.979980 1500.239990 9386.099609 3791.679932 2677.010010 3980.629883
1332496830.083333 266503.000000 223198.000000 5189.609863 2594.560059 8571.530273 3175.000000 919.840027 3792.010010
1332496830.091667 260692.000000 225184.000000 3782.479980 4642.879883 7662.959961 3917.790039 -251.097000 2907.060059
1332496830.100000 253963.000000 225081.000000 5123.529785 3839.550049 8669.030273 4877.819824 943.723999 2527.449951
1332496830.108333 256555.000000 224169.000000 5930.600098 2298.540039 8906.709961 5331.680176 2549.909912 3053.560059
1332496830.116667 260889.000000 225010.000000 4681.129883 2971.870117 7900.040039 4874.080078 2322.429932 3649.120117
1332496830.125000 257944.000000 224923.000000 3291.139893 4357.089844 7131.589844 4385.560059 1077.050049 3664.040039
1332496830.133333 255009.000000 223018.000000 4584.819824 2864.000000 8469.490234 3625.580078 985.557007 3504.229980
1332496830.141667 260114.000000 221947.000000 5676.189941 1210.339966 9393.780273 3390.239990 1654.020020 3018.699951
1332496830.150000 264277.000000 224438.000000 4446.620117 2176.719971 8142.089844 4584.879883 2327.830078 2615.800049
1332496830.158333 259221.000000 226471.000000 2734.439941 4182.759766 6389.549805 5540.520020 1958.880005 2720.120117
1332496830.166667 252650.000000 224831.000000 4163.640137 2989.989990 7179.200195 5213.060059 1929.550049 3457.659912
1332496830.175000 257083.000000 222048.000000 5759.040039 702.440979 8566.549805 3552.020020 1832.939941 3956.189941
1332496830.183333 263130.000000 222967.000000 5141.140137 1166.119995 8666.959961 2720.370117 971.374023 3479.729980
1332496830.191667 260236.000000 225265.000000 3425.139893 3339.080078 7853.609863 3674.949951 525.908020 2443.310059
1332496830.200000 253503.000000 224527.000000 4398.129883 2927.429932 8110.279785 4842.470215 1513.869995 2467.100098
1332496830.208333 256126.000000 222693.000000 6043.529785 656.223999 8797.559570 4832.410156 2832.370117 3426.139893
1332496830.216667 261677.000000 223608.000000 5830.459961 1033.910034 8123.939941 3980.689941 1927.959961 4092.719971
1332496830.225000 259457.000000 225536.000000 4015.570068 2995.989990 7135.439941 3713.550049 307.220001 3849.429932
1332496830.233333 253352.000000 224216.000000 4650.560059 3196.620117 8131.279785 3586.159912 70.832298 3074.179932
1332496830.241667 256124.000000 221513.000000 6100.479980 821.979980 9757.540039 3474.510010 1647.520020 2559.860107
1332496830.250000 263024.000000 221559.000000 5789.959961 699.416992 9129.740234 4153.080078 2829.250000 2677.270020
1332496830.258333 261720.000000 224015.000000 4358.500000 2645.360107 7414.109863 4810.669922 2225.989990 3185.989990
1332496830.266667 254756.000000 224240.000000 4857.379883 3229.679932 7539.310059 4769.140137 1507.130005 3668.260010
1332496830.275000 256889.000000 222658.000000 6473.419922 1214.109985 9010.759766 3848.729980 1303.839966 3778.500000
1332496830.283333 264208.000000 223316.000000 5700.450195 1116.560059 9087.610352 3846.679932 1293.589966 2891.560059
1332496830.291667 263310.000000 225719.000000 3936.120117 3252.360107 7552.850098 4897.859863 1156.630005 2037.160034
1332496830.300000 255079.000000 225086.000000 4536.450195 3960.110107 7454.589844 5479.069824 1596.359985 2190.800049
1332496830.308333 254487.000000 222508.000000 6635.859863 1758.849976 8732.969727 4466.970215 2650.360107 3139.310059
1332496830.316667 261241.000000 222432.000000 6702.270020 1085.130005 8989.230469 3112.989990 1933.560059 3828.409912
1332496830.325000 262119.000000 225587.000000 4714.950195 2892.360107 8107.819824 2961.310059 239.977997 3273.719971
1332496830.333333 254999.000000 226514.000000 4532.089844 4126.899902 8200.129883 3872.590088 56.089001 2370.580078
1332496830.341667 254289.000000 224033.000000 6538.810059 2251.439941 9419.429688 4564.450195 2077.810059 2508.169922
1332496830.350000 261890.000000 221960.000000 6846.089844 1475.270020 9125.589844 4598.290039 3299.219971 3475.419922
1332496830.358333 264502.000000 223085.000000 5066.379883 3270.560059 7933.169922 4173.709961 1908.910034 3867.459961
1332496830.366667 257889.000000 223656.000000 4201.660156 4473.640137 7688.339844 4161.580078 687.578979 3653.689941
1332496830.375000 254270.000000 223151.000000 5715.140137 2752.139893 9273.320312 3772.949951 896.403992 3256.060059
1332496830.383333 258257.000000 224217.000000 6114.310059 1856.859985 9604.320312 4200.490234 1764.380005 2939.219971
1332496830.391667 260020.000000 226868.000000 4237.529785 3605.879883 8066.220215 5430.250000 2138.580078 2696.709961
1332496830.400000 255083.000000 225924.000000 3350.310059 4853.069824 7045.819824 5925.200195 1893.609985 2897.340088
1332496830.408333 254453.000000 222127.000000 5271.330078 2491.500000 8436.679688 5032.080078 2436.050049 3724.590088
1332496830.416667 262588.000000 219950.000000 5994.620117 789.273987 9029.650391 3515.739990 1953.569946 4014.520020
1332496830.425000 265610.000000 223333.000000 4391.410156 2400.959961 8146.459961 3536.959961 530.231995 3133.919922
1332496830.433333 257470.000000 226977.000000 2975.320068 4633.529785 7278.560059 4640.100098 -50.150200 2024.959961
1332496830.441667 250687.000000 226331.000000 4517.859863 3183.800049 8072.600098 5281.660156 1605.140015 2335.139893
1332496830.450000 255563.000000 224495.000000 5551.000000 1101.300049 8461.490234 4725.700195 2726.669922 3480.540039
1332496830.458333 261335.000000 224645.000000 4764.680176 1557.020020 7833.350098 3524.810059 1577.410034 4038.620117
1332496830.466667 260269.000000 224008.000000 3558.030029 2987.610107 7362.439941 3279.229980 562.442017 3786.550049
1332496830.475000 257435.000000 221777.000000 4972.600098 2166.879883 8481.440430 3328.719971 1037.130005 3271.370117
1332496830.483333 261046.000000 221550.000000 5816.180176 590.216980 9120.929688 3895.399902 2382.669922 2824.169922
1332496830.491667 262766.000000 224473.000000 4835.049805 1785.770020 7880.759766 4745.620117 2443.659912 3229.550049
1332496830.500000 256509.000000 226413.000000 3758.870117 3461.199951 6743.770020 4928.959961 1536.619995 3546.689941
1332496830.508333 250793.000000 224372.000000 5218.490234 2865.260010 7803.959961 4351.089844 1333.819946 3680.489990
1332496830.516667 256319.000000 222066.000000 6403.970215 732.344971 9627.759766 3089.300049 1516.780029 3653.689941
1332496830.525000 263343.000000 223235.000000 5200.430176 1388.579956 9372.849609 3371.229980 1450.390015 2678.909912
1332496830.533333 260903.000000 225110.000000 3722.580078 3246.659912 7876.540039 4716.810059 1498.439941 2116.520020
1332496830.541667 254416.000000 223769.000000 4841.649902 2956.399902 8115.919922 5392.359863 2142.810059 2652.320068
1332496830.550000 256698.000000 222172.000000 6471.229980 970.395996 8834.980469 4816.839844 2376.629883 3605.860107
1332496830.558333 261841.000000 223537.000000 5500.740234 1189.660034 8365.730469 4016.469971 1042.270020 3821.199951
1332496830.566667 259503.000000 225840.000000 3827.929932 3088.840088 7676.140137 3978.310059 -357.006989 3016.419922
1332496830.575000 253457.000000 224636.000000 4914.609863 3097.449951 8224.900391 4321.439941 171.373993 2412.360107
1332496830.583333 256029.000000 222221.000000 6841.799805 1028.500000 9252.299805 4387.569824 2418.139893 2510.100098
1332496830.591667 262840.000000 222550.000000 6210.250000 1410.729980 8538.900391 4152.580078 3009.300049 3219.760010
1332496830.600000 261633.000000 225065.000000 4284.529785 3357.209961 7282.169922 3823.590088 1402.839966 3644.669922
1332496830.608333 254591.000000 225109.000000 4693.160156 3647.739990 7745.160156 3686.379883 490.161011 3448.860107
1332496830.616667 254780.000000 223599.000000 6527.379883 1569.869995 9438.429688 3456.580078 1162.520020 3252.010010
1332496830.625000 260639.000000 224107.000000 6531.049805 1633.050049 9283.719727 4174.020020 2089.550049 2775.750000
1332496830.633333 261108.000000 225472.000000 4968.259766 3527.850098 7692.870117 5137.100098 2207.389893 2436.659912
1332496830.641667 255775.000000 223708.000000 4963.450195 4017.370117 7701.419922 5269.649902 2284.399902 2842.080078
1332496830.650000 257398.000000 220947.000000 6767.500000 1645.709961 9107.070312 4000.179932 2548.860107 3624.770020
1332496830.658333 264924.000000 221559.000000 6471.459961 1110.329956 9459.650391 3108.169922 1696.969971 3893.439941
1332496830.666667 265339.000000 225733.000000 4348.799805 3459.510010 8475.299805 4031.239990 573.346985 2910.270020
1332496830.675000 256814.000000 226995.000000 3479.540039 4949.790039 7499.910156 5624.709961 751.656006 2347.709961
1332496830.683333 253316.000000 225161.000000 5147.060059 3218.429932 8460.160156 5869.299805 2336.320068 2987.959961
1332496830.691667 259360.000000 223101.000000 5549.120117 1869.949951 8740.759766 4668.939941 2457.909912 3758.820068
1332496830.700000 262012.000000 224016.000000 4173.609863 3004.129883 8157.040039 3704.729980 987.963989 3652.750000
1332496830.708333 257176.000000 224420.000000 3517.300049 4118.750000 7822.240234 3718.229980 37.264900 2953.679932
1332496830.716667 255146.000000 223322.000000 4923.979980 2330.679932 9095.910156 3792.399902 1013.070007 2711.239990
1332496830.725000 260524.000000 223651.000000 5413.629883 1146.209961 8817.169922 4419.649902 2446.649902 2832.050049
1332496830.733333 262098.000000 225752.000000 4262.979980 2270.969971 7135.479980 5067.120117 2294.679932 3376.620117
1332496830.741667 256889.000000 225379.000000 3606.459961 3568.189941 6552.649902 4970.270020 1516.380005 3662.570068
1332496830.750000 253948.000000 222631.000000 5511.700195 2066.300049 7952.660156 4019.909912 1513.140015 3752.629883
1332496830.758333 259799.000000 222067.000000 5873.500000 608.583984 9253.780273 2870.739990 1348.239990 3344.199951
1332496830.766667 262547.000000 224901.000000 4346.080078 1928.099976 8590.969727 3455.459961 904.390991 2379.270020
1332496830.775000 256137.000000 226761.000000 3423.560059 3379.080078 7471.149902 4894.169922 1153.540039 2031.410034
1332496830.783333 250326.000000 225013.000000 5519.979980 2423.969971 7991.759766 5117.950195 2098.790039 3099.239990
1332496830.791667 255454.000000 222992.000000 6547.950195 496.496002 8751.339844 3900.560059 2132.290039 4076.810059
1332496830.800000 261286.000000 223489.000000 5152.850098 1501.510010 8425.610352 2888.030029 776.114014 3786.360107
1332496830.808333 258969.000000 224069.000000 3832.610107 3001.979980 7979.259766 3182.310059 52.716000 2874.800049
1332496830.816667 254946.000000 222035.000000 5317.879883 2139.800049 9103.139648 3955.610107 1235.170044 2394.149902
1332496830.825000 258676.000000 221205.000000 6594.910156 505.343994 9423.360352 4562.470215 2913.739990 2892.350098
1332496830.833333 262125.000000 223566.000000 5116.750000 1773.599976 8082.200195 4776.370117 2386.389893 3659.729980
1332496830.841667 257835.000000 225918.000000 3714.300049 3477.080078 7205.370117 4554.609863 711.539001 3878.419922
1332496830.850000 253660.000000 224371.000000 5022.450195 2592.429932 8277.200195 4119.370117 486.507996 3666.739990
1332496830.858333 259503.000000 222061.000000 6589.950195 659.935974 9596.919922 3598.100098 1702.489990 3036.600098
1332496830.866667 265495.000000 222843.000000 5541.850098 1728.430054 8459.959961 4492.000000 2231.969971 2430.620117
1332496830.875000 260929.000000 224996.000000 4000.949951 3745.989990 6983.790039 5430.859863 1855.260010 2533.379883
1332496830.883333 252716.000000 224335.000000 5086.560059 3401.149902 7597.970215 5196.120117 1755.719971 3079.760010
1332496830.891667 254110.000000 223111.000000 6822.189941 1229.079956 9164.339844 3761.229980 1679.390015 3584.879883
1332496830.900000 259969.000000 224693.000000 6183.950195 1538.500000 9222.080078 3139.169922 949.901978 3180.800049
1332496830.908333 259078.000000 226913.000000 4388.890137 3694.820068 8195.019531 3933.000000 426.079987 2388.449951
1332496830.916667 254563.000000 224760.000000 5168.439941 4020.939941 8450.269531 4758.910156 1458.900024 2286.429932
1332496830.925000 258059.000000 221217.000000 6883.459961 1649.530029 9232.780273 4457.649902 3057.820068 3031.949951
1332496830.933333 264667.000000 221177.000000 6218.509766 1645.729980 8657.179688 3663.500000 2528.280029 3978.340088
1332496830.941667 262925.000000 224382.000000 4627.500000 3635.929932 7892.799805 3431.320068 604.508972 3901.370117
1332496830.950000 254708.000000 225448.000000 4408.250000 4461.040039 8197.169922 3953.750000 -44.534599 3154.870117
1332496830.958333 253702.000000 224635.000000 5825.770020 2577.050049 9590.049805 4569.250000 1460.270020 2785.169922
1332496830.966667 260206.000000 224140.000000 5387.979980 1951.160034 8789.509766 5131.660156 2706.379883 2972.479980
1332496830.975000 261240.000000 224737.000000 3860.810059 3418.310059 7414.529785 5284.520020 2271.379883 3183.149902
1332496830.983333 256140.000000 223252.000000 3850.010010 3957.139893 7262.649902 4964.640137 1499.510010 3453.129883
1332496830.991667 256116.000000 221349.000000 5594.479980 2054.399902 8835.129883 3662.010010 1485.510010 3613.010010
19
tests/data/prep-20120323T1002-first19lines
Normal file
@@ -0,0 +1,19 @@
2.56437e+05 2.24430e+05 4.01161e+03 3.47534e+03 7.49589e+03 3.38894e+03 2.61397e+02 3.73126e+03
2.53963e+05 2.24167e+05 5.62107e+03 1.54801e+03 9.16517e+03 3.52293e+03 1.05893e+03 2.99696e+03
2.58508e+05 2.24930e+05 6.01140e+03 8.18866e+02 9.03995e+03 4.48244e+03 2.49039e+03 2.67934e+03
2.59627e+05 2.26022e+05 4.47450e+03 2.42302e+03 7.41419e+03 5.07197e+03 2.43938e+03 2.96296e+03
2.55187e+05 2.24632e+05 4.73857e+03 3.39804e+03 7.39512e+03 4.72645e+03 1.83903e+03 3.39353e+03
2.57102e+05 2.21623e+05 6.14413e+03 1.44109e+03 8.75648e+03 3.49532e+03 1.86994e+03 3.75253e+03
2.63653e+05 2.21770e+05 6.22177e+03 7.38962e+02 9.54760e+03 2.66682e+03 1.46266e+03 3.33257e+03
2.63613e+05 2.25256e+05 4.47712e+03 2.43745e+03 8.51021e+03 3.85563e+03 9.59442e+02 2.38718e+03
2.55350e+05 2.26264e+05 4.28372e+03 3.92394e+03 7.91247e+03 5.46652e+03 1.28499e+03 2.09372e+03
2.52727e+05 2.24609e+05 5.85193e+03 2.49198e+03 8.54063e+03 5.62305e+03 2.33978e+03 3.00714e+03
2.58475e+05 2.23578e+05 5.92487e+03 1.39448e+03 8.77962e+03 4.54418e+03 2.13203e+03 3.84976e+03
2.61563e+05 2.24609e+05 4.33614e+03 2.45575e+03 8.05538e+03 3.46911e+03 6.27873e+02 3.66420e+03
2.56401e+05 2.24441e+05 4.18715e+03 3.45717e+03 7.90669e+03 3.53355e+03 -5.84482e+00 2.96687e+03
2.54745e+05 2.22644e+05 6.02005e+03 1.94721e+03 9.28939e+03 3.80020e+03 1.34820e+03 2.37785e+03
2.60723e+05 2.22660e+05 6.69719e+03 1.03048e+03 9.26124e+03 4.34917e+03 2.84530e+03 2.73619e+03
2.63089e+05 2.25711e+05 4.77887e+03 2.60417e+03 7.39660e+03 4.59811e+03 2.17472e+03 3.40729e+03
2.55843e+05 2.27128e+05 4.02413e+03 4.39323e+03 6.79336e+03 4.62535e+03 7.52009e+02 3.44647e+03
2.51904e+05 2.24868e+05 5.82289e+03 3.02127e+03 8.46160e+03 3.80298e+03 8.07212e+02 3.53468e+03
2.57670e+05 2.22974e+05 6.73436e+03 1.60956e+03 9.92960e+03 2.98028e+03 1.44168e+03 3.05351e+03
11
tests/data/prep-20120323T1004-badtimes
Normal file
@@ -0,0 +1,11 @@
1332497040.000000 2.56439e+05 2.24775e+05 2.92897e+03 4.66646e+03 7.58491e+03 3.57351e+03 -4.34171e+02 2.98819e+03
1332497040.010000 2.51903e+05 2.23202e+05 4.23696e+03 3.49363e+03 8.53493e+03 4.29416e+03 8.49573e+02 2.38189e+03
1332497040.020000 2.57625e+05 2.20247e+05 5.47017e+03 1.35872e+03 9.18903e+03 4.56136e+03 2.65599e+03 2.60912e+03
1332497040.030000 2.63375e+05 2.20706e+05 4.51842e+03 1.80758e+03 8.17208e+03 4.17463e+03 2.57884e+03 3.32848e+03
1332497040.040000 2.59221e+05 2.22346e+05 2.98879e+03 3.66264e+03 6.87274e+03 3.94223e+03 1.25928e+03 3.51786e+03
1332497040.050000 2.51918e+05 2.22281e+05 4.22677e+03 2.84764e+03 7.78323e+03 3.81659e+03 8.04944e+02 3.46314e+03
1332497040.050000 2.54478e+05 2.21701e+05 5.61366e+03 1.02262e+03 9.26581e+03 3.50152e+03 1.29331e+03 3.07271e+03
1332497040.060000 2.59568e+05 2.22945e+05 4.97190e+03 1.28250e+03 8.62081e+03 4.06316e+03 1.85717e+03 2.61990e+03
1332497040.070000 2.57269e+05 2.23697e+05 3.60527e+03 3.05749e+03 7.22363e+03 4.90330e+03 1.93736e+03 2.35357e+03
1332497040.080000 2.52274e+05 2.21438e+05 5.01228e+03 2.86309e+03 7.87115e+03 4.80448e+03 2.18291e+03 2.93397e+03
1332497040.090000 2.56468e+05 2.19205e+05 6.29804e+03 8.09467e+02 9.12895e+03 3.52055e+03 2.16980e+03 3.88739e+03
18
tests/test.order
Normal file
@@ -0,0 +1,18 @@
test_printf.py
test_lrucache.py
test_mustclose.py

test_serializer.py
test_iteratorizer.py

test_timestamper.py
test_layout.py
test_rbtree.py
test_interval.py

test_bulkdata.py
test_nilmdb.py
test_client.py
test_cmdline.py

test_*.py
102
tests/test_bulkdata.py
Normal file
@@ -0,0 +1,102 @@
# -*- coding: utf-8 -*-

import nilmdb
from nilmdb.utils.printf import *
from nose.tools import *
from nose.tools import assert_raises
import itertools

from testutil.helpers import *

testdb = "tests/bulkdata-testdb"

import nilmdb.server.bulkdata
from nilmdb.server.bulkdata import BulkData

class TestBulkData(object):

    def test_bulkdata(self):
        for (size, files, db) in [ ( 0, 0, testdb ),
                                   ( 25, 1000, testdb ),
                                   ( 1000, 3, testdb.decode("utf-8") ) ]:
            recursive_unlink(db)
            os.mkdir(db)
            self.do_basic(db, size, files)

    def do_basic(self, db, size, files):
        """Do the basic test with variable file_size and files_per_dir"""
        if not size or not files:
            data = BulkData(db)
        else:
            data = BulkData(db, file_size = size, files_per_dir = files)

        # create empty
        with assert_raises(ValueError):
            data.create("/foo", "uint16_8")
        with assert_raises(ValueError):
            data.create("foo/bar", "uint16_8")
        with assert_raises(ValueError):
            data.create("/foo/bar", "uint8_8")
        data.create("/foo/bar", "uint16_8")
        data.create(u"/foo/baz/quux", "float64_16")
        with assert_raises(ValueError):
            data.create("/foo/bar/baz", "uint16_8")
        with assert_raises(ValueError):
            data.create("/foo/baz", "float64_16")

        # get node -- see if caching works
        nodes = []
        for i in range(5000):
            nodes.append(data.getnode("/foo/bar"))
            nodes.append(data.getnode("/foo/baz/quux"))
        del nodes

        # Test node
        node = data.getnode("/foo/bar")
        with assert_raises(IndexError):
            x = node[0]
        raw = []
        for i in range(1000):
            raw.append([10000+i, 1, 2, 3, 4, 5, 6, 7, 8 ])
        node.append(raw[0:1])
        node.append(raw[1:100])
        node.append(raw[100:])

        misc_slices = [ 0, 100, slice(None), slice(0), slice(10),
                        slice(5,10), slice(3,None), slice(3,-3),
                        slice(20,10), slice(200,100,-1), slice(None,0,-1),
                        slice(100,500,5) ]
        # Extract slices
        for s in misc_slices:
            eq_(node[s], raw[s])

        # Get some coverage of remove; remove is more fully tested
        # in cmdline
        with assert_raises(IndexError):
            node.remove(9999,9998)

        # close, reopen
        # reopen
        data.close()
        if not size or not files:
            data = BulkData(db)
        else:
            data = BulkData(db, file_size = size, files_per_dir = files)
        node = data.getnode("/foo/bar")

        # Extract slices
        for s in misc_slices:
            eq_(node[s], raw[s])

        # destroy
        with assert_raises(ValueError):
            data.destroy("/foo")
        with assert_raises(ValueError):
            data.destroy("/foo/baz")
        with assert_raises(ValueError):
            data.destroy("/foo/qwerty")
        data.destroy("/foo/baz/quux")
        data.destroy("/foo/bar")

        # close
        data.close()
@@ -1,8 +1,10 @@
# -*- coding: utf-8 -*-

import nilmdb
from nilmdb.utils.printf import *
from nilmdb.utils import timestamper
from nilmdb.client import ClientError, ServerError

import datetime_tz
from nilmdb.utils import datetime_tz

from nose.tools import *
from nose.tools import assert_raises
@@ -15,8 +17,9 @@ import cStringIO
import simplejson as json
import unittest
import warnings
import resource

from test_helpers import *
from testutil.helpers import *

testdb = "tests/client-testdb"

@@ -67,7 +70,11 @@ class TestClient(object):
eq_(distutils.version.StrictVersion(version),
distutils.version.StrictVersion(test_server.version))

def test_client_2_nilmdb(self):
# Bad URLs should give 404, not 500
with assert_raises(ClientError):
client.http.get("/stream/create")

def test_client_2_createlist(self):
# Basic stream tests, like those in test_nilmdb:test_stream
client = nilmdb.Client(url = "http://localhost:12380/")

@@ -82,6 +89,8 @@ class TestClient(object):
# Bad layout type
with assert_raises(ClientError):
client.stream_create("/newton/prep", "NoSuchLayout")

# Create three streams
client.stream_create("/newton/prep", "PrepData")
client.stream_create("/newton/raw", "RawData")
client.stream_create("/newton/zzz/rawnotch", "RawNotchedData")
@@ -95,6 +104,20 @@ class TestClient(object):
eq_(client.stream_list(layout="RawData"), [ ["/newton/raw", "RawData"] ])
eq_(client.stream_list(path="/newton/raw"), [ ["/newton/raw", "RawData"] ])

# Try messing with resource limits to trigger errors and get
# more coverage. Here, make it so we can only create files 1
# byte in size, which will trigger an IOError in the server when
# we create a table.
limit = resource.getrlimit(resource.RLIMIT_FSIZE)
resource.setrlimit(resource.RLIMIT_FSIZE, (1, limit[1]))
with assert_raises(ServerError) as e:
client.stream_create("/newton/hello", "RawData")
resource.setrlimit(resource.RLIMIT_FSIZE, limit)


def test_client_3_metadata(self):
client = nilmdb.Client(url = "http://localhost:12380/")

# Set / get metadata
eq_(client.stream_get_metadata("/newton/prep"), {})
eq_(client.stream_get_metadata("/newton/raw"), {})
@@ -124,7 +147,7 @@ class TestClient(object):
with assert_raises(ClientError):
client.stream_update_metadata("/newton/prep", [1,2,3])

def test_client_3_insert(self):
def test_client_4_insert(self):
client = nilmdb.Client(url = "http://localhost:12380/")

datetime_tz.localtz_set("America/New_York")
@@ -135,13 +158,13 @@ class TestClient(object):
rate = 120

# First try a nonexistent path
data = nilmdb.timestamper.TimestamperRate(testfile, start, 120)
data = timestamper.TimestamperRate(testfile, start, 120)
with assert_raises(ClientError) as e:
result = client.stream_insert("/newton/no-such-path", data)
in_("404 Not Found", str(e.exception))

# Now try reversed timestamps
data = nilmdb.timestamper.TimestamperRate(testfile, start, 120)
data = timestamper.TimestamperRate(testfile, start, 120)
data = reversed(list(data))
with assert_raises(ClientError) as e:
result = client.stream_insert("/newton/prep", data)
@@ -150,7 +173,7 @@ class TestClient(object):

# Now try empty data (no server request made)
empty = cStringIO.StringIO("")
data = nilmdb.timestamper.TimestamperRate(empty, start, 120)
data = timestamper.TimestamperRate(empty, start, 120)
result = client.stream_insert("/newton/prep", data)
eq_(result, None)

@@ -162,7 +185,7 @@ class TestClient(object):
in_("no data provided", str(e.exception))

# Specify start/end (starts too late)
data = nilmdb.timestamper.TimestamperRate(testfile, start, 120)
data = timestamper.TimestamperRate(testfile, start, 120)
with assert_raises(ClientError) as e:
result = client.stream_insert("/newton/prep", data,
start + 5, start + 120)
@@ -171,7 +194,7 @@ class TestClient(object):
str(e.exception))

# Specify start/end (ends too early)
data = nilmdb.timestamper.TimestamperRate(testfile, start, 120)
data = timestamper.TimestamperRate(testfile, start, 120)
with assert_raises(ClientError) as e:
result = client.stream_insert("/newton/prep", data,
start, start + 1)
@@ -182,7 +205,7 @@ class TestClient(object):
str(e.exception))

# Now do the real load
data = nilmdb.timestamper.TimestamperRate(testfile, start, 120)
data = timestamper.TimestamperRate(testfile, start, 120)
result = client.stream_insert("/newton/prep", data,
start, start + 119.999777)
eq_(result, "ok")
@@ -193,20 +216,23 @@ class TestClient(object):
eq_(intervals, [[start, start + 119.999777]])

# Try some overlapping data -- just insert it again
data = nilmdb.timestamper.TimestamperRate(testfile, start, 120)
data = timestamper.TimestamperRate(testfile, start, 120)
with assert_raises(ClientError) as e:
result = client.stream_insert("/newton/prep", data)
in_("400 Bad Request", str(e.exception))
in_("OverlapError", str(e.exception))
in_("verlap", str(e.exception))

def test_client_4_extract(self):
# Misc tests for extract. Most of them are in test_cmdline.
def test_client_5_extractremove(self):
# Misc tests for extract and remove. Most of them are in test_cmdline.
client = nilmdb.Client(url = "http://localhost:12380/")

for x in client.stream_extract("/newton/prep", 123, 123):
raise Exception("shouldn't be any data for this request")
raise AssertionError("shouldn't be any data for this request")

def test_client_5_generators(self):
with assert_raises(ClientError) as e:
client.stream_remove("/newton/prep", 123, 120)

def test_client_6_generators(self):
# A lot of the client functionality is already tested by test_cmdline,
# but this gets a bit more coverage that cmdline misses.
client = nilmdb.Client(url = "http://localhost:12380/")
@@ -255,25 +281,78 @@ class TestClient(object):
in_("404 Not Found", str(e.exception))
in_("No such stream", str(e.exception))

def test_client_6_chunked(self):
def test_client_7_headers(self):
# Make sure that /stream/intervals and /stream/extract
# properly return streaming, chunked response. Pokes around
# in client.http internals a bit to look at the response
# headers.
# properly return streaming, chunked, text/plain response.
# Pokes around in client.http internals a bit to look at the
# response headers.

client = nilmdb.Client(url = "http://localhost:12380/")
http = client.http

# Use a warning rather than returning a test failure, so that we can
# still disable chunked responses for debugging.
x = client.http.get("stream/intervals", { "path": "/newton/prep" },

# Intervals
x = http.get("stream/intervals", { "path": "/newton/prep" },
retjson=False)
lines_(x, 1)
if "transfer-encoding: chunked" not in client.http._headers.lower():
if "Transfer-Encoding: chunked" not in http._headers:
warnings.warn("Non-chunked HTTP response for /stream/intervals")
if "Content-Type: text/plain;charset=utf-8" not in http._headers:
raise AssertionError("/stream/intervals is not text/plain:\n" +
http._headers)

x = client.http.get("stream/extract",
# Extract
x = http.get("stream/extract",
{ "path": "/newton/prep",
"start": "123",
"end": "123" }, retjson=False)
if "transfer-encoding: chunked" not in client.http._headers.lower():
if "Transfer-Encoding: chunked" not in http._headers:
warnings.warn("Non-chunked HTTP response for /stream/extract")
if "Content-Type: text/plain;charset=utf-8" not in http._headers:
raise AssertionError("/stream/extract is not text/plain:\n" +
http._headers)

# Make sure Access-Control-Allow-Origin gets set
if "Access-Control-Allow-Origin: " not in http._headers:
raise AssertionError("No Access-Control-Allow-Origin (CORS) "
"header in /stream/extract response:\n" +
http._headers)

def test_client_8_unicode(self):
# Basic Unicode tests
client = nilmdb.Client(url = "http://localhost:12380/")

# Delete streams that exist
for stream in client.stream_list():
client.stream_destroy(stream[0])

# Database is empty
eq_(client.stream_list(), [])

# Create Unicode stream, match it
raw = [ u"/düsseldorf/raw", u"uint16_6" ]
prep = [ u"/düsseldorf/prep", u"uint16_6" ]
client.stream_create(*raw)
eq_(client.stream_list(), [raw])
eq_(client.stream_list(layout=raw[1]), [raw])
eq_(client.stream_list(path=raw[0]), [raw])
client.stream_create(*prep)
eq_(client.stream_list(), [prep, raw])

# Set / get metadata with Unicode keys and values
eq_(client.stream_get_metadata(raw[0]), {})
eq_(client.stream_get_metadata(prep[0]), {})
meta1 = { u"alpha": u"α",
u"β": u"beta" }
meta2 = { u"alpha": u"α" }
meta3 = { u"β": u"beta" }
client.stream_set_metadata(prep[0], meta1)
client.stream_update_metadata(prep[0], {})
client.stream_update_metadata(raw[0], meta2)
client.stream_update_metadata(raw[0], meta3)
eq_(client.stream_get_metadata(prep[0]), meta1)
eq_(client.stream_get_metadata(raw[0]), meta1)
eq_(client.stream_get_metadata(raw[0], [ "alpha" ]), meta2)
eq_(client.stream_get_metadata(raw[0], [ "alpha", "β" ]), meta1)
@@ -1,29 +1,35 @@
# -*- coding: utf-8 -*-

import nilmdb
from nilmdb.utils.printf import *
import nilmdb.cmdline
from nilmdb.utils import datetime_tz

import unittest
from nose.tools import *
from nose.tools import assert_raises
import itertools
import datetime_tz
import os
import re
import shutil
import sys
import threading
import urllib2
from urllib2 import urlopen, HTTPError
import Queue
import cStringIO
import StringIO
import shlex

from test_helpers import *
from testutil.helpers import *

testdb = "tests/cmdline-testdb"

def server_start(max_results = None):
def server_start(max_results = None, bulkdata_args = {}):
global test_server, test_db
# Start web app on a custom port
test_db = nilmdb.NilmDB(testdb, sync = False, max_results = max_results)
test_db = nilmdb.NilmDB(testdb, sync = False,
max_results = max_results,
bulkdata_args = bulkdata_args)
test_server = nilmdb.Server(test_db, host = "127.0.0.1",
port = 12380, stoppable = False,
fast_shutdown = True,
@@ -45,13 +51,18 @@ def setup_module():
def teardown_module():
server_stop()

# Add an encoding property to StringIO so Python will convert Unicode
# properly when writing or reading.
class UTF8StringIO(StringIO.StringIO):
encoding = 'utf-8'

class TestCmdline(object):

def run(self, arg_string, infile=None, outfile=None):
"""Run a cmdline client with the specified argument string,
passing the given input. Returns a tuple with the output and
exit code"""
#print "TZ=UTC ./nilmtool.py " + arg_string
# printf("TZ=UTC ./nilmtool.py %s\n", arg_string)
class stdio_wrapper:
def __init__(self, stdin, stdout, stderr):
self.io = (stdin, stdout, stderr)
@@ -62,15 +73,18 @@ class TestCmdline(object):
( sys.stdin, sys.stdout, sys.stderr ) = self.saved
# Empty input if none provided
if infile is None:
infile = cStringIO.StringIO("")
infile = UTF8StringIO("")
# Capture stderr
errfile = cStringIO.StringIO()
errfile = UTF8StringIO()
if outfile is None:
# If no output file, capture stdout with stderr
outfile = errfile
with stdio_wrapper(infile, outfile, errfile) as s:
try:
nilmdb.cmdline.Cmdline(shlex.split(arg_string)).run()
# shlex doesn't support Unicode very well. Encode the
# string as UTF-8 explicitly before splitting.
args = shlex.split(arg_string.encode('utf-8'))
nilmdb.cmdline.Cmdline(args).run()
sys.exit(0)
except SystemExit as e:
exitcode = e.code
@@ -84,14 +98,24 @@ class TestCmdline(object):
self.dump()
eq_(self.exitcode, 0)

def fail(self, arg_string, infile = None, exitcode = None):
def fail(self, arg_string, infile = None,
exitcode = None, require_error = True):
self.run(arg_string, infile)
if exitcode is not None and self.exitcode != exitcode:
# Wrong exit code
self.dump()
eq_(self.exitcode, exitcode)
if self.exitcode == 0:
# Success, when we wanted failure
self.dump()
ne_(self.exitcode, 0)
# Make sure the output contains the word "error" at the
# beginning of a line, but only if an exitcode wasn't
# specified.
if require_error and not re.search("^error",
self.captured, re.MULTILINE):
raise AssertionError("command failed, but output doesn't "
"contain the string 'error'")

def contain(self, checkstring):
in_(checkstring, self.captured)
@@ -104,8 +128,8 @@ class TestCmdline(object):
with open(file) as f:
contents = f.read()
if contents != self.captured:
#print contents[1:1000] + "\n"
#print self.captured[1:1000] + "\n"
print contents[1:1000] + "\n"
print self.captured[1:1000] + "\n"
raise AssertionError("captured data doesn't match " + file)

def matchfilecount(self, file):
@@ -121,7 +145,7 @@ class TestCmdline(object):
def dump(self):
printf("-----dump start-----\n%s-----dump end-----\n", self.captured)

def test_cmdline_01_basic(self):
def test_01_basic(self):

# help
self.ok("--help")
@@ -167,14 +191,14 @@ class TestCmdline(object):
self.fail("extract --start 2000-01-01 --start 2001-01-02")
self.contain("duplicated argument")

def test_cmdline_02_info(self):
def test_02_info(self):
self.ok("info")
self.contain("Server URL: http://localhost:12380/")
self.contain("Server version: " + test_server.version)
self.contain("Server database path")
self.contain("Server database size")

def test_cmdline_03_createlist(self):
def test_03_createlist(self):
# Basic stream tests, like those in test_client.

# No streams
@@ -191,6 +215,10 @@ class TestCmdline(object):
# Bad layout type
self.fail("create /newton/prep NoSuchLayout")
self.contain("no such layout")
self.fail("create /newton/prep float32_0")
self.contain("no such layout")
self.fail("create /newton/prep float33_1")
self.contain("no such layout")

# Create a few streams
self.ok("create /newton/zzz/rawnotch RawNotchedData")
@@ -214,10 +242,17 @@ class TestCmdline(object):
"/newton/raw RawData\n"
"/newton/zzz/rawnotch RawNotchedData\n")

# Match just one type or one path
# Match just one type or one path. Also check
# that --path is optional
self.ok("list --path /newton/raw")
self.match("/newton/raw RawData\n")

self.ok("list /newton/raw")
self.match("/newton/raw RawData\n")

self.fail("list -p /newton/raw /newton/raw")
self.contain("too many paths")

self.ok("list --layout RawData")
self.match("/newton/raw RawData\n")

@@ -229,10 +264,17 @@ class TestCmdline(object):
self.ok("list --path *zzz* --layout Raw*")
self.match("/newton/zzz/rawnotch RawNotchedData\n")

self.ok("list *zzz* --layout Raw*")
self.match("/newton/zzz/rawnotch RawNotchedData\n")

self.ok("list --path *zzz* --layout Prep*")
self.match("")

def test_cmdline_04_metadata(self):
# reversed range
self.fail("list /newton/prep --start 2020-01-01 --end 2000-01-01")
self.contain("start is after end")

def test_04_metadata(self):
# Set / get metadata
self.fail("metadata")
self.fail("metadata --get")
@@ -289,7 +331,7 @@ class TestCmdline(object):
self.fail("metadata /newton/nosuchpath")
self.contain("No stream at path /newton/nosuchpath")

def test_cmdline_05_parsetime(self):
def test_05_parsetime(self):
os.environ['TZ'] = "America/New_York"
cmd = nilmdb.cmdline.Cmdline(None)
test = datetime_tz.datetime_tz.now()
@@ -298,30 +340,24 @@ class TestCmdline(object):
eq_(cmd.parse_time("hi there 20120405 1400-0400 testing! 123"), test)
eq_(cmd.parse_time("20120405 1800 UTC"), test)
eq_(cmd.parse_time("20120405 1400-0400 UTC"), test)
with assert_raises(ValueError):
print cmd.parse_time("20120405 1400-9999")
with assert_raises(ValueError):
print cmd.parse_time("hello")
with assert_raises(ValueError):
print cmd.parse_time("-")
with assert_raises(ValueError):
print cmd.parse_time("")
with assert_raises(ValueError):
print cmd.parse_time("14:00")
for badtime in [ "20120405 1400-9999", "hello", "-", "", "4:00" ]:
with assert_raises(ValueError):
x = cmd.parse_time(badtime)
x = cmd.parse_time("now")
eq_(cmd.parse_time("snapshot-20120405-140000.raw.gz"), test)
eq_(cmd.parse_time("prep-20120405T1400"), test)

def test_cmdline_06_insert(self):
def test_06_insert(self):
self.ok("insert --help")

self.fail("insert /foo/bar baz qwer")
self.contain("Error getting stream info")
self.contain("error getting stream info")

self.fail("insert /newton/prep baz qwer")
self.match("Error opening input file baz\n")
self.match("error opening input file baz\n")

self.fail("insert /newton/prep")
self.contain("Error extracting time")
self.contain("error extracting time")

self.fail("insert --start 19801205 /newton/prep 1 2 3 4")
self.contain("--start can only be used with one input file")
@@ -334,6 +370,14 @@ class TestCmdline(object):
with open("tests/data/prep-20120323T1004-timestamped") as input:
self.ok("insert --none /newton/prep", input)

# insert pre-timestamped data, with bad times (non-monotonic)
os.environ['TZ'] = "UTC"
with open("tests/data/prep-20120323T1004-badtimes") as input:
self.fail("insert --none /newton/prep", input)
self.contain("error parsing input data")
self.contain("line 7:")
self.contain("timestamp is not monotonically increasing")

# insert data with normal timestamper from filename
os.environ['TZ'] = "UTC"
self.ok("insert --rate 120 /newton/prep "
@@ -362,7 +406,7 @@ class TestCmdline(object):
os.environ['TZ'] = "UTC"
self.fail("insert --rate 120 /newton/raw "
"tests/data/prep-20120323T1004")
self.contain("Error parsing input data")
self.contain("error parsing input data")

# empty data does nothing
self.ok("insert --rate 120 --start '03/23/2012 06:05:00' /newton/prep "
@@ -371,7 +415,7 @@ class TestCmdline(object):
# bad start time
self.fail("insert --rate 120 --start 'whatever' /newton/prep /dev/null")

def test_cmdline_07_detail(self):
def test_07_detail(self):
# Just count the number of lines, it's probably fine
self.ok("list --detail")
lines_(self.captured, 8)
@@ -405,23 +449,41 @@ class TestCmdline(object):
self.ok("list --detail")
lines_(self.captured, 8)

def test_cmdline_08_extract(self):
# Verify the "raw timestamp" output
self.ok("list --detail --path *prep --timestamp-raw "
"--start='23 Mar 2012 10:05:15.50'")
lines_(self.captured, 2)
self.contain("[ 1332497115.5 -> 1332497159.991668 ]")

self.ok("list --detail --path *prep -T "
"--start='23 Mar 2012 10:05:15.612'")
lines_(self.captured, 2)
self.contain("[ 1332497115.612 -> 1332497159.991668 ]")

def test_08_extract(self):
# nonexistent stream
self.fail("extract /no/such/foo --start 2000-01-01 --end 2020-01-01")
self.contain("Error getting stream info")
self.contain("error getting stream info")

# empty ranges return an error
# reversed range
self.fail("extract -a /newton/prep --start 2020-01-01 --end 2000-01-01")
self.contain("start is after end")

# empty ranges return error 2
self.fail("extract -a /newton/prep " +
"--start '23 Mar 2012 10:00:30' " +
"--end '23 Mar 2012 10:00:30'", exitcode = 2)
"--end '23 Mar 2012 10:00:30'",
exitcode = 2, require_error = False)
self.contain("no data")
self.fail("extract -a /newton/prep " +
"--start '23 Mar 2012 10:00:30.000001' " +
"--end '23 Mar 2012 10:00:30.000001'", exitcode = 2)
"--end '23 Mar 2012 10:00:30.000001'",
exitcode = 2, require_error = False)
self.contain("no data")
self.fail("extract -a /newton/prep " +
"--start '23 Mar 2022 10:00:30' " +
"--end '23 Mar 2022 10:00:30'", exitcode = 2)
"--end '23 Mar 2022 10:00:30'",
exitcode = 2, require_error = False)
self.contain("no data")

# but are ok if we're just counting results
@@ -453,6 +515,8 @@ class TestCmdline(object):
test(4, "10:00:30.008333", "10:00:30.025")
test(5, "10:00:30", "10:00:31", extra="--annotate --bare")
test(6, "10:00:30", "10:00:31", extra="-b")
test(7, "10:00:30", "10:00:30.999", extra="-a -T")
test(7, "10:00:30", "10:00:30.999", extra="-a --timestamp-raw")

# all data put in by tests
self.ok("extract -a /newton/prep --start 2000-01-01 --end 2020-01-01")
@@ -460,7 +524,7 @@ class TestCmdline(object):
self.ok("extract -c /newton/prep --start 2000-01-01 --end 2020-01-01")
self.match("43200\n")

def test_cmdline_09_truncated(self):
def test_09_truncated(self):
# Test truncated responses by overriding the nilmdb max_results
server_stop()
server_start(max_results = 2)
@@ -469,7 +533,102 @@ class TestCmdline(object):
server_stop()
server_start()

def test_cmdline_10_destroy(self):
def test_10_remove(self):
# Removing data

# Try nonexistent stream
self.fail("remove /no/such/foo --start 2000-01-01 --end 2020-01-01")
self.contain("No stream at path")

self.fail("remove /newton/prep --start 2020-01-01 --end 2000-01-01")
self.contain("start is after end")

# empty ranges return success, backwards ranges return error
self.ok("remove /newton/prep " +
"--start '23 Mar 2012 10:00:30' " +
"--end '23 Mar 2012 10:00:30'")
self.match("")
self.ok("remove /newton/prep " +
"--start '23 Mar 2012 10:00:30.000001' " +
"--end '23 Mar 2012 10:00:30.000001'")
self.match("")
self.ok("remove /newton/prep " +
"--start '23 Mar 2022 10:00:30' " +
"--end '23 Mar 2022 10:00:30'")
self.match("")

# Verbose
self.ok("remove -c /newton/prep " +
"--start '23 Mar 2012 10:00:30' " +
"--end '23 Mar 2012 10:00:30'")
self.match("0\n")
self.ok("remove --count /newton/prep " +
"--start '23 Mar 2012 10:00:30' " +
"--end '23 Mar 2012 10:00:30'")
self.match("0\n")

# Make sure we have the data we expect
self.ok("list --detail /newton/prep")
self.match("/newton/prep PrepData\n" +
" [ Fri, 23 Mar 2012 10:00:00.000000 +0000"
" -> Fri, 23 Mar 2012 10:01:59.991668 +0000 ]\n"
" [ Fri, 23 Mar 2012 10:02:00.000000 +0000"
" -> Fri, 23 Mar 2012 10:03:59.991668 +0000 ]\n"
" [ Fri, 23 Mar 2012 10:04:00.000000 +0000"
" -> Fri, 23 Mar 2012 10:05:59.991668 +0000 ]\n")

# Remove various chunks of prep data and make sure
# they're gone.
self.ok("remove -c /newton/prep " +
"--start '23 Mar 2012 10:00:30' " +
"--end '23 Mar 2012 10:00:40'")
self.match("1200\n")

self.ok("remove -c /newton/prep " +
"--start '23 Mar 2012 10:00:10' " +
"--end '23 Mar 2012 10:00:20'")
self.match("1200\n")

self.ok("remove -c /newton/prep " +
"--start '23 Mar 2012 10:00:05' " +
"--end '23 Mar 2012 10:00:25'")
self.match("1200\n")

self.ok("remove -c /newton/prep " +
"--start '23 Mar 2012 10:03:50' " +
"--end '23 Mar 2012 10:06:50'")
self.match("15600\n")

self.ok("extract -c /newton/prep --start 2000-01-01 --end 2020-01-01")
self.match("24000\n")

# See the missing chunks in list output
self.ok("list --detail /newton/prep")
self.match("/newton/prep PrepData\n" +
" [ Fri, 23 Mar 2012 10:00:00.000000 +0000"
" -> Fri, 23 Mar 2012 10:00:05.000000 +0000 ]\n"
" [ Fri, 23 Mar 2012 10:00:25.000000 +0000"
" -> Fri, 23 Mar 2012 10:00:30.000000 +0000 ]\n"
" [ Fri, 23 Mar 2012 10:00:40.000000 +0000"
" -> Fri, 23 Mar 2012 10:01:59.991668 +0000 ]\n"
" [ Fri, 23 Mar 2012 10:02:00.000000 +0000"
" -> Fri, 23 Mar 2012 10:03:50.000000 +0000 ]\n")

# Remove all data, verify it's missing
self.ok("remove /newton/prep --start 2000-01-01 --end 2020-01-01")
self.match("") # no count requested this time
self.ok("list --detail /newton/prep")
self.match("/newton/prep PrepData\n" +
" (no intervals)\n")

# Reinsert some data, to verify that no overlaps with deleted
# data are reported
os.environ['TZ'] = "UTC"
self.ok("insert --rate 120 /newton/prep "
"tests/data/prep-20120323T1000 "
"tests/data/prep-20120323T1002")

def test_11_destroy(self):
# Delete records
self.ok("destroy --help")

@@ -490,7 +649,7 @@ class TestCmdline(object):

# Notice how they're not empty
self.ok("list --detail")
lines_(self.captured, 8)
lines_(self.captured, 7)

# Delete some
self.ok("destroy /newton/prep")
@@ -519,3 +678,167 @@ class TestCmdline(object):
# Make sure it was created empty
self.ok("list --detail --path " + path)
self.contain("(no intervals)")

def test_12_unicode(self):
# Unicode paths.
self.ok("destroy /newton/asdf/qwer")
self.ok("destroy /newton/prep")
self.ok("destroy /newton/raw")
self.ok("destroy /newton/zzz")

self.ok(u"create /düsseldorf/raw uint16_6")
self.ok("list --detail")
self.contain(u"/düsseldorf/raw uint16_6")
self.contain("(no intervals)")

# Unicode metadata
self.ok(u"metadata /düsseldorf/raw --set α=beta 'γ=δ'")
self.ok(u"metadata /düsseldorf/raw --update 'α=β ε τ α'")
self.ok(u"metadata /düsseldorf/raw")
self.match(u"α=β ε τ α\nγ=δ\n")

self.ok(u"destroy /düsseldorf/raw")

def test_13_files(self):
# Test BulkData's ability to split into multiple files,
# by forcing the file size to be really small.
server_stop()
server_start(bulkdata_args = { "file_size" : 920, # 23 rows per file
"files_per_dir" : 3 })

# Fill data
self.ok("create /newton/prep float32_8")
os.environ['TZ'] = "UTC"
with open("tests/data/prep-20120323T1004-timestamped") as input:
self.ok("insert --none /newton/prep", input)

# Extract it
self.ok("extract /newton/prep --start '2000-01-01' " +
"--end '2012-03-23 10:04:01'")
lines_(self.captured, 120)
self.ok("extract /newton/prep --start '2000-01-01' " +
"--end '2022-03-23 10:04:01'")
lines_(self.captured, 14400)

# Make sure there were lots of files generated in the database
# dir
nfiles = 0
for (dirpath, dirnames, filenames) in os.walk(testdb):
nfiles += len(filenames)
assert(nfiles > 500)

# Make sure we can restart the server with a different file
# size and have it still work
server_stop()
server_start()
self.ok("extract /newton/prep --start '2000-01-01' " +
"--end '2022-03-23 10:04:01'")
lines_(self.captured, 14400)

# Now recreate the data one more time and make sure there are
# fewer files.
self.ok("destroy /newton/prep")
self.fail("destroy /newton/prep") # already destroyed
self.ok("create /newton/prep float32_8")
os.environ['TZ'] = "UTC"
with open("tests/data/prep-20120323T1004-timestamped") as input:
self.ok("insert --none /newton/prep", input)
nfiles = 0
for (dirpath, dirnames, filenames) in os.walk(testdb):
nfiles += len(filenames)
lt_(nfiles, 50)
self.ok("destroy /newton/prep") # destroy again

def test_14_remove_files(self):
|
||||
# Test BulkData's ability to remove when data is split into
|
||||
# multiple files. Should be a fairly comprehensive test of
|
||||
# remove functionality.
|
||||
server_stop()
|
||||
server_start(bulkdata_args = { "file_size" : 920, # 23 rows per file
|
||||
"files_per_dir" : 3 })
|
||||
|
||||
# Insert data. Just for fun, insert out of order
|
||||
self.ok("create /newton/prep PrepData")
|
||||
os.environ['TZ'] = "UTC"
|
||||
self.ok("insert --rate 120 /newton/prep "
|
||||
"tests/data/prep-20120323T1002 "
|
||||
"tests/data/prep-20120323T1000")
|
||||
|
||||
# Should take up about 2.8 MB here (including directory entries)
|
||||
du_before = nilmdb.utils.diskusage.du_bytes(testdb)
|
||||
|
||||
# Make sure we have the data we expect
|
||||
self.ok("list --detail")
|
||||
self.match("/newton/prep PrepData\n" +
|
||||
" [ Fri, 23 Mar 2012 10:00:00.000000 +0000"
|
||||
" -> Fri, 23 Mar 2012 10:01:59.991668 +0000 ]\n"
|
||||
" [ Fri, 23 Mar 2012 10:02:00.000000 +0000"
|
||||
" -> Fri, 23 Mar 2012 10:03:59.991668 +0000 ]\n")
|
||||
|
||||
# Remove various chunks of prep data and make sure
|
||||
# they're gone.
|
||||
self.ok("extract -c /newton/prep --start 2000-01-01 --end 2020-01-01")
|
||||
self.match("28800\n")
|
||||
|
||||
self.ok("remove -c /newton/prep " +
|
||||
"--start '23 Mar 2012 10:00:30' " +
|
||||
"--end '23 Mar 2012 10:03:30'")
|
||||
self.match("21600\n")
|
||||
|
||||
self.ok("remove -c /newton/prep " +
|
||||
"--start '23 Mar 2012 10:00:10' " +
|
||||
"--end '23 Mar 2012 10:00:20'")
|
||||
self.match("1200\n")
|
||||
|
||||
self.ok("remove -c /newton/prep " +
|
||||
"--start '23 Mar 2012 10:00:05' " +
|
||||
"--end '23 Mar 2012 10:00:25'")
|
||||
self.match("1200\n")
|
||||
|
||||
self.ok("remove -c /newton/prep " +
|
||||
"--start '23 Mar 2012 10:03:50' " +
|
||||
"--end '23 Mar 2012 10:06:50'")
|
||||
self.match("1200\n")
|
||||
|
||||
self.ok("extract -c /newton/prep --start 2000-01-01 --end 2020-01-01")
|
||||
self.match("3600\n")
|
||||
|
||||
# See the missing chunks in list output
|
||||
self.ok("list --detail")
|
||||
self.match("/newton/prep PrepData\n" +
|
||||
" [ Fri, 23 Mar 2012 10:00:00.000000 +0000"
|
||||
" -> Fri, 23 Mar 2012 10:00:05.000000 +0000 ]\n"
|
||||
" [ Fri, 23 Mar 2012 10:00:25.000000 +0000"
|
||||
" -> Fri, 23 Mar 2012 10:00:30.000000 +0000 ]\n"
|
||||
" [ Fri, 23 Mar 2012 10:03:30.000000 +0000"
|
||||
" -> Fri, 23 Mar 2012 10:03:50.000000 +0000 ]\n")
|
||||
|
||||
# We have 1/8 of the data that we had before, so the file size
|
||||
# should have dropped below 1/4 of what it used to be
|
||||
du_after = nilmdb.utils.diskusage.du_bytes(testdb)
|
||||
lt_(du_after, (du_before / 4))
|
||||
|
||||
# Remove anything that came from the 10:02 data file
|
||||
self.ok("remove /newton/prep " +
|
||||
"--start '23 Mar 2012 10:02:00' --end '2020-01-01'")
|
||||
|
||||
# Re-insert 19 lines from that file, then remove them again.
|
||||
# With the specific file_size above, this will cause the last
|
||||
# file in the bulk data storage to be exactly file_size large,
|
||||
# so removing the data should also remove that last file.
|
||||
self.ok("insert --rate 120 /newton/prep " +
|
||||
"tests/data/prep-20120323T1002-first19lines")
|
||||
self.ok("remove /newton/prep " +
|
||||
"--start '23 Mar 2012 10:02:00' --end '2020-01-01'")
|
||||
|
||||
# Shut down and restart server, to force nrows to get refreshed.
|
||||
server_stop()
|
||||
server_start()
|
||||
|
||||
# Re-add the full 10:02 data file. This tests adding new data once
|
||||
# we removed data near the end.
|
||||
self.ok("insert --rate 120 /newton/prep tests/data/prep-20120323T1002")
|
||||
|
||||
# See if we can extract it all
|
||||
self.ok("extract /newton/prep --start 2000-01-01 --end 2020-01-01")
|
||||
lines_(self.captured, 15600)
|
||||
|
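Aside (not part of the diff): the "23 rows per file" figure used by
test_13_files and test_14_remove_files above follows from simple arithmetic;
a minimal sanity check, assuming each float32_8 row is stored as an 8-byte
timestamp plus eight 4-byte floats (the exact on-disk layout is an assumption
here):

    # hypothetical check, not part of the test suite
    row_size = 8 + 8 * 4          # assumed bytes per float32_8 row
    file_size = 920               # value passed via bulkdata_args above
    print file_size // row_size   # -> 23 rows per file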
@@ -2,21 +2,22 @@
 
 import nilmdb
 from nilmdb.utils.printf import *
-import datetime_tz
+from nilmdb.utils import datetime_tz
 
 from nose.tools import *
 from nose.tools import assert_raises
 import itertools
 
-from nilmdb.interval import Interval, DBInterval, IntervalSet, IntervalError
+from nilmdb.server.interval import (Interval, DBInterval,
+                                    IntervalSet, IntervalError)
 
-from test_helpers import *
+from testutil.helpers import *
 import unittest
 
 # set to False to skip live renders
 do_live_renders = False
 def render(iset, description = "", live = True):
-    import renderdot
+    import testutil.renderdot as renderdot
     r = renderdot.RBTreeRenderer(iset.tree)
     return r.render(description, live and do_live_renders)
@@ -345,14 +346,15 @@ class TestIntervalSpeed:
     def test_interval_speed(self):
         import yappi
         import time
-        import aplotter
+        import testutil.aplotter as aplotter
         import random
         import math
 
         print
         yappi.start()
         speeds = {}
-        for j in [ 2**x for x in range(5,20) ]:
+        limit = 10 # was 20
+        for j in [ 2**x for x in range(5,limit) ]:
            start = time.time()
            iset = IntervalSet()
            for i in random.sample(xrange(j),j):
@@ -7,12 +7,13 @@ from nose.tools import assert_raises
 import threading
 import time
 
-from test_helpers import *
+from testutil.helpers import *
 
 def func_with_callback(a, b, callback):
     callback(a)
     callback(b)
     callback(a+b)
     return "return value"
 
 class TestIteratorizer(object):
     def test(self):
@@ -25,22 +26,21 @@ class TestIteratorizer(object):
         eq_(self.result, "123")
 
         # Now make it an iterator
-        it = nilmdb.utils.Iteratorizer(
-            lambda x:
-            func_with_callback(1, 2, x))
-        result = ""
-        for i in it:
-            result += str(i)
-        eq_(result, "123")
-
-        # Make sure things work when an exception occurs
-        it = nilmdb.utils.Iteratorizer(
-            lambda x:
-            func_with_callback(1, "a", x))
-        result = ""
-        with assert_raises(TypeError) as e:
+        result = ""
+        f = lambda x: func_with_callback(1, 2, x)
+        with nilmdb.utils.Iteratorizer(f) as it:
+            for i in it:
+                result += str(i)
+        eq_(result, "123")
+        eq_(it.retval, "return value")
+
+        # Make sure things work when an exception occurs
+        result = ""
+        with nilmdb.utils.Iteratorizer(
+            lambda x: func_with_callback(1, "a", x)) as it:
+            with assert_raises(TypeError) as e:
+                for i in it:
+                    result += str(i)
+        eq_(result, "1a")
 
         # Now try to trigger the case where we stop iterating
@@ -48,8 +48,14 @@ class TestIteratorizer(object):
         # itself.  This doesn't have a particular result in the test,
         # but gains coverage.
         def foo():
-            it = nilmdb.utils.Iteratorizer(
-                lambda x:
-                func_with_callback(1, 2, x))
-            it.next()
+            with nilmdb.utils.Iteratorizer(f) as it:
+                it.next()
         foo()
+        eq_(it.retval, None)
+
+        # Do the same thing when the curl hack is applied
+        def foo():
+            with nilmdb.utils.Iteratorizer(f, curl_hack = True) as it:
+                it.next()
+        foo()
+        eq_(it.retval, None)
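Aside (not part of the diff): a minimal sketch of the usage pattern the
updated tests above exercise.  Iteratorizer adapts a callback-style function
into an iterator and, per the tests, works as a context manager that exposes
the wrapped function's return value as .retval after iteration; the producer
function here is hypothetical:

    import nilmdb.utils

    def produce(callback):          # made-up callback-style producer
        callback(1)
        callback(2)
        return "done"

    result = []
    with nilmdb.utils.Iteratorizer(produce) as it:
        for x in it:                # values passed to callback() come out here
            result.append(x)
    print result, it.retval         # -> [1, 2] done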
@@ -20,19 +20,19 @@ import cStringIO
 import random
 import unittest
 
-from test_helpers import *
+from testutil.helpers import *
 
-from nilmdb.layout import *
+from nilmdb.server.layout import *
 
 class TestLayouts(object):
     # Some nilmdb.layout tests.  Not complete, just fills in missing
     # coverage.
     def test_layouts(self):
-        x = nilmdb.layout.get_named("PrepData")
-        y = nilmdb.layout.get_named("float32_8")
+        x = nilmdb.server.layout.get_named("PrepData")
+        y = nilmdb.server.layout.get_named("float32_8")
         eq_(x.count, y.count)
         eq_(x.datatype, y.datatype)
-        y = nilmdb.layout.get_named("float32_7")
+        y = nilmdb.server.layout.get_named("float32_7")
         ne_(x.count, y.count)
         eq_(x.datatype, y.datatype)
@@ -89,11 +89,23 @@ class TestLayouts(object):
         # non-monotonic
         parser = Parser(name_raw)
         data = ( "1234567890.100000 1 2 3 4 5 6\n" +
-                 "1234567890.000000 1 2 3 4 5 6\n" )
+                 "1234567890.099999 1 2 3 4 5 6\n" )
         with assert_raises(ParserError) as e:
             parser.parse(data)
         in_("not monotonically increasing", str(e.exception))
 
+        parser = Parser(name_raw)
+        data = ( "1234567890.100000 1 2 3 4 5 6\n" +
+                 "1234567890.100000 1 2 3 4 5 6\n" )
+        with assert_raises(ParserError) as e:
+            parser.parse(data)
+        in_("not monotonically increasing", str(e.exception))
+
+        parser = Parser(name_raw)
+        data = ( "1234567890.100000 1 2 3 4 5 6\n" +
+                 "1234567890.100001 1 2 3 4 5 6\n" )
+        parser.parse(data)
+
         # RawData with values out of bounds
         parser = Parser(name_raw)
         data = ( "1234567890.000000 1 2 3 4 500000 6\n" +
@@ -6,8 +6,9 @@ from nose.tools import *
 from nose.tools import assert_raises
 import threading
 import time
+import inspect
 
-from test_helpers import *
+from testutil.helpers import *
 
 @nilmdb.utils.lru_cache(size = 3)
 def foo1(n):
@@ -24,30 +25,59 @@ foo3d.destructed = []
 def foo3(n):
     return n
 
+class Foo:
+    def __init__(self):
+        self.calls = 0
+    @nilmdb.utils.lru_cache(size = 3, keys = slice(1, 2))
+    def foo(self, n, **kwargs):
+        self.calls += 1
+
 class TestLRUCache(object):
     def test(self):
 
         [ foo1(n) for n in [ 1, 2, 3, 1, 2, 3, 1, 2, 3 ] ]
-        eq_((foo1.cache_hits, foo1.cache_misses), (6, 3))
+        eq_(foo1.cache_info(), (6, 3))
         [ foo1(n) for n in [ 1, 2, 3, 1, 2, 3, 1, 2, 3 ] ]
-        eq_((foo1.cache_hits, foo1.cache_misses), (15, 3))
+        eq_(foo1.cache_info(), (15, 3))
         [ foo1(n) for n in [ 4, 2, 1, 1, 4 ] ]
-        eq_((foo1.cache_hits, foo1.cache_misses), (18, 5))
+        eq_(foo1.cache_info(), (18, 5))
 
         [ foo2(n) for n in [ 1, 2, 3, 1, 2, 3, 1, 2, 3 ] ]
-        eq_((foo2.cache_hits, foo2.cache_misses), (6, 3))
+        eq_(foo2.cache_info(), (6, 3))
         [ foo2(n) for n in [ 1, 2, 3, 1, 2, 3, 1, 2, 3 ] ]
-        eq_((foo2.cache_hits, foo2.cache_misses), (15, 3))
+        eq_(foo2.cache_info(), (15, 3))
         [ foo2(n) for n in [ 4, 2, 1, 1, 4 ] ]
-        eq_((foo2.cache_hits, foo2.cache_misses), (19, 4))
+        eq_(foo2.cache_info(), (19, 4))
 
         [ foo3(n) for n in [ 1, 2, 3, 1, 2, 3, 1, 2, 3 ] ]
-        eq_((foo3.cache_hits, foo3.cache_misses), (6, 3))
+        eq_(foo3.cache_info(), (6, 3))
         [ foo3(n) for n in [ 1, 2, 3, 1, 2, 3, 1, 2, 3 ] ]
-        eq_((foo3.cache_hits, foo3.cache_misses), (15, 3))
+        eq_(foo3.cache_info(), (15, 3))
         [ foo3(n) for n in [ 4, 2, 1, 1, 4 ] ]
-        eq_((foo3.cache_hits, foo3.cache_misses), (18, 5))
+        eq_(foo3.cache_info(), (18, 5))
         eq_(foo3d.destructed, [1, 3])
+        with assert_raises(KeyError):
+            foo3.cache_remove(1,2,3)
+        foo3.cache_remove(1)
+        eq_(foo3d.destructed, [1, 3, 1])
+        foo3.cache_remove_all()
+        eq_(foo3d.destructed, [1, 3, 1, 2, 4 ])
+
+        foo = Foo()
+        foo.foo(5)
+        foo.foo(6)
+        foo.foo(7)
+        foo.foo(5)
+        eq_(foo.calls, 3)
+
+        # Can't handle keyword arguments right now
+        with assert_raises(NotImplementedError):
+            foo.foo(3, asdf = 7)
+
+        # Verify that argspecs were maintained
+        eq_(inspect.getargspec(foo1),
+            inspect.ArgSpec(args=['n'],
+                            varargs=None, keywords=None, defaults=None))
+        eq_(inspect.getargspec(foo.foo),
+            inspect.ArgSpec(args=['self', 'n'],
+                            varargs=None, keywords="kwargs", defaults=None))
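Aside (not part of the diff): a hedged reading of the keys = slice(1, 2)
argument exercised above: the cache key appears to be built only from the
positional arguments selected by the slice, so for the bound method
Foo.foo(self, n) the key is just n and `self` (argument 0) is ignored.
That is what the test's expectation encodes:

    foo = Foo()          # Foo as defined in the diff above
    foo.foo(5)           # miss
    foo.foo(6)           # miss
    foo.foo(7)           # miss
    foo.foo(5)           # hit: same n, so the cached result is reused
    eq_(foo.calls, 3)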
@@ -5,15 +5,29 @@ import nose
 from nose.tools import *
 from nose.tools import assert_raises
 
-from test_helpers import *
+from testutil.helpers import *
 
 import sys
 import cStringIO
 import gc
+import inspect
 
 err = cStringIO.StringIO()
 
 @nilmdb.utils.must_close(errorfile = err)
 class Foo:
+    def __init__(self, arg):
+        fprintf(err, "Init %s\n", arg)
+
     def __del__(self):
         fprintf(err, "Deleting\n")
 
     def close(self):
         fprintf(err, "Closing\n")
+
+@nilmdb.utils.must_close(errorfile = err, wrap_verify = True)
+class Bar:
+    def __init__(self):
+        fprintf(err, "Init\n")
@@ -23,8 +37,11 @@ class Foo:
     def close(self):
         fprintf(err, "Closing\n")
 
+    def blah(self, arg):
+        fprintf(err, "Blah %s\n", arg)
+
 @nilmdb.utils.must_close(errorfile = err)
-class Bar:
+class Baz:
     pass
 
 class TestMustClose(object):
@@ -34,26 +51,60 @@ class TestMustClose(object):
         # garbage collect the object (and call its __del__ function)
         # right after a "del x".
 
-        x = Foo()
+        # Trigger error
+        err.truncate()
+        x = Foo("hi")
+        # Verify that the arg spec was maintained
+        eq_(inspect.getargspec(x.__init__),
+            inspect.ArgSpec(args = ['self', 'arg'],
+                            varargs = None, keywords = None, defaults = None))
         del x
         gc.collect()
         eq_(err.getvalue(),
-            "Init\n"
+            "Init hi\n"
             "error: Foo.close() wasn't called!\n"
             "Deleting\n")
 
         # No error
         err.truncate(0)
-        y = Foo()
+        y = Foo("bye")
         y.close()
         del y
         gc.collect()
         eq_(err.getvalue(),
-            "Init\n"
+            "Init bye\n"
            "Closing\n"
            "Deleting\n")
 
+        # Verify function calls when wrap_verify is True
+        err.truncate(0)
+
+        z = Bar()
+        eq_(inspect.getargspec(z.blah),
+            inspect.ArgSpec(args = ['self', 'arg'],
+                            varargs = None, keywords = None, defaults = None))
+        z.blah("boo")
+        z.close()
+        with assert_raises(AssertionError) as e:
+            z.blah("hello")
+        in_("called <function blah at 0x", str(e.exception))
+        in_("> after close", str(e.exception))
+        # Since the most recent assertion references 'z',
+        # we need to raise another assertion here so that
+        # 'z' will get properly deleted.
+        with assert_raises(AssertionError):
+            raise AssertionError()
+        del z
+        gc.collect()
+        eq_(err.getvalue(),
+            "Init\n"
+            "Blah boo\n"
+            "Closing\n"
+            "Deleting\n")
+
         # Class with missing methods
         err.truncate(0)
+        w = Baz()
         w.close()
         del w
         eq_(err.getvalue(), "")
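Aside (not part of the diff): the contract these tests pin down, informally:
a class decorated with nilmdb.utils.must_close() logs an error to the given
errorfile when an instance is garbage-collected without close() having been
called, and with wrap_verify = True any method call after close() raises
AssertionError.  A minimal sketch under those assumptions:

    import sys
    import nilmdb.utils

    @nilmdb.utils.must_close(errorfile = sys.stderr)
    class Resource:                  # hypothetical example class
        def close(self):
            pass

    r = Resource()
    del r    # no close() first -> "close() wasn't called!" on stderr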
@@ -22,7 +22,7 @@ testdb = "tests/testdb"
 #def cleanup():
 #    os.unlink(testdb)
 
-from test_helpers import *
+from testutil.helpers import *
 
 class Test00Nilmdb(object):  # named 00 so it runs first
     def test_NilmDB(self):
@@ -113,7 +113,8 @@ class TestBlockingServer(object):
             self.server.start(blocking = True, event = event)
         thread = threading.Thread(target = run_server)
         thread.start()
-        event.wait(timeout = 2)
+        if not event.wait(timeout = 10):
+            raise AssertionError("server didn't start in 10 seconds")
 
         # Send request to exit.
         req = urlopen("http://127.0.0.1:12380/exit/", timeout = 1)
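Aside (not part of the diff): the timeout change above relies on the fact
that, from Python 2.7 on, threading.Event.wait(timeout) returns True if the
event was set and False if the timeout expired, so startup can be checked
directly instead of sleeping a fixed two seconds:

    import threading

    event = threading.Event()
    # ... hand `event` to the server thread, which calls event.set() ...
    if not event.wait(timeout = 10):
        raise AssertionError("server didn't start in 10 seconds")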
@@ -6,7 +6,7 @@ from nose.tools import assert_raises
 from cStringIO import StringIO
 import sys
 
-from test_helpers import *
+from testutil.helpers import *
 
 class TestPrintf(object):
     def test_printf(self):
@@ -6,15 +6,15 @@ from nilmdb.utils.printf import *
 from nose.tools import *
 from nose.tools import assert_raises
 
-from nilmdb.rbtree import RBTree, RBNode
+from nilmdb.server.rbtree import RBTree, RBNode
 
-from test_helpers import *
+from testutil.helpers import *
 import unittest
 
 # set to False to skip live renders
 do_live_renders = False
 def render(tree, description = "", live = True):
-    import renderdot
+    import testutil.renderdot as renderdot
     r = renderdot.RBTreeRenderer(tree)
     return r.render(description, live and do_live_renders)
@@ -7,7 +7,7 @@ from nose.tools import assert_raises
 import threading
 import time
 
-from test_helpers import *
+from testutil.helpers import *
 
 #raise nose.exc.SkipTest("Skip these")
@@ -1,7 +1,6 @@
 import nilmdb
 from nilmdb.utils.printf import *
 
-import datetime_tz
+from nilmdb.utils import datetime_tz
 
 from nose.tools import *
 from nose.tools import assert_raises
@@ -9,7 +8,9 @@ import os
 import sys
 import cStringIO
 
-from test_helpers import *
+from testutil.helpers import *
 
+from nilmdb.utils import timestamper
 
 class TestTimestamper(object):
 
@@ -27,20 +28,20 @@ class TestTimestamper(object):
 
         # full
         input = cStringIO.StringIO(join(lines_in))
-        ts = nilmdb.timestamper.TimestamperRate(input, start, 8000)
+        ts = timestamper.TimestamperRate(input, start, 8000)
         foo = ts.readlines()
         eq_(foo, join(lines_out))
         in_("TimestamperRate(..., start=", str(ts))
 
         # first 30 or so bytes means the first 2 lines
         input = cStringIO.StringIO(join(lines_in))
-        ts = nilmdb.timestamper.TimestamperRate(input, start, 8000)
+        ts = timestamper.TimestamperRate(input, start, 8000)
         foo = ts.readlines(30)
         eq_(foo, join(lines_out[0:2]))
 
         # stop iteration early
         input = cStringIO.StringIO(join(lines_in))
-        ts = nilmdb.timestamper.TimestamperRate(input, start, 8000,
-                                                1332561600.000200)
+        ts = timestamper.TimestamperRate(input, start, 8000,
+                                         1332561600.000200)
         foo = ""
         for line in ts:
@@ -49,21 +50,21 @@ class TestTimestamper(object):
 
         # stop iteration early (readlines)
         input = cStringIO.StringIO(join(lines_in))
-        ts = nilmdb.timestamper.TimestamperRate(input, start, 8000,
-                                                1332561600.000200)
+        ts = timestamper.TimestamperRate(input, start, 8000,
+                                         1332561600.000200)
         foo = ts.readlines()
         eq_(foo, join(lines_out[0:2]))
 
         # stop iteration really early
         input = cStringIO.StringIO(join(lines_in))
-        ts = nilmdb.timestamper.TimestamperRate(input, start, 8000,
-                                                1332561600.000000)
+        ts = timestamper.TimestamperRate(input, start, 8000,
+                                         1332561600.000000)
         foo = ts.readlines()
         eq_(foo, "")
 
         # use iterator
         input = cStringIO.StringIO(join(lines_in))
-        ts = nilmdb.timestamper.TimestamperRate(input, start, 8000)
+        ts = timestamper.TimestamperRate(input, start, 8000)
         foo = ""
         for line in ts:
             foo += line
@@ -71,21 +72,21 @@ class TestTimestamper(object):
 
         # check that TimestamperNow gives similar result
         input = cStringIO.StringIO(join(lines_in))
-        ts = nilmdb.timestamper.TimestamperNow(input)
+        ts = timestamper.TimestamperNow(input)
         foo = ts.readlines()
         ne_(foo, join(lines_out))
         eq_(len(foo), len(join(lines_out)))
         eq_(str(ts), "TimestamperNow(...)")
 
         # Test passing a file (should be empty)
-        ts = nilmdb.timestamper.TimestamperNow("/dev/null")
+        ts = timestamper.TimestamperNow("/dev/null")
         for line in ts:
             raise AssertionError
         ts.close()
 
         # Test the null timestamper
         input = cStringIO.StringIO(join(lines_out))  # note: lines_out
-        ts = nilmdb.timestamper.TimestamperNull(input)
+        ts = timestamper.TimestamperNull(input)
         foo = ts.readlines()
         eq_(foo, join(lines_out))
         eq_(str(ts), "TimestamperNull(...)")
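Aside (not part of the diff): the expected line counts in these tests follow
from rate-based timestamping; assuming row k of the input is stamped
start + k/rate, then with start = 1332561600 and rate = 8000:

    start, rate = 1332561600.0, 8000
    for k in range(3):
        print "%.6f" % (start + k / float(rate))
    # -> 1332561600.000000, 1332561600.000125, 1332561600.000250
    # so an end of 1332561600.000200 passes exactly the first two rows,
    # and an end of 1332561600.000000 passes none.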
1 tests/testutil/__init__.py Normal file
@@ -0,0 +1 @@
+# empty
@@ -12,6 +12,10 @@ def eq_(a, b):
     if not a == b:
         raise AssertionError("%s != %s" % (myrepr(a), myrepr(b)))
 
+def lt_(a, b):
+    if not a < b:
+        raise AssertionError("%s is not less than %s" % (myrepr(a), myrepr(b)))
+
 def in_(a, b):
     if a not in b:
         raise AssertionError("%s not in %s" % (myrepr(a), myrepr(b)))
@@ -23,6 +27,8 @@ def ne_(a, b):
 def lines_(a, n):
     l = a.count('\n')
     if not l == n:
+        if len(a) > 5000:
+            a = a[0:5000] + " ... truncated"
         raise AssertionError("wanted %d lines, got %d in output: '%s'"
                              % (n, l, a))
@@ -1,54 +0,0 @@
-nosetests
-
-32: 386 μs (12.0625 μs each)
-64: 672.102 μs (10.5016 μs each)
-128: 1510.86 μs (11.8036 μs each)
-256: 2782.11 μs (10.8676 μs each)
-512: 5591.87 μs (10.9216 μs each)
-1024: 12812.1 μs (12.5119 μs each)
-2048: 21835.1 μs (10.6617 μs each)
-4096: 46059.1 μs (11.2449 μs each)
-8192: 114127 μs (13.9315 μs each)
-16384: 181217 μs (11.0606 μs each)
-32768: 419649 μs (12.8067 μs each)
-65536: 804320 μs (12.2729 μs each)
-131072: 1.73534e+06 μs (13.2396 μs each)
-262144: 3.74451e+06 μs (14.2842 μs each)
-524288: 8.8694e+06 μs (16.917 μs each)
-1048576: 1.69993e+07 μs (16.2118 μs each)
-2097152: 3.29387e+07 μs (15.7064 μs each)
-
-+3.29387e+07 *
-| ----
-| -----
-| ----
-| -----
-| -----
-| ----
-| -----
-| -----
-| ----
-| -----
-| ----
-| -----
-| ---
-| ---
-| ---
-| -------
----+386---------------------------------------------------------------------+---
-+32 +2.09715e+06
-
-name                                                         #n       tsub      ttot      tavg
-..vl/lees/bucket/nilm/nilmdb/nilmdb/interval.py.__iadd__:184 4194272  10.025323 30.262723 0.000007
-..evl/lees/bucket/nilm/nilmdb/nilmdb/interval.py.__init__:27 4194272  24.715377 24.715377 0.000006
-../lees/bucket/nilm/nilmdb/nilmdb/interval.py.intersects:239 4194272  6.705053  12.577620 0.000003
-..im/devl/lees/bucket/nilm/nilmdb/tests/aplotter.py.plot:404 1        0.000048  0.001412  0.001412
-../lees/bucket/nilm/nilmdb/tests/aplotter.py.plot_double:311 1        0.000106  0.001346  0.001346
-..vl/lees/bucket/nilm/nilmdb/tests/aplotter.py.plot_data:201 1        0.000098  0.000672  0.000672
-..vl/lees/bucket/nilm/nilmdb/tests/aplotter.py.plot_line:241 16       0.000298  0.000496  0.000031
-..jim/devl/lees/bucket/nilm/nilmdb/nilmdb/printf.py.printf:4 17       0.000252  0.000334  0.000020
-..vl/lees/bucket/nilm/nilmdb/tests/aplotter.py.transposed:39 1        0.000229  0.000235  0.000235
-..vl/lees/bucket/nilm/nilmdb/tests/aplotter.py.y_reversed:45 1        0.000151  0.000174  0.000174
-
-name           tid            fname                                    ttot      scnt
-_MainThread    47269783682784 ..b/python2.7/threading.py.setprofile:88 64.746000 1
22 timeit.sh
@@ -1,22 +0,0 @@
-./nilmtool.py destroy /bpnilm/2/raw
-./nilmtool.py create /bpnilm/2/raw RawData
-
-if false; then
-    time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s 20110513-110000 -r 8000 /bpnilm/2/raw
-    time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s 20110513-120001 -r 8000 /bpnilm/2/raw
-else
-    # 170 hours, about 98 gigs uncompressed:
-    for i in $(seq 2000 2016); do
-        time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-010001 -r 8000 /bpnilm/2/raw
-        time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-020002 -r 8000 /bpnilm/2/raw
-        time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-030003 -r 8000 /bpnilm/2/raw
-        time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-040004 -r 8000 /bpnilm/2/raw
-        time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-050005 -r 8000 /bpnilm/2/raw
-        time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-060006 -r 8000 /bpnilm/2/raw
-        time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-070007 -r 8000 /bpnilm/2/raw
-        time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-080008 -r 8000 /bpnilm/2/raw
-        time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-090009 -r 8000 /bpnilm/2/raw
-        time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-100010 -r 8000 /bpnilm/2/raw
-    done
-fi