Compare commits


73 Commits

SHA1 Message Date
85be497edb Fix README 2013-01-21 17:30:01 -05:00
bd1b7107af Update TODO, clean up bulkdata error message 2013-01-21 11:43:28 -05:00
b8275f108d Make error message more helpful 2013-01-18 17:27:57 -05:00
2820ff9758 More fixes to mustclose decorator and argspecs 2013-01-18 17:21:30 -05:00
a015de893d Cleanup 2013-01-18 17:14:26 -05:00
b7f746e66d Fix lrucache decorator argspecs 2013-01-18 17:13:50 -05:00
40cf4941f0 Test that argspecs are maintained in lrucache 2013-01-18 17:01:46 -05:00
8a418ceb3e Fix issue where mustclose decorator doesn't maintain argspec 2013-01-18 16:57:15 -05:00
0312b6eb07 Test for issue where mustclose decorator didn't maintain argspec 2013-01-18 16:55:51 -05:00
077f197d24 Fix server returning 500 for bad HTTP parameters 2013-01-18 16:54:49 -05:00
62354b4dce Add test for bad-parameters-give-500-error 2013-01-17 19:58:48 -05:00
5970cd85cf Disable "ie-friendly" error message padding in CherryPy 2013-01-16 17:57:45 -05:00
4f6a742e6c Fix test failure 2013-01-16 17:31:31 -05:00
87b43e5d04 Command line errors cleaned up and made more consistent 2013-01-16 16:52:43 -05:00
f0c2a64ae3 Update doc formatting, .gitignore 2013-01-09 23:36:23 -05:00
e5d3deb6fe Removal support is complete.
`nrows` may change if you restart the server; documented why this is
the case in the design.md file.  It's not a problem.
2013-01-09 23:26:59 -05:00
d321058b48 Add basic versioning to bulkdata table format file. 2013-01-09 19:26:24 -05:00
cea83140c0 More work towards correctly removing rows. 2013-01-09 19:25:45 -05:00
7807d6caf0 Progress and tests for bulkdata.remove
Passes tests, but doesn't really handle nrows (and removing partially
full files) correctly, when deleting near the end of the data.
2013-01-09 17:39:29 -05:00
3d0fad3c2a Move some helper functions around 2013-01-09 17:39:29 -05:00
fe3b087435 Remove implemented in nilmdb; still needs bulkdata changes. 2013-01-08 21:07:52 -05:00
bcefe52298 nilmdb: Bring out range manipulating SQL so we can reuse it 2013-01-08 18:45:03 -05:00
f88c148ccc Interval removal work in progress. Needs NilmDB and BulkData work. 2013-01-08 18:37:01 -05:00
4a47b1d04a remove support: command line, client 2013-01-06 20:13:57 -05:00
80da937cb7 cmdline: return error when start > end (extract, list, remove) 2013-01-06 20:13:28 -05:00
c81972e66e Minor testsuite and commandline fixes.
Now supports "list /foo/bar" in addition to the older "list --path /foo/bar"
2013-01-06 19:25:07 -05:00
b09362fde1 Full coverage of nilmdb.utils.mustclose 2013-01-05 18:02:53 -05:00
b7688844fa Add a Nosetests plugin that lets me specify a test order within a directory. 2013-01-05 18:02:37 -05:00
3d212e7592 Move test helpers into subdirectory 2013-01-05 15:00:34 -05:00
7aedfdf9c3 Add lower level bulkdata test 2013-01-05 14:55:22 -05:00
ebd4f74959 Remove "pragma: no cover" from things that should get tested 2013-01-05 14:52:06 -05:00
ebe2fbab92 Add wrap_verify option to nilmdb.utils.must_close decorator 2013-01-05 14:51:41 -05:00
4831a0cae1 Small doc updates 2013-01-04 17:27:04 -05:00
07192c6ffb nilmdb.BulkData: Switch to nested subdir/filename layout
Use numbered subdirectories to avoid having too many files in one dir.
Add appropriate tests.

Also fix an issue where the mmap_open LRU cache could inappropriately
open a file twice because it was using the optional "newsize"
parameter as a key -- now lrucache can be given a slice object that
describes which arguments are important.
2013-01-04 16:51:05 -05:00
09d325e8ab Rename format -> _format in data dirs 2013-01-03 20:46:15 -05:00
11b0293d5f Clean up BulkData file size calculations, test more thoroughly
Now the goal is 128 MiB files, rather than a specific length.
2013-01-03 20:19:01 -05:00
493bbed82c More coverage and tests 2013-01-03 19:21:12 -05:00
3bc25daaab Trim urllib to get full coverage of the features in use 2013-01-03 17:10:07 -05:00
40a3bc4bc3 Update README with Python 2.7 requirement 2013-01-03 17:09:51 -05:00
c083d63c96 Tests for Unicode compliance 2013-01-03 17:03:52 -05:00
0221e3ea21 Update commandline test helpers to better handle Unicode
We replace cStringIO with StringIO subclass that forces UTF-8
encoding, and explicitly convert commandlines to UTF-8 before
shlex.  These changes will only affect tests, not normal commandline
operation.
2013-01-03 17:03:52 -05:00
f5fd2b064e Replace urllib.encode() with a version that encodes Unicode as UTF-8 instead 2013-01-03 17:02:38 -05:00
06e91a6a98 Always use function version of print() 2013-01-03 17:02:38 -05:00
41b3f3c018 Always use UTF-8 for filenames in nilmdb.bulkdata 2013-01-03 17:02:38 -05:00
842076fef4 Cleanup server error handling with decorator 2013-01-03 17:02:38 -05:00
10d58f6a47 More test coverage 2013-01-02 00:00:05 -05:00
e2464efc12 Test everything; remove debugging 2013-01-01 23:46:54 -05:00
1beae5024e Bulkdata extract works now. 2013-01-01 23:44:52 -05:00
c7c65b6542 Work around CherryPy bug #1200; related cleanups
Spent way too long trying to track down a cryptic error that turned
out to be a CherryPy bug.  Now we catch this using a decorator in the
'extract' and 'intervals' generators that transforms exceptions that
trigger the bug into ones that do not.  fun!
2013-01-01 23:03:53 -05:00
f41ff0a6e8 Inserting bulk data is essentially done, not tested 2013-01-01 21:04:35 -05:00
389c1d189f Make option to turn off chunked encoding for debugging more clear. 2013-01-01 21:03:33 -05:00
487298986e More work towards bulkdata 2012-12-31 18:44:57 -05:00
d4cd045c48 Fix path stuff, build packer in bulkdata.Table 2012-12-31 17:22:30 -05:00
3816645313 More work on BulkData 2012-12-31 17:22:30 -05:00
83b937c720 More Pytables -> bulkdata conversion 2012-12-31 17:22:30 -05:00
b3e6e8976f More work towards flat bulk data storage.
Cleaned up OS-specific path handling in nilmdb, bulkdata.
2012-12-31 17:22:30 -05:00
c890ea93cb WIP switching away from PyTables 2012-12-31 17:22:29 -05:00
84c68c6913 Better documentation, cache Tables 2012-12-31 17:22:29 -05:00
6f1e6fe232 Isolate all PyTables stuff to a single file.
This will make migrating to my own data storage engine easier.
2012-12-31 17:22:29 -05:00
b0d76312d1 Add must_close() decorator, use it in nilmdb
Warns at runtime if a class's close() method wasn't called before the
object was destroyed.
2012-12-31 17:21:19 -05:00
19c846c71c Remove outdated files 2012-12-31 15:55:43 -05:00
f355c73209 Refactor utility classes into nilmdb.utils subdir/namespace
There's some bug with the testing harness where placing e.g.
  from du import du
in nilmdb/utils/__init__.py doesn't quite work -- sometimes the
module "du" replaces the function "du".  Not exactly sure why;
we work around that by just renaming files so they don't match
the imported names directly.
2012-12-31 15:55:36 -05:00
173014ba19 Use nilmdb.lrucache for caching interval sets 2012-12-31 14:52:46 -05:00
24d4752bc3 Add LRU cache memoizing decorator for functions 2012-12-31 14:39:16 -05:00
a85b273e2e Remove compression.
Messes up extraction, since we random access for the timestamp binary
search.  In the future, maybe switching to multiple tables (one for
timestamp, one for compressed data) would be smart.
2012-12-14 17:19:23 -05:00
7f73b4b304 Use compression in pytables 2012-12-14 17:17:52 -05:00
f3eb6d1b79 Time it! 2012-12-14 16:57:02 -05:00
9082cc9f44 Merging adjacent intervals is working now!
Adjust test expectations accordingly, since the number of intervals
they print out will now be smaller.
2012-12-12 19:25:27 -05:00
bf64a40472 Some misc test additions, interval optimizations. Still need adjacency test 2012-12-11 23:31:55 -05:00
32dbeebc09 More insertion checks. Need to get interval concatenation working. 2012-12-11 18:08:00 -05:00
66ddc79b15 Inserting works again, with proper end/start for paired blocks.
timeit.sh script works too!
2012-12-07 20:30:39 -05:00
7a8bd0bf41 Don't include layout on client side 2012-12-07 16:24:15 -05:00
ee552de740 Start reworking/fixing insert timestamps 2012-12-06 20:25:24 -05:00
59 changed files with 2366 additions and 1591 deletions

.gitignore

@@ -2,3 +2,6 @@ db/
tests/*testdb/
.coverage
*.pyc
design.html
timeit*out

Makefile

@@ -8,11 +8,14 @@ tool:
lint:
pylint -f parseable nilmdb
%.html: %.md
pandoc -s $< > $@
test:
nosetests
python runtests.py
profile:
nosetests --with-profile
python runtests.py --with-profile
clean::
find . -name '*pyc' | xargs rm -f

README

@@ -1,4 +1,3 @@
sudo apt-get install python-nose python-coverage
sudo apt-get install python-tables python-cherrypy3
sudo apt-get install python2.7 python-cherrypy3 python-decorator python-nose python-coverage
sudo apt-get install cython # 0.17.1-1 or newer

TODO

@@ -1 +1,5 @@
- Merge adjacent intervals on insert (maybe with client help?)
- Clean up error responses. Specifically I'd like to be able to add
json-formatted data to OverflowError and DB parsing errors. It
seems like subclassing cherrypy.HTTPError and overriding
set_response is the best thing to do -- it would let me get rid
of the _be_ie_unfriendly and other hacks in the server.
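
As a rough sketch of the approach this TODO proposes (subclassing cherrypy.HTTPError and overriding set_response), something like the following could work; the JSONError name and its fields are hypothetical, not part of this codebase:

    import json
    import cherrypy

    class JSONError(cherrypy.HTTPError):
        # Hypothetical HTTPError subclass that attaches a JSON body.
        def __init__(self, status, message, **extra):
            cherrypy.HTTPError.__init__(self, status, message)
            self._extra = extra

        def set_response(self):
            # Let HTTPError fill in the status line and default headers
            cherrypy.HTTPError.set_response(self)
            body = json.dumps(dict(status=self.status,
                                   message=self._message,
                                   **self._extra))
            cherrypy.response.headers['Content-Type'] = 'application/json'
            cherrypy.response.headers['Content-Length'] = len(body)
            cherrypy.response.body = [body]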

design.md

@@ -1,11 +1,12 @@
Structure
---------
nilmdb.nilmdb is the NILM database interface. It tracks a PyTables
database that holds actual rows of data, and a SQL database tracks metadata
and ranges.
nilmdb.nilmdb is the NILM database interface. A nilmdb.BulkData
interface stores data in flat files, and a SQL database tracks
metadata and ranges.
Access to the nilmdb must be single-threaded. This is handled with
the nilmdb.serializer class.
the nilmdb.serializer class. In the future this could probably
be turned into a per-path serialization.
nilmdb.server is an HTTP server that provides an interface to talk,
through the serialization layer, to the nilmdb object.
@@ -18,13 +19,13 @@ Sqlite performance
Committing a transaction in the default sync mode (PRAGMA synchronous=FULL)
takes about 125msec. sqlite3 will commit transactions at 3 times:
1: explicit con.commit()
1. explicit con.commit()
2: between a series of DML commands and non-DML commands, e.g.
2. between a series of DML commands and non-DML commands, e.g.
after a series of INSERT, SELECT, but before a CREATE TABLE or
PRAGMA.
3: at the end of an explicit transaction, e.g. "with self.con as con:"
3. at the end of an explicit transaction, e.g. "with self.con as con:"
To speed up testing, or if this transaction speed becomes an issue,
the sync=False option to NilmDB will set PRAGMA synchronous=OFF.
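
For reference, the sync=False behavior described here maps onto the standard sqlite3 module like this (a sketch; the actual option plumbing in NilmDB is not shown in this hunk):

    import sqlite3

    con = sqlite3.connect("nilmdb.sql")
    # PRAGMA synchronous=FULL (the default) makes every commit wait for
    # data to reach the disk, costing ~125 msec per transaction;
    # OFF trades that durability for speed, e.g. during testing.
    con.execute("PRAGMA synchronous=OFF")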
@@ -47,6 +48,7 @@ transfer?
everything still gets buffered. Just a tradeoff of buffer size.
Before timestamps are added:
- Raw data is about 440 kB/s (9 channels)
- Prep data is about 12.5 kB/s (1 phase)
- How do we know how much data to send?
@@ -62,6 +64,7 @@ Before timestamps are added:
- Should those numbers come from the server?
Converting from ASCII to PyTables:
- For each row getting added, we need to set attributes on a PyTables
Row object and call table.append(). This means that there isn't a
particularly efficient way of converting from ascii.
@@ -138,10 +141,19 @@ Speed
- Next slowdown target is nilmdb.layout.Parser.parse().
- Rewrote parsers using cython and sscanf
- Stats (rev 10831), with _add_interval disabled
layout.pyx.Parser.parse:128 6303 sec, 262k calls
layout.pyx.parse:63 13913 sec, 5.1g calls
numpy:records.py.fromrecords:569 7410 sec, 262k calls
- Probably OK for now.
- Probably OK for now.
- After all updates, now takes about 8.5 minutes to insert an hour of
data, constant after adding 171 hours (4.9 billion data points)
- Data set size: 98 gigs = 20 bytes per data point.
6 uint16 data + 1 uint32 timestamp = 16 bytes per point
So compression must be off -- will retry with compression forced on.
IntervalSet speed
-----------------
@@ -191,3 +203,66 @@ handlers. For compatibility:
"RawData" == "uint16_6"
"RawNotchedData" == "uint16_9"
"PrepData" == "float32_8"
BulkData design
---------------
BulkData is a custom bulk data storage system that was written to
replace PyTables. The general structure is a `data` subdirectory in
the main NilmDB directory. Within `data`, paths are created for each
created stream. These locations are called tables. For example,
tables might be located at
nilmdb/data/newton/raw/
nilmdb/data/newton/prep/
nilmdb/data/cottage/raw/
Each table contains:
- An unchanging `_format` file (Python pickle format) that describes
parameters of how the data is broken up, like files per directory,
rows per file, and the binary data format
- Hex-named subdirectories (`"%04x"`, although more than 65536 can exist)
- Hex named files within those subdirectories, like:
/nilmdb/data/newton/raw/000b/010a
The data format of these files is raw binary, interpreted by the
Python `struct` module according to the format string in the
`_format` file.
- An optional file with the same name plus a `.removed` suffix (Python
pickle format), containing a list of row ranges that have been
logically removed from the file. If these ranges cover the entire
file, the entire file is removed.
- Note that the `bulkdata.nrows` variable is calculated once in
`BulkData.__init__()`, and only ever incremented during use. Thus,
even if all data is removed, `nrows` can remain high. However, if
the server is restarted, the newly calculated `nrows` may be lower
than in a previous run due to deleted data. To be specific, this
sequence of events:
- insert data
- remove all data
- insert data
will result in having different row numbers in the database, and
differently numbered files on the filesystem, than the sequence:
- insert data
- remove all data
- restart server
- insert data
This is okay! Everything should remain consistent both in the
`BulkData` and `NilmDB`. Not attempting to readjust `nrows` during
deletion makes the code quite a bit simpler.
- Similarly, data files are never truncated shorter. Removing data
from the end of the file will not shorten it; it will only be
deleted when it has been fully filled and all of the data has been
subsequently removed.
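
To make the layout above concrete, mapping a global row number to an on-disk location is plain arithmetic (an illustrative sketch; the function name is made up, but the math follows the design described above):

    def locate(row, rows_per_file, files_per_dir, row_size):
        # Which fixed-size file holds this row, and where within it?
        filenum = row // rows_per_file
        subdir = "%04x" % (filenum // files_per_dir)
        filename = "%04x" % (filenum % files_per_dir)
        offset = (row % rows_per_file) * row_size
        return (subdir, filename, offset)

    # e.g. with 8 rows per file, 4 files per dir, 20-byte rows:
    # locate(42, 8, 4, 20) -> ('0001', '0001', 40)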


@@ -1,605 +0,0 @@
// The RedBlackEntry class is an Abstract Base Class. This means that no
// instance of the RedBlackEntry class can exist. Only classes which
// inherit from the RedBlackEntry class can exist. Furthermore any class
// which inherits from the RedBlackEntry class must define the member
// function GetKey(). The Print() member function does not have to
// be defined because a default definition exists.
//
// The GetKey() function should return an integer key for that entry.
// The key for an entry should never change otherwise bad things might occur.
class RedBlackEntry {
public:
RedBlackEntry();
virtual ~RedBlackEntry();
virtual int GetKey() const = 0;
virtual void Print() const;
};
class RedBlackTreeNode {
friend class RedBlackTree;
public:
void Print(RedBlackTreeNode*,
RedBlackTreeNode*) const;
RedBlackTreeNode();
RedBlackTreeNode(RedBlackEntry *);
RedBlackEntry * GetEntry() const;
~RedBlackTreeNode();
protected:
RedBlackEntry * storedEntry;
int key;
int red; /* if red=0 then the node is black */
RedBlackTreeNode * left;
RedBlackTreeNode * right;
RedBlackTreeNode * parent;
};
class RedBlackTree {
public:
RedBlackTree();
~RedBlackTree();
void Print() const;
RedBlackEntry * DeleteNode(RedBlackTreeNode *);
RedBlackTreeNode * Insert(RedBlackEntry *);
RedBlackTreeNode * GetPredecessorOf(RedBlackTreeNode *) const;
RedBlackTreeNode * GetSuccessorOf(RedBlackTreeNode *) const;
RedBlackTreeNode * Search(int key);
TemplateStack<RedBlackTreeNode *> * Enumerate(int low, int high) ;
void CheckAssumptions() const;
protected:
/* A sentinel is used for root and for nil. These sentinels are */
/* created when RedBlackTreeCreate is called. root->left should always */
/* point to the node which is the root of the tree. nil points to a */
/* node which should always be black but has arbitrary children and */
/* parent and no key or info. The point of using these sentinels is so */
/* that the root and nil nodes do not require special cases in the code */
RedBlackTreeNode * root;
RedBlackTreeNode * nil;
void LeftRotate(RedBlackTreeNode *);
void RightRotate(RedBlackTreeNode *);
void TreeInsertHelp(RedBlackTreeNode *);
void TreePrintHelper(RedBlackTreeNode *) const;
void FixUpMaxHigh(RedBlackTreeNode *);
void DeleteFixUp(RedBlackTreeNode *);
};
const int MIN_INT=-MAX_INT;
RedBlackTreeNode::RedBlackTreeNode(){
};
RedBlackTreeNode::RedBlackTreeNode(RedBlackEntry * newEntry)
: storedEntry (newEntry) , key(newEntry->GetKey()) {
};
RedBlackTreeNode::~RedBlackTreeNode(){
};
RedBlackEntry * RedBlackTreeNode::GetEntry() const {return storedEntry;}
RedBlackEntry::RedBlackEntry(){
};
RedBlackEntry::~RedBlackEntry(){
};
void RedBlackEntry::Print() const {
cout << "No Print Method defined. Using Default: " << GetKey() << endl;
}
RedBlackTree::RedBlackTree()
{
nil = new RedBlackTreeNode;
nil->left = nil->right = nil->parent = nil;
nil->red = 0;
nil->key = MIN_INT;
nil->storedEntry = NULL;
root = new RedBlackTreeNode;
root->parent = root->left = root->right = nil;
root->key = MAX_INT;
root->red=0;
root->storedEntry = NULL;
}
/***********************************************************************/
/* FUNCTION: LeftRotate */
/**/
/* INPUTS: the node to rotate on */
/**/
/* OUTPUT: None */
/**/
/* Modifies Input: this, x */
/**/
/* EFFECTS: Rotates as described in _Introduction_To_Algorithms by */
/* Cormen, Leiserson, Rivest (Chapter 14). Basically this */
/* makes the parent of x be to the left of x, x the parent of */
/* its parent before the rotation and fixes other pointers */
/* accordingly. */
/***********************************************************************/
void RedBlackTree::LeftRotate(RedBlackTreeNode* x) {
RedBlackTreeNode* y;
/* I originally wrote this function to use the sentinel for */
/* nil to avoid checking for nil. However this introduces a */
/* very subtle bug because sometimes this function modifies */
/* the parent pointer of nil. This can be a problem if a */
/* function which calls LeftRotate also uses the nil sentinel */
/* and expects the nil sentinel's parent pointer to be unchanged */
/* after calling this function. For example, when DeleteFixUP */
/* calls LeftRotate it expects the parent pointer of nil to be */
/* unchanged. */
y=x->right;
x->right=y->left;
if (y->left != nil) y->left->parent=x; /* used to use sentinel here */
/* and do an unconditional assignment instead of testing for nil */
y->parent=x->parent;
/* instead of checking if x->parent is the root as in the book, we */
/* count on the root sentinel to implicitly take care of this case */
if( x == x->parent->left) {
x->parent->left=y;
} else {
x->parent->right=y;
}
y->left=x;
x->parent=y;
}
/***********************************************************************/
/* FUNCTION: RightRotate */
/**/
/* INPUTS: node to rotate on */
/**/
/* OUTPUT: None */
/**/
/* Modifies Input?: this, y */
/**/
/* EFFECTS: Rotates as described in _Introduction_To_Algorithms by */
/* Cormen, Leiserson, Rivest (Chapter 14). Basically this */
/* makes the parent of x be to the left of x, x the parent of */
/* its parent before the rotation and fixes other pointers */
/* accordingly. */
/***********************************************************************/
void RedBlackTree::RightRotate(RedBlackTreeNode* y) {
RedBlackTreeNode* x;
/* I originally wrote this function to use the sentinel for */
/* nil to avoid checking for nil. However this introduces a */
/* very subtle bug because sometimes this function modifies */
/* the parent pointer of nil. This can be a problem if a */
/* function which calls LeftRotate also uses the nil sentinel */
/* and expects the nil sentinel's parent pointer to be unchanged */
/* after calling this function. For example, when DeleteFixUP */
/* calls LeftRotate it expects the parent pointer of nil to be */
/* unchanged. */
x=y->left;
y->left=x->right;
if (nil != x->right) x->right->parent=y; /*used to use sentinel here */
/* and do an unconditional assignment instead of testing for nil */
/* instead of checking if x->parent is the root as in the book, we */
/* count on the root sentinel to implicitly take care of this case */
x->parent=y->parent;
if( y == y->parent->left) {
y->parent->left=x;
} else {
y->parent->right=x;
}
x->right=y;
y->parent=x;
}
/***********************************************************************/
/* FUNCTION: TreeInsertHelp */
/**/
/* INPUTS: z is the node to insert */
/**/
/* OUTPUT: none */
/**/
/* Modifies Input: this, z */
/**/
/* EFFECTS: Inserts z into the tree as if it were a regular binary tree */
/* using the algorithm described in _Introduction_To_Algorithms_ */
/* by Cormen et al. This function is only intended to be called */
/* by the Insert function and not by the user */
/***********************************************************************/
void RedBlackTree::TreeInsertHelp(RedBlackTreeNode* z) {
/* This function should only be called by RedBlackTree::Insert */
RedBlackTreeNode* x;
RedBlackTreeNode* y;
z->left=z->right=nil;
y=root;
x=root->left;
while( x != nil) {
y=x;
if ( x->key > z->key) {
x=x->left;
} else { /* x->key <= z->key */
x=x->right;
}
}
z->parent=y;
if ( (y == root) ||
(y->key > z->key) ) {
y->left=z;
} else {
y->right=z;
}
}
/* Before calling InsertNode the node x should have its key set */
/***********************************************************************/
/* FUNCTION: InsertNode */
/**/
/* INPUTS: newEntry is the entry to insert*/
/**/
/* OUTPUT: This function returns a pointer to the newly inserted node */
/* which is guaranteed to be valid until this node is deleted. */
/* What this means is if another data structure stores this */
/* pointer then the tree does not need to be searched when this */
/* is to be deleted. */
/**/
/* Modifies Input: tree */
/**/
/* EFFECTS: Creates a node which contains the appropriate key and */
/* info pointers and inserts it into the tree. */
/***********************************************************************/
/* jim */
RedBlackTreeNode * RedBlackTree::Insert(RedBlackEntry * newEntry)
{
RedBlackTreeNode * y;
RedBlackTreeNode * x;
RedBlackTreeNode * newNode;
x = new RedBlackTreeNode(newEntry);
TreeInsertHelp(x);
newNode = x;
x->red=1;
while(x->parent->red) { /* use sentinel instead of checking for root */
if (x->parent == x->parent->parent->left) {
y=x->parent->parent->right;
if (y->red) {
x->parent->red=0;
y->red=0;
x->parent->parent->red=1;
x=x->parent->parent;
} else {
if (x == x->parent->right) {
x=x->parent;
LeftRotate(x);
}
x->parent->red=0;
x->parent->parent->red=1;
RightRotate(x->parent->parent);
}
} else { /* case for x->parent == x->parent->parent->right */
/* this part is just like the section above with */
/* left and right interchanged */
y=x->parent->parent->left;
if (y->red) {
x->parent->red=0;
y->red=0;
x->parent->parent->red=1;
x=x->parent->parent;
} else {
if (x == x->parent->left) {
x=x->parent;
RightRotate(x);
}
x->parent->red=0;
x->parent->parent->red=1;
LeftRotate(x->parent->parent);
}
}
}
root->left->red=0;
return(newNode);
}
/***********************************************************************/
/* FUNCTION: GetSuccessorOf */
/**/
/* INPUTS: x is the node we want the successor of */
/**/
/* OUTPUT: This function returns the successor of x or NULL if no */
/* successor exists. */
/**/
/* Modifies Input: none */
/**/
/* Note: uses the algorithm in _Introduction_To_Algorithms_ */
/***********************************************************************/
RedBlackTreeNode * RedBlackTree::GetSuccessorOf(RedBlackTreeNode * x) const
{
RedBlackTreeNode* y;
if (nil != (y = x->right)) { /* assignment to y is intentional */
while(y->left != nil) { /* returns the minimum of the right subtree of x */
y=y->left;
}
return(y);
} else {
y=x->parent;
while(x == y->right) { /* sentinel used instead of checking for nil */
x=y;
y=y->parent;
}
if (y == root) return(nil);
return(y);
}
}
/***********************************************************************/
/* FUNCTION: GetPredecessorOf */
/**/
/* INPUTS: x is the node to get predecessor of */
/**/
/* OUTPUT: This function returns the predecessor of x or NULL if no */
/* predecessor exists. */
/**/
/* Modifies Input: none */
/**/
/* Note: uses the algorithm in _Introduction_To_Algorithms_ */
/***********************************************************************/
RedBlackTreeNode * RedBlackTree::GetPredecessorOf(RedBlackTreeNode * x) const {
RedBlackTreeNode* y;
if (nil != (y = x->left)) { /* assignment to y is intentional */
while(y->right != nil) { /* returns the maximum of the left subtree of x */
y=y->right;
}
return(y);
} else {
y=x->parent;
while(x == y->left) {
if (y == root) return(nil);
x=y;
y=y->parent;
}
return(y);
}
}
/***********************************************************************/
/* FUNCTION: Print */
/**/
/* INPUTS: none */
/**/
/* OUTPUT: none */
/**/
/* EFFECTS: This function recursively prints the nodes of the tree */
/* inorder. */
/**/
/* Modifies Input: none */
/**/
/* Note: This function should only be called from ITTreePrint */
/***********************************************************************/
void RedBlackTreeNode::Print(RedBlackTreeNode * nil,
RedBlackTreeNode * root) const {
storedEntry->Print();
printf(", key=%i ",key);
printf(" l->key=");
if( left == nil) printf("NULL"); else printf("%i",left->key);
printf(" r->key=");
if( right == nil) printf("NULL"); else printf("%i",right->key);
printf(" p->key=");
if( parent == root) printf("NULL"); else printf("%i",parent->key);
printf(" red=%i\n",red);
}
void RedBlackTree::TreePrintHelper( RedBlackTreeNode* x) const {
if (x != nil) {
TreePrintHelper(x->left);
x->Print(nil,root);
TreePrintHelper(x->right);
}
}
/***********************************************************************/
/* FUNCTION: Print */
/**/
/* INPUTS: none */
/**/
/* OUTPUT: none */
/**/
/* EFFECT: This function recursively prints the nodes of the tree */
/* inorder. */
/**/
/* Modifies Input: none */
/**/
/***********************************************************************/
void RedBlackTree::Print() const {
TreePrintHelper(root->left);
}
/***********************************************************************/
/* FUNCTION: DeleteFixUp */
/**/
/* INPUTS: x is the child of the spliced */
/* out node in DeleteNode. */
/**/
/* OUTPUT: none */
/**/
/* EFFECT: Performs rotations and changes colors to restore red-black */
/* properties after a node is deleted */
/**/
/* Modifies Input: this, x */
/**/
/* The algorithm from this function is from _Introduction_To_Algorithms_ */
/***********************************************************************/
void RedBlackTree::DeleteFixUp(RedBlackTreeNode* x) {
RedBlackTreeNode * w;
RedBlackTreeNode * rootLeft = root->left;
while( (!x->red) && (rootLeft != x)) {
if (x == x->parent->left) {
//
w=x->parent->right;
if (w->red) {
w->red=0;
x->parent->red=1;
LeftRotate(x->parent);
w=x->parent->right;
}
if ( (!w->right->red) && (!w->left->red) ) {
w->red=1;
x=x->parent;
} else {
if (!w->right->red) {
w->left->red=0;
w->red=1;
RightRotate(w);
w=x->parent->right;
}
w->red=x->parent->red;
x->parent->red=0;
w->right->red=0;
LeftRotate(x->parent);
x=rootLeft; /* this is to exit while loop */
}
//
} else { /* the code below has left and right switched from above */
w=x->parent->left;
if (w->red) {
w->red=0;
x->parent->red=1;
RightRotate(x->parent);
w=x->parent->left;
}
if ( (!w->right->red) && (!w->left->red) ) {
w->red=1;
x=x->parent;
} else {
if (!w->left->red) {
w->right->red=0;
w->red=1;
LeftRotate(w);
w=x->parent->left;
}
w->red=x->parent->red;
x->parent->red=0;
w->left->red=0;
RightRotate(x->parent);
x=rootLeft; /* this is to exit while loop */
}
}
}
x->red=0;
}
/***********************************************************************/
/* FUNCTION: DeleteNode */
/**/
/* INPUTS: tree is the tree to delete node z from */
/**/
/* OUTPUT: returns the RedBlackEntry stored at deleted node */
/**/
/* EFFECT: Deletes z from tree but doesn't call the destructor */
/**/
/* Modifies Input: z */
/**/
/* The algorithm from this function is from _Introduction_To_Algorithms_ */
/***********************************************************************/
RedBlackEntry * RedBlackTree::DeleteNode(RedBlackTreeNode * z){
RedBlackTreeNode* y;
RedBlackTreeNode* x;
RedBlackEntry * returnValue = z->storedEntry;
y= ((z->left == nil) || (z->right == nil)) ? z : GetSuccessorOf(z);
x= (y->left == nil) ? y->right : y->left;
if (root == (x->parent = y->parent)) { /* assignment of y->p to x->p is intentional */
root->left=x;
} else {
if (y == y->parent->left) {
y->parent->left=x;
} else {
y->parent->right=x;
}
}
if (y != z) { /* y should not be nil in this case */
/* y is the node to splice out and x is its child */
y->left=z->left;
y->right=z->right;
y->parent=z->parent;
z->left->parent=z->right->parent=y;
if (z == z->parent->left) {
z->parent->left=y;
} else {
z->parent->right=y;
}
if (!(y->red)) {
y->red = z->red;
DeleteFixUp(x);
} else
y->red = z->red;
delete z;
} else {
if (!(y->red)) DeleteFixUp(x);
delete y;
}
return returnValue;
}
/***********************************************************************/
/* FUNCTION: Enumerate */
/**/
/* INPUTS: tree is the tree to look for keys between [low,high] */
/**/
/* OUTPUT: stack containing pointers to the nodes between [low,high] */
/**/
/* Modifies Input: none */
/**/
/* EFFECT: Returns a stack containing pointers to nodes containing */
keys which are in [low,high]. */
/**/
/***********************************************************************/
TemplateStack<RedBlackTreeNode *> * RedBlackTree::Enumerate(int low,
int high) {
TemplateStack<RedBlackTreeNode *> * enumResultStack =
new TemplateStack<RedBlackTreeNode *>(4);
RedBlackTreeNode* x=root->left;
RedBlackTreeNode* lastBest=NULL;
while(nil != x) {
if ( x->key > high ) {
x=x->left;
} else {
lastBest=x;
x=x->right;
}
}
while ( (lastBest) && (low <= lastBest->key) ) {
enumResultStack->Push(lastBest);
lastBest=GetPredecessorOf(lastBest);
}
return(enumResultStack);
}

nilmdb/__init__.py

@@ -3,14 +3,10 @@
from .nilmdb import NilmDB
from .server import Server
from .client import Client
from .timer import Timer
import cmdline
import pyximport; pyximport.install()
import layout
import serializer
import timestamper
import interval
import du
import cmdline

nilmdb/bulkdata.py

@@ -0,0 +1,460 @@
# Fixed record size bulk data storage
from __future__ import absolute_import
from __future__ import division
import nilmdb
from nilmdb.utils.printf import *
import os
import sys
import cPickle as pickle
import struct
import fnmatch
import mmap
import re
# Up to 256 open file descriptors at any given time.
# These variables are global so they can be used in the decorator arguments.
table_cache_size = 16
fd_cache_size = 16
@nilmdb.utils.must_close(wrap_verify = True)
class BulkData(object):
def __init__(self, basepath, **kwargs):
self.basepath = basepath
self.root = os.path.join(self.basepath, "data")
# Tuneables
if "file_size" in kwargs:
self.file_size = kwargs["file_size"]
else:
# Default to approximately 128 MiB per file
self.file_size = 128 * 1024 * 1024
if "files_per_dir" in kwargs:
self.files_per_dir = kwargs["files_per_dir"]
else:
# 32768 files per dir should work even on FAT32
self.files_per_dir = 32768
# Make root path
if not os.path.isdir(self.root):
os.mkdir(self.root)
def close(self):
self.getnode.cache_remove_all()
def _encode_filename(self, path):
# Encode all paths to UTF-8, regardless of sys.getfilesystemencoding(),
# because we want to be able to represent all code points and the user
# will never be directly exposed to filenames. We can then do path
# manipulations on the UTF-8 directly.
if isinstance(path, unicode):
return path.encode('utf-8')
return path
def create(self, unicodepath, layout_name):
"""
unicodepath: path to the data (e.g. u'/newton/prep').
Paths must contain at least two elements, e.g.:
/newton/prep
/newton/raw
/newton/upstairs/prep
/newton/upstairs/raw
layout_name: string for nilmdb.layout.get_named(), e.g. 'float32_8'
"""
path = self._encode_filename(unicodepath)
if path[0] != '/':
raise ValueError("paths must start with /")
[ group, node ] = path.rsplit("/", 1)
if group == '':
raise ValueError("invalid path; path must contain at least one "
"folder")
# Get layout, and build format string for struct module
try:
layout = nilmdb.layout.get_named(layout_name)
struct_fmt = '<d' # Little endian, double timestamp
struct_mapping = {
"int8": 'b',
"uint8": 'B',
"int16": 'h',
"uint16": 'H',
"int32": 'i',
"uint32": 'I',
"int64": 'q',
"uint64": 'Q',
"float32": 'f',
"float64": 'd',
}
for n in range(layout.count):
struct_fmt += struct_mapping[layout.datatype]
except KeyError:
raise ValueError("no such layout, or bad data types")
# Create the table. Note that we make a distinction here
# between NilmDB paths (always Unix style, split apart
# manually) and OS paths (built up with os.path.join)
# Make directories leading up to this one
elements = path.lstrip('/').split('/')
for i in range(len(elements)):
ospath = os.path.join(self.root, *elements[0:i])
if Table.exists(ospath):
raise ValueError("path is subdir of existing node")
if not os.path.isdir(ospath):
os.mkdir(ospath)
# Make the final dir
ospath = os.path.join(self.root, *elements)
if os.path.isdir(ospath):
raise ValueError("subdirs of this path already exist")
os.mkdir(ospath)
# Write format string to file
Table.create(ospath, struct_fmt, self.file_size, self.files_per_dir)
# Open and cache it
self.getnode(unicodepath)
# Success
return
def destroy(self, unicodepath):
"""Fully remove all data at a particular path. No way to undo
it! The group/path structure is removed, too."""
path = self._encode_filename(unicodepath)
# Get OS path
elements = path.lstrip('/').split('/')
ospath = os.path.join(self.root, *elements)
# Remove Table object from cache
self.getnode.cache_remove(self, unicodepath)
# Remove the contents of the target directory
if not Table.exists(ospath):
raise ValueError("nothing at that path")
for (root, dirs, files) in os.walk(ospath, topdown = False):
for name in files:
os.remove(os.path.join(root, name))
for name in dirs:
os.rmdir(os.path.join(root, name))
# Remove empty parent directories
for i in reversed(range(len(elements))):
ospath = os.path.join(self.root, *elements[0:i+1])
try:
os.rmdir(ospath)
except OSError:
break
# Cache open tables
@nilmdb.utils.lru_cache(size = table_cache_size,
onremove = lambda x: x.close())
def getnode(self, unicodepath):
"""Return a Table object corresponding to the given database
path, which must exist."""
path = self._encode_filename(unicodepath)
elements = path.lstrip('/').split('/')
ospath = os.path.join(self.root, *elements)
return Table(ospath)
@nilmdb.utils.must_close(wrap_verify = True)
class Table(object):
"""Tools to help access a single table (data at a specific OS path)."""
# See design.md for design details
# Class methods, to help keep format details in this class.
@classmethod
def exists(cls, root):
"""Return True if a table appears to exist at this OS path"""
return os.path.isfile(os.path.join(root, "_format"))
@classmethod
def create(cls, root, struct_fmt, file_size, files_per_dir):
"""Initialize a table at the given OS path.
'struct_fmt' is a Struct module format description"""
# Calculate rows per file so that each file is approximately
# file_size bytes.
packer = struct.Struct(struct_fmt)
rows_per_file = max(file_size // packer.size, 1)
format = { "rows_per_file": rows_per_file,
"files_per_dir": files_per_dir,
"struct_fmt": struct_fmt,
"version": 1 }
with open(os.path.join(root, "_format"), "wb") as f:
pickle.dump(format, f, 2)
# Normal methods
def __init__(self, root):
"""'root' is the full OS path to the directory of this table"""
self.root = root
# Load the format and build packer
with open(os.path.join(self.root, "_format"), "rb") as f:
format = pickle.load(f)
if format["version"] != 1: # pragma: no cover (just future proofing)
raise NotImplementedError("version " + str(format["version"]) +
" bulk data store not supported")
self.rows_per_file = format["rows_per_file"]
self.files_per_dir = format["files_per_dir"]
self.packer = struct.Struct(format["struct_fmt"])
self.file_size = self.packer.size * self.rows_per_file
# Find nrows
self.nrows = self._get_nrows()
def close(self):
self.mmap_open.cache_remove_all()
# Internal helpers
def _get_nrows(self):
"""Find nrows by locating the lexicographically last filename
and using its size"""
# Note that this just finds a 'nrows' that is guaranteed to be
# greater than the row number of any piece of data that
# currently exists, not necessarily all data that _ever_
# existed.
regex = re.compile("^[0-9a-f]{4,}$")
# Find the last directory. We sort and loop through all of them,
# starting with the numerically greatest, because the dirs could be
# empty if something was deleted.
subdirs = sorted(filter(regex.search, os.listdir(self.root)),
key = lambda x: int(x, 16), reverse = True)
for subdir in subdirs:
# Now find the last file in that dir
path = os.path.join(self.root, subdir)
files = filter(regex.search, os.listdir(path))
if not files: # pragma: no cover (shouldn't occur)
# Empty dir: try the next one
continue
# Find the numerical max
filename = max(files, key = lambda x: int(x, 16))
offset = os.path.getsize(os.path.join(self.root, subdir, filename))
# Convert to row number
return self._row_from_offset(subdir, filename, offset)
# No files, so no data
return 0
def _offset_from_row(self, row):
"""Return a (subdir, filename, offset, count) tuple:
subdir: subdirectory for the file
filename: the filename that contains the specified row
offset: byte offset of the specified row within the file
count: number of rows (starting at offset) that fit in the file
"""
filenum = row // self.rows_per_file
# It's OK if these format specifiers are too short; the filenames
# will just get longer but will still sort correctly.
dirname = sprintf("%04x", filenum // self.files_per_dir)
filename = sprintf("%04x", filenum % self.files_per_dir)
offset = (row % self.rows_per_file) * self.packer.size
count = self.rows_per_file - (row % self.rows_per_file)
return (dirname, filename, offset, count)
def _row_from_offset(self, subdir, filename, offset):
"""Return the row number that corresponds to the given
'subdir/filename' and byte-offset within that file."""
if (offset % self.packer.size) != 0: # pragma: no cover; shouldn't occur
raise ValueError("file offset is not a multiple of data size")
filenum = int(subdir, 16) * self.files_per_dir + int(filename, 16)
row = (filenum * self.rows_per_file) + (offset // self.packer.size)
return row
# Cache open files
@nilmdb.utils.lru_cache(size = fd_cache_size,
keys = slice(0,3), # exclude newsize
onremove = lambda x: x.close())
def mmap_open(self, subdir, filename, newsize = None):
"""Open and map a given 'subdir/filename' (relative to self.root).
Will be automatically closed when evicted from the cache.
If 'newsize' is provided, the file is truncated to the given
size before the mapping is returned. (Note that the LRU cache
on this function means the truncate will only happen if the
object isn't already cached; mmap.resize should be used too.)"""
try:
os.mkdir(os.path.join(self.root, subdir))
except OSError:
pass
f = open(os.path.join(self.root, subdir, filename), "a+", 0)
if newsize is not None:
# mmap can't map a zero-length file, so this allows the
# caller to set the filesize between file creation and
# mmap.
f.truncate(newsize)
mm = mmap.mmap(f.fileno(), 0)
return mm
def mmap_open_resize(self, subdir, filename, newsize):
"""Open and map a given 'subdir/filename' (relative to self.root).
The file is resized to the given size."""
# Pass new size to mmap_open
mm = self.mmap_open(subdir, filename, newsize)
# In case we got a cached copy, need to call mm.resize too.
mm.resize(newsize)
return mm
def append(self, data):
"""Append the data and flush it to disk.
data is a nested Python list [[row],[row],[...]]"""
remaining = len(data)
dataiter = iter(data)
while remaining:
# See how many rows we can fit into the current file, and open it
(subdir, fname, offset, count) = self._offset_from_row(self.nrows)
if count > remaining:
count = remaining
newsize = offset + count * self.packer.size
mm = self.mmap_open_resize(subdir, fname, newsize)
mm.seek(offset)
# Write the data
for i in xrange(count):
row = dataiter.next()
mm.write(self.packer.pack(*row))
remaining -= count
self.nrows += count
def __getitem__(self, key):
"""Extract data and return it. Supports simple indexing
(table[n]) and range slices (table[n:m]). Returns a nested
Python list [[row],[row],[...]]"""
# Handle simple slices
if isinstance(key, slice):
# Fall back to brute force if the slice isn't simple
if ((key.step is not None and key.step != 1) or
key.start is None or
key.stop is None or
key.start >= key.stop or
key.start < 0 or
key.stop > self.nrows):
return [ self[x] for x in xrange(*key.indices(self.nrows)) ]
ret = []
row = key.start
remaining = key.stop - key.start
while remaining:
(subdir, filename, offset, count) = self._offset_from_row(row)
if count > remaining:
count = remaining
mm = self.mmap_open(subdir, filename)
for i in xrange(count):
ret.append(list(self.packer.unpack_from(mm, offset)))
offset += self.packer.size
remaining -= count
row += count
return ret
# Handle single points
if key < 0 or key >= self.nrows:
raise IndexError("Index out of range")
(subdir, filename, offset, count) = self._offset_from_row(key)
mm = self.mmap_open(subdir, filename)
# unpack_from ignores the mmap object's current seek position
return list(self.packer.unpack_from(mm, offset))
def _remove_rows(self, subdir, filename, start, stop):
"""Helper to mark specific rows as being removed from a
file, and potentially removing or truncating the file itself."""
# Import an existing list of deleted rows for this file
datafile = os.path.join(self.root, subdir, filename)
cachefile = datafile + ".removed"
try:
with open(cachefile, "rb") as f:
ranges = pickle.load(f)
cachefile_present = True
except:
ranges = []
cachefile_present = False
# Append our new range and sort
ranges.append((start, stop))
ranges.sort()
# Merge adjacent ranges into "out"
merged = []
prev = None
for new in ranges:
if prev is None:
# No previous range, so remember this one
prev = new
elif prev[1] == new[0]:
# Previous range connected to this new one; extend prev
prev = (prev[0], new[1])
else:
# Not connected; append previous and start again
merged.append(prev)
prev = new
if prev is not None:
merged.append(prev)
# If the range covered the whole file, we can delete it now.
# Note that the last file in a table may be only partially
# full (smaller than self.rows_per_file). We purposely leave
# those files around rather than deleting them, because the
# remainder will be filled on a subsequent append(), and things
# are generally easier if we don't have to special-case that.
if (len(merged) == 1 and
merged[0][0] == 0 and merged[0][1] == self.rows_per_file):
# Close potentially open file in mmap_open LRU cache
self.mmap_open.cache_remove(self, subdir, filename)
# Delete files
os.remove(datafile)
if cachefile_present:
os.remove(cachefile)
# Try deleting subdir, too
try:
os.rmdir(os.path.join(self.root, subdir))
except:
pass
else:
# Update cache. Try to do it atomically.
nilmdb.utils.atomic.replace_file(cachefile,
pickle.dumps(merged, 2))
def remove(self, start, stop):
"""Remove specified rows [start, stop) from this table.
If a file is left empty, it is fully removed. Otherwise, a
parallel data file is used to remember which rows have been
removed, and the file is otherwise untouched."""
if start < 0 or start > stop or stop > self.nrows:
raise IndexError("Index out of range")
row = start
remaining = stop - start
while remaining:
# Loop through each file that we need to touch
(subdir, filename, offset, count) = self._offset_from_row(row)
if count > remaining:
count = remaining
row_offset = offset // self.packer.size
# Mark the rows as being removed
self._remove_rows(subdir, filename, row_offset, row_offset + count)
remaining -= count
row += count
class TimestampOnlyTable(object):
"""Helper that lets us pass a Tables object into bisect, by
returning only the timestamp when a particular row is requested."""
def __init__(self, table):
self.table = table
def __getitem__(self, index):
return self.table[index][0]
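
Taken together, the new module might be exercised roughly like this (a sketch, assuming the rest of the nilmdb package is importable and the base directory already exists; the path and layout are just examples):

    import bisect
    import nilmdb.bulkdata

    data = nilmdb.bulkdata.BulkData("/tmp/testdb")
    data.create(u"/newton/prep", "float32_8")
    table = data.getnode(u"/newton/prep")

    # Each row is [timestamp, v1, ..., v8] to match the float32_8 layout
    table.append([[1.0] + [0.0] * 8,
                  [2.0] + [0.0] * 8])
    print table[0]      # single row
    print table[0:2]    # simple slice -> nested list

    # Binary-search timestamps without unpacking whole rows
    ts = nilmdb.bulkdata.TimestampOnlyTable(table)
    i = bisect.bisect_left(ts, 2.0, 0, table.nrows)

    table.remove(0, 1)  # logically remove the first row
    data.close()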


@@ -1,495 +0,0 @@
# cython: profile=False
# This is from bx-python 554:07aca5a9f6fc (BSD licensed), modified to
# store interval ranges as doubles rather than 32-bit integers.
"""
Data structure for performing intersect queries on a set of intervals which
preserves all information about the intervals (unlike bitset projection methods).
:Authors: James Taylor (james@jamestaylor.org),
Ian Schenk (ian.schenck@gmail.com),
Brent Pedersen (bpederse@gmail.com)
"""
# Historical note:
# This module originally contained an implementation based on sorted endpoints
# and a binary search, using an idea from Scott Schwartz and Piotr Berman.
# Later an interval tree implementation was implemented by Ian for Galaxy's
# join tool (see `bx.intervals.operations.quicksect.py`). This was then
# converted to Cython by Brent, who also added support for
# upstream/downstream/neighbor queries. This was modified by James to
# handle half-open intervals strictly, to maintain sort order, and to
# implement the same interface as the original Intersecter.
#cython: cdivision=True
import operator
cdef extern from "stdlib.h":
int ceil(float f)
float log(float f)
int RAND_MAX
int rand()
int strlen(char *)
int iabs(int)
cdef inline double dmax2(double a, double b):
if b > a: return b
return a
cdef inline double dmax3(double a, double b, double c):
if b > a:
if c > b:
return c
return b
if a > c:
return a
return c
cdef inline double dmin3(double a, double b, double c):
if b < a:
if c < b:
return c
return b
if a < c:
return a
return c
cdef inline double dmin2(double a, double b):
if b < a: return b
return a
cdef float nlog = -1.0 / log(0.5)
cdef class IntervalNode:
"""
A single node of an `IntervalTree`.
NOTE: Unless you really know what you are doing, you probably should use
`IntervalTree` rather than using this directly.
"""
cdef float priority
cdef public object interval
cdef public double start, end
cdef double minend, maxend, minstart
cdef IntervalNode cleft, cright, croot
property left_node:
def __get__(self):
return self.cleft if self.cleft is not EmptyNode else None
property right_node:
def __get__(self):
return self.cright if self.cright is not EmptyNode else None
property root_node:
def __get__(self):
return self.croot if self.croot is not EmptyNode else None
def __repr__(self):
return "IntervalNode(%g, %g)" % (self.start, self.end)
def __cinit__(IntervalNode self, double start, double end, object interval):
# Python lacks the binomial distribution, so we convert a
# uniform into a binomial because it naturally scales with
# tree size. Also, python's uniform is perfect since the
# upper limit is not inclusive, which gives us undefined here.
self.priority = ceil(nlog * log(-1.0/(1.0 * rand()/RAND_MAX - 1)))
self.start = start
self.end = end
self.interval = interval
self.maxend = end
self.minstart = start
self.minend = end
self.cleft = EmptyNode
self.cright = EmptyNode
self.croot = EmptyNode
cpdef IntervalNode insert(IntervalNode self, double start, double end, object interval):
"""
Insert a new IntervalNode into the tree of which this node is
currently the root. The return value is the new root of the tree (which
may or may not be this node!)
"""
cdef IntervalNode croot = self
# If starts are the same, decide which to add interval to based on
# end, thus maintaining sortedness relative to start/end
cdef double decision_endpoint = start
if start == self.start:
decision_endpoint = end
if decision_endpoint > self.start:
# insert to cright tree
if self.cright is not EmptyNode:
self.cright = self.cright.insert( start, end, interval )
else:
self.cright = IntervalNode( start, end, interval )
# rebalance tree
if self.priority < self.cright.priority:
croot = self.rotate_left()
else:
# insert to cleft tree
if self.cleft is not EmptyNode:
self.cleft = self.cleft.insert( start, end, interval)
else:
self.cleft = IntervalNode( start, end, interval)
# rebalance tree
if self.priority < self.cleft.priority:
croot = self.rotate_right()
croot.set_ends()
self.cleft.croot = croot
self.cright.croot = croot
return croot
cdef IntervalNode rotate_right(IntervalNode self):
cdef IntervalNode croot = self.cleft
self.cleft = self.cleft.cright
croot.cright = self
self.set_ends()
return croot
cdef IntervalNode rotate_left(IntervalNode self):
cdef IntervalNode croot = self.cright
self.cright = self.cright.cleft
croot.cleft = self
self.set_ends()
return croot
cdef inline void set_ends(IntervalNode self):
if self.cright is not EmptyNode and self.cleft is not EmptyNode:
self.maxend = dmax3(self.end, self.cright.maxend, self.cleft.maxend)
self.minend = dmin3(self.end, self.cright.minend, self.cleft.minend)
self.minstart = dmin3(self.start, self.cright.minstart, self.cleft.minstart)
elif self.cright is not EmptyNode:
self.maxend = dmax2(self.end, self.cright.maxend)
self.minend = dmin2(self.end, self.cright.minend)
self.minstart = dmin2(self.start, self.cright.minstart)
elif self.cleft is not EmptyNode:
self.maxend = dmax2(self.end, self.cleft.maxend)
self.minend = dmin2(self.end, self.cleft.minend)
self.minstart = dmin2(self.start, self.cleft.minstart)
def intersect( self, double start, double end, sort=True ):
"""
given a start and a end, return a list of features
falling within that range
"""
cdef list results = []
self._intersect( start, end, results )
if sort:
results = sorted(results)
return results
find = intersect
cdef void _intersect( IntervalNode self, double start, double end, list results):
# Left subtree
if self.cleft is not EmptyNode and self.cleft.maxend > start:
self.cleft._intersect( start, end, results )
# This interval
if ( self.end > start ) and ( self.start < end ):
results.append( self.interval )
# Right subtree
if self.cright is not EmptyNode and self.start < end:
self.cright._intersect( start, end, results )
cdef void _seek_left(IntervalNode self, double position, list results, int n, double max_dist):
# we know we can bail in these 2 cases.
if self.maxend + max_dist < position:
return
if self.minstart > position:
return
# the ordering of these 3 blocks makes it so the results are
# ordered nearest to farthest from the query position
if self.cright is not EmptyNode:
self.cright._seek_left(position, results, n, max_dist)
if -1 < position - self.end < max_dist:
results.append(self.interval)
# TODO: can these conditionals be more stringent?
if self.cleft is not EmptyNode:
self.cleft._seek_left(position, results, n, max_dist)
cdef void _seek_right(IntervalNode self, double position, list results, int n, double max_dist):
# we know we can bail in these 2 cases.
if self.maxend < position: return
if self.minstart - max_dist > position: return
#print "SEEK_RIGHT:",self, self.cleft, self.maxend, self.minstart, position
# the ordering of these 3 blocks makes it so the results are
# ordered nearest to farthest from the query position
if self.cleft is not EmptyNode:
self.cleft._seek_right(position, results, n, max_dist)
if -1 < self.start - position < max_dist:
results.append(self.interval)
if self.cright is not EmptyNode:
self.cright._seek_right(position, results, n, max_dist)
cpdef left(self, position, int n=1, double max_dist=2500):
"""
find n features with a start > than `position`
f: a Interval object (or anything with an `end` attribute)
n: the number of features to return
max_dist: the maximum distance to look before giving up.
"""
cdef list results = []
# use start - 1 because .left() assumes strictly left-of
self._seek_left( position - 1, results, n, max_dist )
if len(results) == n: return results
r = results
r.sort(key=operator.attrgetter('end'), reverse=True)
return r[:n]
cpdef right(self, position, int n=1, double max_dist=2500):
"""
find n features with a end < than position
f: a Interval object (or anything with a `start` attribute)
n: the number of features to return
max_dist: the maximum distance to look before giving up.
"""
cdef list results = []
# use end + 1 because .right() assumes strictly right-of
self._seek_right(position + 1, results, n, max_dist)
if len(results) == n: return results
r = results
r.sort(key=operator.attrgetter('start'))
return r[:n]
def traverse(self):
if self.cleft is not EmptyNode:
for node in self.cleft.traverse():
yield node
yield self.interval
if self.cright is not EmptyNode:
for node in self.cright.traverse():
yield node
cdef IntervalNode EmptyNode = IntervalNode( 0, 0, Interval(0, 0))
## ---- Wrappers that retain the old interface -------------------------------
cdef class Interval:
"""
Basic feature, with required integer start and end properties.
Also accepts optional strand as +1 or -1 (used for up/downstream queries),
a name, and any arbitrary data is sent in on the info keyword argument
>>> from bx.intervals.intersection import Interval
>>> f1 = Interval(23, 36)
>>> f2 = Interval(34, 48, value={'chr':12, 'anno':'transposon'})
>>> f2
Interval(34, 48, value={'anno': 'transposon', 'chr': 12})
"""
cdef public double start, end
cdef public object value, chrom, strand
def __init__(self, double start, double end, object value=None, object chrom=None, object strand=None ):
assert start <= end, "start must be less than end"
self.start = start
self.end = end
self.value = value
self.chrom = chrom
self.strand = strand
def __repr__(self):
fstr = "Interval(%g, %g" % (self.start, self.end)
if not self.value is None:
fstr += ", value=" + str(self.value)
fstr += ")"
return fstr
def __richcmp__(self, other, op):
if op == 0:
# <
return self.start < other.start or self.end < other.end
elif op == 1:
# <=
return self == other or self < other
elif op == 2:
# ==
return self.start == other.start and self.end == other.end
elif op == 3:
# !=
return self.start != other.start or self.end != other.end
elif op == 4:
# >
return self.start > other.start or self.end > other.end
elif op == 5:
# >=
return self == other or self > other
cdef class IntervalTree:
"""
Data structure for performing window intersect queries on a set of
of possibly overlapping 1d intervals.
Usage
=====
Create an empty IntervalTree
>>> from bx.intervals.intersection import Interval, IntervalTree
>>> intersecter = IntervalTree()
An interval is a start and end position and a value (possibly None).
You can add any object as an interval:
>>> intersecter.insert( 0, 10, "food" )
>>> intersecter.insert( 3, 7, dict(foo='bar') )
>>> intersecter.find( 2, 5 )
['food', {'foo': 'bar'}]
If the object has start and end attributes (like the Interval class) there
are some shortcuts:
>>> intersecter = IntervalTree()
>>> intersecter.insert_interval( Interval( 0, 10 ) )
>>> intersecter.insert_interval( Interval( 3, 7 ) )
>>> intersecter.insert_interval( Interval( 3, 40 ) )
>>> intersecter.insert_interval( Interval( 13, 50 ) )
>>> intersecter.find( 30, 50 )
[Interval(3, 40), Interval(13, 50)]
>>> intersecter.find( 100, 200 )
[]
Before/after for intervals
>>> intersecter.before_interval( Interval( 10, 20 ) )
[Interval(3, 7)]
>>> intersecter.before_interval( Interval( 5, 20 ) )
[]
Upstream/downstream
>>> intersecter.upstream_of_interval(Interval(11, 12))
[Interval(0, 10)]
>>> intersecter.upstream_of_interval(Interval(11, 12, strand="-"))
[Interval(13, 50)]
>>> intersecter.upstream_of_interval(Interval(1, 2, strand="-"), num_intervals=3)
[Interval(3, 7), Interval(3, 40), Interval(13, 50)]
"""
cdef IntervalNode root
def __cinit__( self ):
root = None
# ---- Position based interfaces -----------------------------------------
def insert( self, double start, double end, object value=None ):
"""
Insert the interval [start,end) associated with value `value`.
"""
if self.root is None:
self.root = IntervalNode( start, end, value )
else:
self.root = self.root.insert( start, end, value )
add = insert
def find( self, start, end ):
"""
Return a sorted list of all intervals overlapping [start,end).
"""
if self.root is None:
return []
return self.root.find( start, end )
def before( self, position, num_intervals=1, max_dist=2500 ):
"""
Find `num_intervals` intervals that lie before `position` and are no
further than `max_dist` positions away
"""
if self.root is None:
return []
return self.root.left( position, num_intervals, max_dist )
def after( self, position, num_intervals=1, max_dist=2500 ):
"""
Find `num_intervals` intervals that lie after `position` and are no
further than `max_dist` positions away
"""
if self.root is None:
return []
return self.root.right( position, num_intervals, max_dist )
# ---- Interval-like object based interfaces -----------------------------
def insert_interval( self, interval ):
"""
Insert an "interval" like object (one with at least start and end
attributes)
"""
self.insert( interval.start, interval.end, interval )
add_interval = insert_interval
def before_interval( self, interval, num_intervals=1, max_dist=2500 ):
"""
Find `num_intervals` intervals that lie completely before `interval`
and are no further than `max_dist` positions away
"""
if self.root is None:
return []
return self.root.left( interval.start, num_intervals, max_dist )
def after_interval( self, interval, num_intervals=1, max_dist=2500 ):
"""
Find `num_intervals` intervals that lie completely after `interval` and
are no further than `max_dist` positions away
"""
if self.root is None:
return []
return self.root.right( interval.end, num_intervals, max_dist )
def upstream_of_interval( self, interval, num_intervals=1, max_dist=2500 ):
"""
Find `num_intervals` intervals that lie completely upstream of
`interval` and are no further than `max_dist` positions away
"""
if self.root is None:
return []
if interval.strand == -1 or interval.strand == "-":
return self.root.right( interval.end, num_intervals, max_dist )
else:
return self.root.left( interval.start, num_intervals, max_dist )
def downstream_of_interval( self, interval, num_intervals=1, max_dist=2500 ):
"""
Find `num_intervals` intervals that lie completely downstream of
`interval` and are no further than `max_dist` positions away
"""
if self.root is None:
return []
if interval.strand == -1 or interval.strand == "-":
return self.root.left( interval.start, num_intervals, max_dist )
else:
return self.root.right( interval.end, num_intervals, max_dist )
def traverse(self):
"""
Return an iterator that traverses the tree.
"""
if self.root is None:
return iter([])
return self.root.traverse()
# For backward compatibility
Intersecter = IntervalTree
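
As a quick supplement to the doctests above, here is a sketch of the
position-based interface and traverse(), which they don't exercise
(before()/after() return the stored values, just like find()):

from bx.intervals.intersection import IntervalTree

tree = IntervalTree()
tree.insert(0, 10, "a")       # start, end, value
tree.insert(15, 25, "b")

print(tree.find(5, 20))       # -> ['a', 'b']
print(tree.before(15))        # -> ['a']  (lies before position 15)
print(tree.after(10))         # -> ['b']  (starts after position 10)

for node in tree.traverse():  # visit every node in the tree
    print(node)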


@@ -1,14 +1,18 @@
# -*- coding: utf-8 -*-
"""Class for performing HTTP client requests via libcurl"""
from __future__ import absolute_import
from nilmdb.printf import *
from nilmdb.utils.printf import *
import time
import sys
import re
import os
import simplejson as json
import itertools
import nilmdb.utils
import nilmdb.httpclient
# Other functions expect to see these in the nilmdb.client namespace
@@ -16,6 +20,10 @@ from nilmdb.httpclient import ClientError, ServerError, Error
version = "1.0"
def float_to_string(f):
# Use repr to maintain full precision in the string output.
return repr(float(f))
class Client(object):
"""Main client interface to the Nilm database."""
@@ -89,33 +97,83 @@ class Client(object):
params = { "path": path }
return self.http.get("stream/destroy", params)
def stream_insert(self, path, data):
def stream_remove(self, path, start = None, end = None):
"""Remove data from the specified time range"""
params = {
"path": path
}
if start is not None:
params["start"] = float_to_string(start)
if end is not None:
params["end"] = float_to_string(end)
return self.http.get("stream/remove", params)
def stream_insert(self, path, data, start = None, end = None):
"""Insert data into a stream. data should be a file-like object
that provides ASCII data that matches the database layout for path."""
that provides ASCII data that matches the database layout for path.
start and end are the starting and ending timestamp of this
stream; all timestamps t in the data must satisfy 'start <= t
< end'. If left unspecified, 'start' is the timestamp of the
first line of data, and 'end' is the timestamp on the last line
of data, plus a small delta of 1μs.
"""
params = { "path": path }
# See design.md for a discussion of how much data to send.
# These are soft limits -- actual data might be rounded up.
max_data = 1048576
max_time = 30
end_epsilon = 1e-6
def extract_timestamp(line):
return float(line.split()[0])
def sendit():
result = self.http.put("stream/insert", send_data, params)
params["old_timestamp"] = result[1]
return result
# If we have more data after this, use the timestamp of
# the next line as the end. Otherwise, use the given
# overall end time, or add end_epsilon to the last data
# point.
if nextline:
block_end = extract_timestamp(nextline)
if end and block_end > end:
# This is unexpected, but we'll defer to the server
# to return an error in this case.
block_end = end
elif end:
block_end = end
else:
block_end = extract_timestamp(line) + end_epsilon
# Send it
params["start"] = float_to_string(block_start)
params["end"] = float_to_string(block_end)
return self.http.put("stream/insert", block_data, params)
clock_start = time.time()
block_data = ""
block_start = start
result = None
start = time.time()
send_data = ""
for line in data:
elapsed = time.time() - start
send_data += line
for (line, nextline) in nilmdb.utils.misc.pairwise(data):
# If we don't have a starting time, extract it from the first line
if block_start is None:
block_start = extract_timestamp(line)
if (len(send_data) > max_data) or (elapsed > max_time):
clock_elapsed = time.time() - clock_start
block_data += line
# If we have enough data, or enough time has elapsed,
# send this block to the server, and empty things out
# for the next block.
if (len(block_data) > max_data) or (clock_elapsed > max_time):
result = sendit()
send_data = ""
start = time.time()
if len(send_data):
block_start = None
block_data = ""
clock_start = time.time()
# One last block?
if len(block_data):
result = sendit()
# Return the most recent JSON result we got back, or None if
@@ -130,9 +188,9 @@ class Client(object):
"path": path
}
if start is not None:
params["start"] = repr(start) # use repr to keep precision
params["start"] = float_to_string(start)
if end is not None:
params["end"] = repr(end)
params["end"] = float_to_string(end)
return self.http.get_gen("stream/intervals", params, retjson = True)
def stream_extract(self, path, start = None, end = None, count = False):
@@ -148,9 +206,9 @@ class Client(object):
"path": path,
}
if start is not None:
params["start"] = repr(start) # use repr to keep precision
params["start"] = float_to_string(start)
if end is not None:
params["end"] = repr(end)
params["end"] = float_to_string(end)
if count:
params["count"] = 1
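
To summarize the new insertion logic above, here is a condensed
standalone sketch of the block-splitting loop; send() is a hypothetical
stand-in for the HTTP PUT, and the check that caps block_end at the
overall end time is omitted:

import itertools
import time

def insert_blocks(lines, send, start=None, end=None,
                  max_data=1048576, max_time=30, end_epsilon=1e-6):
    def ts(line):
        return float(line.split()[0])
    def pairwise(it):  # s -> (s0,s1), (s1,s2), ..., (sn,None)
        a, b = itertools.tee(it)
        next(b, None)
        return itertools.izip_longest(a, b)
    block_data = ""
    block_start = start
    clock_start = time.time()
    result = None
    for (line, nextline) in pairwise(lines):
        if block_start is None:
            block_start = ts(line)        # first line sets block start
        block_data += line
        full = len(block_data) > max_data
        slow = (time.time() - clock_start) > max_time
        if full or slow or nextline is None:
            if nextline is not None:
                block_end = ts(nextline)  # next line bounds this block
            elif end is not None:
                block_end = end           # caller-supplied overall end
            else:
                block_end = ts(line) + end_epsilon
            result = send(block_start, block_end, block_data)
            block_data, block_start = "", None
            clock_start = time.time()
    return result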


@@ -1,7 +1,7 @@
"""Command line client functionality"""
from __future__ import absolute_import
from nilmdb.printf import *
from nilmdb.utils.printf import *
import nilmdb.client
import datetime_tz
@@ -11,12 +11,12 @@ import re
import argparse
from argparse import ArgumentDefaultsHelpFormatter as def_form
version = "0.1"
version = "1.0"
# Valid subcommands. Defined in separate files just to break
# things up -- they're still called with Cmdline as self.
subcommands = [ "info", "create", "list", "metadata", "insert", "extract",
"destroy" ]
"remove", "destroy" ]
# Import the subcommand modules. Equivalent way of doing this would be
# from . import info as cmd_info
@@ -24,10 +24,16 @@ subcmd_mods = {}
for cmd in subcommands:
subcmd_mods[cmd] = __import__("nilmdb.cmdline." + cmd, fromlist = [ cmd ])
class JimArgumentParser(argparse.ArgumentParser):
def error(self, message):
self.print_usage(sys.stderr)
self.exit(2, sprintf("error: %s\n", message))
class Cmdline(object):
def __init__(self, argv):
self.argv = argv
self.client = None
def arg_time(self, toparse):
"""Parse a time string argument"""
@@ -93,7 +99,7 @@ class Cmdline(object):
version_string = sprintf("nilmtool %s, client library %s",
version, nilmdb.Client.client_version)
self.parser = argparse.ArgumentParser(add_help = False,
self.parser = JimArgumentParser(add_help = False,
formatter_class = def_form)
group = self.parser.add_argument_group("General options")
@@ -119,6 +125,7 @@ class Cmdline(object):
def die(self, formatstr, *args):
fprintf(sys.stderr, formatstr + "\n", *args)
if self.client:
self.client.close()
sys.exit(-1)
@@ -131,13 +138,17 @@ class Cmdline(object):
self.parser_setup()
self.args = self.parser.parse_args(self.argv)
# Run arg verify handler if there is one
if "verify" in self.args:
self.args.verify(self)
self.client = nilmdb.Client(self.args.url)
# Make a test connection to make sure things work
try:
server_version = self.client.version()
except nilmdb.client.Error as e:
self.die("Error connecting to server: %s", str(e))
self.die("error connecting to server: %s", str(e))
# Now dispatch client request to appropriate function. Parser
# should have ensured that we don't have any unknown commands


@@ -1,5 +1,5 @@
from __future__ import absolute_import
from nilmdb.printf import *
from nilmdb.utils.printf import *
import nilmdb.client
from argparse import ArgumentDefaultsHelpFormatter as def_form
@@ -24,4 +24,4 @@ def cmd_create(self):
try:
self.client.stream_create(self.args.path, self.args.layout)
except nilmdb.client.ClientError as e:
self.die("Error creating stream: %s", str(e))
self.die("error creating stream: %s", str(e))


@@ -1,5 +1,5 @@
from __future__ import absolute_import
from nilmdb.printf import *
from nilmdb.utils.printf import *
import nilmdb.client
from argparse import ArgumentDefaultsHelpFormatter as def_form
@@ -22,4 +22,4 @@ def cmd_destroy(self):
try:
self.client.stream_destroy(self.args.path)
except nilmdb.client.ClientError as e:
self.die("Error deleting stream: %s", str(e))
self.die("error destroying stream: %s", str(e))


@@ -1,7 +1,7 @@
from __future__ import absolute_import
from nilmdb.printf import *
from __future__ import print_function
from nilmdb.utils.printf import *
import nilmdb.client
import nilmdb.layout
import sys
def setup(self, sub):
@@ -9,17 +9,18 @@ def setup(self, sub):
description="""
Extract data from a stream.
""")
cmd.set_defaults(handler = cmd_extract)
cmd.set_defaults(verify = cmd_extract_verify,
handler = cmd_extract)
group = cmd.add_argument_group("Data selection")
group.add_argument("path",
help="Path of stream, e.g. /foo/bar")
group.add_argument("-s", "--start", required=True,
metavar="TIME", type=self.arg_time,
help="Starting timestamp (free-form)")
help="Starting timestamp (free-form, inclusive)")
group.add_argument("-e", "--end", required=True,
metavar="TIME", type=self.arg_time,
help="Ending timestamp (free-form)")
help="Ending timestamp (free-form, noninclusive)")
group = cmd.add_argument_group("Output format")
group.add_argument("-b", "--bare", action="store_true",
@@ -30,10 +31,15 @@ def setup(self, sub):
group.add_argument("-c", "--count", action="store_true",
help="Just output a count of matched data points")
def cmd_extract_verify(self):
if self.args.start is not None and self.args.end is not None:
if self.args.start > self.args.end:
self.parser.error("start is after end")
def cmd_extract(self):
streams = self.client.stream_list(self.args.path)
if len(streams) != 1:
self.die("Error getting stream info for path %s", self.args.path)
self.die("error getting stream info for path %s", self.args.path)
layout = streams[0][1]
if self.args.annotate:
@@ -51,7 +57,7 @@ def cmd_extract(self):
# Strip timestamp (first element). Doesn't make sense
# if we are only returning a count.
dataline = ' '.join(dataline.split(' ')[1:])
print dataline
print(dataline)
printed = True
if not printed:
if self.args.annotate:


@@ -1,5 +1,5 @@
from __future__ import absolute_import
from nilmdb.printf import *
from nilmdb.utils.printf import *
from argparse import ArgumentDefaultsHelpFormatter as def_form


@@ -1,7 +1,6 @@
from __future__ import absolute_import
from nilmdb.printf import *
from nilmdb.utils.printf import *
import nilmdb.client
import nilmdb.layout
import nilmdb.timestamper
import sys
@@ -52,12 +51,12 @@ def cmd_insert(self):
# Find requested stream
streams = self.client.stream_list(self.args.path)
if len(streams) != 1:
self.die("Error getting stream info for path %s", self.args.path)
self.die("error getting stream info for path %s", self.args.path)
layout = streams[0][1]
if self.args.start and len(self.args.file) != 1:
self.die("--start can only be used with one input file, for now")
self.die("error: --start can only be used with one input file")
for filename in self.args.file:
if filename == '-':
@@ -66,7 +65,7 @@ def cmd_insert(self):
try:
infile = open(filename, "r")
except IOError:
self.die("Error opening input file %s", filename)
self.die("error opening input file %s", filename)
# Build a timestamper for this file
if self.args.none:
@@ -78,11 +77,11 @@ def cmd_insert(self):
try:
start = self.parse_time(filename)
except ValueError:
self.die("Error extracting time from filename '%s'",
self.die("error extracting time from filename '%s'",
filename)
if not self.args.rate:
self.die("Need to specify --rate")
self.die("error: --rate is needed, but was not specified")
rate = self.args.rate
ts = nilmdb.timestamper.TimestamperRate(infile, start, rate)
@@ -101,6 +100,6 @@ def cmd_insert(self):
# ugly bracketed ranges of 16-digit numbers and a mangled URL.
# Need to consider adding something like e.prettyprint()
# that is smarter about the contents of the error.
self.die("Error inserting data: %s", str(e))
self.die("error inserting data: %s", str(e))
return


@@ -1,8 +1,9 @@
from __future__ import absolute_import
from nilmdb.printf import *
from nilmdb.utils.printf import *
import nilmdb.client
import fnmatch
import argparse
from argparse import ArgumentDefaultsHelpFormatter as def_form
def setup(self, sub):
@@ -13,23 +14,41 @@ def setup(self, sub):
optionally filtering by layout or path. Wildcards
are accepted.
""")
cmd.set_defaults(handler = cmd_list)
cmd.set_defaults(verify = cmd_list_verify,
handler = cmd_list)
group = cmd.add_argument_group("Stream filtering")
group.add_argument("-p", "--path", metavar="PATH", default="*",
help="Match only this path (-p can be omitted)")
group.add_argument("path_positional", default="*",
nargs="?", help=argparse.SUPPRESS)
group.add_argument("-l", "--layout", default="*",
help="Match only this stream layout")
group.add_argument("-p", "--path", default="*",
help="Match only this path")
group = cmd.add_argument_group("Interval details")
group.add_argument("-d", "--detail", action="store_true",
help="Show available data time intervals")
group.add_argument("-s", "--start",
metavar="TIME", type=self.arg_time,
help="Starting timestamp (free-form)")
help="Starting timestamp (free-form, inclusive)")
group.add_argument("-e", "--end",
metavar="TIME", type=self.arg_time,
help="Ending timestamp (free-form)")
help="Ending timestamp (free-form, noninclusive)")
def cmd_list_verify(self):
# A hidden "path_positional" argument lets the user leave off the
# "-p" when specifying the path. Handle it here.
got_opt = self.args.path != "*"
got_pos = self.args.path_positional != "*"
if got_pos:
if got_opt:
self.parser.error("too many paths specified")
else:
self.args.path = self.args.path_positional
if self.args.start is not None and self.args.end is not None:
if self.args.start > self.args.end:
self.parser.error("start is after end")
def cmd_list(self):
"""List available streams"""


@@ -1,5 +1,5 @@
from __future__ import absolute_import
from nilmdb.printf import *
from nilmdb.utils.printf import *
import nilmdb.client
def setup(self, sub):
@@ -43,21 +43,21 @@ def cmd_metadata(self):
for keyval in keyvals:
kv = keyval.split('=')
if len(kv) != 2 or kv[0] == "":
self.die("Error parsing key=value argument '%s'", keyval)
self.die("error parsing key=value argument '%s'", keyval)
data[kv[0]] = kv[1]
# Make the call
try:
handler(self.args.path, data)
except nilmdb.client.ClientError as e:
self.die("Error setting/updating metadata: %s", str(e))
self.die("error setting/updating metadata: %s", str(e))
else:
# Get (or unspecified)
keys = self.args.get or None
try:
data = self.client.stream_get_metadata(self.args.path, keys)
except nilmdb.client.ClientError as e:
self.die("Error getting metadata: %s", str(e))
self.die("error getting metadata: %s", str(e))
for key, value in sorted(data.items()):
# Omit nonexistent keys
if value is None:

nilmdb/cmdline/remove.py Normal file

@@ -0,0 +1,45 @@
from __future__ import absolute_import
from __future__ import print_function
from nilmdb.utils.printf import *
import nilmdb.client
import sys
def setup(self, sub):
cmd = sub.add_parser("remove", help="Remove data",
description="""
Remove all data from a specified time range within a
stream.
""")
cmd.set_defaults(verify = cmd_remove_verify,
handler = cmd_remove)
group = cmd.add_argument_group("Data selection")
group.add_argument("path",
help="Path of stream, e.g. /foo/bar")
group.add_argument("-s", "--start", required=True,
metavar="TIME", type=self.arg_time,
help="Starting timestamp (free-form, inclusive)")
group.add_argument("-e", "--end", required=True,
metavar="TIME", type=self.arg_time,
help="Ending timestamp (free-form, noninclusive)")
group = cmd.add_argument_group("Output format")
group.add_argument("-c", "--count", action="store_true",
help="Output number of data points removed")
def cmd_remove_verify(self):
if self.args.start is not None and self.args.end is not None:
if self.args.start > self.args.end:
self.parser.error("start is after end")
def cmd_remove(self):
try:
count = self.client.stream_remove(self.args.path,
self.args.start, self.args.end)
except nilmdb.client.ClientError as e:
self.die("error removing data: %s", str(e))
if self.args.count:
printf("%d\n", count)
return 0
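
An example invocation of the new subcommand (the stream path and the
free-form times are illustrative):

nilmtool remove /foo/bar --start '2013-01-01' --end '2013-01-02' --count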


@@ -1,7 +1,8 @@
"""HTTP client library"""
from __future__ import absolute_import
from nilmdb.printf import *
from nilmdb.utils.printf import *
import nilmdb.utils
import time
import sys
@@ -9,12 +10,9 @@ import re
import os
import simplejson as json
import urlparse
import urllib
import pycurl
import cStringIO
import nilmdb.iteratorizer
class Error(Exception):
"""Base exception for both ClientError and ServerError responses"""
def __init__(self,
@@ -28,12 +26,19 @@ class Error(Exception):
self.url = url # URL we were requesting
self.traceback = traceback # server traceback, if available
def __str__(self):
s = sprintf("[%s]", self.status)
if self.message:
s += sprintf(" %s", self.message)
if self.traceback: # pragma: no cover
s += sprintf("\nServer traceback:\n%s", self.traceback)
return s
def __repr__(self): # pragma: no cover
s = sprintf("[%s]", self.status)
if self.message:
s += sprintf(" %s", self.message)
if self.url:
s += sprintf(" (%s)", self.url)
if self.traceback: # pragma: no cover
if self.traceback:
s += sprintf("\nServer traceback:\n%s", self.traceback)
return s
class ClientError(Error):
@@ -60,7 +65,8 @@ class HTTPClient(object):
def _setup_url(self, url = "", params = ""):
url = urlparse.urljoin(self.baseurl, url)
if params:
url = urlparse.urljoin(url, "?" + urllib.urlencode(params, True))
url = urlparse.urljoin(
url, "?" + nilmdb.utils.urllib.urlencode(params))
self.curl.setopt(pycurl.URL, url)
self.url = url
@@ -85,6 +91,10 @@ class HTTPClient(object):
raise ClientError(**args)
else: # pragma: no cover
if code >= 500 and code <= 599:
if args["message"] is None:
args["message"] = ("(no message; try disabling " +
"response.stream option in " +
"nilmdb.server for better debugging)")
raise ServerError(**args)
else:
raise Error(**args)
@@ -109,7 +119,7 @@ class HTTPClient(object):
self.curl.setopt(pycurl.WRITEFUNCTION, callback)
self.curl.perform()
try:
for i in nilmdb.iteratorizer.Iteratorizer(func):
for i in nilmdb.utils.Iteratorizer(func):
if self._status == 200:
# If we had a 200 response, yield the data to the caller.
yield i


@@ -37,6 +37,7 @@ cdef class Interval:
'start' and 'end' are arbitrary floats that represent time
"""
if start >= end:
# Explicitly disallow zero-width intervals (since they're half-open)
raise IntervalError("start %s must precede end %s" % (start, end))
self.start = float(start)
self.end = float(end)
@@ -177,8 +178,8 @@ cdef class IntervalSet:
else:
return False
this = [ x for x in self ]
that = [ x for x in other ]
this = list(self)
that = list(other)
try:
while True:
@@ -236,6 +237,12 @@ cdef class IntervalSet:
self.__iadd__(x)
return self
def iadd_nocheck(self, Interval other not None):
"""Inplace add -- modifies self.
'Optimized' version that doesn't check for intersection and
only inserts the new interval into the tree."""
self.tree.insert(rbtree.RBNode(other.start, other.end, other))
def __isub__(self, Interval other not None):
"""Inplace subtract -- modifies self
@@ -272,7 +279,7 @@ cdef class IntervalSet:
return out
def intersection(self, Interval interval not None):
def intersection(self, Interval interval not None, orig = False):
"""
Compute a sequence of intervals that correspond to the
intersection between `self` and the provided interval.
@@ -281,6 +288,10 @@ cdef class IntervalSet:
Output intervals are built as subsets of the intervals in the
first argument (self).
If orig = True, also return the original interval that was
(potentially) subsetted to make the one that is being
returned.
"""
if not isinstance(interval, Interval):
raise TypeError("bad type")
@@ -288,10 +299,16 @@ cdef class IntervalSet:
i = n.obj
if i:
if i.start >= interval.start and i.end <= interval.end:
if orig:
yield (i, i)
else:
yield i
else:
subset = i.subset(max(i.start, interval.start),
min(i.end, interval.end))
if orig:
yield (subset, i)
else:
yield subset
cpdef intersects(self, Interval other):
@@ -300,3 +317,13 @@ cdef class IntervalSet:
if n.obj.intersects(other):
return True
return False
def find_end(self, double t):
"""
Return an Interval from this tree that ends at time t, or
None if it doesn't exist.
"""
n = self.tree.find_left_end(t)
if n and n.obj.end == t:
return n.obj
return None
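
A small sketch of how the new orig=True flag and find_end() behave,
assuming the compiled nilmdb.interval module and the half-open interval
semantics above:

from nilmdb.interval import Interval, IntervalSet

iset = IntervalSet()
iset += Interval(0.0, 10.0)
iset += Interval(20.0, 30.0)

# Each yielded pair is (subsetted interval, original source interval),
# so a caller can both compute the overlap and delete the original.
for (subset, orig) in iset.intersection(Interval(5.0, 25.0), orig = True):
    print("%g-%g from %g-%g" % (subset.start, subset.end,
                                orig.start, orig.end))
# -> 5-10 from 0-10
#    20-25 from 20-30

print(iset.find_end(10.0))    # the [0,10) interval
print(iset.find_end(11.0))    # None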


@@ -1,6 +1,5 @@
# cython: profile=False
import tables
import time
import sys
import inspect
@@ -122,15 +121,6 @@ class Layout:
s += " %d" % d[i+1]
return s + "\n"
# PyTables description
def description(self):
"""Return the PyTables description of this layout"""
desc = {}
desc['timestamp'] = tables.Col.from_type('float64', pos=0)
for n in range(self.count):
desc['c' + str(n+1)] = tables.Col.from_type(self.datatype, pos=n+1)
return tables.Description(desc)
# Get a layout by name
def get_named(typestring):
try:


@@ -4,17 +4,16 @@
Object that represents a NILM database file.
Manages both the SQL database and the PyTables storage backend.
Manages both the SQL database and the table storage backend.
"""
# Need absolute_import so that "import nilmdb" won't pull in nilmdb.py,
# but will pull the nilmdb module instead.
from __future__ import absolute_import
import nilmdb
from nilmdb.printf import *
from nilmdb.utils.printf import *
import sqlite3
import tables
import time
import sys
import os
@@ -25,6 +24,8 @@ import pyximport
pyximport.install()
from nilmdb.interval import Interval, DBInterval, IntervalSet, IntervalError
from . import bulkdata
# Note about performance and transactions:
#
# Committing a transaction in the default sync mode (PRAGMA synchronous=FULL)
@@ -79,7 +80,7 @@ _sql_schema_updates = {
class NilmDBError(Exception):
"""Base exception for NilmDB errors"""
def __init__(self, message = "Unspecified error"):
Exception.__init__(self, self.__class__.__name__ + ": " + message)
Exception.__init__(self, message)
class StreamError(NilmDBError):
pass
@@ -87,19 +88,14 @@ class StreamError(NilmDBError):
class OverlapError(NilmDBError):
pass
# Helper that lets us pass a Pytables table into bisect
class BisectableTable(object):
def __init__(self, table):
self.table = table
def __getitem__(self, index):
return self.table[index][0]
@nilmdb.utils.must_close()
class NilmDB(object):
verbose = 0
def __init__(self, basepath, sync=True, max_results=None):
def __init__(self, basepath, sync=True, max_results=None,
bulkdata_args={}):
# set up path
self.basepath = os.path.abspath(basepath.rstrip('/'))
self.basepath = os.path.abspath(basepath)
# Create the database path if it doesn't exist
try:
@@ -108,16 +104,16 @@ class NilmDB(object):
if e.errno != errno.EEXIST:
raise IOError("can't create tree " + self.basepath)
# Our HD5 file goes inside it
h5filename = os.path.abspath(self.basepath + "/data.h5")
self.h5file = tables.openFile(h5filename, "a", "NILM Database")
# Our data goes inside it
self.data = bulkdata.BulkData(self.basepath, **bulkdata_args)
# SQLite database too
sqlfilename = os.path.abspath(self.basepath + "/data.sql")
sqlfilename = os.path.join(self.basepath, "data.sql")
# We use check_same_thread = False, assuming that the rest
# of the code (e.g. Server) will be smart and not access this
# database from multiple threads simultaneously. That requirement
# may be relaxed later.
# database from multiple threads simultaneously. Otherwise
# false positives will occur when the database is only opened
# in one thread, and only accessed in another.
self.con = sqlite3.connect(sqlfilename, check_same_thread = False)
self._sql_schema_update()
@@ -134,17 +130,6 @@ class NilmDB(object):
else:
self.max_results = 16384
self.opened = True
# Cached intervals
self._cached_iset = {}
def __del__(self):
if "opened" in self.__dict__: # pragma: no cover
fprintf(sys.stderr,
"error: NilmDB.close() wasn't called, path %s",
self.basepath)
def get_basepath(self):
return self.basepath
@@ -152,8 +137,7 @@ class NilmDB(object):
if self.con:
self.con.commit()
self.con.close()
self.h5file.close()
del self.opened
self.data.close()
def _sql_schema_update(self):
cur = self.con.cursor()
@@ -170,12 +154,11 @@ class NilmDB(object):
with self.con:
cur.execute("PRAGMA user_version = {v:d}".format(v=version))
@nilmdb.utils.lru_cache(size = 16)
def _get_intervals(self, stream_id):
"""
Return a mutable IntervalSet corresponding to the given stream ID.
"""
# Load from database if not cached
if stream_id not in self._cached_iset:
iset = IntervalSet()
result = self.con.execute("SELECT start_time, end_time, "
"start_pos, end_pos "
@@ -188,42 +171,112 @@ class NilmDB(object):
start_pos, end_pos)
except IntervalError as e: # pragma: no cover
raise NilmDBError("unexpected overlap in ranges table!")
self._cached_iset[stream_id] = iset
# Return cached value
return self._cached_iset[stream_id]
# TODO: Split add_interval into two pieces, one to add
# and one to flush to disk?
# Need to think about this. Basic problem is that we can't
# mess with intervals once they're in the IntervalSet,
# without mucking with bxinterval internals.
return iset
# Maybe add a separate optimization step?
# Join intervals that have a fairly small gap between them
def _sql_interval_insert(self, id, start, end, start_pos, end_pos):
"""Helper that adds interval to the SQL database only"""
self.con.execute("INSERT INTO ranges "
"(stream_id,start_time,end_time,start_pos,end_pos) "
"VALUES (?,?,?,?,?)",
(id, start, end, start_pos, end_pos))
def _sql_interval_delete(self, id, start, end, start_pos, end_pos):
"""Helper that removes interval from the SQL database only"""
self.con.execute("DELETE FROM ranges WHERE "
"stream_id=? AND start_time=? AND "
"end_time=? AND start_pos=? AND end_pos=?",
(id, start, end, start_pos, end_pos))
def _add_interval(self, stream_id, interval, start_pos, end_pos):
"""
Add interval to the internal interval cache, and to the database.
Note: arguments must be ints (not numpy.int64, etc)
"""
# Ensure this stream's intervals are cached, and add the new
# interval to that cache.
# Load this stream's intervals
iset = self._get_intervals(stream_id)
try:
iset += DBInterval(interval.start, interval.end,
interval.start, interval.end,
start_pos, end_pos)
except IntervalError as e: # pragma: no cover
# Check for overlap
if iset.intersects(interval): # pragma: no cover (gets caught earlier)
raise NilmDBError("new interval overlaps existing data")
# Check for adjacency. If there's a stream in the database
# that ends exactly when this one starts, and the database
# rows match up, we can make one interval that covers the
# time range [adjacent.start -> interval.end)
# and database rows [ adjacent.start_pos -> end_pos ].
# Only do this if the resulting interval isn't too large.
max_merged_rows = 8000 * 60 * 60 * 1.05 # 1.05 hours at 8 KHz
adjacent = iset.find_end(interval.start)
if (adjacent is not None and
start_pos == adjacent.db_endpos and
(end_pos - adjacent.db_startpos) < max_merged_rows):
# First delete the old one, both from our iset and the
# database
iset -= adjacent
self._sql_interval_delete(stream_id,
adjacent.db_start, adjacent.db_end,
adjacent.db_startpos, adjacent.db_endpos)
# Now update our interval so the fallthrough add is
# correct.
interval.start = adjacent.start
start_pos = adjacent.db_startpos
# Add the new interval to the iset
iset.iadd_nocheck(DBInterval(interval.start, interval.end,
interval.start, interval.end,
start_pos, end_pos))
# Insert into the database
self.con.execute("INSERT INTO ranges "
"(stream_id,start_time,end_time,start_pos,end_pos) "
"VALUES (?,?,?,?,?)",
(stream_id, interval.start, interval.end,
int(start_pos), int(end_pos)))
self._sql_interval_insert(stream_id, interval.start, interval.end,
int(start_pos), int(end_pos))
self.con.commit()
def _remove_interval(self, stream_id, original, remove):
"""
Remove an interval from the internal cache and the database.
stream_id: id of stream
original: original DBInterval; must be already present in DB
remove: DBInterval to remove; must be a subset of 'original'
"""
# Just return if we have nothing to remove
if remove.start == remove.end: # pragma: no cover
return
# Load this stream's intervals
iset = self._get_intervals(stream_id)
# Remove existing interval from the cached set and the database
iset -= original
self._sql_interval_delete(stream_id,
original.db_start, original.db_end,
original.db_startpos, original.db_endpos)
# Add back the intervals that would be left over if the
# requested interval is removed. There may be two of them, if
# the removed piece was in the middle.
def add(iset, start, end, start_pos, end_pos):
iset += DBInterval(start, end, start, end, start_pos, end_pos)
self._sql_interval_insert(stream_id, start, end, start_pos, end_pos)
if original.start != remove.start:
# Interval before the removed region
add(iset, original.start, remove.start,
original.db_startpos, remove.db_startpos)
if original.end != remove.end:
# Interval after the removed region
add(iset, remove.end, original.end,
remove.db_endpos, original.db_endpos)
# Commit SQL changes
self.con.commit()
return
def stream_list(self, path = None, layout = None):
"""Return list of [path, layout] lists of all streams
in the database.
@@ -285,34 +338,11 @@ class NilmDB(object):
layout_name: string for nilmdb.layout.get_named(), e.g. 'float32_8'
"""
if path[0] != '/':
raise ValueError("paths must start with /")
[ group, node ] = path.rsplit("/", 1)
if group == '':
raise ValueError("invalid path")
# Create the bulk storage. Raises ValueError on error, which we
# pass along.
self.data.create(path, layout_name)
# Get description
try:
desc = nilmdb.layout.get_named(layout_name).description()
except KeyError:
raise ValueError("no such layout")
# Estimated table size (for PyTables optimization purposes): assume
# 3 months worth of data at 8 KHz. It's OK if this is wrong.
exp_rows = 8000 * 60*60*24*30*3
# Create the table
try:
table = self.h5file.createTable(group, node,
description = desc,
expectedrows = exp_rows,
createparents = True)
except AttributeError:
# Trying to create e.g. /foo/bar/baz when /foo/bar is already
# a table raises this error.
raise ValueError("error creating table at that path")
# Insert into SQL database once the PyTables is happy
# Insert into SQL database once the bulk storage is happy
with self.con as con:
con.execute("INSERT INTO streams (path, layout) VALUES (?,?)",
(path, layout_name))
@@ -358,24 +388,14 @@ class NilmDB(object):
def stream_destroy(self, path):
"""Fully remove a table and all of its data from the database.
No way to undo it! The group structure is removed, if there
are no other tables in it. Metadata is removed."""
No way to undo it! Metadata is removed."""
stream_id = self._stream_id(path)
# Delete the cached interval data
if stream_id in self._cached_iset:
del self._cached_iset[stream_id]
# Delete the cached interval data (if it was cached)
self._get_intervals.cache_remove(self, stream_id)
# Delete the data node, and all parent nodes (if they have no
# remaining children)
split_path = path.lstrip('/').split("/")
while split_path:
name = split_path.pop()
where = "/" + "/".join(split_path)
try:
self.h5file.removeNode(where, name, recursive = False)
except tables.NodeError:
break
# Delete the data
self.data.destroy(path)
# Delete metadata, stream, intervals
with self.con as con:
@@ -383,49 +403,35 @@ class NilmDB(object):
con.execute("DELETE FROM ranges WHERE stream_id=?", (stream_id,))
con.execute("DELETE FROM streams WHERE id=?", (stream_id,))
def stream_insert(self, path, parser, old_timestamp = None):
def stream_insert(self, path, start, end, data):
"""Insert new data into the database.
path: Path at which to add the data
parser: nilmdb.layout.Parser instance full of data to insert
start: Starting timestamp
end: Ending timestamp
data: Rows of data, to be passed to the table's append
method. E.g. nilmdb.layout.Parser.data
"""
if (not parser.min_timestamp or not parser.max_timestamp or
not len(parser.data)):
raise StreamError("no data provided")
# If we were provided with an old timestamp, the expectation
# is that the client has a contiguous block of time it is sending,
# but it's doing it over multiple calls to stream_insert.
# old_timestamp is the max_timestamp of the previous insert.
# To make things continuous, use that as our starting timestamp
# instead of what the parser found.
if old_timestamp:
min_timestamp = old_timestamp
else:
min_timestamp = parser.min_timestamp
# First check for basic overlap using timestamp info given.
stream_id = self._stream_id(path)
iset = self._get_intervals(stream_id)
interval = Interval(min_timestamp, parser.max_timestamp)
interval = Interval(start, end)
if iset.intersects(interval):
raise OverlapError("new data overlaps existing data at range: "
+ str(iset & interval))
# Insert the data into pytables
table = self.h5file.getNode(path)
# Insert the data
table = self.data.getnode(path)
row_start = table.nrows
table.append(parser.data)
table.append(data)
row_end = table.nrows
table.flush()
# Insert the record into the sql database.
# Casts are to convert from numpy.int64.
self._add_interval(stream_id, interval, int(row_start), int(row_end))
self._add_interval(stream_id, interval, row_start, row_end)
# And that's all
return "ok"
def _find_start(self, table, interval):
def _find_start(self, table, dbinterval):
"""
Given a DBInterval, find the row in the database that
corresponds to the start time. Return the first database
@@ -433,14 +439,14 @@ class NilmDB(object):
equal to 'start'.
"""
# Optimization for the common case where an interval wasn't truncated
if interval.start == interval.db_start:
return interval.db_startpos
return bisect.bisect_left(BisectableTable(table),
interval.start,
interval.db_startpos,
interval.db_endpos)
if dbinterval.start == dbinterval.db_start:
return dbinterval.db_startpos
return bisect.bisect_left(bulkdata.TimestampOnlyTable(table),
dbinterval.start,
dbinterval.db_startpos,
dbinterval.db_endpos)
def _find_end(self, table, interval):
def _find_end(self, table, dbinterval):
"""
Given a DBInterval, find the row in the database that follows
the end time. Return the first database position after the
@@ -448,16 +454,16 @@ class NilmDB(object):
to 'end'.
"""
# Optimization for the common case where an interval wasn't truncated
if interval.end == interval.db_end:
return interval.db_endpos
if dbinterval.end == dbinterval.db_end:
return dbinterval.db_endpos
# Note that we still use bisect_left here, because we don't
# want to include the given timestamp in the results. This is
# so queries like 1:00 -> 2:00 and 2:00 -> 3:00 return
# non-overlapping data.
return bisect.bisect_left(BisectableTable(table),
interval.end,
interval.db_startpos,
interval.db_endpos)
return bisect.bisect_left(bulkdata.TimestampOnlyTable(table),
dbinterval.end,
dbinterval.db_startpos,
dbinterval.db_endpos)
def stream_extract(self, path, start = None, end = None, count = False):
"""
@@ -478,8 +484,8 @@ class NilmDB(object):
than actually fetching the data. It is not limited by
max_results.
"""
table = self.h5file.getNode(path)
stream_id = self._stream_id(path)
table = self.data.getnode(path)
intervals = self._get_intervals(stream_id)
requested = Interval(start or 0, end or 1e12)
result = []
@@ -516,3 +522,45 @@ class NilmDB(object):
if count:
return matched
return (result, restart)
def stream_remove(self, path, start = None, end = None):
"""
Remove data from the specified time interval within a stream.
Removes all data in the interval [start, end), and intervals
are truncated or split appropriately. Returns the number of
data points removed.
"""
stream_id = self._stream_id(path)
table = self.data.getnode(path)
intervals = self._get_intervals(stream_id)
to_remove = Interval(start or 0, end or 1e12)
removed = 0
if start == end:
return 0
# Can't remove intervals from within the iterator, so first
# remember everything that's currently in the intersection.
all_candidates = list(intervals.intersection(to_remove, orig = True))
for (dbint, orig) in all_candidates:
# Find row start and end
row_start = self._find_start(table, dbint)
row_end = self._find_end(table, dbint)
# Adjust the DBInterval to match the newly found ends
dbint.db_start = dbint.start
dbint.db_end = dbint.end
dbint.db_startpos = row_start
dbint.db_endpos = row_end
# Remove interval from the database
self._remove_interval(stream_id, orig, dbint)
# Remove data from the underlying table storage
table.remove(row_start, row_end)
# Count how many were removed
removed += row_end - row_start
return removed
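
The splitting arithmetic in _remove_interval above reduces to a simple
rule for half-open intervals; a toy illustration:

def pieces_left(orig, remove):
    """Pieces of half-open interval 'orig' remaining after cutting out
    'remove', which must be a subset of 'orig'."""
    (ostart, oend) = orig
    (rstart, rend) = remove
    out = []
    if ostart != rstart:
        out.append((ostart, rstart))  # piece before the removed span
    if oend != rend:
        out.append((rend, oend))      # piece after the removed span
    return out

print(pieces_left((0, 100), (40, 60)))  # [(0, 40), (60, 100)]
print(pieces_left((0, 100), (0, 60)))   # [(60, 100)]
print(pieces_left((0, 100), (0, 100)))  # []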


@@ -3,15 +3,18 @@
# Need absolute_import so that "import nilmdb" won't pull in nilmdb.py,
# but will pull the nilmdb module instead.
from __future__ import absolute_import
from nilmdb.utils.printf import *
import nilmdb
from nilmdb.printf import *
import cherrypy
import sys
import time
import os
import simplejson as json
import decorator
import traceback
from nilmdb.nilmdb import NilmDBError
try:
import cherrypy
@@ -24,8 +27,53 @@ class NilmApp(object):
def __init__(self, db):
self.db = db
version = "1.1"
version = "1.2"
# Decorators
def chunked_response(func):
"""Decorator to enable chunked responses."""
# Set this to False to get better tracebacks from some requests
# (/stream/extract, /stream/intervals).
func._cp_config = { 'response.stream': True }
return func
@decorator.decorator
def workaround_cp_bug_1200(func, *args, **kwargs): # pragma: no cover
"""Decorator to work around CherryPy bug #1200 in a response
generator.
Even if chunked responses are disabled, LookupError or
UnicodeError exceptions may still be swallowed by CherryPy due to
bug #1200. This throws them as generic Exceptions instead so that
they make it through.
"""
try:
for val in func(*args, **kwargs):
yield val
except (LookupError, UnicodeError) as e:
raise Exception("bug workaround; real exception is:\n" +
traceback.format_exc())
def exception_to_httperror(*expected):
"""Return a decorator-generating function that catches expected
errors and raises an HTTPError describing them instead.
@exception_to_httperror(NilmDBError, ValueError)
def foo():
pass
"""
def wrapper(func, *args, **kwargs):
try:
return func(*args, **kwargs)
except expected as e:
message = sprintf("%s", str(e))
raise cherrypy.HTTPError("400 Bad Request", message)
# We need to preserve the function's argspecs for CherryPy to
# handle argument errors correctly. Decorator.decorator takes
# care of that.
return decorator.decorator(wrapper)
# CherryPy apps
class Root(NilmApp):
"""Root application for NILM database"""
@@ -59,7 +107,7 @@ class Root(NilmApp):
@cherrypy.expose
@cherrypy.tools.json_out()
def dbsize(self):
return nilmdb.du.du(self.db.get_basepath())
return nilmdb.utils.du(self.db.get_basepath())
class Stream(NilmApp):
"""Stream-specific operations"""
@@ -78,26 +126,20 @@ class Stream(NilmApp):
# /stream/create?path=/newton/prep&layout=PrepData
@cherrypy.expose
@cherrypy.tools.json_out()
@exception_to_httperror(NilmDBError, ValueError)
def create(self, path, layout):
"""Create a new stream in the database. Provide path
and one of the nilmdb.layout.layouts keys.
"""
try:
return self.db.stream_create(path, layout)
except Exception as e:
message = sprintf("%s: %s", type(e).__name__, e.message)
raise cherrypy.HTTPError("400 Bad Request", message)
# /stream/destroy?path=/newton/prep
@cherrypy.expose
@cherrypy.tools.json_out()
@exception_to_httperror(NilmDBError)
def destroy(self, path):
"""Delete a stream and its associated data."""
try:
return self.db.stream_destroy(path)
except Exception as e:
message = sprintf("%s: %s", type(e).__name__, e.message)
raise cherrypy.HTTPError("400 Bad Request", message)
# /stream/get_metadata?path=/newton/prep
# /stream/get_metadata?path=/newton/prep&key=foo&key=bar
@@ -126,49 +168,35 @@ class Stream(NilmApp):
# /stream/set_metadata?path=/newton/prep&data=<json>
@cherrypy.expose
@cherrypy.tools.json_out()
@exception_to_httperror(NilmDBError, LookupError, TypeError)
def set_metadata(self, path, data):
"""Set metadata for the named stream, replacing any
existing metadata. Data should be a json-encoded
dictionary"""
try:
data_dict = json.loads(data)
self.db.stream_set_metadata(path, data_dict)
except Exception as e:
message = sprintf("%s: %s", type(e).__name__, e.message)
raise cherrypy.HTTPError("400 Bad Request", message)
return "ok"
# /stream/update_metadata?path=/newton/prep&data=<json>
@cherrypy.expose
@cherrypy.tools.json_out()
@exception_to_httperror(NilmDBError, LookupError, TypeError)
def update_metadata(self, path, data):
"""Update metadata for the named stream. Data
should be a json-encoded dictionary"""
try:
data_dict = json.loads(data)
self.db.stream_update_metadata(path, data_dict)
except Exception as e:
message = sprintf("%s: %s", type(e).__name__, e.message)
raise cherrypy.HTTPError("400 Bad Request", message)
return "ok"
# /stream/insert?path=/newton/prep
@cherrypy.expose
@cherrypy.tools.json_out()
#@cherrypy.tools.disable_prb()
def insert(self, path, old_timestamp = None):
def insert(self, path, start, end):
"""
Insert new data into the database. Provide textual data
(matching the path's layout) as a HTTP PUT.
old_timestamp is used when making multiple, split-up insertions
for a larger contiguous block of data. The first insert
will return the maximum timestamp that it saw, and the second
insert should provide this timestamp as an argument. This is
used to extend the previous database interval rather than
start a new one.
"""
# Important that we always read the input before throwing any
# errors, to keep lengths happy for persistent connections.
# However, CherryPy 3.2.2 has a bug where this fails for GET
@@ -190,25 +218,60 @@ class Stream(NilmApp):
parser.parse(body)
except nilmdb.layout.ParserError as e:
raise cherrypy.HTTPError("400 Bad Request",
"Error parsing input data: " +
"error parsing input data: " +
e.message)
if (not parser.min_timestamp or not parser.max_timestamp or
not len(parser.data)):
raise cherrypy.HTTPError("400 Bad Request",
"no data provided")
# Check limits
start = float(start)
end = float(end)
if parser.min_timestamp < start:
raise cherrypy.HTTPError("400 Bad Request", "Data timestamp " +
repr(parser.min_timestamp) +
" < start time " + repr(start))
if parser.max_timestamp >= end:
raise cherrypy.HTTPError("400 Bad Request", "Data timestamp " +
repr(parser.max_timestamp) +
" >= end time " + repr(end))
# Now do the nilmdb insert, passing it the parser full of data.
try:
if old_timestamp:
old_timestamp = float(old_timestamp)
result = self.db.stream_insert(path, parser, old_timestamp)
result = self.db.stream_insert(path, start, end, parser.data)
except nilmdb.nilmdb.NilmDBError as e:
raise cherrypy.HTTPError("400 Bad Request", e.message)
# Return the maximum timestamp that we saw. The client will
# return this back to us as the old_timestamp parameter, if
# it has more data to send.
return ("ok", parser.max_timestamp)
# Done
return "ok"
# /stream/remove?path=/newton/prep
# /stream/remove?path=/newton/prep&start=1234567890.0&end=1234567899.0
@cherrypy.expose
@cherrypy.tools.json_out()
@exception_to_httperror(NilmDBError)
def remove(self, path, start = None, end = None):
"""
Remove data from the backend database. Removes all data in
the interval [start, end). Returns the number of data points
removed.
"""
if start is not None:
start = float(start)
if end is not None:
end = float(end)
if start is not None and end is not None:
if end < start:
raise cherrypy.HTTPError("400 Bad Request",
"end before start")
return self.db.stream_remove(path, start, end)
# /stream/intervals?path=/newton/prep
# /stream/intervals?path=/newton/prep&start=1234567890.0&end=1234567899.0
@cherrypy.expose
@chunked_response
def intervals(self, path, start = None, end = None):
"""
Get intervals from backend database. Streams the resulting
@@ -230,9 +293,9 @@ class Stream(NilmApp):
if len(streams) != 1:
raise cherrypy.HTTPError("404 Not Found", "No such stream")
@workaround_cp_bug_1200
def content(start, end):
# Note: disable response.stream below to get better debug info
# from tracebacks in this subfunction.
# Note: disable chunked responses to see tracebacks from here.
while True:
(intervals, restart) = self.db.stream_intervals(path,start,end)
response = ''.join([ json.dumps(i) + "\n" for i in intervals ])
@@ -241,10 +304,10 @@ class Stream(NilmApp):
break
start = restart
return content(start, end)
intervals._cp_config = { 'response.stream': True } # chunked HTTP response
# /stream/extract?path=/newton/prep&start=1234567890.0&end=1234567899.0
@cherrypy.expose
@chunked_response
def extract(self, path, start = None, end = None, count = False):
"""
Extract data from backend database. Streams the resulting
@@ -274,9 +337,9 @@ class Stream(NilmApp):
# Get formatter
formatter = nilmdb.layout.Formatter(layout)
@workaround_cp_bug_1200
def content(start, end, count):
# Note: disable response.stream below to get better debug info
# from tracebacks in this subfunction.
# Note: disable chunked responses to see tracebacks from here.
if count:
matched = self.db.stream_extract(path, start, end, count)
yield sprintf("%d\n", matched)
@@ -292,8 +355,6 @@ class Stream(NilmApp):
return
start = restart
return content(start, end, count)
extract._cp_config = { 'response.stream': True } # chunked HTTP response
class Exiter(object):
"""App that exits the server, for testing"""
@@ -318,7 +379,7 @@ class Server(object):
# Need to wrap DB object in a serializer because we'll call
# into it from separate threads.
self.embedded = embedded
self.db = nilmdb.serializer.WrapObject(db)
self.db = nilmdb.utils.Serializer(db)
cherrypy.config.update({
'server.socket_host': host,
'server.socket_port': port,
@@ -334,6 +395,11 @@ class Server(object):
cherrypy.config.update({ 'request.show_tracebacks' : True })
self.force_traceback = force_traceback
# Patch CherryPy error handler to never pad out error messages.
# This isn't necessary, but then again, neither is padding the
# error messages.
cherrypy._cperror._ie_friendly_error_sizes = {}
cherrypy.tree.apps = {}
cherrypy.tree.mount(Root(self.db, self.version), "/")
cherrypy.tree.mount(Stream(self.db), "/stream")
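
As a standalone illustration of the argspec comment inside
exception_to_httperror above: a hand-rolled closure hides the handler's
signature, while decorator.decorator preserves it, which is what lets
CherryPy map query parameters onto handler arguments (plain() and
handler() are made-up names):

import inspect
import decorator

def wrapper(func, *args, **kwargs):
    return func(*args, **kwargs)

def plain(func):
    def inner(*args, **kwargs):
        return func(*args, **kwargs)
    return inner

def handler(path, start=None, end=None):
    return path

print(inspect.getargspec(plain(handler)))
# -> ArgSpec(args=[], varargs='args', keywords='kwargs', defaults=None)
print(inspect.getargspec(decorator.decorator(wrapper)(handler)))
# -> ArgSpec(args=['path', 'start', 'end'], ..., defaults=(None, None))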


@@ -1,7 +1,7 @@
"""File-like objects that add timestamps to the input lines"""
from __future__ import absolute_import
from nilmdb.printf import *
from nilmdb.utils.printf import *
import time
import os

nilmdb/utils/__init__.py Normal file

@@ -0,0 +1,11 @@
"""NilmDB utilities"""
from .timer import Timer
from .iteratorizer import Iteratorizer
from .serializer import Serializer
from .lrucache import lru_cache
from .diskusage import du
from .mustclose import must_close
from .urllib import urlencode
from . import misc
from . import atomic

nilmdb/utils/atomic.py Normal file

@@ -0,0 +1,26 @@
# Atomic file writing helper.
import os
def replace_file(filename, content):
"""Attempt to atomically and durably replace the filename with the
given contents. This is intended to be 'pretty good on most
OSes', but not necessarily bulletproof."""
newfilename = filename + ".new"
# Write to new file, flush it
with open(newfilename, "wb") as f:
f.write(content)
f.flush()
os.fsync(f.fileno())
# Move new file over old one
try:
os.rename(newfilename, filename)
except OSError: # pragma: no cover
# Some OSes might not support renaming over an existing file.
# This is definitely NOT atomic!
os.remove(filename)
os.rename(newfilename, filename)
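
A minimal usage sketch (the path and contents are illustrative):

from nilmdb.utils import atomic

atomic.replace_file("/tmp/example.conf", "version = 1\n")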

nilmdb/utils/lrucache.py Normal file

@@ -0,0 +1,77 @@
# Memoize a function's return value with a least-recently-used cache
# Based on:
# http://code.activestate.com/recipes/498245-lru-and-lfu-cache-decorators/
# with added 'destructor' functionality.
import collections
import decorator
import warnings
def lru_cache(size = 10, onremove = None, keys = slice(None)):
"""Least-recently-used cache decorator.
@lru_cache(size = 10, onremove = None)
def f(...):
pass
Given a function and arguments, memoize its return value. Up to
'size' elements are cached. 'keys' is a slice object that
represents which arguments are used as the cache key.
When evicting a value from the cache, call the function
'onremove' with the value that's being evicted.
Call f.cache_remove(...) to evict the cache entry with the given
arguments. Call f.cache_remove_all() to evict all entries.
f.cache_hits and f.cache_misses give statistics.
"""
def decorate(func):
cache = collections.OrderedDict() # order: least- to most-recent
def evict(value):
if onremove:
onremove(value)
def wrapper(orig, *args, **kwargs):
if kwargs:
raise NotImplementedError("kwargs not supported")
key = args[keys]
try:
value = cache.pop(key)
orig.cache_hits += 1
except KeyError:
value = orig(*args)
orig.cache_misses += 1
if len(cache) >= size:
evict(cache.popitem(0)[1]) # evict LRU cache entry
cache[key] = value # (re-)insert this key at end
return value
def cache_remove(*args):
"""Remove the described key from this cache, if present."""
key = args
if key in cache:
evict(cache.pop(key))
else:
if len(cache) > 0 and len(args) != len(cache.iterkeys().next()):
raise KeyError("trying to remove from LRU cache, but "
"number of arguments doesn't match the "
"cache key length")
def cache_remove_all():
# Pop entries one at a time; calling evict() while iterating
# would mutate the dict we're iterating over.
while cache:
evict(cache.popitem(0)[1])
def cache_info():
return (func.cache_hits, func.cache_misses)
new = decorator.decorator(wrapper, func)
func.cache_hits = 0
func.cache_misses = 0
new.cache_info = cache_info
new.cache_remove = cache_remove
new.cache_remove_all = cache_remove_all
return new
return decorate
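
A usage sketch showing the keys= slice and the onremove destructor
(open_file is a made-up example; only the first argument forms the
cache key, so differing second arguments still hit the cache):

from nilmdb.utils import lru_cache

@lru_cache(size = 2, onremove = lambda f: f.close(), keys = slice(0, 1))
def open_file(name, newsize = None):
    return open(name, "a")

f1 = open_file("/tmp/a", 123)
f2 = open_file("/tmp/a")            # cache hit: same key ("/tmp/a",)
assert f1 is f2
open_file.cache_remove("/tmp/a")    # evicts and close()s the handle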

nilmdb/utils/misc.py Normal file

@@ -0,0 +1,8 @@
import itertools
def pairwise(iterable):
"s -> (s0,s1), (s1,s2), ..., (sn,None)"
a, b = itertools.tee(iterable)
next(b, None)
return itertools.izip_longest(a, b)
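
For example:

>>> list(pairwise("abc"))
[('a', 'b'), ('b', 'c'), ('c', None)]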

nilmdb/utils/mustclose.py Normal file

@@ -0,0 +1,63 @@
from nilmdb.utils.printf import *
import sys
import inspect
import decorator
def must_close(errorfile = sys.stderr, wrap_verify = False):
"""Class decorator that warns on 'errorfile' at deletion time if
the class's close() member wasn't called.
If 'wrap_verify' is True, every class method is wrapped with a
verifier that will raise AssertionError if the .close() method has
already been called."""
def class_decorator(cls):
# Helper to replace a class method with a wrapper function,
# while maintaining argument specs etc.
def wrap_class_method(wrapper_func):
method = wrapper_func.__name__
if method in cls.__dict__:
orig = getattr(cls, method).im_func
else:
orig = lambda self: None
setattr(cls, method, decorator.decorator(wrapper_func, orig))
@wrap_class_method
def __init__(orig, self, *args, **kwargs):
ret = orig(self, *args, **kwargs)
self.__dict__["_must_close"] = True
self.__dict__["_must_close_initialized"] = True
return ret
@wrap_class_method
def __del__(orig, self, *args, **kwargs):
if "_must_close" in self.__dict__:
fprintf(errorfile, "error: %s.close() wasn't called!\n",
self.__class__.__name__)
return orig(self, *args, **kwargs)
@wrap_class_method
def close(orig, self, *args, **kwargs):
del self._must_close
return orig(self, *args, **kwargs)
# Optionally wrap all other functions
def verifier(orig, self, *args, **kwargs):
if ("_must_close" not in self.__dict__ and
"_must_close_initialized" in self.__dict__):
raise AssertionError("called " + str(orig) + " after close")
return orig(self, *args, **kwargs)
if wrap_verify:
for (name, method) in inspect.getmembers(cls, inspect.ismethod):
# Skip class methods
if method.__self__ is not None:
continue
# Skip some methods
if name in [ "__del__", "__init__" ]:
continue
# Set up wrapper
setattr(cls, name, decorator.decorator(verifier,
method.im_func))
return cls
return class_decorator
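
A usage sketch (Resource is a made-up class):

from nilmdb.utils import must_close

@must_close(wrap_verify = True)
class Resource(object):
    def close(self):
        pass
    def read(self):
        return "data"

r = Resource()
r.read()          # OK
r.close()
try:
    r.read()      # wrap_verify turns use-after-close into an error
except AssertionError as e:
    print(e)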


@@ -67,3 +67,6 @@ class WrapObject(object):
def __del__(self):
self.__wrap_call_queue.put((None, None, None, None))
self.__wrap_serializer.join()
# Just an alias
Serializer = WrapObject


@@ -5,6 +5,7 @@
# with nilmdb.Timer("flush"):
# foo.flush()
from __future__ import print_function
import contextlib
import time
@@ -18,4 +19,4 @@ def Timer(name = None, tosyslog = False):
import syslog
syslog.syslog(msg)
else:
print msg
print(msg)

nilmdb/utils/urllib.py Normal file

@@ -0,0 +1,40 @@
from __future__ import absolute_import
from urllib import quote_plus, _is_unicode
# urllib.urlencode insists on encoding Unicode as ASCII. This is based
# on that function, except we always encode it as UTF-8 instead.
def urlencode(query):
"""Encode a dictionary into a URL query string.
If any values in the query arg are sequences, each sequence
element is converted to a separate parameter.
"""
query = query.items()
l = []
for k, v in query:
k = quote_plus(str(k))
if isinstance(v, str):
v = quote_plus(v)
l.append(k + '=' + v)
elif _is_unicode(v):
# is there a reasonable way to convert to ASCII?
# encode generates a string, but "replace" or "ignore"
# lose information and "strict" can raise UnicodeError
v = quote_plus(v.encode("utf-8","strict"))
l.append(k + '=' + v)
else:
try:
# is this a sufficient test for sequence-ness?
len(v)
except TypeError:
# not a sequence
v = quote_plus(str(v))
l.append(k + '=' + v)
else:
# loop over the sequence
for elt in v:
l.append(k + '=' + quote_plus(str(elt)))
return '&'.join(l)
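
A quick behavior sketch (keys and values are illustrative):

from nilmdb.utils.urllib import urlencode

print(urlencode({"start": 1234.5}))       # -> start=1234.5
print(urlencode({"path": ["a", "b"]}))    # sequences repeat: path=a&path=b
print(urlencode({"q": u"\u03b1"}))        # unicode as UTF-8: q=%CE%B1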


@@ -3,14 +3,17 @@
import nilmdb
import argparse
parser = argparse.ArgumentParser(description='Run the NILM server')
formatter = argparse.ArgumentDefaultsHelpFormatter
parser = argparse.ArgumentParser(description='Run the NILM server',
formatter_class = formatter)
parser.add_argument('-p', '--port', help='Port number', type=int, default=12380)
parser.add_argument('-d', '--database', help='Database directory', default="db")
parser.add_argument('-y', '--yappi', help='Run with yappi profiler',
action='store_true')
args = parser.parse_args()
# Start web app on a custom port
db = nilmdb.NilmDB("db")
db = nilmdb.NilmDB(args.database)
server = nilmdb.Server(db, host = "127.0.0.1",
port = args.port,
embedded = False)

runtests.py Executable file

@@ -0,0 +1,46 @@
#!/usr/bin/python
import nose
import os
import sys
import glob
from collections import OrderedDict
class JimOrderPlugin(nose.plugins.Plugin):
"""When searching for tests and encountering a directory that
contains a 'test.order' file, run tests listed in that file, in the
order that they're listed. Globs are OK in that file and duplicates
are removed."""
name = 'jimorder'
score = 10000
def prepareTestLoader(self, loader):
def wrap(func):
def wrapper(name, *args, **kwargs):
addr = nose.selector.TestAddress(
name, workingDir=loader.workingDir)
try:
order = os.path.join(addr.filename, "test.order")
except:
order = None
if order and os.path.exists(order):
files = []
for line in open(order):
line = line.split('#')[0].strip()
if not line:
continue
fn = os.path.join(addr.filename, line.strip())
files.extend(sorted(glob.glob(fn)) or [fn])
files = list(OrderedDict.fromkeys(files))
tests = [ wrapper(fn, *args, **kwargs) for fn in files ]
return loader.suiteClass(tests)
return func(name, *args, **kwargs)
return wrapper
loader.loadTestsFromName = wrap(loader.loadTestsFromName)
return loader
# Use setup.cfg for most of the test configuration. Adding
# --with-jimorder here means that a normal "nosetests" run will
# still work, it just won't support test.order.
nose.main(addplugins = [ JimOrderPlugin() ],
argv = sys.argv + ["--with-jimorder"])


@@ -8,8 +8,14 @@ cover-package=nilmdb
cover-erase=
##cover-html= # this works, puts html output in cover/ dir
##cover-branches= # need nose 1.1.3 for this
#debug=nose
#debug-log=nose.log
stop=
verbosity=2
tests=tests
#tests=tests/test_bulkdata.py
#tests=tests/test_mustclose.py
#tests=tests/test_lrucache.py
#tests=tests/test_cmdline.py
#tests=tests/test_layout.py
#tests=tests/test_rbtree.py
@@ -21,6 +27,7 @@ verbosity=2
#tests=tests/test_serializer.py
#tests=tests/test_iteratorizer.py
#tests=tests/test_client.py:TestClient.test_client_nilmdb
#tests=tests/test_nilmdb.py
#with-profile=
#profile-sort=time
##profile-restrict=10 # doesn't work right, treated as string or something


@@ -0,0 +1,19 @@
2.56437e+05 2.24430e+05 4.01161e+03 3.47534e+03 7.49589e+03 3.38894e+03 2.61397e+02 3.73126e+03
2.53963e+05 2.24167e+05 5.62107e+03 1.54801e+03 9.16517e+03 3.52293e+03 1.05893e+03 2.99696e+03
2.58508e+05 2.24930e+05 6.01140e+03 8.18866e+02 9.03995e+03 4.48244e+03 2.49039e+03 2.67934e+03
2.59627e+05 2.26022e+05 4.47450e+03 2.42302e+03 7.41419e+03 5.07197e+03 2.43938e+03 2.96296e+03
2.55187e+05 2.24632e+05 4.73857e+03 3.39804e+03 7.39512e+03 4.72645e+03 1.83903e+03 3.39353e+03
2.57102e+05 2.21623e+05 6.14413e+03 1.44109e+03 8.75648e+03 3.49532e+03 1.86994e+03 3.75253e+03
2.63653e+05 2.21770e+05 6.22177e+03 7.38962e+02 9.54760e+03 2.66682e+03 1.46266e+03 3.33257e+03
2.63613e+05 2.25256e+05 4.47712e+03 2.43745e+03 8.51021e+03 3.85563e+03 9.59442e+02 2.38718e+03
2.55350e+05 2.26264e+05 4.28372e+03 3.92394e+03 7.91247e+03 5.46652e+03 1.28499e+03 2.09372e+03
2.52727e+05 2.24609e+05 5.85193e+03 2.49198e+03 8.54063e+03 5.62305e+03 2.33978e+03 3.00714e+03
2.58475e+05 2.23578e+05 5.92487e+03 1.39448e+03 8.77962e+03 4.54418e+03 2.13203e+03 3.84976e+03
2.61563e+05 2.24609e+05 4.33614e+03 2.45575e+03 8.05538e+03 3.46911e+03 6.27873e+02 3.66420e+03
2.56401e+05 2.24441e+05 4.18715e+03 3.45717e+03 7.90669e+03 3.53355e+03 -5.84482e+00 2.96687e+03
2.54745e+05 2.22644e+05 6.02005e+03 1.94721e+03 9.28939e+03 3.80020e+03 1.34820e+03 2.37785e+03
2.60723e+05 2.22660e+05 6.69719e+03 1.03048e+03 9.26124e+03 4.34917e+03 2.84530e+03 2.73619e+03
2.63089e+05 2.25711e+05 4.77887e+03 2.60417e+03 7.39660e+03 4.59811e+03 2.17472e+03 3.40729e+03
2.55843e+05 2.27128e+05 4.02413e+03 4.39323e+03 6.79336e+03 4.62535e+03 7.52009e+02 3.44647e+03
2.51904e+05 2.24868e+05 5.82289e+03 3.02127e+03 8.46160e+03 3.80298e+03 8.07212e+02 3.53468e+03
2.57670e+05 2.22974e+05 6.73436e+03 1.60956e+03 9.92960e+03 2.98028e+03 1.44168e+03 3.05351e+03

tests/test.order Normal file

@@ -0,0 +1,18 @@
test_printf.py
test_lrucache.py
test_mustclose.py
test_serializer.py
test_iteratorizer.py
test_timestamper.py
test_layout.py
test_rbtree.py
test_interval.py
test_bulkdata.py
test_nilmdb.py
test_client.py
test_cmdline.py
test_*.py

tests/test_bulkdata.py Normal file

@@ -0,0 +1,103 @@
# -*- coding: utf-8 -*-
import nilmdb
from nilmdb.utils.printf import *
import nilmdb.bulkdata
from nose.tools import *
from nose.tools import assert_raises
import itertools
from testutil.helpers import *
testdb = "tests/bulkdata-testdb"
from nilmdb.bulkdata import BulkData
class TestBulkData(object):
def test_bulkdata(self):
for (size, files, db) in [ ( 0, 0, testdb ),
( 25, 1000, testdb ),
( 1000, 3, testdb.decode("utf-8") ) ]:
recursive_unlink(db)
os.mkdir(db)
self.do_basic(db, size, files)
def do_basic(self, db, size, files):
"""Do the basic test with variable file_size and files_per_dir"""
if not size or not files:
data = BulkData(db)
else:
data = BulkData(db, file_size = size, files_per_dir = files)
# create empty
with assert_raises(ValueError):
data.create("/foo", "uint16_8")
with assert_raises(ValueError):
data.create("foo/bar", "uint16_8")
with assert_raises(ValueError):
data.create("/foo/bar", "uint8_8")
data.create("/foo/bar", "uint16_8")
data.create(u"/foo/baz/quux", "float64_16")
with assert_raises(ValueError):
data.create("/foo/bar/baz", "uint16_8")
with assert_raises(ValueError):
data.create("/foo/baz", "float64_16")
# get node -- see if caching works
nodes = []
for i in range(5000):
nodes.append(data.getnode("/foo/bar"))
nodes.append(data.getnode("/foo/baz/quux"))
del nodes
# Test node
node = data.getnode("/foo/bar")
with assert_raises(IndexError):
x = node[0]
raw = []
for i in range(1000):
raw.append([10000+i, 1, 2, 3, 4, 5, 6, 7, 8 ])
node.append(raw[0:1])
node.append(raw[1:100])
node.append(raw[100:])
misc_slices = [ 0, 100, slice(None), slice(0), slice(10),
slice(5,10), slice(3,None), slice(3,-3),
slice(20,10), slice(200,100,-1), slice(None,0,-1),
slice(100,500,5) ]
# Extract slices
for s in misc_slices:
eq_(node[s], raw[s])
# Get some coverage of remove; remove is more fully tested
# in cmdline
with assert_raises(IndexError):
node.remove(9999,9998)
# close, reopen
# reopen
data.close()
if not size or not files:
data = BulkData(db)
else:
data = BulkData(db, file_size = size, files_per_dir = files)
node = data.getnode("/foo/bar")
# Extract slices
for s in misc_slices:
eq_(node[s], raw[s])
# destroy
with assert_raises(ValueError):
data.destroy("/foo")
with assert_raises(ValueError):
data.destroy("/foo/baz")
with assert_raises(ValueError):
data.destroy("/foo/qwerty")
data.destroy("/foo/baz/quux")
data.destroy("/foo/bar")
# close
data.close()
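Distilled from the test above, the BulkData surface exercised here is roughly the following (argument names are illustrative, not taken from the implementation):

data = BulkData("tests/bulkdata-testdb")          # optionally file_size=..., files_per_dir=...
data.create("/foo/bar", "uint16_8")               # one timestamp + 8 uint16 columns per row
node = data.getnode("/foo/bar")                   # getnode() results are cached
node.append([[10000, 1, 2, 3, 4, 5, 6, 7, 8]])    # append a list of rows
rows = node[0:100]                                # int and slice indexing both work
node.remove(9999, 9998)                           # bad ranges raise IndexError
data.destroy("/foo/bar")                          # parents/nonexistent paths raise ValueError
data.close()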

tests/test_client.py

@@ -1,5 +1,7 @@
# -*- coding: utf-8 -*-
import nilmdb
from nilmdb.printf import *
from nilmdb.utils.printf import *
from nilmdb.client import ClientError, ServerError
import datetime_tz
@@ -15,8 +17,9 @@ import cStringIO
import simplejson as json
import unittest
import warnings
import resource
from test_helpers import *
from testutil.helpers import *
testdb = "tests/client-testdb"
@@ -67,7 +70,11 @@ class TestClient(object):
eq_(distutils.version.StrictVersion(version),
distutils.version.StrictVersion(test_server.version))
def test_client_2_nilmdb(self):
# Bad URLs should give 404, not 500
with assert_raises(ClientError):
client.http.get("/stream/create")
def test_client_2_createlist(self):
# Basic stream tests, like those in test_nilmdb:test_stream
client = nilmdb.Client(url = "http://localhost:12380/")
@@ -82,6 +89,8 @@ class TestClient(object):
# Bad layout type
with assert_raises(ClientError):
client.stream_create("/newton/prep", "NoSuchLayout")
# Create three streams
client.stream_create("/newton/prep", "PrepData")
client.stream_create("/newton/raw", "RawData")
client.stream_create("/newton/zzz/rawnotch", "RawNotchedData")
@@ -95,6 +104,20 @@ class TestClient(object):
eq_(client.stream_list(layout="RawData"), [ ["/newton/raw", "RawData"] ])
eq_(client.stream_list(path="/newton/raw"), [ ["/newton/raw", "RawData"] ])
# Try messing with resource limits to trigger errors and get
# more coverage. Here, make it so we can only create files 1
# byte in size, which will trigger an IOError in the server when
# we create a table.
limit = resource.getrlimit(resource.RLIMIT_FSIZE)
resource.setrlimit(resource.RLIMIT_FSIZE, (1, limit[1]))
with assert_raises(ServerError) as e:
client.stream_create("/newton/hello", "RawData")
resource.setrlimit(resource.RLIMIT_FSIZE, limit)
def test_client_3_metadata(self):
client = nilmdb.Client(url = "http://localhost:12380/")
# Set / get metadata
eq_(client.stream_get_metadata("/newton/prep"), {})
eq_(client.stream_get_metadata("/newton/raw"), {})
@@ -124,13 +147,14 @@ class TestClient(object):
with assert_raises(ClientError):
client.stream_update_metadata("/newton/prep", [1,2,3])
def test_client_3_insert(self):
def test_client_4_insert(self):
client = nilmdb.Client(url = "http://localhost:12380/")
datetime_tz.localtz_set("America/New_York")
testfile = "tests/data/prep-20120323T1000"
start = datetime_tz.datetime_tz.smartparse("20120323T1000")
start = start.totimestamp()
rate = 120
# First try a nonexistent path
@@ -155,30 +179,60 @@ class TestClient(object):
# Try forcing a server request with empty data
with assert_raises(ClientError) as e:
client.http.put("stream/insert", "", { "path": "/newton/prep" })
client.http.put("stream/insert", "", { "path": "/newton/prep",
"start": 0, "end": 0 })
in_("400 Bad Request", str(e.exception))
in_("no data provided", str(e.exception))
# Specify start/end (starts too late)
data = nilmdb.timestamper.TimestamperRate(testfile, start, 120)
with assert_raises(ClientError) as e:
result = client.stream_insert("/newton/prep", data,
start + 5, start + 120)
in_("400 Bad Request", str(e.exception))
in_("Data timestamp 1332511200.0 < start time 1332511205.0",
str(e.exception))
# Specify start/end (ends too early)
data = nilmdb.timestamper.TimestamperRate(testfile, start, 120)
with assert_raises(ClientError) as e:
result = client.stream_insert("/newton/prep", data,
start, start + 1)
in_("400 Bad Request", str(e.exception))
# Client chunks the input, so the exact timestamp here might change
# if the chunk positions change.
in_("Data timestamp 1332511271.016667 >= end time 1332511201.0",
str(e.exception))
# Now do the real load
data = nilmdb.timestamper.TimestamperRate(testfile, start, 120)
result = client.stream_insert("/newton/prep", data)
eq_(result[0], "ok")
result = client.stream_insert("/newton/prep", data,
start, start + 119.999777)
eq_(result, "ok")
# Verify the intervals. Should be just one, even if the data
# was inserted in chunks, due to nilmdb interval concatenation.
intervals = list(client.stream_intervals("/newton/prep"))
eq_(intervals, [[start, start + 119.999777]])
# Try some overlapping data -- just insert it again
data = nilmdb.timestamper.TimestamperRate(testfile, start, 120)
with assert_raises(ClientError) as e:
result = client.stream_insert("/newton/prep", data)
in_("400 Bad Request", str(e.exception))
in_("OverlapError", str(e.exception))
in_("verlap", str(e.exception))
def test_client_4_extract(self):
# Misc tests for extract. Most of them are in test_cmdline.
def test_client_5_extractremove(self):
# Misc tests for extract and remove. Most of them are in test_cmdline.
client = nilmdb.Client(url = "http://localhost:12380/")
for x in client.stream_extract("/newton/prep", 123, 123):
raise Exception("shouldn't be any data for this request")
def test_client_5_generators(self):
with assert_raises(ClientError) as e:
client.stream_remove("/newton/prep", 123, 120)
def test_client_6_generators(self):
# A lot of the client functionality is already tested by test_cmdline,
# but this gets a bit more coverage that cmdline misses.
client = nilmdb.Client(url = "http://localhost:12380/")
@@ -215,7 +269,8 @@ class TestClient(object):
# Check PUT with generator out
with assert_raises(ClientError) as e:
client.http.put_gen("stream/insert", "",
{ "path": "/newton/prep" }).next()
{ "path": "/newton/prep",
"start": 0, "end": 0 }).next()
in_("400 Bad Request", str(e.exception))
in_("no data provided", str(e.exception))
@@ -226,7 +281,7 @@ class TestClient(object):
in_("404 Not Found", str(e.exception))
in_("No such stream", str(e.exception))
def test_client_6_chunked(self):
def test_client_7_chunked(self):
# Make sure that /stream/intervals and /stream/extract
# properly return streaming, chunked response. Pokes around
# in client.http internals a bit to look at the response
@@ -238,7 +293,7 @@ class TestClient(object):
# still disable chunked responses for debugging.
x = client.http.get("stream/intervals", { "path": "/newton/prep" },
retjson=False)
eq_(x.count('\n'), 2)
lines_(x, 1)
if "transfer-encoding: chunked" not in client.http._headers.lower():
warnings.warn("Non-chunked HTTP response for /stream/intervals")
@@ -248,3 +303,40 @@ class TestClient(object):
"end": "123" }, retjson=False)
if "transfer-encoding: chunked" not in client.http._headers.lower():
warnings.warn("Non-chunked HTTP response for /stream/extract")
def test_client_8_unicode(self):
# Basic Unicode tests
client = nilmdb.Client(url = "http://localhost:12380/")
# Delete streams that exist
for stream in client.stream_list():
client.stream_destroy(stream[0])
# Database is empty
eq_(client.stream_list(), [])
# Create Unicode stream, match it
raw = [ u"/düsseldorf/raw", u"uint16_6" ]
prep = [ u"/düsseldorf/prep", u"uint16_6" ]
client.stream_create(*raw)
eq_(client.stream_list(), [raw])
eq_(client.stream_list(layout=raw[1]), [raw])
eq_(client.stream_list(path=raw[0]), [raw])
client.stream_create(*prep)
eq_(client.stream_list(), [prep, raw])
# Set / get metadata with Unicode keys and values
eq_(client.stream_get_metadata(raw[0]), {})
eq_(client.stream_get_metadata(prep[0]), {})
meta1 = { u"alpha": u"α",
u"β": u"beta" }
meta2 = { u"alpha": u"α" }
meta3 = { u"β": u"beta" }
client.stream_set_metadata(prep[0], meta1)
client.stream_update_metadata(prep[0], {})
client.stream_update_metadata(raw[0], meta2)
client.stream_update_metadata(raw[0], meta3)
eq_(client.stream_get_metadata(prep[0]), meta1)
eq_(client.stream_get_metadata(raw[0]), meta1)
eq_(client.stream_get_metadata(raw[0], [ "alpha" ]), meta2)
eq_(client.stream_get_metadata(raw[0], [ "alpha", "β" ]), meta1)

tests/test_cmdline.py

@@ -1,29 +1,35 @@
# -*- coding: utf-8 -*-
import nilmdb
from nilmdb.printf import *
from nilmdb.utils.printf import *
import nilmdb.cmdline
import unittest
from nose.tools import *
from nose.tools import assert_raises
import itertools
import datetime_tz
import os
import re
import shutil
import sys
import threading
import urllib2
from urllib2 import urlopen, HTTPError
import Queue
import cStringIO
import StringIO
import shlex
from test_helpers import *
from testutil.helpers import *
testdb = "tests/cmdline-testdb"
def server_start(max_results = None):
def server_start(max_results = None, bulkdata_args = {}):
global test_server, test_db
# Start web app on a custom port
test_db = nilmdb.NilmDB(testdb, sync = False, max_results = max_results)
test_db = nilmdb.NilmDB(testdb, sync = False,
max_results = max_results,
bulkdata_args = bulkdata_args)
test_server = nilmdb.Server(test_db, host = "127.0.0.1",
port = 12380, stoppable = False,
fast_shutdown = True,
@@ -45,12 +51,18 @@ def setup_module():
def teardown_module():
server_stop()
# Add an encoding property to StringIO so Python will convert Unicode
# properly when writing or reading.
class UTF8StringIO(StringIO.StringIO):
encoding = 'utf-8'
class TestCmdline(object):
def run(self, arg_string, infile=None, outfile=None):
"""Run a cmdline client with the specified argument string,
passing the given input. Returns a tuple with the output and
exit code"""
# printf("TZ=UTC ./nilmtool.py %s\n", arg_string)
class stdio_wrapper:
def __init__(self, stdin, stdout, stderr):
self.io = (stdin, stdout, stderr)
@@ -61,15 +73,18 @@ class TestCmdline(object):
( sys.stdin, sys.stdout, sys.stderr ) = self.saved
# Empty input if none provided
if infile is None:
infile = cStringIO.StringIO("")
infile = UTF8StringIO("")
# Capture stderr
errfile = cStringIO.StringIO()
errfile = UTF8StringIO()
if outfile is None:
# If no output file, capture stdout with stderr
outfile = errfile
with stdio_wrapper(infile, outfile, errfile) as s:
try:
nilmdb.cmdline.Cmdline(shlex.split(arg_string)).run()
# shlex doesn't support Unicode very well. Encode the
# string as UTF-8 explicitly before splitting.
args = shlex.split(arg_string.encode('utf-8'))
nilmdb.cmdline.Cmdline(args).run()
sys.exit(0)
except SystemExit as e:
exitcode = e.code
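The UTF-8 encoding before shlex.split() above matters because, on the Python 2 versions targeted here, shlex could mangle unicode input containing non-ASCII characters; splitting the UTF-8 byte string avoids that. A sketch of the pattern:

import shlex
cmd = u"metadata /d\xfcsseldorf/raw --set \u03b1=beta"
args = shlex.split(cmd.encode("utf-8"))   # tokens come back as UTF-8 byte strings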
@@ -83,14 +98,24 @@ class TestCmdline(object):
self.dump()
eq_(self.exitcode, 0)
def fail(self, arg_string, infile = None, exitcode = None):
def fail(self, arg_string, infile = None,
exitcode = None, require_error = True):
self.run(arg_string, infile)
if exitcode is not None and self.exitcode != exitcode:
# Wrong exit code
self.dump()
eq_(self.exitcode, exitcode)
if self.exitcode == 0:
# Success, when we wanted failure
self.dump()
ne_(self.exitcode, 0)
# Make sure the output contains the word "error" at the
# beginning of a line, but only if an exitcode wasn't
# specified.
if require_error and not re.search("^error",
self.captured, re.MULTILINE):
raise AssertionError("command failed, but output doesn't "
"contain the string 'error'")
def contain(self, checkstring):
in_(checkstring, self.captured)
@@ -120,7 +145,7 @@ class TestCmdline(object):
def dump(self):
printf("-----dump start-----\n%s-----dump end-----\n", self.captured)
def test_cmdline_01_basic(self):
def test_01_basic(self):
# help
self.ok("--help")
@@ -166,14 +191,14 @@ class TestCmdline(object):
self.fail("extract --start 2000-01-01 --start 2001-01-02")
self.contain("duplicated argument")
def test_cmdline_02_info(self):
def test_02_info(self):
self.ok("info")
self.contain("Server URL: http://localhost:12380/")
self.contain("Server version: " + test_server.version)
self.contain("Server database path")
self.contain("Server database size")
def test_cmdline_03_createlist(self):
def test_03_createlist(self):
# Basic stream tests, like those in test_client.
# No streams
@@ -190,6 +215,10 @@ class TestCmdline(object):
# Bad layout type
self.fail("create /newton/prep NoSuchLayout")
self.contain("no such layout")
self.fail("create /newton/prep float32_0")
self.contain("no such layout")
self.fail("create /newton/prep float33_1")
self.contain("no such layout")
# Create a few streams
self.ok("create /newton/zzz/rawnotch RawNotchedData")
@@ -199,7 +228,12 @@ class TestCmdline(object):
# Should not be able to create a stream with another stream as
# its parent
self.fail("create /newton/prep/blah PrepData")
self.contain("error creating table at that path")
self.contain("path is subdir of existing node")
# Should not be able to create a stream at a location that
# has other nodes as children
self.fail("create /newton/zzz PrepData")
self.contain("subdirs of this path already exist")
# Verify we got those 3 streams and they're returned in
# alphabetical order.
@@ -208,10 +242,17 @@ class TestCmdline(object):
"/newton/raw RawData\n"
"/newton/zzz/rawnotch RawNotchedData\n")
# Match just one type or one path
# Match just one type or one path. Also check
# that --path is optional
self.ok("list --path /newton/raw")
self.match("/newton/raw RawData\n")
self.ok("list /newton/raw")
self.match("/newton/raw RawData\n")
self.fail("list -p /newton/raw /newton/raw")
self.contain("too many paths")
self.ok("list --layout RawData")
self.match("/newton/raw RawData\n")
@@ -223,10 +264,17 @@ class TestCmdline(object):
self.ok("list --path *zzz* --layout Raw*")
self.match("/newton/zzz/rawnotch RawNotchedData\n")
self.ok("list *zzz* --layout Raw*")
self.match("/newton/zzz/rawnotch RawNotchedData\n")
self.ok("list --path *zzz* --layout Prep*")
self.match("")
def test_cmdline_04_metadata(self):
# reversed range
self.fail("list /newton/prep --start 2020-01-01 --end 2000-01-01")
self.contain("start is after end")
def test_04_metadata(self):
# Set / get metadata
self.fail("metadata")
self.fail("metadata --get")
@@ -283,7 +331,7 @@ class TestCmdline(object):
self.fail("metadata /newton/nosuchpath")
self.contain("No stream at path /newton/nosuchpath")
def test_cmdline_05_parsetime(self):
def test_05_parsetime(self):
os.environ['TZ'] = "America/New_York"
cmd = nilmdb.cmdline.Cmdline(None)
test = datetime_tz.datetime_tz.now()
@@ -292,30 +340,23 @@ class TestCmdline(object):
eq_(cmd.parse_time("hi there 20120405 1400-0400 testing! 123"), test)
eq_(cmd.parse_time("20120405 1800 UTC"), test)
eq_(cmd.parse_time("20120405 1400-0400 UTC"), test)
for badtime in [ "20120405 1400-9999", "hello", "-", "", "14:00" ]:
with assert_raises(ValueError):
print cmd.parse_time("20120405 1400-9999")
with assert_raises(ValueError):
print cmd.parse_time("hello")
with assert_raises(ValueError):
print cmd.parse_time("-")
with assert_raises(ValueError):
print cmd.parse_time("")
with assert_raises(ValueError):
print cmd.parse_time("14:00")
x = cmd.parse_time(badtime)
eq_(cmd.parse_time("snapshot-20120405-140000.raw.gz"), test)
eq_(cmd.parse_time("prep-20120405T1400"), test)
def test_cmdline_06_insert(self):
def test_06_insert(self):
self.ok("insert --help")
self.fail("insert /foo/bar baz qwer")
self.contain("Error getting stream info")
self.contain("error getting stream info")
self.fail("insert /newton/prep baz qwer")
self.match("Error opening input file baz\n")
self.match("error opening input file baz\n")
self.fail("insert /newton/prep")
self.contain("Error extracting time")
self.contain("error extracting time")
self.fail("insert --start 19801205 /newton/prep 1 2 3 4")
self.contain("--start can only be used with one input file")
@@ -356,7 +397,7 @@ class TestCmdline(object):
os.environ['TZ'] = "UTC"
self.fail("insert --rate 120 /newton/raw "
"tests/data/prep-20120323T1004")
self.contain("Error parsing input data")
self.contain("error parsing input data")
# empty data does nothing
self.ok("insert --rate 120 --start '03/23/2012 06:05:00' /newton/prep "
@@ -365,57 +406,64 @@ class TestCmdline(object):
# bad start time
self.fail("insert --rate 120 --start 'whatever' /newton/prep /dev/null")
def test_cmdline_07_detail(self):
def test_07_detail(self):
# Just count the number of lines, it's probably fine
self.ok("list --detail")
eq_(self.captured.count('\n'), 11)
lines_(self.captured, 8)
self.ok("list --detail --path *prep")
eq_(self.captured.count('\n'), 7)
lines_(self.captured, 4)
self.ok("list --detail --path *prep --start='23 Mar 2012 10:02'")
eq_(self.captured.count('\n'), 5)
lines_(self.captured, 3)
self.ok("list --detail --path *prep --start='23 Mar 2012 10:05'")
eq_(self.captured.count('\n'), 3)
lines_(self.captured, 2)
self.ok("list --detail --path *prep --start='23 Mar 2012 10:05:15'")
eq_(self.captured.count('\n'), 2)
lines_(self.captured, 2)
self.contain("10:05:15.000")
self.ok("list --detail --path *prep --start='23 Mar 2012 10:05:15.50'")
eq_(self.captured.count('\n'), 2)
lines_(self.captured, 2)
self.contain("10:05:15.500")
self.ok("list --detail --path *prep --start='23 Mar 2012 19:05:15.50'")
eq_(self.captured.count('\n'), 2)
lines_(self.captured, 2)
self.contain("no intervals")
self.ok("list --detail --path *prep --start='23 Mar 2012 10:05:15.50'"
+ " --end='23 Mar 2012 10:05:15.50'")
eq_(self.captured.count('\n'), 2)
lines_(self.captured, 2)
self.contain("10:05:15.500")
self.ok("list --detail")
eq_(self.captured.count('\n'), 11)
lines_(self.captured, 8)
def test_cmdline_08_extract(self):
def test_08_extract(self):
# nonexistent stream
self.fail("extract /no/such/foo --start 2000-01-01 --end 2020-01-01")
self.contain("Error getting stream info")
self.contain("error getting stream info")
# empty ranges return an error
# reversed range
self.fail("extract -a /newton/prep --start 2020-01-01 --end 2000-01-01")
self.contain("start is after end")
# empty ranges return error 2
self.fail("extract -a /newton/prep " +
"--start '23 Mar 2012 10:00:30' " +
"--end '23 Mar 2012 10:00:30'", exitcode = 2)
"--end '23 Mar 2012 10:00:30'",
exitcode = 2, require_error = False)
self.contain("no data")
self.fail("extract -a /newton/prep " +
"--start '23 Mar 2012 10:00:30.000001' " +
"--end '23 Mar 2012 10:00:30.000001'", exitcode = 2)
"--end '23 Mar 2012 10:00:30.000001'",
exitcode = 2, require_error = False)
self.contain("no data")
self.fail("extract -a /newton/prep " +
"--start '23 Mar 2022 10:00:30' " +
"--end '23 Mar 2022 10:00:30'", exitcode = 2)
"--end '23 Mar 2022 10:00:30'",
exitcode = 2, require_error = False)
self.contain("no data")
# but are ok if we're just counting results
@@ -450,20 +498,115 @@ class TestCmdline(object):
# all data put in by tests
self.ok("extract -a /newton/prep --start 2000-01-01 --end 2020-01-01")
eq_(self.captured.count('\n'), 43204)
lines_(self.captured, 43204)
self.ok("extract -c /newton/prep --start 2000-01-01 --end 2020-01-01")
self.match("43200\n")
def test_cmdline_09_truncated(self):
def test_09_truncated(self):
# Test truncated responses by overriding the nilmdb max_results
server_stop()
server_start(max_results = 2)
self.ok("list --detail")
eq_(self.captured.count('\n'), 11)
lines_(self.captured, 8)
server_stop()
server_start()
def test_cmdline_10_destroy(self):
def test_10_remove(self):
# Removing data
# Try nonexistent stream
self.fail("remove /no/such/foo --start 2000-01-01 --end 2020-01-01")
self.contain("No stream at path")
self.fail("remove /newton/prep --start 2020-01-01 --end 2000-01-01")
self.contain("start is after end")
# empty ranges return success, backwards ranges return error
self.ok("remove /newton/prep " +
"--start '23 Mar 2012 10:00:30' " +
"--end '23 Mar 2012 10:00:30'")
self.match("")
self.ok("remove /newton/prep " +
"--start '23 Mar 2012 10:00:30.000001' " +
"--end '23 Mar 2012 10:00:30.000001'")
self.match("")
self.ok("remove /newton/prep " +
"--start '23 Mar 2022 10:00:30' " +
"--end '23 Mar 2022 10:00:30'")
self.match("")
# Count variants (-c / --count) also return 0 for empty ranges
self.ok("remove -c /newton/prep " +
"--start '23 Mar 2012 10:00:30' " +
"--end '23 Mar 2012 10:00:30'")
self.match("0\n")
self.ok("remove --count /newton/prep " +
"--start '23 Mar 2012 10:00:30' " +
"--end '23 Mar 2012 10:00:30'")
self.match("0\n")
# Make sure we have the data we expect
self.ok("list --detail /newton/prep")
self.match("/newton/prep PrepData\n" +
" [ Fri, 23 Mar 2012 10:00:00.000000 +0000"
" -> Fri, 23 Mar 2012 10:01:59.991668 +0000 ]\n"
" [ Fri, 23 Mar 2012 10:02:00.000000 +0000"
" -> Fri, 23 Mar 2012 10:03:59.991668 +0000 ]\n"
" [ Fri, 23 Mar 2012 10:04:00.000000 +0000"
" -> Fri, 23 Mar 2012 10:05:59.991668 +0000 ]\n")
# Remove various chunks of prep data and make sure
# they're gone.
self.ok("remove -c /newton/prep " +
"--start '23 Mar 2012 10:00:30' " +
"--end '23 Mar 2012 10:00:40'")
self.match("1200\n")
self.ok("remove -c /newton/prep " +
"--start '23 Mar 2012 10:00:10' " +
"--end '23 Mar 2012 10:00:20'")
self.match("1200\n")
self.ok("remove -c /newton/prep " +
"--start '23 Mar 2012 10:00:05' " +
"--end '23 Mar 2012 10:00:25'")
self.match("1200\n")
self.ok("remove -c /newton/prep " +
"--start '23 Mar 2012 10:03:50' " +
"--end '23 Mar 2012 10:06:50'")
self.match("15600\n")
self.ok("extract -c /newton/prep --start 2000-01-01 --end 2020-01-01")
self.match("24000\n")
# See the missing chunks in list output
self.ok("list --detail /newton/prep")
self.match("/newton/prep PrepData\n" +
" [ Fri, 23 Mar 2012 10:00:00.000000 +0000"
" -> Fri, 23 Mar 2012 10:00:05.000000 +0000 ]\n"
" [ Fri, 23 Mar 2012 10:00:25.000000 +0000"
" -> Fri, 23 Mar 2012 10:00:30.000000 +0000 ]\n"
" [ Fri, 23 Mar 2012 10:00:40.000000 +0000"
" -> Fri, 23 Mar 2012 10:01:59.991668 +0000 ]\n"
" [ Fri, 23 Mar 2012 10:02:00.000000 +0000"
" -> Fri, 23 Mar 2012 10:03:50.000000 +0000 ]\n")
# Remove all data, verify it's missing
self.ok("remove /newton/prep --start 2000-01-01 --end 2020-01-01")
self.match("") # no count requested this time
self.ok("list --detail /newton/prep")
self.match("/newton/prep PrepData\n" +
" (no intervals)\n")
# Reinsert some data, to verify that no overlaps with deleted
# data are reported
os.environ['TZ'] = "UTC"
self.ok("insert --rate 120 /newton/prep "
"tests/data/prep-20120323T1000 "
"tests/data/prep-20120323T1002")
def test_11_destroy(self):
# Delete records
self.ok("destroy --help")
@@ -484,7 +627,7 @@ class TestCmdline(object):
# Notice how they're not empty
self.ok("list --detail")
eq_(self.captured.count('\n'), 11)
lines_(self.captured, 7)
# Delete some
self.ok("destroy /newton/prep")
@@ -513,3 +656,167 @@ class TestCmdline(object):
# Make sure it was created empty
self.ok("list --detail --path " + path)
self.contain("(no intervals)")
def test_12_unicode(self):
# Unicode paths.
self.ok("destroy /newton/asdf/qwer")
self.ok("destroy /newton/prep")
self.ok("destroy /newton/raw")
self.ok("destroy /newton/zzz")
self.ok(u"create /düsseldorf/raw uint16_6")
self.ok("list --detail")
self.contain(u"/düsseldorf/raw uint16_6")
self.contain("(no intervals)")
# Unicode metadata
self.ok(u"metadata /düsseldorf/raw --set α=beta 'γ'")
self.ok(u"metadata /düsseldorf/raw --update 'α=β ε τ α'")
self.ok(u"metadata /düsseldorf/raw")
self.match(u"α=β ε τ α\nγ\n")
self.ok(u"destroy /düsseldorf/raw")
def test_13_files(self):
# Test BulkData's ability to split into multiple files,
# by forcing the file size to be really small.
server_stop()
server_start(bulkdata_args = { "file_size" : 920, # 23 rows per file
"files_per_dir" : 3 })
# Fill data
self.ok("create /newton/prep float32_8")
os.environ['TZ'] = "UTC"
with open("tests/data/prep-20120323T1004-timestamped") as input:
self.ok("insert --none /newton/prep", input)
# Extract it
self.ok("extract /newton/prep --start '2000-01-01' " +
"--end '2012-03-23 10:04:01'")
lines_(self.captured, 120)
self.ok("extract /newton/prep --start '2000-01-01' " +
"--end '2022-03-23 10:04:01'")
lines_(self.captured, 14400)
# Make sure there were lots of files generated in the database
# dir
nfiles = 0
for (dirpath, dirnames, filenames) in os.walk(testdb):
nfiles += len(filenames)
assert(nfiles > 500)
# Make sure we can restart the server with a different file
# size and have it still work
server_stop()
server_start()
self.ok("extract /newton/prep --start '2000-01-01' " +
"--end '2022-03-23 10:04:01'")
lines_(self.captured, 14400)
# Now recreate the data one more time and make sure there are
# fewer files.
self.ok("destroy /newton/prep")
self.fail("destroy /newton/prep") # already destroyed
self.ok("create /newton/prep float32_8")
os.environ['TZ'] = "UTC"
with open("tests/data/prep-20120323T1004-timestamped") as input:
self.ok("insert --none /newton/prep", input)
nfiles = 0
for (dirpath, dirnames, filenames) in os.walk(testdb):
nfiles += len(filenames)
lt_(nfiles, 50)
self.ok("destroy /newton/prep") # destroy again
def test_14_remove_files(self):
# Test BulkData's ability to remove when data is split into
# multiple files. Should be a fairly comprehensive test of
# remove functionality.
server_stop()
server_start(bulkdata_args = { "file_size" : 920, # 23 rows per file
"files_per_dir" : 3 })
# Insert data. Just for fun, insert out of order
self.ok("create /newton/prep PrepData")
os.environ['TZ'] = "UTC"
self.ok("insert --rate 120 /newton/prep "
"tests/data/prep-20120323T1002 "
"tests/data/prep-20120323T1000")
# Should take up about 2.8 MB here (including directory entries)
du_before = nilmdb.utils.diskusage.du_bytes(testdb)
# Make sure we have the data we expect
self.ok("list --detail")
self.match("/newton/prep PrepData\n" +
" [ Fri, 23 Mar 2012 10:00:00.000000 +0000"
" -> Fri, 23 Mar 2012 10:01:59.991668 +0000 ]\n"
" [ Fri, 23 Mar 2012 10:02:00.000000 +0000"
" -> Fri, 23 Mar 2012 10:03:59.991668 +0000 ]\n")
# Remove various chunks of prep data and make sure
# they're gone.
self.ok("extract -c /newton/prep --start 2000-01-01 --end 2020-01-01")
self.match("28800\n")
self.ok("remove -c /newton/prep " +
"--start '23 Mar 2012 10:00:30' " +
"--end '23 Mar 2012 10:03:30'")
self.match("21600\n")
self.ok("remove -c /newton/prep " +
"--start '23 Mar 2012 10:00:10' " +
"--end '23 Mar 2012 10:00:20'")
self.match("1200\n")
self.ok("remove -c /newton/prep " +
"--start '23 Mar 2012 10:00:05' " +
"--end '23 Mar 2012 10:00:25'")
self.match("1200\n")
self.ok("remove -c /newton/prep " +
"--start '23 Mar 2012 10:03:50' " +
"--end '23 Mar 2012 10:06:50'")
self.match("1200\n")
self.ok("extract -c /newton/prep --start 2000-01-01 --end 2020-01-01")
self.match("3600\n")
# See the missing chunks in list output
self.ok("list --detail")
self.match("/newton/prep PrepData\n" +
" [ Fri, 23 Mar 2012 10:00:00.000000 +0000"
" -> Fri, 23 Mar 2012 10:00:05.000000 +0000 ]\n"
" [ Fri, 23 Mar 2012 10:00:25.000000 +0000"
" -> Fri, 23 Mar 2012 10:00:30.000000 +0000 ]\n"
" [ Fri, 23 Mar 2012 10:03:30.000000 +0000"
" -> Fri, 23 Mar 2012 10:03:50.000000 +0000 ]\n")
# We have 1/8 of the data that we had before, so the file size
# should have dropped below 1/4 of what it used to be
du_after = nilmdb.utils.diskusage.du_bytes(testdb)
lt_(du_after, (du_before / 4))
# Remove anything that came from the 10:02 data file
self.ok("remove /newton/prep " +
"--start '23 Mar 2012 10:02:00' --end '2020-01-01'")
# Re-insert 19 lines from that file, then remove them again.
# With the specific file_size above, this will cause the last
# file in the bulk data storage to be exactly file_size large,
# so removing the data should also remove that last file.
self.ok("insert --rate 120 /newton/prep " +
"tests/data/prep-20120323T1002-first19lines")
self.ok("remove /newton/prep " +
"--start '23 Mar 2012 10:02:00' --end '2020-01-01'")
# Shut down and restart server, to force nrows to get refreshed.
server_stop()
server_start()
# Re-add the full 10:02 data file. This tests adding new data once
# we removed data near the end.
self.ok("insert --rate 120 /newton/prep tests/data/prep-20120323T1002")
# See if we can extract it all
self.ok("extract /newton/prep --start 2000-01-01 --end 2020-01-01")
lines_(self.captured, 15600)

tests/test_interval.py

@@ -1,7 +1,7 @@
# -*- coding: utf-8 -*-
import nilmdb
from nilmdb.printf import *
from nilmdb.utils.printf import *
import datetime_tz
from nose.tools import *
@@ -10,13 +10,13 @@ import itertools
from nilmdb.interval import Interval, DBInterval, IntervalSet, IntervalError
from test_helpers import *
from testutil.helpers import *
import unittest
# set to False to skip live renders
do_live_renders = False
def render(iset, description = "", live = True):
import renderdot
import testutil.renderdot as renderdot
r = renderdot.RBTreeRenderer(iset.tree)
return r.render(description, live and do_live_renders)
@@ -137,6 +137,15 @@ class TestInterval:
x = iseta != 3
ne_(IntervalSet(a), IntervalSet(b))
# Note that assignment makes a new reference (not a copy)
isetd = IntervalSet(isetb)
isete = isetd
eq_(isetd, isetb)
eq_(isetd, isete)
isetd -= a
ne_(isetd, isetb)
eq_(isetd, isete)
# test iterator
for interval in iseta:
pass
@@ -158,11 +167,18 @@ class TestInterval:
iset = IntervalSet(a)
iset += IntervalSet(b)
eq_(iset, IntervalSet([a, b]))
iset = IntervalSet(a)
iset += b
eq_(iset, IntervalSet([a, b]))
iset = IntervalSet(a)
iset.iadd_nocheck(b)
eq_(iset, IntervalSet([a, b]))
iset = IntervalSet(a) + IntervalSet(b)
eq_(iset, IntervalSet([a, b]))
iset = IntervalSet(b) + a
eq_(iset, IntervalSet([a, b]))
@@ -329,14 +345,15 @@ class TestIntervalSpeed:
def test_interval_speed(self):
import yappi
import time
import aplotter
import testutil.aplotter as aplotter
import random
import math
print
yappi.start()
speeds = {}
for j in [ 2**x for x in range(5,20) ]:
limit = 10 # was 20
for j in [ 2**x for x in range(5,limit) ]:
start = time.time()
iset = IntervalSet()
for i in random.sample(xrange(j),j):

tests/test_iteratorizer.py

@@ -1,5 +1,5 @@
import nilmdb
from nilmdb.printf import *
from nilmdb.utils.printf import *
import nose
from nose.tools import *
@@ -7,9 +7,7 @@ from nose.tools import assert_raises
import threading
import time
from test_helpers import *
import nilmdb.iteratorizer
from testutil.helpers import *
def func_with_callback(a, b, callback):
callback(a)
@@ -27,7 +25,8 @@ class TestIteratorizer(object):
eq_(self.result, "123")
# Now make it an iterator
it = nilmdb.iteratorizer.Iteratorizer(lambda x:
it = nilmdb.utils.Iteratorizer(
lambda x:
func_with_callback(1, 2, x))
result = ""
for i in it:
@@ -35,7 +34,8 @@ class TestIteratorizer(object):
eq_(result, "123")
# Make sure things work when an exception occurs
it = nilmdb.iteratorizer.Iteratorizer(lambda x:
it = nilmdb.utils.Iteratorizer(
lambda x:
func_with_callback(1, "a", x))
result = ""
with assert_raises(TypeError) as e:
@@ -48,7 +48,8 @@ class TestIteratorizer(object):
# itself. This doesn't have a particular result in the test,
# but gains coverage.
def foo():
it = nilmdb.iteratorizer.Iteratorizer(lambda x:
it = nilmdb.utils.Iteratorizer(
lambda x:
func_with_callback(1, 2, x))
it.next()
foo()

tests/test_layout.py

@@ -2,7 +2,7 @@
import nilmdb
from nilmdb.printf import *
from nilmdb.utils.printf import *
from nose.tools import *
from nose.tools import assert_raises
@@ -20,7 +20,7 @@ import cStringIO
import random
import unittest
from test_helpers import *
from testutil.helpers import *
from nilmdb.layout import *
@@ -28,9 +28,13 @@ class TestLayouts(object):
# Some nilmdb.layout tests. Not complete, just fills in missing
# coverage.
def test_layouts(self):
x = nilmdb.layout.get_named("PrepData").description()
y = nilmdb.layout.get_named("float32_8").description()
eq_(repr(x), repr(y))
x = nilmdb.layout.get_named("PrepData")
y = nilmdb.layout.get_named("float32_8")
eq_(x.count, y.count)
eq_(x.datatype, y.datatype)
y = nilmdb.layout.get_named("float32_7")
ne_(x.count, y.count)
eq_(x.datatype, y.datatype)
def test_parsing(self):
self.real_t_parsing("PrepData", "RawData", "RawNotchedData")

tests/test_lrucache.py Normal file

@@ -0,0 +1,83 @@
import nilmdb
from nilmdb.utils.printf import *
import nose
from nose.tools import *
from nose.tools import assert_raises
import threading
import time
import inspect
from testutil.helpers import *
@nilmdb.utils.lru_cache(size = 3)
def foo1(n):
return n
@nilmdb.utils.lru_cache(size = 5)
def foo2(n):
return n
def foo3d(n):
foo3d.destructed.append(n)
foo3d.destructed = []
@nilmdb.utils.lru_cache(size = 3, onremove = foo3d)
def foo3(n):
return n
class Foo:
def __init__(self):
self.calls = 0
@nilmdb.utils.lru_cache(size = 3, keys = slice(1, 2))
def foo(self, n, **kwargs):
self.calls += 1
class TestLRUCache(object):
def test(self):
[ foo1(n) for n in [ 1, 2, 3, 1, 2, 3, 1, 2, 3 ] ]
eq_(foo1.cache_info(), (6, 3))
[ foo1(n) for n in [ 1, 2, 3, 1, 2, 3, 1, 2, 3 ] ]
eq_(foo1.cache_info(), (15, 3))
[ foo1(n) for n in [ 4, 2, 1, 1, 4 ] ]
eq_(foo1.cache_info(), (18, 5))
[ foo2(n) for n in [ 1, 2, 3, 1, 2, 3, 1, 2, 3 ] ]
eq_(foo2.cache_info(), (6, 3))
[ foo2(n) for n in [ 1, 2, 3, 1, 2, 3, 1, 2, 3 ] ]
eq_(foo2.cache_info(), (15, 3))
[ foo2(n) for n in [ 4, 2, 1, 1, 4 ] ]
eq_(foo2.cache_info(), (19, 4))
[ foo3(n) for n in [ 1, 2, 3, 1, 2, 3, 1, 2, 3 ] ]
eq_(foo3.cache_info(), (6, 3))
[ foo3(n) for n in [ 1, 2, 3, 1, 2, 3, 1, 2, 3 ] ]
eq_(foo3.cache_info(), (15, 3))
[ foo3(n) for n in [ 4, 2, 1, 1, 4 ] ]
eq_(foo3.cache_info(), (18, 5))
eq_(foo3d.destructed, [1, 3])
with assert_raises(KeyError):
foo3.cache_remove(1,2,3)
foo3.cache_remove(1)
eq_(foo3d.destructed, [1, 3, 1])
foo3.cache_remove_all()
eq_(foo3d.destructed, [1, 3, 1, 2, 4 ])
foo = Foo()
foo.foo(5)
foo.foo(6)
foo.foo(7)
foo.foo(5)
eq_(foo.calls, 3)
# Can't handle keyword arguments right now
with assert_raises(NotImplementedError):
foo.foo(3, asdf = 7)
# Verify that argspecs were maintained
eq_(inspect.getargspec(foo1),
inspect.ArgSpec(args=['n'],
varargs=None, keywords=None, defaults=None))
eq_(inspect.getargspec(foo.foo),
inspect.ArgSpec(args=['self', 'n'],
varargs=None, keywords="kwargs", defaults=None))

tests/test_mustclose.py Normal file

@@ -0,0 +1,110 @@
import nilmdb
from nilmdb.utils.printf import *
import nose
from nose.tools import *
from nose.tools import assert_raises
from testutil.helpers import *
import sys
import cStringIO
import gc
import inspect
err = cStringIO.StringIO()
@nilmdb.utils.must_close(errorfile = err)
class Foo:
def __init__(self, arg):
fprintf(err, "Init %s\n", arg)
def __del__(self):
fprintf(err, "Deleting\n")
def close(self):
fprintf(err, "Closing\n")
@nilmdb.utils.must_close(errorfile = err, wrap_verify = True)
class Bar:
def __init__(self):
fprintf(err, "Init\n")
def __del__(self):
fprintf(err, "Deleting\n")
def close(self):
fprintf(err, "Closing\n")
def blah(self, arg):
fprintf(err, "Blah %s\n", arg)
@nilmdb.utils.must_close(errorfile = err)
class Baz:
pass
class TestMustClose(object):
def test(self):
# Note: this test might fail if the Python interpreter doesn't
# garbage collect the object (and call its __del__ function)
# right after a "del x".
# Trigger error
err.truncate()
x = Foo("hi")
# Verify that the arg spec was maintained
eq_(inspect.getargspec(x.__init__),
inspect.ArgSpec(args = ['self', 'arg'],
varargs = None, keywords = None, defaults = None))
del x
gc.collect()
eq_(err.getvalue(),
"Init hi\n"
"error: Foo.close() wasn't called!\n"
"Deleting\n")
# No error
err.truncate(0)
y = Foo("bye")
y.close()
del y
gc.collect()
eq_(err.getvalue(),
"Init bye\n"
"Closing\n"
"Deleting\n")
# Verify function calls when wrap_verify is True
err.truncate(0)
z = Bar()
eq_(inspect.getargspec(z.blah),
inspect.ArgSpec(args = ['self', 'arg'],
varargs = None, keywords = None, defaults = None))
z.blah("boo")
z.close()
with assert_raises(AssertionError) as e:
z.blah("hello")
in_("called <function blah at 0x", str(e.exception))
in_("> after close", str(e.exception))
# Since the most recent assertion references 'z',
# we need to raise another assertion here so that
# 'z' will get properly deleted.
with assert_raises(AssertionError):
raise AssertionError()
del z
gc.collect()
eq_(err.getvalue(),
"Init\n"
"Blah boo\n"
"Closing\n"
"Deleting\n")
# Class with missing methods
err.truncate(0)
w = Baz()
w.close()
del w
eq_(err.getvalue(), "")

tests/test_nilmdb.py

@@ -14,6 +14,7 @@ import urllib2
from urllib2 import urlopen, HTTPError
import Queue
import cStringIO
import time
testdb = "tests/testdb"
@@ -21,7 +22,7 @@ testdb = "tests/testdb"
#def cleanup():
# os.unlink(testdb)
from test_helpers import *
from testutil.helpers import *
class Test00Nilmdb(object): # named 00 so it runs first
def test_NilmDB(self):
@@ -39,8 +40,8 @@ class Test00Nilmdb(object): # named 00 so it runs first
capture = cStringIO.StringIO()
old = sys.stdout
sys.stdout = capture
with nilmdb.Timer("test"):
nilmdb.timer.time.sleep(0.01)
with nilmdb.utils.Timer("test"):
time.sleep(0.01)
sys.stdout = old
in_("test: ", capture.getvalue())
@@ -69,12 +70,14 @@ class Test00Nilmdb(object): # named 00 so it runs first
eq_(db.stream_list(layout="RawData"), [ ["/newton/raw", "RawData"] ])
eq_(db.stream_list(path="/newton/raw"), [ ["/newton/raw", "RawData"] ])
# Verify that columns were made right
eq_(len(db.h5file.getNode("/newton/prep").cols), 9)
eq_(len(db.h5file.getNode("/newton/raw").cols), 7)
eq_(len(db.h5file.getNode("/newton/zzz/rawnotch").cols), 10)
assert(not db.h5file.getNode("/newton/prep").colindexed["timestamp"])
assert(not db.h5file.getNode("/newton/prep").colindexed["c1"])
# Verify that columns were made right (pytables specific)
if "h5file" in db.data.__dict__:
h5file = db.data.h5file
eq_(len(h5file.getNode("/newton/prep").cols), 9)
eq_(len(h5file.getNode("/newton/raw").cols), 7)
eq_(len(h5file.getNode("/newton/zzz/rawnotch").cols), 10)
assert(not h5file.getNode("/newton/prep").colindexed["timestamp"])
assert(not h5file.getNode("/newton/prep").colindexed["c1"])
# Set / get metadata
eq_(db.stream_get_metadata("/newton/prep"), {})
@@ -196,6 +199,6 @@ class TestServer(object):
# GET instead of POST (no body)
# (actual POST test is done by client code)
with assert_raises(HTTPError) as e:
getjson("/stream/insert?path=/newton/prep")
getjson("/stream/insert?path=/newton/prep&start=0&end=0")
eq_(e.exception.code, 400)

tests/test_printf.py

@@ -1,12 +1,12 @@
import nilmdb
from nilmdb.printf import *
from nilmdb.utils.printf import *
from nose.tools import *
from nose.tools import assert_raises
from cStringIO import StringIO
import sys
from test_helpers import *
from testutil.helpers import *
class TestPrintf(object):
def test_printf(self):

tests/test_rbtree.py

@@ -1,20 +1,20 @@
# -*- coding: utf-8 -*-
import nilmdb
from nilmdb.printf import *
from nilmdb.utils.printf import *
from nose.tools import *
from nose.tools import assert_raises
from nilmdb.rbtree import RBTree, RBNode
from test_helpers import *
from testutil.helpers import *
import unittest
# set to False to skip live renders
do_live_renders = False
def render(tree, description = "", live = True):
import renderdot
import testutil.renderdot as renderdot
r = renderdot.RBTreeRenderer(tree)
return r.render(description, live and do_live_renders)

tests/test_serializer.py

@@ -1,5 +1,5 @@
import nilmdb
from nilmdb.printf import *
from nilmdb.utils.printf import *
import nose
from nose.tools import *
@@ -7,7 +7,7 @@ from nose.tools import assert_raises
import threading
import time
from test_helpers import *
from testutil.helpers import *
#raise nose.exc.SkipTest("Skip these")
@@ -57,7 +57,7 @@ class TestUnserialized(Base):
class TestSerialized(Base):
def setUp(self):
self.realfoo = Foo()
self.foo = nilmdb.serializer.WrapObject(self.realfoo)
self.foo = nilmdb.utils.Serializer(self.realfoo)
def tearDown(self):
del self.foo

tests/test_timestamper.py

@@ -1,5 +1,5 @@
import nilmdb
from nilmdb.printf import *
from nilmdb.utils.printf import *
import datetime_tz
@@ -9,7 +9,7 @@ import os
import sys
import cStringIO
from test_helpers import *
from testutil.helpers import *
class TestTimestamper(object):

tests/testutil/__init__.py Normal file

@@ -0,0 +1 @@
# empty

tests/testutil/helpers.py

@@ -12,6 +12,10 @@ def eq_(a, b):
if not a == b:
raise AssertionError("%s != %s" % (myrepr(a), myrepr(b)))
def lt_(a, b):
if not a < b:
raise AssertionError("%s is not less than %s" % (myrepr(a), myrepr(b)))
def in_(a, b):
if a not in b:
raise AssertionError("%s not in %s" % (myrepr(a), myrepr(b)))
@@ -20,6 +24,14 @@ def ne_(a, b):
if not a != b:
raise AssertionError("unexpected %s == %s" % (myrepr(a), myrepr(b)))
def lines_(a, n):
l = a.count('\n')
if not l == n:
if len(a) > 5000:
a = a[0:5000] + " ... truncated"
raise AssertionError("wanted %d lines, got %d in output: '%s'"
% (n, l, a))
def recursive_unlink(path):
try:
shutil.rmtree(path)

tests/testutil/renderdot.py

@@ -13,7 +13,7 @@ class Renderer(object):
# Rendering
def __render_dot_node(self, node, max_depth = 20):
from nilmdb.printf import sprintf
from nilmdb.utils.printf import sprintf
"""Render a single node and its children into a dot graph fragment"""
if max_depth == 0:
return ""


@@ -1,21 +1,22 @@
./nilmtool.py destroy /bpnilm/2/raw
./nilmtool.py create /bpnilm/2/raw RawData
if true; then
if false; then
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s 20110513-110000 -r 8000 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s 20110513-120001 -r 8000 /bpnilm/2/raw
else
for i in $(seq 2000 2050); do
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-010001 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-020002 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-030003 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-040004 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-050005 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-060006 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-070007 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-080008 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-090009 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-100010 /bpnilm/2/raw
# 170 hours, about 98 gigs uncompressed:
for i in $(seq 2000 2016); do
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-010001 -r 8000 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-020002 -r 8000 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-030003 -r 8000 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-040004 -r 8000 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-050005 -r 8000 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-060006 -r 8000 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-070007 -r 8000 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-080008 -r 8000 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-090009 -r 8000 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-100010 -r 8000 /bpnilm/2/raw
done
fi