Compare commits


103 Commits

Author SHA1 Message Date
0c1a1d2388 Fix nilmdb-server script 2013-02-28 18:53:06 -05:00
e3f335dfe5 Move time parsing from cmdline into nilmdb.utils.time 2013-02-28 17:09:26 -05:00
7a191c0ebb Fix versioneer to update versions on install 2013-02-28 14:50:53 -05:00
55bf11e393 Fix error when pyximport is too old 2013-02-26 22:21:23 -05:00
e90dcd10f3 Update README and setup.py with python-requests dependency 2013-02-26 22:00:42 -05:00
7d44f4eaa0 Cleanup Makefile; make tests run through setup.py when outside emacs 2013-02-26 22:00:42 -05:00
f541432d44 Merge branch 'requests' 2013-02-26 21:59:15 -05:00
aa4e32f78a Merge branch 'curl-multi' 2013-02-26 21:59:03 -05:00
2bc1416c00 Merge branch 'fixups' 2013-02-26 21:58:55 -05:00
68bbbf757d Remove nilmdb.utils.urllib
python-requests seems to handle UTF-8 just fine.
2013-02-26 19:46:22 -05:00
3df96fdfdd Reorder code 2013-02-26 19:41:55 -05:00
740ab76eaf Re-add persistent connection test for Requests based httpclient 2013-02-26 19:41:27 -05:00
ce13a47fea Save full response object for tests 2013-02-26 17:45:41 -05:00
50a4a60786 Replace pyCurl with Requests
Only tested with v1.1.0.  It's not clear how well older versions will
work.
2013-02-26 17:45:40 -05:00
14afa02db6 Temporarily remove curl-specific keepalive tests 2013-02-26 17:45:40 -05:00
cc990d6ce4 Test persistent connections 2013-02-26 13:41:40 -05:00
0f5162e0c0 Always use the curl multi interface
.. even for non-generator requests
2013-02-26 13:39:33 -05:00
b26cd52f8c Work around curl multi bug 2013-02-26 13:38:42 -05:00
236d925a1d Make sure we use POST when requested, even if the body is empty 2013-02-25 21:05:01 -05:00
a4a4bc61ba Switch to using pycurl.Multi instead of Iteratorizer 2013-02-25 21:05:01 -05:00
3d82888580 Enforce method types, and require POST for actions that change things.
This is a pretty big change that will render existing clients unable
to modify the database, but it's important that we use POST or PUT
instead of GET for anything that may change state, in case this
is ever put behind a cache.
2013-02-25 21:05:01 -05:00
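
    As a rough illustration of the enforcement described above (a sketch only, not nilmdb's actual decorator), a CherryPy handler can refuse state-changing requests that arrive as GET:

    # Sketch: require POST/PUT for handlers that modify state.
    # Hypothetical handler; nilmdb's real server uses its own decorators.
    import cherrypy

    def require_post(func):
        def wrapper(*args, **kwargs):
            if cherrypy.request.method not in ("POST", "PUT"):
                raise cherrypy.HTTPError(405, "Method Not Allowed")
            return func(*args, **kwargs)
        return wrapper

    class Root(object):
        @cherrypy.expose
        @require_post
        def stream_destroy(self, path):
            # ... destroy the stream; only reachable via POST or PUT ...
            return ""
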
749b878904 Add an explicit lock to httpclient's public methods
This is to prevent possible reentrancy problems.
2013-02-25 18:06:00 -05:00
f396e3934c Remove cherrypy version check
Dependencies should be handled by installation, not at runtime.
2013-02-25 16:50:19 -05:00
dd7594b5fa Fix issue where PUT responses were being dropped
PUTs generate a "HTTP/1.1 100 Continue" response before the
"HTTP/1.1 200 OK" response, and so we were mistakenly picking up
the 100 status code and not returning any data.  Improve the
header callback to correctly process any number of status codes.
2013-02-23 17:51:59 -05:00
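
    A hedged sketch of the header-callback behaviour described above, assuming the pycurl-based client of that era: keep the most recent status line rather than the first, so an interim "100 Continue" does not mask the final "200 OK".

    # Sketch only: track the latest HTTP status line seen by the callback.
    import pycurl

    last_status = {"code": None}

    def header_callback(line):
        if isinstance(line, bytes):
            line = line.decode("iso-8859-1")
        if line.startswith("HTTP/"):
            parts = line.split(" ", 2)
            if len(parts) >= 2 and parts[1].isdigit():
                # Overwrite any earlier interim status (e.g. 100 Continue).
                last_status["code"] = int(parts[1])

    curl = pycurl.Curl()
    curl.setopt(pycurl.URL, "http://localhost/stream/insert")  # placeholder URL
    curl.setopt(pycurl.HEADERFUNCTION, header_callback)
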
4ac1beee6d layout: allow zero and negative timestamps in parser 2013-02-23 16:58:49 -05:00
8c0ce736d8 Disable use of signals in Curl
Various places suggest that this is needed for better thread-safety,
and the only drawback is that some systems cannot timeout properly on
DNS lookups.
2013-02-23 16:15:28 -05:00
8858c9426f Fix error message text in nilmdb.server.Server 2013-02-23 16:13:47 -05:00
9123ccb583 Merge branch 'decorator-work' 2013-02-23 14:38:36 -05:00
5dce851bef Merge branch 'client-insert-context' 2013-02-23 14:37:59 -05:00
5b0441de6b Give serializer and iteratorizer threads names 2013-02-23 14:28:37 -05:00
317c53ab6f Improve serializer_proxy and verify_thread_proxy
These functions can now take an object or a type (class).

If given an object, they will wrap subsequent calls to that object.
If given a type, they will return an object that can be instantiated
to create a new object, and all calls including __init__ will be
covered by the serialization or thread verification.
2013-02-23 14:28:37 -05:00
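
    A simplified sketch of the object-vs-type dispatch described above (wrap_call stands in for whatever performs the serialized or thread-checked call; this is not the real nilmdb.utils code):

    import inspect

    def wrapping_proxy(obj_or_type, wrap_call):
        # wrap_call(func, *args, **kwargs) performs the actual call, e.g.
        # inside a serializing worker thread or with thread-ownership checks.
        def wrap_object(obj):
            class Proxy(object):
                def __getattr__(self, name):
                    attr = getattr(obj, name)
                    if not callable(attr):
                        return attr
                    def method(*args, **kwargs):
                        return wrap_call(attr, *args, **kwargs)
                    return method
            return Proxy()

        if inspect.isclass(obj_or_type):
            # Given a type: return a factory so that __init__ is also
            # routed through wrap_call, as the commit message describes.
            def factory(*args, **kwargs):
                return wrap_object(wrap_call(obj_or_type, *args, **kwargs))
            return factory
        return wrap_object(obj_or_type)
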
7db4411462 Cleanup nilmdb.utils.must_close a bit 2013-02-23 11:28:03 -05:00
422317850e Replace threadsafety class decorator version, add explicit proxy version
Like the serializer changes, the class decorator was too fragile.
2013-02-23 11:25:40 -05:00
965537d8cb Implement verify_thread_safety to check for unsafe access patterns
Occasional segfaults may be the result of performing thread-unsafe
operations.  This class decorator verifies that all of its methods
are called in a thread-safe manner.

It can separately warn about:
- two threads calling methods in a function (the kind of thing sqlite
  doesn't like)
- recursion
- concurrency (two different threads running functions at the same time)
2013-02-23 11:25:02 -05:00
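
    A minimal sketch of that kind of check (illustrative only, not the nilmdb.utils implementation): remember the first calling thread and warn on calls from other threads, or while another call is still in progress.

    import threading
    import warnings

    class ThreadChecker(object):
        def __init__(self, name):
            self.name = name
            self.owner = None            # first thread that called in
            self.active = 0              # calls currently in progress
            self.lock = threading.Lock()

        def wrap(self, func):
            def wrapper(*args, **kwargs):
                me = threading.current_thread()
                with self.lock:
                    if self.owner is None:
                        self.owner = me
                    elif self.owner is not me:
                        warnings.warn("%s: call from %s; owner is %s" %
                                      (self.name, me.name, self.owner.name))
                    if self.active:
                        warnings.warn("%s: recursive or concurrent call" %
                                      self.name)
                    self.active += 1
                try:
                    return func(*args, **kwargs)
                finally:
                    with self.lock:
                        self.active -= 1
            return wrapper
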
0dcdec5949 Turn on sqlite thread safety checks -- serializer should fully protect it 2013-02-23 11:25:01 -05:00
7fce305a1d Make server check that the db object has been wrapped in a serializer
It's only the server that calls it in multiple threads.
2013-02-23 11:25:01 -05:00
dfbbe23512 Switch to explicitly wrapping nilmdb objects in a serializer_proxy
This is quite a bit simpler than the class decorator method, so it
may be more reliable.
2013-02-23 11:23:54 -05:00
7761a91242 Remove class decorator version of the serializer; it's too fragile 2013-02-23 11:23:54 -05:00
9b06e46bf1 Add back a proxy version of the Serializer, which is much simpler. 2013-02-23 11:23:54 -05:00
171e6f1871 Replace "serializer" function with a "serialized" decorator
This decorator makes a class always be serialized, including its
instantiation, in a separate thread.  This is an improvement over
the old Serializer() object wrapper, which didn't put the
instantiation into the new thread.
2013-02-23 11:23:54 -05:00
1431e41d16 Allow inserting empty intervals in the database, and add tests for it.
Previously, we could get empty intervals anyway by having a non-empty
interval and removing a smaller interval around each piece of data.
Turns out that empty intervals are OK and needed in some situations,
so explicitly allow and test for it.
2013-02-21 14:07:35 -05:00
a49c655816 Strictly enforce (start < end) for all intervals.
Previously, we allowed start == end, but this doesn't make sense with
half-open intervals.
2013-02-21 14:06:40 -05:00
30e3ffc0e9 Fix check for interval ends to be None, so that zero doesn't confuse it 2013-02-21 12:42:33 -05:00
db7211c3a9 Have server verify that start <= end before creating intervals
Also rename _fill_in_limits to _check_user_times
2013-02-21 12:38:51 -05:00
c6d57cf5c3 Fix errors with calculating limits when start==end==None
This also has the effect of now handling negative timestamps
correctly.
2013-02-19 19:27:06 -05:00
ca5253ddee Fix and test stream_count 2013-02-19 18:26:44 -05:00
e19da84b2e server: always return None instead of sometimes returning "ok"
Previously some functions returned the string "ok".
2013-02-19 18:26:44 -05:00
3e8e3542fd Test for detecting nested HTTP requests 2013-02-19 18:26:44 -05:00
2f7365412d client: detect and give a more clear error when HTTP requests are nested 2013-02-19 17:20:07 -05:00
bba9ad131e Add test for client.stream_insert_context 2013-02-19 17:19:45 -05:00
ee24380d1f Replace duplicated URL in tests with a variable 2013-02-19 15:27:51 -05:00
bfcd91acf8 client tests: renumber 2013-02-19 15:25:34 -05:00
d97291d4d3 client: Use .stream_insert_block from within .stream_insert_context
Avoids duplicating code.
2013-02-19 15:25:01 -05:00
a61fbbcf45 Big rework of client stream_insert_context
Now supports these operations:
  ctx.insert_line()
  ctx.insert_iter()
  ctx.finalize() (end the current contiguous interval, so a new one
                  can be started with a gap)
  ctx.update_end() (update ending timestamp before finalizing interval)
  ctx.update_start() (update starting timestamp for new interval)
2013-02-18 18:06:03 -05:00
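
    The corresponding client code appears in nilmdb/client/client.py later in this diff; a short usage sketch of the operations listed above (URL, path, and values are made up, and the stream is assumed to already exist with a matching layout):

    import nilmdb.client

    client = nilmdb.client.Client("http://localhost/nilmdb/")    # placeholder URL
    with client.stream_insert_context("/test/example", start = 1000.0) as ctx:
        ctx.insert_line("1000.0 1 2 3 4\n")
        ctx.insert_iter(["1001.0 1 2 3 4\n",
                         "1002.0 1 2 3 4\n"])
        ctx.update_end(1003.0)    # explicit end for this contiguous interval
        ctx.finalize()            # close it; later inserts start a new interval
        ctx.update_start(2000.0)  # next interval begins here, leaving a gap
        ctx.insert_line("2000.0 5 6 7 8\n")
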
5adc8fd0a7 Remove nilmdb.utils.misc.pairwise, as it's no longer used. 2013-02-18 18:06:03 -05:00
251a486c28 client.py: Significant speedup in stream_insert_context
block_data += "string" is fast with local variables, but slow with
variables inside some namespace.  Instead, build a list of strings and
join them once at the end.  This fixes the slowdown that resulted from
the stream_insert_context cleanup.
2013-02-18 18:06:03 -05:00
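
    A minimal illustration of the pattern being described, since the same idiom shows up in the StreamInserter code later in this diff:

    class SlowBuffer(object):
        def __init__(self):
            self.data = ""
        def add(self, line):
            self.data += line           # re-copies the whole attribute each time

    class FastBuffer(object):
        def __init__(self):
            self.parts = []
        def add(self, line):
            self.parts.append(line)     # cheap append
        def payload(self):
            return "".join(self.parts)  # single join when sending
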
1edb96a0bd Add client.stream_insert_context, convert everything to use it. Slow.
Not sure why this is so painfully slow.  Need more testing;
might have to scratch the idea.
2013-02-18 18:06:03 -05:00
52e674a192 Fix warning in mustclose decorator 2013-02-18 18:05:45 -05:00
e241c13bf1 Remove must_close decorator from client
It still should be closed, but warning each time was mostly for
debugging and it's kind of annoying when writing one-off programs
where it's OK to just let things get torn down as they're completed.
Not closing is not fatal in terms of data integrity etc.
2013-02-18 18:02:05 -05:00
b53ff31212 client: Add must_close() decorator to nilmdb.Client, and fix tests
Test suite wasn't closing connections correctly.
2013-02-16 18:55:23 -05:00
2045e89f24 client: Add context manager functionality, test closing 2013-02-16 18:55:20 -05:00
841b2dab5c server: Replace /dbpath and /dbsize with a more generic /dbinfo
Update tests accordingly.  This isn't backwards compatible, but
existing clients don't rely on it.
2013-02-14 16:57:33 -05:00
d634f7d3cf bulkdata: Use file writes instead of writing to the mmap.
Extending and then writing to the mmap file has a problem: if the disk
fills up, the mapping becomes invalid, and the Python interpreter will
get a SIGBUS, killing it.  It's difficult to catch this gracefully;
there's no way to do that with existing modules.  Instead, switch to
only using mmap when reading, and normal file writes when writing.
Since we only ever append, it should have similar performance.
2013-02-13 20:30:39 -05:00
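
    A hedged sketch of that read/write split (assuming fixed-size binary rows in one file; not the actual bulkdata layout): append with ordinary writes, and map the file read-only when reading.

    import mmap
    import struct

    ROW = struct.Struct("<d4f")      # hypothetical row: timestamp + 4 floats

    def append_rows(path, rows):
        # A failed write raises IOError/OSError; no SIGBUS if the disk fills.
        with open(path, "ab") as f:
            for row in rows:
                f.write(ROW.pack(*row))

    def read_row(path, n):
        with open(path, "rb") as f:
            mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
            try:
                offset = n * ROW.size
                return ROW.unpack(mm[offset:offset + ROW.size])
            finally:
                mm.close()
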
1593e181a3 Switch to versioneer-provided versions everywhere 2013-02-05 19:07:38 -05:00
8e781506de Incorporate versioneer for versioning 2013-02-05 18:49:07 -05:00
f6a2c7620a Restructure cherrypy application more correctly
Specifically, switch from using global configuration and several apps,
to using application-specific configuration with a single app.  This
should hopefully make it easier to plug this into another
WSGI-compliant server someday, and also silences some startup warnings
about missing application configs.
2013-02-04 22:38:49 -05:00
6c30e5ab2f Add gitclean target to Makefile 2013-02-04 22:15:12 -05:00
810eac4e61 Flesh out the list of dependencies in setup.py 2013-02-04 22:14:09 -05:00
d9bb3ab7ab Fix iteratorizer coverage issue with thread timing 2013-02-04 22:14:01 -05:00
21d0e90bd9 Rework Cython and external module support.
Now we build Cython modules only if cython >= 0.16 is present.
Tarballs made by "make sdist" include the Cython-generated *.c files,
and so Cython isn't required on the end user machine at all.
2013-02-04 22:12:52 -05:00
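
    A hedged setup.py sketch of that arrangement (the module name is taken from the .gitignore entries below; the version check and overall structure are illustrative, not nilmdb's exact setup.py):

    from distutils.core import setup
    from distutils.extension import Extension
    from distutils.version import LooseVersion

    cmdclass = {}
    try:
        import Cython
        from Cython.Distutils import build_ext
        have_cython = LooseVersion(Cython.__version__) >= LooseVersion("0.16")
        if have_cython:
            cmdclass["build_ext"] = build_ext
    except ImportError:
        have_cython = False

    # Build from .pyx with Cython, or from the shipped .c file without it.
    ext = ".pyx" if have_cython else ".c"
    extensions = [ Extension("nilmdb.server.interval",
                             [ "nilmdb/server/interval" + ext ]) ]

    setup(name = "nilmdb",
          ext_modules = extensions,
          cmdclass = cmdclass)
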
f071d749ce Generate a MANIFEST.in from setup.py; more setup.py and Makefile updates 2013-02-04 18:14:44 -05:00
d95c354595 Print a warning in setup.py if basic dependencies aren't present 2013-02-01 17:44:54 -05:00
9bcd8183f6 Add cython dependency 2013-02-01 17:44:27 -05:00
5c531d8273 Convert runserver.py into a generated nilmdb-server script 2013-02-01 17:43:41 -05:00
3fe3e2ca95 Move nilmtool into a dedicated nilmdb.scripts module 2013-02-01 17:42:09 -05:00
f01e781469 Convert nilmtool.py into a setuptools-generated script
At install time, the script "/usr/bin/nilmtool" will be created.
2013-02-01 16:25:12 -05:00
e6180a5a81 Remove all relative imports 2013-02-01 16:02:01 -05:00
a9d31b46ed More files in clean target 2013-02-01 15:48:55 -05:00
b01f23ed99 Move runtests.py script into test directory 2013-02-01 15:47:47 -05:00
842bf21411 Include the full server response if we can't parse errors out of it.
This makes things easier to debug if there's an error in
e.g. json_error_handler(), or if we're trying to poke a server that's
not even ours.
2013-02-01 15:47:47 -05:00
750d9e3c38 Clean up some pylint warnings and potential errors 2013-02-01 15:29:24 -05:00
3b90318f83 Merge remote-tracking branch 'origin/packaging' 2013-01-31 21:54:41 -05:00
1fb37604d3 Rearrange documentation, clean up Makefile, README 2013-01-31 19:06:32 -05:00
018ecab310 Make setup.py executable 2013-01-31 17:26:55 -05:00
6a1d6017e2 Include datetime_tz module 2013-01-31 17:25:14 -05:00
e7406f8147 Add metadata 2013-01-31 17:14:47 -05:00
f316026592 Move datetime_tz package under nilmdb.utils
datetime_tz isn't readily available, so it's a lot easier to just
package it within the nilmdb tree.
2013-01-30 19:03:42 -05:00
a8db747768 More work on setup.py; fixed issues in setup.cfg
Adjusted setup.cfg so "python setup.py nosetests" now works correctly.
Also added a "test" alias so that "python setup.py test" works.
2013-01-30 18:35:12 -05:00
727af94722 Start working on setup.py 2013-01-29 20:21:03 -05:00
6c89659df7 Cleanup cmdline "create" help text 2013-01-28 19:07:48 -05:00
58c7c8f6ff Support "now" as a timestamp argument 2013-01-28 19:07:45 -05:00
225003f412 Huge cleanup of namespaces, modules, packages, imports.
Now nilmdb.client, nilmdb.server, nilmdb.cmdline, and nilmdb.utils
are each their own modules, and there is a little bit more of a
logical separation between them.  Various changes scattered throughout
to fix naming (for example, nilmdb.nilmdb.NilmDBError is now
nilmdb.server.errors.NilmDBError).

Reduced usage of "from __future__ import absolute_import" as much
as possible.  It's still needed for the functions in the nilmdb/server
directory to be able to import the nilmdb module rather than the
nilmdb.py script.

This should hopefully ease future packaging a bit.
2013-01-28 19:04:52 -05:00
40b966aef2 Add pycurl-specific hack to Iteratorizer
Inside the pycurl callback, we can't raise exceptions, because the
pycurl extension module will unconditionally print the exception
itself, and not pass it up to the caller.  Instead, we have the
callback return a value that tells curl to abort.  (-1 would be best,
in case we were given 0 bytes, but the extension doesn't support
that either).

This resolves the 'Exception("should die")' problem when interrupting
a streaming generator like stream_extract.
2013-01-24 19:06:20 -05:00
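
    What "return a value that tells curl to abort" looks like in a hedged sketch: a WRITEFUNCTION that claims it consumed fewer bytes than it was given makes the transfer fail with a write error instead of raising through the extension module.

    import pycurl

    chunks = []
    abort = {"flag": False}          # set True when the consumer wants to stop

    def write_callback(data):
        if abort["flag"]:
            return 0                 # != len(data), so curl aborts the transfer
        chunks.append(data)
        return len(data)             # normal case: everything was consumed

    curl = pycurl.Curl()
    curl.setopt(pycurl.URL, "http://localhost/stream/extract")  # placeholder
    curl.setopt(pycurl.WRITEFUNCTION, write_callback)
    try:
        curl.perform()
    except pycurl.error:
        pass                         # expected when the callback requested abort
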
294ec6988b Rewrite Iteratorizer as a context manager
Relying on __del__ to clean up the thread isn't as reliable.
2013-01-24 19:04:25 -05:00
fad23ebb22 Add --timestamp-raw option to extract and list 2013-01-24 16:03:38 -05:00
b226dc4337 Properly handle test case where server doesn't start 2013-01-24 16:03:38 -05:00
e7af863017 httpclient: make sure we error out quickly if nested calls are made
Curl will give an error if we call .setopt() while a .perform() is
in progress, for example if we try to do a stream_insert() while
in the middle of a stream_extract().  Move the setopt() to the
beginning of the get/put functions to ensure that we hit this
error before we mess with the URLs or anything else.
2013-01-24 15:36:10 -05:00
af6ce5b79c Remove superfluous from iteratorizer callback exception 2013-01-23 15:42:27 -05:00
0a6fc943e2 Add some better documentation of layout parameter to create.py 2013-01-22 18:47:39 -05:00
67c6e178e1 Documentation updates 2013-01-22 18:36:05 -05:00
9bf213707c Properly return an error if two timestamps are equal 2013-01-22 18:35:18 -05:00
5cd7899e98 Send a Access-Control-Allow-Origin (CORS) header with all responses 2013-01-22 14:42:03 -05:00
ceec5fb9b3 Force /stream/interval and /stream/extract responses to be text/plain 2013-01-22 12:47:06 -05:00
78 changed files with 3414 additions and 1259 deletions


@@ -7,3 +7,4 @@
exclude_lines =
pragma: no cover
if 0:
omit = nilmdb/utils/datetime_tz*,nilmdb/scripts,nilmdb/_version.py

.gitattributes vendored Normal file (1 line changed)

@@ -0,0 +1 @@
nilmdb/_version.py export-subst

.gitignore vendored (24 lines changed)

@@ -1,7 +1,27 @@
db/
# Tests
tests/*testdb/
.coverage
db/
# Compiled / cythonized files
docs/*.html
build/
*.pyc
design.html
nilmdb/server/interval.c
nilmdb/server/interval.so
nilmdb/server/layout.c
nilmdb/server/layout.so
nilmdb/server/rbtree.c
nilmdb/server/rbtree.so
# Setup junk
dist/
nilmdb.egg-info/
# This gets generated as needed by setup.py
MANIFEST.in
MANIFEST
# Misc
timeit*out

.pylintrc Normal file (250 lines changed)

@@ -0,0 +1,250 @@
# -*- conf -*-
[MASTER]
# Specify a configuration file.
#rcfile=
# Python code to execute, usually for sys.path manipulation such as
# pygtk.require().
#init-hook=
# Profiled execution.
profile=no
# Add files or directories to the blacklist. They should be base names, not
# paths.
ignore=datetime_tz
# Pickle collected data for later comparisons.
persistent=no
# List of plugins (as comma separated values of python modules names) to load,
# usually to register additional checkers.
load-plugins=
[MESSAGES CONTROL]
# Enable the message, report, category or checker with the given id(s). You can
# either give multiple identifier separated by comma (,) or put this option
# multiple time.
#enable=
# Disable the message, report, category or checker with the given id(s). You
# can either give multiple identifier separated by comma (,) or put this option
# multiple time (only on the command line, not in the configuration file where
# it should appear only once).
disable=C0111,R0903,R0201,R0914,R0912,W0142,W0703,W0702
[REPORTS]
# Set the output format. Available formats are text, parseable, colorized, msvs
# (visual studio) and html
output-format=parseable
# Include message's id in output
include-ids=yes
# Put messages in a separate file for each module / package specified on the
# command line instead of printing them on stdout. Reports (if any) will be
# written in a file name "pylint_global.[txt|html]".
files-output=no
# Tells whether to display a full report or only the messages
reports=yes
# Python expression which should return a note less than 10 (10 is the highest
# note). You have access to the variables errors warning, statement which
# respectively contain the number of errors / warnings messages and the total
# number of statements analyzed. This is used by the global evaluation report
# (RP0004).
evaluation=10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10)
# Add a comment according to your evaluation note. This is used by the global
# evaluation report (RP0004).
comment=no
[SIMILARITIES]
# Minimum lines number of a similarity.
min-similarity-lines=4
# Ignore comments when computing similarities.
ignore-comments=yes
# Ignore docstrings when computing similarities.
ignore-docstrings=yes
[TYPECHECK]
# Tells whether missing members accessed in mixin class should be ignored. A
# mixin class is detected if its name ends with "mixin" (case insensitive).
ignore-mixin-members=yes
# List of classes names for which member attributes should not be checked
# (useful for classes with attributes dynamically set).
ignored-classes=SQLObject
# When zope mode is activated, add a predefined set of Zope acquired attributes
# to generated-members.
zope=no
# List of members which are set dynamically and missed by pylint inference
# system, and so shouldn't trigger E0201 when accessed. Python regular
# expressions are accepted.
generated-members=REQUEST,acl_users,aq_parent
[FORMAT]
# Maximum number of characters on a single line.
max-line-length=80
# Maximum number of lines in a module
max-module-lines=1000
# String used as indentation unit. This is usually " " (4 spaces) or "\t" (1
# tab).
indent-string=' '
[MISCELLANEOUS]
# List of note tags to take in consideration, separated by a comma.
notes=FIXME,XXX,TODO
[VARIABLES]
# Tells whether we should check for unused import in __init__ files.
init-import=no
# A regular expression matching the beginning of the name of dummy variables
# (i.e. not used).
dummy-variables-rgx=_|dummy
# List of additional names supposed to be defined in builtins. Remember that
# you should avoid to define new builtins when possible.
additional-builtins=
[BASIC]
# Required attributes for module, separated by a comma
required-attributes=
# List of builtins function names that should not be used, separated by a comma
bad-functions=apply,input
# Regular expression which should only match correct module names
module-rgx=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$
# Regular expression which should only match correct module level names
const-rgx=(([A-Z_][A-Z0-9_]*)|(__.*__)|version)$
# Regular expression which should only match correct class names
class-rgx=[A-Z_][a-zA-Z0-9]+$
# Regular expression which should only match correct function names
function-rgx=[a-z_][a-z0-9_]{0,30}$
# Regular expression which should only match correct method names
method-rgx=[a-z_][a-z0-9_]{0,30}$
# Regular expression which should only match correct instance attribute names
attr-rgx=[a-z_][a-z0-9_]{0,30}$
# Regular expression which should only match correct argument names
argument-rgx=[a-z_][a-z0-9_]{0,30}$
# Regular expression which should only match correct variable names
variable-rgx=[a-z_][a-z0-9_]{0,30}$
# Regular expression which should only match correct list comprehension /
# generator expression variable names
inlinevar-rgx=[A-Za-z_][A-Za-z0-9_]*$
# Good variable names which should always be accepted, separated by a comma
good-names=i,j,k,ex,Run,_
# Bad variable names which should always be refused, separated by a comma
bad-names=foo,bar,baz,toto,tutu,tata
# Regular expression which should only match functions or classes name which do
# not require a docstring
no-docstring-rgx=__.*__
[CLASSES]
# List of interface methods to ignore, separated by a comma. This is used for
# instance to not check methods defines in Zope's Interface base class.
ignore-iface-methods=isImplementedBy,deferred,extends,names,namesAndDescriptions,queryDescriptionFor,getBases,getDescriptionFor,getDoc,getName,getTaggedValue,getTaggedValueTags,isEqualOrExtendedBy,setTaggedValue,isImplementedByInstancesOf,adaptWith,is_implemented_by
# List of method names used to declare (i.e. assign) instance attributes.
defining-attr-methods=__init__,__new__,setUp
# List of valid names for the first argument in a class method.
valid-classmethod-first-arg=cls
[DESIGN]
# Maximum number of arguments for function / method
max-args=5
# Argument names that match this expression will be ignored. Default to name
# with leading underscore
ignored-argument-names=_.*
# Maximum number of locals for function / method body
max-locals=15
# Maximum number of return / yield for function / method body
max-returns=6
# Maximum number of branch for function / method body
max-branchs=12
# Maximum number of statements in function / method body
max-statements=50
# Maximum number of parents for a class (see R0901).
max-parents=7
# Maximum number of attributes for a class (see R0902).
max-attributes=7
# Minimum number of public methods for a class (see R0903).
min-public-methods=2
# Maximum number of public methods for a class (see R0904).
max-public-methods=20
[IMPORTS]
# Deprecated modules which should not be used, separated by a comma
deprecated-modules=regsub,string,TERMIOS,Bastion,rexec
# Create a graph of every (i.e. internal and external) dependencies in the
# given file (report RP0402 must not be disabled)
import-graph=
# Create a graph of external dependencies in the given file (report RP0402 must
# not be disabled)
ext-import-graph=
# Create a graph of internal dependencies in the given file (report RP0402 must
# not be disabled)
int-import-graph=
[EXCEPTIONS]
# Exceptions that will emit a warning when being caught. Defaults to
# "Exception"
overgeneral-exceptions=Exception


@@ -1,23 +1,42 @@
# By default, run the tests.
all: test
tool:
python nilmtool.py --help
python nilmtool.py list --help
python nilmtool.py -u asfdadsf list
version:
python setup.py version
build:
python setup.py build_ext --inplace
dist: sdist
sdist:
python setup.py sdist
install:
python setup.py install
docs:
make -C docs
lint:
pylint -f parseable nilmdb
%.html: %.md
pandoc -s $< > $@
pylint --rcfile=.pylintrc nilmdb
test:
python runtests.py
profile:
python runtests.py --with-profile
ifeq ($(INSIDE_EMACS), t)
# Use the slightly more flexible script
python tests/runtests.py
else
# Let setup.py check dependencies, build stuff, and run the test
python setup.py nosetests
endif
clean::
find . -name '*pyc' | xargs rm -f
rm -f .coverage
rm -rf tests/*testdb*
rm -rf nilmdb.egg-info/ build/ nilmdb/server/*.so MANIFEST.in
make -C docs clean
gitclean::
git clean -dXf
.PHONY: all version build dist sdist install docs lint test clean


@@ -1,3 +1,26 @@
sudo apt-get install python2.7 python-cherrypy3 python-decorator python-nose python-coverage
sudo apt-get install cython # 0.17.1-1 or newer
nilmdb: Non-Intrusive Load Monitor Database
by Jim Paris <jim@jtan.com>
Prerequisites:
# Runtime and build environments
sudo apt-get install python2.7 python2.7-dev python-setuptools cython
# Base NilmDB dependencies
sudo apt-get install python-cherrypy3 python-decorator python-simplejson
sudo apt-get install python-requests python-dateutil python-tz python-psutil
# Tools for running tests
sudo apt-get install python-nose python-coverage
Test:
python setup.py nosetests
Install:
python setup.py install
Usage:
nilmdb-server --help
nilmtool --help

TODO (5 lines changed)

@@ -1,5 +0,0 @@
-- Clean up error responses. Specifically I'd like to be able to add
json-formatted data to OverflowError and DB parsing errors. It
seems like subclassing cherrypy.HTTPError and overriding
set_response is the best thing to do -- it would let me get rid
of the _be_ie_unfriendly and other hacks in the server.

docs/Makefile Normal file (9 lines changed)

@@ -0,0 +1,9 @@
ALL_DOCS = $(wildcard *.md)
all: $(ALL_DOCS:.md=.html)
%.html: %.md
pandoc -s $< > $@
clean:
rm -f *.html

docs/TODO.md Normal file (5 lines changed)

@@ -0,0 +1,5 @@
- Documentation
- Machine-readable information in OverflowError, parser errors.
Maybe subclass `cherrypy.HTTPError` and override `set_response`
to add another JSON field?


@@ -1,12 +1,8 @@
"""Main NilmDB import"""
from .nilmdb import NilmDB
from .server import Server
from .client import Client
import pyximport; pyximport.install()
import layout
import interval
import cmdline
from nilmdb.server import NilmDB, Server
from nilmdb.client import Client
from nilmdb._version import get_versions
__version__ = get_versions()['version']
del get_versions

nilmdb/_version.py Normal file (197 lines changed)

@@ -0,0 +1,197 @@
IN_LONG_VERSION_PY = True
# This file helps to compute a version number in source trees obtained from
# git-archive tarball (such as those provided by githubs download-from-tag
# feature). Distribution tarballs (build by setup.py sdist) and build
# directories (produced by setup.py build) will contain a much shorter file
# that just contains the computed version number.
# This file is released into the public domain. Generated by
# versioneer-0.7+ (https://github.com/warner/python-versioneer)
# these strings will be replaced by git during git-archive
git_refnames = "$Format:%d$"
git_full = "$Format:%H$"
import subprocess
import sys
def run_command(args, cwd=None, verbose=False):
try:
# remember shell=False, so use git.cmd on windows, not just git
p = subprocess.Popen(args, stdout=subprocess.PIPE, cwd=cwd)
except EnvironmentError:
e = sys.exc_info()[1]
if verbose:
print("unable to run %s" % args[0])
print(e)
return None
stdout = p.communicate()[0].strip()
if sys.version >= '3':
stdout = stdout.decode()
if p.returncode != 0:
if verbose:
print("unable to run %s (error)" % args[0])
return None
return stdout
import sys
import re
import os.path
def get_expanded_variables(versionfile_source):
# the code embedded in _version.py can just fetch the value of these
# variables. When used from setup.py, we don't want to import
# _version.py, so we do it with a regexp instead. This function is not
# used from _version.py.
variables = {}
try:
for line in open(versionfile_source,"r").readlines():
if line.strip().startswith("git_refnames ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
variables["refnames"] = mo.group(1)
if line.strip().startswith("git_full ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
variables["full"] = mo.group(1)
except EnvironmentError:
pass
return variables
def versions_from_expanded_variables(variables, tag_prefix, verbose=False):
refnames = variables["refnames"].strip()
if refnames.startswith("$Format"):
if verbose:
print("variables are unexpanded, not using")
return {} # unexpanded, so not in an unpacked git-archive tarball
refs = set([r.strip() for r in refnames.strip("()").split(",")])
for ref in list(refs):
if not re.search(r'\d', ref):
if verbose:
print("discarding '%s', no digits" % ref)
refs.discard(ref)
# Assume all version tags have a digit. git's %d expansion
# behaves like git log --decorate=short and strips out the
# refs/heads/ and refs/tags/ prefixes that would let us
# distinguish between branches and tags. By ignoring refnames
# without digits, we filter out many common branch names like
# "release" and "stabilization", as well as "HEAD" and "master".
if verbose:
print("remaining refs: %s" % ",".join(sorted(refs)))
for ref in sorted(refs):
# sorting will prefer e.g. "2.0" over "2.0rc1"
if ref.startswith(tag_prefix):
r = ref[len(tag_prefix):]
if verbose:
print("picking %s" % r)
return { "version": r,
"full": variables["full"].strip() }
# no suitable tags, so we use the full revision id
if verbose:
print("no suitable tags, using full revision id")
return { "version": variables["full"].strip(),
"full": variables["full"].strip() }
def versions_from_vcs(tag_prefix, versionfile_source, verbose=False):
# this runs 'git' from the root of the source tree. That either means
# someone ran a setup.py command (and this code is in versioneer.py, so
# IN_LONG_VERSION_PY=False, thus the containing directory is the root of
# the source tree), or someone ran a project-specific entry point (and
# this code is in _version.py, so IN_LONG_VERSION_PY=True, thus the
# containing directory is somewhere deeper in the source tree). This only
# gets called if the git-archive 'subst' variables were *not* expanded,
# and _version.py hasn't already been rewritten with a short version
# string, meaning we're inside a checked out source tree.
try:
here = os.path.abspath(__file__)
except NameError:
# some py2exe/bbfreeze/non-CPython implementations don't do __file__
return {} # not always correct
# versionfile_source is the relative path from the top of the source tree
# (where the .git directory might live) to this file. Invert this to find
# the root from __file__.
root = here
if IN_LONG_VERSION_PY:
for i in range(len(versionfile_source.split("/"))):
root = os.path.dirname(root)
else:
root = os.path.dirname(here)
if not os.path.exists(os.path.join(root, ".git")):
if verbose:
print("no .git in %s" % root)
return {}
GIT = "git"
if sys.platform == "win32":
GIT = "git.cmd"
stdout = run_command([GIT, "describe", "--tags", "--dirty", "--always"],
cwd=root)
if stdout is None:
return {}
if not stdout.startswith(tag_prefix):
if verbose:
print("tag '%s' doesn't start with prefix '%s'" % (stdout, tag_prefix))
return {}
tag = stdout[len(tag_prefix):]
stdout = run_command([GIT, "rev-parse", "HEAD"], cwd=root)
if stdout is None:
return {}
full = stdout.strip()
if tag.endswith("-dirty"):
full += "-dirty"
return {"version": tag, "full": full}
def versions_from_parentdir(parentdir_prefix, versionfile_source, verbose=False):
if IN_LONG_VERSION_PY:
# We're running from _version.py. If it's from a source tree
# (execute-in-place), we can work upwards to find the root of the
# tree, and then check the parent directory for a version string. If
# it's in an installed application, there's no hope.
try:
here = os.path.abspath(__file__)
except NameError:
# py2exe/bbfreeze/non-CPython don't have __file__
return {} # without __file__, we have no hope
# versionfile_source is the relative path from the top of the source
# tree to _version.py. Invert this to find the root from __file__.
root = here
for i in range(len(versionfile_source.split("/"))):
root = os.path.dirname(root)
else:
# we're running from versioneer.py, which means we're running from
# the setup.py in a source tree. sys.argv[0] is setup.py in the root.
here = os.path.abspath(sys.argv[0])
root = os.path.dirname(here)
# Source tarballs conventionally unpack into a directory that includes
# both the project name and a version string.
dirname = os.path.basename(root)
if not dirname.startswith(parentdir_prefix):
if verbose:
print("guessing rootdir is '%s', but '%s' doesn't start with prefix '%s'" %
(root, dirname, parentdir_prefix))
return None
return {"version": dirname[len(parentdir_prefix):], "full": ""}
tag_prefix = "nilmdb-"
parentdir_prefix = "nilmdb-"
versionfile_source = "nilmdb/_version.py"
def get_versions(default={"version": "unknown", "full": ""}, verbose=False):
variables = { "refnames": git_refnames, "full": git_full }
ver = versions_from_expanded_variables(variables, tag_prefix, verbose)
if not ver:
ver = versions_from_vcs(tag_prefix, versionfile_source, verbose)
if not ver:
ver = versions_from_parentdir(parentdir_prefix, versionfile_source,
verbose)
if not ver:
ver = default
return ver


@@ -1,215 +0,0 @@
# -*- coding: utf-8 -*-
"""Class for performing HTTP client requests via libcurl"""
from __future__ import absolute_import
from nilmdb.utils.printf import *
import time
import sys
import re
import os
import simplejson as json
import itertools
import nilmdb.utils
import nilmdb.httpclient
# Other functions expect to see these in the nilmdb.client namespace
from nilmdb.httpclient import ClientError, ServerError, Error
version = "1.0"
def float_to_string(f):
# Use repr to maintain full precision in the string output.
return repr(float(f))
class Client(object):
"""Main client interface to the Nilm database."""
client_version = version
def __init__(self, url):
self.http = nilmdb.httpclient.HTTPClient(url)
def _json_param(self, data):
"""Return compact json-encoded version of parameter"""
return json.dumps(data, separators=(',',':'))
def close(self):
self.http.close()
def geturl(self):
"""Return the URL we're using"""
return self.http.baseurl
def version(self):
"""Return server version"""
return self.http.get("version")
def dbpath(self):
"""Return server database path"""
return self.http.get("dbpath")
def dbsize(self):
"""Return server database size as human readable string"""
return self.http.get("dbsize")
def stream_list(self, path = None, layout = None):
params = {}
if path is not None:
params["path"] = path
if layout is not None:
params["layout"] = layout
return self.http.get("stream/list", params)
def stream_get_metadata(self, path, keys = None):
params = { "path": path }
if keys is not None:
params["key"] = keys
return self.http.get("stream/get_metadata", params)
def stream_set_metadata(self, path, data):
"""Set stream metadata from a dictionary, replacing all existing
metadata."""
params = {
"path": path,
"data": self._json_param(data)
}
return self.http.get("stream/set_metadata", params)
def stream_update_metadata(self, path, data):
"""Update stream metadata from a dictionary"""
params = {
"path": path,
"data": self._json_param(data)
}
return self.http.get("stream/update_metadata", params)
def stream_create(self, path, layout):
"""Create a new stream"""
params = { "path": path,
"layout" : layout }
return self.http.get("stream/create", params)
def stream_destroy(self, path):
"""Delete stream and its contents"""
params = { "path": path }
return self.http.get("stream/destroy", params)
def stream_remove(self, path, start = None, end = None):
"""Remove data from the specified time range"""
params = {
"path": path
}
if start is not None:
params["start"] = float_to_string(start)
if end is not None:
params["end"] = float_to_string(end)
return self.http.get("stream/remove", params)
def stream_insert(self, path, data, start = None, end = None):
"""Insert data into a stream. data should be a file-like object
that provides ASCII data that matches the database layout for path.
start and end are the starting and ending timestamp of this
stream; all timestamps t in the data must satisfy 'start <= t
< end'. If left unspecified, 'start' is the timestamp of the
first line of data, and 'end' is the timestamp on the last line
of data, plus a small delta of 1μs.
"""
params = { "path": path }
# See design.md for a discussion of how much data to send.
# These are soft limits -- actual data might be rounded up.
max_data = 1048576
max_time = 30
end_epsilon = 1e-6
def extract_timestamp(line):
return float(line.split()[0])
def sendit():
# If we have more data after this, use the timestamp of
# the next line as the end. Otherwise, use the given
# overall end time, or add end_epsilon to the last data
# point.
if nextline:
block_end = extract_timestamp(nextline)
if end and block_end > end:
# This is unexpected, but we'll defer to the server
# to return an error in this case.
block_end = end
elif end:
block_end = end
else:
block_end = extract_timestamp(line) + end_epsilon
# Send it
params["start"] = float_to_string(block_start)
params["end"] = float_to_string(block_end)
return self.http.put("stream/insert", block_data, params)
clock_start = time.time()
block_data = ""
block_start = start
result = None
for (line, nextline) in nilmdb.utils.misc.pairwise(data):
# If we don't have a starting time, extract it from the first line
if block_start is None:
block_start = extract_timestamp(line)
clock_elapsed = time.time() - clock_start
block_data += line
# If we have enough data, or enough time has elapsed,
# send this block to the server, and empty things out
# for the next block.
if (len(block_data) > max_data) or (clock_elapsed > max_time):
result = sendit()
block_start = None
block_data = ""
clock_start = time.time()
# One last block?
if len(block_data):
result = sendit()
# Return the most recent JSON result we got back, or None if
# we didn't make any requests.
return result
def stream_intervals(self, path, start = None, end = None):
"""
Return a generator that yields each stream interval.
"""
params = {
"path": path
}
if start is not None:
params["start"] = float_to_string(start)
if end is not None:
params["end"] = float_to_string(end)
return self.http.get_gen("stream/intervals", params, retjson = True)
def stream_extract(self, path, start = None, end = None, count = False):
"""
Extract data from a stream. Returns a generator that yields
lines of ASCII-formatted data that matches the database
layout for the given path.
Specify count=True to just get a count of values rather than
the actual data.
"""
params = {
"path": path,
}
if start is not None:
params["start"] = float_to_string(start)
if end is not None:
params["end"] = float_to_string(end)
if count:
params["count"] = 1
return self.http.get_gen("stream/extract", params, retjson = False)


@@ -0,0 +1,4 @@
"""nilmdb.client"""
from nilmdb.client.client import Client
from nilmdb.client.errors import ClientError, ServerError, Error

nilmdb/client/client.py Normal file (378 lines changed)

@@ -0,0 +1,378 @@
# -*- coding: utf-8 -*-
"""Class for performing HTTP client requests via libcurl"""
import nilmdb
import nilmdb.utils
import nilmdb.client.httpclient
import time
import simplejson as json
import contextlib
def float_to_string(f):
"""Use repr to maintain full precision in the string output."""
return repr(float(f))
def extract_timestamp(line):
"""Extract just the timestamp from a line of data text"""
return float(line.split()[0])
class Client(object):
"""Main client interface to the Nilm database."""
def __init__(self, url):
self.http = nilmdb.client.httpclient.HTTPClient(url)
# __enter__/__exit__ allow this class to be a context manager
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
self.close()
def _json_param(self, data):
"""Return compact json-encoded version of parameter"""
return json.dumps(data, separators=(',',':'))
def close(self):
"""Close the connection; safe to call multiple times"""
self.http.close()
def geturl(self):
"""Return the URL we're using"""
return self.http.baseurl
def version(self):
"""Return server version"""
return self.http.get("version")
def dbinfo(self):
"""Return server database info (path, size, free space)
as a dictionary."""
return self.http.get("dbinfo")
def stream_list(self, path = None, layout = None):
params = {}
if path is not None:
params["path"] = path
if layout is not None:
params["layout"] = layout
return self.http.get("stream/list", params)
def stream_get_metadata(self, path, keys = None):
params = { "path": path }
if keys is not None:
params["key"] = keys
return self.http.get("stream/get_metadata", params)
def stream_set_metadata(self, path, data):
"""Set stream metadata from a dictionary, replacing all existing
metadata."""
params = {
"path": path,
"data": self._json_param(data)
}
return self.http.post("stream/set_metadata", params)
def stream_update_metadata(self, path, data):
"""Update stream metadata from a dictionary"""
params = {
"path": path,
"data": self._json_param(data)
}
return self.http.post("stream/update_metadata", params)
def stream_create(self, path, layout):
"""Create a new stream"""
params = { "path": path,
"layout" : layout }
return self.http.post("stream/create", params)
def stream_destroy(self, path):
"""Delete stream and its contents"""
params = { "path": path }
return self.http.post("stream/destroy", params)
def stream_remove(self, path, start = None, end = None):
"""Remove data from the specified time range"""
params = {
"path": path
}
if start is not None:
params["start"] = float_to_string(start)
if end is not None:
params["end"] = float_to_string(end)
return self.http.post("stream/remove", params)
@contextlib.contextmanager
def stream_insert_context(self, path, start = None, end = None):
"""Return a context manager that allows data to be efficiently
inserted into a stream in a piecewise manner. Data is provided
as single lines, and is aggregated and sent to the server in larger
chunks as necessary. Data lines must match the database layout for
the given path, and end with a newline.
Example:
with client.stream_insert_context('/path', start, end) as ctx:
ctx.insert_line('1234567890.0 1 2 3 4\\n')
ctx.insert_line('1234567891.0 1 2 3 4\\n')
For more details, see help for nilmdb.client.client.StreamInserter
This may make multiple requests to the server, if the data is
large enough or enough time has passed between insertions.
"""
ctx = StreamInserter(self, path, start, end)
yield ctx
ctx.finalize()
def stream_insert(self, path, data, start = None, end = None):
"""Insert rows of data into a stream. data should be an
iterable object that provides ASCII data that matches the
database layout for path. See stream_insert_context for
details on the 'start' and 'end' parameters."""
with self.stream_insert_context(path, start, end) as ctx:
ctx.insert_iter(data)
return ctx.last_response
def stream_insert_block(self, path, block, start, end):
"""Insert an entire block of data into a stream. Like
stream_insert, except 'block' contains multiple lines of ASCII
text and is sent in one single chunk."""
params = { "path": path,
"start": float_to_string(start),
"end": float_to_string(end) }
return self.http.put("stream/insert", block, params)
def stream_intervals(self, path, start = None, end = None):
"""
Return a generator that yields each stream interval.
"""
params = {
"path": path
}
if start is not None:
params["start"] = float_to_string(start)
if end is not None:
params["end"] = float_to_string(end)
return self.http.get_gen("stream/intervals", params)
def stream_extract(self, path, start = None, end = None, count = False):
"""
Extract data from a stream. Returns a generator that yields
lines of ASCII-formatted data that matches the database
layout for the given path.
Specify count = True to return a count of matching data points
rather than the actual data. The output format is unchanged.
"""
params = {
"path": path,
}
if start is not None:
params["start"] = float_to_string(start)
if end is not None:
params["end"] = float_to_string(end)
if count:
params["count"] = 1
return self.http.get_gen("stream/extract", params)
def stream_count(self, path, start = None, end = None):
"""
Return the number of rows of data in the stream that satisfy
the given timestamps.
"""
counts = list(self.stream_extract(path, start, end, count = True))
return int(counts[0])
class StreamInserter(object):
"""Object returned by stream_insert_context() that manages
the insertion of rows of data into a particular path.
The basic data flow is that we are filling a contiguous interval
on the server, with no gaps, that extends from timestamp 'start'
to timestamp 'end'. Data timestamps satisfy 'start <= t < end'.
Data is provided by the user one line at a time with
.insert_line() or .insert_iter().
1. The first inserted line begins a new interval that starts at
'start'. If 'start' is not given, it is deduced from the first
line's timestamp.
2. Subsequent lines go into the same contiguous interval. As lines
are inserted, this routine may make multiple insertion requests to
the server, but will structure the timestamps to leave no gaps.
3. The current contiguous interval can be completed by manually
calling .finalize(), which the context manager will also do
automatically. This will send any remaining data to the server,
using the 'end' timestamp to end the interval.
After a .finalize(), inserting new data goes back to step 1.
.update_start() can be called before step 1 to change the start
time for the interval. .update_end() can be called before step 3
to change the end time for the interval.
"""
# See design.md for a discussion of how much data to send.
# These are soft limits -- actual data might be rounded up.
# We send when we have a certain amount of data queued, or
# when a certain amount of time has passed since the last send.
_max_data = 1048576
_max_time = 30
# Delta to add to the final timestamp, if "end" wasn't given
_end_epsilon = 1e-6
def __init__(self, client, path, start = None, end = None):
"""'http' is the httpclient object. 'path' is the database
path to insert to. 'start' and 'end' are used for the first
contiguous interval."""
self.last_response = None
self._client = client
self._path = path
# Start and end for the overall contiguous interval we're
# filling
self._interval_start = start
self._interval_end = end
# Data for the specific block we're building up to send
self._block_data = []
self._block_len = 0
self._block_start = None
# Time of last request
self._last_time = time.time()
# We keep a buffer of the two most recently inserted lines.
# Only the older one actually gets processed; the newer one
# is used to "look-ahead" to the next timestamp if we need
# to internally split an insertion into two requests.
self._line_old = None
self._line_new = None
def insert_iter(self, iter):
"""Insert all lines of ASCII formatted data from the given
iterable. Lines must be terminated with '\\n'."""
for line in iter:
self.insert_line(line)
def insert_line(self, line, allow_intermediate = True):
"""Insert a single line of ASCII formatted data. Line
must be terminated with '\\n'."""
if line and (len(line) < 1 or line[-1] != '\n'):
raise ValueError("lines must end with a newline character")
# Store this new line, but process the previous (old) one.
# This lets us "look ahead" to the next line.
self._line_old = self._line_new
self._line_new = line
if self._line_old is None:
return
# If starting a new block, pull out the timestamp if needed.
if self._block_start is None:
if self._interval_start is not None:
# User provided a start timestamp. Use it once, then
# clear it for the next block.
self._block_start = self._interval_start
self._interval_start = None
else:
# Extract timestamp from the first row
self._block_start = extract_timestamp(self._line_old)
# Save the line
self._block_data.append(self._line_old)
self._block_len += len(self._line_old)
if allow_intermediate:
# Send an intermediate block to the server if needed.
elapsed = time.time() - self._last_time
if (self._block_len > self._max_data) or (elapsed > self._max_time):
self._send_block_intermediate()
def update_start(self, start):
"""Update the start time for the next contiguous interval.
Call this before starting to insert data for a new interval,
for example, after .finalize()"""
self._interval_start = start
def update_end(self, end):
"""Update the end time for the current contiguous interval.
Call this before .finalize()"""
self._interval_end = end
def finalize(self):
"""Stop filling the current contiguous interval.
All outstanding data will be sent, and the interval end
time of the interval will be taken from the 'end' argument
used when initializing this class, or the most recent
value passed to update_end(), or the last timestamp plus
a small epsilon value if no other endpoint was provided.
If more data is inserted after a finalize(), it will become
part of a new interval and there may be a gap left in-between."""
# Special marker tells insert_line that this is the end
self.insert_line(None, allow_intermediate = False)
if self._block_len > 0:
# We have data pending, so send the final block
self._send_block_final()
elif None not in (self._interval_start, self._interval_end):
# We have no data, but enough information to create an
# empty interval.
self._block_start = self._interval_start
self._interval_start = None
self._send_block_final()
else:
# No data, and no timestamps to use to create an empty
# interval.
pass
# Make sure both timestamps are emptied for future intervals.
self._interval_start = None
self._interval_end = None
def _send_block_intermediate(self):
"""Send data, when we still have more data to send.
Use the timestamp from the next line, so that the blocks
are contiguous."""
block_end = extract_timestamp(self._line_new)
if self._interval_end is not None and block_end > self._interval_end:
# Something's fishy -- the timestamp we found is after
# the user's specified end. Limit it here, and the
# server will return an error.
block_end = self._interval_end
self._send_block(block_end)
def _send_block_final(self):
"""Send data, when this is the last block for the interval.
There is no next line, so figure out the actual interval end
using interval_end or end_epsilon."""
if self._interval_end is not None:
# Use the user's specified end timestamp
block_end = self._interval_end
# Clear it in case we send more intervals in the future.
self._interval_end = None
else:
# Add an epsilon to the last timestamp we saw
block_end = extract_timestamp(self._line_old) + self._end_epsilon
self._send_block(block_end)
def _send_block(self, block_end):
"""Send current block to the server"""
self.last_response = self._client.stream_insert_block(
self._path, "".join(self._block_data),
self._block_start, block_end)
# Clear out the block
self._block_data = []
self._block_len = 0
self._block_start = None
# Note when we sent it
self._last_time = time.time()

nilmdb/client/errors.py Normal file (33 lines changed)

@@ -0,0 +1,33 @@
"""HTTP client errors"""
from nilmdb.utils.printf import *
class Error(Exception):
"""Base exception for both ClientError and ServerError responses"""
def __init__(self,
status = "Unspecified error",
message = None,
url = None,
traceback = None):
Exception.__init__(self, status)
self.status = status # e.g. "400 Bad Request"
self.message = message # textual message from the server
self.url = url # URL we were requesting
self.traceback = traceback # server traceback, if available
def _format_error(self, show_url):
s = sprintf("[%s]", self.status)
if self.message:
s += sprintf(" %s", self.message)
if show_url and self.url: # pragma: no cover
s += sprintf(" (%s)", self.url)
if self.traceback: # pragma: no cover
s += sprintf("\nServer traceback:\n%s", self.traceback)
return s
def __str__(self):
return self._format_error(show_url = False)
def __repr__(self): # pragma: no cover
return self._format_error(show_url = True)
class ClientError(Error):
pass
class ServerError(Error):
pass

nilmdb/client/httpclient.py Normal file (120 lines changed)

@@ -0,0 +1,120 @@
"""HTTP client library"""
import nilmdb
import nilmdb.utils
from nilmdb.client.errors import ClientError, ServerError, Error
import simplejson as json
import urlparse
import requests
class HTTPClient(object):
"""Class to manage and perform HTTP requests from the client"""
def __init__(self, baseurl = ""):
"""If baseurl is supplied, all other functions that take
a URL can be given a relative URL instead."""
# Verify / clean up URL
reparsed = urlparse.urlparse(baseurl).geturl()
if '://' not in reparsed:
reparsed = urlparse.urlparse("http://" + baseurl).geturl()
self.baseurl = reparsed
# Build Requests session object, enable SSL verification
self.session = requests.Session()
self.session.verify = True
# Saved response, so that tests can verify a few things.
self._last_response = {}
def _handle_error(self, url, code, body):
# Default variables for exception. We use the entire body as
# the default message, in case we can't extract it from a JSON
# response.
args = { "url" : url,
"status" : str(code),
"message" : body,
"traceback" : None }
try:
# Fill with server-provided data if we can
jsonerror = json.loads(body)
args["status"] = jsonerror["status"]
args["message"] = jsonerror["message"]
args["traceback"] = jsonerror["traceback"]
except Exception: # pragma: no cover
pass
if code >= 400 and code <= 499:
raise ClientError(**args)
else: # pragma: no cover
if code >= 500 and code <= 599:
if args["message"] is None:
args["message"] = ("(no message; try disabling " +
"response.stream option in " +
"nilmdb.server for better debugging)")
raise ServerError(**args)
else:
raise Error(**args)
def close(self):
self.session.close()
def _do_req(self, method, url, query_data, body_data, stream):
url = urlparse.urljoin(self.baseurl, url)
try:
response = self.session.request(method, url,
params = query_data,
data = body_data)
except requests.RequestException as e:
raise ServerError(status = "502 Error", url = url,
message = str(e.message))
if response.status_code != 200:
self._handle_error(url, response.status_code, response.content)
self._last_response = response
if response.headers["content-type"] in ("application/json",
"application/x-json-stream"):
return (response, True)
else:
return (response, False)
# Normal versions that return data directly
def _req(self, method, url, query = None, body = None):
"""
Make a request and return the body data as a string or parsed
JSON object, or raise an error if it contained an error.
"""
(response, isjson) = self._do_req(method, url, query, body, False)
if isjson:
return json.loads(response.content)
return response.content
def get(self, url, params = None):
"""Simple GET (parameters in URL)"""
return self._req("GET", url, params, None)
def post(self, url, params = None):
"""Simple POST (parameters in body)"""
return self._req("POST", url, None, params)
def put(self, url, data, params = None):
"""Simple PUT (parameters in URL, data in body)"""
return self._req("PUT", url, params, data)
# Generator versions that return data one line at a time.
def _req_gen(self, method, url, query = None, body = None):
"""
Make a request and return a generator that gives back strings
or JSON decoded lines of the body data, or raise an error if
it contained an error.
"""
(response, isjson) = self._do_req(method, url, query, body, True)
for line in response.iter_lines():
if isjson:
yield json.loads(line)
else:
yield line
def get_gen(self, url, params = None):
"""Simple GET (parameters in URL) returning a generator"""
return self._req_gen("GET", url, params)
# Not much use for a POST or PUT generator, since they don't
# return much data.


@@ -1 +1,3 @@
from .cmdline import Cmdline
"""nilmdb.cmdline"""
from nilmdb.cmdline.cmdline import Cmdline


@@ -1,25 +1,20 @@
"""Command line client functionality"""
from __future__ import absolute_import
import nilmdb
from nilmdb.utils.printf import *
import nilmdb.client
from nilmdb.utils import datetime_tz
import nilmdb.utils.time
import datetime_tz
import dateutil.parser
import sys
import re
import argparse
from argparse import ArgumentDefaultsHelpFormatter as def_form
version = "1.0"
# Valid subcommands. Defined in separate files just to break
# things up -- they're still called with Cmdline as self.
subcommands = [ "info", "create", "list", "metadata", "insert", "extract",
"remove", "destroy" ]
# Import the subcommand modules. Equivalent way of doing this would be
# from . import info as cmd_info
# Import the subcommand modules
subcmd_mods = {}
for cmd in subcommands:
subcmd_mods[cmd] = __import__("nilmdb.cmdline." + cmd, fromlist = [ cmd ])
@@ -31,74 +26,19 @@ class JimArgumentParser(argparse.ArgumentParser):
class Cmdline(object):
def __init__(self, argv):
self.argv = argv
def __init__(self, argv = None):
self.argv = argv or sys.argv[1:]
self.client = None
def arg_time(self, toparse):
"""Parse a time string argument"""
try:
return self.parse_time(toparse).totimestamp()
return nilmdb.utils.time.parse_time(toparse).totimestamp()
except ValueError as e:
raise argparse.ArgumentTypeError(sprintf("%s \"%s\"",
str(e), toparse))
def parse_time(self, toparse):
"""
Parse a free-form time string and return a datetime_tz object.
If the string doesn't contain a timestamp, the current local
timezone is assumed (e.g. from the TZ env var).
"""
# If string doesn't contain at least 6 digits, consider it
# invalid. smartparse might otherwise accept empty strings
# and strings with just separators.
if len(re.findall(r"\d", toparse)) < 6:
raise ValueError("not enough digits for a timestamp")
# Try to just parse the time as given
try:
return datetime_tz.datetime_tz.smartparse(toparse)
except ValueError:
pass
# Try to extract a substring in a condensed format that we expect
# to see in a filename or header comment
res = re.search(r"(^|[^\d])(" # non-numeric or SOL
r"(199\d|2\d\d\d)" # year
r"[-/]?" # separator
r"(0[1-9]|1[012])" # month
r"[-/]?" # separator
r"([012]\d|3[01])" # day
r"[-T ]?" # separator
r"([01]\d|2[0-3])" # hour
r"[:]?" # separator
r"([0-5]\d)" # minute
r"[:]?" # separator
r"([0-5]\d)?" # second
r"([-+]\d\d\d\d)?" # timezone
r")", toparse)
if res is not None:
try:
return datetime_tz.datetime_tz.smartparse(res.group(2))
except ValueError:
pass
# Could also try to successively parse substrings, but let's
# just give up for now.
raise ValueError("unable to parse timestamp")
def time_string(self, timestamp):
"""
Convert a Unix timestamp to a string for printing, using the
local timezone for display (e.g. from the TZ env var).
"""
dt = datetime_tz.datetime_tz.fromtimestamp(timestamp)
return dt.strftime("%a, %d %b %Y %H:%M:%S.%f %z")
def parser_setup(self):
version_string = sprintf("nilmtool %s, client library %s",
version, nilmdb.Client.client_version)
self.parser = JimArgumentParser(add_help = False,
formatter_class = def_form)
@@ -106,7 +46,7 @@ class Cmdline(object):
group.add_argument("-h", "--help", action='help',
help='show this help message and exit')
group.add_argument("-V", "--version", action="version",
version=version_string)
version = nilmdb.__version__)
group = self.parser.add_argument_group("Server")
group.add_argument("-u", "--url", action="store",

View File

@@ -1,17 +1,25 @@
from __future__ import absolute_import
from nilmdb.utils.printf import *
import nilmdb
import nilmdb.client
from argparse import ArgumentDefaultsHelpFormatter as def_form
from argparse import RawDescriptionHelpFormatter as raw_form
def setup(self, sub):
cmd = sub.add_parser("create", help="Create a new stream",
formatter_class = def_form,
formatter_class = raw_form,
description="""
Create a new empty stream at the
specified path and with the specified
layout type.
""")
Create a new empty stream at the specified path and with the specified
layout type.
Layout types are of the format: type_count
'type' is a data type like 'float32', 'float64', 'uint16', 'int32', etc.
'count' is the number of columns of this type.
For example, 'float32_8' means the data for this stream has 8 columns of
32-bit floating point values.
""")
cmd.set_defaults(handler = cmd_create)
group = cmd.add_argument_group("Required arguments")
group.add_argument("path",

View File

@@ -1,5 +1,5 @@
from __future__ import absolute_import
from nilmdb.utils.printf import *
import nilmdb
import nilmdb.client
from argparse import ArgumentDefaultsHelpFormatter as def_form

View File

@@ -1,8 +1,6 @@
from __future__ import absolute_import
from __future__ import print_function
from nilmdb.utils.printf import *
import nilmdb.client
import sys
def setup(self, sub):
cmd = sub.add_parser("extract", help="Extract data",
@@ -28,6 +26,8 @@ def setup(self, sub):
group.add_argument("-a", "--annotate", action="store_true",
help="Include comments with some information "
"about the stream")
group.add_argument("-T", "--timestamp-raw", action="store_true",
help="Show raw timestamps in annotated information")
group.add_argument("-c", "--count", action="store_true",
help="Just output a count of matched data points")
@@ -42,11 +42,16 @@ def cmd_extract(self):
self.die("error getting stream info for path %s", self.args.path)
layout = streams[0][1]
if self.args.timestamp_raw:
time_string = repr
else:
time_string = nilmdb.utils.time.format_time
if self.args.annotate:
printf("# path: %s\n", self.args.path)
printf("# layout: %s\n", layout)
printf("# start: %s\n", self.time_string(self.args.start))
printf("# end: %s\n", self.time_string(self.args.end))
printf("# start: %s\n", time_string(self.args.start))
printf("# end: %s\n", time_string(self.args.end))
printed = False
for dataline in self.client.stream_extract(self.args.path,

View File

@@ -1,5 +1,6 @@
from __future__ import absolute_import
import nilmdb
from nilmdb.utils.printf import *
from nilmdb.utils import human_size
from argparse import ArgumentDefaultsHelpFormatter as def_form
@@ -14,8 +15,10 @@ def setup(self, sub):
def cmd_info(self):
"""Print info about the server"""
printf("Client library version: %s\n", self.client.client_version)
printf("Client version: %s\n", nilmdb.__version__)
printf("Server version: %s\n", self.client.version())
printf("Server URL: %s\n", self.client.geturl())
printf("Server database path: %s\n", self.client.dbpath())
printf("Server database size: %s\n", self.client.dbsize())
dbinfo = self.client.dbinfo()
printf("Server database path: %s\n", dbinfo["path"])
printf("Server database size: %s\n", human_size(dbinfo["size"]))
printf("Server database free space: %s\n", human_size(dbinfo["free"]))

View File

@@ -1,7 +1,8 @@
from __future__ import absolute_import
from nilmdb.utils.printf import *
import nilmdb
import nilmdb.client
import nilmdb.timestamper
import nilmdb.utils.timestamper as timestamper
import nilmdb.utils.time
import sys
@@ -53,8 +54,6 @@ def cmd_insert(self):
if len(streams) != 1:
self.die("error getting stream info for path %s", self.args.path)
layout = streams[0][1]
if self.args.start and len(self.args.file) != 1:
self.die("error: --start can only be used with one input file")
@@ -69,13 +68,13 @@ def cmd_insert(self):
# Build a timestamper for this file
if self.args.none:
ts = nilmdb.timestamper.TimestamperNull(infile)
ts = timestamper.TimestamperNull(infile)
else:
if self.args.start:
start = self.args.start
else:
try:
start = self.parse_time(filename)
start = nilmdb.utils.time.parse_time(filename)
except ValueError:
self.die("error extracting time from filename '%s'",
filename)
@@ -84,7 +83,7 @@ def cmd_insert(self):
self.die("error: --rate is needed, but was not specified")
rate = self.args.rate
ts = nilmdb.timestamper.TimestamperRate(infile, start, rate)
ts = timestamper.TimestamperRate(infile, start, rate)
# Print info
if not self.args.quiet:
@@ -93,7 +92,7 @@ def cmd_insert(self):
# Insert the data
try:
result = self.client.stream_insert(self.args.path, ts)
self.client.stream_insert(self.args.path, ts)
except nilmdb.client.Error as e:
# TODO: It would be nice to be able to offer better errors
# here, particularly in the case of overlap, which just shows

View File

@@ -1,6 +1,5 @@
from __future__ import absolute_import
from nilmdb.utils.printf import *
import nilmdb.client
import nilmdb.utils.time
import fnmatch
import argparse
@@ -28,6 +27,8 @@ def setup(self, sub):
group = cmd.add_argument_group("Interval details")
group.add_argument("-d", "--detail", action="store_true",
help="Show available data time intervals")
group.add_argument("-T", "--timestamp-raw", action="store_true",
help="Show raw timestamps in time intervals")
group.add_argument("-s", "--start",
metavar="TIME", type=self.arg_time,
help="Starting timestamp (free-form, inclusive)")
@@ -47,12 +48,18 @@ def cmd_list_verify(self):
self.args.path = self.args.path_positional
if self.args.start is not None and self.args.end is not None:
if self.args.start > self.args.end:
self.parser.error("start is after end")
if self.args.start >= self.args.end:
self.parser.error("start must precede end")
def cmd_list(self):
"""List available streams"""
streams = self.client.stream_list()
if self.args.timestamp_raw:
time_string = repr
else:
time_string = nilmdb.utils.time.format_time
for (path, layout) in streams:
if not (fnmatch.fnmatch(path, self.args.path) and
fnmatch.fnmatch(layout, self.args.layout)):
@@ -65,9 +72,7 @@ def cmd_list(self):
printed = False
for (start, end) in self.client.stream_intervals(path, self.args.start,
self.args.end):
printf(" [ %s -> %s ]\n",
self.time_string(start),
self.time_string(end))
printf(" [ %s -> %s ]\n", time_string(start), time_string(end))
printed = True
if not printed:
printf(" (no intervals)\n")

View File

@@ -1,5 +1,5 @@
from __future__ import absolute_import
from nilmdb.utils.printf import *
import nilmdb
import nilmdb.client
def setup(self, sub):

View File

@@ -1,8 +1,6 @@
from __future__ import absolute_import
from __future__ import print_function
from nilmdb.utils.printf import *
import nilmdb
import nilmdb.client
import sys
def setup(self, sub):
cmd = sub.add_parser("remove", help="Remove data",
@@ -10,8 +8,7 @@ def setup(self, sub):
Remove all data from a specified time range within a
stream.
""")
cmd.set_defaults(verify = cmd_remove_verify,
handler = cmd_remove)
cmd.set_defaults(handler = cmd_remove)
group = cmd.add_argument_group("Data selection")
group.add_argument("path",
@@ -27,11 +24,6 @@ def setup(self, sub):
group.add_argument("-c", "--count", action="store_true",
help="Output number of data points removed")
def cmd_remove_verify(self):
if self.args.start is not None and self.args.end is not None:
if self.args.start > self.args.end:
self.parser.error("start is after end")
def cmd_remove(self):
try:
count = self.client.stream_remove(self.args.path,

View File

@@ -1,230 +0,0 @@
"""HTTP client library"""
from __future__ import absolute_import
from nilmdb.utils.printf import *
import nilmdb.utils
import time
import sys
import re
import os
import simplejson as json
import urlparse
import pycurl
import cStringIO
class Error(Exception):
"""Base exception for both ClientError and ServerError responses"""
def __init__(self,
status = "Unspecified error",
message = None,
url = None,
traceback = None):
Exception.__init__(self, status)
self.status = status # e.g. "400 Bad Request"
self.message = message # textual message from the server
self.url = url # URL we were requesting
self.traceback = traceback # server traceback, if available
def __str__(self):
s = sprintf("[%s]", self.status)
if self.message:
s += sprintf(" %s", self.message)
if self.traceback: # pragma: no cover
s += sprintf("\nServer traceback:\n%s", self.traceback)
return s
def __repr__(self): # pragma: no cover
s = sprintf("[%s]", self.status)
if self.message:
s += sprintf(" %s", self.message)
if self.url:
s += sprintf(" (%s)", self.url)
if self.traceback:
s += sprintf("\nServer traceback:\n%s", self.traceback)
return s
class ClientError(Error):
pass
class ServerError(Error):
pass
class HTTPClient(object):
"""Class to manage and perform HTTP requests from the client"""
def __init__(self, baseurl = ""):
"""If baseurl is supplied, all other functions that take
a URL can be given a relative URL instead."""
# Verify / clean up URL
reparsed = urlparse.urlparse(baseurl).geturl()
if '://' not in reparsed:
reparsed = urlparse.urlparse("http://" + baseurl).geturl()
self.baseurl = reparsed
self.curl = pycurl.Curl()
self.curl.setopt(pycurl.SSL_VERIFYHOST, 2)
self.curl.setopt(pycurl.FOLLOWLOCATION, 1)
self.curl.setopt(pycurl.MAXREDIRS, 5)
self._setup_url()
def _setup_url(self, url = "", params = ""):
url = urlparse.urljoin(self.baseurl, url)
if params:
url = urlparse.urljoin(
url, "?" + nilmdb.utils.urllib.urlencode(params))
self.curl.setopt(pycurl.URL, url)
self.url = url
def _check_error(self, body = None):
code = self.curl.getinfo(pycurl.RESPONSE_CODE)
if code == 200:
return
# Default variables for exception
args = { "url" : self.url,
"status" : str(code),
"message" : None,
"traceback" : None }
try:
# Fill with server-provided data if we can
jsonerror = json.loads(body)
args["status"] = jsonerror["status"]
args["message"] = jsonerror["message"]
args["traceback"] = jsonerror["traceback"]
except Exception: # pragma: no cover
pass
if code >= 400 and code <= 499:
raise ClientError(**args)
else: # pragma: no cover
if code >= 500 and code <= 599:
if args["message"] is None:
args["message"] = ("(no message; try disabling " +
"response.stream option in " +
"nilmdb.server for better debugging)")
raise ServerError(**args)
else:
raise Error(**args)
def _req_generator(self, url, params):
"""
Like self._req(), but runs the perform in a separate thread.
It returns a generator that spits out arbitrary-sized chunks
of the resulting data, instead of using the WRITEFUNCTION
callback.
"""
self._setup_url(url, params)
self._status = None
error_body = ""
self._headers = ""
def header_callback(data):
if self._status is None:
self._status = int(data.split(" ")[1])
self._headers += data
self.curl.setopt(pycurl.HEADERFUNCTION, header_callback)
def func(callback):
self.curl.setopt(pycurl.WRITEFUNCTION, callback)
self.curl.perform()
try:
for i in nilmdb.utils.Iteratorizer(func):
if self._status == 200:
# If we had a 200 response, yield the data to the caller.
yield i
else:
# Otherwise, collect it into an error string.
error_body += i
except pycurl.error as e:
raise ServerError(status = "502 Error",
url = self.url,
message = e[1])
# Raise an exception if there was an error
self._check_error(error_body)
def _req(self, url, params):
"""
GET or POST that returns raw data. Returns the body
data as a string, or raises an error if it contained an error.
"""
self._setup_url(url, params)
body = cStringIO.StringIO()
self.curl.setopt(pycurl.WRITEFUNCTION, body.write)
self._headers = ""
def header_callback(data):
self._headers += data
self.curl.setopt(pycurl.HEADERFUNCTION, header_callback)
try:
self.curl.perform()
except pycurl.error as e:
raise ServerError(status = "502 Error",
url = self.url,
message = e[1])
body_str = body.getvalue()
# Raise an exception if there was an error
self._check_error(body_str)
return body_str
def close(self):
self.curl.close()
def _iterate_lines(self, it):
"""
Given an iterator that returns arbitrarily-sized chunks
of data, return '\n'-delimited lines of text
"""
partial = ""
for chunk in it:
partial += chunk
lines = partial.split("\n")
for line in lines[0:-1]:
yield line
partial = lines[-1]
if partial != "":
yield partial
# Non-generator versions
def _doreq(self, url, params, retjson):
"""
Perform a request, and return the body.
url: URL to request (relative to baseurl)
params: dictionary of query parameters
retjson: expect JSON and return python objects instead of string
"""
out = self._req(url, params)
if retjson:
return json.loads(out)
return out
def get(self, url, params = None, retjson = True):
"""Simple GET"""
self.curl.setopt(pycurl.UPLOAD, 0)
return self._doreq(url, params, retjson)
def put(self, url, postdata, params = None, retjson = True):
"""Simple PUT"""
self._setup_url(url, params)
data = cStringIO.StringIO(postdata)
self.curl.setopt(pycurl.UPLOAD, 1)
self.curl.setopt(pycurl.READFUNCTION, data.read)
return self._doreq(url, params, retjson)
# Generator versions
def _doreq_gen(self, url, params, retjson):
"""
Perform a request, and return lines of the body in a generator.
url: URL to request (relative to baseurl)
params: dictionary of query parameters
retjson: expect JSON and yield python objects instead of strings
"""
for line in self._iterate_lines(self._req_generator(url, params)):
if retjson:
yield json.loads(line)
else:
yield line
def get_gen(self, url, params = None, retjson = True):
"""Simple GET, returning a generator"""
self.curl.setopt(pycurl.UPLOAD, 0)
return self._doreq_gen(url, params, retjson)
def put_gen(self, url, postdata, params = None, retjson = True):
"""Simple PUT, returning a generator"""
self._setup_url(url, params)
data = cStringIO.StringIO(postdata)
self.curl.setopt(pycurl.UPLOAD, 1)
self.curl.setopt(pycurl.READFUNCTION, data.read)
return self._doreq_gen(url, params, retjson)

View File

@@ -0,0 +1 @@
# Command line scripts

82
nilmdb/scripts/nilmdb_server.py Executable file
View File

@@ -0,0 +1,82 @@
#!/usr/bin/python
import nilmdb.server
import argparse
import os
import socket
def main():
"""Main entry point for the 'nilmdb-server' command line script"""
parser = argparse.ArgumentParser(
description = 'Run the NilmDB server',
formatter_class = argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument("-V", "--version", action="version",
version = nilmdb.__version__)
group = parser.add_argument_group("Standard options")
group.add_argument('-a', '--address',
help = 'Only listen on the given address',
default = '0.0.0.0')
group.add_argument('-p', '--port', help = 'Listen on the given port',
type = int, default = 12380)
group.add_argument('-d', '--database', help = 'Database directory',
default = os.path.join(os.getcwd(), "db"))
group.add_argument('-q', '--quiet', help = 'Silence output',
action = 'store_true')
group = parser.add_argument_group("Debug options")
group.add_argument('-y', '--yappi', help = 'Run under yappi profiler and '
'invoke interactive shell afterwards',
action = 'store_true')
args = parser.parse_args()
# Create database object. Needs to be serialized before passing
# to the Server.
db = nilmdb.utils.serializer_proxy(nilmdb.NilmDB)(args.database)
# Configure the server
if args.quiet:
embedded = True
else:
embedded = False
server = nilmdb.server.Server(db,
host = args.address,
port = args.port,
embedded = embedded)
# Print info
if not args.quiet:
print "Database: %s" % (os.path.realpath(args.database))
if args.address == '0.0.0.0' or args.address == '::':
host = socket.getfqdn()
else:
host = args.address
print "Server URL: http://%s:%d/" % ( host, args.port)
print "----"
# Run it
if args.yappi:
print "Running in yappi"
try:
import yappi
yappi.start()
server.start(blocking = True)
finally:
yappi.stop()
yappi.print_stats(sort_type = yappi.SORTTYPE_TTOT, limit = 50)
from IPython import embed
embed(header = "Use the yappi object to explore further, "
"quit to exit")
else:
server.start(blocking = True)
# Clean up
if not args.quiet:
print "Closing database"
db.close()
if __name__ == "__main__":
main()

10
nilmdb/scripts/nilmtool.py Executable file
View File

@@ -0,0 +1,10 @@
#!/usr/bin/python
import nilmdb.cmdline
def main():
"""Main entry point for the 'nilmtool' command line script"""
nilmdb.cmdline.Cmdline().run()
if __name__ == "__main__":
main()

22
nilmdb/server/__init__.py Normal file
View File

@@ -0,0 +1,22 @@
"""nilmdb.server"""
from __future__ import absolute_import
# Try to set up pyximport to automatically rebuild Cython modules. If
# this doesn't work, it's OK, as long as the modules were built externally.
# (e.g. python setup.py build_ext --inplace)
try: # pragma: no cover
import Cython
import distutils.version
if (distutils.version.LooseVersion(Cython.__version__) <
distutils.version.LooseVersion("0.17")): # pragma: no cover
raise ImportError("Cython version too old")
import pyximport
pyximport.install(inplace = True, build_in_temp = False)
except (ImportError, TypeError): # pragma: no cover
pass
import nilmdb.server.layout
from nilmdb.server.nilmdb import NilmDB
from nilmdb.server.server import Server
from nilmdb.server.errors import NilmDBError, StreamError, OverlapError

View File

@@ -1,18 +1,28 @@
# Fixed record size bulk data storage
# Need absolute_import so that "import nilmdb" won't pull in
# nilmdb.py, but will pull the parent nilmdb module instead.
from __future__ import absolute_import
from __future__ import division
import nilmdb
from nilmdb.utils.printf import *
import os
import sys
import cPickle as pickle
import struct
import fnmatch
import mmap
import re
# If we have the faulthandler module, use it. All of the mmap stuff
# might trigger a SIGSEGV or SIGBUS if we're not careful, and
# faulthandler will give a traceback in that case. (the Python
# interpreter will still die either way).
try: # pragma: no cover
import faulthandler
faulthandler.enable()
except: # pragma: no cover
pass
# Up to 256 open file descriptors at any given time.
# These variables are global so they can be used in the decorator arguments.
table_cache_size = 16
@@ -75,7 +85,7 @@ class BulkData(object):
# Get layout, and build format string for struct module
try:
layout = nilmdb.layout.get_named(layout_name)
layout = nilmdb.server.layout.get_named(layout_name)
struct_fmt = '<d' # Little endian, double timestamp
struct_mapping = {
"int8": 'b',
@@ -89,8 +99,7 @@ class BulkData(object):
"float32": 'f',
"float64": 'd',
}
for n in range(layout.count):
struct_fmt += struct_mapping[layout.datatype]
struct_fmt += struct_mapping[layout.datatype] * layout.count
except KeyError:
raise ValueError("no such layout, or bad data types")
@@ -162,6 +171,52 @@ class BulkData(object):
ospath = os.path.join(self.root, *elements)
return Table(ospath)
@nilmdb.utils.must_close(wrap_verify = True)
class File(object):
"""Object representing a single file on disk. Data can be appended,
or the self.mmap handle can be used for random reads."""
def __init__(self, root, subdir, filename):
# Create path if it doesn't exist
try:
os.mkdir(os.path.join(root, subdir))
except OSError:
pass
# Open/create file
self._f = open(os.path.join(root, subdir, filename), "a+b", 0)
# Seek to end, and get size
self._f.seek(0, 2)
self.size = self._f.tell()
# Open mmap object
self.mmap = None
self._mmap_reopen()
def _mmap_reopen(self):
if self.size == 0:
# Don't mmap if the file is empty; it would fail
pass
elif self.mmap is None:
# Not opened yet, so open it
self.mmap = mmap.mmap(self._f.fileno(), 0)
else:
# Already opened, so just resize it
self.mmap.resize(self.size)
def close(self):
if self.mmap is not None:
self.mmap.close()
self._f.close()
def append(self, data):
# Write data, flush it, and resize our mmap accordingly
self._f.write(data)
self._f.flush()
self.size += len(data)
self._mmap_reopen()
@nilmdb.utils.must_close(wrap_verify = True)
class Table(object):
"""Tools to help access a single table (data at a specific OS path)."""
@@ -183,12 +238,12 @@ class Table(object):
packer = struct.Struct(struct_fmt)
rows_per_file = max(file_size // packer.size, 1)
format = { "rows_per_file": rows_per_file,
"files_per_dir": files_per_dir,
"struct_fmt": struct_fmt,
"version": 1 }
fmt = { "rows_per_file": rows_per_file,
"files_per_dir": files_per_dir,
"struct_fmt": struct_fmt,
"version": 1 }
with open(os.path.join(root, "_format"), "wb") as f:
pickle.dump(format, f, 2)
pickle.dump(fmt, f, 2)
# Normal methods
def __init__(self, root):
@@ -197,22 +252,22 @@ class Table(object):
# Load the format and build packer
with open(os.path.join(self.root, "_format"), "rb") as f:
format = pickle.load(f)
fmt = pickle.load(f)
if format["version"] != 1: # pragma: no cover (just future proofing)
raise NotImplementedError("version " + format["version"] +
if fmt["version"] != 1: # pragma: no cover (just future proofing)
raise NotImplementedError("version " + fmt["version"] +
" bulk data store not supported")
self.rows_per_file = format["rows_per_file"]
self.files_per_dir = format["files_per_dir"]
self.packer = struct.Struct(format["struct_fmt"])
self.rows_per_file = fmt["rows_per_file"]
self.files_per_dir = fmt["files_per_dir"]
self.packer = struct.Struct(fmt["struct_fmt"])
self.file_size = self.packer.size * self.rows_per_file
# Find nrows
self.nrows = self._get_nrows()
def close(self):
self.mmap_open.cache_remove_all()
self.file_open.cache_remove_all()
# Internal helpers
def _get_nrows(self):
@@ -276,37 +331,11 @@ class Table(object):
# Cache open files
@nilmdb.utils.lru_cache(size = fd_cache_size,
keys = slice(0,3), # exclude newsize
onremove = lambda x: x.close())
def mmap_open(self, subdir, filename, newsize = None):
onremove = lambda f: f.close())
def file_open(self, subdir, filename):
"""Open and map a given 'subdir/filename' (relative to self.root).
Will be automatically closed when evicted from the cache.
If 'newsize' is provided, the file is truncated to the given
size before the mapping is returned. (Note that the LRU cache
on this function means the truncate will only happen if the
object isn't already cached; mmap.resize should be used too.)"""
try:
os.mkdir(os.path.join(self.root, subdir))
except OSError:
pass
f = open(os.path.join(self.root, subdir, filename), "a+", 0)
if newsize is not None:
# mmap can't map a zero-length file, so this allows the
# caller to set the filesize between file creation and
# mmap.
f.truncate(newsize)
mm = mmap.mmap(f.fileno(), 0)
return mm
def mmap_open_resize(self, subdir, filename, newsize):
"""Open and map a given 'subdir/filename' (relative to self.root).
The file is resized to the given size."""
# Pass new size to mmap_open
mm = self.mmap_open(subdir, filename, newsize)
# In case we got a cached copy, need to call mm.resize too.
mm.resize(newsize)
return mm
Will be automatically closed when evicted from the cache."""
return File(self.root, subdir, filename)
def append(self, data):
"""Append the data and flush it to disk.
@@ -318,14 +347,13 @@ class Table(object):
(subdir, fname, offset, count) = self._offset_from_row(self.nrows)
if count > remaining:
count = remaining
newsize = offset + count * self.packer.size
mm = self.mmap_open_resize(subdir, fname, newsize)
mm.seek(offset)
f = self.file_open(subdir, fname)
# Write the data
for i in xrange(count):
row = dataiter.next()
mm.write(self.packer.pack(*row))
f.append(self.packer.pack(*row))
remaining -= count
self.nrows += count
@@ -352,7 +380,7 @@ class Table(object):
(subdir, filename, offset, count) = self._offset_from_row(row)
if count > remaining:
count = remaining
mm = self.mmap_open(subdir, filename)
mm = self.file_open(subdir, filename).mmap
for i in xrange(count):
ret.append(list(self.packer.unpack_from(mm, offset)))
offset += self.packer.size
@@ -364,7 +392,7 @@ class Table(object):
if key < 0 or key >= self.nrows:
raise IndexError("Index out of range")
(subdir, filename, offset, count) = self._offset_from_row(key)
mm = self.mmap_open(subdir, filename)
mm = self.file_open(subdir, filename).mmap
# unpack_from ignores the mmap object's current seek position
return list(self.packer.unpack_from(mm, offset))
@@ -411,8 +439,8 @@ class Table(object):
# are generally easier if we don't have to special-case that.
if (len(merged) == 1 and
merged[0][0] == 0 and merged[0][1] == self.rows_per_file):
# Close potentially open file in mmap_open LRU cache
self.mmap_open.cache_remove(self, subdir, filename)
# Close potentially open file in file_open LRU cache
self.file_open.cache_remove(self, subdir, filename)
# Delete files
os.remove(datafile)
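
To make the on-disk row format concrete, here is a small worked example for a hypothetical 'float32_8' stream; the '<d' timestamp prefix and the per-type codes come from the struct_mapping shown above:

import struct

# Little-endian double timestamp, followed by 8 float32 data columns.
struct_fmt = '<d' + 'f' * 8
packer = struct.Struct(struct_fmt)
print packer.size              # 40 bytes per row: 8 (timestamp) + 4 * 8 (data)

# One packed row: timestamp plus eight data values.
row = [ 1234567890.0 ] + [ 0.0 ] * 8
packed = packer.pack(*row)
assert len(packed) == packer.size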

12
nilmdb/server/errors.py Normal file
View File

@@ -0,0 +1,12 @@
"""Exceptions"""
class NilmDBError(Exception):
"""Base exception for NilmDB errors"""
def __init__(self, message = "Unspecified error"):
Exception.__init__(self, message)
class StreamError(NilmDBError):
pass
class OverlapError(NilmDBError):
pass

View File

@@ -36,7 +36,7 @@ cdef class Interval:
"""
'start' and 'end' are arbitrary floats that represent time
"""
if start > end:
if start >= end:
# Explicitly disallow zero-width intervals (since they're half-open)
raise IntervalError("start %s must precede end %s" % (start, end))
self.start = float(start)

View File

@@ -154,7 +154,7 @@ class Parser(object):
layout, into an internal data structure suitable for a
pytables 'table.append(parser.data)'.
"""
cdef double last_ts = 0, ts
cdef double last_ts = -1e12, ts
cdef int n = 0, i
cdef char *line
@@ -170,7 +170,7 @@ class Parser(object):
if line[0] == '\#':
continue
(ts, row) = self.layout.parse(line)
if ts < last_ts:
if ts <= last_ts:
raise ValueError("timestamp is not "
"monotonically increasing")
last_ts = ts

View File

@@ -7,25 +7,21 @@ Object that represents a NILM database file.
Manages both the SQL database and the table storage backend.
"""
# Need absolute_import so that "import nilmdb" won't pull in nilmdb.py,
# but will pull the nilmdb module instead.
# Need absolute_import so that "import nilmdb" won't pull in
# nilmdb.py, but will pull the parent nilmdb module instead.
from __future__ import absolute_import
import nilmdb
from nilmdb.utils.printf import *
from nilmdb.server.interval import (Interval, DBInterval,
IntervalSet, IntervalError)
from nilmdb.server import bulkdata
from nilmdb.server.errors import NilmDBError, StreamError, OverlapError
import sqlite3
import time
import sys
import os
import errno
import bisect
import pyximport
pyximport.install()
from nilmdb.interval import Interval, DBInterval, IntervalSet, IntervalError
from . import bulkdata
# Note about performance and transactions:
#
# Committing a transaction in the default sync mode (PRAGMA synchronous=FULL)
@@ -77,23 +73,15 @@ _sql_schema_updates = {
""",
}
class NilmDBError(Exception):
"""Base exception for NilmDB errors"""
def __init__(self, message = "Unspecified error"):
Exception.__init__(self, message)
class StreamError(NilmDBError):
pass
class OverlapError(NilmDBError):
pass
@nilmdb.utils.must_close()
class NilmDB(object):
verbose = 0
def __init__(self, basepath, sync=True, max_results=None,
bulkdata_args={}):
bulkdata_args=None):
if bulkdata_args is None:
bulkdata_args = {}
# set up path
self.basepath = os.path.abspath(basepath)
@@ -109,12 +97,7 @@ class NilmDB(object):
# SQLite database too
sqlfilename = os.path.join(self.basepath, "data.sql")
# We use check_same_thread = False, assuming that the rest
# of the code (e.g. Server) will be smart and not access this
# database from multiple threads simultaneously. Otherwise
# false positives will occur when the database is only opened
# in one thread, and only accessed in another.
self.con = sqlite3.connect(sqlfilename, check_same_thread = False)
self.con = sqlite3.connect(sqlfilename, check_same_thread = True)
self._sql_schema_update()
# See big comment at top about the performance implications of this
@@ -154,6 +137,15 @@ class NilmDB(object):
with self.con:
cur.execute("PRAGMA user_version = {v:d}".format(v=version))
def _check_user_times(self, start, end):
if start is None:
start = -1e12
if end is None:
end = 1e12
if start >= end:
raise NilmDBError("start must precede end")
return (start, end)
@nilmdb.utils.lru_cache(size = 16)
def _get_intervals(self, stream_id):
"""
@@ -169,7 +161,7 @@ class NilmDB(object):
iset += DBInterval(start_time, end_time,
start_time, end_time,
start_pos, end_pos)
except IntervalError as e: # pragma: no cover
except IntervalError: # pragma: no cover
raise NilmDBError("unexpected overlap in ranges table!")
return iset
@@ -315,7 +307,8 @@ class NilmDB(object):
"""
stream_id = self._stream_id(path)
intervals = self._get_intervals(stream_id)
requested = Interval(start or 0, end or 1e12)
(start, end) = self._check_user_times(start, end)
requested = Interval(start, end)
result = []
for n, i in enumerate(intervals.intersection(requested)):
if n >= self.max_results:
@@ -408,7 +401,7 @@ class NilmDB(object):
path: Path at which to add the data
start: Starting timestamp
end: Ending timestamp
data: Rows of data, to be passed to PyTable's table.append
data: Rows of data, to be passed to bulkdata table.append
method. E.g. nilmdb.layout.Parser.data
"""
# First check for basic overlap using timestamp info given.
@@ -429,7 +422,7 @@ class NilmDB(object):
self._add_interval(stream_id, interval, row_start, row_end)
# And that's all
return "ok"
return
def _find_start(self, table, dbinterval):
"""
@@ -487,7 +480,8 @@ class NilmDB(object):
stream_id = self._stream_id(path)
table = self.data.getnode(path)
intervals = self._get_intervals(stream_id)
requested = Interval(start or 0, end or 1e12)
(start, end) = self._check_user_times(start, end)
requested = Interval(start, end)
result = []
matched = 0
remaining = self.max_results
@@ -533,12 +527,10 @@ class NilmDB(object):
stream_id = self._stream_id(path)
table = self.data.getnode(path)
intervals = self._get_intervals(stream_id)
to_remove = Interval(start or 0, end or 1e12)
(start, end) = self._check_user_times(start, end)
to_remove = Interval(start, end)
removed = 0
if start == end:
return 0
# Can't remove intervals from within the iterator, so we need to
# remember what's currently in the intersection now.
all_candidates = list(intervals.intersection(to_remove, orig = True))

View File

@@ -1,34 +1,24 @@
"""CherryPy-based server for accessing NILM database via HTTP"""
# Need absolute_import so that "import nilmdb" won't pull in nilmdb.py,
# but will pull the nilmdb module instead.
# Need absolute_import so that "import nilmdb" won't pull in
# nilmdb.py, but will pull the nilmdb module instead.
from __future__ import absolute_import
from nilmdb.utils.printf import *
import nilmdb
from nilmdb.utils.printf import *
from nilmdb.server.errors import NilmDBError
import cherrypy
import sys
import time
import os
import simplejson as json
import decorator
import traceback
from nilmdb.nilmdb import NilmDBError
try:
import cherrypy
cherrypy.tools.json_out
except: # pragma: no cover
sys.stderr.write("Cherrypy 3.2+ required\n")
sys.exit(1)
import psutil
class NilmApp(object):
def __init__(self, db):
self.db = db
version = "1.2"
# Decorators
def chunked_response(func):
"""Decorator to enable chunked responses."""
@@ -37,6 +27,14 @@ def chunked_response(func):
func._cp_config = { 'response.stream': True }
return func
def response_type(content_type):
"""Return a decorator-generating function that sets the
response type to the specified string."""
def wrapper(func, *args, **kwargs):
cherrypy.response.headers['Content-Type'] = content_type
return func(*args, **kwargs)
return decorator.decorator(wrapper)
@decorator.decorator
def workaround_cp_bug_1200(func, *args, **kwargs): # pragma: no cover
"""Decorator to work around CherryPy bug #1200 in a response
@@ -50,7 +48,7 @@ def workaround_cp_bug_1200(func, *args, **kwargs): # pragma: no cover
try:
for val in func(*args, **kwargs):
yield val
except (LookupError, UnicodeError) as e:
except (LookupError, UnicodeError):
raise Exception("bug workaround; real exception is:\n" +
traceback.format_exc())
@@ -73,13 +71,23 @@ def exception_to_httperror(*expected):
# care of that.
return decorator.decorator(wrapper)
# Custom Cherrypy tools
def allow_methods(methods):
method = cherrypy.request.method.upper()
if method not in methods:
if method in cherrypy.request.methods_with_bodies:
cherrypy.request.body.read()
allowed = ', '.join(methods)
cherrypy.response.headers['Allow'] = allowed
raise cherrypy.HTTPError(405, method + " not allowed; use " + allowed)
cherrypy.tools.allow_methods = cherrypy.Tool('before_handler', allow_methods)
# CherryPy apps
class Root(NilmApp):
"""Root application for NILM database"""
def __init__(self, db, version):
def __init__(self, db):
super(Root, self).__init__(db)
self.server_version = version
# /
@cherrypy.expose
@@ -95,19 +103,18 @@ class Root(NilmApp):
@cherrypy.expose
@cherrypy.tools.json_out()
def version(self):
return self.server_version
return nilmdb.__version__
# /dbpath
# /dbinfo
@cherrypy.expose
@cherrypy.tools.json_out()
def dbpath(self):
return self.db.get_basepath()
# /dbsize
@cherrypy.expose
@cherrypy.tools.json_out()
def dbsize(self):
return nilmdb.utils.du(self.db.get_basepath())
def dbinfo(self):
"""Return a dictionary with the database path,
size of the database in bytes, and free disk space in bytes"""
path = self.db.get_basepath()
return { "path": path,
"size": nilmdb.utils.du(path),
"free": psutil.disk_usage(path).free }
class Stream(NilmApp):
"""Stream-specific operations"""
@@ -127,6 +134,7 @@ class Stream(NilmApp):
@cherrypy.expose
@cherrypy.tools.json_out()
@exception_to_httperror(NilmDBError, ValueError)
@cherrypy.tools.allow_methods(methods = ["POST"])
def create(self, path, layout):
"""Create a new stream in the database. Provide path
and one of the nilmdb.layout.layouts keys.
@@ -137,6 +145,7 @@ class Stream(NilmApp):
@cherrypy.expose
@cherrypy.tools.json_out()
@exception_to_httperror(NilmDBError)
@cherrypy.tools.allow_methods(methods = ["POST"])
def destroy(self, path):
"""Delete a stream and its associated data."""
return self.db.stream_destroy(path)
@@ -151,7 +160,7 @@ class Stream(NilmApp):
matching the given keys."""
try:
data = self.db.stream_get_metadata(path)
except nilmdb.nilmdb.StreamError as e:
except nilmdb.server.nilmdb.StreamError as e:
raise cherrypy.HTTPError("404 Not Found", e.message)
if key is None: # If no keys specified, return them all
key = data.keys()
@@ -169,29 +178,29 @@ class Stream(NilmApp):
@cherrypy.expose
@cherrypy.tools.json_out()
@exception_to_httperror(NilmDBError, LookupError, TypeError)
@cherrypy.tools.allow_methods(methods = ["POST"])
def set_metadata(self, path, data):
"""Set metadata for the named stream, replacing any
existing metadata. Data should be a json-encoded
dictionary"""
data_dict = json.loads(data)
self.db.stream_set_metadata(path, data_dict)
return "ok"
# /stream/update_metadata?path=/newton/prep&data=<json>
@cherrypy.expose
@cherrypy.tools.json_out()
@exception_to_httperror(NilmDBError, LookupError, TypeError)
@cherrypy.tools.allow_methods(methods = ["POST"])
def update_metadata(self, path, data):
"""Update metadata for the named stream. Data
should be a json-encoded dictionary"""
data_dict = json.loads(data)
self.db.stream_update_metadata(path, data_dict)
return "ok"
# /stream/insert?path=/newton/prep
@cherrypy.expose
@cherrypy.tools.json_out()
#@cherrypy.tools.disable_prb()
@cherrypy.tools.allow_methods(methods = ["PUT"])
def insert(self, path, start, end):
"""
Insert new data into the database. Provide textual data
@@ -199,12 +208,9 @@ class Stream(NilmApp):
"""
# Important that we always read the input before throwing any
# errors, to keep lengths happy for persistent connections.
# However, CherryPy 3.2.2 has a bug where this fails for GET
# requests, so catch that. (issue #1134)
try:
body = cherrypy.request.body.read()
except TypeError:
raise cherrypy.HTTPError("400 Bad Request", "No request body")
# Note that CherryPy 3.2.2 has a bug where this fails for GET
# requests, if we ever want to handle those (issue #1134)
body = cherrypy.request.body.read()
# Check path and get layout
streams = self.db.stream_list(path = path)
@@ -214,44 +220,43 @@ class Stream(NilmApp):
# Parse the input data
try:
parser = nilmdb.layout.Parser(layout)
parser = nilmdb.server.layout.Parser(layout)
parser.parse(body)
except nilmdb.layout.ParserError as e:
except nilmdb.server.layout.ParserError as e:
raise cherrypy.HTTPError("400 Bad Request",
"error parsing input data: " +
e.message)
if (not parser.min_timestamp or not parser.max_timestamp or
not len(parser.data)):
raise cherrypy.HTTPError("400 Bad Request",
"no data provided")
# Check limits
start = float(start)
end = float(end)
if parser.min_timestamp < start:
if start >= end:
raise cherrypy.HTTPError("400 Bad Request",
"start must precede end")
if parser.min_timestamp is not None and parser.min_timestamp < start:
raise cherrypy.HTTPError("400 Bad Request", "Data timestamp " +
repr(parser.min_timestamp) +
" < start time " + repr(start))
if parser.max_timestamp >= end:
if parser.max_timestamp is not None and parser.max_timestamp >= end:
raise cherrypy.HTTPError("400 Bad Request", "Data timestamp " +
repr(parser.max_timestamp) +
" >= end time " + repr(end))
# Now do the nilmdb insert, passing it the parser full of data.
try:
result = self.db.stream_insert(path, start, end, parser.data)
except nilmdb.nilmdb.NilmDBError as e:
self.db.stream_insert(path, start, end, parser.data)
except NilmDBError as e:
raise cherrypy.HTTPError("400 Bad Request", e.message)
# Done
return "ok"
return
# /stream/remove?path=/newton/prep
# /stream/remove?path=/newton/prep&start=1234567890.0&end=1234567899.0
@cherrypy.expose
@cherrypy.tools.json_out()
@exception_to_httperror(NilmDBError)
@cherrypy.tools.allow_methods(methods = ["POST"])
def remove(self, path, start = None, end = None):
"""
Remove data from the backend database. Removes all data in
@@ -263,21 +268,25 @@ class Stream(NilmApp):
if end is not None:
end = float(end)
if start is not None and end is not None:
if end < start:
if start >= end:
raise cherrypy.HTTPError("400 Bad Request",
"end before start")
"start must precede end")
return self.db.stream_remove(path, start, end)
# /stream/intervals?path=/newton/prep
# /stream/intervals?path=/newton/prep&start=1234567890.0&end=1234567899.0
@cherrypy.expose
@chunked_response
@response_type("application/x-json-stream")
def intervals(self, path, start = None, end = None):
"""
Get intervals from backend database. Streams the resulting
intervals as JSON strings separated by newlines. This may
intervals as JSON strings separated by CR LF pairs. This may
make multiple requests to the nilmdb backend to avoid causing
it to block for too long.
Note that the response type is the non-standard
'application/x-json-stream' for lack of a better option.
"""
if start is not None:
start = float(start)
@@ -285,9 +294,9 @@ class Stream(NilmApp):
end = float(end)
if start is not None and end is not None:
if end < start:
if start >= end:
raise cherrypy.HTTPError("400 Bad Request",
"end before start")
"start must precede end")
streams = self.db.stream_list(path = path)
if len(streams) != 1:
@@ -297,8 +306,8 @@ class Stream(NilmApp):
def content(start, end):
# Note: disable chunked responses to see tracebacks from here.
while True:
(intervals, restart) = self.db.stream_intervals(path,start,end)
response = ''.join([ json.dumps(i) + "\n" for i in intervals ])
(ints, restart) = self.db.stream_intervals(path, start, end)
response = ''.join([ json.dumps(i) + "\r\n" for i in ints ])
yield response
if restart == 0:
break
@@ -308,6 +317,7 @@ class Stream(NilmApp):
# /stream/extract?path=/newton/prep&start=1234567890.0&end=1234567899.0
@cherrypy.expose
@chunked_response
@response_type("text/plain")
def extract(self, path, start = None, end = None, count = False):
"""
Extract data from backend database. Streams the resulting
@@ -324,9 +334,9 @@ class Stream(NilmApp):
# Check parameters
if start is not None and end is not None:
if end < start:
if start >= end:
raise cherrypy.HTTPError("400 Bad Request",
"end before start")
"start must precede end")
# Check path and get layout
streams = self.db.stream_list(path = path)
@@ -335,7 +345,7 @@ class Stream(NilmApp):
layout = streams[0][1]
# Get formatter
formatter = nilmdb.layout.Formatter(layout)
formatter = nilmdb.server.layout.Formatter(layout)
@workaround_cp_bug_1200
def content(start, end, count):
@@ -374,25 +384,47 @@ class Server(object):
fast_shutdown = False, # don't wait for clients to disconn.
force_traceback = False # include traceback in all errors
):
self.version = version
# Save server version, just for verification during tests
self.version = nilmdb.__version__
# Need to wrap DB object in a serializer because we'll call
# into it from separate threads.
self.embedded = embedded
self.db = nilmdb.utils.Serializer(db)
self.db = db
if not getattr(db, "_thread_safe", None):
raise KeyError("Database object " + str(db) + " doesn't claim "
"to be thread safe. You should pass "
"nilmdb.utils.serializer_proxy(NilmDB)(args) "
"rather than NilmDB(args).")
# Build up global server configuration
cherrypy.config.update({
'server.socket_host': host,
'server.socket_port': port,
'engine.autoreload_on': False,
'server.max_request_body_size': 4*1024*1024,
'error_page.default': self.json_error_page,
})
if self.embedded:
cherrypy.config.update({ 'environment': 'embedded' })
# Build up application specific configuration
app_config = {}
app_config.update({
'error_page.default': self.json_error_page,
})
# Send a permissive Access-Control-Allow-Origin (CORS) header
# with all responses so that browsers can send cross-domain
# requests to this server.
app_config.update({ 'response.headers.Access-Control-Allow-Origin':
'*' })
# Only allow GET and HEAD by default. Individual handlers
# can override.
app_config.update({ 'tools.allow_methods.on': True,
'tools.allow_methods.methods': ['GET', 'HEAD'] })
# Send tracebacks in error responses. They're hidden by the
# error_page function for client errors (code 400-499).
cherrypy.config.update({ 'request.show_tracebacks' : True })
app_config.update({ 'request.show_tracebacks' : True })
self.force_traceback = force_traceback
# Patch CherryPy error handler to never pad out error messages.
@@ -400,11 +432,13 @@ class Server(object):
# error messages.
cherrypy._cperror._ie_friendly_error_sizes = {}
cherrypy.tree.apps = {}
cherrypy.tree.mount(Root(self.db, self.version), "/")
cherrypy.tree.mount(Stream(self.db), "/stream")
# Build up the application and mount it
root = Root(self.db)
root.stream = Stream(self.db)
if stoppable:
cherrypy.tree.mount(Exiter(), "/exit")
root.exit = Exiter()
cherrypy.tree.apps = {}
cherrypy.tree.mount(root, "/", config = { "/" : app_config })
# Shutdowns normally wait for clients to disconnect. To speed
# up tests, set fast_shutdown = True
@@ -425,7 +459,7 @@ class Server(object):
if not self.force_traceback:
if code >= 400 and code <= 499:
errordata["traceback"] = ""
except Exception as e: # pragma: no cover
except Exception: # pragma: no cover
pass
# Override the response type, which was previously set to text/html
cherrypy.serving.response.headers['Content-Type'] = (
@@ -462,8 +496,10 @@ class Server(object):
cherrypy.engine.start()
os._exit = real_exit
# Signal that the engine has started successfully
if event is not None:
event.set()
if blocking:
try:
cherrypy.engine.wait(cherrypy.engine.states.EXITING,

View File

@@ -1,11 +1,10 @@
"""NilmDB utilities"""
from .timer import Timer
from .iteratorizer import Iteratorizer
from .serializer import Serializer
from .lrucache import lru_cache
from .diskusage import du
from .mustclose import must_close
from .urllib import urlencode
from . import misc
from . import atomic
from nilmdb.utils.timer import Timer
from nilmdb.utils.iteratorizer import Iteratorizer
from nilmdb.utils.serializer import serializer_proxy
from nilmdb.utils.lrucache import lru_cache
from nilmdb.utils.diskusage import du, human_size
from nilmdb.utils.mustclose import must_close
from nilmdb.utils import atomic
import nilmdb.utils.threadsafety

View File

@@ -1,8 +1,7 @@
import nilmdb
import os
from math import log
def sizeof_fmt(num):
def human_size(num):
"""Human friendly file size"""
unit_list = zip(['bytes', 'kiB', 'MiB', 'GiB', 'TiB'], [0, 0, 1, 2, 2])
if num > 1:
@@ -16,15 +15,11 @@ def sizeof_fmt(num):
if num == 1: # pragma: no cover
return '1 byte'
def du_bytes(path):
def du(path):
"""Like du -sb, returns total size of path in bytes."""
size = os.path.getsize(path)
if os.path.isdir(path):
for file in os.listdir(path):
filepath = os.path.join(path, file)
size += du_bytes(filepath)
for thisfile in os.listdir(path):
filepath = os.path.join(path, thisfile)
size += du(filepath)
return size
def du(path):
"""Like du -sh, returns total size of path as a human-readable string."""
return sizeof_fmt(du_bytes(path))
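
A quick sketch of the renamed helpers in use; the path is made up and the formatted result is only approximate:

from nilmdb.utils.diskusage import du, human_size

nbytes = du("/home/nilm/db")       # total size of the tree, in bytes
print human_size(nbytes)           # e.g. something like "1.4 GiB"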

View File

@@ -1,72 +1,100 @@
import Queue
import threading
import sys
import contextlib
# This file provides a class that will convert a function that
# takes a callback into a generator that returns an iterator.
# This file provides a context manager that converts a function
# that takes a callback into a generator that returns an iterable.
# This is done by running the function in a new thread.
# Based partially on http://stackoverflow.com/questions/9968592/
class IteratorizerThread(threading.Thread):
def __init__(self, queue, function):
def __init__(self, queue, function, curl_hack):
"""
function: function to execute, which takes the
callback (provided by this class) as an argument
"""
threading.Thread.__init__(self)
self.name = "Iteratorizer-" + function.__name__ + "-" + self.name
self.function = function
self.queue = queue
self.die = False
self.curl_hack = curl_hack
def callback(self, data):
if self.die:
raise Exception("should die")
self.queue.put((1, data))
try:
if self.die:
raise Exception() # trigger termination
self.queue.put((1, data))
except:
if self.curl_hack:
# We can't raise exceptions, because the pycurl
# extension module will unconditionally print the
# exception itself, and not pass it up to the caller.
# Instead, just return a value that tells curl to
# abort. (-1 would be best, in case we were given 0
# bytes, but the extension doesn't support that).
self.queue.put((2, sys.exc_info()))
return 0
raise
def run(self):
try:
result = self.function(self.callback)
except:
if sys is not None: # can be None during unclean shutdown
self.queue.put((2, sys.exc_info()))
self.queue.put((2, sys.exc_info()))
else:
self.queue.put((0, result))
class Iteratorizer(object):
def __init__(self, function):
"""
function: function to execute, which takes the
callback (provided by this class) as an argument
"""
self.function = function
self.queue = Queue.Queue(maxsize = 1)
self.thread = IteratorizerThread(self.queue, self.function)
self.thread.daemon = True
self.thread.start()
@contextlib.contextmanager
def Iteratorizer(function, curl_hack = False):
"""
Context manager that takes a function expecting a callback,
and provides an iterable that yields the values passed to that
callback instead.
def __del__(self):
# If we get garbage collected, try to get rid of the
# thread too by asking it to raise an exception, then
# draining the queue until it's gone.
self.thread.die = True
while self.thread.isAlive():
function: function to execute, which takes a callback
(provided by this context manager) as an argument
with iteratorizer(func) as it:
for i in it:
print 'callback was passed:', i
print 'function returned:', it.retval
"""
queue = Queue.Queue(maxsize = 1)
thread = IteratorizerThread(queue, function, curl_hack)
thread.daemon = True
thread.start()
class iteratorizer_gen(object):
def __init__(self, queue):
self.queue = queue
self.retval = None
def __iter__(self):
return self
def next(self):
(typ, data) = self.queue.get()
if typ == 0:
# function has returned
self.retval = data
raise StopIteration
elif typ == 1:
# data is available
return data
else:
# callback raised an exception
raise data[0], data[1], data[2]
try:
yield iteratorizer_gen(queue)
finally:
# Ask the thread to die, if it's still running.
thread.die = True
while thread.isAlive():
try:
self.queue.get(True, 0.01)
queue.get(True, 0.01)
except: # pragma: no cover
pass
def __iter__(self):
return self
def next(self):
(typ, data) = self.queue.get()
if typ == 0:
# function returned
self.retval = data
raise StopIteration
elif typ == 1:
# data available
return data
else:
# exception
raise data[0], data[1], data[2]
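
A minimal sketch of the new context-manager interface; the producer function below is made up purely to show a callback-based API being consumed as an iterator:

from nilmdb.utils import Iteratorizer

def producer(callback):
    # Stand-in for something like curl, which pushes data via a callback.
    for chunk in ("hello ", "world"):
        callback(chunk)
    return "done"

with Iteratorizer(producer) as it:
    for chunk in it:
        print chunk        # "hello ", then "world"
    print it.retval        # "done", the producer's return value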

View File

@@ -5,7 +5,6 @@
import collections
import decorator
import warnings
def lru_cache(size = 10, onremove = None, keys = slice(None)):
"""Least-recently-used cache decorator.

View File

@@ -1,8 +0,0 @@
import itertools
def pairwise(iterable):
"s -> (s0,s1), (s1,s2), ..., (sn,None)"
a, b = itertools.tee(iterable)
next(b, None)
return itertools.izip_longest(a, b)

View File

@@ -12,15 +12,12 @@ def must_close(errorfile = sys.stderr, wrap_verify = False):
already been called."""
def class_decorator(cls):
# Helper to replace a class method with a wrapper function,
# while maintaining argument specs etc.
def wrap_class_method(wrapper_func):
method = wrapper_func.__name__
if method in cls.__dict__:
orig = getattr(cls, method).im_func
else:
orig = lambda self: None
setattr(cls, method, decorator.decorator(wrapper_func, orig))
def wrap_class_method(wrapper):
try:
orig = getattr(cls, wrapper.__name__).im_func
except:
orig = lambda x: None
setattr(cls, wrapper.__name__, decorator.decorator(wrapper, orig))
@wrap_class_method
def __init__(orig, self, *args, **kwargs):
@@ -38,7 +35,8 @@ def must_close(errorfile = sys.stderr, wrap_verify = False):
@wrap_class_method
def close(orig, self, *args, **kwargs):
del self._must_close
if "_must_close" in self.__dict__:
del self._must_close
return orig(self, *args, **kwargs)
# Optionally wrap all other functions

View File

@@ -1,6 +1,10 @@
import Queue
import threading
import sys
import decorator
import inspect
import types
import functools
# This file provides a class that will wrap an object and serialize
# all calls to its methods. All calls to that object will be queued
@@ -12,8 +16,9 @@ import sys
class SerializerThread(threading.Thread):
"""Thread that retrieves call information from the queue, makes the
call, and returns the results."""
def __init__(self, call_queue):
def __init__(self, classname, call_queue):
threading.Thread.__init__(self)
self.name = "Serializer-" + classname + "-" + self.name
self.call_queue = call_queue
def run(self):
@@ -22,51 +27,83 @@ class SerializerThread(threading.Thread):
# Terminate if result_queue is None
if result_queue is None:
return
exception = None
result = None
try:
result = func(*args, **kwargs) # wrapped
except:
result_queue.put((sys.exc_info(), None))
exception = sys.exc_info()
# Ensure we delete these before returning a result, so
# we don't unnecessarily hold onto a reference while
# we're waiting for the next call.
del func, args, kwargs
result_queue.put((exception, result))
del exception, result
def serializer_proxy(obj_or_type):
"""Wrap the given object or type in a SerializerObjectProxy.
Returns a SerializerObjectProxy object that proxies all method
calls to the object, as well as attribute retrievals.
The proxied requests, including instantiation, are performed in a
single thread and serialized between caller threads.
"""
class SerializerCallProxy(object):
def __init__(self, call_queue, func, objectproxy):
self.call_queue = call_queue
self.func = func
# Need to hold a reference to object proxy so it doesn't
# go away (and kill the thread) until after get called.
self.objectproxy = objectproxy
def __call__(self, *args, **kwargs):
result_queue = Queue.Queue()
self.call_queue.put((result_queue, self.func, args, kwargs))
( exc_info, result ) = result_queue.get()
if exc_info is None:
return result
else:
result_queue.put((None, result))
raise exc_info[0], exc_info[1], exc_info[2]
class WrapCall(object):
"""Wrap a callable using the given queues"""
class SerializerObjectProxy(object):
def __init__(self, obj_or_type, *args, **kwargs):
self.__object = obj_or_type
try:
if type(obj_or_type) in (types.TypeType, types.ClassType):
classname = obj_or_type.__name__
else:
classname = obj_or_type.__class__.__name__
except AttributeError: # pragma: no cover
classname = "???"
self.__call_queue = Queue.Queue()
self.__thread = SerializerThread(classname, self.__call_queue)
self.__thread.daemon = True
self.__thread.start()
self._thread_safe = True
def __init__(self, call_queue, result_queue, func):
self.call_queue = call_queue
self.result_queue = result_queue
self.func = func
def __getattr__(self, key):
if key.startswith("_SerializerObjectProxy__"): # pragma: no cover
raise AttributeError
attr = getattr(self.__object, key)
if not callable(attr):
getter = SerializerCallProxy(self.__call_queue, getattr, self)
return getter(self.__object, key)
r = SerializerCallProxy(self.__call_queue, attr, self)
return r
def __call__(self, *args, **kwargs):
self.call_queue.put((self.result_queue, self.func, args, kwargs))
( exc_info, result ) = self.result_queue.get()
if exc_info is None:
return result
else:
raise exc_info[0], exc_info[1], exc_info[2]
def __call__(self, *args, **kwargs):
"""Call this to instantiate the type, if a type was passed
to serializer_proxy. Otherwise, pass the call through."""
ret = SerializerCallProxy(self.__call_queue,
self.__object, self)(*args, **kwargs)
if type(self.__object) in (types.TypeType, types.ClassType):
# Instantiation
self.__object = ret
return self
return ret
class WrapObject(object):
"""Wrap all calls to methods in a target object with WrapCall"""
def __del__(self):
self.__call_queue.put((None, None, None, None))
self.__thread.join()
def __init__(self, target):
self.__wrap_target = target
self.__wrap_call_queue = Queue.Queue()
self.__wrap_serializer = SerializerThread(self.__wrap_call_queue)
self.__wrap_serializer.daemon = True
self.__wrap_serializer.start()
def __getattr__(self, key):
"""Wrap methods of self.__wrap_target in a WrapCall instance"""
func = getattr(self.__wrap_target, key)
if not callable(func):
raise TypeError("Can't serialize attribute %r (type: %s)"
% (key, type(func)))
result_queue = Queue.Queue()
return WrapCall(self.__wrap_call_queue, result_queue, func)
def __del__(self):
self.__wrap_call_queue.put((None, None, None, None))
self.__wrap_serializer.join()
# Just an alias
Serializer = WrapObject
return SerializerObjectProxy(obj_or_type)
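
A hypothetical usage sketch of serializer_proxy; SomeClass is a stand-in for any class that is not thread-safe on its own (nilmdb-server wraps NilmDB the same way, as shown earlier in nilmdb/scripts/nilmdb_server.py):

from nilmdb.utils import serializer_proxy

class SomeClass(object):
    """Stand-in for a class that is not thread-safe on its own."""
    def __init__(self, path):
        self.path = path
    def work(self, x):
        return x * 2

# Wrapping the type: instantiation and every subsequent method call are
# performed in one dedicated thread, serialized across callers.
obj = serializer_proxy(SomeClass)("/tmp/example")
print obj.work(21)         # 42, computed in the serializer thread
print obj._thread_safe     # True; this is the flag nilmdb.server.Server checks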

View File

@@ -0,0 +1,109 @@
from nilmdb.utils.printf import *
import threading
import warnings
import types
def verify_proxy(obj_or_type, exception = False, check_thread = True,
check_concurrent = True):
"""Wrap the given object or type in a VerifyObjectProxy.
Returns a VerifyObjectProxy that proxies all method calls to the
given object, as well as attribute retrievals.
When calling methods, the following checks are performed. If
exception is True, an exception is raised. Otherwise, a warning
is printed.
check_thread = True # Warn/fail if two different threads call methods.
check_concurrent = True # Warn/fail if two functions are concurrently
# run through this proxy
"""
class Namespace(object):
pass
class VerifyCallProxy(object):
def __init__(self, func, parent_namespace):
self.func = func
self.parent_namespace = parent_namespace
def __call__(self, *args, **kwargs):
p = self.parent_namespace
this = threading.current_thread()
try:
callee = self.func.__name__
except AttributeError:
callee = "???"
if p.thread is None:
p.thread = this
p.thread_callee = callee
if check_thread and p.thread != this:
err = sprintf("unsafe threading: %s called %s.%s,"
" but %s called %s.%s",
p.thread.name, p.classname, p.thread_callee,
this.name, p.classname, callee)
if exception:
raise AssertionError(err)
else: # pragma: no cover
warnings.warn(err)
need_concur_unlock = False
if check_concurrent:
if p.concur_lock.acquire(False) == False:
err = sprintf("unsafe concurrency: %s called %s.%s "
"while %s is still in %s.%s",
this.name, p.classname, callee,
p.concur_tname, p.classname, p.concur_callee)
if exception:
raise AssertionError(err)
else: # pragma: no cover
warnings.warn(err)
else:
p.concur_tname = this.name
p.concur_callee = callee
need_concur_unlock = True
try:
ret = self.func(*args, **kwargs)
finally:
if need_concur_unlock:
p.concur_lock.release()
return ret
class VerifyObjectProxy(object):
def __init__(self, obj_or_type, *args, **kwargs):
p = Namespace()
self.__ns = p
p.thread = None
p.thread_callee = None
p.concur_lock = threading.Lock()
p.concur_tname = None
p.concur_callee = None
self.__obj = obj_or_type
try:
if type(obj_or_type) in (types.TypeType, types.ClassType):
p.classname = self.__obj.__name__
else:
p.classname = self.__obj.__class__.__name__
except AttributeError: # pragma: no cover
p.classname = "???"
def __getattr__(self, key):
if key.startswith("_VerifyObjectProxy__"): # pragma: no cover
raise AttributeError
attr = getattr(self.__obj, key)
if not callable(attr):
return VerifyCallProxy(getattr, self.__ns)(self.__obj, key)
return VerifyCallProxy(attr, self.__ns)
def __call__(self, *args, **kwargs):
"""Call this to instantiate the type, if a type was passed
to verify_proxy. Otherwise, pass the call through."""
ret = VerifyCallProxy(self.__obj, self.__ns)(*args, **kwargs)
if type(self.__obj) in (types.TypeType, types.ClassType):
# Instantiation
self.__obj = ret
return self
return ret
return VerifyObjectProxy(obj_or_type)
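
To make the checks described in the docstring concrete, here is a rough sketch of the intended use. It assumes verify_proxy is imported from this new module (the module's path is not visible in the hunk header), and the dict is just a stand-in for any non-thread-safe object.

import threading

shared = verify_proxy(dict(), exception = True)
shared.update({"a": 1})          # first call records the calling thread

def worker():
    # Calling through the proxy from a different thread trips the
    # check_thread test, so an AssertionError is raised in this thread.
    shared.update({"b": 2})

t = threading.Thread(target = worker)
t.start()
t.join()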

54
nilmdb/utils/time.py Normal file
View File

@@ -0,0 +1,54 @@
from nilmdb.utils import datetime_tz
import re
def parse_time(toparse):
"""
Parse a free-form time string and return a datetime_tz object.
If the string doesn't contain a timezone, the current local
timezone is assumed (e.g. from the TZ env var).
"""
# If string isn't "now" and doesn't contain at least 4 digits,
# consider it invalid. smartparse might otherwise accept
# empty strings and strings with just separators.
if toparse != "now" and len(re.findall(r"\d", toparse)) < 4:
raise ValueError("not enough digits for a timestamp")
# Try to just parse the time as given
try:
return datetime_tz.datetime_tz.smartparse(toparse)
except ValueError:
pass
# Try to extract a substring in a condensed format that we expect
# to see in a filename or header comment
res = re.search(r"(^|[^\d])(" # non-numeric or SOL
r"(199\d|2\d\d\d)" # year
r"[-/]?" # separator
r"(0[1-9]|1[012])" # month
r"[-/]?" # separator
r"([012]\d|3[01])" # day
r"[-T ]?" # separator
r"([01]\d|2[0-3])" # hour
r"[:]?" # separator
r"([0-5]\d)" # minute
r"[:]?" # separator
r"([0-5]\d)?" # second
r"([-+]\d\d\d\d)?" # timezone
r")", toparse)
if res is not None:
try:
return datetime_tz.datetime_tz.smartparse(res.group(2))
except ValueError:
pass
# Could also try to successively parse substrings, but let's
# just give up for now.
raise ValueError("unable to parse timestamp")
def format_time(timestamp):
"""
Convert a Unix timestamp to a string for printing, using the
local timezone for display (e.g. from the TZ env var).
"""
dt = datetime_tz.datetime_tz.fromtimestamp(timestamp)
return dt.strftime("%a, %d %b %Y %H:%M:%S.%f %z")
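
A short sketch of how these two helpers are meant to be used together (illustrative only; it pins the local timezone the same way the tests in this changeset do, and the printed string shows the format rather than captured output):

from nilmdb.utils import datetime_tz
from nilmdb.utils.time import parse_time, format_time

datetime_tz.localtz_set("America/New_York")        # pin the local zone, as the tests do

t1 = parse_time("20120405 1800 UTC")               # free-form string with explicit zone
t2 = parse_time("snapshot-20120405-140000.raw.gz") # condensed stamp inside a filename
assert t1 == t2                                    # same instant: 14:00 EDT == 18:00 UTC

# Unix timestamp back to a local-time string, e.g.
# "Thu, 05 Apr 2012 14:00:00.000000 -0400"
print format_time(t1.totimestamp())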

View File

@@ -2,10 +2,11 @@
# Simple timer to time a block of code, for optimization debugging
# use like:
# with nilmdb.Timer("flush"):
# with nilmdb.utils.Timer("flush"):
# foo.flush()
from __future__ import print_function
from __future__ import absolute_import
import contextlib
import time

View File

@@ -1,22 +1,18 @@
"""File-like objects that add timestamps to the input lines"""
from __future__ import absolute_import
from nilmdb.utils.printf import *
import time
import os
import datetime_tz
from nilmdb.utils import datetime_tz
class Timestamper(object):
"""A file-like object that adds timestamps to lines of an input file."""
def __init__(self, file, ts_iter):
def __init__(self, infile, ts_iter):
"""file: filename, or another file-like object
ts_iter: iterator that returns a timestamp string for
each line of the file"""
if isinstance(file, basestring):
self.file = open(file, "r")
if isinstance(infile, basestring):
self.file = open(infile, "r")
else:
self.file = file
self.file = infile
self.ts_iter = ts_iter
def close(self):
@@ -55,7 +51,7 @@ class Timestamper(object):
class TimestamperRate(Timestamper):
"""Timestamper that uses a start time and a fixed rate"""
def __init__(self, file, start, rate, end = None):
def __init__(self, infile, start, rate, end = None):
"""
file: file name or object
@@ -77,7 +73,7 @@ class TimestamperRate(Timestamper):
# Handle case where we're passed a datetime or datetime_tz object
if "totimestamp" in dir(start):
start = start.totimestamp()
Timestamper.__init__(self, file, iterator(start, rate, end))
Timestamper.__init__(self, infile, iterator(start, rate, end))
self.start = start
self.rate = rate
def __str__(self):
@@ -88,21 +84,21 @@ class TimestamperRate(Timestamper):
class TimestamperNow(Timestamper):
"""Timestamper that uses current time"""
def __init__(self, file):
def __init__(self, infile):
def iterator():
while True:
now = datetime_tz.datetime_tz.utcnow().totimestamp()
yield sprintf("%.6f ", now)
Timestamper.__init__(self, file, iterator())
Timestamper.__init__(self, infile, iterator())
def __str__(self):
return "TimestamperNow(...)"
class TimestamperNull(Timestamper):
"""Timestamper that adds nothing to each line"""
def __init__(self, file):
def __init__(self, infile):
def iterator():
while True:
yield ""
Timestamper.__init__(self, file, iterator())
Timestamper.__init__(self, infile, iterator())
def __str__(self):
return "TimestamperNull(...)"
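
These classes are consumed as plain line iterators; a brief sketch of that use follows (the data file path mirrors the sample file used by the cmdline tests elsewhere in this diff, and is otherwise an assumption):

from nilmdb.utils import timestamper, datetime_tz

start = datetime_tz.datetime_tz.smartparse("20120323T1000").totimestamp()

# Prefix each line of the raw data file with a timestamp generated at a
# fixed 120 Hz rate, starting at 'start'.
ts = timestamper.TimestamperRate("tests/data/prep-20120323T1000", start, 120)
lines = list(ts)       # each element is "<timestamp> <original line>"
ts.close()

# TimestamperNow stamps lines with the current UTC time instead, and
# TimestamperNull passes lines through unchanged.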

View File

@@ -1,40 +0,0 @@
from __future__ import absolute_import
from urllib import quote_plus, _is_unicode
# urllib.urlencode insists on encoding Unicode as ASCII. This is based
# on that function, except we always encode it as UTF-8 instead.
def urlencode(query):
"""Encode a dictionary into a URL query string.
If any values in the query arg are sequences, each sequence
element is converted to a separate parameter.
"""
query = query.items()
l = []
for k, v in query:
k = quote_plus(str(k))
if isinstance(v, str):
v = quote_plus(v)
l.append(k + '=' + v)
elif _is_unicode(v):
# is there a reasonable way to convert to ASCII?
# encode generates a string, but "replace" or "ignore"
# lose information and "strict" can raise UnicodeError
v = quote_plus(v.encode("utf-8","strict"))
l.append(k + '=' + v)
else:
try:
# is this a sufficient test for sequence-ness?
len(v)
except TypeError:
# not a sequence
v = quote_plus(str(v))
l.append(k + '=' + v)
else:
# loop over the sequence
for elt in v:
l.append(k + '=' + quote_plus(str(elt)))
return '&'.join(l)

View File

@@ -1,6 +0,0 @@
#!/usr/bin/python
import nilmdb
import sys
nilmdb.cmdline.Cmdline(sys.argv[1:]).run()

View File

@@ -1,35 +0,0 @@
#!/usr/bin/python
import nilmdb
import argparse
formatter = argparse.ArgumentDefaultsHelpFormatter
parser = argparse.ArgumentParser(description='Run the NILM server',
formatter_class = formatter)
parser.add_argument('-p', '--port', help='Port number', type=int, default=12380)
parser.add_argument('-d', '--database', help='Database directory', default="db")
parser.add_argument('-y', '--yappi', help='Run with yappi profiler',
action='store_true')
args = parser.parse_args()
# Start web app on a custom port
db = nilmdb.NilmDB(args.database)
server = nilmdb.Server(db, host = "127.0.0.1",
port = args.port,
embedded = False)
if args.yappi:
print "Running in yappi"
try:
import yappi
yappi.start()
server.start(blocking = True)
finally:
yappi.stop()
print "Try: yappi.print_stats(sort_type=yappi.SORTTYPE_TTOT,limit=50)"
from IPython import embed
embed()
else:
server.start(blocking = True)
db.close()

View File

@@ -1,18 +1,26 @@
[aliases]
test = nosetests
[nosetests]
# note: the value doesn't matter, that's why they're empty here
nocapture=
nologcapture= # comment to see cherrypy logs on failure
with-coverage=
cover-inclusive=
# Note: values must be set to 1, and have no comments on the same line,
# for "python setup.py nosetests" to work correctly.
nocapture=1
# Comment this out to see CherryPy logs on failure:
nologcapture=1
with-coverage=1
cover-inclusive=1
cover-package=nilmdb
cover-erase=
##cover-html= # this works, puts html output in cover/ dir
##cover-branches= # need nose 1.1.3 for this
cover-erase=1
# this works, puts html output in cover/ dir:
# cover-html=1
# need nose 1.1.3 for this:
# cover-branches=1
#debug=nose
#debug-log=nose.log
stop=
stop=1
verbosity=2
tests=tests
#tests=tests/test_threadsafety.py
#tests=tests/test_bulkdata.py
#tests=tests/test_mustclose.py
#tests=tests/test_lrucache.py
@@ -28,6 +36,6 @@ tests=tests
#tests=tests/test_iteratorizer.py
#tests=tests/test_client.py:TestClient.test_client_nilmdb
#tests=tests/test_nilmdb.py
#with-profile=
#with-profile=1
#profile-sort=time
##profile-restrict=10 # doesn't work right, treated as string or something

136
setup.py Executable file
View File

@@ -0,0 +1,136 @@
#!/usr/bin/python
# To release a new version, tag it:
# git tag -a nilmdb-1.1 -m "Version 1.1"
# git push --tags
# Then just package it up:
# python setup.py sdist
# This is supposed to be using Distribute:
#
# distutils provides a "setup" method.
# setuptools is a set of monkeypatches on top of that.
# distribute is a particular version/implementation of setuptools.
#
# So we don't really know if this is using the old setuptools or the
# Distribute-provided version of setuptools.
import traceback
import sys
import os
try:
from setuptools import setup, find_packages
from distutils.extension import Extension
import distutils.version
except ImportError:
traceback.print_exc()
print "Please install the prerequisites listed in README.txt"
sys.exit(1)
# Versioneer manages version numbers from git tags.
# https://github.com/warner/python-versioneer
import versioneer
versioneer.versionfile_source = 'nilmdb/_version.py'
versioneer.versionfile_build = 'nilmdb/_version.py'
versioneer.tag_prefix = 'nilmdb-'
versioneer.parentdir_prefix = 'nilmdb-'
# Hack to workaround logging/multiprocessing issue:
# https://groups.google.com/d/msg/nose-users/fnJ-kAUbYHQ/_UsLN786ygcJ
try: import multiprocessing
except: pass
# Use Cython if it's new enough, otherwise use preexisting C files.
cython_modules = [ 'nilmdb.server.interval',
'nilmdb.server.layout',
'nilmdb.server.rbtree' ]
try:
import Cython
from Cython.Build import cythonize
if (distutils.version.LooseVersion(Cython.__version__) <
distutils.version.LooseVersion("0.16")):
print "Cython version", Cython.__version__, "is too old; not using it."
raise ImportError()
use_cython = True
except ImportError:
use_cython = False
ext_modules = []
for modulename in cython_modules:
filename = modulename.replace('.','/')
if use_cython:
ext_modules.extend(cythonize(filename + ".pyx"))
else:
cfile = filename + ".c"
if not os.path.exists(cfile):
raise Exception("Missing source file " + cfile + ". "
"Try installing cython >= 0.16.")
ext_modules.append(Extension(modulename, [ cfile ]))
# We need a MANIFEST.in. Generate it here rather than polluting the
# repository with yet another setup-related file.
with open("MANIFEST.in", "w") as m:
m.write("""
# Root
include README.txt
include setup.cfg
include setup.py
include versioneer.py
include Makefile
include .coveragerc
include .pylintrc
# Cython files -- include source.
recursive-include nilmdb/server *.pyx *.pyxdep *.pxd
# Tests
recursive-include tests *.py
recursive-include tests/data *
include tests/test.order
# Docs
recursive-include docs Makefile *.md
""")
# Run setup
setup(name='nilmdb',
version = versioneer.get_version(),
cmdclass = versioneer.get_cmdclass(),
url = 'https://git.jim.sh/jim/lees/nilmdb.git',
author = 'Jim Paris',
description = "NILM Database",
long_description = "NILM Database",
license = "Proprietary",
author_email = 'jim@jtan.com',
tests_require = [ 'nose',
'coverage',
],
setup_requires = [ 'distribute',
],
install_requires = [ 'decorator',
'cherrypy >= 3.2',
'simplejson',
'pycurl',
'python-dateutil',
'pytz',
'psutil >= 0.3.0',
'requests >= 1.1.0, < 2.0.0',
],
packages = [ 'nilmdb',
'nilmdb.utils',
'nilmdb.utils.datetime_tz',
'nilmdb.server',
'nilmdb.client',
'nilmdb.cmdline',
'nilmdb.scripts',
],
entry_points = {
'console_scripts': [
'nilmtool = nilmdb.scripts.nilmtool:main',
'nilmdb-server = nilmdb.scripts.nilmdb_server:main',
],
},
ext_modules = ext_modules,
zip_safe = False,
)

124
tests/data/extract-7 Normal file
View File

@@ -0,0 +1,124 @@
# path: /newton/prep
# layout: PrepData
# start: 1332496830.0
# end: 1332496830.999
1332496830.000000 251774.000000 224241.000000 5688.100098 1915.530029 9329.219727 4183.709961 1212.349976 2641.790039
1332496830.008333 259567.000000 222698.000000 6207.600098 678.671997 9380.230469 4575.580078 2830.610107 2688.629883
1332496830.016667 263073.000000 223304.000000 4961.640137 2197.120117 7687.310059 4861.859863 2732.780029 3008.540039
1332496830.025000 257614.000000 223323.000000 5003.660156 3525.139893 7165.310059 4685.620117 1715.380005 3440.479980
1332496830.033333 255780.000000 221915.000000 6357.310059 2145.290039 8426.969727 3775.350098 1475.390015 3797.239990
1332496830.041667 260166.000000 223008.000000 6702.589844 1484.959961 9288.099609 3330.830078 1228.500000 3214.320068
1332496830.050000 261231.000000 226426.000000 4980.060059 2982.379883 8499.629883 4267.669922 994.088989 2292.889893
1332496830.058333 255117.000000 226642.000000 4584.410156 4656.439941 7860.149902 5317.310059 1473.599976 2111.689941
1332496830.066667 253300.000000 223554.000000 6455.089844 3036.649902 8869.750000 4986.310059 2607.360107 2839.590088
1332496830.075000 261061.000000 221263.000000 6951.979980 1500.239990 9386.099609 3791.679932 2677.010010 3980.629883
1332496830.083333 266503.000000 223198.000000 5189.609863 2594.560059 8571.530273 3175.000000 919.840027 3792.010010
1332496830.091667 260692.000000 225184.000000 3782.479980 4642.879883 7662.959961 3917.790039 -251.097000 2907.060059
1332496830.100000 253963.000000 225081.000000 5123.529785 3839.550049 8669.030273 4877.819824 943.723999 2527.449951
1332496830.108333 256555.000000 224169.000000 5930.600098 2298.540039 8906.709961 5331.680176 2549.909912 3053.560059
1332496830.116667 260889.000000 225010.000000 4681.129883 2971.870117 7900.040039 4874.080078 2322.429932 3649.120117
1332496830.125000 257944.000000 224923.000000 3291.139893 4357.089844 7131.589844 4385.560059 1077.050049 3664.040039
1332496830.133333 255009.000000 223018.000000 4584.819824 2864.000000 8469.490234 3625.580078 985.557007 3504.229980
1332496830.141667 260114.000000 221947.000000 5676.189941 1210.339966 9393.780273 3390.239990 1654.020020 3018.699951
1332496830.150000 264277.000000 224438.000000 4446.620117 2176.719971 8142.089844 4584.879883 2327.830078 2615.800049
1332496830.158333 259221.000000 226471.000000 2734.439941 4182.759766 6389.549805 5540.520020 1958.880005 2720.120117
1332496830.166667 252650.000000 224831.000000 4163.640137 2989.989990 7179.200195 5213.060059 1929.550049 3457.659912
1332496830.175000 257083.000000 222048.000000 5759.040039 702.440979 8566.549805 3552.020020 1832.939941 3956.189941
1332496830.183333 263130.000000 222967.000000 5141.140137 1166.119995 8666.959961 2720.370117 971.374023 3479.729980
1332496830.191667 260236.000000 225265.000000 3425.139893 3339.080078 7853.609863 3674.949951 525.908020 2443.310059
1332496830.200000 253503.000000 224527.000000 4398.129883 2927.429932 8110.279785 4842.470215 1513.869995 2467.100098
1332496830.208333 256126.000000 222693.000000 6043.529785 656.223999 8797.559570 4832.410156 2832.370117 3426.139893
1332496830.216667 261677.000000 223608.000000 5830.459961 1033.910034 8123.939941 3980.689941 1927.959961 4092.719971
1332496830.225000 259457.000000 225536.000000 4015.570068 2995.989990 7135.439941 3713.550049 307.220001 3849.429932
1332496830.233333 253352.000000 224216.000000 4650.560059 3196.620117 8131.279785 3586.159912 70.832298 3074.179932
1332496830.241667 256124.000000 221513.000000 6100.479980 821.979980 9757.540039 3474.510010 1647.520020 2559.860107
1332496830.250000 263024.000000 221559.000000 5789.959961 699.416992 9129.740234 4153.080078 2829.250000 2677.270020
1332496830.258333 261720.000000 224015.000000 4358.500000 2645.360107 7414.109863 4810.669922 2225.989990 3185.989990
1332496830.266667 254756.000000 224240.000000 4857.379883 3229.679932 7539.310059 4769.140137 1507.130005 3668.260010
1332496830.275000 256889.000000 222658.000000 6473.419922 1214.109985 9010.759766 3848.729980 1303.839966 3778.500000
1332496830.283333 264208.000000 223316.000000 5700.450195 1116.560059 9087.610352 3846.679932 1293.589966 2891.560059
1332496830.291667 263310.000000 225719.000000 3936.120117 3252.360107 7552.850098 4897.859863 1156.630005 2037.160034
1332496830.300000 255079.000000 225086.000000 4536.450195 3960.110107 7454.589844 5479.069824 1596.359985 2190.800049
1332496830.308333 254487.000000 222508.000000 6635.859863 1758.849976 8732.969727 4466.970215 2650.360107 3139.310059
1332496830.316667 261241.000000 222432.000000 6702.270020 1085.130005 8989.230469 3112.989990 1933.560059 3828.409912
1332496830.325000 262119.000000 225587.000000 4714.950195 2892.360107 8107.819824 2961.310059 239.977997 3273.719971
1332496830.333333 254999.000000 226514.000000 4532.089844 4126.899902 8200.129883 3872.590088 56.089001 2370.580078
1332496830.341667 254289.000000 224033.000000 6538.810059 2251.439941 9419.429688 4564.450195 2077.810059 2508.169922
1332496830.350000 261890.000000 221960.000000 6846.089844 1475.270020 9125.589844 4598.290039 3299.219971 3475.419922
1332496830.358333 264502.000000 223085.000000 5066.379883 3270.560059 7933.169922 4173.709961 1908.910034 3867.459961
1332496830.366667 257889.000000 223656.000000 4201.660156 4473.640137 7688.339844 4161.580078 687.578979 3653.689941
1332496830.375000 254270.000000 223151.000000 5715.140137 2752.139893 9273.320312 3772.949951 896.403992 3256.060059
1332496830.383333 258257.000000 224217.000000 6114.310059 1856.859985 9604.320312 4200.490234 1764.380005 2939.219971
1332496830.391667 260020.000000 226868.000000 4237.529785 3605.879883 8066.220215 5430.250000 2138.580078 2696.709961
1332496830.400000 255083.000000 225924.000000 3350.310059 4853.069824 7045.819824 5925.200195 1893.609985 2897.340088
1332496830.408333 254453.000000 222127.000000 5271.330078 2491.500000 8436.679688 5032.080078 2436.050049 3724.590088
1332496830.416667 262588.000000 219950.000000 5994.620117 789.273987 9029.650391 3515.739990 1953.569946 4014.520020
1332496830.425000 265610.000000 223333.000000 4391.410156 2400.959961 8146.459961 3536.959961 530.231995 3133.919922
1332496830.433333 257470.000000 226977.000000 2975.320068 4633.529785 7278.560059 4640.100098 -50.150200 2024.959961
1332496830.441667 250687.000000 226331.000000 4517.859863 3183.800049 8072.600098 5281.660156 1605.140015 2335.139893
1332496830.450000 255563.000000 224495.000000 5551.000000 1101.300049 8461.490234 4725.700195 2726.669922 3480.540039
1332496830.458333 261335.000000 224645.000000 4764.680176 1557.020020 7833.350098 3524.810059 1577.410034 4038.620117
1332496830.466667 260269.000000 224008.000000 3558.030029 2987.610107 7362.439941 3279.229980 562.442017 3786.550049
1332496830.475000 257435.000000 221777.000000 4972.600098 2166.879883 8481.440430 3328.719971 1037.130005 3271.370117
1332496830.483333 261046.000000 221550.000000 5816.180176 590.216980 9120.929688 3895.399902 2382.669922 2824.169922
1332496830.491667 262766.000000 224473.000000 4835.049805 1785.770020 7880.759766 4745.620117 2443.659912 3229.550049
1332496830.500000 256509.000000 226413.000000 3758.870117 3461.199951 6743.770020 4928.959961 1536.619995 3546.689941
1332496830.508333 250793.000000 224372.000000 5218.490234 2865.260010 7803.959961 4351.089844 1333.819946 3680.489990
1332496830.516667 256319.000000 222066.000000 6403.970215 732.344971 9627.759766 3089.300049 1516.780029 3653.689941
1332496830.525000 263343.000000 223235.000000 5200.430176 1388.579956 9372.849609 3371.229980 1450.390015 2678.909912
1332496830.533333 260903.000000 225110.000000 3722.580078 3246.659912 7876.540039 4716.810059 1498.439941 2116.520020
1332496830.541667 254416.000000 223769.000000 4841.649902 2956.399902 8115.919922 5392.359863 2142.810059 2652.320068
1332496830.550000 256698.000000 222172.000000 6471.229980 970.395996 8834.980469 4816.839844 2376.629883 3605.860107
1332496830.558333 261841.000000 223537.000000 5500.740234 1189.660034 8365.730469 4016.469971 1042.270020 3821.199951
1332496830.566667 259503.000000 225840.000000 3827.929932 3088.840088 7676.140137 3978.310059 -357.006989 3016.419922
1332496830.575000 253457.000000 224636.000000 4914.609863 3097.449951 8224.900391 4321.439941 171.373993 2412.360107
1332496830.583333 256029.000000 222221.000000 6841.799805 1028.500000 9252.299805 4387.569824 2418.139893 2510.100098
1332496830.591667 262840.000000 222550.000000 6210.250000 1410.729980 8538.900391 4152.580078 3009.300049 3219.760010
1332496830.600000 261633.000000 225065.000000 4284.529785 3357.209961 7282.169922 3823.590088 1402.839966 3644.669922
1332496830.608333 254591.000000 225109.000000 4693.160156 3647.739990 7745.160156 3686.379883 490.161011 3448.860107
1332496830.616667 254780.000000 223599.000000 6527.379883 1569.869995 9438.429688 3456.580078 1162.520020 3252.010010
1332496830.625000 260639.000000 224107.000000 6531.049805 1633.050049 9283.719727 4174.020020 2089.550049 2775.750000
1332496830.633333 261108.000000 225472.000000 4968.259766 3527.850098 7692.870117 5137.100098 2207.389893 2436.659912
1332496830.641667 255775.000000 223708.000000 4963.450195 4017.370117 7701.419922 5269.649902 2284.399902 2842.080078
1332496830.650000 257398.000000 220947.000000 6767.500000 1645.709961 9107.070312 4000.179932 2548.860107 3624.770020
1332496830.658333 264924.000000 221559.000000 6471.459961 1110.329956 9459.650391 3108.169922 1696.969971 3893.439941
1332496830.666667 265339.000000 225733.000000 4348.799805 3459.510010 8475.299805 4031.239990 573.346985 2910.270020
1332496830.675000 256814.000000 226995.000000 3479.540039 4949.790039 7499.910156 5624.709961 751.656006 2347.709961
1332496830.683333 253316.000000 225161.000000 5147.060059 3218.429932 8460.160156 5869.299805 2336.320068 2987.959961
1332496830.691667 259360.000000 223101.000000 5549.120117 1869.949951 8740.759766 4668.939941 2457.909912 3758.820068
1332496830.700000 262012.000000 224016.000000 4173.609863 3004.129883 8157.040039 3704.729980 987.963989 3652.750000
1332496830.708333 257176.000000 224420.000000 3517.300049 4118.750000 7822.240234 3718.229980 37.264900 2953.679932
1332496830.716667 255146.000000 223322.000000 4923.979980 2330.679932 9095.910156 3792.399902 1013.070007 2711.239990
1332496830.725000 260524.000000 223651.000000 5413.629883 1146.209961 8817.169922 4419.649902 2446.649902 2832.050049
1332496830.733333 262098.000000 225752.000000 4262.979980 2270.969971 7135.479980 5067.120117 2294.679932 3376.620117
1332496830.741667 256889.000000 225379.000000 3606.459961 3568.189941 6552.649902 4970.270020 1516.380005 3662.570068
1332496830.750000 253948.000000 222631.000000 5511.700195 2066.300049 7952.660156 4019.909912 1513.140015 3752.629883
1332496830.758333 259799.000000 222067.000000 5873.500000 608.583984 9253.780273 2870.739990 1348.239990 3344.199951
1332496830.766667 262547.000000 224901.000000 4346.080078 1928.099976 8590.969727 3455.459961 904.390991 2379.270020
1332496830.775000 256137.000000 226761.000000 3423.560059 3379.080078 7471.149902 4894.169922 1153.540039 2031.410034
1332496830.783333 250326.000000 225013.000000 5519.979980 2423.969971 7991.759766 5117.950195 2098.790039 3099.239990
1332496830.791667 255454.000000 222992.000000 6547.950195 496.496002 8751.339844 3900.560059 2132.290039 4076.810059
1332496830.800000 261286.000000 223489.000000 5152.850098 1501.510010 8425.610352 2888.030029 776.114014 3786.360107
1332496830.808333 258969.000000 224069.000000 3832.610107 3001.979980 7979.259766 3182.310059 52.716000 2874.800049
1332496830.816667 254946.000000 222035.000000 5317.879883 2139.800049 9103.139648 3955.610107 1235.170044 2394.149902
1332496830.825000 258676.000000 221205.000000 6594.910156 505.343994 9423.360352 4562.470215 2913.739990 2892.350098
1332496830.833333 262125.000000 223566.000000 5116.750000 1773.599976 8082.200195 4776.370117 2386.389893 3659.729980
1332496830.841667 257835.000000 225918.000000 3714.300049 3477.080078 7205.370117 4554.609863 711.539001 3878.419922
1332496830.850000 253660.000000 224371.000000 5022.450195 2592.429932 8277.200195 4119.370117 486.507996 3666.739990
1332496830.858333 259503.000000 222061.000000 6589.950195 659.935974 9596.919922 3598.100098 1702.489990 3036.600098
1332496830.866667 265495.000000 222843.000000 5541.850098 1728.430054 8459.959961 4492.000000 2231.969971 2430.620117
1332496830.875000 260929.000000 224996.000000 4000.949951 3745.989990 6983.790039 5430.859863 1855.260010 2533.379883
1332496830.883333 252716.000000 224335.000000 5086.560059 3401.149902 7597.970215 5196.120117 1755.719971 3079.760010
1332496830.891667 254110.000000 223111.000000 6822.189941 1229.079956 9164.339844 3761.229980 1679.390015 3584.879883
1332496830.900000 259969.000000 224693.000000 6183.950195 1538.500000 9222.080078 3139.169922 949.901978 3180.800049
1332496830.908333 259078.000000 226913.000000 4388.890137 3694.820068 8195.019531 3933.000000 426.079987 2388.449951
1332496830.916667 254563.000000 224760.000000 5168.439941 4020.939941 8450.269531 4758.910156 1458.900024 2286.429932
1332496830.925000 258059.000000 221217.000000 6883.459961 1649.530029 9232.780273 4457.649902 3057.820068 3031.949951
1332496830.933333 264667.000000 221177.000000 6218.509766 1645.729980 8657.179688 3663.500000 2528.280029 3978.340088
1332496830.941667 262925.000000 224382.000000 4627.500000 3635.929932 7892.799805 3431.320068 604.508972 3901.370117
1332496830.950000 254708.000000 225448.000000 4408.250000 4461.040039 8197.169922 3953.750000 -44.534599 3154.870117
1332496830.958333 253702.000000 224635.000000 5825.770020 2577.050049 9590.049805 4569.250000 1460.270020 2785.169922
1332496830.966667 260206.000000 224140.000000 5387.979980 1951.160034 8789.509766 5131.660156 2706.379883 2972.479980
1332496830.975000 261240.000000 224737.000000 3860.810059 3418.310059 7414.529785 5284.520020 2271.379883 3183.149902
1332496830.983333 256140.000000 223252.000000 3850.010010 3957.139893 7262.649902 4964.640137 1499.510010 3453.129883
1332496830.991667 256116.000000 221349.000000 5594.479980 2054.399902 8835.129883 3662.010010 1485.510010 3613.010010

View File

@@ -0,0 +1,11 @@
1332497040.000000 2.56439e+05 2.24775e+05 2.92897e+03 4.66646e+03 7.58491e+03 3.57351e+03 -4.34171e+02 2.98819e+03
1332497040.010000 2.51903e+05 2.23202e+05 4.23696e+03 3.49363e+03 8.53493e+03 4.29416e+03 8.49573e+02 2.38189e+03
1332497040.020000 2.57625e+05 2.20247e+05 5.47017e+03 1.35872e+03 9.18903e+03 4.56136e+03 2.65599e+03 2.60912e+03
1332497040.030000 2.63375e+05 2.20706e+05 4.51842e+03 1.80758e+03 8.17208e+03 4.17463e+03 2.57884e+03 3.32848e+03
1332497040.040000 2.59221e+05 2.22346e+05 2.98879e+03 3.66264e+03 6.87274e+03 3.94223e+03 1.25928e+03 3.51786e+03
1332497040.050000 2.51918e+05 2.22281e+05 4.22677e+03 2.84764e+03 7.78323e+03 3.81659e+03 8.04944e+02 3.46314e+03
1332497040.050000 2.54478e+05 2.21701e+05 5.61366e+03 1.02262e+03 9.26581e+03 3.50152e+03 1.29331e+03 3.07271e+03
1332497040.060000 2.59568e+05 2.22945e+05 4.97190e+03 1.28250e+03 8.62081e+03 4.06316e+03 1.85717e+03 2.61990e+03
1332497040.070000 2.57269e+05 2.23697e+05 3.60527e+03 3.05749e+03 7.22363e+03 4.90330e+03 1.93736e+03 2.35357e+03
1332497040.080000 2.52274e+05 2.21438e+05 5.01228e+03 2.86309e+03 7.87115e+03 4.80448e+03 2.18291e+03 2.93397e+03
1332497040.090000 2.56468e+05 2.19205e+05 6.29804e+03 8.09467e+02 9.12895e+03 3.52055e+03 2.16980e+03 3.88739e+03

View File

@@ -6,6 +6,9 @@ import sys
import glob
from collections import OrderedDict
# Change into parent dir
os.chdir(os.path.dirname(os.path.realpath(__file__)) + "/..")
class JimOrderPlugin(nose.plugins.Plugin):
"""When searching for tests and encountering a directory that
contains a 'test.order' file, run tests listed in that file, in the

View File

@@ -1,4 +1,5 @@
test_printf.py
test_threadsafety.py
test_lrucache.py
test_mustclose.py

View File

@@ -2,8 +2,6 @@
import nilmdb
from nilmdb.utils.printf import *
import nilmdb.bulkdata
from nose.tools import *
from nose.tools import assert_raises
import itertools
@@ -12,7 +10,8 @@ from testutil.helpers import *
testdb = "tests/bulkdata-testdb"
from nilmdb.bulkdata import BulkData
import nilmdb.server.bulkdata
from nilmdb.server.bulkdata import BulkData
class TestBulkData(object):

View File

@@ -2,10 +2,11 @@
import nilmdb
from nilmdb.utils.printf import *
from nilmdb.utils import timestamper
from nilmdb.client import ClientError, ServerError
from nilmdb.utils import datetime_tz
import datetime_tz
from nose.plugins.skip import SkipTest
from nose.tools import *
from nose.tools import assert_raises
import itertools
@@ -18,10 +19,12 @@ import simplejson as json
import unittest
import warnings
import resource
import time
from testutil.helpers import *
testdb = "tests/client-testdb"
testurl = "http://localhost:12380/"
def setup_module():
global test_server, test_db
@@ -29,7 +32,7 @@ def setup_module():
recursive_unlink(testdb)
# Start web app on a custom port
test_db = nilmdb.NilmDB(testdb, sync = False)
test_db = nilmdb.utils.serializer_proxy(nilmdb.NilmDB)(testdb, sync = False)
test_server = nilmdb.Server(test_db, host = "127.0.0.1",
port = 12380, stoppable = False,
fast_shutdown = True,
@@ -44,39 +47,44 @@ def teardown_module():
class TestClient(object):
def test_client_1_basic(self):
def test_client_01_basic(self):
# Test a fake host
client = nilmdb.Client(url = "http://localhost:1/")
with assert_raises(nilmdb.client.ServerError):
client.version()
client.close()
# Trigger same error with a PUT request
client = nilmdb.Client(url = "http://localhost:1/")
with assert_raises(nilmdb.client.ServerError):
client.version()
client.close()
# Then a fake URL on a real host
client = nilmdb.Client(url = "http://localhost:12380/fake/")
with assert_raises(nilmdb.client.ClientError):
client.version()
client.close()
# Now a real URL with no http:// prefix
client = nilmdb.Client(url = "localhost:12380")
version = client.version()
client.close()
# Now use the real URL
client = nilmdb.Client(url = "http://localhost:12380/")
client = nilmdb.Client(url = testurl)
version = client.version()
eq_(distutils.version.StrictVersion(version),
distutils.version.StrictVersion(test_server.version))
eq_(distutils.version.LooseVersion(version),
distutils.version.LooseVersion(test_server.version))
# Bad URLs should give 404, not 500
with assert_raises(ClientError):
client.http.get("/stream/create")
client.close()
def test_client_2_createlist(self):
def test_client_02_createlist(self):
# Basic stream tests, like those in test_nilmdb:test_stream
client = nilmdb.Client(url = "http://localhost:12380/")
client = nilmdb.Client(url = testurl)
# Database starts empty
eq_(client.stream_list(), [])
@@ -90,6 +98,15 @@ class TestClient(object):
with assert_raises(ClientError):
client.stream_create("/newton/prep", "NoSuchLayout")
# Bad method types
with assert_raises(ClientError):
client.http.put("/stream/list","")
# Try a bunch of times to make sure the request body is getting consumed
for x in range(10):
with assert_raises(ClientError):
client.http.post("/stream/list")
client = nilmdb.Client(url = testurl)
# Create three streams
client.stream_create("/newton/prep", "PrepData")
client.stream_create("/newton/raw", "RawData")
@@ -101,8 +118,10 @@ class TestClient(object):
["/newton/zzz/rawnotch", "RawNotchedData"]
])
# Match just one type or one path
eq_(client.stream_list(layout="RawData"), [ ["/newton/raw", "RawData"] ])
eq_(client.stream_list(path="/newton/raw"), [ ["/newton/raw", "RawData"] ])
eq_(client.stream_list(layout="RawData"),
[ ["/newton/raw", "RawData"] ])
eq_(client.stream_list(path="/newton/raw"),
[ ["/newton/raw", "RawData"] ])
# Try messing with resource limits to trigger errors and get
# more coverage. Here, make it so we can only create files 1
@@ -114,9 +133,10 @@ class TestClient(object):
client.stream_create("/newton/hello", "RawData")
resource.setrlimit(resource.RLIMIT_FSIZE, limit)
client.close()
def test_client_3_metadata(self):
client = nilmdb.Client(url = "http://localhost:12380/")
def test_client_03_metadata(self):
client = nilmdb.Client(url = testurl)
# Set / get metadata
eq_(client.stream_get_metadata("/newton/prep"), {})
@@ -131,9 +151,10 @@ class TestClient(object):
client.stream_update_metadata("/newton/raw", meta3)
eq_(client.stream_get_metadata("/newton/prep"), meta1)
eq_(client.stream_get_metadata("/newton/raw"), meta1)
eq_(client.stream_get_metadata("/newton/raw", [ "description" ] ), meta2)
eq_(client.stream_get_metadata("/newton/raw", [ "description",
"v_scale" ] ), meta1)
eq_(client.stream_get_metadata("/newton/raw",
[ "description" ] ), meta2)
eq_(client.stream_get_metadata("/newton/raw",
[ "description", "v_scale" ] ), meta1)
# missing key
eq_(client.stream_get_metadata("/newton/raw", "descr"),
@@ -146,9 +167,10 @@ class TestClient(object):
client.stream_set_metadata("/newton/prep", [1,2,3])
with assert_raises(ClientError):
client.stream_update_metadata("/newton/prep", [1,2,3])
client.close()
def test_client_4_insert(self):
client = nilmdb.Client(url = "http://localhost:12380/")
def test_client_04_insert(self):
client = nilmdb.Client(url = testurl)
datetime_tz.localtz_set("America/New_York")
@@ -158,13 +180,13 @@ class TestClient(object):
rate = 120
# First try a nonexistent path
data = nilmdb.timestamper.TimestamperRate(testfile, start, 120)
data = timestamper.TimestamperRate(testfile, start, 120)
with assert_raises(ClientError) as e:
result = client.stream_insert("/newton/no-such-path", data)
in_("404 Not Found", str(e.exception))
# Now try reversed timestamps
data = nilmdb.timestamper.TimestamperRate(testfile, start, 120)
data = timestamper.TimestamperRate(testfile, start, 120)
data = reversed(list(data))
with assert_raises(ClientError) as e:
result = client.stream_insert("/newton/prep", data)
@@ -173,19 +195,40 @@ class TestClient(object):
# Now try empty data (no server request made)
empty = cStringIO.StringIO("")
data = nilmdb.timestamper.TimestamperRate(empty, start, 120)
data = timestamper.TimestamperRate(empty, start, 120)
result = client.stream_insert("/newton/prep", data)
eq_(result, None)
# Try forcing a server request with empty data
# It's OK to insert an empty interval
client.http.put("stream/insert", "", { "path": "/newton/prep",
"start": 1, "end": 2 })
eq_(list(client.stream_intervals("/newton/prep")), [[1, 2]])
client.stream_remove("/newton/prep")
eq_(list(client.stream_intervals("/newton/prep")), [])
# Timestamps can be negative too
client.http.put("stream/insert", "", { "path": "/newton/prep",
"start": -2, "end": -1 })
eq_(list(client.stream_intervals("/newton/prep")), [[-2, -1]])
client.stream_remove("/newton/prep")
eq_(list(client.stream_intervals("/newton/prep")), [])
# Intervals that end at zero shouldn't be any different
client.http.put("stream/insert", "", { "path": "/newton/prep",
"start": -1, "end": 0 })
eq_(list(client.stream_intervals("/newton/prep")), [[-1, 0]])
client.stream_remove("/newton/prep")
eq_(list(client.stream_intervals("/newton/prep")), [])
# Try forcing a server request with equal start and end
with assert_raises(ClientError) as e:
client.http.put("stream/insert", "", { "path": "/newton/prep",
"start": 0, "end": 0 })
in_("400 Bad Request", str(e.exception))
in_("no data provided", str(e.exception))
in_("start must precede end", str(e.exception))
# Specify start/end (starts too late)
data = nilmdb.timestamper.TimestamperRate(testfile, start, 120)
data = timestamper.TimestamperRate(testfile, start, 120)
with assert_raises(ClientError) as e:
result = client.stream_insert("/newton/prep", data,
start + 5, start + 120)
@@ -194,7 +237,7 @@ class TestClient(object):
str(e.exception))
# Specify start/end (ends too early)
data = nilmdb.timestamper.TimestamperRate(testfile, start, 120)
data = timestamper.TimestamperRate(testfile, start, 120)
with assert_raises(ClientError) as e:
result = client.stream_insert("/newton/prep", data,
start, start + 1)
@@ -205,10 +248,9 @@ class TestClient(object):
str(e.exception))
# Now do the real load
data = nilmdb.timestamper.TimestamperRate(testfile, start, 120)
data = timestamper.TimestamperRate(testfile, start, 120)
result = client.stream_insert("/newton/prep", data,
start, start + 119.999777)
eq_(result, "ok")
# Verify the intervals. Should be just one, even if the data
# was inserted in chunks, due to nilmdb interval concatenation.
@@ -216,26 +258,33 @@ class TestClient(object):
eq_(intervals, [[start, start + 119.999777]])
# Try some overlapping data -- just insert it again
data = nilmdb.timestamper.TimestamperRate(testfile, start, 120)
data = timestamper.TimestamperRate(testfile, start, 120)
with assert_raises(ClientError) as e:
result = client.stream_insert("/newton/prep", data)
in_("400 Bad Request", str(e.exception))
in_("verlap", str(e.exception))
def test_client_5_extractremove(self):
# Misc tests for extract and remove. Most of them are in test_cmdline.
client = nilmdb.Client(url = "http://localhost:12380/")
client.close()
for x in client.stream_extract("/newton/prep", 123, 123):
raise Exception("shouldn't be any data for this request")
def test_client_05_extractremove(self):
# Misc tests for extract and remove. Most of them are in test_cmdline.
client = nilmdb.Client(url = testurl)
for x in client.stream_extract("/newton/prep", 999123, 999124):
raise AssertionError("shouldn't be any data for this request")
with assert_raises(ClientError) as e:
client.stream_remove("/newton/prep", 123, 120)
def test_client_6_generators(self):
# Test count
eq_(client.stream_count("/newton/prep"), 14400)
client.close()
def test_client_06_generators(self):
# A lot of the client functionality is already tested by test_cmdline,
# but this gets a bit more coverage that cmdline misses.
client = nilmdb.Client(url = "http://localhost:12380/")
client = nilmdb.Client(url = testurl)
# Trigger a client error in generator
start = datetime_tz.datetime_tz.smartparse("20120323T2000")
@@ -246,7 +295,7 @@ class TestClient(object):
start.totimestamp(),
end.totimestamp()).next()
in_("400 Bad Request", str(e.exception))
in_("end before start", str(e.exception))
in_("start must precede end", str(e.exception))
# Trigger a curl error in generator
with assert_raises(ServerError) as e:
@@ -256,24 +305,6 @@ class TestClient(object):
with assert_raises(ServerError) as e:
client.http.get_gen("http://nosuchurl/").next()
# Check non-json version of string output
eq_(json.loads(client.http.get("/stream/list",retjson=False)),
client.http.get("/stream/list",retjson=True))
# Check non-json version of generator output
for (a, b) in itertools.izip(
client.http.get_gen("/stream/list",retjson=False),
client.http.get_gen("/stream/list",retjson=True)):
eq_(json.loads(a), b)
# Check PUT with generator out
with assert_raises(ClientError) as e:
client.http.put_gen("stream/insert", "",
{ "path": "/newton/prep",
"start": 0, "end": 0 }).next()
in_("400 Bad Request", str(e.exception))
in_("no data provided", str(e.exception))
# Check 404 for missing streams
for function in [ client.stream_intervals, client.stream_extract ]:
with assert_raises(ClientError) as e:
@@ -281,32 +312,58 @@ class TestClient(object):
in_("404 Not Found", str(e.exception))
in_("No such stream", str(e.exception))
def test_client_7_chunked(self):
client.close()
def test_client_07_headers(self):
# Make sure that /stream/intervals and /stream/extract
# properly return streaming, chunked response. Pokes around
# in client.http internals a bit to look at the response
# headers.
# properly return streaming, chunked, text/plain response.
# Pokes around in client.http internals a bit to look at the
# response headers.
client = nilmdb.Client(url = "http://localhost:12380/")
client = nilmdb.Client(url = testurl)
http = client.http
# Use a warning rather than returning a test failure, so that we can
# still disable chunked responses for debugging.
x = client.http.get("stream/intervals", { "path": "/newton/prep" },
retjson=False)
lines_(x, 1)
if "transfer-encoding: chunked" not in client.http._headers.lower():
# Use a warning rather than returning a test failure for the
# transfer-encoding, so that we can still disable chunked
# responses for debugging.
def headers():
h = ""
for (k, v) in http._last_response.headers.items():
h += k + ": " + v + "\n"
return h.lower()
# Intervals
x = http.get("stream/intervals", { "path": "/newton/prep" })
if "transfer-encoding: chunked" not in headers():
warnings.warn("Non-chunked HTTP response for /stream/intervals")
if "content-type: application/x-json-stream" not in headers():
raise AssertionError("/stream/intervals content type "
"is not application/x-json-stream:\n" +
headers())
x = client.http.get("stream/extract",
# Extract
x = http.get("stream/extract",
{ "path": "/newton/prep",
"start": "123",
"end": "123" }, retjson=False)
if "transfer-encoding: chunked" not in client.http._headers.lower():
"end": "124" })
if "transfer-encoding: chunked" not in headers():
warnings.warn("Non-chunked HTTP response for /stream/extract")
if "content-type: text/plain;charset=utf-8" not in headers():
raise AssertionError("/stream/extract is not text/plain:\n" +
headers())
def test_client_8_unicode(self):
# Make sure Access-Control-Allow-Origin gets set
if "access-control-allow-origin: " not in headers():
raise AssertionError("No Access-Control-Allow-Origin (CORS) "
"header in /stream/extract response:\n" +
headers())
client.close()
def test_client_08_unicode(self):
# Basic Unicode tests
client = nilmdb.Client(url = "http://localhost:12380/")
client = nilmdb.Client(url = testurl)
# Delete streams that exist
for stream in client.stream_list():
@@ -340,3 +397,209 @@ class TestClient(object):
eq_(client.stream_get_metadata(raw[0]), meta1)
eq_(client.stream_get_metadata(raw[0], [ "alpha" ]), meta2)
eq_(client.stream_get_metadata(raw[0], [ "alpha", "β" ]), meta1)
client.close()
def test_client_09_closing(self):
# Make sure we actually close sockets correctly. New
# connections will block for a while if they're not closed, since the
# server will stop accepting new connections.
for test in [1, 2]:
start = time.time()
for i in range(50):
if time.time() - start > 15:
raise AssertionError("Connections seem to be blocking... "
"probably not closing properly.")
if test == 1:
# explicit close
client = nilmdb.Client(url = testurl)
with assert_raises(ClientError) as e:
client.stream_remove("/newton/prep", 123, 120)
client.close() # remove this to see the failure
elif test == 2:
# use the context manager
with nilmdb.Client(url = testurl) as c:
with assert_raises(ClientError) as e:
c.stream_remove("/newton/prep", 123, 120)
def test_client_10_context(self):
# Test using the client's stream insertion context manager to
# insert data.
client = nilmdb.Client(testurl)
client.stream_create("/context/test", "uint16_1")
with client.stream_insert_context("/context/test") as ctx:
# override _max_data to trigger frequent server updates
ctx._max_data = 15
with assert_raises(ValueError):
ctx.insert_line("100 1")
ctx.insert_line("100 1\n")
ctx.insert_iter([ "101 1\n",
"102 1\n",
"103 1\n" ])
ctx.insert_line("104 1\n")
ctx.insert_line("105 1\n")
ctx.finalize()
ctx.insert_line("106 1\n")
ctx.update_end(106.5)
ctx.finalize()
ctx.update_start(106.8)
ctx.insert_line("107 1\n")
ctx.insert_line("108 1\n")
ctx.insert_line("109 1\n")
ctx.insert_line("110 1\n")
ctx.insert_line("111 1\n")
ctx.update_end(113)
ctx.insert_line("112 1\n")
ctx.update_end(114)
ctx.insert_line("113 1\n")
ctx.update_end(115)
ctx.insert_line("114 1\n")
ctx.finalize()
with assert_raises(ClientError):
with client.stream_insert_context("/context/test", 100, 200) as ctx:
ctx.insert_line("115 1\n")
with assert_raises(ClientError):
with client.stream_insert_context("/context/test", 200, 300) as ctx:
ctx.insert_line("115 1\n")
with client.stream_insert_context("/context/test", 200, 300) as ctx:
# make sure our override wasn't permanent
ne_(ctx._max_data, 15)
ctx.insert_line("225 1\n")
ctx.finalize()
eq_(list(client.stream_intervals("/context/test")),
[ [ 100, 105.000001 ],
[ 106, 106.5 ],
[ 106.8, 115 ],
[ 200, 300 ] ])
client.stream_destroy("/context/test")
client.close()
def test_client_11_emptyintervals(self):
# Empty intervals are ok! If recording detection events
# by inserting rows into the database, we want to be able to
# have an interval where no events occurred. Test them here.
client = nilmdb.Client(testurl)
client.stream_create("/empty/test", "uint16_1")
def info():
result = []
for interval in list(client.stream_intervals("/empty/test")):
result.append((client.stream_count("/empty/test", *interval),
interval))
return result
eq_(info(), [])
# Insert a region with just a few points
with client.stream_insert_context("/empty/test") as ctx:
ctx.update_start(100)
ctx.insert_line("140 1\n")
ctx.insert_line("150 1\n")
ctx.insert_line("160 1\n")
ctx.update_end(200)
ctx.finalize()
eq_(info(), [(3, [100, 200])])
# Delete chunk, which will leave one data point and two intervals
client.stream_remove("/empty/test", 145, 175)
eq_(info(), [(1, [100, 145]),
(0, [175, 200])])
# Try also creating a completely empty interval from scratch,
# in a few different ways.
client.stream_insert_block("/empty/test", "", 300, 350)
client.stream_insert("/empty/test", [], 400, 450)
with client.stream_insert_context("/empty/test", 500, 550):
pass
# If enough timestamps aren't provided, empty intervals won't be created.
client.stream_insert("/empty/test", [])
with client.stream_insert_context("/empty/test"):
pass
client.stream_insert("/empty/test", [], start = 600)
with client.stream_insert_context("/empty/test", start = 700):
pass
client.stream_insert("/empty/test", [], end = 850)
with client.stream_insert_context("/empty/test", end = 950):
pass
# Try various things that might cause problems
with client.stream_insert_context("/empty/test", 1000, 1050):
ctx.finalize() # inserts [1000, 1050]
ctx.finalize() # nothing
ctx.finalize() # nothing
ctx.insert_line("1100 1\n")
ctx.finalize() # inserts [1100, 1100.000001]
ctx.update_start(1199)
ctx.insert_line("1200 1\n")
ctx.update_end(1250)
ctx.finalize() # inserts [1199, 1250]
ctx.update_start(1299)
ctx.finalize() # nothing
ctx.update_end(1350)
ctx.finalize() # nothing
ctx.update_start(1400)
ctx.update_end(1450)
ctx.finalize()
# implicit last finalize inserts [1400, 1450]
# Check everything
eq_(info(), [(1, [100, 145]),
(0, [175, 200]),
(0, [300, 350]),
(0, [400, 450]),
(0, [500, 550]),
(0, [1000, 1050]),
(1, [1100, 1100.000001]),
(1, [1199, 1250]),
(0, [1400, 1450]),
])
# Clean up
client.stream_destroy("/empty/test")
client.close()
def test_client_12_persistent(self):
# Check that connections are persistent when they should be.
# This is pretty hard to test; we have to poke deep into
# the Requests library.
with nilmdb.Client(url = testurl) as c:
def connections():
try:
poolmanager = c.http._last_response.connection.poolmanager
pool = poolmanager.pools[('http','localhost',12380)]
return (pool.num_connections, pool.num_requests)
except:
raise SkipTest("can't get connection info")
# First request makes a connection
c.stream_create("/persist/test", "uint16_1")
eq_(connections(), (1, 1))
# Non-generator
c.stream_list("/persist/test")
eq_(connections(), (1, 2))
c.stream_list("/persist/test")
eq_(connections(), (1, 3))
# Generators
for x in c.stream_intervals("/persist/test"):
pass
eq_(connections(), (1, 4))
for x in c.stream_intervals("/persist/test"):
pass
eq_(connections(), (1, 5))
# Clean up
c.stream_destroy("/persist/test")
eq_(connections(), (1, 6))

View File

@@ -3,12 +3,12 @@
import nilmdb
from nilmdb.utils.printf import *
import nilmdb.cmdline
from nilmdb.utils import datetime_tz
import unittest
from nose.tools import *
from nose.tools import assert_raises
import itertools
import datetime_tz
import os
import re
import shutil
@@ -27,9 +27,10 @@ testdb = "tests/cmdline-testdb"
def server_start(max_results = None, bulkdata_args = {}):
global test_server, test_db
# Start web app on a custom port
test_db = nilmdb.NilmDB(testdb, sync = False,
max_results = max_results,
bulkdata_args = bulkdata_args)
test_db = nilmdb.utils.serializer_proxy(nilmdb.NilmDB)(
testdb, sync = False,
max_results = max_results,
bulkdata_args = bulkdata_args)
test_server = nilmdb.Server(test_db, host = "127.0.0.1",
port = 12380, stoppable = False,
fast_shutdown = True,
@@ -128,8 +129,8 @@ class TestCmdline(object):
with open(file) as f:
contents = f.read()
if contents != self.captured:
#print contents[1:1000] + "\n"
#print self.captured[1:1000] + "\n"
print contents[1:1000] + "\n"
print self.captured[1:1000] + "\n"
raise AssertionError("captured data doesn't match " + file)
def matchfilecount(self, file):
@@ -162,16 +163,16 @@ class TestCmdline(object):
# try some URL constructions
self.fail("--url http://nosuchurl/ info")
self.contain("Couldn't resolve host 'nosuchurl'")
self.contain("error connecting to server")
self.fail("--url nosuchurl info")
self.contain("Couldn't resolve host 'nosuchurl'")
self.contain("error connecting to server")
self.fail("-u nosuchurl/foo info")
self.contain("Couldn't resolve host 'nosuchurl'")
self.contain("error connecting to server")
self.fail("-u localhost:0 info")
self.contain("couldn't connect to host")
self.fail("-u localhost:1 info")
self.contain("error connecting to server")
self.ok("-u localhost:12380 info")
self.ok("info")
@@ -191,14 +192,32 @@ class TestCmdline(object):
self.fail("extract --start 2000-01-01 --start 2001-01-02")
self.contain("duplicated argument")
def test_02_info(self):
def test_02_parsetime(self):
os.environ['TZ'] = "America/New_York"
test = datetime_tz.datetime_tz.now()
parse_time = nilmdb.utils.time.parse_time
eq_(parse_time(str(test)), test)
test = datetime_tz.datetime_tz.smartparse("20120405 1400-0400")
eq_(parse_time("hi there 20120405 1400-0400 testing! 123"), test)
eq_(parse_time("20120405 1800 UTC"), test)
eq_(parse_time("20120405 1400-0400 UTC"), test)
for badtime in [ "20120405 1400-9999", "hello", "-", "", "4:00" ]:
with assert_raises(ValueError):
x = parse_time(badtime)
x = parse_time("now")
eq_(parse_time("snapshot-20120405-140000.raw.gz"), test)
eq_(parse_time("prep-20120405T1400"), test)
def test_03_info(self):
self.ok("info")
self.contain("Server URL: http://localhost:12380/")
self.contain("Client version: " + nilmdb.__version__)
self.contain("Server version: " + test_server.version)
self.contain("Server database path")
self.contain("Server database size")
self.contain("Server database free space")
def test_03_createlist(self):
def test_04_createlist(self):
# Basic stream tests, like those in test_client.
# No streams
@@ -272,9 +291,9 @@ class TestCmdline(object):
# reversed range
self.fail("list /newton/prep --start 2020-01-01 --end 2000-01-01")
self.contain("start is after end")
self.contain("start must precede end")
def test_04_metadata(self):
def test_05_metadata(self):
# Set / get metadata
self.fail("metadata")
self.fail("metadata --get")
@@ -331,21 +350,6 @@ class TestCmdline(object):
self.fail("metadata /newton/nosuchpath")
self.contain("No stream at path /newton/nosuchpath")
def test_05_parsetime(self):
os.environ['TZ'] = "America/New_York"
cmd = nilmdb.cmdline.Cmdline(None)
test = datetime_tz.datetime_tz.now()
eq_(cmd.parse_time(str(test)), test)
test = datetime_tz.datetime_tz.smartparse("20120405 1400-0400")
eq_(cmd.parse_time("hi there 20120405 1400-0400 testing! 123"), test)
eq_(cmd.parse_time("20120405 1800 UTC"), test)
eq_(cmd.parse_time("20120405 1400-0400 UTC"), test)
for badtime in [ "20120405 1400-9999", "hello", "-", "", "14:00" ]:
with assert_raises(ValueError):
x = cmd.parse_time(badtime)
eq_(cmd.parse_time("snapshot-20120405-140000.raw.gz"), test)
eq_(cmd.parse_time("prep-20120405T1400"), test)
def test_06_insert(self):
self.ok("insert --help")
@@ -369,6 +373,14 @@ class TestCmdline(object):
with open("tests/data/prep-20120323T1004-timestamped") as input:
self.ok("insert --none /newton/prep", input)
# insert pre-timestamped data, with bad times (non-monotonic)
os.environ['TZ'] = "UTC"
with open("tests/data/prep-20120323T1004-badtimes") as input:
self.fail("insert --none /newton/prep", input)
self.contain("error parsing input data")
self.contain("line 7:")
self.contain("timestamp is not monotonically increasing")
# insert data with normal timestamper from filename
os.environ['TZ'] = "UTC"
self.ok("insert --rate 120 /newton/prep "
@@ -433,13 +445,24 @@ class TestCmdline(object):
self.contain("no intervals")
self.ok("list --detail --path *prep --start='23 Mar 2012 10:05:15.50'"
+ " --end='23 Mar 2012 10:05:15.50'")
+ " --end='23 Mar 2012 10:05:15.51'")
lines_(self.captured, 2)
self.contain("10:05:15.500")
self.ok("list --detail")
lines_(self.captured, 8)
# Verify the "raw timestamp" output
self.ok("list --detail --path *prep --timestamp-raw "
"--start='23 Mar 2012 10:05:15.50'")
lines_(self.captured, 2)
self.contain("[ 1332497115.5 -> 1332497159.991668 ]")
self.ok("list --detail --path *prep -T "
"--start='23 Mar 2012 10:05:15.612'")
lines_(self.captured, 2)
self.contain("[ 1332497115.612 -> 1332497159.991668 ]")
def test_08_extract(self):
# nonexistent stream
self.fail("extract /no/such/foo --start 2000-01-01 --end 2020-01-01")
@@ -451,29 +474,29 @@ class TestCmdline(object):
# empty ranges return error 2
self.fail("extract -a /newton/prep " +
"--start '23 Mar 2012 10:00:30' " +
"--end '23 Mar 2012 10:00:30'",
"--start '23 Mar 2012 20:00:30' " +
"--end '23 Mar 2012 20:00:31'",
exitcode = 2, require_error = False)
self.contain("no data")
self.fail("extract -a /newton/prep " +
"--start '23 Mar 2012 10:00:30.000001' " +
"--end '23 Mar 2012 10:00:30.000001'",
"--start '23 Mar 2012 20:00:30.000001' " +
"--end '23 Mar 2012 20:00:30.000002'",
exitcode = 2, require_error = False)
self.contain("no data")
self.fail("extract -a /newton/prep " +
"--start '23 Mar 2022 10:00:30' " +
"--end '23 Mar 2022 10:00:30'",
"--end '23 Mar 2022 10:00:31'",
exitcode = 2, require_error = False)
self.contain("no data")
# but are ok if we're just counting results
self.ok("extract --count /newton/prep " +
"--start '23 Mar 2012 10:00:30' " +
"--end '23 Mar 2012 10:00:30'")
"--start '23 Mar 2012 20:00:30' " +
"--end '23 Mar 2012 20:00:31'")
self.match("0\n")
self.ok("extract -c /newton/prep " +
"--start '23 Mar 2012 10:00:30.000001' " +
"--end '23 Mar 2012 10:00:30.000001'")
"--start '23 Mar 2012 20:00:30.000001' " +
"--end '23 Mar 2012 20:00:30.000002'")
self.match("0\n")
# Check various dumps against stored copies of how they should appear
@@ -495,6 +518,8 @@ class TestCmdline(object):
test(4, "10:00:30.008333", "10:00:30.025")
test(5, "10:00:30", "10:00:31", extra="--annotate --bare")
test(6, "10:00:30", "10:00:31", extra="-b")
test(7, "10:00:30", "10:00:30.999", extra="-a -T")
test(7, "10:00:30", "10:00:30.999", extra="-a --timestamp-raw")
# all data put in by tests
self.ok("extract -a /newton/prep --start 2000-01-01 --end 2020-01-01")
@@ -518,31 +543,31 @@ class TestCmdline(object):
self.fail("remove /no/such/foo --start 2000-01-01 --end 2020-01-01")
self.contain("No stream at path")
# empty or backward ranges return errors
self.fail("remove /newton/prep --start 2020-01-01 --end 2000-01-01")
self.contain("start is after end")
self.contain("start must precede end")
# empty ranges return success, backwards ranges return error
self.ok("remove /newton/prep " +
"--start '23 Mar 2012 10:00:30' " +
"--end '23 Mar 2012 10:00:30'")
self.match("")
self.ok("remove /newton/prep " +
"--start '23 Mar 2012 10:00:30.000001' " +
"--end '23 Mar 2012 10:00:30.000001'")
self.match("")
self.ok("remove /newton/prep " +
"--start '23 Mar 2022 10:00:30' " +
"--end '23 Mar 2022 10:00:30'")
self.match("")
self.fail("remove /newton/prep " +
"--start '23 Mar 2012 10:00:30' " +
"--end '23 Mar 2012 10:00:30'")
self.contain("start must precede end")
self.fail("remove /newton/prep " +
"--start '23 Mar 2012 10:00:30.000001' " +
"--end '23 Mar 2012 10:00:30.000001'")
self.contain("start must precede end")
self.fail("remove /newton/prep " +
"--start '23 Mar 2022 10:00:30' " +
"--end '23 Mar 2022 10:00:30'")
self.contain("start must precede end")
# Verbose
self.ok("remove -c /newton/prep " +
"--start '23 Mar 2012 10:00:30' " +
"--end '23 Mar 2012 10:00:30'")
"--start '23 Mar 2022 20:00:30' " +
"--end '23 Mar 2022 20:00:31'")
self.match("0\n")
self.ok("remove --count /newton/prep " +
"--start '23 Mar 2012 10:00:30' " +
"--end '23 Mar 2012 10:00:30'")
"--start '23 Mar 2022 20:00:30' " +
"--end '23 Mar 2022 20:00:31'")
self.match("0\n")
# Make sure we have the data we expect
@@ -743,7 +768,7 @@ class TestCmdline(object):
"tests/data/prep-20120323T1000")
# Should take up about 2.8 MB here (including directory entries)
du_before = nilmdb.utils.diskusage.du_bytes(testdb)
du_before = nilmdb.utils.diskusage.du(testdb)
# Make sure we have the data we expect
self.ok("list --detail")
@@ -793,7 +818,7 @@ class TestCmdline(object):
# We have 1/8 of the data that we had before, so the file size
# should have dropped below 1/4 of what it used to be
du_after = nilmdb.utils.diskusage.du_bytes(testdb)
du_after = nilmdb.utils.diskusage.du(testdb)
lt_(du_after, (du_before / 4))
# Remove anything that came from the 10:02 data file

View File

@@ -2,13 +2,14 @@
import nilmdb
from nilmdb.utils.printf import *
import datetime_tz
from nilmdb.utils import datetime_tz
from nose.tools import *
from nose.tools import assert_raises
import itertools
from nilmdb.interval import Interval, DBInterval, IntervalSet, IntervalError
from nilmdb.server.interval import (Interval, DBInterval,
IntervalSet, IntervalError)
from testutil.helpers import *
import unittest
@@ -54,7 +55,7 @@ class TestInterval:
for x in [ "03/24/2012", "03/25/2012", "03/26/2012" ] ]
# basic construction
i = Interval(d1, d1)
i = Interval(d1, d2)
i = Interval(d1, d3)
eq_(i.start, d1)
eq_(i.end, d3)
@@ -76,8 +77,8 @@ class TestInterval:
assert(Interval(d1, d3) > Interval(d1, d2))
assert(Interval(d1, d2) < Interval(d2, d3))
assert(Interval(d1, d3) < Interval(d2, d3))
assert(Interval(d2, d2) > Interval(d1, d3))
assert(Interval(d3, d3) == Interval(d3, d3))
assert(Interval(d2, d2+0.01) > Interval(d1, d3))
assert(Interval(d3, d3+0.01) == Interval(d3, d3+0.01))
#with assert_raises(TypeError): # was AttributeError, that's wrong
# x = (i == 123)
@@ -292,7 +293,7 @@ class TestIntervalDB:
# actual start, end can be a subset
a = DBInterval(150, 200, 100, 200, 10000, 20000)
b = DBInterval(100, 150, 100, 200, 10000, 20000)
c = DBInterval(150, 150, 100, 200, 10000, 20000)
c = DBInterval(150, 160, 100, 200, 10000, 20000)
# Make a set of DBIntervals
iseta = IntervalSet([a, b])

View File

@@ -13,6 +13,7 @@ def func_with_callback(a, b, callback):
callback(a)
callback(b)
callback(a+b)
return "return value"
class TestIteratorizer(object):
def test(self):
@@ -25,22 +26,21 @@ class TestIteratorizer(object):
eq_(self.result, "123")
# Now make it an iterator
it = nilmdb.utils.Iteratorizer(
lambda x:
func_with_callback(1, 2, x))
result = ""
for i in it:
result += str(i)
eq_(result, "123")
# Make sure things work when an exception occurs
it = nilmdb.utils.Iteratorizer(
lambda x:
func_with_callback(1, "a", x))
result = ""
with assert_raises(TypeError) as e:
f = lambda x: func_with_callback(1, 2, x)
with nilmdb.utils.Iteratorizer(f) as it:
for i in it:
result += str(i)
eq_(result, "123")
eq_(it.retval, "return value")
# Make sure things work when an exception occurs
result = ""
with nilmdb.utils.Iteratorizer(
lambda x: func_with_callback(1, "a", x)) as it:
with assert_raises(TypeError) as e:
for i in it:
result += str(i)
eq_(result, "1a")
# Now try to trigger the case where we stop iterating
@@ -48,8 +48,14 @@ class TestIteratorizer(object):
# itself. This doesn't have a particular result in the test,
# but gains coverage.
def foo():
it = nilmdb.utils.Iteratorizer(
lambda x:
func_with_callback(1, 2, x))
it.next()
with nilmdb.utils.Iteratorizer(f) as it:
it.next()
foo()
eq_(it.retval, None)
# Do the same thing when the curl hack is applied
def foo():
with nilmdb.utils.Iteratorizer(f, curl_hack = True) as it:
it.next()
foo()
eq_(it.retval, None)

View File

@@ -22,17 +22,17 @@ import unittest
from testutil.helpers import *
from nilmdb.layout import *
from nilmdb.server.layout import *
class TestLayouts(object):
# Some nilmdb.layout tests. Not complete, just fills in missing
# coverage.
def test_layouts(self):
x = nilmdb.layout.get_named("PrepData")
y = nilmdb.layout.get_named("float32_8")
x = nilmdb.server.layout.get_named("PrepData")
y = nilmdb.server.layout.get_named("float32_8")
eq_(x.count, y.count)
eq_(x.datatype, y.datatype)
y = nilmdb.layout.get_named("float32_7")
y = nilmdb.server.layout.get_named("float32_7")
ne_(x.count, y.count)
eq_(x.datatype, y.datatype)
@@ -89,11 +89,23 @@ class TestLayouts(object):
# non-monotonic
parser = Parser(name_raw)
data = ( "1234567890.100000 1 2 3 4 5 6\n" +
"1234567890.000000 1 2 3 4 5 6\n" )
"1234567890.099999 1 2 3 4 5 6\n" )
with assert_raises(ParserError) as e:
parser.parse(data)
in_("not monotonically increasing", str(e.exception))
parser = Parser(name_raw)
data = ( "1234567890.100000 1 2 3 4 5 6\n" +
"1234567890.100000 1 2 3 4 5 6\n" )
with assert_raises(ParserError) as e:
parser.parse(data)
in_("not monotonically increasing", str(e.exception))
parser = Parser(name_raw)
data = ( "1234567890.100000 1 2 3 4 5 6\n" +
"1234567890.100001 1 2 3 4 5 6\n" )
parser.parse(data)
# RawData with values out of bounds
parser = Parser(name_raw)
data = ( "1234567890.000000 1 2 3 4 500000 6\n" +

View File

@@ -16,6 +16,8 @@ import Queue
import cStringIO
import time
from nilmdb.utils import serializer_proxy
testdb = "tests/testdb"
#@atexit.register
@@ -93,16 +95,28 @@ class Test00Nilmdb(object): # named 00 so it runs first
eq_(db.stream_get_metadata("/newton/prep"), meta1)
eq_(db.stream_get_metadata("/newton/raw"), meta1)
# fill in some test coverage for start >= end
with assert_raises(nilmdb.server.NilmDBError):
db.stream_remove("/newton/prep", 0, 0)
with assert_raises(nilmdb.server.NilmDBError):
db.stream_remove("/newton/prep", 1, 0)
db.stream_remove("/newton/prep", 0, 1)
db.close()
class TestBlockingServer(object):
def setUp(self):
self.db = nilmdb.NilmDB(testdb, sync=False)
self.db = serializer_proxy(nilmdb.NilmDB)(testdb, sync=False)
def tearDown(self):
self.db.close()
def test_blocking_server(self):
# Server should fail if the database doesn't have a "_thread_safe"
# property.
with assert_raises(KeyError):
nilmdb.Server(object())
# Start web app on a custom port
self.server = nilmdb.Server(self.db, host = "127.0.0.1",
port = 12380, stoppable = True)
@@ -113,7 +127,8 @@ class TestBlockingServer(object):
self.server.start(blocking = True, event = event)
thread = threading.Thread(target = run_server)
thread.start()
event.wait(timeout = 2)
if not event.wait(timeout = 10):
raise AssertionError("server didn't start in 10 seconds")
# Send request to exit.
req = urlopen("http://127.0.0.1:12380/exit/", timeout = 1)
@@ -132,7 +147,7 @@ class TestServer(object):
def setUp(self):
# Start web app on a custom port
self.db = nilmdb.NilmDB(testdb, sync=False)
self.db = serializer_proxy(nilmdb.NilmDB)(testdb, sync=False)
self.server = nilmdb.Server(self.db, host = "127.0.0.1",
port = 12380, stoppable = False)
self.server.start(blocking = False)
@@ -150,8 +165,8 @@ class TestServer(object):
eq_(e.exception.code, 404)
# Check version
eq_(distutils.version.StrictVersion(getjson("/version")),
distutils.version.StrictVersion(self.server.version))
eq_(distutils.version.LooseVersion(getjson("/version")),
distutils.version.LooseVersion(nilmdb.__version__))
def test_stream_list(self):
# Known streams that got populated by an earlier test (test_nilmdb)
@@ -193,12 +208,3 @@ class TestServer(object):
data = getjson("/stream/get_metadata?path=/newton/prep"
"&key=foo")
eq_(data, {'foo': None})
def test_insert(self):
# GET instead of POST (no body)
# (actual POST test is done by client code)
with assert_raises(HTTPError) as e:
getjson("/stream/insert?path=/newton/prep&start=0&end=0")
eq_(e.exception.code, 400)

View File

@@ -6,7 +6,7 @@ from nilmdb.utils.printf import *
from nose.tools import *
from nose.tools import assert_raises
from nilmdb.rbtree import RBTree, RBNode
from nilmdb.server.rbtree import RBTree, RBNode
from testutil.helpers import *
import unittest

View File

@@ -9,16 +9,28 @@ import time
from testutil.helpers import *
#raise nose.exc.SkipTest("Skip these")
class Foo(object):
val = 0
def __init__(self, asdf = "asdf"):
self.init_thread = threading.current_thread().name
@classmethod
def foo(self):
pass
def fail(self):
raise Exception("you asked me to do this")
def test(self, debug = False):
self.tester(debug)
def t(self):
pass
def tester(self, debug = False):
# purposely not thread-safe
self.test_thread = threading.current_thread().name
oldval = self.val
newval = oldval + 1
time.sleep(0.05)
@@ -46,27 +58,29 @@ class Base(object):
t.join()
self.verify_result()
def verify_result(self):
eq_(self.foo.val, 20)
eq_(self.foo.init_thread, self.foo.test_thread)
class TestUnserialized(Base):
def setUp(self):
self.foo = Foo()
def verify_result(self):
# This should have failed to increment properly
assert(self.foo.val != 20)
ne_(self.foo.val, 20)
# Init and tests ran in different threads
ne_(self.foo.init_thread, self.foo.test_thread)
class TestSerialized(Base):
class TestSerializer(Base):
def setUp(self):
self.realfoo = Foo()
self.foo = nilmdb.utils.Serializer(self.realfoo)
self.foo = nilmdb.utils.serializer_proxy(Foo)("qwer")
def tearDown(self):
del self.foo
def verify_result(self):
# This should have worked
eq_(self.realfoo.val, 20)
def test_attribute(self):
# Can't wrap attributes yet
with assert_raises(TypeError):
self.foo.val
def test_multi(self):
sp = nilmdb.utils.serializer_proxy
sp(Foo("x")).t()
sp(sp(Foo)("x")).t()
sp(sp(Foo))("x").t()
sp(sp(Foo("x"))).t()
sp(sp(Foo)("x")).t()
sp(sp(Foo))("x").t()

View File

@@ -0,0 +1,96 @@
import nilmdb
from nilmdb.utils.printf import *
import nose
from nose.tools import *
from nose.tools import assert_raises
from testutil.helpers import *
import threading
class Thread(threading.Thread):
def __init__(self, target):
self.target = target
threading.Thread.__init__(self)
def run(self):
try:
self.target()
except AssertionError as e:
self.error = e
else:
self.error = None
class Test():
def __init__(self):
self.test = 1234
@classmethod
def asdf(cls):
pass
def foo(self, exception = False, reenter = False):
if exception:
raise Exception()
self.bar(reenter)
def bar(self, reenter):
if reenter:
self.foo()
return 123
def baz_threaded(self, target):
t = Thread(target)
t.start()
t.join()
return t
def baz(self, target):
target()
class TestThreadSafety(object):
def tryit(self, c, threading_ok, concurrent_ok):
eq_(c.test, 1234)
c.foo()
t = Thread(c.foo)
t.start()
t.join()
if threading_ok and t.error:
raise Exception("got unexpected error: " + str(t.error))
if not threading_ok and not t.error:
raise Exception("failed to get expected error")
try:
c.baz(c.foo)
except AssertionError as e:
if concurrent_ok:
raise Exception("got unexpected error: " + str(e))
else:
if not concurrent_ok:
raise Exception("failed to get expected error")
t = c.baz_threaded(c.foo)
if (concurrent_ok and threading_ok) and t.error:
raise Exception("got unexpected error: " + str(t.error))
if not (concurrent_ok and threading_ok) and not t.error:
raise Exception("failed to get expected error")
def test(self):
proxy = nilmdb.utils.threadsafety.verify_proxy
self.tryit(Test(), True, True)
self.tryit(proxy(Test(), True, True, True), False, False)
self.tryit(proxy(Test(), True, True, False), False, True)
self.tryit(proxy(Test(), True, False, True), True, False)
self.tryit(proxy(Test(), True, False, False), True, True)
self.tryit(proxy(Test, True, True, True)(), False, False)
self.tryit(proxy(Test, True, True, False)(), False, True)
self.tryit(proxy(Test, True, False, True)(), True, False)
self.tryit(proxy(Test, True, False, False)(), True, True)
proxy(proxy(proxy(Test))()).foo()
c = proxy(Test())
c.foo()
try:
c.foo(exception = True)
except Exception:
pass
c.foo()

View File

@@ -1,7 +1,6 @@
import nilmdb
from nilmdb.utils.printf import *
import datetime_tz
from nilmdb.utils import datetime_tz
from nose.tools import *
from nose.tools import assert_raises
@@ -11,6 +10,8 @@ import cStringIO
from testutil.helpers import *
from nilmdb.utils import timestamper
class TestTimestamper(object):
# Not a very comprehensive test, but it's good enough.
@@ -27,20 +28,20 @@ class TestTimestamper(object):
# full
input = cStringIO.StringIO(join(lines_in))
ts = nilmdb.timestamper.TimestamperRate(input, start, 8000)
ts = timestamper.TimestamperRate(input, start, 8000)
foo = ts.readlines()
eq_(foo, join(lines_out))
in_("TimestamperRate(..., start=", str(ts))
# first 30 or so bytes means the first 2 lines
input = cStringIO.StringIO(join(lines_in))
ts = nilmdb.timestamper.TimestamperRate(input, start, 8000)
ts = timestamper.TimestamperRate(input, start, 8000)
foo = ts.readlines(30)
eq_(foo, join(lines_out[0:2]))
# stop iteration early
input = cStringIO.StringIO(join(lines_in))
ts = nilmdb.timestamper.TimestamperRate(input, start, 8000,
ts = timestamper.TimestamperRate(input, start, 8000,
1332561600.000200)
foo = ""
for line in ts:
@@ -49,21 +50,21 @@ class TestTimestamper(object):
# stop iteration early (readlines)
input = cStringIO.StringIO(join(lines_in))
ts = nilmdb.timestamper.TimestamperRate(input, start, 8000,
ts = timestamper.TimestamperRate(input, start, 8000,
1332561600.000200)
foo = ts.readlines()
eq_(foo, join(lines_out[0:2]))
# stop iteration really early
input = cStringIO.StringIO(join(lines_in))
ts = nilmdb.timestamper.TimestamperRate(input, start, 8000,
ts = timestamper.TimestamperRate(input, start, 8000,
1332561600.000000)
foo = ts.readlines()
eq_(foo, "")
# use iterator
input = cStringIO.StringIO(join(lines_in))
ts = nilmdb.timestamper.TimestamperRate(input, start, 8000)
ts = timestamper.TimestamperRate(input, start, 8000)
foo = ""
for line in ts:
foo += line
@@ -71,21 +72,21 @@ class TestTimestamper(object):
# check that TimestamperNow gives similar result
input = cStringIO.StringIO(join(lines_in))
ts = nilmdb.timestamper.TimestamperNow(input)
ts = timestamper.TimestamperNow(input)
foo = ts.readlines()
ne_(foo, join(lines_out))
eq_(len(foo), len(join(lines_out)))
eq_(str(ts), "TimestamperNow(...)")
# Test passing a file (should be empty)
ts = nilmdb.timestamper.TimestamperNow("/dev/null")
ts = timestamper.TimestamperNow("/dev/null")
for line in ts:
raise AssertionError
ts.close()
# Test the null timestamper
input = cStringIO.StringIO(join(lines_out)) # note: lines_out
ts = nilmdb.timestamper.TimestamperNull(input)
ts = timestamper.TimestamperNull(input)
foo = ts.readlines()
eq_(foo, join(lines_out))
eq_(str(ts), "TimestamperNull(...)")

View File

@@ -1,54 +0,0 @@
nosetests
32: 386 μs (12.0625 μs each)
64: 672.102 μs (10.5016 μs each)
128: 1510.86 μs (11.8036 μs each)
256: 2782.11 μs (10.8676 μs each)
512: 5591.87 μs (10.9216 μs each)
1024: 12812.1 μs (12.5119 μs each)
2048: 21835.1 μs (10.6617 μs each)
4096: 46059.1 μs (11.2449 μs each)
8192: 114127 μs (13.9315 μs each)
16384: 181217 μs (11.0606 μs each)
32768: 419649 μs (12.8067 μs each)
65536: 804320 μs (12.2729 μs each)
131072: 1.73534e+06 μs (13.2396 μs each)
262144: 3.74451e+06 μs (14.2842 μs each)
524288: 8.8694e+06 μs (16.917 μs each)
1048576: 1.69993e+07 μs (16.2118 μs each)
2097152: 3.29387e+07 μs (15.7064 μs each)
|
+3.29387e+07 *
| ----
| -----
| ----
| -----
| -----
| ----
| -----
| -----
| ----
| -----
| ----
| -----
| ---
| ---
| ---
| -------
---+386---------------------------------------------------------------------+---
+32 +2.09715e+06
name #n tsub ttot tavg
..vl/lees/bucket/nilm/nilmdb/nilmdb/interval.py.__iadd__:184 4194272 10.025323 30.262723 0.000007
..evl/lees/bucket/nilm/nilmdb/nilmdb/interval.py.__init__:27 4194272 24.715377 24.715377 0.000006
../lees/bucket/nilm/nilmdb/nilmdb/interval.py.intersects:239 4194272 6.705053 12.577620 0.000003
..im/devl/lees/bucket/nilm/nilmdb/tests/aplotter.py.plot:404 1 0.000048 0.001412 0.001412
../lees/bucket/nilm/nilmdb/tests/aplotter.py.plot_double:311 1 0.000106 0.001346 0.001346
..vl/lees/bucket/nilm/nilmdb/tests/aplotter.py.plot_data:201 1 0.000098 0.000672 0.000672
..vl/lees/bucket/nilm/nilmdb/tests/aplotter.py.plot_line:241 16 0.000298 0.000496 0.000031
..jim/devl/lees/bucket/nilm/nilmdb/nilmdb/printf.py.printf:4 17 0.000252 0.000334 0.000020
..vl/lees/bucket/nilm/nilmdb/tests/aplotter.py.transposed:39 1 0.000229 0.000235 0.000235
..vl/lees/bucket/nilm/nilmdb/tests/aplotter.py.y_reversed:45 1 0.000151 0.000174 0.000174
name tid fname ttot scnt
_MainThread 47269783682784 ..b/python2.7/threading.py.setprofile:88 64.746000 1

View File

@@ -1,22 +0,0 @@
./nilmtool.py destroy /bpnilm/2/raw
./nilmtool.py create /bpnilm/2/raw RawData
if false; then
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s 20110513-110000 -r 8000 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s 20110513-120001 -r 8000 /bpnilm/2/raw
else
# 170 hours, about 98 gigs uncompressed:
for i in $(seq 2000 2016); do
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-010001 -r 8000 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-020002 -r 8000 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-030003 -r 8000 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-040004 -r 8000 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-050005 -r 8000 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-060006 -r 8000 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-070007 -r 8000 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-080008 -r 8000 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-090009 -r 8000 /bpnilm/2/raw
time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s ${i}0101-100010 -r 8000 /bpnilm/2/raw
done
fi

versioneer.py (new file, 655 lines)
View File

@@ -0,0 +1,655 @@
#! /usr/bin/python
"""versioneer.py
(like a rocketeer, but for versions)
* https://github.com/warner/python-versioneer
* Brian Warner
* License: Public Domain
* Version: 0.7+
This file helps distutils-based projects manage their version number by just
creating version-control tags.
For developers who work from a VCS-generated tree (e.g. 'git clone' etc),
each 'setup.py version', 'setup.py build', 'setup.py sdist' will compute a
version number by asking your version-control tool about the current
checkout. The version number will be written into a generated _version.py
file of your choosing, where it can be included by your __init__.py
For users who work from a VCS-generated tarball (e.g. 'git archive'), it will
compute a version number by looking at the name of the directory created when
the tarball is unpacked. This conventionally includes both the name of the
project and a version number.
For users who work from a tarball built by 'setup.py sdist', it will get a
version number from a previously-generated _version.py file.
As a result, loading code directly from the source tree will not result in a
real version. If you want real versions from VCS trees (where you frequently
update from the upstream repository, or do new development), you will need to
do a 'setup.py version' after each update, and load code from the build/
directory.
You need to provide this code with a few configuration values:
versionfile_source:
A project-relative pathname into which the generated version strings
should be written. This is usually a _version.py next to your project's
main __init__.py file. If your project uses src/myproject/__init__.py,
this should be 'src/myproject/_version.py'. This file should be checked
in to your VCS as usual: the copy created below by 'setup.py
update_files' will include code that parses expanded VCS keywords in
generated tarballs. The 'build' and 'sdist' commands will replace it with
a copy that has just the calculated version string.
versionfile_build:
Like versionfile_source, but relative to the build directory instead of
the source directory. These will differ when your setup.py uses
'package_dir='. If you have package_dir={'myproject': 'src/myproject'},
then you will probably have versionfile_build='myproject/_version.py' and
versionfile_source='src/myproject/_version.py'.
tag_prefix: a string, like 'PROJECTNAME-', which appears at the start of all
VCS tags. If your tags look like 'myproject-1.2.0', then you
should use tag_prefix='myproject-'. If you use unprefixed tags
like '1.2.0', this should be an empty string.
parentdir_prefix: a string, frequently the same as tag_prefix, which
appears at the start of all unpacked tarball filenames. If
your tarball unpacks into 'myproject-1.2.0', this should
be 'myproject-'.
To use it:
1: include this file in the top level of your project
2: make the following changes to the top of your setup.py:
import versioneer
versioneer.versionfile_source = 'src/myproject/_version.py'
versioneer.versionfile_build = 'myproject/_version.py'
versioneer.tag_prefix = '' # tags are like 1.2.0
versioneer.parentdir_prefix = 'myproject-' # dirname like 'myproject-1.2.0'
3: add the following arguments to the setup() call in your setup.py:
version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(),
4: run 'setup.py update_files', which will create _version.py, and will
append the following to your __init__.py:
from _version import __version__
5: modify your MANIFEST.in to include versioneer.py
6: add both versioneer.py and the generated _version.py to your VCS
"""
import os, sys, re
from distutils.core import Command
from distutils.command.sdist import sdist as _sdist
from distutils.command.build_py import build_py as _build_py
versionfile_source = None
versionfile_build = None
tag_prefix = None
parentdir_prefix = None
VCS = "git"
IN_LONG_VERSION_PY = False
LONG_VERSION_PY = '''
IN_LONG_VERSION_PY = True
# This file helps to compute a version number in source trees obtained from
# git-archive tarball (such as those provided by github's download-from-tag
# feature). Distribution tarballs (built by setup.py sdist) and build
# directories (produced by setup.py build) will contain a much shorter file
# that just contains the computed version number.
# This file is released into the public domain. Generated by
# versioneer-0.7+ (https://github.com/warner/python-versioneer)
# these strings will be replaced by git during git-archive
git_refnames = "%(DOLLAR)sFormat:%%d%(DOLLAR)s"
git_full = "%(DOLLAR)sFormat:%%H%(DOLLAR)s"
import subprocess
import sys
def run_command(args, cwd=None, verbose=False):
try:
# remember shell=False, so use git.cmd on windows, not just git
p = subprocess.Popen(args, stdout=subprocess.PIPE, cwd=cwd)
except EnvironmentError:
e = sys.exc_info()[1]
if verbose:
print("unable to run %%s" %% args[0])
print(e)
return None
stdout = p.communicate()[0].strip()
if sys.version >= '3':
stdout = stdout.decode()
if p.returncode != 0:
if verbose:
print("unable to run %%s (error)" %% args[0])
return None
return stdout
import sys
import re
import os.path
def get_expanded_variables(versionfile_source):
# the code embedded in _version.py can just fetch the value of these
# variables. When used from setup.py, we don't want to import
# _version.py, so we do it with a regexp instead. This function is not
# used from _version.py.
variables = {}
try:
for line in open(versionfile_source,"r").readlines():
if line.strip().startswith("git_refnames ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
variables["refnames"] = mo.group(1)
if line.strip().startswith("git_full ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
variables["full"] = mo.group(1)
except EnvironmentError:
pass
return variables
def versions_from_expanded_variables(variables, tag_prefix, verbose=False):
refnames = variables["refnames"].strip()
if refnames.startswith("$Format"):
if verbose:
print("variables are unexpanded, not using")
return {} # unexpanded, so not in an unpacked git-archive tarball
refs = set([r.strip() for r in refnames.strip("()").split(",")])
for ref in list(refs):
if not re.search(r'\d', ref):
if verbose:
print("discarding '%%s', no digits" %% ref)
refs.discard(ref)
# Assume all version tags have a digit. git's %%d expansion
# behaves like git log --decorate=short and strips out the
# refs/heads/ and refs/tags/ prefixes that would let us
# distinguish between branches and tags. By ignoring refnames
# without digits, we filter out many common branch names like
# "release" and "stabilization", as well as "HEAD" and "master".
if verbose:
print("remaining refs: %%s" %% ",".join(sorted(refs)))
for ref in sorted(refs):
# sorting will prefer e.g. "2.0" over "2.0rc1"
if ref.startswith(tag_prefix):
r = ref[len(tag_prefix):]
if verbose:
print("picking %%s" %% r)
return { "version": r,
"full": variables["full"].strip() }
# no suitable tags, so we use the full revision id
if verbose:
print("no suitable tags, using full revision id")
return { "version": variables["full"].strip(),
"full": variables["full"].strip() }
def versions_from_vcs(tag_prefix, versionfile_source, verbose=False):
# this runs 'git' from the root of the source tree. That either means
# someone ran a setup.py command (and this code is in versioneer.py, so
# IN_LONG_VERSION_PY=False, thus the containing directory is the root of
# the source tree), or someone ran a project-specific entry point (and
# this code is in _version.py, so IN_LONG_VERSION_PY=True, thus the
# containing directory is somewhere deeper in the source tree). This only
# gets called if the git-archive 'subst' variables were *not* expanded,
# and _version.py hasn't already been rewritten with a short version
# string, meaning we're inside a checked out source tree.
try:
here = os.path.abspath(__file__)
except NameError:
# some py2exe/bbfreeze/non-CPython implementations don't do __file__
return {} # not always correct
# versionfile_source is the relative path from the top of the source tree
# (where the .git directory might live) to this file. Invert this to find
# the root from __file__.
root = here
if IN_LONG_VERSION_PY:
for i in range(len(versionfile_source.split("/"))):
root = os.path.dirname(root)
else:
root = os.path.dirname(here)
if not os.path.exists(os.path.join(root, ".git")):
if verbose:
print("no .git in %%s" %% root)
return {}
GIT = "git"
if sys.platform == "win32":
GIT = "git.cmd"
stdout = run_command([GIT, "describe", "--tags", "--dirty", "--always"],
cwd=root)
if stdout is None:
return {}
if not stdout.startswith(tag_prefix):
if verbose:
print("tag '%%s' doesn't start with prefix '%%s'" %% (stdout, tag_prefix))
return {}
tag = stdout[len(tag_prefix):]
stdout = run_command([GIT, "rev-parse", "HEAD"], cwd=root)
if stdout is None:
return {}
full = stdout.strip()
if tag.endswith("-dirty"):
full += "-dirty"
return {"version": tag, "full": full}
def versions_from_parentdir(parentdir_prefix, versionfile_source, verbose=False):
if IN_LONG_VERSION_PY:
# We're running from _version.py. If it's from a source tree
# (execute-in-place), we can work upwards to find the root of the
# tree, and then check the parent directory for a version string. If
# it's in an installed application, there's no hope.
try:
here = os.path.abspath(__file__)
except NameError:
# py2exe/bbfreeze/non-CPython don't have __file__
return {} # without __file__, we have no hope
# versionfile_source is the relative path from the top of the source
# tree to _version.py. Invert this to find the root from __file__.
root = here
for i in range(len(versionfile_source.split("/"))):
root = os.path.dirname(root)
else:
# we're running from versioneer.py, which means we're running from
# the setup.py in a source tree. sys.argv[0] is setup.py in the root.
here = os.path.abspath(sys.argv[0])
root = os.path.dirname(here)
# Source tarballs conventionally unpack into a directory that includes
# both the project name and a version string.
dirname = os.path.basename(root)
if not dirname.startswith(parentdir_prefix):
if verbose:
print("guessing rootdir is '%%s', but '%%s' doesn't start with prefix '%%s'" %%
(root, dirname, parentdir_prefix))
return None
return {"version": dirname[len(parentdir_prefix):], "full": ""}
tag_prefix = "%(TAG_PREFIX)s"
parentdir_prefix = "%(PARENTDIR_PREFIX)s"
versionfile_source = "%(VERSIONFILE_SOURCE)s"
def get_versions(default={"version": "unknown", "full": ""}, verbose=False):
variables = { "refnames": git_refnames, "full": git_full }
ver = versions_from_expanded_variables(variables, tag_prefix, verbose)
if not ver:
ver = versions_from_vcs(tag_prefix, versionfile_source, verbose)
if not ver:
ver = versions_from_parentdir(parentdir_prefix, versionfile_source,
verbose)
if not ver:
ver = default
return ver
'''
import subprocess
import sys
def run_command(args, cwd=None, verbose=False):
try:
# remember shell=False, so use git.cmd on windows, not just git
p = subprocess.Popen(args, stdout=subprocess.PIPE, cwd=cwd)
except EnvironmentError:
e = sys.exc_info()[1]
if verbose:
print("unable to run %s" % args[0])
print(e)
return None
stdout = p.communicate()[0].strip()
if sys.version >= '3':
stdout = stdout.decode()
if p.returncode != 0:
if verbose:
print("unable to run %s (error)" % args[0])
return None
return stdout
import sys
import re
import os.path
def get_expanded_variables(versionfile_source):
# the code embedded in _version.py can just fetch the value of these
# variables. When used from setup.py, we don't want to import
# _version.py, so we do it with a regexp instead. This function is not
# used from _version.py.
variables = {}
try:
for line in open(versionfile_source,"r").readlines():
if line.strip().startswith("git_refnames ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
variables["refnames"] = mo.group(1)
if line.strip().startswith("git_full ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
variables["full"] = mo.group(1)
except EnvironmentError:
pass
return variables
def versions_from_expanded_variables(variables, tag_prefix, verbose=False):
refnames = variables["refnames"].strip()
if refnames.startswith("$Format"):
if verbose:
print("variables are unexpanded, not using")
return {} # unexpanded, so not in an unpacked git-archive tarball
refs = set([r.strip() for r in refnames.strip("()").split(",")])
for ref in list(refs):
if not re.search(r'\d', ref):
if verbose:
print("discarding '%s', no digits" % ref)
refs.discard(ref)
# Assume all version tags have a digit. git's %d expansion
# behaves like git log --decorate=short and strips out the
# refs/heads/ and refs/tags/ prefixes that would let us
# distinguish between branches and tags. By ignoring refnames
# without digits, we filter out many common branch names like
# "release" and "stabilization", as well as "HEAD" and "master".
if verbose:
print("remaining refs: %s" % ",".join(sorted(refs)))
for ref in sorted(refs):
# sorting will prefer e.g. "2.0" over "2.0rc1"
if ref.startswith(tag_prefix):
r = ref[len(tag_prefix):]
if verbose:
print("picking %s" % r)
return { "version": r,
"full": variables["full"].strip() }
# no suitable tags, so we use the full revision id
if verbose:
print("no suitable tags, using full revision id")
return { "version": variables["full"].strip(),
"full": variables["full"].strip() }
def versions_from_vcs(tag_prefix, versionfile_source, verbose=False):
# this runs 'git' from the root of the source tree. That either means
# someone ran a setup.py command (and this code is in versioneer.py, so
# IN_LONG_VERSION_PY=False, thus the containing directory is the root of
# the source tree), or someone ran a project-specific entry point (and
# this code is in _version.py, so IN_LONG_VERSION_PY=True, thus the
# containing directory is somewhere deeper in the source tree). This only
# gets called if the git-archive 'subst' variables were *not* expanded,
# and _version.py hasn't already been rewritten with a short version
# string, meaning we're inside a checked out source tree.
try:
here = os.path.abspath(__file__)
except NameError:
# some py2exe/bbfreeze/non-CPython implementations don't do __file__
return {} # not always correct
# versionfile_source is the relative path from the top of the source tree
# (where the .git directory might live) to this file. Invert this to find
# the root from __file__.
root = here
if IN_LONG_VERSION_PY:
for i in range(len(versionfile_source.split("/"))):
root = os.path.dirname(root)
else:
root = os.path.dirname(here)
if not os.path.exists(os.path.join(root, ".git")):
if verbose:
print("no .git in %s" % root)
return {}
GIT = "git"
if sys.platform == "win32":
GIT = "git.cmd"
stdout = run_command([GIT, "describe", "--tags", "--dirty", "--always"],
cwd=root)
if stdout is None:
return {}
if not stdout.startswith(tag_prefix):
if verbose:
print("tag '%s' doesn't start with prefix '%s'" % (stdout, tag_prefix))
return {}
tag = stdout[len(tag_prefix):]
stdout = run_command([GIT, "rev-parse", "HEAD"], cwd=root)
if stdout is None:
return {}
full = stdout.strip()
if tag.endswith("-dirty"):
full += "-dirty"
return {"version": tag, "full": full}
def versions_from_parentdir(parentdir_prefix, versionfile_source, verbose=False):
if IN_LONG_VERSION_PY:
# We're running from _version.py. If it's from a source tree
# (execute-in-place), we can work upwards to find the root of the
# tree, and then check the parent directory for a version string. If
# it's in an installed application, there's no hope.
try:
here = os.path.abspath(__file__)
except NameError:
# py2exe/bbfreeze/non-CPython don't have __file__
return {} # without __file__, we have no hope
# versionfile_source is the relative path from the top of the source
# tree to _version.py. Invert this to find the root from __file__.
root = here
for i in range(len(versionfile_source.split("/"))):
root = os.path.dirname(root)
else:
# we're running from versioneer.py, which means we're running from
# the setup.py in a source tree. sys.argv[0] is setup.py in the root.
here = os.path.abspath(sys.argv[0])
root = os.path.dirname(here)
# Source tarballs conventionally unpack into a directory that includes
# both the project name and a version string.
dirname = os.path.basename(root)
if not dirname.startswith(parentdir_prefix):
if verbose:
print("guessing rootdir is '%s', but '%s' doesn't start with prefix '%s'" %
(root, dirname, parentdir_prefix))
return None
return {"version": dirname[len(parentdir_prefix):], "full": ""}
import sys
def do_vcs_install(versionfile_source, ipy):
GIT = "git"
if sys.platform == "win32":
GIT = "git.cmd"
run_command([GIT, "add", "versioneer.py"])
run_command([GIT, "add", versionfile_source])
run_command([GIT, "add", ipy])
present = False
try:
f = open(".gitattributes", "r")
for line in f.readlines():
if line.strip().startswith(versionfile_source):
if "export-subst" in line.strip().split()[1:]:
present = True
f.close()
except EnvironmentError:
pass
if not present:
f = open(".gitattributes", "a+")
f.write("%s export-subst\n" % versionfile_source)
f.close()
run_command([GIT, "add", ".gitattributes"])
SHORT_VERSION_PY = """
# This file was generated by 'versioneer.py' (0.7+) from
# revision-control system data, or from the parent directory name of an
# unpacked source archive. Distribution tarballs contain a pre-generated copy
# of this file.
version_version = '%(version)s'
version_full = '%(full)s'
def get_versions(default={}, verbose=False):
return {'version': version_version, 'full': version_full}
"""
DEFAULT = {"version": "unknown", "full": "unknown"}
def versions_from_file(filename):
versions = {}
try:
f = open(filename)
except EnvironmentError:
return versions
for line in f.readlines():
mo = re.match("version_version = '([^']+)'", line)
if mo:
versions["version"] = mo.group(1)
mo = re.match("version_full = '([^']+)'", line)
if mo:
versions["full"] = mo.group(1)
return versions
def write_to_version_file(filename, versions):
f = open(filename, "w")
f.write(SHORT_VERSION_PY % versions)
f.close()
print("set %s to '%s'" % (filename, versions["version"]))
def get_best_versions(versionfile, tag_prefix, parentdir_prefix,
default=DEFAULT, verbose=False):
# returns dict with two keys: 'version' and 'full'
#
# extract version from first of _version.py, 'git describe', parentdir.
# This is meant to work for developers using a source checkout, for users
# of a tarball created by 'setup.py sdist', and for users of a
# tarball/zipball created by 'git archive' or github's download-from-tag
# feature.
variables = get_expanded_variables(versionfile_source)
if variables:
ver = versions_from_expanded_variables(variables, tag_prefix)
if ver:
if verbose: print("got version from expanded variable %s" % ver)
return ver
ver = versions_from_file(versionfile)
if ver:
if verbose: print("got version from file %s %s" % (versionfile, ver))
return ver
ver = versions_from_vcs(tag_prefix, versionfile_source, verbose)
if ver:
if verbose: print("got version from git %s" % ver)
return ver
ver = versions_from_parentdir(parentdir_prefix, versionfile_source, verbose)
if ver:
if verbose: print("got version from parentdir %s" % ver)
return ver
if verbose: print("got version from default %s" % ver)
return default
def get_versions(default=DEFAULT, verbose=False):
assert versionfile_source is not None, "please set versioneer.versionfile_source"
assert tag_prefix is not None, "please set versioneer.tag_prefix"
assert parentdir_prefix is not None, "please set versioneer.parentdir_prefix"
return get_best_versions(versionfile_source, tag_prefix, parentdir_prefix,
default=default, verbose=verbose)
def get_version(verbose=False):
return get_versions(verbose=verbose)["version"]
class cmd_version(Command):
description = "report generated version string"
user_options = []
boolean_options = []
def initialize_options(self):
pass
def finalize_options(self):
pass
def run(self):
ver = get_version(verbose=True)
print("Version is currently: %s" % ver)
class cmd_build_py(_build_py):
def run(self):
versions = get_versions(verbose=True)
_build_py.run(self)
# now locate _version.py in the new build/ directory and replace it
# with an updated value
target_versionfile = os.path.join(self.build_lib, versionfile_build)
print("UPDATING %s" % target_versionfile)
os.unlink(target_versionfile)
f = open(target_versionfile, "w")
f.write(SHORT_VERSION_PY % versions)
f.close()
class cmd_sdist(_sdist):
def run(self):
versions = get_versions(verbose=True)
self._versioneer_generated_versions = versions
# unless we update this, the command will keep using the old version
self.distribution.metadata.version = versions["version"]
return _sdist.run(self)
def make_release_tree(self, base_dir, files):
_sdist.make_release_tree(self, base_dir, files)
# now locate _version.py in the new base_dir directory (remembering
# that it may be a hardlink) and replace it with an updated value
target_versionfile = os.path.join(base_dir, versionfile_source)
print("UPDATING %s" % target_versionfile)
os.unlink(target_versionfile)
f = open(target_versionfile, "w")
f.write(SHORT_VERSION_PY % self._versioneer_generated_versions)
f.close()
INIT_PY_SNIPPET = """
from ._version import get_versions
__version__ = get_versions()['version']
del get_versions
"""
class cmd_update_files(Command):
description = "modify __init__.py and create _version.py"
user_options = []
boolean_options = []
def initialize_options(self):
pass
def finalize_options(self):
pass
def run(self):
ipy = os.path.join(os.path.dirname(versionfile_source), "__init__.py")
print(" creating %s" % versionfile_source)
f = open(versionfile_source, "w")
f.write(LONG_VERSION_PY % {"DOLLAR": "$",
"TAG_PREFIX": tag_prefix,
"PARENTDIR_PREFIX": parentdir_prefix,
"VERSIONFILE_SOURCE": versionfile_source,
})
f.close()
try:
old = open(ipy, "r").read()
except EnvironmentError:
old = ""
if INIT_PY_SNIPPET not in old:
print(" appending to %s" % ipy)
f = open(ipy, "a")
f.write(INIT_PY_SNIPPET)
f.close()
else:
print(" %s unmodified" % ipy)
do_vcs_install(versionfile_source, ipy)
def get_cmdclass():
return {'version': cmd_version,
'update_files': cmd_update_files,
'build_py': cmd_build_py,
'sdist': cmd_sdist,
}
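For context, a minimal setup.py wired up the way the docstring at the top of this file describes might look roughly like the sketch below. The 'myproject' paths, empty tag prefix, and 'myproject-' parentdir prefix are taken from the docstring's own example and are assumptions, not this repository's actual configuration.

# setup.py -- illustrative sketch of the versioneer integration described above
from distutils.core import setup
import versioneer

versioneer.versionfile_source = 'src/myproject/_version.py'   # checked-in template
versioneer.versionfile_build = 'myproject/_version.py'        # copy placed in build/
versioneer.tag_prefix = ''                  # tags look like '1.2.0'
versioneer.parentdir_prefix = 'myproject-'  # tarballs unpack to 'myproject-1.2.0'

setup(name='myproject',
      version=versioneer.get_version(),     # from keywords, _version.py, git, or dirname
      cmdclass=versioneer.get_cmdclass(),   # adds version/update_files/build_py/sdist
      packages=['myproject'],
      package_dir={'myproject': 'src/myproject'})

Running 'setup.py update_files' once then creates _version.py and appends the __version__ snippet to __init__.py, and 'setup.py version' reports the computed version string.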