|
|
@@ -45,7 +45,7 @@ transfer? |
|
|
|
of fixed-size chunks. |
|
|
|
- Even chunked encoding needs the size of each chunk beforehand, so |
|
|
|
the data still gets buffered, just one chunk at a time; it's a tradeoff of buffer size (see the sketch below).
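A minimal sketch of that per-chunk buffering on the sending side, using Python's http.client directly (the function name, method, and path here are illustrative, not part of nilmtool):

    import http.client

    def send_chunked(host, path, chunks):
        # Each chunk is fully assembled so its length can be written first;
        # only one chunk at a time (not the whole body) has to sit in memory.
        conn = http.client.HTTPConnection(host)
        conn.putrequest("PUT", path)
        conn.putheader("Transfer-Encoding", "chunked")
        conn.endheaders()
        for chunk in chunks:
            data = chunk if isinstance(chunk, bytes) else chunk.encode()
            conn.send(b"%x\r\n" % len(data) + data + b"\r\n")
        conn.send(b"0\r\n\r\n")    # zero-length chunk terminates the body
        return conn.getresponse()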
|
|
|
|
|
|
|
|
|
|
|
Before timestamps are added: |
|
|
|
- Raw data is about 440 kB/s (9 channels) |
|
|
|
- Prep data is about 12.5 kB/s (1 phase) |
|
|
@@ -60,7 +60,7 @@ Before timestamps are added: |
|
|
|
- If data > 1 MB, send it |
|
|
|
- If more than 10 seconds have elapsed, send it (see the sketch after this list)
|
|
|
- Should those numbers come from the server? |
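For scale: at ~440 kB/s of raw data the 1 MB limit trips roughly every 2.3 seconds, while at ~12.5 kB/s of prep data only about 125 kB accumulates before the 10-second timer fires, so the time limit dominates there. A rough sketch of such a size-or-age send buffer (class and parameter names are made up, not the actual client code):

    import time

    class SendBuffer:
        # Flush whenever the buffered data exceeds max_bytes or has been
        # sitting for more than max_age seconds, whichever happens first.
        def __init__(self, send, max_bytes=1024*1024, max_age=10.0):
            self.send = send          # callable that actually transmits data
            self.max_bytes = max_bytes
            self.max_age = max_age
            self.blocks, self.size, self.started = [], 0, None

        def add(self, block):
            if self.started is None:
                self.started = time.monotonic()
            self.blocks.append(block)
            self.size += len(block)
            if (self.size >= self.max_bytes
                    or time.monotonic() - self.started >= self.max_age):
                self.flush()

        def flush(self):
            if self.blocks:
                self.send(b"".join(self.blocks))
            self.blocks, self.size, self.started = [], 0, None

Note this only checks the timer when new data arrives; a real client would also need a periodic check in its main loop to flush a quiet stream.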
|
|
|
|
|
|
|
|
|
|
|
Converting from ASCII to PyTables: |
|
|
|
- For each row getting added, we need to set attributes on a PyTables |
|
|
|
Row object and call table.append(). This means that there isn't a |
|
|
@@ -73,7 +73,7 @@ Converting from ASCII to PyTables: |
|
|
|
- Client sends ASCII data |
|
|
|
- Server converts this ASCII data to a list of values
|
|
|
- Maybe: |
|
|
|
|
|
|
|
|
|
|
|
# threaded side creates this object |
|
|
|
parser = nilmdb.layout.Parser("layout_name") |
|
|
|
# threaded side parses and fills it with data |
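If the parser ends up holding plain Python tuples, the main thread's insert could then be a single bulk call; Table.append() accepts a whole sequence of rows, which avoids touching a Row object once per line (sketch only, with PyTables 3.x method names and an assumed row format):

    import tables

    def insert_rows(h5file, table_path, rows):
        # rows: list of tuples already produced by the threaded parser,
        # e.g. [(timestamp, ch1, ch2, ...), ...] matching the table layout
        table = h5file.get_node(table_path)
        table.append(rows)      # one bulk append instead of one Row per line
        table.flush()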
|
|
@@ -138,6 +138,18 @@ Speed |
|
|
|
user 2m23.841s |
|
|
|
sys 0m6.928s |
|
|
|
|
|
|
|
- Fourth run |
|
|
|
|
|
|
|
$ time zcat /home/jim/bpnilm-data/snapshot-1-20110513-110002.raw.gz | ./nilmtool.py insert -s 20110513-140003 /bpnilm/1/raw |
|
|
|
Input file: - |
|
|
|
|
|
|
|
Timestamper: TimestamperRate(..., start="Fri, 13 May 2011 14:00:03 EDT", rate=8000) |
|
|
|
real 166m53.007s |
|
|
|
user 2m22.037s |
|
|
|
sys 0m7.688s |
|
|
|
|
|
|
|
- This is bad; it must be slowing down due to pytables.
|
|
|
|
|
|
|
|
|
|
- Database also seems to be ballooning -- something like 1.1G after |
|
|
|
the first 3 inserts, 2.3 G afterwards |
|
|
|
- Maybe due to indexing; need to see if disabling that helps (a quick check is sketched below). Otherwise,
|
|
|
ditch pytables. |
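A quick way to see whether indexes are what's eating the space, before deciding to ditch pytables (the file path is a guess; method names are the PyTables 3.x spellings, the 2.x ones are camelCase):

    import tables

    with tables.open_file("nilmdb.h5", "r") as f:
        for table in f.walk_nodes("/", "Table"):
            indexed = [name for name in table.colnames
                       if table.cols._f_col(name).is_indexed]
            print(table._v_pathname, table.nrows, "rows,",
                  "indexed columns:", indexed or "none")

If indexes are the culprit, Column.remove_index() drops them, or the table can simply be created without ever calling create_index().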