`git-svn-id: https://bucket.mit.edu/svn/nilm/zoom@8679 ddd99763-3ecb-0310-9145-efcb8ce7c51f`
tags/zoom-1.0
jim 12 years ago
parent commit 06efd7b5c2
3 changed files with 87437 additions and 0 deletions
1. +15 −0 pc/data/20100708-dactest/README.txt
2. +87389 −0 pc/data/20100708-dactest/dactest.log
3. +33 −0 pc/data/20100708-dactest/process.m

+ 15 - 0 pc/data/20100708-dactest/README.txt

 @@ -0,0 +1,15 @@
Testing the stability of the low bits of the DAC.

We have a 10-bit DAC that we are commanding with 10 bits, and want to find
out the precision of each command, i.e. to what precision we can predict the
actual output of the DAC for each command.

Test setup: 10-bit DAC, scaled to about ±5V, measured as a voltage by the
Keithley 2002.

We ran dactest.c for about 16 hours to write random DAC values and accurately
measure their output. process.m calculates, for each of the 1024 commanded
values, the average, min, max, and standard deviation of the measured
voltages.
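The per-command statistics the README describes could be sketched as follows. This is a hypothetical Python equivalent, not part of the repository (which uses Octave); it assumes each log row carries (timestamp, commanded value, measured volts), the column layout process.m reads, and the function name `per_command_stats` is invented for illustration.

```python
from collections import defaultdict
from math import sqrt

def per_command_stats(rows):
    """rows: iterable of (timestamp, command, volts) tuples.
    Returns {command: (mean, min, max, std)} over the measured voltages."""
    groups = defaultdict(list)
    for _, cmd, volts in rows:
        groups[cmd].append(volts)
    stats = {}
    for cmd, v in groups.items():
        mean = sum(v) / len(v)
        # Sample standard deviation, matching Octave's std() default.
        var = sum((x - mean) ** 2 for x in v) / (len(v) - 1) if len(v) > 1 else 0.0
        stats[cmd] = (mean, min(v), max(v), sqrt(var))
    return stats

# Two fake measurements for command 512:
demo = [(0.0, 512, 2.001), (1.0, 512, 1.999)]
print(per_command_stats(demo)[512])  # mean ≈ 2.0, min 1.999, max 2.001
```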

+ 33 - 0 pc/data/20100708-dactest/process.m

 @@ -0,0 +1,33 @@
a = load("dactest.log");

for i = 0:0x3ff;
  amin(i+1)  = min(a(find(a(:,2) == i),3));
  amax(i+1)  = max(a(find(a(:,2) == i),3));
  amean(i+1) = mean(a(find(a(:,2) == i),3));
  astd(i+1)  = std(a(find(a(:,2) == i),3));
end

# First line is the maximum deviation that we saw
# (ie. 14 bits means "1 part in 2^14, over the full 10V scale")
# Second line is 3 standard deviations, so 99.9% for a normal distribution.
t = 1:1024;
figure(1);
plot(t, log2(10./(amax(t)-amin(t))), ';Max;', ...
     t, log2(10./(3*astd(t))), ';99.9%;');
xlabel("DAC command")
ylabel("Deviation from mean (bits)");

figure(2);
p = polyfit(t, amean(t), 1)
plot(t, log2(10 ./ ((t * p(1) + p(2)) - amean(t))));
xlabel("DAC command")
ylabel("Average error from linear (bits)");

if input('Print? [y/N] ', 's') == 'y'
  disp('Printing');
  figure(1); print('-landscape');
  figure(2); print('-landscape');
end
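The log2(10/deviation) expressions in process.m convert a voltage spread into "bits of precision" over the 10 V full scale: a deviation of 1 part in 2^14 reads as 14 bits. A minimal Python sketch of that conversion (the function name is invented for illustration):

```python
from math import log2

def bits_of_precision(full_scale_volts, deviation_volts):
    """How many bits the full scale resolves to, given an observed deviation.
    E.g. a deviation of 1/2^14 of full scale corresponds to 14 bits."""
    return log2(full_scale_volts / deviation_volts)

print(bits_of_precision(10.0, 10.0 / 2**14))  # 14.0
```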