Hi again Al,
Unwanted noise should be eliminated long before it gets quantized. With a clean signal, using the MEDIAN does not seem any more valid to me than using the MEAN - quite the opposite.
This is not a statistical problem, but rather an encoder accuracy issue.
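For anyone who wants to try both reductions on real captures, here is a minimal C sketch of what I mean. The function names and the 16-sample buffer depth are my own assumptions, not anything from your setup:

#include <stdint.h>
#include <stdlib.h>

#define N_SAMPLES 16  /* assumed oversampling depth */

/* qsort comparator for 16-bit ADC readings */
static int cmp_u16(const void *a, const void *b)
{
    uint16_t x = *(const uint16_t *)a;
    uint16_t y = *(const uint16_t *)b;
    return (x > y) - (x < y);
}

/* Arithmetic mean of the buffer, rounded to the nearest count. */
uint16_t adc_mean(const uint16_t *buf, size_t n)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += buf[i];
    return (uint16_t)((sum + n / 2) / n);
}

/* Median: sort a scratch copy in place, take the middle element.
   This rejects the occasional wild outlier that the mean lets through. */
uint16_t adc_median(uint16_t *scratch, size_t n)
{
    qsort(scratch, n, sizeof scratch[0], cmp_u16);
    return scratch[n / 2];
}

On a clean signal the two agree to within a count or so; they only diverge under impulsive noise, which is exactly what I'd rather kill ahead of the converter.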
I guess I'm missing something, because I'm not sure how your reference to the outdated Nyquist rate is relevant. The famous Nyquist/Shannon sampling theorem comes out of information theory and says you need to sample at least twice as fast as the highest frequency in the signal you are looking at. I say outdated because far more efficient methods are in use today to cram more information into less bandwidth. When we started digital telephony, it took 8 kHz sampling to carry 4 kHz of bandwidth (the Nyquist rate). Now my cell phone and SIP phone both take considerably less than half of that, thanks to newer coding methods.
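For concreteness, here's the arithmetic behind those telephony numbers as a trivial C sketch. The 8-bit sample size is the standard G.711 PCM figure, not something from your post:

#include <stdio.h>

int main(void)
{
    double bandwidth_hz   = 4000.0;               /* nominal telephone channel */
    double nyquist_rate   = 2.0 * bandwidth_hz;   /* minimum sample rate */
    double bits_per_sample = 8.0;                 /* G.711 mu-law/A-law PCM */
    double pcm_bitrate    = nyquist_rate * bits_per_sample;

    printf("Nyquist rate: %.0f samples/s\n", nyquist_rate); /* 8000 */
    printf("PCM bit rate: %.0f bit/s\n", pcm_bitrate);      /* 64000 */
    return 0;
}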
At any rate, you do put forward an interesting idea, and it may well be better in a noisy environment. It would be great to run a large collection of use cases to determine which approach wins in which situation. Still, I think I'll rely on an analog filter, oversampling, averaging, and then dropping bits if I'm worried about least-significant-digit stability.
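Here's a minimal sketch of that last chain in C, assuming a 12-bit converter behind an analog anti-alias filter. The hypothetical adc_read() and the 16x oversampling depth are my assumptions, not anything you wrote:

#include <stdint.h>

extern uint16_t adc_read(void);  /* hypothetical read of a 12-bit ADC */

/* Oversample 16x, average, then discard the two noisiest LSBs so the
   reported value stays stable in its last displayed digit. */
uint16_t stable_reading(void)
{
    uint32_t sum = 0;
    for (int i = 0; i < 16; i++)
        sum += adc_read();
    uint16_t avg = (uint16_t)(sum >> 4);  /* divide by 16: the average */
    return avg >> 2;                      /* drop 2 LSBs; 10 stable bits out */
}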