Hi Ronni
How do you mean to do it?
Shannon entropy is not an independent measurement--the information in an observation is relative to the distribution of all its possible values.
If I just take one sample and it's evenly distributed between -0.98 and 1, quantized in 0.02 increments (to make the math easier), then there are 100 possible values, each with probability 0.01. The information in any observed value is -log2(0.01), about 6.64 bits; equivalently, the entropy is the sum of -0.01*log2(0.01) over all 100 values, which comes to the same number.
Then--if I had a signal that's N samples long, I have N times as much information (assuming the samples are independent). Or perhaps think of it as a rate of information.
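Just to make that concrete, here's a rough sketch of the arithmetic in Python (not a Pd patch); the only assumptions are the 100-value example above and a made-up signal length N:

import math

p = 0.01                             # probability of each of the 100 quantized values
info_per_sample = -math.log2(p)      # self-information of any one observed value, ~6.64 bits
entropy = 100 * (-p * math.log2(p))  # H = -sum(p * log2(p)), same ~6.64 bits for a uniform distribution
N = 1000                             # hypothetical number of samples
print(info_per_sample, entropy, N * entropy)  # N independent samples carry N * H bits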
But for real numbers and continuous distributions this doesn't work: the information in a single observation diverges, so doing that directly with floating-point numbers is not practical.
You often see Shannon entropy used to describe digital signals. If the signal just switches between 0 and 1, we can build a distribution of the data and estimate the probability of each value empirically. The entropy of each new sample is relative to that distribution. Then, if you know the maximum rate of switching, you can figure out the maximum rate of information in the signal.
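If you want to try that outside of Pd first, a quick sketch of the empirical estimate in Python might look like this (the signal here is just made up; in a patch you could accumulate the same counts however is convenient):

import math
from collections import Counter

signal = [0, 1, 1, 0, 1, 0, 0, 1, 1, 1]   # hypothetical binary samples
counts = Counter(signal)
n = len(signal)

entropy = 0.0
for value, count in counts.items():
    p = count / n                 # empirical probability of this value
    entropy -= p * math.log2(p)   # contribution to H, in bits per sample

print(entropy)   # bits per sample; multiply by the switching rate to get bits per second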
Just a few thoughts...
Chuck
On Tue, Feb 26, 2013 at 6:09 AM, ronni montoya ronni.montoya@gmail.com wrote:
Hi, I was wondering if anybody has implemented the Shannon entropy function in Pd?
Has anybody tried measuring the entropy of a signal?
cheers
R.