Is there an object to compute the derivative function of a signal?
I don't know if derivative is the right term... I mean:
f(x) = x      =>  f'(x) = 1
f(x) = ln(x)  =>  f'(x) = 1/x
f(x) = sin(x) =>  f'(x) = cos(x), ...
And one more question: does this have anything to do with complex signals or Fourier?
thanks
Federico
hmmm....simple way, use biquad~, you can put simple integrators/differentiators in there
or make a more complicated differentiator using z~ objects and *~ and +~ (or more than one biquad)
Your basic numerical differentiator is called a first forward difference:
f'(n) = ( f(n+1) - f(n) ) / delta-T
The next one is the central divided difference:
f'(n) = ( f(n+1) - f(n-1) ) / (2*delta-T)
There are others... and it all comes down to the truncation of a Taylor series anyway:
f(t) = f(0) + f'(0)*t + f''(0)/2! * t^2 + ...
and let's suppose we're approximating f'(0)
f'(0) = (f(t) - f(0) - f''(0)/2! * t^2 - ... ) / t
Those ...'s are where you need to supply numerical 3rd, 4th, 5th, and higher derivatives to improve the precision of your differentiator. Numerical integration has more precise methods available; differentiation always remains a tricky numerical problem.
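A rough numerical check of those two differences, sketched in Python/numpy rather than Pd (the sample rate and the 50 Hz test tone are arbitrary choices):

import numpy as np

fs = 1000.0                                  # sample rate, i.e. 1/delta-T (arbitrary)
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)               # f(n): a 50 Hz test sine
true = 2 * np.pi * 50 * np.cos(2 * np.pi * 50 * t)   # exact derivative

fwd = (x[1:] - x[:-1]) * fs                  # first forward difference
ctr = (x[2:] - x[:-2]) * fs / 2              # central divided difference

print(np.max(np.abs(fwd - true[:-1])))       # error of order delta-T
print(np.max(np.abs(ctr - true[1:-1])))      # error of order delta-T^2, much smaller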
Doesn't have much to do with Fourier transforms or complex signals. Chuck
I forgot to send this here to the list _and_ Federico (again, damn)!
Federico wrote:
is there an object to compute the derivative function of a signal?
I don't know if derivative is the right term... I mean: f(x)=x => f'(x)=1, f(x)=ln(x) => f'(x)=1/x, f(x)=sin(x) => f'(x)=cos(x)
I don't know of such an object, but maybe this will help you a little bit: because you're dealing with finite, discrete signals, the derivative of a signal becomes a difference between two consecutive samples. You can easily implement it using [z~] (from zexy, I think) and then subtract the delayed sequence from the original:
y[n]= x[n]-x[n-1]
and one more question: does this have anything to do with complex signals
I don't think so.
or Fourier?
It can, if you wish: the derivative corresponds to a multiplication with a ramp in the frequency domain. So, theoretically, you can transform your signal to the frequency domain, multiply it by a ramp (watch out for the 2-sided spectrum), and then, back in the time domain, you should have something like a "derivative". I have no idea if this works in practice :-) Computing the group delay (= the negative derivative of the phase) works using this algorithm, compare http://ccrma.stanford.edu/~jos/filters/Numerical_Computation_Group_Delay.htm...
br, Piotr
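A minimal numpy sketch of that frequency-domain route (not a Pd patch; the block length and the 5-cycles-per-block test sine are arbitrary, and the one-sided rfft takes care of the 2-sided-spectrum bookkeeping):

import numpy as np

N = 1024
n = np.arange(N)
x = np.sin(2 * np.pi * 5 * n / N)            # exactly 5 cycles per block

X = np.fft.rfft(x)                           # one-sided spectrum of the real signal
ramp = 2j * np.pi * np.fft.rfftfreq(N)       # j*omega per bin (radians per sample)
dx = np.fft.irfft(ramp * X)                  # multiply by the ramp, back to the time domain

true = (2 * np.pi * 5 / N) * np.cos(2 * np.pi * 5 * n / N)   # derivative w.r.t. sample index
print(np.max(np.abs(dx - true)))             # essentially machine precision here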
Piotr Majdak wrote:
I don't know of such an object, but maybe this will help you a little bit: because you're dealing with finite, discrete signals, the derivative of a signal becomes a difference between two consecutive samples. You can easily implement it using [z~] (from zexy, I think) and then subtract the delayed sequence from the original:
y[n]= x[n]-x[n-1]
hmm... I tried this with [z~] and [fexpr~], getting the same result with both...
I wonder if there are errors in this way of computing the derivative...
I am looking at my math notebook, where I read that the derivative is the "limit of the incremental ratio" (the difference quotient) of a function, where h is the increment and f(x) is the function, and I have this formula:
y' = lim {h -> 0} ( f(x+h) - f(x) ) / h
translating this into [fexpr~], I do:
[osc~]   [float (h)]
 |        |
[fexpr~ ($x1-$x1[-$f2])/$f2]
recalling the rules above, for f(x)=sin(x) I should get f'(x)=cos(x); however, its amplitude drops as the frequency drops... and the phase offset of [osc~]' is 180°, not 90°... what's the problem? isn't h close enough to zero?
is there an "ideal" differentiator? or am I saying something totally wrong?
Federico
I believe h should be in units of radians, which depends on the frequency. As you change the frequency, instead of computing the derivative of sin(x), you compute the derivative of sin(n*x). The slope isn't independent of the x axis, i.e. the time axis.
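In other words: at frequency f, [osc~] gives sin(2*pi*f*t), and d/dt sin(2*pi*f*t) = 2*pi*f*cos(2*pi*f*t), so the amplitude of the derivative is proportional to f rather than 1. A tiny Python check (fs and the two test frequencies are arbitrary):

import numpy as np

fs = 44100.0
t = np.arange(0, 1, 1 / fs)

for f in (100.0, 1000.0):
    x = np.sin(2 * np.pi * f * t)
    dx = (x[1:] - x[:-1]) * fs               # first difference, scaled to "per second"
    print(f, dx.max(), 2 * np.pi * f)        # the peak tracks 2*pi*f, not 1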
y[n]= x[n]-x[n-1]
hmm... I tried this with [z~] and [fexpr~], getting the same result with both...
I wonder if there are errors in this way of computing the derivative...
yes, this is THE worst derivative approximation. No other derivative approximation has worse error terms.
All of your derivative approximations can be written as FIR filters (integrators require IIR filtering). We construct our approximate derivatives using convolution.
The simplest (and worst) is
y[n] = (x[n] - x[n-1]) / delta-x
(obtained by truncating the Taylor series after the first-derivative term). The next, better one is
y[n] = (x[n+1] - x[n-1]) / (2*delta-x)
(obtained by truncating the Taylor series after the second-derivative term). These two are the obvious choices: very simple, low latency. In the second one we have to know one sample ahead before calculating the derivative, so it has one sample of latency.
As convolution kernels (each listed in reverse order; fs is the sampling frequency):
first forward difference: (1 -1 0) * fs
second derivative approximation: (1 -2 1) * fs^2
central divided difference: (.5 0 -.5) * fs
third derivative (convolution of the 1st- and 2nd-derivative kernels): (.5 -1 0 1 -.5) * fs^3
So an improved derivative approximation can be obtained from a Taylor series expansion:
f'(0) = (f(t) - f(0) - f''(0)/2! * t^2 - f'''(0)/3! * t^3 - ... ) / t
(with 1/t = fs)
f'(n) = ( f(n+1) - f(n) - (f(n+1) - 2f(n) + f(n-1))/2 - (.5f(n+2) - f(n+1) + f(n-1) - .5f(n-2))/6 ) * fs
and it works out to be (-1/12 2/3 0 -2/3 1/12), which has better error characteristics than the derivatives mentioned before.
consult a numerical analysis textbook; numerical derivatives are different from your typical calculus definitions
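For what it's worth, here is what those kernels do to a test sine in numpy (illustrative only; the kernels are written in the same reverse order as above, which is also the tap order that works directly with np.convolve in mode='same'):

import numpy as np

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 40 * t)                       # arbitrary 40 Hz test sine
true = 2 * np.pi * 40 * np.cos(2 * np.pi * 40 * t)   # exact derivative

kernels = {
    "first forward difference":   np.array([1.0, -1.0, 0.0]) * fs,
    "central divided difference": np.array([0.5, 0.0, -0.5]) * fs,
    "improved 5-point":           np.array([-1/12, 2/3, 0.0, -2/3, 1/12]) * fs,
}
for name, k in kernels.items():
    d = np.convolve(x, k, mode="same")
    err = np.max(np.abs(d - true)[5:-5])             # ignore the block edges
    print(name, err)                                 # the error shrinks as the stencil improves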
recalling the rules above, for f(x)=sin(x) I should get f'(x)=cos(x); however, its amplitude drops as the frequency drops... and the phase offset of [osc~]' is 180°, not 90°... what's the problem? isn't h close enough to zero?
is there an "ideal" differentiator? or am I saying something totally wrong?
Nope, there's not an "ideal" differentiator. You should be seeing the correct behavior of your differentiator. Amplitude goes to zero as frequency goes to zero, under differentiation. Also, the phase shift for ALL frequencies is the same, 90 degrees.
Chuck
is there an "ideal" derivator? or I am say something totally wrong?
Nope, there's not an "ideal" differentiator.
I take that back....I wrote too hastily. There is an ideal differentiator, related to the ideal interpolator.
For ideal interpolation, we have to have an infinitely long signal. We have a function from the set of real numbers to the set of real numbers, for instance. The Whittaker cardinal function is the ideal interpolator. If we have a signal that is band-limited in the frequency domain, we can choose a sinc(k*t), for some k, that contains the frequency bands we have in our function, where
sinc(x) = sin(pi*x)/(pi*x)
For simplicity's sake, we'll assume that our frequency spectrum is limited to (-1/2, 1/2)... Then we choose k=1 (this relates to the sampling theorem; just replace k with fs, and (-1/2, 1/2) with (-fs/2, fs/2)).
And we sample our function at the integers: -inf, ..., -2, -1, 0, 1, 2, ..., inf. The Whittaker cardinal function is
f(t) = sum( i = -inf to inf, f(i)*sinc(i - t) )
Also, this can be written as a convolution f(t) = sinc(t) -conv-with- sum( i= -inf to inf, f(i)*delta(t-i))
and the result is *exactly* the function we started with! And we can differentiate this function:
d/dt ( sin(pi*t) / (pi*t) ) = ( (pi*t)*cos(pi*t) - sin(pi*t) ) / (pi*t^2)
and, when we take this function and convolve it with our sampled function values, we get the derivative of the sampled function. There is a problem here, namely that the sequence we need to convolve with is infinitely long... but we can truncate the series to as many samples as we need. For example, a length-11 sequence is:
(-1/5 1/4 -1/3 1/2 -1 0 1 -1/2 1/3 -1/4 1/5) * fs
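A small numpy sketch of that truncation, building the taps straight from the formula (the derivative of sinc at a nonzero integer m works out to (-1)^m / m, and 0 at m = 0), so the ordering convention stays explicit. Note that chopping the ideal kernel off abruptly leaves a lot of ripple, so in practice the taps are usually shaped with a window first:

import numpy as np

M = 5                                        # half-length: 11 taps in total
taps = np.zeros(2 * M + 1)
for i, m in enumerate(range(-M, M + 1)):
    if m != 0:
        taps[i] = (-1.0) ** m / m            # this tap multiplies x[n - m]

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 40 * t)
true = 2 * np.pi * 40 * np.cos(2 * np.pi * 40 * t)

d = np.convolve(x, taps, mode="same") * fs
print(np.max(np.abs(d - true)[M:-M]))        # large: the bare (rectangular) truncation rings badly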
I'm not sure if I've done something wrong here, yet. Anyway, all of your great mathematicians just made it up as they went along, right?
Chuck
On Mon, 26 Jun 2006, Federico wrote:
is there an object to compute the derivative function of a signal? f(x)=x => f'(x)=1
[rzero~ 1] times a constant.
and one more question: does this have anything to do with complex signals or Fourier?
Fourier(f')(w) = i*w*Fourier(f)(w)
so in the frequency domain, a derivative can be achieved by multiplying the spectrum by an imaginary [phasor~] tuned to fit the blocksize, times another constant.
Mathieu Bouchard - tél: +1.514.383.3801 - http://artengine.ca/matju
Freelance Digital Arts Engineer, Montréal QC Canada