Hello
A common delay problem. Suppose you have a note that lasts about one second, and you are processing it through a delay with feedback. The problems come when you use short delay times (about 50-100 ms) with high feedback (about 80-90 %). As the note is superposed (mixed) with itself many times, it produces classic digital clipping distortion. [Actually, even if the note repeats only once, you may get distortion as well if the delay time is shorter than the duration of the note.] Is there a common method for controlling the occurrence of this clipping (besides using [clip~]), or do I just have to tune the values empirically?
thanks,
-j
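To put numbers on the buildup described above: with feedback g, each pass around the loop adds another copy of the note scaled by a further factor of g, so the overlapping copies sum toward 1/(1-g) times the input peak. A minimal sketch in Python rather than Pd (the 1 s / 75 ms / 85 % figures are just example values picked from the ranges above):

import numpy as np

sr = 44100
note = np.ones(sr)            # worst case: a 1 s note at full scale
delay = int(0.075 * sr)       # 75 ms delay line
fb = 0.85                     # 85 % feedback

out = np.zeros(len(note) + 20 * delay)
out[:len(note)] = note
for i in range(delay, len(out)):
    out[i] += fb * out[i - delay]   # y[n] = x[n] + fb * y[n - delay]

print(out.max())     # ~6x full scale: anything past 1.0 clips
print(1 / (1 - fb))  # geometric worst-case bound: ~6.7x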
assuming you still want to keep all of the properties of the effect except for the clipping, simply start with a quieter sample. this will give you more "headroom" for the inevitable amplification that will come from a high feedback short delay.
pix.
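To quantify the headroom pix is talking about: the overlapping copies can sum to at most 1/(1-g) times the input peak, so attenuating the input by (1-g) is one safe choice. A sketch under the same example values as above:

import numpy as np

fb = 0.85
note = (1 - fb) * np.ones(44100)   # the quieter sample: scaled by (1 - fb)

delay = int(0.075 * 44100)
out = np.zeros(len(note) + 20 * delay)
out[:len(note)] = note
for i in range(delay, len(out)):
    out[i] += fb * out[i - delay]

print(out.max())   # ~0.9, always <= 1.0: same effect, no clipping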
Selon pix pix@test.at:
> simply start with a quieter sample. this will give you more "headroom"
> for the inevitable amplification [...]
that's a good solution, but I forgot to say this will be done in real time with an instrumentalist, so it may be difficult to control ... the only thing I can do is to decrease the microphone input level at those moments
Julien,
julien.breval@tremplin-utc.net wrote:
> [...] the only thing I can do is to decrease the microphone input level
> at those moments
You could use the [limiter~] from Zexy (I think Unauthorized also has one) and set the limit low enough to prevent your peaks from getting too high.
I use a lot of comb filters, where the same input is recycled at a very fast rate (which of course gives the characteristic frequency response of the comb filter), but my inputs tend to be pretty low anyway. Still, I have had situations where the whole thing "blows up" because of too high an input level combined with too short a delay, so I know what you mean. So pix is right: catch the level before it gets to the loop.
best, d.
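The comb filter derek describes is the same y[n] = x[n] + g*y[n-D] recursion, and the "blows up" case is easy to demonstrate: the loop stays bounded only while |g| < 1, and even then its steady-state gain is 1/(1-g). A sketch (Python; the delay and lengths are made-up values):

import numpy as np

def comb_peak(g, D=220, n=44100):
    # recursive comb y[n] = x[n] + g*y[n-D], driven by an impulse every D samples
    y = np.zeros(n)
    y[::D] = 1.0
    for i in range(D, n):
        y[i] += g * y[i - D]
    return np.abs(y).max()

print(comb_peak(0.90))   # ~10, i.e. 1/(1-g): loud but bounded
print(comb_peak(1.00))   # ~200: grows without bound (linearly)
print(comb_peak(1.05))   # ~3.5e5 and climbing: exponential blow-up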
Selon derek holzer derek@x-i.net:
> You could use the [limiter~] from Zexy (I think Unauthorized also has
> one) and set the limit low enough to prevent your peaks from getting
> too high.
what about putting the [limiter~] after the delay (with the limit set to almost maximum)?
derek's solution of using [limiter~] is also good. i only just noticed it myself. i think it's in iemlib, by the way. you could tie the power of the delayed signal to the power of the unprocessed signal, but with a longer decay time so that you don't clip off all of the effect. there is information on how to do this in the "modes" help subpatch.
pix.
just a note to say that apart from the bit about limiter~ being good, the rest of this paragraph is crap. the stuff about comparing signals makes no sense in this context. i was deluded for a minute into thinking that limiter~ worked like the "balance" opcode in csound (whoa, flashback).
;)
pix.
On Fri, May 21, 2004 at 04:59:06PM +0200, pix wrote:
> derek's solution of using limiter~ is also good. [...] you could tie the
> power of the delayed signal to the power of the unprocessed signal, but
> with a longer decay time [...]
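For anyone who hasn't met it: csound's "balance" opcode rescales one signal so its RMS tracks a comparator signal's RMS, which (as pix says) is not what limiter~ does, and not what this situation needs. Just to make the reference concrete, a rough sketch of that idea (Python; the block size is arbitrary):

import numpy as np

def rms_env(x, win=1024):
    # trailing-window RMS envelope (csound's balance uses a smoother tracker)
    padded = np.pad(x * x, (win - 1, 0))
    return np.sqrt(np.convolve(padded, np.ones(win) / win, mode='valid'))

def balance(sig, comp, win=1024, eps=1e-9):
    # scale sig so its RMS follows comp's RMS
    return sig * rms_env(comp, win) / (rms_env(sig, win) + eps)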
Hallo, julien.breval@tremplin-utc.net wrote:
> that's a good solution, but I forgot to say this will be done in real
> time with an instrumentalist [...] the only thing I can do is to decrease
> the microphone input level at those moments
You could try a limiter or compressor. Zexy has a limiter I never understood (although I didn't try hard), and there are nice compressor plugins in the swh LADSPA collection, in case you're on Linux (or maybe OS X). Or build it yourself, probably with [env~] or similar.
Frank Barknecht _ ______footils.org__
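A rough sketch of the build-it-yourself route Frank mentions: follow the amplitude the way [env~] would, and pull the gain down whenever the envelope crosses a threshold. Python rather than Pd, and the threshold/release values are made up:

import numpy as np

def limiter(x, thresh=0.7, release=0.9995):
    # x: numpy array of samples in [-1, 1]
    # peak follower with instant attack and exponential release,
    # then a gain that keeps the envelope at or below thresh
    y = np.empty_like(x)
    env = 0.0
    for i, s in enumerate(x):
        env = max(abs(s), env * release)
        y[i] = s * (thresh / env) if env > thresh else s
    return y

# e.g. applied to the delay output: safe = limiter(delayed, thresh=0.9)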