Quoting pix pix@test.at:
Assuming you still want to keep all of the properties of the effect except for the clipping, simply start with a quieter sample. This will give you more "headroom" for the inevitable amplification that comes from a short delay with high feedback.
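To put a number on that headroom: with feedback f, the overlapping repeats form a geometric series, so the worst-case buildup approaches 1 + f + f^2 + ... = 1/(1 - f). Here is a minimal C sketch of the idea (the function names and the 44.1 kHz / 50 ms figures are illustrative, not from anyone's actual patch), where the dry input is pre-scaled by (1 - f) so the accumulated repeats can never exceed the original peak:

#include <stddef.h>

#define DELAY_SAMPS 2205               /* 50 ms at 44.1 kHz (assumed rate) */

static float delay_line[DELAY_SAMPS];  /* zero-initialised */
static size_t write_pos = 0;

/* One sample of a feedback delay whose input is pre-attenuated by
 * (1 - feedback): since the repeats sum toward 1/(1 - feedback),
 * the loop output stays bounded by the original input peak. */
float delay_tick(float in, float feedback)
{
    float delayed = delay_line[write_pos];
    float out = in * (1.0f - feedback) + delayed * feedback;
    delay_line[write_pos] = out;
    write_pos = (write_pos + 1) % DELAY_SAMPS;
    return out;
}

With feedback = 0.9 this scales the input by 0.1 (about -20 dB), which is exactly the "quieter sample" trick, only computed from the feedback amount instead of chosen by ear.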
That's a good solution, but I forgot to say that this will be done in realtime with an instrumentalist, so it may be difficult to control this ... the only thing I can do is decrease the microphone input level at those moments
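When the source level cannot be controlled by hand, one common alternative (a general DSP technique, not something proposed in this thread) is to put a soft saturator such as tanh inside the feedback loop: loud passages then compress smoothly instead of hitting the hard digital clip, and the loop stays bounded no matter what the microphone delivers. A sketch, reusing the hypothetical delay line above:

#include <math.h>

/* Variant of delay_tick() with a tanh soft-clip inside the loop:
 * tanhf() bounds the stored sample to (-1, 1), so the feedback
 * path cannot run away even with an uncontrolled live input. */
float delay_tick_soft(float in, float feedback)
{
    float delayed = delay_line[write_pos];
    float out = tanhf(in + delayed * feedback);
    delay_line[write_pos] = out;
    write_pos = (write_pos + 1) % DELAY_SAMPS;
    return out;
}

In Pd terms this would be a saturating stage (for example an [expr~]-based tanh, if available in your build) sitting between the delay read and the delay write, in place of [clip~].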
On Fri, May 21, 2004 at 03:39:01PM +0200, julien.breval@tremplin-utc.net wrote:
Hello
A common delay problem. Suppose you have a note that lasts about one second, and you are processing it through a delay with feedback. The problems come when you use short delay times (about 50-100 ms) with high feedback (about 80-90 %). As the note is superposed (mixed) many times, it produces classic digital clipping distortion. [Actually, even if you repeat the note only once, you may get distortion as well if the delay time is shorter than the duration of the note.] Is there a common method for controlling the occurrence of this clipping (besides using [clip~]), or do I just have to tune the values empirically?
thanks,
-j
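For concreteness, plugging the question's numbers into the geometric-series bound above (my arithmetic, not a measurement from the original patch): a 1 s note over a 50 ms delay overlaps with roughly 1000 / 50 = 20 earlier copies of itself, and at 90 % feedback those copies sum toward 1 + 0.9 + 0.9^2 + ... + 0.9^19 ≈ 8.8, close to the infinite-series limit 1/(1 - 0.9) = 10. That is up to about +20 dB of buildup, so a note recorded anywhere near full scale will clip within the first few repeats.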
PD-list mailing list PD-list@iem.at to manage your subscription (including un-subscription) see http://iem.at/cgi-bin/mailman/listinfo/pd-list