and please, always, always reply to the mailing list rather than by personal email (unless you want a private conversation).
On 2012-06-06 08:21, IOhannes m zmoelnig wrote:
On 2012-06-06 00:45, Tebjan Halm - VVVV wrote:
yep, the latest pd is just missing that one export of glist_getindex. maybe someone just forgot it. i am trying to build pd vanilla with the makefile.mingw right now.
after some (3 hrs) trouble and searching, all files build, but I have a linker error now:
...
g_text.o:g_text.c:(.text+0x19ac): undefined reference to `u8_wc_toutf8'
g_rtext.o:g_rtext.c:(.text+0x57): undefined reference to `u8_charnum'
g_rtext.o:g_rtext.c:(.text+0x173): undefined reference to `u8_offset'
g_rtext.o:g_rtext.c:(.text+0x2a0): undefined reference to `u8_charnum'
g_rtext.o:g_rtext.c:(.text+0x76c): undefined reference to `u8_offset'
...
what lib is missing? I can't find any hints on it... I guess it's just something I have to install for MinGW. Any ideas?
well, makefile.mingw seems not to build some files, namely s_utf8.c. Just add it to the SRC variable within makefile.mingw.
apart from that, i think the "right" way would be to build Pd using the autotools toolchain. afair, some tweaks are required (alas!), which ought to be documented in the mailing list archives (look out for "patco" and "mingw")
gmasdr IOhannes
Hey, I wonder whether there is something similar to Max' ipoke~ (an interpolating buffer~ writer) for Pd. I would need it for some physical modelling and resampling stuff. Otherwise, I could implement it myself. It seems only interpolated reading is available (tabread4~ and similar ones), not writing. Thanks in advance, Josep M
On Wed, 2012-06-06 at 09:53 +0200, Jeppi Jeppi wrote:
Hey, I wonder whether there is something similar to Max' ipoke~ (an interpolating buffer~ writer) for Pd. I should need it for some physical modelling and resampling stuff. Otherwise, I could implement it myself. It seems only interpolated reading is available (tabread4~ and similar ones), not writing.
This somehow reminds me of the thread about settable [receive]. Is there really a need for the ability to do interpolated writing? Conceptually, is there any restriction if it is lacking? Can't everything that employs interpolated writing be achieved with interpolated reading as well?
Maybe I'm not thinking hard enough...
Roman
----- Original Message -----
From: Roman Haefeli reduzent@gmail.com To: pd-list@iem.at Cc: Sent: Wednesday, June 6, 2012 4:26 AM Subject: Re: [PD] ipoke~ ?
On Wed, 2012-06-06 at 09:53 +0200, Jeppi Jeppi wrote:
Hey, I wonder whether there is something similar to Max' ipoke~ (an interpolating buffer~ writer) for Pd. I should need it for some physical modelling and resampling stuff. Otherwise, I could implement it myself. It seems only interpolated reading is available (tabread4~ and similar ones), not writing.
This somehow reminds of the thread about settable [receive].
Whether or not the user who started the settable [receive] thread really needed a settable receive, there are situations where it's needed, like wrapping s/r in abstractions so that I don't have to prepend a $0- which, in 95% of cases is what I want, and using a 2nd arg for setting scope for the other 5% of situations. There, not having a settable receive leads to hacky solutions like dynamic-patching or feeding a message-box with a semicolon, the receive-symbol, and the message (which also requires a hack to get "list foo" to remain "list foo" when it comes out). Both of those solutions are obscure and way more error-prone than simply sending a symbol to an inlet.
And the historical replies to a user wanting a settable receive of "why do you want to do that" are misleading, because the real question was "why do you want to do that when there's a long-standing bug-- even in all the iemguis-- that may cause a crash by doing that?"
Anyway, Ivica apparently has fixed the issue.
-Jonathan
Is there really a need for the ability to do interpolated writing? Conceptually, is there any restriction if it is lacking? Can't everything that employs interpolated writing be achieved with interpolated reading as well?
Maybe I'm not thinking hard enough...
Roman
On Wed, 2012-06-06 at 08:56 -0700, Jonathan Wilkes wrote:
----- Original Message -----
From: Roman Haefeli reduzent@gmail.com To: pd-list@iem.at Cc: Sent: Wednesday, June 6, 2012 4:26 AM Subject: Re: [PD] ipoke~ ?
On Wed, 2012-06-06 at 09:53 +0200, Jeppi Jeppi wrote:
Hey, I wonder whether there is something similar to Max' ipoke~ (an interpolating buffer~ writer) for Pd. I should need it for some physical modelling and resampling stuff. Otherwise, I could implement it myself. It seems only interpolated reading is available (tabread4~ and similar ones), not writing.
This somehow reminds of the thread about settable [receive].
Whether or not the user who started the settable [receive] thread really needed a settable receive, there are situations where it's needed, like wrapping s/r in abstractions so that I don't have to prepend a $0- which, in 95% of cases is what I want, and using a 2nd arg for setting scope for the other 5% of situations.
Forgive my ignorance, but I don't understand. Can you elaborate on this?
There, not having a settable receive leads to hacky solutions like dynamic-patching or feeding a message-box with a semicolon, the receive-symbol, and the message (which also requires a hack to get "list foo" to remain "list foo" when it comes out). Both of those solutions are obscure and way more error-prone than simply sending a symbol to an inlet.
Sure, I wasn't advocating substituting a settable receive with some dynamic-patching hack. I just happened not to be able to think of a case that absolutely needs a settable receive (and am sorry for not yet understanding the one you provided above).
And the historical replies to a user wanting a settable receive of "why do you want to do that" are misleading, because the real question was "why do you want to do that when there's a long-standing bug-- even in all the iemguis-- that may cause a crash by doing that?"
There never was a bug in [r ], afaik. Before it was mentioned in this thread, I didn't know that adding an inlet to [r] would imply implementing a bug; I always thought that it was never implemented for conceptual reasons. And for some reason I haven't missed it in all those years of Pd patching.
Anyway, Ivica apparently has fixed the issue.
That's good.
Roman
----- Original Message -----
From: Roman Haefeli reduzent@gmail.com To: Cc: "pd-list@iem.at" pd-list@iem.at Sent: Thursday, June 7, 2012 4:48 AM Subject: [PD] settable receive again (was: ipoke~ ?)
On Wed, 2012-06-06 at 08:56 -0700, Jonathan Wilkes wrote:
----- Original Message -----
From: Roman Haefeli reduzent@gmail.com To: pd-list@iem.at Cc: Sent: Wednesday, June 6, 2012 4:26 AM Subject: Re: [PD] ipoke~ ?
On Wed, 2012-06-06 at 09:53 +0200, Jeppi Jeppi wrote:
Hey, I wonder whether there is something similar to Max' ipoke~ (an interpolating buffer~ writer) for Pd. I should need it for some physical modelling and resampling stuff. Otherwise, I could implement it myself. It seems only interpolated reading is available (tabread4~ and similar ones), not writing.
This somehow reminds of the thread about settable [receive].
Whether or not the user who started the settable [receive] thread really needed a settable receive, there are situations where it's needed, like wrapping s/r in abstractions so that I don't have to prepend a $0- which, in 95% of cases is what I want, and using a 2nd arg for setting scope for the other 5% of situations.
Forgive my ignorance, but I don't understand. Can you elaborate this?
I've posted about it before. Just imagine [s] inside abstraction [foo] and [r] inside abstraction [bar]. I want to type [foo blah] and have my abstraction set the inner [s] symbol to [parent-$0]-blah. Easy enough. Similarly, I want [bar blah] to set its inner [r] symbol to [parent-$0]-blah. Roadblock.
The scope stuff is more involved than that, but that's enough of an example to demonstrate a use case for a dynamically settable [receive].
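To make the use case concrete, here is a minimal C sketch of what a settable receive boils down to, using Pd's public pd_bind/pd_unbind API. The object name "rset" and its methods are made up for illustration; this is neither Miller's code nor the pd-l2ork fix, and it still has the caveat discussed in this thread (re-binding from inside a message that arrived via the bound symbol is exactly the operation that could crash).

#include "m_pd.h"

static t_class *rset_class;

typedef struct _rset
{
    t_object x_obj;
    t_symbol *x_sym;    /* symbol we are currently bound to */
    t_outlet *x_out;
} t_rset;

    /* "set" method: unbind from the old symbol, bind to the new one */
static void rset_set(t_rset *x, t_symbol *s)
{
    if (x->x_sym) pd_unbind(&x->x_obj.ob_pd, x->x_sym);
    x->x_sym = s;
    pd_bind(&x->x_obj.ob_pd, s);
}

    /* forward whatever arrives at the bound symbol to the outlet */
static void rset_anything(t_rset *x, t_symbol *s, int argc, t_atom *argv)
{
    outlet_anything(x->x_out, s, argc, argv);
}

static void *rset_new(t_symbol *s)
{
    t_rset *x = (t_rset *)pd_new(rset_class);
    x->x_sym = 0;
    x->x_out = outlet_new(&x->x_obj, 0);
    if (s && s != &s_) rset_set(x, s);
    return (void *)x;
}

static void rset_free(t_rset *x)
{
    if (x->x_sym) pd_unbind(&x->x_obj.ob_pd, x->x_sym);
}

void rset_setup(void)
{
    rset_class = class_new(gensym("rset"), (t_newmethod)rset_new,
        (t_method)rset_free, sizeof(t_rset), 0, A_DEFSYM, 0);
    class_addanything(rset_class, (t_method)rset_anything);
    class_addmethod(rset_class, (t_method)rset_set, gensym("set"), A_SYMBOL, 0);
}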
There, not having a settable receive leads to hacky solutions like dynamic-patching or feeding a message-box with a semicolon, the receive-symbol, and the message (which also requires a hack to get "list foo" to remain "list foo" when it comes out). Both of those solutions are obscure and way more error-prone than simply sending a symbol to an inlet.
Sure, I wasn't advocating to substitute a settable receive by some dynamic patching hack. I just happened not to be able to think of a case that absolutely needs a settable receive (and am sorry for not yet understanding the one you provided above).
And the historical replies to a user wanting a settable receive of "why do you want to do that" are misleading, because the real question was "why do you want to do that when there's a long-standing bug-- even in all the iemguis-- that may cause a crash by doing that?"
There never was a bug in [r ], afaik.
There's a bug in [iem_r] and all the other alternatives to [r] that tried to add that functionality, plus the iemguis which are internal objects.
I didn't know about the fact, that adding an inlet to [r] would imply implementing a bug before it was mentioned in this thread and I always thought, that for conceptual reasons it was never implemented. And for some reason I haven't missed it in all those years of Pd patching.
What are the conceptual reasons?
-Jonathan
Anyway, Ivica apparently has fixed the issue.
That's good.
Roman
hello,
On 08/06/2012 00:43, Jonathan Wilkes wrote:
I've posted about it before. Just imagine [s] inside abstraction [foo] and [r] inside abstraction [bar]. I want to type [foo blah] and have my abstraction set the inner [s] symbol to [parent-$0]-blah. Easy enough. Similarly, I want [bar blah] to set its inner [r] symbol to [parent-$0]-blah. Roadblock.
[s parent-$0-$1] [r parent-$0-$1]
anyway, if you are really in need of a settable send and a settable receive, you can always use prepend and route, which are both settable. see the small attached abstraction.
cheers c
----- Original Message -----
From: Cyrille Henry ch@chnry.net To: Jonathan Wilkes jancsika@yahoo.com Cc: Roman Haefeli reduzent@gmail.com; "pd-list@iem.at" pd-list@iem.at Sent: Friday, June 8, 2012 4:16 AM Subject: Re: [PD] settable receive again
hello,
On 08/06/2012 00:43, Jonathan Wilkes wrote:
I've posted about it before. Just imagine [s] inside abstraction [foo] and [r] inside abstraction [bar]. I want to type [foo blah] and have my abstraction set the inner [s] symbol to [parent-$0]-blah. Easy enough. Similarly, I want [bar blah] to set its inner [r] symbol to [parent-$0]-blah. Roadblock.
[s parent-$0-$1] [r parent-$0-$1]
That probably wasn't clear. I don't want [symbol parent-$0-$1]; inside my abstractions I want the parent $0 prefixed to $1 as the symbol. In other words, my abstractions make it so that I don't have to type "$0-" in every s/r pair where I want canvas locality which as I said is most of the cases by far. (My abstractions do other stuff which I wrote about in the nonlocal scope thread, but that isn't important to this discussion.)
anyway, if you really in need for a settable send and a settable receive, you can always use prepends and route that are both settable. see small attached abstraction.
I think you are stuck for two reasons: 1) [r setable_send_receive] is global. I want the parent $0 in front of it so that my abstraction symbols don't clash with other abstractions. 2) Your example filters messages in a way that s/r doesn't. It's possible to hack around this using three extra objects. It is also possible to get the arguments of an abstraction in Pd Vanilla. With the former, I'd rather send a single message to an inlet and be done.
-Jonathan
cheers c
On 08/06/2012 19:15, Jonathan Wilkes wrote:
anyway, if you really in need for a settable send and a settable receive, you can always use prepends and route that are both settable. see small attached abstraction.
I think you are stuck for two reasons
- [r setable_send_receive] is global. I want the parent $0 in front of it so that my abstraction symbols don't clash with other abstractions.
i don't understand this point: just ignore the settable_send_receive stuff that is hidden inside ss and sr. these 2 abstractions work exactly like a real settable send and receive, at least for the local / global send. i.e. if you want a local-only send/receive, just use $0-bla, like you would have done with "real" send / receive.
it's the route that filters out content from different abstractions. the only problem is CPU overhead, but that should really be minor.
- Your example filters messages in a way that s/r doesn't. It's possible to hack around this using three extra objects.
yes, right. but that is a minor problem. not a show stopper.
cheers c
It is also possible to get the arguments of an abstraction in Pd Vanilla. With the former, I'd rather send a single message to an inlet and be done.
-Jonathan
cheers c
This may be a bit off-topic, but here goes anyhow. If you guys need dynamically settable receives that will not crash Pd, try the latest pd-l2ork (version 20120607).
Cheers!
Ivica Ico Bukvic, D.M.A Composition, Music Technology Director, DISIS Interactive Sound & Intermedia Studio Director, L2Ork Linux Laptop Orchestra Assistant Director, CCTAD Virginia Tech Department of Music Blacksburg, VA 24061-0240 (540) 231-6139 (540) 231-5034 (fax) disis.music.vt.edu l2ork.music.vt.edu ico.bukvic.net
Cyrille Henry ch@chnry.net wrote:
On 08/06/2012 19:15, Jonathan Wilkes wrote:
anyway, if you really in need for a settable send and a settable receive, you can always use prepends and route that are both settable. see small attached abstraction.
I think you are stuck for two reasons
- [r setable_send_receive] is global. I want the parent $0 in front of it so that
my abstraction symbols don't clash with other abstractions.
i don't understand this point : just ignore the settable_send_receive stuff that is hidden inside ss and sr. this 2 abstractions work exactly like a real settable send and receive, at least for the local / global send. i.e. if you want a local only send/receive, just use $0-bla, like you would have done with "real" send / receive.
that the route that filter content of different abstraction. the only problem is CPU overload, but that should really be minor.
- Your example filters messages in a way that s/r doesn't. It's possible to hack
around this using three extra objects.
yes, right. but that is a minor problem. not a show stopper.
cheers c
It is also possible to get the arguments of an abstraction in Pd Vanilla. With the former, I'd rather send a single message to an inlet and be done.
-Jonathan
cheers c
----- Original Message -----
From: Cyrille Henry ch@chnry.net To: Jonathan Wilkes jancsika@yahoo.com Cc: Roman Haefeli reduzent@gmail.com; "pd-list@iem.at" pd-list@iem.at Sent: Saturday, June 9, 2012 7:08 AM Subject: Re: [PD] settable receive again
On 08/06/2012 19:15, Jonathan Wilkes wrote:
anyway, if you really in need for a settable send and a settable receive, you can always use prepends and route that are both settable. see small attached abstraction.
I think you are stuck for two reasons
- [r setable_send_receive] is global. I want the parent $0 in front of it so that my abstraction symbols don't clash with other abstractions.
i don't understand this point : just ignore the settable_send_receive stuff that is hidden inside ss and sr.
What if some other abstraction somewhere uses that symbol? The whole point of $0 is that you don't need to worry about this.
this 2 abstractions work exactly like a real settable send and receive, at least for the local / global send.
No, they don't. They have an additional feature/bug of filtering lists that have a symbol as the first element. "list foo bar" comes out "foo bar" at the other end.
Like I wrote, it's possible to hack around this problem. But that's much uglier than, say, sending a symbol to an inlet.
-Jonathan
i.e. if you want a local only send/receive, just use $0-bla, like you would have done with "real" send / receive.
that the route that filter content of different abstraction. the only problem is CPU overload, but that should really be minor.
- Your example filters messages in a way that s/r doesn't. It's possible to hack around this using three extra objects.
yes, right. but that is a minor problem. not a show stopper.
cheers c
It is also possible to get the arguments of an abstraction in Pd Vanilla. With the former, I'd rather send a single message to an inlet and be done.
-Jonathan
cheers c
On 09/06/2012 18:36, Jonathan Wilkes wrote:
----- Original Message -----
From: Cyrille Henry ch@chnry.net To: Jonathan Wilkes jancsika@yahoo.com Cc: Roman Haefeli reduzent@gmail.com; "pd-list@iem.at" pd-list@iem.at Sent: Saturday, June 9, 2012 7:08 AM Subject: Re: [PD] settable receive again
On 08/06/2012 19:15, Jonathan Wilkes wrote:
anyway, if you really in need for a settable send and a settable receive, you can always use prepends and route that are both settable. see small attached abstraction.
I think you are stuck for two reasons
- [r setable_send_receive] is global. I want the parent $0 in front of it so that my abstraction symbols don't clash with other abstractions.
i don't understand this point : just ignore the settable_send_receive stuff that is hidden inside ss and sr.
What if some other abstraction somewhere uses that symbol? The whole point of $0 is that you don't need to worry about this.
the risk can be reduced using this symbol instead: This_symbol_is_use_for_the_ss_and_sr_object_and_should_not_be_use_elsewhere
if you still think it's dangerous, then think of someone using 1000-foo in their patch. $0-foo is not 100% safe either!!!
this 2 abstractions work exactly like a real settable send and receive, at least for the local / global send.
No, they don't. They have an additional feature/bug of filtering lists that have a symbol as the first element. "list foo bar" comes out "foo bar" at the other end.
yes, my sentence was an answer to your 1st point (local / global send), not an answer to your 2nd point.
this patch was a proof of concept, not a final answer.
Like I wrote, it's possible to hack around this problem. But that's much uglier than, say, sending a symbol to an inlet.
yes, i agree. having a settable receive is one of the 1000 things that could be improved to make the user's life easier. i just wanted to point out that it's far from being a show stopper, since a simple workaround can be found.
cheers
Cyrille
-Jonathan
i.e. if you want a local only send/receive, just use $0-bla, like you would have done with "real" send / receive.
that the route that filter content of different abstraction. the only problem is CPU overload, but that should really be minor.
- Your example filters messages in a way that s/r doesn't. It's possible to hack around this using three extra objects.
yes, right. but that is a minor problem. not a show stopper.
cheers c
It is also possible to get the arguments of an abstraction in Pd Vanilla. With the former, I'd rather send a single message to an inlet and be done.
-Jonathan
cheers c
----- Original Message -----
From: Cyrille Henry ch@chnry.net To: Jonathan Wilkes jancsika@yahoo.com Cc: Roman Haefeli reduzent@gmail.com; "pd-list@iem.at" pd-list@iem.at Sent: Saturday, June 9, 2012 12:55 PM Subject: Re: [PD] settable receive again
On 09/06/2012 18:36, Jonathan Wilkes wrote:
----- Original Message -----
From: Cyrille Henry ch@chnry.net To: Jonathan Wilkes jancsika@yahoo.com Cc: Roman Haefeli reduzent@gmail.com; "pd-list@iem.at" pd-list@iem.at Sent: Saturday, June 9, 2012 7:08 AM Subject: Re: [PD] settable receive again
On 08/06/2012 19:15, Jonathan Wilkes wrote:
anyway, if you really in need for a settable send and a settable receive, you can always use prepends and route that are both settable. see small attached abstraction.
I think you are stuck for two reasons 1) [r setable_send_receive] is global. I want the parent $0 in front of it so that my abstraction symbols don't clash with other abstractions.
i don't understand this point : just ignore the settable_send_receive stuff that is hidden inside ss and sr.
What if some other abstraction somewhere uses that symbol? The whole point of $0 is that you don't need to worry about this.
the risk can be reduce using this symbol instead : This_symbol_is_use_for_the_ss_and_sr_object_and_should_not_be_use_elsewhere
if you still think it's dangerous, then think of someone using 1000-foo in it's patch. $0-foo is not 100% safe either!!!
this 2 abstractions work exactly like a real settable send and receive, at least for the local / global send.
No, they don't. They have an additional feature/bug of filtering lists that have a symbol as the first element. "list foo bar" comes out "foo bar" at the other end.
yes, my sentence was an answer to your 1st point : local / global send. not an answer to your 2nd point.
this patchs was a prof of concept, not a final answer.
Like I wrote, it's possible to hack around this problem. But that's much uglier than, say, sending a symbol to an inlet.
yes, i agree. having a settable receive is one of the 1000 things that can be improve to make user life easier. i just wanted to point that it's far from being a show stopper, since simple workaround can be find.
999 if you use pd-l2ork. :)
A roadblock isn't a showstopper. But if you have enough roadblocks it makes it very difficult to get where you want to go.
-Jonathan
cheers
Cyrille
-Jonathan
i.e. if you want a local only send/receive, just use $0-bla, like you would have done with "real" send / receive.
that the route that filter content of different abstraction. the only problem is CPU overload, but that should really be minor.
2) Your example filters messages in a way that s/r doesn't. It's possible to hack around this using three extra objects.
yes, right. but that is a minor problem. not a show stopper.
cheers c
It is also possible to get the arguments of an abstraction in Pd Vanilla. With the former, I'd rather send a single message to an inlet and be done.
-Jonathan
cheers c
Quoting Jonathan Wilkes jancsika@yahoo.com:
[s parent-$0-$1] [r parent-$0-$1]
That probably wasn't clear. I don't want [symbol parent-$0-$1]; inside my abstractions I want the parent $0 prefixed to $1 as the symbol. In other words, my abstractions make it so that I don't have to type "$0-" in every s/r pair where I want canvas locality which as I said is most of the cases by far. (My abstractions do other stuff which I wrote about in the nonlocal scope thread, but that isn't important to this discussion.)
are you talking about canvas-locality (something Pd has no constructs for), patch-locality ($0), or hierarchical locality (something like [block~] does, and which many text-based languages do, e.g. {int foo; if(2>1){float foo; /* ... */ } })?
also, do you want to be able to build abstractions that have the same property?
using externals like iemguts makes it trivial to have all this, but probably the idea is to have all this in Pd-vanilla (i haven't followed the entire thread yet, due to problems with my mail filters)
fgamsr IOhannes
----- Original Message -----
From: "zmoelnig@iem.at" zmoelnig@iem.at To: pd-list@iem.at Cc: Sent: Sunday, June 10, 2012 6:10 AM Subject: Re: [PD] settable receive again
Quoting Jonathan Wilkes jancsika@yahoo.com:
[s parent-$0-$1] [r parent-$0-$1]
That probably wasn't clear. I don't want [symbol parent-$0-$1]; inside my abstractions I want the parent $0 prefixed to $1 as the symbol. In other words, my abstractions make it so that I don't have to type "$0-" in every s/r pair where I want canvas locality which as I said is most of the cases by far. (My abstractions do other stuff which I wrote about in the nonlocal scope thread, but that isn't important to this discussion.)
are you talking about canvas-locality (something Pd has no constructs for),
See [sendlocal/receivelocal]
patch-locality ($0),
Yes, that's what I meant.
or hierarchical locality (something like [block~] does, and which many text-based languages do, e.g. {int foo; if(2>1){float foo; /* ... */ } }
also, do you want to be able to build abstractions that have the same property?
See the thread with the subject "nonlocal message passing scope".
using externals like iemguts makes it trivial to have all this, but probably the idea is to have all this in Pd-vanilla (i haven't followed the entire thread yet, due to problems with my mai filters)
Digression: While iemguts and my patch that adds a canvas "get" method are good for prototyping a solution, neither provides a full solution-- that is, whatever idioms we come up with for scope would have to be augmented by use of $0 in table manipulation classes, message boxes, structs, and probably other places I'm not thinking of. That's fine for me, as I'm still exploring the possibilities of scope in pd, esp. with abstractions, but the user learning Pd doesn't need to have two incompatible ways of handling symbol locality.
Tim Blechmann had a neat way of solving this in Nova by declaring variables. I think they default to patch-locality if not declared, though I'm not certain on that. Anyway, his method apparently covers all symbols that are bound to send/receive message-- in message boxes, names of arrays, etc. It looked like a very clean solution except that I don't think it addressed abstraction locality like "this symbol shared by all instances of this abstraction", "this symbol common to all objects from this libdir", "this symbol shared by all instances of this abstraction on the parent", etc.
-Jonathan
fgamsr IOhannes
On Sun, Jun 10, 2012 at 6:10 AM, zmoelnig@iem.at wrote:
Quoting Jonathan Wilkes jancsika@yahoo.com:
[s parent-$0-$1] [r parent-$0-$1]
That probably wasn't clear. I don't want [symbol parent-$0-$1]; inside my abstractions I want the parent $0 prefixed to $1 as the symbol. In other words, my abstractions make it so that I don't have to type "$0-" in every s/r pair where I want canvas locality which as I said is most of the cases by far. (My abstractions do other stuff which I wrote about in the nonlocal scope thread, but that isn't important to this discussion.)
are you talking about canvas-locality (something Pd has no constructs for), patch-locality ($0), or hierarchical locality (something like [block~] does, and which many text-based languages do, e.g. {int foo; if(2>1){float foo; /* ... */ } }
also, do you want to be able to build abstractions that have the same property?
One other thing I'm not clear on - is the point to have a convenient way to ensure locality at patch init, or do you want settable receive while the patch is running? The latter would provide the former, obviously, but I wonder if the latter is actually germane to the original complaint. (The latter would also be in most ways conceptually the same as dynamic patching connections while the patch is running...)
Matt
----- Original Message -----
From: Matt Barber brbrofsvl@gmail.com To: zmoelnig@iem.at Cc: pd-list@iem.at Sent: Sunday, June 10, 2012 12:50 PM Subject: Re: [PD] settable receive again
On Sun, Jun 10, 2012 at 6:10 AM, zmoelnig@iem.at wrote:
Quoting Jonathan Wilkes jancsika@yahoo.com:
[s parent-$0-$1] [r parent-$0-$1]
That probably wasn't clear. I don't want [symbol parent-$0-$1]; inside my abstractions I want the parent $0 prefixed to $1 as the symbol. In other words, my abstractions make it so that I don't have to type "$0-" in every s/r pair where I want canvas locality which as I said is most of the cases by far. (My abstractions do other stuff which I wrote about in the nonlocal scope thread, but that isn't important to this discussion.)
are you talking about canvas-locality (something Pd has no constructs for), patch-locality ($0), or hierarchical locality (something like [block~] does, and which many text-based languages do, e.g. {int foo; if(2>1){float foo; /* ... */ } }
also, do you want to be able to build abstractions that have the same property?
One other thing I'm not clear on - is the point to have a convenient way to ensure locality at patch init, or do you want settable receive while the patch is running? The latter would provide the former, obviously, but I wonder if the latter is actually germane to the original complaint. (The latter would also be in most ways conceptually the same as dynamic patching connections while the patch is running...)
For the purposes of this constrained example-- where I'm _only_ concerned with my abstraction that takes $1 and prepends the parent's "$0-" to it-- I have the receive set by loadbang.
In my example from the "nonlocal message passing scope" thread I might have added a 2nd inlet to reset the symbol, but that's irrelevant here. The main point is that the question "why would you ever want a settable receive" has a clear answer, and in this example it's much preferable to the alternatives.
-Jonathan
Matt
On Don, 2012-06-07 at 15:43 -0700, Jonathan Wilkes wrote:
What are the conceptual reasons?
I don't know actually. I somehow thought that the missing ability to set the receive symbol in [r ] was intentional. And since I never saw an urgent need for it, I never questioned my somewhat silly assumption. However, it now seems this feature hasn't been added simply because of the bug it would expose.
Regarding your use case: I would do it like Cyrille proposed in the previous mail. Now, would that solve your problem, or am I still totally misunderstanding you? (Sorry if I do)
Roman
----- Original Message -----
From: Roman Haefeli reduzent@gmail.com To: "pd-list@iem.at" pd-list@iem.at Cc: Sent: Friday, June 8, 2012 9:05 AM Subject: Re: [PD] settable receive again (was: ipoke~ ?)
On Don, 2012-06-07 at 15:43 -0700, Jonathan Wilkes wrote:
What are the conceptual reasons?
I don't know actually. I somehow thought, that the missing ability to set the receive symbol in [r ] was intentional. And since I never saw an urgent need for it, I never questioned my somewhat silly assumption. However, it seems now this feature hasn't been added simply because of the bug it would expose.
Regarding your use case: I would do it like Cyrille proposed it in the previous mail. Now, would that solve your problem or am I still totally misunderstanding you? (Sorry if I do)
See my reply to him. It would solve my problem in the same way the hack to get args in Pd vanilla solves the arg-getting problem. That is to say, it requires more work from the user and is really only a partial solution.
-Jonathan
Roman
On 06.06.2012 at 10:26, Roman Haefeli wrote:
This somehow reminds of the thread about settable [receive]. Is there really a need for the ability to do interpolated writing? Conceptually, is there any restriction if it is lacking? Can't everything that employs interpolated writing be achieved with interpolated reading as well?
things I used ipoke~ for in Max include an array/buffer based looper that allows for overdubs while changing playback speed, and pedal-style delays, where each feedback iteration is pitchshifted (see also discussion here: http://puredata.hurleur.com/sujet-3204-different-ways-implementing-delay-loo... )
cheers
georg
On Sat, Jun 9, 2012 at 11:00 AM, Georg Bosch kram@stillavailable.com wrote:
things I used ipoke~ for in Max include an array/buffer based looper that allows for overdubs while changing playback speed, and pedal-style delays, where each feedback iteration is pitchshifted (see also discussion here: http://puredata.hurleur.com/sujet-3204-different-ways-implementing-delay-loo...
For exactly that purpose I was looking for [tabwrite4~] a while ago, see:
http://lists.puredata.info/pipermail/pd-list/2012-01/093476.html
Indeed [ipoke~] would be useful for many things. Did anyone ask Pierre Alexandre if he is willing to release the code so it can be ported to Pd?
Katja
Hi,
I was away from the list for a long while and missed the [tabwrite4~] conversation -- quite interesting.
I have been thinking about this for a while. Depending on the application, there's a further complication, which is whether it would overwrite samples in the table, or mix the incoming signal with samples already there. I don't have access to max so I don't know what ipoke~ does.
Csound has a variable write delay opcode that would be worth looking at - the csound website has just been flagged by google for having malicious content so I can't link to the manual page, but the opcode is called "vdelayxw."
Matt
On Sat, Jun 9, 2012 at 6:03 AM, katja katjavetter@gmail.com wrote:
On Sat, Jun 9, 2012 at 11:00 AM, Georg Bosch kram@stillavailable.com wrote:
things I used ipoke~ for in Max include an array/buffer based looper that allows for overdubs while changing playback speed, and pedal-style delays, where each feedback iteration is pitchshifted (see also discussion here: http://puredata.hurleur.com/sujet-3204-different-ways-implementing-delay-loo...
For exactly that purpose I was looking for [tabwrite4~] a while ago, see:
http://lists.puredata.info/pipermail/pd-list/2012-01/093476.html
Indeed [ipoke~] would be useful for many things. Did anyone ask Pierre Alexandre if he is willing to release the code so it can be ported to Pd?
Katja
On Sat, Jun 9, 2012 at 5:18 PM, Matt Barber brbrofsvl@gmail.com wrote:
Csound has a variable write delay opcode that would be worth looking at - the csound website has just been flagged by google for having malicious content so I can't link to the manual page, but the opcode is called "vdelayxw."
Unfortunately I cannot understand the C code of vdelayxw. There are comments for the obvious things, but not for the magic numbers and other tricks. But it may be a method for sinc-interpolated resampling.
James Fenn pointed to Julius O. Smith's pages on resampling and sinc-interpolation:
https://ccrma.stanford.edu/~jos/resample/
After reading the pages and experimenting with the proposed sinc tables, I noticed that proper sinc-interpolation would be fairly CPU-intensive. It is much less efficient than [tabread4~]: 4-point sinc-interpolation would not even work for the simplest resampling job. Compare it with a FIR filter, which may need up to a few dozen points per octave of resampling.
In addition to that, an object like [ipoke~] or [tabwrite4~] implies a continuously variable resampling factor, which depends on the distances between consecutive float indexes received at the index inlet. Should the interpolation order grow according to distance? If so, what would be the limit?
On the other hand, think of [tabread4~]: its interpolation scheme is fixed, no matter what the resampling factor. With extreme resampling, aliases may be noticeable. But what the hell, it doesn't sound like the original music anyway, when sped up or down to extremes. That is the difference with an offline resampling job, where the original sound must be preserved insofar as the new frequency range allows. In that sense, an interpolation scheme like the one in [tabread4~] could be used for realtime variable speed writing, leaving the consequences to the user. For example, if you make large jumps through the table, many old samples would simply not be rewritten.
But even with interpolation quality requirements so relaxed, it is not by itself clear how the samples should be written. Using sinc-interpolation, each input sample could be written as many samples of a (possibly phase-shifted) sinc function, with amplitude compensation for the overlap. The interpolation scheme of [tabread4~], however, can not calculate four output samples based on one input sample; it can only calculate one output sample based on four input samples.
Imagine how one would do this with a fixed resampling factor. For example, with resampling factor 0.75 (downsampling) you would write 64 * 0.75 = 48 samples into the array for every block of 64 input samples, while incrementing the read index by 1 / 0.75 = 1.3333333. Another example: with resampling factor 1.5 (upsampling) you would write 64 * 1.5 = 96 samples into the array for each block of 64 input samples, while incrementing the read index by 1 / 1.5 = 0.6666666. The perform loop would not iterate over an integer n (= blocksize), but would just break when the float read index exceeds n. To accommodate interpolation, and index increments larger than one, a few samples of fixed delay 'headroom' must be introduced.
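As a rough illustration of that fixed-factor scheme, a C sketch (hypothetical names; the input is interpolated linearly for brevity, and the fractional remainder and the 'headroom' of input samples kept across block boundaries are left out, as is any bounds checking on the table):

    /* write roughly n * r samples into the table for each block of n input
       samples, stepping through the input at increments of 1 / r */
static void resampled_write_block(const float *in, int n, double r,
    float *table, long *writepos)
{
    double readidx = 0.0;
    double inc = 1.0 / r;       /* e.g. r = 0.75 -> inc = 1.3333... */
    while (readidx < n - 1)     /* break when the read index leaves the block */
    {
        int i = (int)readidx;
        double frac = readidx - i;
        table[(*writepos)++] =
            (float)((1.0 - frac) * in[i] + frac * in[i + 1]);
        readidx += inc;
    }
}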
In a [tabwrite4~], resampling factor would follow from index increments calculated from float index values received at the inlet. But what to do with large increments, exceeding the delay 'headroom' at the end of the input buffer? And another question: what to do with very small increments, leading to massive amounts of written samples and possibly to cpu overload? When starting on [tabwrite4~] a few months ago, I stumbled upon these problems. I then considered a version where you don't enter a float index at signal rate, but a resampling factor at message rate which can be checked for bad values, and set the delay 'headroom' as needed. But such a write object would need another method to optionally synchronize with a read object, and I have not worked that out either. Suggestions or comments are appreciated.
Katja
Hi, I've been going through the vdelayxw code myself. See comments:
On Wed, Jun 13, 2012 at 12:30 PM, katja katjavetter@gmail.com wrote:
On Sat, Jun 9, 2012 at 5:18 PM, Matt Barber brbrofsvl@gmail.com wrote:
Csound has a variable write delay opcode that would be worth looking at - the csound website has just been flagged by google for having malicious content so I can't link to the manual page, but the opcode is called "vdelayxw."
Unfortunately I can not understand the c code of vdelayxw. There's comments for the obvious things but not for the magic numbers and other tricks. But it may be a method for sinc-interpolated resampling.
It almost certainly is some kind of windowed sinc, and you're right about the magic numbers. I don't think you need to know for sure what the exact interpolation scheme is to make sense of it, though; my understanding of it is as follows:
For both the variable read and variable write delay opcodes in csound, one chooses an interpolation window size - say 32 samples.
Now, let's say we're trying to READ from the delay line at sample index 116.33. So we need to interpolate between sample 116 and 117. Given our 32-point interpolation window, the earliest sample that will have an effect on the interpolation is sample 101, and the last one is sample 132, so to find the correct interpolation we need to sum together all the scaled windowed sincs (or whatever convolution kernel is in the interpolation window) for each of those 32 samples, at index 116.33, which gives us our read value.
The write works rather in reverse: if we want to write a sample at index 116.33, then we need to calculate the windowed sinc (or whatever) for the input sample centered on 116.33, and MIX (not overwrite) those values for samples 101-132 into those samples. What emerges, then, becomes the cumulative effect of having interpolated: imagine the next sample written is at index 118.54 - you're going to mix its function into samples 103-134, and the overlap with the previous action is going to cause the interpolation to "work" once those samples reach the read head.
In that way, a variable write into a delay line is somewhat easier conceptually -- if it's done this way -- than a [tabwrite4~] would be, because the way the table is read is predetermined. Nothing is ever read until all the relevant input samples have had a chance to affect the output in the appropriate way.
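A rough C sketch of that write side (mixing into the line, not overwriting); the windowed sinc used here is only illustrative and not necessarily what vdelayxw actually computes, and all names are made up:

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define HALFWIDTH 16    /* 32-point window, as in the example above */

    /* illustrative kernel: sinc times a Hann window */
static double kernel(double x)
{
    if (x == 0.0) return 1.0;
    if (fabs(x) >= HALFWIDTH) return 0.0;
    return (sin(M_PI * x) / (M_PI * x))
        * 0.5 * (1.0 + cos(M_PI * x / HALFWIDTH));
}

    /* mix one input sample into the delay line at fractional index pos;
       for pos = 116.33 this touches samples 101..132, ADDING the scaled,
       fractionally shifted kernel to what is already there */
static void mix_write(float *line, long len, double pos, float sample)
{
    long center = (long)floor(pos), k;
    double frac = pos - center;
    for (k = -HALFWIDTH + 1; k <= HALFWIDTH; k++)
    {
        long idx = (center + k) % len;
        if (idx < 0) idx += len;
        line[idx] += sample * (float)kernel((double)k - frac);
    }
}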
On the other hand, think of [tabread4~]: it's interpolation scheme is fixed, no matter what resampling factor. With extreme resampling, aliases may be noticeable. But what the hell, it doesn't sound like the original music anyway, when sped up or down to extremes. That is the difference with an offline resampling job, when the original sound must be preserved insofar the new frequency range allows. In that sense, an interpolation scheme like in [tabread4~] could be used for realtime variable speed writing, leaving the consequences for the user. For example, if you make large jumps through the table, many old samples would simply not be rewritten.
But even with interpolation quality requirements so relaxed, it is not by itself clear how the samples should be written. Using sinc-interpolation, each input sample could be written as many samples of a (eventually phase-shifted) sinc function, with amplitude compensation for the overlap. The interpolation scheme of [tabread4~] however can not calculate four output samples based on one input sample, it could only calculate one output sample based on four input samples.
Two points here. The last thing you said is not actually true -- each interpolation scheme has an associated convolution function, which can be calculated by imagining what the interpolation would look like for a single sample whose value was 1.0 surrounded by zeroes everywhere else. This 4-point piecewise function can be used to write four samples in its immediate vicinity the same way that the sinc does in the csound example.
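In code, that unit-impulse recipe could look like the sketch below; read4() is the 4-point cubic formula as I recall it from Pd's d_array.c, so treat the exact coefficients as unverified:

    /* 4-point read interpolation over neighbours a, b, c, d at fraction f,
       in the style of [tabread4~] (coefficients from memory) */
static float read4(float a, float b, float c, float d, float f)
{
    float cminusb = c - b;
    return b + f * (cminusb - 0.1666667f * (1.0f - f) *
        ((d - a - 3.0f * cminusb) * f + (d + 2.0f * a - 3.0f * b)));
}

    /* write weights for an input sample landing at fractional offset f:
       feed a unit impulse through the read interpolator, once per tap.
       w[0..3] are the contributions to the table samples at relative
       indices -1, 0, +1, +2; a caller would then do
       table[i - 1 + j] += insample * w[j] for j = 0..3. */
static void write_weights(float f, float w[4])
{
    w[0] = read4(1.f, 0.f, 0.f, 0.f, f);
    w[1] = read4(0.f, 1.f, 0.f, 0.f, f);
    w[2] = read4(0.f, 0.f, 1.f, 0.f, f);
    w[3] = read4(0.f, 0.f, 0.f, 1.f, f);
}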
It seems the bigger question to me is, if you skip somewhere far in the table, you're going to write four samples, and then another four samples somewhere else. Maybe this is OK, but another way to think of what to do would be to imagine the incoming signal as something you're interpolating over the way you would do when reading from a table, in which case a very large index increment if you're writing could be just like a bunch of very small index increments when you're reading. So say you jump ahead 48 samples - one way to do it would be to write ALL 48 samples as an interpolation over the two input samples.
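That idea might look something like this sketch in C (hypothetical names; the two input samples are interpolated linearly across the jump, jumping backwards and wrapping around the table are not handled, and a jump smaller than one table index writes nothing, which relates to the problems discussed in the next paragraphs):

#include <math.h>

    /* when the write index jumps from prev to cur, fill every integer
       table index in between by interpolating between the previous and
       the current input sample; a jump of 48 writes roughly 48 samples */
static void span_write(float *table, double prev, double cur,
    float prevsample, float cursample)
{
    double span = cur - prev;
    long m;
    if (span <= 0.0) return;    /* backward jumps not handled here */
    for (m = (long)ceil(prev); m <= (long)floor(cur); m++)
    {
        double t = (m - prev) / span;   /* 0..1 across the jump */
        table[m] = (float)((1.0 - t) * prevsample + t * cursample);
    }
}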
That would open up some other problems, like how to interpret the difference between jumping back in a table vs "wrapping back around." Not sure how to deal with that at all (this problem doesn't arise in the delay line version of a variable write because what is represented is always a chunk of time rather than an abstract table of numbers to be used for whatever, so there's no real concept of "wraparound" in the delay-line version).
It would also lead to there not being a good way to "keep writing into index 1.5 of the table" -- the incoming input samples would be interpolated over zero samples of the table, and so nothing would get written.
Imagine how one would do this with a fixed resampling factor. For example with resampling factor 0.75 (downsampling) you would write 64 * 0.75 = 48 samples into the array for every block of 64 input samples, while incrementing the read index by 1 / 0.75 = 1.3333333. Another example, with resampling factor 1.5 (upsampling) you would write 64 * 1.5 = 96 samples into the array for each block of 64 input samples, while incrementing the read index with 1 / 1.5 = 0.6666666. The perform loop would not iterate over an integer n (= blocksize), but it would just break when the float read index exceeds n. To accommodate for interpolation, and for index increments larger than one, a few samples of fixed delay 'headroom' must be introduced.
This is a good point -- but the problem wouldn't exist if you were writing four samples in the table for every incoming sample.
I'm just not sure in that case if a 4-point cubic interpolation is nearly enough for the kind of upsampling that might need to occur.
In a [tabwrite4~], resampling factor would follow from index increments calculated from float index values received at the inlet. But what to do with large increments, exceeding the delay 'headroom' at the end of the input buffer? And another question: what to do with very small increments, leading to massive amounts of written samples and possibly to cpu overload?
I'm not sure I understand this - I assume you mean "very small increments in the written table." So let's say you're going to try to write a whole 64-sample input block to between indices 10 and 11 of the table. If you're writing 4 samples each time, what you end up with is not cpu overload, but just four samples with possibly a very high amplitude, depending upon the nature of the signal. And actually, if you think about this with regard to the delay line, this would be what would happen if the sound source were moving toward a microphone at or near the speed of sound, so the "very high amplitude" would in effect be a digital "sonic boom."
Matt
On Wed, Jun 13, 2012 at 3:27 PM, Matt Barber brbrofsvl@gmail.com wrote:
I'm not sure I understand this - I assume you mean "very small increments in the written table." So lets say you're going to try to write a whole 64-sample input block to between indices 10 and 11 of the table. If you're writing 4 samples each time, what you end up with is not cpu overload, but just four samples with possibly a very high amplitude, depending upon the nature of the signal. And actually, if you think about this with regard to the delay line, this would be what would happen if the sound source were moving toward a microphone at or near the speed of sound, so the "very high amplitude" would in effect be a digital "sonic boom."
Matt
I think you'll need to apply a scaling factor so that samples that accumulate values over a short interval will not blow up. I've been reading your discussion, and it looks like a really fun math problem.
I'm stuck wrangling servers and cultivating my ulcer for now... but I hope I can find some time to study it with you soon.
Chuck
Ha, finally a detailed discussion on this topic, I like it. My replies are inlined.
On Wed, Jun 13, 2012 at 10:27 PM, Matt Barber brbrofsvl@gmail.com wrote:
Hi, I've been going through the vdelayxw code myself. See comments:
On Wed, Jun 13, 2012 at 12:30 PM, katja katjavetter@gmail.com wrote:
On Sat, Jun 9, 2012 at 5:18 PM, Matt Barber brbrofsvl@gmail.com wrote:
Csound has a variable write delay opcode that would be worth looking at - the csound website has just been flagged by google for having malicious content so I can't link to the manual page, but the opcode is called "vdelayxw."
Unfortunately I can not understand the c code of vdelayxw. There's comments for the obvious things but not for the magic numbers and other tricks. But it may be a method for sinc-interpolated resampling.
It almost certainly is some kind of windowed sinc, and you're right about the magic numbers. I don't think you need to know for sure what the exact interpolation scheme is to make sense of it, though; my understanding of it is as follows:
For both the variable read and variable write delay opcodes in csound, one chooses an interpolation window size - say 32 samples.
Now, let's say we're trying to READ from the delay line at sample index 116.33. So we need to interpolate between sample 116 and 117. Given our 32-point interpolation window, the earliest sample that will have an effect on the interpolation is sample 101, and the last one is sample 132, so to find the correct interpolation we need to sum together all the scaled windowed sincs (or whatever convolution kernel is in the interpolation window) for each of those 32 samples, at index 116.33, which gives us our read value.
The write works rather in reverse: if we want to write a sample at index 116.33, then we need to calculate the windowed sinc (or whatever) for the input sample centered on 116.33, and MIX (not overwrite) those values for samples 101-132 into those samples. What emerges, then, becomes the cumulative effect of having interpolated: imagine the next sample written is at index 118.54 - you're going to mix its function into samples 103-134, and the overlap with the previous action is going to cause the interpolation to "work" once those samples reach the read head.
In that way, a variable write into a delay line is somewhat easier conceptually -- if it's done this way -- than a [tabwrite4~] would be, because the way the table is read is predetermined. Nothing is ever read until all the relevant input samples have had a chance to affect the output in the appropriate way.
On the other hand, think of [tabread4~]: it's interpolation scheme is fixed, no matter what resampling factor. With extreme resampling, aliases may be noticeable. But what the hell, it doesn't sound like the original music anyway, when sped up or down to extremes. That is the difference with an offline resampling job, when the original sound must be preserved insofar the new frequency range allows. In that sense, an interpolation scheme like in [tabread4~] could be used for realtime variable speed writing, leaving the consequences for the user. For example, if you make large jumps through the table, many old samples would simply not be rewritten.
But even with interpolation quality requirements so relaxed, it is not by itself clear how the samples should be written. Using sinc-interpolation, each input sample could be written as many samples of a (eventually phase-shifted) sinc function, with amplitude compensation for the overlap. The interpolation scheme of [tabread4~] however can not calculate four output samples based on one input sample, it could only calculate one output sample based on four input samples.
Two points here. The last thing you said is not actually true -- each interpolation scheme has an associated convolution function, which can be calculated by imagining what the interpolation would look like for a single sample whose value was 1.0 surrounded by zeroes everywhere else. This 4-point piecewise function can be used to write four samples in its immediate vicinity the same way that the sinc does in the csound example.
Meaning, there is also a convolution kernel for linear interpolation? What would it look like? Ah, it would be a simple Dirac delta, but the point is, the kernel can be applied time-shifted with fractional delay, matching the fraction in the index. By the way, this also holds for sinc-interpolated resampling as described by Julius O. Smith: a linear interpolation in the sinc-table to make the result more precise. Interpolating the interpolation kernel...
It seems the bigger question to me is, if you skip somewhere far in the table, you're going to write four samples, and then another four samples somewhere else. Maybe this is OK, but another way to think of what to do would be to imagine the incoming signal as something you're interpolating over the way you would do when reading from a table, in which case a very large index increment if you're writing could be just like a bunch of very small index increments when you're reading. So say you jump ahead 48 samples - one way to do it would be to write ALL 48 samples as an interpolation over the the two input samples.
That would open up some other problems, like how to interpret the difference between jumping back in a table vs "wrapping back around." Not sure how to deal with that at all (this problem doesn't arise in the delay line version of a variable write because what is represented is always a chunk of time rather than an abstract table of numbers to be used for whatever, so there's no real concept of "wraparound" in the delay-line version).
It would also lead to there not being a good way to "keep writing into index 1.5 of the table" -- the incoming input samples would be interpolated over zero samples of the table, and so nothing would get written.
Imagine how one would do this with a fixed resampling factor. For example with resampling factor 0.75 (downsampling) you would write 64 * 0.75 = 48 samples into the array for every block of 64 input samples, while incrementing the read index by 1 / 0.75 = 1.3333333. Another example, with resampling factor 1.5 (upsampling) you would write 64 * 1.5 = 96 samples into the array for each block of 64 input samples, while incrementing the read index with 1 / 1.5 = 0.6666666. The perform loop would not iterate over an integer n (= blocksize), but it would just break when the float read index exceeds n. To accommodate for interpolation, and for index increments larger than one, a few samples of fixed delay 'headroom' must be introduced.
This is a good point -- but the problem wouldn't exist if you were writing four samples in the table for every incoming sample.
An interpolation kernel like the sinc function is zero-phase apart from the fractional time shift, so there is always an amount of delay implied, depending on kernel length. Would it be possible to create a minimum-phase kernel? Theoretically, yes.
I'm just not sure in that case if a 4-point cubic interpolation is nearly enough for the kind of upsampling that might need to occur.
In the case of J.O.S.'s sinc table method, the kernel length could be varied continuously, according to instantaneous resampling factor. The window must be calculated separately.
In a [tabwrite4~], resampling factor would follow from index increments calculated from float index values received at the inlet. But what to do with large increments, exceeding the delay 'headroom' at the end of the input buffer? And another question: what to do with very small increments, leading to massive amounts of written samples and possibly to cpu overload?
I'm not sure I understand this - I assume you mean "very small increments in the written table." So lets say you're going to try to write a whole 64-sample input block to between indices 10 and 11 of the table. If you're writing 4 samples each time, what you end up with is not cpu overload, but just four samples with possibly a very high amplitude, depending upon the nature of the signal. And actually, if you think about this with regard to the delay line, this would be what would happen if the sound source were moving toward a microphone at or near the speed of sound, so the "very high amplitude" would in effect be a digital "sonic boom."
Matt
There should be an (optional) amplitude compensation for up- and downsampling, as an amplitude effect would be inconvenient in the case of a variable-speed sound-on-sound looper.
Katja
Two points here. The last thing you said is not actually true -- each interpolation scheme has an associated convolution function, which can be calculated by imagining what the interpolation would look like for a single sample whose value was 1.0 surrounded by zeroes everywhere else. This 4-point piecewise function can be used to write four samples in its immediate vicinity the same way that the sinc does in the csound example.
Meaning, there is also a convolution kernel for linear interpolation? How would it look like? Ah, it would be a simple dirac delta, but the point is, the kernel can be applied time-shifted with fractional delay, matching the fraction in the index. By the way, this also holds for sinc-interpolated resampling as described by Julius O. Smith: a linear interpolation in the sinc-table to make the result more precise. Interpolating the interpolation kernel...
The kernel for linear interpolation is a linear ramp from 0.0 to 1.0 and then back down to 0.0. This means that if one were to do linear interpolation in the same way the csound opcode applies the sinc, you would center the peak of this ramp at the fractional index into the table, multiply it by the incoming sample, and then write the values of the ramp at the intersection points of the two nearby samples in the table to those samples.
So, say the index is 126.78 and the incoming sample value was -0.45. You'd in effect ramp down from 0.0 to -0.45 over indices 125.78 to 126.78, and then back up to 0.0 from 126.78 to 127.78. As desired, this would intersect with samples at index 126 and 127, and the values would be -0.099 at 126 and -0.351 at 127.
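The same arithmetic as a tiny C sketch (hypothetical name), mixing one sample into the table with the triangular kernel:

#include <math.h>

    /* mix one input sample into the table with linear (triangular) weights;
       with index = 126.78 and sample = -0.45 this adds -0.099 at 126 and
       -0.351 at 127, as in the example above */
static void linear_write(float *table, double index, float sample)
{
    long i = (long)floor(index);
    double frac = index - i;                       /* 0.78 in the example */
    table[i] += sample * (float)(1.0 - frac);      /* -0.45 * 0.22 = -0.099 */
    table[i + 1] += sample * (float)frac;          /* -0.45 * 0.78 = -0.351 */
}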
So, theoretically you could do the same with Pd's cubic lagrangian interpolation kernel, writing 4 points in the table for every incoming sample. This should work fine with a long interpolation kernel, but I foresee a lot of problems with doing this just for 4 points.
Like, imagine you're going to write samples 0, 1, and 2 from the incoming signal to indices 2.3, 45.7, and 89.1. Doing it the way I described with a 4-point interpolator you'd put some values at indices 1-4, 44-47, and 88-91, and leave everything else at 0. Imagine you continued like this for a long time -- you'd have the original signal, each sample of which would be interpolated into 4 points, and each of these little spots would be separated by long strings of zeroes. This isn't going to play back as "the same sound but lower," unless you ran it through a low-pass filter that smoothed this signal over the intervening samples, which I think would be pretty much like using the sinc for interpolating in the first place.
I should mention that any time I've ever heard of moving the write head to make a variable delay, it's always with a big interpolation kernel. I'm going to look at a couple of other places - something like csound's spat3d opcode, or Richard Furse's old program "vspace" would be candidates for getting ideas.
Hope this is all clear, and sorry if it's all obvious; sometimes it helps me to think about things to write out my thoughts.
Matt
Been in touch with P.A. (he's my supervisor at Huddersfield) and he would be delighted to have a Pd version of iPoke~. If we get a posse together, or if someone is happy to take it on, he's more than happy to share the source code with us/you/them/it. There's also a new version (v.3) which hasn't been released yet and we can take it from that.
We/you should decide who's going to take this on and then we/you can get in touch with P.A.
Hope this helps,
Julian
On Thu, Jun 14, 2012 at 9:23 AM, Julian Brooks jbeezez@gmail.com wrote:
Been in touch with P.A. (he's my supervisor at Huddersfield) and he would be delighted to have a Pd version of iPoke~. If we get a posse together, or if someone is happy to take it on, he's more than happy to share the source code with us/you/them/it. There's also a new version (v.3) which hasn't been released yet and we can take it from that.
We/you should decide who's going to take this on and then we/you can get in touch with P.A.
Hi,
This is a very nice offer. I think having the code would be very useful, or even just a description of the approach as it relates to what we've been discussing.
I'm hesitant to take the lead on coding, but I'm very happy to contribute however I can.
M
Hey Matt,
Thanks for chiming in on this...
I must admit that most of the above is operating way above my understanding, but I'm learning (which I like). Not sure what I could do apart from being an initial conduit and making myself available for some stress-testing, assisting with a help-file, etc., if required. Matt - you do seem to have a decent grasp on the problem as well.
Katja - does this interest you at all? You seem to have the skills (slice jockey is the finest looping software in Pd in my opinion) and some interest in these matters. If iPoke~ is of use to our community then I think we should give it a crack. P.A. has very decent coding skills, is supportive of our aims (though has yet to be convinced of its real-world application apart from Pd vanilla) and this would fit nicely with my own passive-aggressive aim to make Hudd music tech more FLOSS/Pd friendly. We also have Alex Harker who has been working on the HISS Tools bundle and is very much a 'fellow-traveller' in these matters, so I'm confident he would pitch-in too.
TBH I'm not sure what would be the best approach here but I'm also willing to do what I can. It's certainly an interesting conundrum.
Cheers,
Julian
On Fri, Jun 15, 2012 at 11:37 AM, Julian Brooks jbeezez@gmail.com wrote:
If iPoke~ is of use to our community then I think we should give it a crack. P.A. has very decent coding skills, is supportive of our aims (though has yet to be convinced of its real-world application apart from Pd vanilla) and this would fit nicely with my own passive-aggressive aim to make Hudd music tech more FLOSS/Pd friendly. We also have Alex Harker who has been working on the HISS Tools bundle and is very much a 'fellow-traveller' in these matters, so I'm confident he would pitch-in too.
A few months ago I had already expressed my interest for the HISS Tools to Alex Harker. The Huddersfield people are doing great stuff, and their intention to release source code for porting to Pd / SC makes me happy. So yes, I would like to join the gang.
Porting [ipoke~] from Max/MSP to Pd should not be too difficult. I'd love to give it a try soon, but in the coming weeks I'll be on a computerless holiday. Anyway, I'll try to follow the discussion in the meantime.
Katja
On Wed, Jun 13, 2012 at 6:14 PM, katja katjavetter@gmail.com wrote:
There should be an (optional) amplitude compensation for up- and downsampling, as an amplitude effect would be inconvenient in the case of a variable-speed sound-on-sound looper.
Katja
I think that a consideration here to justify a scaling effect is to deliver the same rate of power.
I like looking at this problem with sinc functions, because the spectrum becomes easy to see, and the energy is easy to calculate.
The function with sampling rate f_s and unit spectrum from -f_s/2 to f_s/2 is f_s*sinc(t*f_s). This function, when convolved with itself, equals itself.
And if you have f1 < f2, f1*sinc(t*f1) convolved with f2*sinc(t*f2) = f1*sinc(t*f1), which is important for comparing interpolators at different frequencies.
The energy (squared L2 norm) of f_s*sinc(t*f_s) is f_s. Here's the term that grows larger when we increase f_s.
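As a quick check of these identities (using the convention sinc(x) = sin(pi*x)/(pi*x)), in LaTeX notation:

\mathcal{F}\{ f_s \operatorname{sinc}(f_s t) \}(f) = \operatorname{rect}(f/f_s), \qquad \operatorname{rect}(f/f_s)^2 = \operatorname{rect}(f/f_s),

so convolution in time (multiplication of the rectangles in frequency) leaves the function unchanged, and for f_1 < f_2, \operatorname{rect}(f/f_1)\,\operatorname{rect}(f/f_2) = \operatorname{rect}(f/f_1). By Parseval's theorem,

\int_{-\infty}^{\infty} \big( f_s \operatorname{sinc}(f_s t) \big)^2 \, dt = \int_{-f_s/2}^{f_s/2} 1 \, df = f_s,

which is the energy figure quoted above.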
In a given block, you're always writing N samples. Your goal is to write N orthogonal functions that fill all the values in some interval and keep the power normalized during that interval.
I've been thinking about this for some days. I agree there are two fundamentally different approaches (A: deal with each incoming sample independently, for each one adding some sort of filter kernel into the table; or B: advancing systematically through the table, filling each point by interpolating from the input signal).
I think in approach A it's better not to attempt to normalize for speed since there would always be a step where you have to differentiate the location pointer to get a speed value and that's fraught with numerical peril. Plus, we don't know how much the 'user' will know about write pointer speed - perhaps there's a natural way to determine that, in which case the numerically robust way to deal is to have the user take care of it appropriately for the situation.
Anyway, if we're simulating a real moving source (my favorite example being lightning) it's physically correct for the amplitude to go up if the source moves toward the listener, even to the point of generating a sonic boom.
In the other scenario it seems that the result is naturally normalized, in the sense that a signal of value '1' should put all ones in the table (because how else should you interpolate a function whose value is 1 everywhere?)
Choosing (A) for the moment, for me the design goal would be, "if someone writes a signal equal to 1 and sends it to points 0, a, 2a, 3a, ... within some reasonable range of stretch values _a_, would I end up with a constant (which I would suggest should be 1/a) everywhere in the table?" If not you'd hear some sort of _a_-dependent modulation.
I think you have to put a bound on _a_ - if it's allowed to be unbounded there's no fixed-size kernel that will work, and varying the size of the kernel again involves judging the "velocity" _a_ from the incoming data which I argued against already.
cheers Miller
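As a rough numerical check of that design goal -- an illustrative sketch, not code from ipoke~ -- using the two-point triangular kernel: with a = 0.5 the table sums to exactly 1/a = 2 away from the edges, while with a = 1.5 the two-point kernel is too narrow and the a-dependent modulation shows up; a wider, smoother kernel extends the usable range of a, which is where the bound on _a_ comes in.

#include <stdio.h>
#include <math.h>

#define TABSIZE 64

/* write the value 1 at positions 0, a, 2a, ... with a triangular kernel
   and look at a few table values in the middle */
static void test_stretch(double a)
{
    double table[TABSIZE] = {0};
    for (double pos = 0.0; pos < TABSIZE; pos += a) {
        int i0 = (int)floor(pos);
        double frac = pos - i0;
        if (i0 >= 0 && i0 < TABSIZE) table[i0] += 1.0 - frac;
        if (i0 + 1 < TABSIZE) table[i0 + 1] += frac;
    }
    printf("a = %.2f (1/a = %.3f): table[30..33] =", a, 1.0 / a);
    for (int n = 30; n < 34; n++) printf(" %.3f", table[n]);
    printf("\n");
}

int main(void)
{
    test_stretch(0.5);   /* prints 2.000 2.000 2.000 2.000 */
    test_stretch(1.5);   /* prints 1.000 0.500 0.500 1.000 */
    return 0;
}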
On Thu, Jun 14, 2012 at 2:41 PM, Miller Puckette msp@ucsd.edu wrote:
I've been thinking about this for some days. I agree there are two fundamentally different approaches (A: deal with each incoming sample independently, for each one adding some sort of filter kernel into the table; or B: advancing systematically through the table, filling each point by interpolating from the input signal).
I think in approach A it's better not to attempt to normalize for speed since there would always be a step where you have to differentiate the location pointer to get a speed value and that's fraught with numerical peril. Plus, we don't know how much the 'user' will know about write pointer speed - perhaps there's a natural way to determine that, in which case the numerically robust way to deal is to have the user take care of it appropriately for the situation.
Anyway, if we're simulating a real moving source (my favorite example being lightning) it's physically correct for the amplitude to go up if the source moves toward the listener, even to the point of generating a sonic boom.
In the other scenario it seems that the result is naturally normalized, in the sense that a signal of value '1' should put all ones in the table (because how else should you interpolate a function whose value is 1 everywhere?)
Scenario (B) would be naturally normalized, but there are a few difficulties with it. First, what would happen if you didn't move the write pointer? In scenario (A) you get the "sonic boom" (and depending on the signal and the filter kernel this could fluctuate, and you'll get less of an effect further from "ground zero"). With scenario (B) you never write into the table at all, because without an increment you'll never pass over a sample to write (but note, you will write a sample if the index is an integer).
Now, to my mind there are two other things to think about. If someone were to drive the index with white noise, with (A) you're mixing the kernel into the table at random and the result is the emergent effect. It's unclear what (B) should do, though -- first, does a leap backwards from 1024 to 2 interpolate all 1021 intervening samples? If so, then second, does it overwrite those, or mix the result into what's already there?
It seems you would not want it to interpolate over those samples if the table were 1024 samples long and the leap represented a wrap back to the beginning, and I suppose "mixing" vs. "overwriting" could be settable by the user.
Choosing (A) for the moment, for me the design goal would be, "if someone writes a signal equal to 1 and sends it to points 0, a, 2a, 3a, ... within some reasonable range of stretch values _a_, would I end up with a constant (which I would suggest should be 1/a) everywhere in the table?" If not you'd hear some sort of _a_-dependent modulation.
I think you have to put a bound on _a_ - if it's allowed to be unbounded there's no fixed-size kernel that will work, and varying the size of the kernel again involves judging the "velocity" _a_ from the incoming data which I argued against already.
I think this is right, but this brings up another design problem -- most sinc-based filter kernels have a value of 1 at 0 and 0 at all other integers, which usually means that if you were to write directly to integer indices you're writing in single samples rather than a kernel (since the value of the kernel would be 0 at the surrounding places in the table).
Matt
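As a quick check of that degenerate case, here is a small sketch using standard 4-point Lagrange weights as a stand-in (an assumption -- Pd's tabread4~ polynomial is slightly different, but it shares the property that the weights collapse to (0, 1, 0, 0) at fractional offset 0):

#include <stdio.h>

/* weights for 4-point Lagrange interpolation at points -1, 0, 1, 2,
   evaluated at fractional offset f in [0,1) */
static void lagrange4(double f, double w[4])
{
    w[0] = -f * (f - 1.0) * (f - 2.0) / 6.0;
    w[1] = (f + 1.0) * (f - 1.0) * (f - 2.0) / 2.0;
    w[2] = -(f + 1.0) * f * (f - 2.0) / 2.0;
    w[3] = (f + 1.0) * f * (f - 1.0) / 6.0;
}

int main(void)
{
    double w[4];
    lagrange4(0.0, w);   /* 0 1 0 0 -- only one table sample is touched */
    printf("f=0.0: %g %g %g %g\n", w[0], w[1], w[2], w[3]);
    lagrange4(0.5, w);   /* -0.0625 0.5625 0.5625 -0.0625 */
    printf("f=0.5: %g %g %g %g\n", w[0], w[1], w[2], w[3]);
    return 0;
}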
On Thu, Jun 14, 2012 at 8:41 PM, Miller Puckette msp@ucsd.edu wrote:
I've been thinking about this for some days. I agree there are two fundamentally different approaches (A: deal with each incoming sample independently, for each one adding some sort of filter kernel into the table; or B: advancing systematically through the table, filling each point by interpolating from the input signal).
I've been focused on approach B (interpolate from the input signal) for a long time, because it could eventually use the [tabread4~] interpolation scheme, creating a parallel between those processes. But... today I realized why approach B could not work at all for an object which takes float indexes as arguments for writing, like you would expect from [tabwrite4~], [ipoke~] or any variable speed writer: for each perform loop, you get N (= blocksize) signal values and equally many index values, so it would be logical to iterate over the N input samples; in contrast, it would be very complicated to iterate over the output samples and couple these to index values. In fact, it would require yet another interpolation. Approach B would only work fine for an object which has a fixed resampling factor, settable via message. And then, the question of how to synchronize it with a [tabread4~] is still open.
It's weird how puzzling the task of fractional speed writing is, compared to fractional speed reading. Better focus on approach A (adding a fractionally delayed kernel into the array for each input sample). Approach A does not in itself impose a preferred kernel type or length, so different options could be offered to the user, varying in performance and precision aspects. Each kernel length, if it is fixed (and zero-phase), would imply a known delay, so the user can reckon with it. As I see it, calculating the resampling factor for normalization purposes need not be spoiled by numerical disasters, as each difference is found from two consecutive input index values; there is no autonomous cumulative effect. Or did I overlook something? Anyway, I have to reset my brain again for a new focus on [tabwrite4~], [vtabwrite~] or whatever its name could be.
Katja
But... today I realized why approach B could not work at all for an object which takes float indexes as arguments for writing, like you would expect from [tabwrite4~], [ipoke~] or any variable speed writer: for each perform loop, you get N (= blocksize) signal values and equally many index values, so it would be logical to iterate over the N input samples; in contrast, it would be very complicated to iterate over the output samples and couple these to index values. In fact, it would require yet another interpolation. Approach B would only work fine for an object which has a fixed resampling factor, settable via message. And then, the question of how to synchronize it with a [tabread4~] is still open.
If it used the same interpolator as tabread4~, you could in principle do approach B -- you'd need a struct that held on to the last samples of the previous block, and offset it by a sample.
So, let's say you have a blocksize of 4, the first block of incoming signal is [-0.3, 0.4, 0.6, -0.8], and the index block is [0.2, 1.4, 3.0, 5.8]. The way this could work would be to imagine a previous signal block of [0, 0, 0, 0]. Put the "last 0" of that block at index 0.2 and the -0.3 at index 1.4. This crosses sample 1, so you find out where that sample sits as a fraction of the difference between those two indices (in this case 0.66666), use [0, 0, -0.3, 0.4] as the four points for interpolation between 0 and -0.3, writing sample one as though you were reading from a table with those four points at 0.66666 between the 0 and the -0.3 (so far so good?).
Then you put 0.4 at index 3.0. Now your interpolation points are [0, -0.3, 0.4, 0.6] to interpolate between -0.3 and 0.4. Index 2 occurs 0.375 between these samples so you run the interpolation function for that fractional index and write sample at index 2, and then you go ahead and write the 0.4 to index 3.
Finally, you put 0.6 at index 5.8. You're interpolating between 0.4 and 0.6, and the points are [-0.3, 0.4, 0.6, -0.8]. Index 4 occurs 0.357143 between the two samples and index 5 occurs 0.714286 between, so you run the interpolator twice for those fractional indices, write those samples.
Then you save 0.4, 0.6 and -0.8 (the last three samples of the current block of incoming signal), and 5.8 (the last written index) for the next block. When you have the next block you'll have enough info to interpolate between 0.6 and -0.8 from the last block and between -0.8 and the first sample of this one (these steps were actually implied the first time around), and then you're good to go for the next four samples.
If I haven't forgotten a step, the same principle ought to work for any blocksize 4 or larger, and you'd need specialized policies for blocksizes of 1 or 2.
Sorry for the length, but sometimes detailed examples can be helpful to get things straight.
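For what it's worth, a rough C sketch of the per-segment step in that scheme (illustrative only; the 4-point cubic is the tabread4~-style polynomial quoted from memory, so check it against Pd's d_array.c). Here a, b, c, d are the input samples at n-2, n-1, n, n+1, and the segment being filled runs from b (at previdx) to c (at curidx):

#include <math.h>

/* tabread4~-style 4-point cubic, evaluated at frac in [0,1] between b and c */
static double interp4(double a, double b, double c, double d, double frac)
{
    double cminusb = c - b;
    return b + frac * (cminusb - (1.0 / 6.0) * (1.0 - frac) *
        ((d - a - 3.0 * cminusb) * frac + (d + 2.0 * a - 3.0 * b)));
}

/* fill every integer table index crossed between previdx and curidx */
static void write_segment(float *table, int tablesize,
                          double previdx, double curidx,
                          double a, double b, double c, double d)
{
    if (curidx <= previdx) return;       /* this sketch only moves forward */
    int m = (int)ceil(previdx);
    if ((double)m == previdx) m++;       /* previdx itself was handled last time */
    for (; m <= (int)floor(curidx); m++) {
        double frac = (m - previdx) / (curidx - previdx);
        if (m >= 0 && m < tablesize)
            table[m] = (float)interp4(a, b, c, d, frac);
    }
}

Running this over the block above reproduces the fractions in the example: index 1 at 0.66666, index 2 at 0.375, index 3 at frac = 1.0 (which just yields the 0.4), and indices 4 and 5 at 0.357143 and 0.714286.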
It's weird how puzzling the task of fractional speed writing is, compared to fractional speed reading.
If you think that's puzzling, try fractional speed dating.
Better focus on approach A (adding a fractionally delayed kernel into the array for each input sample). Approach A does not in itself impose a preferred kernel type or length, so different options could be offered to the user, varying in performance and precision aspects. Each kernel length, if it is fixed (and zero-phase), would imply a known delay, so the user can reckon with it. As I see it, calculating the resampling factor for normalization purposes need not be spoiled by numerical disasters, as each difference is found from two consecutive input index values; there is no autonomous cumulative effect.
Sometimes first-difference for that differentiation is a little fraught. It's kind of the same issue if you wanted to incorporate antialiasing into [tabread4~] -- you need a policy for calculating the "speed" through the table, and first-difference might not be quite accurate enough.
Matt
After reading through it several times, I think I understand your example, and how this could be expressed and implemented generally:
- iterate over the N input samples and write indexes, where N = blocksize
- for each n, find the integer table indexes lying between the float write indexes stored at [n] and [n+1] (there could be none, one, or more than one integer index in that interval)
- express each of these integer indexes as a fraction of the interval between the two enclosing float indexes
- interpolate, at that fraction and from the surrounding input samples, the sample that must be written
Not sure if I got it right, and if this would give correct results for all cases.
Also, there would be no natural bound on the number of samples written. Imagine a user feeding large random numbers to the write index inlet... There could be a user-settable bound on the resampling factor. For moderate resampling purposes it could be an efficient model. Seems we're getting close to an implementation of [tabwrite4~].
Katja
I'm not sure I understood the whole thread so far... let me back up:
I'm not sure that you want to write samples of a function to the table for each sample you want to write.
You start with two signals (blocks of N): one is the data you want to write, the other is the indexes where you want those values written.
The data you want to write is an evenly spaced signal with spectrum on -1/2 < f < 1/2.
Depending on how closely spaced the indexes are, on one side you get aliasing, and on the other you have more spectrum than you need.
Close together (data written to buffer slower than normal speed): aliasing. The signal that we're writing to the table has fewer points than what's needed to cover the spectrum of the input signal.
If all you did was write 4 points from a constant interpolator, you'd still have the aliasing.
Far apart (writing faster): There's evidently a possibility of a perfect reconstruction because you've got more points than you need. I think that can be done with sinc functions, and then you can choose finite-length interpolator functions as approximations. This is the easy part of the problem, if you just ignore how potentially expensive it is :)
What I see happening is that when you write-then-read with tabwrite4~ / tabread4~, you get back a sampling of the convolution of each of the interpolation functions with the input data. The effect of each sample spreads out to cover 8 samples on the output.
On Thu, Jun 14, 2012 at 2:56 PM, Charles Henry czhenry@gmail.com wrote:
I'm not sure I understood the whole thread so far... let me back up:
I'm not sure that you want to write samples of a function to the table for each sample you want to write.
You start with two signals (blocks of N): one is the data you want to write, the other is the indexes where you want those values written.
The data you want to write is an evenly spaced signal with spectrum on -1/2 < f < 1/2.
Depending on how closely spaced the indexes are, on one side you get aliasing, and on the other you have more spectrum than you need.
Close together (data written to buffer slower than normal speed): aliasing. The signal that we're writing to the table has fewer points than what's needed to cover the spectrum of the input signal.
If all you did was write 4 points from a constant interpolator, you'd still have the aliasing.
Right -- but this is no different from [tabread4~] and the rest -- you'll have aliasing when downsampling on read just as much as on write. On write, though, you can pre-filter the incoming signal if you want to reduce aliasing -- just treat it as a normal oversampled signal that you're going to decimate.
Far apart (writing faster): There's evidently a possibility of a perfect reconstruction because you've got more points than you need. I think that can be done with sinc functions, and then you can choose finite-length interpolator functions as approximations. This is the easy part of the problem, if you just ignore how potentially expensive it is :)
What I see happening is that when you write-then-read with tabwrite4~ / tabread4~, you get back a sampling of the convolution of each of the interpolation functions with the input data. The effect of each sample spreads out to cover 8 samples on the output.
The "usual use" would be to write the table with interpolation and then read straight through it without interpolation when it's time to read. But really, there's nothing preventing anyone from sending the output of, say, [vd~] into another delay with [vd~] -- you just have to know what you're getting into if you're going to use it this way.
Matt