My guess for your last question:
No overlap: [tabsend~] is now at blocksize 128 and [tabreceive~] at blocksize 64. Every other block, [tabsend~] is still busy filling the rest of the table while [tabreceive~] is reading the last 64 samples a second time (since the table hasn't been updated yet). So you always get a single 64-sample block twice, while every other 64-sample block is lost.
With overlap 2: the output is the original signal, because the update rate of [tabsend~] is now twice as fast and therefore matches [tabreceive~].
Is there a prize? :-)
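Here's a quick Python sketch of how I picture the scheduling in that scenario ([tabsend~] inside the block-128 subpatch, [tabreceive~] in the block-64 parent). It's only a toy model of the block timing, not the real Pd internals, and it assumes the parent's [tabreceive~] happens to run after the subpatch within each tick:

BP, BS = 64, 128   # parent block size, subpatch block size

def run(overlap, ticks=6):
    depot = [0.0] * BS         # the shared table
    window = [0.0] * BS        # what [inlet~] has collected for the subpatch
    out = []
    for tick in range(ticks):
        # the parent produces one 64-sample input block (a ramp, for clarity)
        x = [float(tick * BP + i) for i in range(BP)]
        # [inlet~] slides the new block into the subpatch's 128-sample window
        window = window[BP:] + x
        # no overlap: the subpatch fires every 2nd parent tick; overlap 2: every tick
        fires = True if overlap == 2 else (tick % 2 == 1)
        if fires:
            depot = list(window)       # [tabsend~ depot] writes all 128 samples
        out.append(depot[:BP])         # [tabreceive~ depot] reads the first 64
    return out

for ov in (1, 2):
    print("overlap", ov)
    for blk in run(ov):
        print("  output block runs from sample", blk[0], "to", blk[-1])

With no overlap it prints each surviving 64-sample block twice and skips every other one; with overlap 2 every block shows up exactly once (one block late in this toy model).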
It would allow you to do things like partitioned convolution without any delay, since the convolution of two 64-sample windows fills a 128-sample window.
Sounds more like the classic overlap-add method. Can you explain more?
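(Side note, since the length claim in the quoted line is easy to check: the linear convolution of two 64-sample blocks has 64 + 64 - 1 = 127 samples, so it does fit inside one 128-sample window. A quick Python check:)

import random

def conv(a, b):
    # plain linear convolution: output length is len(a) + len(b) - 1
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

a = [random.random() for _ in range(64)]
b = [random.random() for _ in range(64)]
print(len(conv(a, b)))    # 127, i.e. <= 128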
Sent: Saturday, 27 February 2016 at 06:01
From: "Matt Barber" brbrofsvl@gmail.com
To: "Christof Ressi" christof.ressi@gmx.at
Cc: "Alexandre Torres Porres" porres@gmail.com, "i go bananas" hard.off@gmail.com, "pd-list@lists.iem.at" pd-list@lists.iem.at
Subject: Re: Re: [PD] s~ & r~ with block size other than 64?
You have to be careful reblocking with [tabsend~] and [tabreceive~], though, because of what happens with blocking and block delay. Hopefully this isn't too obvious to explain.

You know the regular situation: suppose you write into the [inlet~] of a subpatch that is blocked at 128 from a parent blocked at 64, and then back out an [outlet~] into the parent patch. When you start DSP, for the first parent block the first 64 samples go in, but nothing comes out, because the subpatch needs to collect 128 samples before it sends anything out. On the second parent block, 64 more samples go in, the subpatch can do its calculations on its 128-sample vector(s) and start output immediately, beginning with the first block of input from the parent patch. So everything is delayed by one block in this case, or in general by N_s - N_p, where N_s is the subpatch's block size and N_p is the parent's.

Now, suppose instead you have an array of size 128 called "depot". From the block-64 parent you [tabsend~] a signal to depot, and you make sure your signal is calculated prior to anything in the subpatch using the [inlet~] trick. [tabsend~ depot] will write the first 64 samples of depot every block, leaving the last 64 untouched. Then inside the block-128 subpatch you [tabreceive~ depot] and send it out to the parent through an [outlet~]. What will happen? When you start DSP, during the parent's first block [tabsend~ depot] writes the first block of samples to depot. Nothing happens in the subpatch because 128 samples haven't passed yet. Then on the parent's second block, [tabsend~ depot] writes the second block of samples to the first 64 samples of depot. 128 samples have passed, so the subpatch can do its thing. [tabreceive~ depot] receives the whole array, starting with the 64 samples just written in by the second parent block, so on output those 64 samples come out with no block delay. However, since the first parent block's samples were overwritten in depot by the second block's samples, every other block from the parent will be lost in the subpatch.

However, if you set the subpatch to overlap by 2 (or generally by N_s/N_p), the [tabsend~]/[tabreceive~] pair actually allows you to reblock with no block delay and no lost samples, but with the CPU penalty and the general hassle of dealing with overlapping. It would allow you to do things like partitioned convolution without any delay, since the convolution of two 64-sample windows fills a 128-sample window.

So, knowing this, what do you think would happen if you put the [tabsend~] in the subpatch and the [tabreceive~] in the parent and don't overlap in the subpatch? What if you do overlap in the subpatch?

NB: overlapping does not affect the block delay of normal [inlet~]/[outlet~] reblocking.

I now realize I should have just built a patch to illustrate all this. Next time. :)

Matt

On Fri, Feb 26, 2016 at 1:49 PM, Christof Ressi christof.ressi@gmx.at wrote:

Thanks Matt for digging in!
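A rough Python sketch of the scenario Matt walks through above ([tabsend~ depot] in the block-64 parent, [tabreceive~ depot] in the block-128 subpatch). It only models which parent blocks reach the subpatch's [tabreceive~] and when; it doesn't model [outlet~] or the overlap-add on the way back out, and it assumes the ordering forced with the [inlet~] trick:

BP, BS = 64, 128   # parent block size, subpatch block size

def run(overlap, ticks=6):
    depot = [0.0] * BS                  # the 128-sample "depot" array
    seen = []                           # first sample of depot each time the subpatch fires
    for tick in range(ticks):
        x = [float(tick * BP + i) for i in range(BP)]   # one parent input block
        depot[:BP] = x                  # [tabsend~ depot] writes only the first 64
        # no overlap: the subpatch fires every 2nd parent tick; overlap 2: every tick
        fires = True if overlap == 2 else (tick % 2 == 1)
        if fires:
            # [tabreceive~ depot] hands the whole array to the subpatch; its first
            # 64 samples are the block the parent wrote *this* tick (no block delay),
            # but without overlap the block written on the tick in between was
            # overwritten before anything read it
            seen.append(depot[0])
    return seen

for ov in (1, 2):
    print("overlap", ov, "-> subpatch sees blocks starting at samples", run(ov))

Without overlap the subpatch only ever sees every other parent block (though the ones it does see arrive with no block delay); with overlap 2 it sees all of them.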
In principle it wouldn't be too hard to let them be any block size so long as they're the same size,
What puzzles me is that I *can* actually send audio from one subpatch and receive it in different subpatches for blocksizes greater (but not less) than 64, but only if all the blocksizes match and - this is really weird - there's no more than 1 [r~] per subpatch. I guess you'd call that an "unsupported feature" :-p. I don't use it, however, and I wouldn't recommend that other people use it. So let's keep it a secret.
After all, we have [tabsend~] and [tabreceive~]. I was just curious about the technical details.
Sent: Friday, 26 February 2016 at 17:48
From: "Matt Barber" <brbrofsvl@gmail.com>
To: "Christof Ressi" <christof.ressi@gmx.at>
Cc: "Alexandre Torres Porres" <porres@gmail.com>, "i go bananas" <hard.off@gmail.com>, "pd-list@lists.iem.at" <pd-list@lists.iem.at>
Subject: Re: [PD] s~ & r~ with block size other than 64?
Here's the short story: [s~] and [r~] are pretty straightforward: [s~] fills a block buffer every sample, and any [r~] with the same name can find that buffer and read from it. In principle it wouldn't be too hard to let them be any block size so long as they're the same size, but there would be some tricky things with overlap and resampling. [catch~] reads from a one-block buffer and zeroes it out as it goes, and [throw~] sums into its catcher's buffer. [delwrite~]/[delread~] work with any block size because the buffer size isn't related to any block size.

On Fri, Feb 26, 2016 at 11:23 AM, Christof Ressi <christof.ressi@gmx.at> wrote:

I think he rather meant that [s~] and [r~] don't need to check the vector size for each DSP cycle. The error message you're talking about is only thrown after creating [s~] or [r~] objects in a subpatch with blocksize != 64 AND every time you set a "forbidden" blocksize dynamically with a message to [block~], so it *could* be that the check is only performed for such events and not for each DSP cycle. Although getting an error message for dynamically changing the blocksize rather implies a check for each DSP cycle... But I'm only making assumptions. Apart from possible performance optimizations I can't see any reason for this restriction either!
BTW: It's not like a pair of [s~] and [r~] won't generally work for blocksizes other than 64. It basically works as expected when used as "wireless audio connections" (at least in the situations I tried), but things get screwed up once you try feedback or if the blocksizes don't match. Again, it would be really cool if someone could clarify what's really going on under the hood (e.g. how [s~] and [r~] differ from [delwrite~] and [delread~]) or point to an already existing thread in the mailing list archive.
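For what it's worth, here's a schematic Python sketch of the three buffering schemes as Matt describes them above; it only illustrates the idea (one block buffer per name, summing vs. copying, delay-line length independent of block size), not Pd's actual C code:

BLOCK = 64   # one signal block

class SendReceive:
    # [s~]/[r~]: one block-sized buffer per send name; any [r~] with that
    # name just reads whatever the [s~] last wrote into it
    def __init__(self):
        self.buf = [0.0] * BLOCK
    def send(self, block):             # [s~] copies its input block in
        self.buf[:] = block
    def receive(self):                 # [r~] copies the same block out
        return list(self.buf)

class ThrowCatch:
    # [throw~]/[catch~]: throwers sum into the catcher's buffer,
    # the catcher outputs it and zeroes it as it goes
    def __init__(self):
        self.buf = [0.0] * BLOCK
    def throw(self, block):
        for i, v in enumerate(block):
            self.buf[i] += v
    def catch(self):
        out, self.buf = self.buf, [0.0] * BLOCK
        return out

class DelayLine:
    # [delwrite~]/[delread~]: a circular buffer whose length comes from the
    # delay time, not from any block size, so mismatched block sizes don't matter
    def __init__(self, length):
        self.buf = [0.0] * length
        self.pos = 0
    def write(self, block):            # [delwrite~] appends sample by sample
        for v in block:
            self.buf[self.pos] = v
            self.pos = (self.pos + 1) % len(self.buf)
    def read(self, delay, n):          # [delread~]: n samples, 'delay' samples back
        start = (self.pos - delay - n) % len(self.buf)
        return [self.buf[(start + i) % len(self.buf)] for i in range(n)]

sr = SendReceive()
sr.send([1.0] * BLOCK)
print(sr.receive()[0])                 # 1.0

tc = ThrowCatch()
tc.throw([1.0] * BLOCK)
tc.throw([1.0] * BLOCK)
print(tc.catch()[0])                   # 2.0 -- two throwers summed

dl = DelayLine(44100)                  # e.g. one second at 44.1 kHz
dl.write([2.0] * BLOCK)
print(dl.read(0, BLOCK)[0])            # 2.0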
Sent: Friday, 26 February 2016 at 07:08
From: "Alexandre Torres Porres" <porres@gmail.com>
To: "i go bananas" <hard.off@gmail.com>
Cc: "pd-list@lists.iem.at" <pd-list@lists.iem.at>
Subject: Re: [PD] s~ & r~ with block size other than 64?
Really? I can't see how it'd be meaningfully more efficient, and it kind of does check it already, hence the errors. Cheers

2016-02-26 3:07 GMT-03:00 i go bananas <hard.off@gmail.com>:

I would assume it's also slightly more efficient that Pd doesn't have to check the vector size when processing the [s~] / [r~] functions.
Pd-list@lists.iem.at mailing list
UNSUBSCRIBE and account-management -> http://lists.puredata.info/listinfo/pd-list