At least we all agree that there's a mismatch between the docs and the actual behaviour.
that's a start
> In my opinion, being able to use [delwrite~] and [delread~] at different
> blocksizes is a nice feature, so what about a nice little warning in the docs that you have to care about the buffer size if you're using different blocksizes?
Even leaving aside that I fail to see how one thing prevents the other, my point is that you need to care about the buffer size ALL OF THE TIME... and that's never good.
And from my point of view, the fix is just so simple: *add a bunch of extra samples to the delay buffer* and handle that internally.
> Of course Miller could add some complicated mechanism for [delwrite~] to
> keep track of all the block sizes of its [delread~] objects.
That doesn't seem crazy at all: get them all, check their block sizes, keep the largest one, work it out internally, voilà... all the worries are over. Not too complicated, just simple, elegant coding. You're treating this as mission impossible when it looks quite trivial to me.
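Just to sketch what I mean (this is not Pd's actual source; the struct and function names below are made up for illustration), the write side only has to walk its readers, find the largest block size among them, and allocate that many extra samples on top of whatever the delay time in ms works out to:

#include <stdlib.h>

/* hypothetical data structures, just to show the idea */
typedef struct _reader {
    int blocksize;              /* block size of one [delread~] */
    struct _reader *next;
} t_reader;

typedef struct _writer {
    float *buf;                 /* the delay line (NULL before first call) */
    int nsamps;                 /* allocated length in samples */
    t_reader *readers;          /* attached [delread~] objects */
} t_writer;

/* (re)size the delay line, padding it by the largest reader block size */
void writer_update(t_writer *w, int requested_samps)
{
    int maxvs = 0;
    t_reader *r;
    for (r = w->readers; r; r = r->next)     /* get them all...     */
        if (r->blocksize > maxvs)
            maxvs = r->blocksize;            /* ...keep the largest */
    w->nsamps = requested_samps + maxvs;     /* pad internally      */
    w->buf = (float *)realloc(w->buf, w->nsamps * sizeof(float));
}

That's the whole trick: the user asks for N ms, and the object quietly allocates N ms plus one worst-case block.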
> ...but to me the simplest solution is updating the docs and stating:
> "max. delay time = buffer size - block size of [delread~]"
Might be "simpler", but the craziest, and also the laziest, turning the user and patching experience into a nightmare. You're asking us to check all the block sizes and do the math ourselves and then convert it to ms and then insert it into a delwrite~ object... why live so hard?
cheers