This would be like the 'mc' objects in Max, right? That's awesome!
btw, I see merging season has started. I already updated some documentation files to include the merged changes, so please consider merging that soon as well. https://github.com/pure-data/pure-data/pull/1594
cheers
On Fri, Sep 2, 2022 at 08:03, Antoine Rousseau antoine@metalu.net wrote:
Sorry, forget it: "frames" actually corresponds to the comment "/* number of points in each channel */".
On Fri, Sep 2, 2022 at 10:55, Antoine Rousseau antoine@metalu.net wrote:
probably "s_length" might be called "s_frames"
I'm not sure about that: in many APIs the word "frame" means one "multi-channel sample", e.g. 2 samples for a stereo stream.
On Fri, Sep 2, 2022 at 09:36, IOhannes m zmoelnig zmoelnig@iem.at wrote:
On 9/2/22 01:00, Christof Ressi wrote:
Hi Miller,
this sounds great! First-class multi-channel support would be a real game changer.
yes. that would be so cool!
typedef struct _signal
{
    int s_n;            /* *TOTAL* number of points in the array */
    t_sample *s_vec;    /* the array */
    t_float s_sr;       /* *TOTAL* samples per second */
    [...]
    t_float s_rate;     /* sample rate */
    int s_length;       /* number of points in each channel */
    int s_nchans;       /* number of channels */
    int s_overlap;      /* number of times each sample will appear */
}
Personally, I would keep s_n as the number of samples /per channel/. The total number of samples is simply s_n * s_nchans. Existing externals - that do not know about s_nchans - would effectively operate on the first channel and ignore the rest.
i think the idea is that with "s_n = s_nchans * s_length" existing externals would automatically process *all* channels.
that's nice if the external does not do any delays or the like (as it would automatically become multi-channel aware), but not so nice if it *does* things in the time domain (as there would be weird cross-talk between the channels).
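to make the cross-talk point concrete, here is a minimal sketch (not code from any real external; t_mydelay and its fields are invented for illustration) of a classic single-channel delay perform routine. if the n handed to it is s_nchans * s_length, the delay line simply keeps running across channel boundaries:

#include "m_pd.h"

/* hypothetical old-style (single-channel) delay external;
   t_mydelay, x_buf, x_bufsize, x_pos are invented for illustration */
typedef struct _mydelay
{
    t_object x_obj;
    t_sample *x_buf;   /* circular delay buffer */
    int x_bufsize;
    int x_pos;
} t_mydelay;

static t_int *mydelay_perform(t_int *w)
{
    t_mydelay *x = (t_mydelay *)(w[1]);
    t_sample *in = (t_sample *)(w[2]);
    t_sample *out = (t_sample *)(w[3]);
    int n = (int)(w[4]);    /* whatever the dsp method passed as block size */
    t_sample delayed;
    int i;

    for (i = 0; i < n; i++)
    {
        delayed = x->x_buf[x->x_pos];
        x->x_buf[x->x_pos] = in[i];
        out[i] = delayed;
        if (++x->x_pos >= x->x_bufsize)
            x->x_pos = 0;
        /* if n == s_nchans * s_length, this loop walks straight from the
           end of channel 0 into the start of channel 1, so the delay
           buffer mixes material from different channels: the cross-talk
           described above */
    }
    return (w + 5);
}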
i'm not favouring either of the two approaches, just wanted to point out their differences.
i somewhat agree with christof's implication, that it's probably best to not have redundant data in the struct.
- 's_n = s_nchans * s_length' (or 's_totalsamples = s_nchans * s_n')
- 's_sr = s_rate * s_overlap * s_nchans'
(my issue being that with redundancy it's more likely to have inconsistent data; what if the struct says 's_n = 128; s_nchans = 3; s_length = 1024'?)
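purely illustrative (signal_isconsistent is not a proposed function, and it assumes the struct fields quoted above): every place that touches the struct would have to maintain an invariant like

/* hypothetical sanity check for the redundant fields discussed above */
static int signal_isconsistent(const t_signal *sig)
{
    return sig->s_n == sig->s_nchans * sig->s_length
        && sig->s_sr == sig->s_rate * sig->s_overlap * sig->s_nchans;
}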
apart from that: probably "s_length" might be called "s_frames" as this seems to be the less ambiguous term.
and i would personally prefer "s_samplerate" and "s_channels". that would make for an easy distinction: the abbreviated names "s_n" and "s_sr" are the convoluted ones, whereas the long names have the data you'd expect.
Newer multi-channel-aware externals, on the other hand, may use all the channels.
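To illustrate that distinction, a rough sketch (the t_mcgain object is invented, and it assumes the variant where s_n stays per channel, channels laid out back to back in s_vec, with the whole t_signal passed to the perform routine) of a multichannel-aware perform loop:

#include "m_pd.h"

/* hypothetical multichannel-aware gain object; t_mcgain/x_gain are invented */
typedef struct _mcgain
{
    t_object x_obj;
    t_float x_gain;
} t_mcgain;

static t_int *mcgain_perform(t_int *w)
{
    t_mcgain *x = (t_mcgain *)(w[1]);
    t_signal *in = (t_signal *)(w[2]);
    t_signal *out = (t_signal *)(w[3]);
    int c, i;

    /* an old external that only gets s_vec and s_n would stop after the
       first s_n samples, i.e. after channel 0 */
    for (c = 0; c < in->s_nchans; c++)
    {
        t_sample *ivec = in->s_vec + c * in->s_n;
        t_sample *ovec = out->s_vec + c * out->s_n;
        for (i = 0; i < in->s_n; i++)
            ovec[i] = x->x_gain * ivec[i];
    }
    return (w + 4);
}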
I also think that DSP objects would need a new API method to create multi-channel /outputs/. The general idea is that the /input/ channel counts are taken from upstream, but the /output/ channel counts are specified by the object and passed downstream. (There might be objects where input and output channel count differ; any kind of merger/splitter/mixer object comes to mind.)
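For illustration only, a sketch of such a dsp method for a hypothetical down-mixer object; the name signal_setmultiout and its signature are made up here to stand in for whatever the new API call would actually look like:

#include "m_pd.h"

/* imagined prototype for the new "set my output channel count" call
   (does not exist yet): */
void signal_setmultiout(t_signal **sig, int nchans);

/* hypothetical down-mixer with one multichannel inlet and one multichannel
   outlet; t_mymix and x_outchans are invented */
typedef struct _mymix
{
    t_object x_obj;
    int x_outchans;     /* channel count this object wants on its outlet */
} t_mymix;

static void mymix_dsp(t_mymix *x, t_signal **sp)
{
    /* the input channel count sp[0]->s_nchans is dictated by whatever is
       connected upstream; the output channel count is decided by the
       object itself and handed downstream via the imagined call: */
    signal_setmultiout(&sp[1], x->x_outchans);

    /* a perform routine would then map sp[0]->s_nchans input channels
       onto x->x_outchans output channels; dsp_add() as usual */
}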
+1
vgmasdrf
IOhannes

_______________________________________________
Pd-dev mailing list
Pd-dev@lists.iem.at
https://lists.puredata.info/listinfo/pd-dev