Hello, Martin Peach wrote:
Frank Barknecht wrote:
I already wrapped every attempt to append to x->x_outat in a big "if" clause that checks whether MAXOUTAT has been reached. This works and keeps Pd from crashing. However, somehow it doesn't feel right to just truncate the incoming message, even when posting a big warning. OTOH I suppose that declaring *x_outat with a fixed size was done for speed reasons: having to allocate memory every time a message comes in would be very slow.
But isn't the memory allocated only when the object is created and reused for each OSC message?
Yep, that's the way it is now: on object creation, space for 50 atoms is allocated and then reused - unfortunately without a bounds check yet.
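Just to make it concrete, the guard I mean looks roughly like this. The struct is cut down and "x_atc" is a made-up name for the atom counter; only x_outat and MAXOUTAT are the real names:

#include "m_pd.h"

#define MAXOUTAT 50

typedef struct _dumpOSC        /* simplified; the real struct has more fields */
{
    t_object x_obj;
    t_atom x_outat[MAXOUTAT];  /* fixed-size output buffer */
    int x_atc;                 /* made-up name: atoms used so far */
} t_dumpOSC;

/* append one float atom, refusing (and warning) once MAXOUTAT is hit */
static int dumpOSC_append_float(t_dumpOSC *x, t_float f)
{
    if (x->x_atc >= MAXOUTAT)
    {
        post("dumpOSC: packet too long, truncated at %d atoms", MAXOUTAT);
        return 0;              /* caller should stop appending */
    }
    SETFLOAT(&x->x_outat[x->x_atc++], f);
    return 1;
}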
From the OSC spec: "The underlying network that delivers an OSC packet is responsible for delivering both the contents and the size to the OSC application."
That's what I was referring to with "allocate memory every time a message comes in": we could get the size from the OSC packet and then allocate enough atom space in the Pd object according to the packet length. However, this would mean allocating and possibly freeing memory all the time, that is, with every incoming message. Or not?
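Or we only ever grow the buffer, then the common case costs nothing and only a packet bigger than anything seen before reallocates. A rough sketch using Pd's resizebytes(); "x_outatsize" and the pointer version of x_outat are made up, and x_outat is assumed to have been getbytes()'d at creation:

#include "m_pd.h"

typedef struct _dumpOSC    /* simplified; x_outat heap-allocated here */
{
    t_object x_obj;
    t_atom *x_outat;       /* grow-only buffer, getbytes()'d at creation */
    int x_outatsize;       /* made-up name: current capacity in atoms */
} t_dumpOSC;

/* reallocate only when an incoming packet needs more atoms than
   anything seen before, so most messages cause no allocation at all */
static void dumpOSC_ensure_room(t_dumpOSC *x, int natoms)
{
    if (natoms > x->x_outatsize)
    {
        x->x_outat = (t_atom *)resizebytes(x->x_outat,
            x->x_outatsize * sizeof(t_atom),
            natoms * sizeof(t_atom));
        x->x_outatsize = natoms;
    }
}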
For UDP the data length field in the header is 16 bits and the header itself is 8 bytes, so at most 65527 bytes can be used for an OSC packet, and each element in the OSC packet is 4 bytes long, mapping onto a single t_atom (not counting the path and typetags). So with MAXOUTAT 16382 there would be no problem and no need for a bounds check. About 64KB would be no big memory hog for most people...
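Spelled out (just back-of-the-envelope defines, not the actual dumpOSC header):

/* back-of-the-envelope for the UDP case */
#define UDP_MAX_DATAGRAM 65535                    /* 16-bit length field */
#define UDP_HEADER_SIZE  8
#define UDP_MAX_PAYLOAD  (UDP_MAX_DATAGRAM - UDP_HEADER_SIZE)  /* 65527 */
#define OSC_ELEMENT_SIZE 4                        /* one t_atom per element */
/* round up to a whole element: (65527 + 3) / 4 = 16382 atoms */
#define MAXOUTAT ((UDP_MAX_PAYLOAD + OSC_ELEMENT_SIZE - 1) / OSC_ELEMENT_SIZE)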
Especially since you normally run only a handful of dumpOSC objects - often it will be only a single one.
But for TCP, where each OSC packet is framed with a 32-bit byte-count header, the message could be up to 4,294,967,295 bytes long :(
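So for TCP the only sane thing is probably an arbitrary cap on the announced length before allocating anything; "osc_tcp_length_ok" and the 1MB cutoff below are made up:

#include <stdint.h>

/* hypothetical sanity check: never trust a 32-bit length prefix from
   the network; refuse anything bigger than we're willing to buffer */
#define OSC_TCP_MAX_PACKET (1 << 20)   /* 1MB, arbitrary cutoff */

static int osc_tcp_length_ok(uint32_t announced)
{
    return announced <= OSC_TCP_MAX_PACKET;
}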
Ugh. ;)
Ciao