hi martin and the list
if i send [send 11, send 22, send 33, send 44, send 55( to [tcpclient] the [tcpserver] prints
receive: 11 22 33 44 55
actually i expected to get 5 messages back. is this a bug or am i wrong?
thanks eni
Enrique Erne wrote:
hi martin and the list
if i send [send 11, send 22, send 33, send 44, send 55( to [tcpclient] the [tcpserver] prints
receive: 11 22 33 44 55
actually i expected to get 5 messages back. is this a bug or am i wrong?
It's unanticipated behaviour I guess... it also happens if you send the values as separate messages with a single bang. If you put a chain of messages separated by [delay]s you can find the shortest delay that sends them all separately. I find that I need a delay of at least 8ms between messages for them to be transmitted separately. It's up to the implementation of TCP in the machine to decide when to send a packet. Probably UDP would send them all individually.
Martin
On Sun, 2008-09-07 at 12:23 -0400, Martin Peach wrote:
Enrique Erne wrote:
hi martin and the list
if i send [send 11, send 22, send 33, send 44, send 55( to [tcpclient] the [tcpserver] prints
receive: 11 22 33 44 55
actually i expected to get 5 messages back. is this a bug or am i wrong?
It's unanticipated behaviour I guess... it also happens if you send the values as separate messages with a single bang. If you put a chain of messages separated by [delay]s you can find the shortest delay that sends them all separately. I find that I need a delay of at least 8ms between messages for them to be transmitted separately. It's up to the implementation of TCP in the machine to decide when to send a packet. Probably UDP would send them all individually.
yo.. i would say that it is not only unanticipated behaviour, but a bug, either in [tcpclient]/[tcpserver] or [unpackOSC]. from what i understand, it is a misconception in [unpackOSC].
please correct me if i interpret things the wrong way, but from my tests [tcp*] and [unpackOSC] don't work 'well' together. [tcp*] seem to not know anything about messages, they simply treat any incoming data as a stream (without any concept of delimiters). depending on how quickly you send messages to the sending object, the receiving object makes one or more pd messages out of it (this most likely happens on the tcp layer, on which pd/[tcp*] presumably has no influence). [unpackOSC], on the other hand, is 'stream agnostic': it only accepts input as messages with the correct number of elements. messages that are longer than one OSC packet are truncated to exactly one OSC packet, while the rest is silently ignored. if one message to [unpackOSC] is too short, [unpackOSC] reports an error:
unpackOSC: packet size (19) not a multiple of 4 bytes: dropping packet
from what i can tell, [tcp*] and [unpackOSC] are incompatible, since the former are 'message agnostic' and the latter is 'stream agnostic'.
if i am right about this, i really hope that it could be fixed in some way. my suggestion would be to change [unpackOSC] in a way so that it treats incoming messages as a stream (in other words: it would completely disregard message boundaries and always give an output as soon as an OSC packet is completed).
this is no problem as long as you use [unpackOSC] together with [udp*], since then you would expect some messages to drop. but when going the tcp route, you'd expect completeness on the receiving side. currently it is not possible to rely on this, as this little test shows:
[send /test 1, send /best 2( | [packOSC] | [tcpclient]
[tcpserver] | [unpackOSC] | [print]
it prints:
/test 1
roman
Roman Haefeli wrote:
On Sun, 2008-09-07 at 12:23 -0400, Martin Peach wrote:
Enrique Erne wrote:
hi martin and the list
if i send [send 11, send 22, send 33, send 44, send 55( to [tcpclient] the [tcpserver] prints
receive: 11 22 33 44 55
actually i expected to get 5 messages back. is this a bug or am i wrong?
It's unanticipated behaviour I guess... it also happens if you send the values as separate messages with a single bang. If you put a chain of messages separated by [delay]s you can find the shortest delay that sends them all separately. I find that I need a delay of at least 8ms between messages for them to be transmitted separately. It's up to the implementation of TCP in the machine to decide when to send a packet. Probably UDP would send them all individually.
yo.. i would say that it is not only unanticipated behaviour, but a bug, either in [tcpclient]/[tcpserver] or [unpackOSC]. from what i understand, it is a misconception in [unpackOSC].
IMHO it's not a bug. It's the way TCP is supposed to work. If more data arrives before the previous data has been sent, it all gets sent together.
Pd sends all its messages in between audio blocks, so a comma-separated list of messages will all go at the same time.
There is an option to set TCP_NODELAY on the socket, which I tried today, but it has no real effect on WinXP at least, probably because the data is arriving too fast from Pd for it to be separated into packets.
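For illustration, setting that option at the C level on a POSIX socket looks roughly like the sketch below; the socket descriptor used inside [tcpclient] is internal to the external and not exposed to the patch, so this is only a sketch of the mechanism, not the actual mrpeach code.

#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/* Disable Nagle's algorithm on an already-created TCP socket.
   Note this only stops the kernel from coalescing *separate* writes;
   it cannot split up data that was handed over in a single write() call,
   which is what happens with a comma-separated Pd message. */
static int set_tcp_nodelay(int sockfd)
{
    int on = 1;
    return setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on));
}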
please correct me if i interpret things the wrong way, but from my tests [tcp*] and [unpackOSC] don't work 'well' together. [tcp*] seem to not know anything about messages, they simply treat any incoming data as a stream (without any concept of delimiters). depending on how quickly you send messages to the sending object, the receiving object makes one or more pd messages out of it (this most likely happens on the tcp layer, on which pd/[tcp*] presumably has no influence).
Sounds correct to me. UDP sends atomic 'datagrams' of up to a maximum size, but TCP can send messages of arbitrary length by splitting them into packets and reassembling them at the other end, so there is no way of knowing when the data is finished unless it is encoded in the data.
[unpackOSC], on the other hand, is 'stream agnostic': it only accepts input as messages with the correct number of elements. messages that are longer than one OSC packet are truncated to exactly one OSC packet, while the rest is silently ignored. if one message to [unpackOSC] is too short, [unpackOSC] reports an error:
unpackOSC: packet size (19) not a multiple of 4 bytes: dropping packet
from what i can tell, [tcp*] and [unpackOSC] are incompatible, since the former are 'message agnostic' and the latter is 'stream agnostic'.
if i am right about this, i really hope that it could be fixed in some way. my suggestion would be to change [unpackOSC] in a way so that it treats incoming messages as a stream (in other words: it would completely disregard message boundaries and always give an output as soon as an OSC packet is completed).
Yes, [unpackOSC] should probably check to see if more OSC packets are present in its buffer after it has processed the first packet. A problem arises if there is only a partial packet there, but that shouldn't happen unless the whole thing is too long.
this is no problem as long as you use [unpackOSC] together with [udp*], since then you would expect some messages to drop. but when going the tcp route, you'd expect completeness on the receiving side. currently it is not possible to rely on this, as this little test shows:
[send /test 1, send /best 2( | [packOSC] | [tcpclient]
[tcpserver] | [unpackOSC] | [print]
it prints:
/test 1
Yes, and [unpackOSC] has no way of knowing if it is getting data from UDP or TCP so it should probably assume the worst and go for TCP. In fact, to be unbreakably robust it should assume it is getting input one byte at a time and not output anything until either an entire OSC packet has been received or the packet is not valid OSC.
Martin
On Sun, 2008-09-07 at 18:58 -0400, Martin Peach wrote:
Yes, and [unpackOSC] has no way of knowing if it is getting data from UDP or TCP so it should probably assume the worst and go for TCP. In fact, to be unbreakably robust it should assume it is getting input one byte at a time and not output anything until either an entire OSC packet has been received or the packet is not valid OSC.
this is how i would like [unpackOSC] to behave. i don't see any other way to do OSC over tcp.
roman
Roman Haefeli wrote:
Martin Peach wrote:
Yes, and [unpackOSC] has no way of knowing if it is getting data from UDP or TCP so it should probably assume the worst and go for TCP. In fact, to be unbreakably robust it should assume it is getting input one byte at a time and not output anything until either an entire OSC packet has been received or the packet is not valid OSC.
this is how i would like [unpackOSC] to behave. i don't see any other way to do OSC over tcp.
I think opening a bundle and putting all the simultaneous messages in it, then closing the bundle and sending it, will work over tcp.
In practical terms a byte-wise [unpackOSC] would have to copy incoming bytes into a buffer and repeatedly attempt to parse them as an OSC message until the entire message had been received. The overhead of doing this would waste a lot of cpu.
If a tcp packet always contains an integral number of OSC packets it's a little easier, we just have to check for more packets in the buffer.
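A minimal C sketch of that easier case, looping over a buffer that is assumed to hold an integral number of OSC packets; both helper functions are hypothetical stand-ins (osc_packet_length() would have to derive the length by parsing the address, type tags and arguments, or the bundle element sizes), not the actual mrpeach code.

#include <stddef.h>

/* hypothetical helper: byte length of the first complete OSC packet in buf,
   or -1 if buf does not start with a parseable packet */
extern int osc_packet_length(const unsigned char *buf, size_t len);

/* stand-in for the existing single-packet parser (name assumed) */
extern void unpackOSC_parse_one(const unsigned char *buf, size_t len);

static void parse_all(const unsigned char *buf, size_t len)
{
    while (len > 0) {
        int n = osc_packet_length(buf, len);
        if (n <= 0 || (size_t)n > len)
            break;  /* malformed or truncated: stop (or buffer the remainder) */
        unpackOSC_parse_one(buf, (size_t)n);
        buf += n;
        len -= (size_t)n;
    }
}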
I would try the bundle approach first...
Martin
On Mon, 2008-09-08 at 16:18 +0000, Martin Peach wrote:
Roman Haefeli wrote:
Martin Peach wrote:
Yes, and [unpackOSC] has no way of knowing if it is getting data from UDP or TCP so it should probably assume the worst and go for TCP. In fact, to be unbreakably robust it should assume it is getting input one byte at a time and not output anything until either an entire OSC packet has been received or the packet is not valid OSC.
this is how i would like [unpackOSC] to behave. i don't see any other way to do OSC over tcp.
I think opening a bundle and putting all the simultaneous messages in it, then closing the bundle and sending it, will work over tcp.
In practical terms a byte-wise [unpackOSC] would have to copy incoming bytes into a buffer and repeatedly attempt to parse them as an OSC message until the entire message had been received. The overhead of doing this would waste a lot of cpu.
If a tcp packet always contains an integral number of OSC packets it's a little easier, we just have to check for more packets in the buffer.
I would try the bundle approach first...
ok.. the bundle approach works, even if messages are not sent in zero logical time, which is good. as a workaround i will use this. however, i still consider it a workaround, because i need to make an assumption about what interval is needed so that the underlying layer (tcp) creates separate packets instead of concatenating them together. i don't consider having to make this assumption a proper solution at all, because this time value may differ between operating systems (between every computer, even?). so basically, it's luck whether this is going to work or not. i don't have any influence on it as a user.
while we are at it: i don't see any reason for [tcp*] to pack the incoming tcp stream into lists of numbers, since the message format doesn't tell you anything about how it was packaged on the sender side. wouldn't it be 'more correct' if [tcpclient]/[tcpreceive]/[tcpserver] dripped each byte instead of putting them into lists?
roman
Martin Peach wrote:
Roman Haefeli wrote:
Martin Peach wrote:
Yes, and [unpackOSC] has no way of knowing if it is getting data from UDP or TCP so it should probably assume the worst and go for TCP. In fact, to be unbreakably robust it should assume it is getting input one byte at a time and not output anything until either an entire OSC packet has been received or the packet is not valid OSC.
this is how i would like [unpackOSC] to behave. i don't see any other way to do OSC over tcp.
I think opening a bundle and putting all the simultaneous messages in it, then closing the bundle and sending it, will work over tcp.
i did some tests sending a file (over localhost). the limit seems to be at 65536. my test file is 49474 bytes, but i usually received 2 packets, rarely one single packet with 49474 bytes, sometimes even 3 packets.
i guess this behaves the same for normal send messages (not whole files). it could be a real problem, since we can't really tell if a bundle is complete.
eni
Enrique Erne wrote:
i did some tests sending a file (over localhost). the limit seems to be at 65536. my test file is 49474 bytes, but i usually received 2 packets, rarely one single packet with 49474 bytes, sometimes even 3 packets.
i guess this behaves the same for normal send messages (not whole files). it could be a real problem, since we can't really tell if a bundle is complete.
65536 is the largest packet you can send with tcp or udp because the size field in the IP header is 16 bits wide. I think if you need to send that much in a packet you need to think of another way to do it, for instance send the entire file via ftp.
Martin
On Mon, 2008-09-08 at 15:19 -0400, Enrique Erne wrote:
Martin Peach wrote:
Roman Haefeli wrote:
Martin Peach wrote:
Yes, and [unpackOSC] has no way of knowing if it is getting data from UDP or TCP so it should probably assume the worst and go for TCP. In fact, to be unbreakably robust it should assume it is getting input one byte at a time and not output anything until either an entire OSC packet has been received or the packet is not valid OSC.
this is how i would like [unpackOSC] to behave. i don't see any other way to do OSC over tcp.
I think opening a bundle and putting all the simultaneous messages in it, then closing the bundle and sending it, will work over tcp.
i did some tests sending a file (over localhost). the limit seems to be at 65536. my test file is 49474 bytes, but i usually received 2 packets, rarely one single packet with 49474 bytes, sometimes even 3 packets.
i guess this behaves the same for normal send messages (not whole files). it could be a real problem, since we can't really tell if a bundle is complete.
i did a similar test with OSC bundles. it turned out that you cannot rely on the packet arriving as one single list on the receiver side. if it was split into two or more tcp packets during tcp transport, the OSC bundle arrives broken.
now i am even more convinced that any solution based on tcp packets instead of a tcp stream is not going to work.
roman
Roman Haefeli wrote:
i did a similar test with OSC bundles. it turned out that you cannot rely on the packet arriving as one single list on the receiver side. if it was split into two or more tcp packets during tcp transport, the OSC bundle arrives broken.
now i am even more convinced that any solution based on tcp packets instead of a tcp stream is not going to work.
I still don't know what you are trying to do so I can't say. If you are sending huge amounts of data constantly you really shouldn't be using OSC at all. You could make your own protocol that sends the length of the packet as the first three or four or even more bytes and then use the list objects to chop the stream into pieces. But Pd is really inefficient for this because it's going to convert every single one of your billion bytes into a symbol and put it in memory somewhere.
Martin
On Mon, 2008-09-08 at 20:41 +0000, Martin Peach wrote:
Roman Haefeli wrote:
i did a similar test with OSC bundles. it turned out that you cannot rely on the packet arriving as one single list on the receiver side. if it was split into two or more tcp packets during tcp transport, the OSC bundle arrives broken.
now i am even more convinced that any solution based on tcp packets instead of a tcp stream is not going to work.
I still don't know what you are trying to do so I can't say. If you are sending huge amounts of data constantly you really shouldn't be using OSC at all.
all i need is a _robust_ OSC over TCP implementation. the bandwidth used is not high on average, but i cannot make any assumptions about peaks; it could well be that 2000 packets are sent at the same time. the idea is to port netpd to OSC. i don't think that it is generally a bad idea. however, it will only work if it can use the benefits of tcp, which are: making sure that all data arrives intact and in the right order. netpd would break (not in all circumstances, but in many) if some packets were dropped. currently i don't see how i can make a robust OSC/TCP connection with pd.
You could make your own protocol that sends the length of the packet as the first three or four or even more bytes and then use the list objects to chop the stream into pieces. But Pd is really inefficient for this because it's going to convert every single one of your billion bytes into a symbol and put it in memory somewhere.
of course, i could make my own protocol, but the whole point of using OSC _is_ to use OSC.
i asked #networking and they all agreed that it is up to the application layer to define packet length (if necessary at all). the application shouldn't rely on tcp packetization in _any_ case. but this is what [unpackOSC] currently does. the only fix i can see is making [unpackOSC] stream-based.
roman
Roman Haefeli wrote:
all i need is a _robust_ OSC over TCP implementation. the bandwidth used is not high on average, but i cannot make any assumptions about peaks; it could well be that 2000 packets are sent at the same time.
If you send 2000 packets from one machine to another wouldn't that be construed as a denial of service attack? Surely you can compress the information into fewer packets, say by using a single OSC message to send an array of values. Unless you mean 2000 machines are sending single packets to one machine, in which case the packets will all be separate, there's no problem apart from overloading the cpu for a few seconds.
Martin
On Mon, 2008-09-08 at 21:26 +0000, Martin Peach wrote:
Roman Haefeli wrote:
all i need is a _robust_ OSC over TCP implementation. the bandwidth used is not high on average, but i cannot make any assumptions about peaks; it could well be that 2000 packets are sent at the same time.
If you send 2000 packets from one machine to another wouldn't that be construed as a denial of service attack?
no. would it be, if you uploaded a 1 gig file to a file server? (which is basically the same from tcp's perspective)
Surely you can compress the information into fewer packets, say by using a single OSC message to send an array of values.
in some cases yes, but in this particular case it is not possible. if a new client connects to netpd, it requests a state dump from another client, and this other client sends the dump immediately. i don't know how to compress state dumps from an arbitrary number of instruments with an arbitrary number of parameters, an arbitrary message format and an arbitrary number of elements. but this is just one arbitrary example. i generally don't think that i am trying to use OSC in a weird manner.
Unless you mean 2000 machines are sending single packets to one machine, in which case the packets will all be separate, there's no problem apart from overloading the cpu for a few seconds.
in netpd everything is possible. there is an arbitrary number of [tcpclient]s connected to one [tcpserver], while the patch with [tcpserver] is acting as an OSC proxy, that forwards incoming OSC packets to one or more clients. see more details on http://www.netpd.org/server
it used to work well with FUDI (besides the monthly netpd-server crashes), since [netserver] and co don't seem to rely on tcp packetization, but use a delimiter ';\n' to separate packets. however, the same thing doesn't work when i switch to OSC. i can only repeat myself: all the trouble comes from [unpackOSC]'s inability to create packets on its own. any pd project based on OSC/TCP will potentially suffer from this problem; it is not just a consequence of some (assumed) strange way of using OSC. if [unpackOSC] is meant to be used only under _certain_ circumstances, i think it should be documented accordingly.
roman
On Tue, 2008-09-09 at 00:03 +0200, Roman Haefeli wrote:
On Mon, 2008-09-08 at 21:26 +0000, Martin Peach wrote:
Roman Haefeli wrote:
all i need is a _robust_ OSC over TCP implementation. the bandwidth used is not high on average, but i cannot make any assumptions about peaks; it could well be that 2000 packets are sent at the same time.
If you send 2000 packets from one machine to another wouldn't that be construed as a denial of service attack?
no. would it be, if you uploaded a 1 gig file to a file server? (which is basically the same from tcp's perspective)
Surely you can compress the information into fewer packets, say by using a single OSC message to send an array of values.
in some cases yes, but in this particular case it is not possible. if a new client connects to netpd, it requests a state dump from another client, and this other client sends the dump immediately. i don't know how to compress state dumps from an arbitrary number of instruments with an arbitrary number of parameters, an arbitrary message format and an arbitrary number of elements. but this is just one arbitrary example. i generally don't think that i am trying to use OSC in a weird manner.
Unless you mean 2000 machines are sending single packets to one machine, in which case the packets will all be separate, there's no problem apart from overloading the cpu for a few seconds.
in netpd everything is possible. there is an arbitrary number of [tcpclient]s connected to one [tcpserver], while the patch with [tcpserver] is acting as an OSC proxy, that forwards incoming OSC packets to one or more clients. see more details on http://www.netpd.org/server
it used to work well with FUDI (besides the monthly netpd-server crashes), since [netserver] and co don't seem to rely on tcp packetization, but use a delimiter ';\n' to separate packets. however, the same thing doesn't work when i switch to OSC. i can only repeat myself: all the trouble comes from [unpackOSC]'s inability to create packets on its own. any pd project based on OSC/TCP will potentially suffer from this problem; it is not just a consequence of some (assumed) strange way of using OSC. if [unpackOSC] is meant to be used only under _certain_ circumstances, i think it should be documented accordingly.
sorry, i guess i was sounding harsh.
i understand that you don't want to change [unpackOSC] if it means completely rewriting it. it's just sad that netpd cannot make use of it as it is now. it would have been a big step forward.
roman
Roman Haefeli wrote:
On Mon, 2008-09-08 at 21:26 +0000, Martin Peach wrote:
Roman Haefeli wrote:
all i need is a _robust_ OSC over TCP implementation. the bandwidth used is not high on average, but i cannot make any assumptions about peaks; it could well be that 2000 packets are sent at the same time.
If you send 2000 packets from one machine to another wouldn't that be construed as a denial of service attack?
no. would it be, if you uploaded a 1 gig file to a file server? (which is basically the same from tcp's perspective)
Oh. I thought you meant 2000 packets at the same time, which would imply 2000 threads running at once.
Surely you can compress the information into fewer packets, say by using a single OSC message to send an array of values.
in some cases yes, but in this particular case it is not possible. if a new client connects to netpd, it requests a state dump from another client, and this other client sends the dump immediately. i don't know how to compress state dumps from an arbitrary number of instruments with an arbitrary number of parameters, an arbitrary message format and an arbitrary number of elements. but this is just one arbitrary example. i generally don't think that i am trying to use OSC in a weird manner.
Unless you mean 2000 machines are sending single packets to one machine, in which case the packets will all be separate, there's no problem apart from overloading the cpu for a few seconds.
in netpd everything is possible. there is an arbitrary number of [tcpclient]s connected to one [tcpserver], while the patch with [tcpserver] is acting as an OSC proxy, that forwards incoming OSC packets to one or more clients. see more details on http://www.netpd.org/server
It sounds like it could crash the same way stock exchange software sometimes crashes when there is too much trading volume. The model relies on an inexhaustible pool of memory and cpu time.
it used to work well with FUDI (besides the monthly netpd-server crashes), since [netserver] and co don't seem to rely on tcp packetization, but use a delimiter ';\n' to separate packets. however, the same thing doesn't work when i switch to OSC. i can only repeat myself: all the trouble comes from [unpackOSC]'s inability to create packets on its own. any pd project based on OSC/TCP will potentially suffer from this problem; it is not just a consequence of some (assumed) strange way of using OSC. if [unpackOSC] is meant to be used only under _certain_ circumstances, i think it should be documented accordingly.
The idea is to expand it to fit circumstances as they arise. I hadn't really tried OSC over TCP, so I wasn't aware of this problem. I agree it needs fixing, I'm just not sure of the best way at the moment. It could be simpler to prefix the OSC packet with its length. Then a list abstraction could reassemble incoming lists and send them to [unpackOSC] one OSC packet per list.
Martin
Martin Peach wrote:
The idea is to expand it to fit circumstances as they arise. I hadn't really tried OSC over TCP, so I wasn't aware of this problem. I agree it needs fixing, I'm just not sure of the best way at the moment. It could be simpler to prefix the OSC packet with its length.
That's indeed what is recommended by the OSC specification for stream-based protocols:
http://www.nabble.com/Questions-wrt--OSC-implementation-details.-td1109673.h...
But this makes it more complicated: [packOSC] and [unpackOSC] would need to know whether the data is being sent to, or received from, a packet-based protocol or a stream-based protocol, in order to know whether to prefix the length or not.
Claude Heiland-Allen wrote:
Martin Peach wrote:
The idea is to expand it to fit circumstances as they arise. I hadn't really tried OSC over TCP, so I wasn't aware of this problem. I agree it needs fixing, I'm just not sure of the best way at the moment. It could be simpler to prefix the OSC packet with its length.
That's indeed what is recommended by the OSC specification for stream-based protocols:
http://www.nabble.com/Questions-wrt--OSC-implementation-details.-td1109673.h...
Yes, the spec says: "In a stream-based protocol such as TCP, the stream should begin with an int32 giving the size of the first packet, followed by the contents of the first packet, followed by the size of the second packet, etc." (http://archive.cnmat.berkeley.edu/OpenSoundControl/OSC-spec.html)
But this makes it more complicated: [packOSC] and [unpackOSC] would need to know whether the data is being sent to, or received from, a packet-based protocol or a stream-based protocol, in order to know whether to prefix the length or not.
The length could be added and removed in between, at the expense of a bunch of list objects (which would work right now), or else [packOSC] and [unpackOSC] could have creation arguments to specify the use of a length prefix (which will take a little while to implement). Both are easier than rewriting [unpackOSC] to work with unknown packet lengths, which seems silly since the length _is_ known at the sender and the overhead of sending four more bytes is minimal.
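As an illustration only, here is a rough C sketch of that length-prefix framing; the fixed buffer size and the deliver() callback are assumptions of this sketch, not part of the existing [packOSC]/[unpackOSC] code.

#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* sender side: prepend a big-endian int32 byte count to one OSC packet */
static size_t frame_packet(const unsigned char *pkt, uint32_t len, unsigned char *out)
{
    out[0] = (len >> 24) & 0xff;
    out[1] = (len >> 16) & 0xff;
    out[2] = (len >> 8) & 0xff;
    out[3] = len & 0xff;
    memcpy(out + 4, pkt, len);
    return (size_t)len + 4;
}

/* receiver side: accumulate stream data, pass each frame on once it is complete */
static unsigned char inbuf[65536];  /* fixed buffer; overflow handling omitted in this sketch */
static size_t infill = 0;

extern void deliver(const unsigned char *pkt, uint32_t len);  /* e.g. hand the bytes to [unpackOSC] */

static void on_bytes(const unsigned char *data, size_t n)
{
    memcpy(inbuf + infill, data, n);
    infill += n;
    for (;;) {
        if (infill < 4) return;  /* length prefix not complete yet */
        uint32_t len = ((uint32_t)inbuf[0] << 24) | ((uint32_t)inbuf[1] << 16)
                     | ((uint32_t)inbuf[2] << 8) | (uint32_t)inbuf[3];
        if (infill < (size_t)len + 4) return;  /* frame not complete yet */
        deliver(inbuf + 4, len);
        memmove(inbuf, inbuf + 4 + len, infill - 4 - len);
        infill -= (size_t)len + 4;
    }
}

A patch-level equivalent would do the same thing with list objects: prepend the four length bytes on the way out, and accumulate and strip them on the way in.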
Martin
On Tue, 2008-09-09 at 08:41 -0400, Martin Peach wrote:
Claude Heiland-Allen wrote:
Martin Peach wrote:
The idea is to expand it to fit circumstances as they arise. I hadn't really tried OSC over TCP, so I wasn't aware of this problem. I agree it needs fixing, I'm just not sure of the best way at the moment. It could be simpler to prefix the OSC packet with its length.
That's indeed what is recommended by the OSC specification for stream-based protocols:
http://www.nabble.com/Questions-wrt--OSC-implementation-details.-td1109673.h...
Yes, the spec says: "In a stream-based protocol such as TCP, the stream should begin with an int32 giving the size of the first packet, followed by the contents of the first packet, followed by the size of the second packet, etc." (http://archive.cnmat.berkeley.edu/OpenSoundControl/OSC-spec.html)
thanks for pointing this out, claude and martin. this makes the whole story a _lot_ easier.
But this makes it more complicated: [packOSC] and [unpackOSC] would need to know whether the data is being sent to, or received from, a packet-based protocol or a stream-based protocol, in order to know whether to prefix the length or not.
The length could be added and removed in between, at the expense of a bunch of list objects (which would work right now), or else [packOSC] and [unpackOSC] could have creation arguments to specify the use of a length prefix (which will take a little while to implement). Both are easier than rewriting [unpackOSC] to work with unknown packet lengths, which seems silly since the length _is_ known at the sender and the overhead of sending four more bytes is minimal.
agreed. i am very happy with any simple solution as long as it meets the OSC specification. i missed that the OSC specs mention transport over stream-based protocols separately. as you said, it is even simple enough that it could be implemented in plain pd. i even think now that [packOSC]/[unpackOSC] should stay untouched, because according to the specs they do exactly what they are supposed to do (encoding / decoding OSC packets), while the frame length seems to be something additional that is only needed for stream-based transport. shouldn't that be considered another layer?
i think i am going to do the frame length part myself. for me the problem is solved (i don't need anyone to change any code). thanks!
roman
On Tue, 2008-09-09 at 09:17 -0400, Martin Peach wrote:
Roman Haefeli wrote:
i think i am going to do the frame length part myself. for me the problem is solved (i don't need anyone to change any code). thanks!
Great! Let us know what you come up with, it should be added to the help files for [packOSC] and [unpackOSC].
Martin
yo.. i made some abstractions based on [packOSC] and [unpackOSC], called [packOSCstream] and [unpackOSCstream].
[packOSCstream] prepends the length of each OSC packet/bundle as an int32 (please: someone needs to confirm that the format used is actually int32).
[unpackOSCstream] separates OSC packets/bundles according to the length given in the frame header created by [packOSCstream].
get it from: http://romanhaefeli.net/software/pd/OSCstream.tar.gz
i think especially [unpackOSCstream] needs thorough testing. in the few tests i made, it was working well. however, not having a delimiter but having to rely on counting bytes seems a bit dangerous to me, since even a single missed byte would cause [unpackOSCstream] to fail completely. i wasn't able to trigger that, though. if TCP is considered robust (and i assume it should be considered this way), it is unlikely to happen.
potential issues:
- when using [tcpserver]/[tcpreceive], the whole system fails if one of the clients sends something other than an OSC packet.
- when several clients send simultaneously to one [tcpserver]/[tcpreceive], [unpackOSCstream] might fail if the incoming tcp packets are not exactly one or a multiple of OSC packets long. i didn't experience it yet, but since TCP is a stream-based protocol, it should be assumed that TCP doesn't respect OSC frame borders at all. however, this is only a theoretical problem so far, since i never experienced TCP delivering fractured OSC packets.
the only solution for a robust OSC transport i can think of is that, on the receiver side, every client gets its own [unpackOSCstream], which is an ugly solution.
probably someone else is able to come up with a better idea.
roman
Claude Heiland-Allen wrote:
Martin Peach wrote:
The idea is to expand it to fit circumstances as they arise. I hadn't really tried OSC over TCP, so I wasn't aware of this problem. I agree it needs fixing, I'm just not sure of the best way at the moment. It could be simpler to prefix the OSC packet with its length.
That's indeed what is recommended by the OSC specification for stream-based protocols:
http://www.nabble.com/Questions-wrt--OSC-implementation-details.-td1109673.h...
But this makes it more complicated: [packOSC] and [unpackOSC] would need to know whether the data is being sent to, or received from, a packet-based protocol or a stream-based protocol, in order to know whether to prefix the length or not.
i don't think that [packOSC] and [unpackOSC] should handle these. the OSC-specs say: "The underlying network that delivers an OSC packet is responsible for delivering both the contents and the size to the OSC application." [packOSC]/[unpackOSC] could be seen as the "OSC application" part, whereas [tcpsend]/[tcpreceive] could be seen as the "underlying network" part (or part thereof).
however, i _do not_ propose to add the length prefix to the [tcpreceive]/... part, as this will make it less useable in any other context.
personally i think it is a design flaw in OSC to handle stream-based transmission differently from packet-based transmission. claiming that the "underlying network" has to take care of it is in contradiction to the claim of being "transport layer independent" (but then, the specs don't explicitly say so, and i am probably just making that up myself; one could also argue that OSC does not build directly on top of the transport layer and that there should be something in between OSC and the transport layer)
i think the sending of "plain" OSC messages should probably have been unsupported from the beginning, perhaps with a stripped-down "bundle" that just carries the message length.
but anyhow, OSC has been around for quite some time; it's most likely useless to hope for a change.
as (imho) the re-packaging should be part of neither [unpackOSC] nor [tcpreceive], i would suggest using intermediate objects that handle the re-packaging of streams. for now, zexy's [repack] should be able to do it. in the long run: shouldn't Pd's [list]-family contain an atom-accumulator of this kind?
fmga,sdr IOhannes
On Tue, 2008-09-09 at 14:56 +0200, IOhannes m zmoelnig wrote:
Claude Heiland-Allen wrote:
Martin Peach wrote:
The idea is to expand it to fit circumstances as they arise. I hadn't really tried OSC over TCP, so I wasn't aware of this problem. I agree it needs fixing, I'm just not sure of the best way at the moment. It could be simpler to prefix the OSC packet with its length.
That's indeed what is recommended by the OSC specification for stream-based protocols:
http://www.nabble.com/Questions-wrt--OSC-implementation-details.-td1109673.h...
But this makes it more complicated: [packOSC] and [unpackOSC] would need to know whether the data is being sent to, or received from, a packet-based protocol or a stream-based protocol, in order to know whether to prefix the length or not.
i don't think that [packOSC] and [unpackOSC] should handle these. the OSC-specs say: "The underlying network that delivers an OSC packet is responsible for delivering both the contents and the size to the OSC application." [packOSC]/[unpackOSC] could be seen as the "OSC application" part, whereas [tcpsend]/[tcpreceive] could be seen as the "underlying network" part (or part thereof).
however, i _do not_ propose to add the length prefix to the [tcpreceive]/... part, as this will make it less useable in any other context.
personally i think it is a design flaw in OSC to handle stream-based transmission differently from packet-based transmission. claiming that the "underlying network" has to take care of it is in contradiction to the claim of being "transport layer independent" (but then, the specs don't explicitly say so, and i am probably just making that up myself; one could also argue that OSC does not build directly on top of the transport layer and that there should be something in between OSC and the transport layer)
i think the sending of "plain" OSC messages should probably have been unsupported from the beginning, perhaps with a stripped-down "bundle" that just carries the message length.
but anyhow, OSC has been around for quite some time; it's most likely useless to hope for a change.
as (imho) the re-packaging should be part of neither [unpackOSC] nor [tcpreceive], i would suggest using intermediate objects that handle the re-packaging of streams. for now, zexy's [repack] should be able to do it. in the long run: shouldn't Pd's [list]-family contain an atom-accumulator of this kind?
why not do it in plain pd? because it would be too much processing overhead?
as you guys mentioned, the OSC specs propose using an int32 for the frame length. however, since pd doesn't have an integer type, not all of the four bytes can be used. otoh, this probably isn't too bad, because packets with more than 16777215 bytes are quite unlikely. is that something that needs to be thought about, or shouldn't one care?
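For illustration, the 16777215 figure comes from the 24-bit significand of a single-precision float (the type Pd uses for numbers): consecutive integers are only represented exactly up to 2^24. A quick C check of where exactness stops:

#include <stdio.h>

int main(void)
{
    float a = 16777215.0f;  /* 2^24 - 1: representable exactly */
    float b = 16777216.0f;  /* 2^24:     still exact */
    float c = 16777217.0f;  /* 2^24 + 1: rounds to 16777216 */
    printf("%.1f %.1f %.1f\n", a, b, c);  /* prints 16777215.0 16777216.0 16777216.0 */
    return 0;
}

Note that this only limits a length computed as a single Pd number; the four individual prefix bytes (each 0-255) are unaffected.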
roman
Roman Haefeli wrote:
please correct me if i interpret things the wrong way, but from my tests [tcp*] and [unpackOSC] don't work 'well' together. [tcp*] seem to not know anything about messages, they simply treat any incoming data as a stream (without any concept of delimiters). depending on how quickly you send messages to the sending object, the receiving object makes one or more pd messages out of it (this most likely happens on the tcp layer, on which pd/[tcp*] presumably has no influence). [unpackOSC], on the other hand, is 'stream agnostic': it only accepts input as messages with the correct number of elements. messages that are longer than one OSC packet are truncated to exactly one OSC packet, while the rest is silently ignored. if one message to [unpackOSC] is too short, [unpackOSC] reports an error:
unpackOSC: packet size (19) not a multiple of 4 bytes: dropping packet
from what i can tell, [tcp*] and [unpackOSC] are incompatible, since the former are 'message agnostic' and the latter is 'stream agnostic'.
if i am right about this, i really hope that it could be fixed in some way. my suggestion would be to change [unpackOSC] in a way so that it treats incoming messages as a stream (in other words: it would completely disregard message boundaries and always give an output as soon as an OSC packet is completed).
this is no problem as long as you use [unpackOSC] together with [udp*], since then you would expect some messages to drop. but when going the tcp route, you'd expect completeness on the receiving side. currently it is not possible to rely on this, as this little test shows:
[send /test 1, send /best 2( | [packOSC] | [tcpclient]
[tcpserver] | [unpackOSC] | [print]
it prints:
/test 1
Actually the proper way to deal with sending multiple OSC messages at the same time is to put them into a bundle. Then [unpackOSC] will unpack them and output them in sequence.
Martin
On Mon, 2008-09-08 at 09:08 -0400, Martin Peach wrote:
Roman Haefeli wrote:
please correct me if i interpret things the wrong way, but from my tests [tcp*] and [unpackOSC] don't work 'well' together. [tcp*] seem to not know anything about messages, they simply treat any incoming data as a stream (without any concept of delimiters). depending on how quickly you send messages to the sending object, the receiving object makes one or more pd messages out of it (this most likely happens on the tcp layer, on which pd/[tcp*] presumably has no influence). [unpackOSC], on the other hand, is 'stream agnostic': it only accepts input as messages with the correct number of elements. messages that are longer than one OSC packet are truncated to exactly one OSC packet, while the rest is silently ignored. if one message to [unpackOSC] is too short, [unpackOSC] reports an error:
unpackOSC: packet size (19) not a multiple of 4 bytes: dropping packet
from what i can tell, [tcp*] and [unpackOSC] are incompatible, since the former are 'message agnostic' and the latter is 'stream agnostic'.
if i am right about this, i really hope that it could be fixed in some way. my suggestion would be to change [unpackOSC] in a way so that it treats incoming messages as a stream (in other words: it would completely disregard message boundaries and always give an output as soon as an OSC packet is completed).
this is no problem as long as you use [unpackOSC] together with [udp*], since then you would expect some messages to drop. but when going the tcp route, you'd expect completeness on the receiving side. currently it is not possible to rely on this, as this little test shows:
[send /test 1, send /best 2( | [packOSC] | [tcpclient]
[tcpserver] | [unpackOSC] | [print]
it prints:
/test 1
Actually the proper way to deal with sending multiple OSC messages at the same time is to put them into a bundle. Then [unpackOSC] will unpack them and output them in sequence.
yeah, i agree. but what about messages that follow each other by less than 8ms and more than 0ms? there is no solution for that. from a user perspective, you don't always know beforehand at what time intervals OSC packets are going to be sent. to cover every case, the only proper solution i can think of is that [unpackOSC] works stream-based and not message-based.
may i ask if there are plans to change [unpackOSC]? the reason i ask is that i am currently stuck with the development of netpd because of this. if it stays as it is, i'll bury my plans to switch netpd to OSC.
roman
Roman Haefeli wrote:
may i ask if there are plans to change [unpackOSC]? the reason i ask is that i am currently stuck with the development of netpd because of this. if it stays as it is, i'll bury my plans to switch netpd to OSC.
Yes, I'm working on it. I tried a few approaches yesterday that didn't work. I think if the input to [unpackOSC] only contains complete packets it's possible. A version that takes one byte at a time calls for a serious rewrite.
Martin
On Mon, 2008-09-08 at 16:28 +0000, Martin Peach wrote:
Roman Haefeli wrote:
may i ask if there are plans to change [unpackOSC]? the reason i ask is that i am currently stuck with the development of netpd because of this. if it stays as it is, i'll bury my plans to switch netpd to OSC.
Yes, I'm working on it. I tried a few approaches yesterday that didn't work. I think if the input to [unpackOSC] only contains complete packets it's possible. A version that takes one byte at a time calls for a serious rewrite.
yo.. you probably know more about tcp than i do. is it a robust approach to assume that tcp always delivers at least complete packets, or multiples of complete packets?
if there were an easy way to tell when an OSC packet starts and when it ends, i would do the packet framing on the receiver side myself (in pd). but it seems that one needs to parse a lot of the info in the OSC packet, and it seems that this is not so straightforward to do.
actually, because of the same 'you cannot tell how tcp forms packets' problem, another netpd-server i wrote based on [tcpserver] doesn't work correctly. because it transports FUDI messages, i can solve the problem by making a [FUDI_packet_former] that waits for a 59 10 sequence (;\n) in the stream. however, this shows that the problem is not only related to OSC, but is a general issue with [tcp*]: you have to serialize lists anyway, so i wonder whether it would be harmful if [tcp*] did that already.
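For illustration, a C sketch of that delimiter approach: accumulate the incoming bytes and cut out a FUDI message each time the 59 10 (';' '\n') terminator appears. The fixed buffer and the deliver_fudi() callback are assumptions of this sketch, not existing code.

#include <string.h>
#include <stddef.h>

static unsigned char fbuf[65536];  /* fixed buffer; overflow handling omitted in this sketch */
static size_t ffill = 0;

extern void deliver_fudi(const unsigned char *msg, size_t len);  /* hypothetical callback */

static void on_fudi_bytes(const unsigned char *data, size_t n)
{
    memcpy(fbuf + ffill, data, n);
    ffill += n;
    size_t start = 0;
    for (size_t i = 0; i + 1 < ffill; i++) {
        if (fbuf[i] == 59 && fbuf[i + 1] == 10) {       /* ';' followed by '\n' */
            deliver_fudi(fbuf + start, i + 2 - start);  /* one message incl. terminator */
            start = i + 2;
            i++;                                        /* skip past the '\n' */
        }
    }
    if (start > 0) {                                    /* keep any trailing partial message */
        memmove(fbuf, fbuf + start, ffill - start);
        ffill -= start;
    }
}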
roman
On Mon, 2008-09-08 at 16:28 +0000, Martin Peach wrote:
Roman Haefeli wrote:
may i ask if there are plans to change [unpackOSC]? the reason i ask is that i am currently stuck with the development of netpd because of this. if it stays as it is, i'll bury my plans to switch netpd to OSC.
Yes, I'm working on it. I tried a few approaches yesterday that didn't work. I think if the input to [unpackOSC] only contains complete packets it's possible. A version that takes one byte at a time calls for a serious rewrite.
i see. i actually don't know how parsing an OSC packet is done at the c level, so what i am asking for is probably the ideal from a theoretical point of view, but not so much from a practical/programming perspective.
for the moment, i'll go the bundle route. do you know what a safe time interval is, at which tcp packets are not concatenated together?
roman
Roman Haefeli wrote:
for the moment, i'll go the bundle route. do you know what a safe time interval is, at which tcp packets are not concatenated together?
I was getting separation at > 8ms. I think it may be related to the Pd audio block size. Pd sends all its messages between audio blocks.
Martin
On Mon, 2008-09-08 at 17:08 +0000, Martin Peach wrote:
Roman Haefeli wrote:
for the moment, i'll go the bundle route. do you know what a safe time interval is, at which tcp packets are not concatenated together?
I was getting separation at > 8ms. I think it may be related to the Pd audio block size. Pd sends all its messages between audio blocks.
Martin
i can confirm that. however, i would like to know what could be assumed to be a safe value that works on any system.
i'll check on windows as soon as i have access to one.
roman