On Dec 14, 2004, at 3:29 AM, Johannes M Zmoelnig wrote:
and i have changed the width of all of the arrays to 4 (this is: x/y/z/w but also u/v/0/0) to make them more uniform. probably someone with a good profiler (chris?) could check whether the performance loss is really that big or if we can accept it in order to be more flexible (esp. in terms of coding, as it is so much easier to only have to consider one width)
The data is not general enough to be treated this way, and it's unlikely to save any coding effort for more than the most trivial objects. The attributes don't even share the same valid ranges: vertex coordinates are unrestricted, but color values are clamped at 1.0, so some sort of scaling would have to take place for these to be exchanged (I would say the color info should be unsigned chars anyway). Also, an array width of 4 for normals is not valid according to the Red Book, so that is going to require truncation. Finally, these objects should be as heavily optimized as possible, since pushing even a 10,000-poly object at a decent framerate is going to murder most any CPU.
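For illustration, here is a rough sketch (not GEM's actual code; GeoData, drawGeo and the member names are made up) of how the fixed-function GL array calls expect each attribute's natural width:

#include <GL/gl.h>

/* Hypothetical geometry buffers using each attribute's natural width. */
struct GeoData {
    int      numVerts;
    GLfloat *vertices;   /* width 3: x, y, z                        */
    GLfloat *normals;    /* width 3: nx, ny, nz (GL only takes 3)   */
    GLubyte *colors;     /* width 4: r, g, b, a as unsigned bytes,
                            normalized to [0,1] by GL               */
    GLfloat *texcoords;  /* width 2: u, v                           */
};

/* Fixed-function vertex-array setup (OpenGL 1.x style). */
static void drawGeo(const GeoData *g)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glVertexPointer(3, GL_FLOAT, 0, g->vertices);
    /* glNormalPointer has no size argument: normals are always 3
       components, so a width-4 normal array could not be handed over
       as-is; it would need a stride or a truncating copy.           */
    glNormalPointer(GL_FLOAT, 0, g->normals);
    /* Unsigned-byte colors are normalized to [0,1] by GL, which is
       why unsigned chars would be a natural fit for the color data. */
    glColorPointer(4, GL_UNSIGNED_BYTE, 0, g->colors);
    glTexCoordPointer(2, GL_FLOAT, 0, g->texcoords);

    glDrawArrays(GL_TRIANGLES, 0, g->numVerts);

    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}

glNormalPointer() is the telling one: it has no size argument at all, so a uniform width-4 layout would force an extra copy or stride handling on every draw.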
So my vote is that the array widths should go back to their natural sizes, as the negatives outweigh the very minor and rare benefits of a uniform width.
cgc