Hello dev list,
Inspired by the intention to set up an automated unit test procedure for Pd-extended, we propose a template for testing signal objects. "We" are Fred Jan Kraan and me; we've been puzzling over this together last weekend.
As far as we could see, there is no way to generate test patches for all signal objects from a single 'one-size-fits-all' patch in which the object under test could be swapped in by a simple script; conditions are just too different per object. Instead, we opted for a template where you manually insert the object under test, together with requisites like a test signal, 'set' messages etc. Aspects of the template:
- compares the signal under test with a reference which is stored with the patch (512 points)
- the reference array can be recorded in the patch using a Pd known to work well (a release build, for example)
- visual representation of the signal under test, the reference signal and the diff
- standard deviation is calculated
- a maximum tolerated standard deviation can be set (tolerance)
- the test result is reported as a function of the found standard deviation and the tolerance ('tested OK' or 'ERROR: deviation ....')
- the test result is printed to the Pd window and stdout
- the test is executed automatically when the patch is loaded
- sample rate 44100 is considered the norm; an error is reported when another sample rate is detected at patch load
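In pseudo-code, the pass/fail decision amounts to something like this (a rough Python sketch; the function and array names and the exact standard-deviation formula are illustrative only, the template of course does this with vanilla Pd objects):

import math

def check_signal(test, reference, tolerance):
    # Compare a recorded test signal against the stored reference (512 points)
    # and report a result in the spirit of the template's output.
    diff = [t - r for t, r in zip(test, reference)]
    mean = sum(diff) / len(diff)
    deviation = math.sqrt(sum((d - mean) ** 2 for d in diff) / len(diff))
    if deviation <= tolerance:
        return "tested OK"
    return "ERROR: deviation %g exceeds tolerance %g" % (deviation, tolerance)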
The template is intended for automated unit testing. Test patches based on the template could be included in a setup like the 'load-every-help' test.
Attached is a .zip with the template and two examples. Please comment on it if you have suggestions.
Katja
That looks really good on many levels. :) I like the layout. I hadn't thought of standard deviation; that makes sense, as long as we can specify "exact" as a possibility. My guess is that some of this stuff should produce the same bit sequence every time, but I could be wrong there.
Perhaps the subpatches should be abstractions as part of a 'test' library. If there was a bug or a new feature needed in any of those subpatches, it would be rough to have to modify all of the tests.
About the sample rate, it should be possible to have each patch set the sample rate it needs. You can see hcs/get-audio-dialog-help.pd or the mediasettings library for ways to do that. I think we'll want to test at different sample rates some day, but if it's easier for now, we can stick to 44100.
.hc
Hans, thanks for your comments
On Mon, Oct 24, 2011 at 7:35 PM, Hans-Christoph Steiner hans@at.or.at wrote:
That looks really good on many levels. :) I like the layout, I hadn't thought of standard deviation, that makes sense as long as we can specify "exact" as a possibility. My guess is that some of this stuff should produce the same bit sequence every time, but I could be wrong there.
If you specify tolerance zero, that's exact. But we found that tolerance is needed for several reasons:
- floats are stored as text by Pd, and differences between computed and stored values occur because the stored floats are truncated
- there are small differences between single-precision and double-precision Pd
- objects with a 'memory' (like IIR filters) produce slightly different results if you test them repeatedly with the same input sequence
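To illustrate the first point with a made-up value (a small Python round trip, not taken from an actual patch; the number of digits is arbitrary and just shows the effect):

computed = 0.123456789
stored_text = "%.6g" % computed      # truncated textual representation, as in a patch file
reloaded = float(stored_text)
print(computed - reloaded)           # small but nonzero difference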
Perhaps the subpatches should be abstractions as part of a 'test' library. If there was a bug or a new feature needed in any of those subpatches, it would be rough to have to modify all of the tests.
Of course, thanks. I'll redesign it along those lines.
About the sample rate, it should be possible to have each patch set the sample rate it needs. You can see hcs/get-audio-dialog-help.pd or the mediasettings library for ways to do that. I think we'll want to test at different sample rates some day, but if its easier for now, we can stick to 44100.
The idea was to use only vanilla objects around the object under test, so you don't depend on external libs for testing. Do you know if it's possible to send standard messages like 'audio-dialog 1 0 0 0 2 0 0 0 0 0 0 0 2 0 0 0 44100 20 0' or 'audio-dialog 1 0 0 0 2 0 0 0 0 0 0 0 2 0 0 0 48000 20 0' to Pd on all systems, without requesting data first? I don't think so. If the device has a different number of channels (instead of 2), Pd cannot sync. And if the device doesn't allow the alternative sample rate, it won't work either. Sample rate 44100 is safest.
By the way, if Pd does not sync with an audio device for whatever reason, you get weird test results anyway. Even if you do not want to actually hear the sound, the audio device must work properly for these signal object tests. That was one of the first things we observed in practice.
Katja
On Monday, October 24, 2011 10:11 PM, "katja" katjavetter@gmail.com wrote:
If you specify tolerance zero, that's exact. But we found that tolerance is needed for different reasons:
- floats are stored as text by Pd, and differences between computed and stored values occur because of the truncated stored floats
- there's small differences between single precision and double precision Pd
- objects with a 'memory' (like IIR filters) produce slightly different results if you test them repeatedly with the same input sequence
Ah, of course, that makes sense. The third item there, the IIR filters: it should not be too hard to reproduce the exact same operation with them too. With the tests, each one is run in a new Pd instance, so they always start from scratch. Pd is then quit and restarted for the next test.
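Just to spell out why the fresh start matters, here is a toy one-pole filter in Python (made-up coefficients, nothing to do with Pd's actual filter code): running the same block through a fresh filter reproduces the first result exactly, while a filter that keeps its internal state from the previous run does not.

def one_pole(block, state=0.0, a=0.5):
    # y[n] = x[n] + a * y[n-1], with 'state' carrying y[-1] between calls
    out = []
    for x in block:
        state = x + a * state
        out.append(state)
    return out, state

block = [1.0, 0.0, 0.0, 0.0]
first, state = one_pole(block)         # fresh state
second, _ = one_pole(block, state)     # state carried over -> different output
fresh, _ = one_pole(block)             # fresh again -> identical to first
print(first == fresh, first == second) # True False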
Perhaps the subpatches should be abstractions as part of a 'test' library. If there was a bug or a new feature needed in any of those subpatches, it would be rough to have to modify all of the tests.
Of course, thanks. I'll redesign it in that sense.
About the sample rate, it should be possible to have each patch set the sample rate it needs. You can see hcs/get-audio-dialog-help.pd or the mediasettings library for ways to do that. I think we'll want to test at different sample rates some day, but if its easier for now, we can stick to 44100.
The idea was to only use vanilla objects around the object under test, so you don't depend on external libs for testing. Do you know if it's possible to send standard messages like 'audio-dialog 1 0 0 0 2 0 0 0 0 0 0 0 2 0 0 0 44100 20 0' or 'audio-dialog 1 0 0 0 2 0 0 0 0 0 0 0 2 0 0 0 48000 20 0' to pd on all systems, without requesting data first? I don't think so. If the device has different number of channels (instead of 2), Pd can not sync. And if the device doesn't allow the alternative samplerate, it won't work either. Samplerate 44100 is safest.
[pd audio-dialog 1 0 0 0 2 0 0 0 0 0 0 0 2 0 0 0 48000 20 0( is the message that the Audio Settings preference panel sends when you click OK. So it'll work anywhere.
By the way if Pd does not sync with an audio device for whatever reason, you get weird test results anyhow. Even if you do not want to actually hear the sound, the audio device must work well, for these signal object tests. That was one of the first things we observed in practice.
Good to know. We should be able to get a stable audio device setup with the build farm, so that shouldn't be an issue.
.hc
On Mon, 2011-10-24 at 22:11 +0200, katja wrote:
By the way if Pd does not sync with an audio device for whatever reason, you get weird test results anyhow. Even if you do not want to actually hear the sound, the audio device must work well, for these signal object tests. That was one of the first things we observed in practice.
Can you elaborate on that? I used to use Pd setups without a real audio device (for instance running Pd on a server producing an Icecast stream) and I never noticed anything odd. Probably it happens only under certain circumstances? Anyway, I'm interested to hear more about it, as I always assumed that -noaudio should lead to the same result as with audio (synced to a real device).
Roman
On Tue, Oct 25, 2011 at 4:11 PM, Roman Haefeli reduzent@gmail.com wrote:
Can you elaborate on that? I used to use Pd setups without real audio device (for instance running Pd on a server producing an Icecast stream) and I never found anything odd. Probably it happens only under certain circumstances? Anyway, I'm interested to hear more about it as I always assumed that -noaudio should lead to same result as with audio (synced to a real device).
Roman, you're right: a -noaudio Pd doesn't sync with a device and therefore cannot have sync problems. Thanks for pointing this out. It did not cross my mind to do the signal tests with -noaudio, but it may be a good idea, to exclude sync troubles in any case. Pd started with -noaudio is the same as Pd with input and output devices disabled, and this can also be done with an audio-dialog message.
Katja
On Mon, 2011-10-24 at 22:11 +0200, katja wrote:
If you specify tolerance zero, that's exact. But we found that tolerance is needed for different reasons:
- floats are stored as text by Pd, and differences between computed and stored values occur because of the truncated stored floats
Wouldn't it make more sense to load the reference table from some binary format (like a WAV file) instead of a textual (lossy) representation, in order not to lose any precision?
Both [writesf~] and [soundfiler] support WAV files with 32-bit float resolution (according to their help patches).
Roman
On Mon, Oct 24, 2011 at 7:35 PM, Hans-Christoph Steiner hans@at.or.at wrote:
Ah, course, makes sense. The third item there, the IIR filters, it should be not too hard to reproduce the exact same operation with them too. With the tests, each one is run in a new Pd instance, so they're always starting from scratch. Pd is then quit, and restarted for the next test.
But how do you want to create and check the test if the only environment in which it produces proper results is the unattended automatic run? If the test failed, it would be hard to find out why, as a manual run would always produce a different result.
Fred Jan
On Oct 25, 2011, at 11:44 AM, F.J. Kraan wrote:
On Mon, Oct 24, 2011 at 7:35 PM, Hans-Christoph Steiner hans@at.or.at wrote:
Ah, course, makes sense. The third item there, the IIR filters, it should be not too hard to reproduce the exact same operation with them too. With the tests, each one is run in a new Pd instance, so they're always starting from scratch. Pd is then quit, and restarted for the next test.
But how do you want to create and check the test if the only environment the test will produce proper results is the unattended automatic run? If the test would fail, it would be hard to find out why, as the manual run would always produce a different result.
I guess I don't quite follow what you mean here. I am thinking that for a more elaborate test, the test patch would run through the IIR filter a few times. Each iteration of the IIR filter test should produce the same result.
This does remind me though: the original load_every_help.py script would load each help patch into the same instance of Pd, just continually reusing it. That would trigger a couple of hard-to-reproduce bugs that basically only happened when loading every help patch in a certain order. I was focused on getting a report on each help patch, so I changed the script to make a new Pd instance per test. But Pd should be able to load every help patch without crashing, so perhaps we need a second test mode where the same Pd is reused again and again.
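Roughly, a per-test runner could look something like this (a Python sketch with assumed details: the test patches match a made-up naming pattern, stderr is merged into stdout since the result line may land on either, and Pd is simply killed after a timeout; only the -nogui and -noaudio flags are relied on):

import glob
import subprocess
import sys

def run_test(patch, timeout=10):
    # Start one fresh Pd per test patch and look for the template's
    # 'tested OK' line in whatever the process prints.
    proc = subprocess.Popen(["pd", "-nogui", "-noaudio", patch],
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, text=True)
    try:
        out, _ = proc.communicate(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.kill()
        out, _ = proc.communicate()
    return "tested OK" in out

if __name__ == "__main__":
    failed = [p for p in sorted(glob.glob("*-test.pd")) if not run_test(p)]
    for p in failed:
        print("FAILED: " + p)
    sys.exit(1 if failed else 0)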
.hc
On Oct 25, 2011, at 10:23 AM, Roman Haefeli wrote:
Wouldn't it make more sense to load the reference table from some binary format (like WAV file) instead of some textual (lossy) representation in order not to lose any precision?
Both, [writesf~] and [soundfiler], support WAV files with 32-float bit resolution (according to their help patches).
That's a good idea, plus it should be easier to manage a .wav file for the reference rather than having it stored in an array in a Pd patch. That should make it possible to have 32-bit float tests that are bit-accurate. I guess we'll still need a standard deviation tolerance for when loading 32-bit floats into 64-bit double-precision Pd.
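Something along these lines, say (Python only to show the idea; assumes numpy/scipy on the test machine, and the file names and tolerance handling are made up):

import numpy as np
from scipy.io import wavfile

def compare(reference_wav, recorded_wav, tolerance=0.0):
    # Bit-exact comparison when tolerance is 0.0; otherwise allow the given
    # standard deviation of the difference (e.g. for double-precision Pd).
    _, ref = wavfile.read(reference_wav)   # 32-bit float WAV -> float32 array
    _, rec = wavfile.read(recorded_wav)
    if tolerance == 0.0:
        return bool(np.array_equal(ref, rec))
    diff = rec.astype(np.float64) - ref.astype(np.float64)
    return float(np.std(diff)) <= tolerance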
.hc