We use a combo of modified MFCCs and k-means at Mogees to get over 95% accuracy on percussion sounds.
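(For anyone curious what that kind of pipeline looks like in code, here is a rough sketch in Python with librosa and scikit-learn. The file names are made up, and this is plain MFCC plus k-means, not the modified front end Mogees actually uses.)

import numpy as np
import librosa
from sklearn.cluster import KMeans

def mfcc_vector(path, n_mfcc=13):
    # Load mono audio and summarise the whole hit as its mean MFCC frame.
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

paths = ["kick_01.wav", "snare_01.wav", "hat_01.wav"]   # hypothetical files
X = np.vstack([mfcc_vector(p) for p in paths])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(km.labels_)           # which cluster each hit fell into
print(km.predict(X[:1]))    # assign a new (here: the first) hit to a cluster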
To add a few words to what Jamie says, here is what I've learned about working in this domain:
Think about the shape of the problem and the ways it can be assisted by ML. Is real-time training needed? Or do you have lots of off-line time for your ML to "think"?
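In scikit-learn terms (just a sketch with random stand-in features), batch training wants all the data up front, while an incremental learner can keep absorbing examples as they arrive:

import numpy as np
from sklearn.cluster import KMeans, MiniBatchKMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 13))                    # stand-in feature frames

offline = KMeans(n_clusters=4, n_init=10).fit(X)  # one big off-line "think"

online = MiniBatchKMeans(n_clusters=4, n_init=3)
for chunk in np.array_split(X, 50):               # data arriving a bit at a time
    online.partial_fit(chunk)                     # cheap incremental update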
How many example sounds do you have, and where do they come from? How diverse/typical are they?
Will you label the examples yourself (supervised), or allow it to make up its own mind about classes/clusters (unsupervised)?
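A toy contrast between the two, assuming scikit-learn and made-up feature vectors:

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 13)), rng.normal(5, 1, (20, 13))])
y = ["kick"] * 20 + ["snare"] * 20        # labels you supplied yourself

supervised = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(supervised.predict(X[:1]))          # answers in your own vocabulary

unsupervised = KMeans(n_clusters=2, n_init=10).fit(X)
print(unsupervised.labels_[:5])           # anonymous cluster ids it invented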
What kind of result do you need: a definite match, or a set of probabilities, or a vector of distances from possible matches?
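Here is roughly what those three flavours of answer look like on toy data (k-means for the hard match and the distance vector, a Gaussian mixture for the probabilities):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (30, 13)), rng.normal(5, 1, (30, 13))])
query = X[:1]

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.predict(query))         # definite match: a single cluster id
print(km.transform(query))       # vector of distances to every centroid

gm = GaussianMixture(n_components=2, random_state=0).fit(X)
print(gm.predict_proba(query))   # a probability for each component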
Do you need pitch or duration independence?
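One cheap route to duration independence, sketched with librosa: summarise however many frames a sound produced down to a fixed-length vector, so a short tap and a long roll compare on equal terms. Pitch independence is a separate, harder question and depends heavily on which features you choose.

import numpy as np
import librosa

def fixed_length_features(path):
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)    # shape (13, n_frames)
    # Mean and spread per coefficient: always 26 numbers, however long the sound.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])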
How will you choose segment/frame boundaries and the size of any transform (FFT/wavelet)? Tiny variations can lead to big differences. Will you zero pad to remove junk? Will you use windows/envelopes to soften the edges?
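Those framing decisions look something like this in plain numpy (all the sizes here are arbitrary, for illustration only):

import numpy as np

def frames_to_spectra(y, frame_len=1000, hop=500, n_fft=1024):
    window = np.hanning(frame_len)                 # soften the frame edges
    spectra = []
    for start in range(0, len(y) - frame_len + 1, hop):
        frame = y[start:start + frame_len] * window
        padded = np.zeros(n_fft)                   # zero pad up to the FFT size
        padded[:frame_len] = frame
        spectra.append(np.abs(np.fft.rfft(padded)))
    return np.array(spectra)

y = np.random.default_rng(3).normal(size=22050)    # stand-in for one second of audio
print(frames_to_spectra(y).shape)                  # (n_frames, n_fft // 2 + 1)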
What kind of sounds are to be processed? Are they:
i) 'Samples' with identical byte patterns?
ii) Never heard before?
iii) Known structure, segments, chunks, e.g. speech?
iv) Highly structured 'samples', hashable, e.g. MIR?
v) Largely similar, where you are seeking a specific structural variation?
vi) Transient, sustained, harmonic? Or a complex evolution?
Machine learning right now is a set of quite specialised building blocks. Each of the above shapes of problem may suggest substantially different choices of components and configuration.
Pre-processing, like shelf EQ and compression, can make a _huge_ difference to the quality and reliability of the results.
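A very crude illustration with numpy/scipy of where that conditioning sits in the chain. The "shelf" here is a one-pole band split and the "compressor" is simple waveshaping, so don't treat either as production DSP:

import numpy as np
from scipy.signal import lfilter

def low_shelf(y, sr, cutoff_hz=200.0, gain=0.5):
    # Split with a one-pole low-pass, scale the low band, recombine.
    a = np.exp(-2.0 * np.pi * cutoff_hz / sr)
    low = lfilter([1.0 - a], [1.0, -a], y)
    return (y - low) + gain * low          # gain < 1 cuts the lows

def compress(y, threshold=0.1, ratio=4.0):
    # Static waveshaping: squash anything above the threshold.
    out = y.copy()
    over = np.abs(out) > threshold
    out[over] = np.sign(out[over]) * (threshold + (np.abs(out[over]) - threshold) / ratio)
    return out

y = np.random.default_rng(4).normal(scale=0.3, size=22050)   # fake audio
conditioned = compress(low_shelf(y, sr=22050))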
I haven't used ml.lib before, but the idea of having a load of ML components to play with in Pd is really attractive, and I'm sure you will have tons of fun!
cheers, Andy