Hi all,
Yes, I pulled the plug on my webpage a while back and made a GitHub repo for timbreID:
https://github.com/wbrent/timbreIDLib.git. The README has a link to the latest version of the examples package. Note that the library is now called timbreIDLib to distinguish the library itself from the audio feature database object within it, [timbreID].
The version of the examples package linked in the mirror above is pretty old - the latest version has additional examples for audio segmentation and key estimation. It also has some significant updates to the timbre space patch, including a grain sequencing function that's pretty fun to play with. You can see a quick demo video of that here:
http://williambrent.conflations.com/mov/timbre-space-june-2019.mp4.
Simon, for your specific project, you'll have lots of options for extracting audio features from the 5-second audio clips. You can store the feature vector of each clip in [timbreID], and it's no problem that there will be thousands of instances. Finding the best match in the existing database relative to a new clip's feature vector will still be very quick, and there's a relatively new feature for [timbreID] that lets you get the K best matches in order of similarity (not just the single best match). So you can definitely get a list of the best matching files.
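If it helps to see the matching idea outside of Pd, here's a rough Python sketch of what "store one feature vector per clip, then ask for the K best matches" amounts to. [timbreID] handles all of this for you inside the patch; the function names and the Euclidean distance below are just illustrative assumptions, not timbreID code.

```python
import numpy as np

# Toy stand-in for the [timbreID] database: one fixed-length feature
# vector per 5-second clip, keyed by filename. (Illustration only; in Pd
# you would just train [timbreID] with the vectors and query it.)
database = {}

def add_clip(filename, feature_vector):
    database[filename] = np.asarray(feature_vector, dtype=float)

def k_best_matches(query_vector, k=5):
    """Return the k most similar clips, nearest first, by Euclidean distance."""
    query = np.asarray(query_vector, dtype=float)
    ranked = sorted(
        ((name, float(np.linalg.norm(vec - query))) for name, vec in database.items()),
        key=lambda pair: pair[1],
    )
    return ranked[:k]
```

Even with thousands of clips, a linear scan like this over short vectors is fast, which is why the database size isn't a worry.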
The hardest part is coming up with the ideal set of features to extract and a strategy for dealing with the way they change over time. A basic starting point would be to extract multi-frame Bark spectra or BFCCs. The 07-timbre-ordering/order-perc.pd example extracts multi-frame features from pre-recorded audio. A key object is [featureAccum], which can concatenate incoming single-frame feature vectors to produce a long multi-frame vector (it can also sum or average them). In your case, since all clips are precisely 5 seconds long, the multi-frame BFCC vectors will all be the same length, so a similarity calculation is possible without any further work. But...under that model, for two sounds to be "similar," the way their features change over time has to align very tightly. So you might have one participant shake some maracas into the microphone, and another participant shake the exact same maracas at a different tempo, and you'll get a low similarity between the two recordings because the spectro-temporal pattern is so different. That might be ok in some applications, but it also might not, since on an intuitive level the sounds are obviously very similar.
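To make that alignment issue concrete, here's a small Python sketch of the concatenation idea (the same thing [featureAccum] does when it concatenates frames). The two-element "features" and the half-tempo example are made up purely for illustration; this isn't timbreID code.

```python
import numpy as np

def accumulate_frames(frames):
    """Concatenate single-frame feature vectors into one long multi-frame
    vector (the same idea as [featureAccum] in concatenate mode)."""
    return np.concatenate(frames)

# Made-up 2-element "features" for a 4-frame shaker pattern, and the same
# pattern played at half tempo so the events land on different frames.
original = [np.array([1.0, 0.5]), np.array([0.0, 0.0]),
            np.array([1.0, 0.5]), np.array([0.0, 0.0])]
slower   = [np.array([1.0, 0.5]), np.array([1.0, 0.5]),
            np.array([0.0, 0.0]), np.array([0.0, 0.0])]

a = accumulate_frames(original)
b = accumulate_frames(slower)

# The distance is clearly nonzero even though it's "the same maracas,"
# because the frame-by-frame comparison only rewards tight temporal alignment.
print(np.linalg.norm(a - b))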
There are lots of strategies you could try in order to make the patch better at recognizing the exact kind of "similarity" that you're looking for, and the more you know in advance about the kind of audio you'll be comparing, the more you can customize your feature vector. We can keep chatting off-list about options if you like. I hope that helps to clarify some things in the meantime!
William