hi william and list,
for a museum art installation i will build a kind of audio "social network"
basically a visitor can record a 5-second snippet via microphone or bluetooth, and Pd saves this snippet as a sample.
out of this ever-growing sample space, 8 [readsf~] objects will randomly play these samples at various densities to 8 speakers.
so far so easy. (i have implemented that part already)
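for reference, a stripped-down, untested sketch of that recording side (the [r record] trigger and the snip-%d.wav naming are just example choices): a bang increments a counter, opens a numbered file in [writesf~], starts it, and stops it 5000 ms later. paste into a .pd file to try it:

#N canvas 100 100 560 420 12;
#X text 20 10 untested sketch: record 5 seconds from the mic to a numbered wav file. the snip-%d.wav naming is just an example scheme.;
#X obj 20 70 r record;
#X obj 20 100 t b b;
#X obj 170 130 f;
#X obj 220 130 + 1;
#X obj 170 160 makefilename snip-%d.wav;
#X msg 170 190 open \$1;
#X obj 20 160 t b b;
#X msg 20 220 start;
#X obj 80 190 del 5000;
#X msg 80 220 stop;
#X obj 320 70 adc~;
#X obj 20 280 writesf~;
#X connect 1 0 2 0;
#X connect 2 1 3 0;
#X connect 2 0 7 0;
#X connect 3 0 4 0;
#X connect 4 0 3 1;
#X connect 3 0 5 0;
#X connect 5 0 6 0;
#X connect 6 0 12 0;
#X connect 7 1 8 0;
#X connect 7 0 9 0;
#X connect 8 0 12 0;
#X connect 9 0 10 0;
#X connect 10 0 12 0;
#X connect 11 0 12 0;

the trigger's right outlet fires first, so the "open" message always reaches [writesf~] before "start".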
to get more of a social-network kind of atmosphere, it would be great if newly recorded snippets increased the likelihood of similar material on the outputs (as in twitter/facebook/instagram blabla, where you find yourself in your own bubble)
i still don’t really get how to work with [timbreID] to accomplish this.
maybe someone on the list has an example of this?
the process would be:
- a new sample is recorded (always 5 seconds)
- some [timbreID] analysis happens to create a feature list
- samples with a similar feature list should be played back next (a list of similar files would be great)
Well, each snippet's feature vector would be added as a new database entry via the leftmost inlet of [timbreID], no? Then you'd ask for the nearest entry at its second inlet.
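Untested, but the wiring could be as simple as this; the trigger's right outlet fires first, so each new clip is matched against the existing entries before it is added, and therefore never matches itself:

#N canvas 100 100 560 340 12;
#X text 20 10 untested sketch: an incoming feature list is first matched against the database \, then stored as a new training instance. t's right outlet fires first \, so a clip never matches itself.;
#X obj 20 80 r features;
#X obj 20 110 t l l;
#X obj 20 180 timbreID;
#X obj 20 220 print nearest;
#X text 120 220 index of the nearest instance. check the timbreID help patch for the full outlet layout.;
#X connect 1 0 2 0;
#X connect 2 1 3 1;
#X connect 2 0 3 0;
#X connect 3 0 4 0;

If you add clips to [timbreID] in the same numbered order you record them, the reported index maps straight back to a filename through the same [makefilename snip-%d.wav] used while recording. And iirc [timbreID] can also return more than one neighbour as an ordered match list, which would give you your "list of similar files"; check its help patch for the exact messages.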
the number of samples can easily grow to thousands of files, since the installation will run for quite some time. each sample is only 441 kB though (a mono 5-second file, according to OSX)
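(that figure checks out for 16-bit mono at 44.1 kHz: 44100 samples/s x 5 s x 2 bytes = 441000 bytes, so even 10000 snippets would only be around 4.4 GB. disk space shouldn't be the bottleneck.)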
the timbreID examples i've looked at either take a fixed soundfile and slice it to extract features over time, or slice incoming audio based on onset detection, for example. i would just want one feature list created for each new 5-second clip i record.
Yes, your use case is not so different from what you see in the help patches of that great library.
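For the one-feature-list-per-clip part, a crude, untested sketch: play the freshly recorded file through [readsf~] into one of the analysis objects ([bfcc~] here as one possible choice) and bang it once mid-clip to grab a single analysis frame. [r analyze] is assumed to receive the clip's filename, and [s features] feeds the [timbreID] sketch above:

#N canvas 100 100 560 320 12;
#X text 20 10 untested sketch: one bfcc~ frame from around the middle of the clip as its feature list.;
#X obj 20 70 r analyze;
#X obj 20 100 t b s;
#X msg 120 130 open \$1 \, 1;
#X obj 120 170 readsf~;
#X obj 20 170 del 2500;
#X obj 120 210 bfcc~;
#X obj 120 250 s features;
#X connect 1 0 2 0;
#X connect 2 1 3 0;
#X connect 3 0 4 0;
#X connect 2 0 5 0;
#X connect 5 0 6 0;
#X connect 4 0 6 0;
#X connect 6 0 7 0;

A single frame is the crudest possible summary of 5 seconds; banging [bfcc~] several times during playback and averaging the lists, or using the library's non-realtime analysis objects on a table, should represent each clip better.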
Have you checked out William's examples? I can't locate the originals on his web page (it seems to be down), but they have been mirrored here: https://github.com/mxa/timbreID-examples