Super, I will check out the link and sorry for the double post.
It is in the early concept stages, but I was thinking that the user interface would be a piece of glass/plastic, and the input device would be a webcam on the inside of a box/resonator. The ambient light outside the box (no pun intended) would provide enough contrast to read the opaque markings as parseable characters. The user would draw on the opaque surface and Pure Data would interpret this input and respond in some way. I may have to simplify this to gestures, which I think Pure Data is also capable of handling: reading a webcam capture of the entire active area and then comparing that to a prior state.
Colour would open up other modes, but at this stage binary is what I am looking for, just black and white.
I would be happy with the simplest method and the easiest to compute.
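To make that concrete, here is roughly what I have in mind, sketched in Python/OpenCV for legibility rather than as a Pd patch. The camera index, threshold value, and change trigger are all placeholders I would have to tune for the actual box:

import cv2

def binarise(frame, thresh=96):
    # Grayscale + fixed threshold: dark markings against bright ambient light.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    return binary

cap = cv2.VideoCapture(0)              # webcam inside the box; index 0 assumed
ok, frame = cap.read()
if not ok:
    raise SystemExit("no camera")
prev = binarise(frame)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    binary = binarise(frame)
    # Compare the whole active area to the prior state.
    changed = cv2.countNonZero(cv2.absdiff(binary, prev))
    if changed > 500:                  # arbitrary trigger; tune to the setup
        print("surface changed by", changed, "pixels")
    prev = binary
    cv2.imshow("active area", binary)
    if cv2.waitKey(30) & 0xFF == 27:   # Esc quits
        break

cap.release()
cv2.destroyAllWindows()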
Thanks again.
On Wed, 1 Jun 2011, Jack wrote:
> You can do this with the use of an artificial neural network (for character recognition). There are externals for Pd: http://pure-data.svn.sourceforge.net/viewvc/pure-data/trunk/externals/ann/

Is it just me, or does it sound like it's going to take a lot of preprocessing before you can even think of feeding it to a neural network?
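To be fair, the network itself is the small part. In textual form (scikit-learn's MLPClassifier standing in for the ann externals here; the file names and layer size are invented), the whole ANN step is about this much:

import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical training data: N rows of reduced glyph features + labels.
X = np.load("glyph_features.npy")
y = np.load("glyph_labels.npy")

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)
net.fit(X, y)
print(net.predict(X[:1]))          # recognise one drawn character

Everything before that step is where the work hides.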
Human vision is made of a lot more layers of neurons than we can hope to deal with in artificial networks.
At what angles should characters be recognised?
Which colour on which colour?
You'd better settle those things first, so that you can figure out how to reduce your data beforehand.
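By "reduce" I mean something like cropping to the marking's bounding box and resampling onto a small fixed grid, so the network sees a few dozen inputs instead of a whole frame. A Python sketch (the 8x8 grid is an arbitrary choice); note this buys position and size invariance but not rotation, which is exactly why the angle question matters:

import cv2
import numpy as np

def glyph_features(binary, grid=8):
    # Reduce a binarised frame to a fixed-size vector: crop to the dark
    # marking's bounding box, then resample onto a small grid x grid raster.
    ys, xs = np.where(binary == 0)               # ink pixels (black)
    if len(xs) == 0:
        return None                              # blank surface, nothing drawn
    crop = binary[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    small = cv2.resize(crop, (grid, grid), interpolation=cv2.INTER_AREA)
    return (small < 128).astype(np.float32).ravel()   # grid*grid inputs in {0, 1}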
But making an OCR using an ANN is a lot more work than using an OCR library. Making a Pd-to-OCR-library interface is less work than making an OCR abstraction library... and it isn't necessarily because Pd would be bad at it (I don't know about that). It's more that it takes a lot of knowledge to build an OCR library nearly from scratch.
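For the interface route, a minimal sketch: hand a frame to an existing engine (Tesseract via the pytesseract wrapper here; the port number and capture file name are assumptions) and ship the text to a [netreceive 3000] object in Pd as a FUDI message:

import socket

import cv2
import pytesseract

frame = cv2.imread("capture.png")            # e.g. one saved webcam frame
text = pytesseract.image_to_string(frame).strip()

# FUDI: space-separated atoms terminated by ';', as [netreceive] expects.
with socket.create_connection(("127.0.0.1", 3000)) as sock:
    sock.sendall(("ocr " + text + ";\n").encode())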
_______________________________________________________________________
| Mathieu Bouchard ---- tél: +1.514.383.3801 ---- Villeray, Montréal, QC