ADNS 2620 optical mouse chip
I've written an object for using the Agilent ADNS 2620 optical mouse chip. The chip is the heart of an optical mouse: in reality it is a digital camera and image processor. It's also cheap ($1.38 at Mouser) and easy to interface. It can function as a light-level meter, report apparent motion in two directions, and return an 18 x 18 pixel image (0.000324 megapixels). Surprisingly, that can be a decent image.
I originally got interested in the Propeller as a way to use this chip. My thinking is that with 8 processors operating in parallel, 324 pixels could easily be processed for useful information. So far the techniques to process that data have eluded me, but I'm still plugging away.
So far the biggest obstacle is optics. The image needs to be passed through a lens (although I suppose a pinhole would also do). Finding lenses and attaching them has not been easy; the distance from the chip's aperture to the surface of the image array is 3 mm. So far the best solution I've found is to use the lens from a cheap (toy) digital camera.
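For anyone wanting to try it, here is a rough C sketch of talking to the chip over its two-wire synchronous serial interface (SCLK plus SDIO) and pulling out a frame. The register address, the data-valid bit, and the timing are from my reading of the datasheet, so verify them against your copy; the gpio_* and delay_us calls are placeholders for whatever your target provides, not a real API.

#include <stdint.h>

/* Placeholder hooks -- wire these to your own pins (assumptions). */
extern void gpio_set_sclk(int level);
extern void gpio_write_sdio(int level);   /* drives the SDIO pin */
extern int  gpio_read_sdio(void);         /* tri-states SDIO, then samples it */
extern void delay_us(unsigned us);

#define REG_PIXEL_DATA  0x08              /* a write resets the pixel pointer */

static void write_reg(uint8_t addr, uint8_t data)
{
    uint16_t word = (uint16_t)(((addr | 0x80) << 8) | data); /* MSB=1 marks a write */
    for (int i = 15; i >= 0; i--) {
        gpio_set_sclk(0);
        gpio_write_sdio((word >> i) & 1); /* MSB first, sampled on the rising edge */
        gpio_set_sclk(1);
    }
}

static uint8_t read_reg(uint8_t addr)
{
    for (int i = 7; i >= 0; i--) {        /* address with MSB=0 means a read */
        gpio_set_sclk(0);
        gpio_write_sdio((addr >> i) & 1);
        gpio_set_sclk(1);
    }
    delay_us(100);                        /* hold-off before the chip drives SDIO */
    uint8_t b = 0;
    for (int i = 0; i < 8; i++) {
        gpio_set_sclk(0);
        gpio_set_sclk(1);
        b = (uint8_t)((b << 1) | gpio_read_sdio());
    }
    return b;
}

/* Grab one 18x18 frame as 324 six-bit grey values. Bit 6 as the
 * data-valid flag is my reading of the datasheet -- verify it. */
void grab_frame(uint8_t frame[324])
{
    write_reg(REG_PIXEL_DATA, 0x00);      /* reset the internal pixel pointer */
    for (int i = 0; i < 324; i++) {
        uint8_t p;
        do { p = read_reg(REG_PIXEL_DATA); } while (!(p & 0x40));
        frame[i] = p & 0x3F;
    }
}

grab_frame() leaves you with 324 six-bit grey values, small enough to hold and process comfortably in hub RAM.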
Comments
The pics remind me of my first graphics project, in the late 1970s. My father was interested in the NOAA satellites on 137 MHz and was building drum printers, even modifying windscreen-wiper arms to print onto chemical-soaked paper. I used a 2114 (1K x 4-bit) static RAM to give a 32x32 image but never got to 16 levels of grey.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Style and grace : Nil point
I'd like to use these chips for optic-flow work on board a model helicopter; resolution is not so much of an issue, and I could afford a small array.
Cheers,
Graham
I broke my mouse trying to drill it out, poor lil squeaker (cue images of eyes and drills).
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Style and grace : Nil point
check out:
http://www.araa.asn.au/acra/acra2007/papers/paper181final.pdf
I can see some synergies here between rapid acquisition of low-res images and slower acquisition of higher-res images. Theoretically, you don't need better resolution than a honeybee to navigate in the real world. I also read somewhere about using infrared to tell the difference between earth and sky, as these are very different in the IR range (which I think is as simple as putting a bit of exposed camera film in front of the lens).
Just thinking aloud here, but I think the Prop may be the first micro that is light enough to fly on a small helicopter/plane but also smart enough to do all the necessary calcs for things like angular-rate gyros, and, with all the cogs, still have some processing power left over for simple image processing.
mwalimu, where are you up to with image processing?
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
www.smarthome.viviti.com/propeller
Not very far. I'm a math teacher, but the math I've seen thrown around for image processing is daunting. I'm hoping someone here can point me in a simpler direction. After all, honeybees don't do multivariate calculus.
I know the ADNS chip does some image processing of its own: it runs simple optical flow to tell which direction it is moving. That can be exploited without doing any processing yourself. Just ask the chip.
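In C, "asking the chip" is just two register reads, reusing read_reg() from the earlier sketch. The 0x02/0x03 addresses are again from my reading of the datasheet and worth double-checking.

#include <stdint.h>

#define REG_DELTA_Y  0x02   /* addresses from my reading of the datasheet */
#define REG_DELTA_X  0x03

extern uint8_t read_reg(uint8_t addr);    /* from the frame-grab sketch above */

/* Motion counts accumulated since the last read; reading clears them. */
void read_motion(int *dx, int *dy)
{
    *dy = (int8_t)read_reg(REG_DELTA_Y);  /* signed 8-bit counts */
    *dx = (int8_t)read_reg(REG_DELTA_X);
}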
And this is a real mystery, because not only are they navigating, they are also controlling their flight, which is based on unsteady aerodynamics. So perhaps they are solving the Navier-Stokes equations too. We are actually doing some research on this where I work, but I'm afraid I don't have any advice for you. One of the ideas being looked at is "sensor-rich feedback": there is a lot of complexity in the system, so there must be much information. A jet fighter has few sensory inputs and a powerful computer; perhaps insects have the opposite.
Graham
Navigation was the basis of my playing: I wanted to put a simple camera onto a small robot and form a map of its "world" from the local terrain. I wonder if anybody knows how to get the pics off a webcam before they get turned into USB.
I feel a fresh case of rodentcide coming on.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Style and grace : Nil point
Well, yes, getting very close with that. I have data coming off a webcam using vb.net, and from that I can send it to a prop.
Hardly anyone can do this - everyone wants to go USB but USB hubs are really expensive and complicated. To put it simply, you need bytes, in a simple array. Why is this so hard for us micro users?
www.4dsystems.com.au/prod.php?id=76 is very new (they also do some Prop graphics displays). I got one just off the manufacturing line. I am emailing the designers daily and have some of the latest software that isn't on their website yet. It works very well.
I was going to translate it to Spin but I ran out of longs. Heck, this is the entire reason I went for the Propeller: I'm sick of using micros that run out of memory. I want a micro that works and where I never have to worry about memory. Fortunately, heater and Cluso99 have come up with a solution with the CP/M emulation, which at least gives you 64K instead of 32K, and you can use SD cards and local RAM to get on with coding without worrying about running out of memory.
Anyway, rant aside, the aim here is to capture some simple video for the Propeller. You don't need much; indeed, the complexity of 640x480 images complicates the processing. It is amazing what you can get from an 18x18 greyscale, and certainly 80x60 has enough information to recognise a face.
I think the key here is to copy biology. How does the vision system of a fly work? It involves avoidance and 3D processing, but probably not memory of images (though the fly that insisted on repeatedly landing on my temple while I was digging a road by hand in 42C heat today might render that observation null and void). A honeybee is not much bigger and can leave the hive and find its way back OK, as well as finding food on the way. How does it do that with such blurry images?
I'd like to think the Prop is up to such things. We need parallel-processing analogues of calculus. I think such things do exist, and I don't think they involve huge complexities of math. I've played with the Neocognitron and got it to the point of recognising speech better than other digital processing solutions I tried, using algorithms modelled on the visual cortex of the cat. Vision ought to be similar. This is the algorithm that can take an 'A' and recognise it upside down. The secret is to start with small features like edges, then combine them into bigger features like a '^' for the top of an A, and then combine those features into bigger patterns, and at each stage allow for errors by a certain percent. So an 'A' becomes a '^' above a '/-' and a '-\'.
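As a toy illustration of that layered idea on an 18x18 frame, here is a C sketch: stage one scores four small oriented-edge features with 3x3 kernels, and stage two max-pools each edge map over a 3x3 neighbourhood so later stages can tolerate small position shifts, the "allow for errors by a certain percent" step. The kernel weights are illustrative guesses, not anything tuned.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define W 18

/* Stage-one features: four 3x3 oriented-edge kernels (illustrative weights). */
static const int kern[4][3][3] = {
    { {-1,-1,-1}, { 0, 0, 0}, { 1, 1, 1} },   /* horizontal edge */
    { {-1, 0, 1}, {-1, 0, 1}, {-1, 0, 1} },   /* vertical edge   */
    { { 0, 1, 1}, {-1, 0, 1}, {-1,-1, 0} },   /* diagonal /      */
    { { 1, 1, 0}, { 1, 0,-1}, { 0,-1,-1} },   /* diagonal \      */
};

/* Stage one: score every interior pixel against each edge kernel. */
static void edge_maps(const uint8_t img[W][W], int out[4][W][W])
{
    memset(out, 0, sizeof(int) * 4 * W * W);
    for (int o = 0; o < 4; o++)
        for (int y = 1; y < W - 1; y++)
            for (int x = 1; x < W - 1; x++) {
                int s = 0;
                for (int j = -1; j <= 1; j++)
                    for (int i = -1; i <= 1; i++)
                        s += kern[o][j + 1][i + 1] * img[y + j][x + i];
                out[o][y][x] = abs(s);
            }
}

/* Stage two: 3x3 max-pool, so a feature still fires if it has shifted
 * by a pixel -- the error-tolerance step. Call with 1 <= y,x <= W-2. */
static int pooled(const int map[W][W], int y, int x)
{
    int m = 0;
    for (int j = -1; j <= 1; j++)
        for (int i = -1; i <= 1; i++)
            if (map[y + j][x + i] > m)
                m = map[y + j][x + i];
    return m;
}

A third stage would then look for combinations of pooled features in rough spatial relationships (a '^' above a '/-' and a '-\'), and so on up the hierarchy.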
What do you need for image recognition? Some sort of video capture, for a start. An 18x18 mouse video cam is very handy, but what happens if the scene moves more than 18 pixels out of range? Hmm - maybe a video capture that is slower but has a wider field?
But rather than try to build a vision system modelled on the human brain, what could you do with 18x18 pixels? Quite a lot, I think, if you capture them often enough and have the right processing software.
OK, x,y displacement is already in the chip. Consider: what is the math involved in an image taken from a helicopter where the helicopter has risen by z units? All the pixels have moved inwards, and some new pixels have appeared at the outside. What formulas on that 18x18 image could say the camera has moved up by z units? Is it just simple Pythagoras in the x dimension, maybe two Pythagoras equations for x and y?
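Thinking it through, it's similar triangles rather than Pythagoras: with a pinhole lens at height z, a point that images at radius r from the centre reprojects at r*z/(z+dz) after a climb of dz, so every pixel slides radially inwards by roughly r*dz/z. A pure climb therefore shows up as a flow field u = s*p, where p is the pixel position relative to the image centre and s is about -dz/z, and a least-squares fit over per-block flow vectors recovers s. A minimal C sketch, assuming the flow vectors already come from somewhere (block matching, say):

#include <stddef.h>

/* p = block centre relative to the image centre, u = measured flow at
 * that block. Under a pure climb u = s*p, so least squares gives
 * s = sum(p.u) / sum(p.p), and dz/z per frame is roughly -s. */
double fit_expansion(const double px[], const double py[],
                     const double ux[], const double uy[], size_t n)
{
    double num = 0.0, den = 0.0;
    for (size_t i = 0; i < n; i++) {
        num += px[i] * ux[i] + py[i] * uy[i];
        den += px[i] * px[i] + py[i] * py[i];
    }
    return den > 0.0 ? num / den : 0.0;
}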
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
www.smarthome.viviti.com/propeller
It all comes down to Pythagoras in the long run, I believe. However, I think there are some number-crunching algorithms that can make the calculations easier.
Not that I know what those are.
Trying to mimic anatomy would be futile. Neurons work very differently from transistors. If you examine the anatomy of a fly's sight, you'll find much of the processing of a visual image happens in the eye itself. It works more like a sieve, taking the raw visual images and translating them into data for the itty-bitty brain. Silicon works differently, using speed to crunch numbers. More than likely it's a statistical algorithm that can sort through the data coming off a camera.
It does not take a huge amount of information to get a decent picture. I remember reading years ago about an MIT class where students had to build a computer that recognized a face (any face) and squirted it with water. They could only use 16 photoreceptors, I think. One of the reasons I wanted to use a mouse chip is that the data produced is large enough to be useful but small enough to be analyzed quickly.
One of the things I want to try is to subdivide the 324 pixels into groups of 6x6 or 3x3 arrays to develop an optical-flow model, along the lines of the sketch below.
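Something like this C sketch, say: carve the 18x18 frame into a 6x6 grid of 3x3 blocks and, for each block, find the displacement within a small search radius that minimises the sum of absolute differences against the previous frame. The block size and search radius here are arbitrary starting points, not tested values.

#include <stdint.h>
#include <stdlib.h>

#define W 18
#define B 3                  /* block size: a 6x6 grid of 3x3 blocks */
#define R 2                  /* search radius in pixels (a guess)    */

/* Sum of absolute differences between a block of the previous frame
 * and the same block displaced by (dy,dx) in the current frame. */
static int sad(const uint8_t prev[W][W], const uint8_t cur[W][W],
               int by, int bx, int dy, int dx)
{
    int s = 0;
    for (int j = 0; j < B; j++)
        for (int i = 0; i < B; i++) {
            int y = by + j + dy, x = bx + i + dx;
            if (y < 0 || y >= W || x < 0 || x >= W)
                return 1 << 20;          /* ran off the edge: rule it out */
            s += abs((int)cur[y][x] - (int)prev[by + j][bx + i]);
        }
    return s;
}

/* For each 3x3 block, keep the displacement with the lowest SAD.
 * flow_y/flow_x come out as a 6x6 field of per-block motions. */
void block_flow(const uint8_t prev[W][W], const uint8_t cur[W][W],
                int8_t flow_y[W / B][W / B], int8_t flow_x[W / B][W / B])
{
    for (int by = 0; by < W; by += B)
        for (int bx = 0; bx < W; bx += B) {
            int best = 1 << 20, bdy = 0, bdx = 0;
            for (int dy = -R; dy <= R; dy++)
                for (int dx = -R; dx <= R; dx++) {
                    int s = sad(prev, cur, by, bx, dy, dx);
                    if (s < best) { best = s; bdy = dy; bdx = dx; }
                }
            flow_y[by / B][bx / B] = (int8_t)bdy;
            flow_x[by / B][bx / B] = (int8_t)bdx;
        }
}

Feeding consecutive frames through block_flow() gives 36 small motion vectors per frame: averaging them gives overall translation, and their radial pattern could feed the expansion fit discussed earlier in the thread.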
Face recognition is based around the "T" of the eyes and nose/mouth pattern. Guinea pigs, cute as they may be, didn't fulfil this, and SHE used to get narked if I tracked her.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Style and grace : Nil point
I used to work for Irisys (www.irisys.co.uk/) and we got a lot out of a 16x16 array with Kalman filtering and so on. It did have 256 levels of grey-scale, though.
Leon
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Amateur radio callsign: G1HSM
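For anyone curious what the Kalman filtering Leon mentions buys you on a noisy low-res array, here is a minimal 1-D constant-velocity sketch in C that smooths a blob centroid measured once per frame. This is a generic textbook filter, not Irisys's actual algorithm, and q and r are tuning guesses.

typedef struct {
    double x, v;             /* state: position and velocity */
    double P[2][2];          /* state covariance             */
} KF1D;

void kf_init(KF1D *k, double x0)
{
    k->x = x0;  k->v = 0.0;
    k->P[0][0] = 1.0;  k->P[0][1] = 0.0;
    k->P[1][0] = 0.0;  k->P[1][1] = 1.0;
}

/* One frame: predict with a constant-velocity model, then correct with
 * the measured centroid z. dt is the frame interval; q and r are the
 * process and measurement noise levels. Returns the filtered position. */
double kf_step(KF1D *k, double z, double dt, double q, double r)
{
    /* predict */
    k->x += k->v * dt;
    k->P[0][0] += dt * (k->P[0][1] + k->P[1][0]) + dt * dt * k->P[1][1] + q;
    k->P[0][1] += dt * k->P[1][1];
    k->P[1][0] += dt * k->P[1][1];
    k->P[1][1] += q;

    /* update */
    double s  = k->P[0][0] + r;          /* innovation variance */
    double k0 = k->P[0][0] / s;          /* Kalman gains        */
    double k1 = k->P[1][0] / s;
    double innov = z - k->x;
    k->x += k0 * innov;
    k->v += k1 * innov;
    k->P[1][1] -= k1 * k->P[0][1];       /* P = (I - K H) P, H = [1 0] */
    k->P[1][0] -= k1 * k->P[0][0];
    k->P[0][1] -= k0 * k->P[0][1];
    k->P[0][0] -= k0 * k->P[0][0];
    return k->x;
}

Even this simple version rejects a lot of frame-to-frame jitter, which matters when each "pixel" of a 16x16 array covers a large patch of the scene.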