ADNS 2620 optical mouse chip — Parallax Forums

mwalimumwalimu Posts: 44
edited 2010-01-04 18:38 in Propeller 1
I've written an object for using the Agilent ADNS 2620 optical mouse chip. The chip is the heart of an optical mouse; in reality it is a digital camera and image processor. It's also cheap ($1.38 at Mouser) and easy to interface. It can function as a light-level meter, report apparent motion in two directions, and return an 18 x 18 pixel image (0.000324 megapixels). Surprisingly, that can be a decent image.

I originally got interested in the Propeller as a way to use this chip. My thinking is that with 8 processors operating in parallel, 324 pixels could easily be processed for useful information. So far the techniques to process that data have eluded me, but I'm still plugging away.

So far the biggest obstacle is optics. The image needs to be passed through a lens (although I suppose a pinhole would also do). Finding lenses and attaching them has not been easy. The distance from the chip's aperture to the surface of the image array is 3 mm. So far the best solution I've found is to use the lens from a cheap (toy) digital camera.
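
As a sanity check when hunting for lenses, the thin-lens equation (1/f = 1/d_o + 1/d_i), with the image distance pinned at the 3 mm quoted above, tells you how far away a candidate lens will focus. A minimal sketch, assuming an ideal thin lens; the focal lengths below are made-up example values, not recommendations:

```python
# Thin-lens check for a candidate lens: 1/f = 1/d_o + 1/d_i.
# The post gives d_i (aperture-to-sensor distance) as ~3 mm.

def object_distance_mm(focal_mm, image_mm=3.0):
    """Return the object distance that a lens of the given focal length
    focuses onto a sensor image_mm behind it (thin-lens model).
    Returns None if the geometry can't focus (f >= d_i)."""
    if focal_mm >= image_mm:
        return None  # image would form at or behind infinity
    return 1.0 / (1.0 / focal_mm - 1.0 / image_mm)

print(object_distance_mm(2.0))   # 2 mm lens: focuses objects ~6 mm away
print(object_distance_mm(2.9))   # nearly f = d_i: object ~87 mm away
```

The closer the focal length gets to the 3 mm sensor distance, the farther out the plane of focus moves, which is why a lens salvaged from a toy camera (short but not too short a focal length) is a plausible fit.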

Post Edited (mwalimu) : 12/31/2009 5:40:38 PM GMT

Comments

  • Toby SeckshundToby Seckshund Posts: 2,027
    edited 2009-12-31 14:21
    I messed about with an optical mouse a few years ago, just to see if an optical movement detector could be made, i.e. instead of using the mouse-versus-mat movement, the mouse itself would be a camera giving out movement directions. I broke it trying to widen the aperture for better lenses.

    The pics remind me of my first graphics project, in the late '70s. My father was interested in the NOAA satellites on 137 MHz and was building drum printers; he even modified windscreen wiper arms to print onto chemically soaked paper. I used a 2114 1K x 4-bit RAM to give a 32x32 image but never got to 16 levels of grey.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    Style and grace : Nil point
  • mwalimumwalimu Posts: 44
    edited 2009-12-31 17:37
    The best thing about the 2620 is that it is cheap, so you can afford to break it experimenting. I bought 10 for around $10 from Mouser. The aperture does not have to be widened; it depends on the lenses you use. However, the lid over the image array pops off with some effort and a thin blade, exposing the silicon chip. You have to be careful: touch the silicon and it's game over. Then you can widen the aperture or leave it off entirely. Having the aperture hole is handy for centering the lens.
  • Graham StablerGraham Stabler Posts: 2,510
    edited 2009-12-31 18:27
    This is great, I'll definitely give it a try in the new year. I'm sure a lens can be sorted out; CD/DVD drives have nice small ones, but it does sound as if the aperture might need to be enlarged first.

    I'd like to use these chips for optic flow type stuff on board a model helicopter, resolution is not so much of an issue and I could afford a small array.

    Cheers,

    Graham
  • Toby SeckshundToby Seckshund Posts: 2,027
    edited 2009-12-31 22:13
    If you use short focal lengths then the hole should be good enough; if you try to put on a "real" lens then it gets in the way. The thing was made to focus onto something 2-3 mm away, via a lens that is just a slightly flattened sphere. If the bottom can be separated so the hole can be enlarged, it would be easier to find a replacement optic.

    I broke my mouse trying to drill it out, poor lil squeaker (cue images of eyes and drills).

  • mwalimumwalimu Posts: 44
    edited 2010-01-01 03:27
    "I'd like to use these chips for optic flow type stuff on board a model helicopter, resolution is not so much of an issue and I could afford a small array."

    check out:

    http://www.araa.asn.au/acra/acra2007/papers/paper181final.pdf
  • Dr_AculaDr_Acula Posts: 5,484
    edited 2010-01-01 04:31
    That looks fascinating; you have got my brain working overtime with the paper posted above. Helicopter stability and UAVs are intriguing. I've been playing around today with the uCam from 4D Systems. At 115k baud (easy for the Prop) it can capture an 80x60 pixel, 8-bit grayscale picture in less than a second. It can do JPEG and raw, but raw is easier for doing the image processing and object recognition afterwards. Colour is just as fast, but in some ways black and white may be easier to process. Bigger pictures take longer.
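
The capture times quoted above are easy to sanity-check: an 80x60 raw frame at 8 bits per pixel is 4,800 bytes, and at 115,200 baud each byte costs about 10 bit-times on the wire (the 8N1 framing here is my assumption, as are any protocol overheads, which this ignores):

```python
# Back-of-envelope serial transfer time for a raw grayscale frame:
# width x height pixels, 1 byte per pixel, 10 bits per byte with 8N1.

def frame_time_s(width, height, baud=115200, bits_per_byte=10):
    payload_bytes = width * height  # one byte per 8-bit pixel
    return payload_bytes * bits_per_byte / baud

print(frame_time_s(80, 60))    # ~0.42 s: consistent with "less than a second"
print(frame_time_s(640, 480))  # ~26.7 s: why "bigger pictures take longer"
```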

    I can see some synergies here between rapid acquisition of low-res images and slower acquisition of higher-res images. Theoretically, you don't need better resolution than a honeybee to navigate in the real world. I also read somewhere about using infrared to tell the difference between earth and sky, as these are very different in the IR range (which I think is as simple as putting a bit of exposed camera film in front of the lens).

    Just thinking aloud here, but I think the Prop may be the first micro that is light enough to fly on a small helicopter/plane but also smart enough to do all the necessary calcs for things like angular rate gyros, and with all the cogs, also have some left over processing power to do some simple image processing.

    mwalimu, where are you up to with image processing?

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    www.smarthome.viviti.com/propeller
  • mwalimumwalimu Posts: 44
    edited 2010-01-01 06:30
    "mwalimu, where are you up to with image processing?"

    Not very far. I'm a math teacher, but the math I've seen thrown around for image processing is daunting. I'm hoping someone here can point me in a simpler direction. After all, honeybees don't do multivariate calculus.

    I know that the ADNS chip does some image processing in that it does simple optical flow to tell what direction the chip is moving. That can be exploited without having to do any processing. Just ask the chip.
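
"Just ask the chip" can be sketched as below. The read_register() argument is a placeholder for whatever bus code you use (the ADNS-2620 talks over a simple two-wire clock/data serial interface), and the register addresses (0x02 = Delta_Y, 0x03 = Delta_X) are from my reading of the datasheet, so double-check them against your copy:

```python
# Polling the ADNS-2620 motion counters. The delta registers are 8-bit
# two's complement and clear on read, so each poll returns movement
# since the previous poll.

def to_signed8(raw):
    """Convert an 8-bit two's-complement register value to a Python int."""
    return raw - 256 if raw > 127 else raw

def poll_motion(read_register):
    """Read one motion report as (dx, dy)."""
    dy = to_signed8(read_register(0x02))  # Delta_Y (assumed address)
    dx = to_signed8(read_register(0x03))  # Delta_X (assumed address)
    return dx, dy

# Fake bus for illustration: chip reports dx = -3, dy = +5.
fake_regs = {0x02: 5, 0x03: 0xFD}
print(poll_motion(fake_regs.__getitem__))  # → (-3, 5)
```

Accumulating these deltas in a cog gives you a running position with no image processing at all.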
  • Graham StablerGraham Stabler Posts: 2,510
    edited 2010-01-01 10:48
    mwalimu said...
    "mwalimu, where are you up to with image processing?"
    I'm hoping some one here can point me in a simpler direction. After all,honeybees don't do multivariate calculus.

    And this is a real mystery, because not only are they navigating, they are also controlling their flight, which is based on unsteady aerodynamics. So perhaps they are solving the Navier-Stokes equations too. We are actually doing some research on this where I work, but I'm afraid I don't have any advice for you. One of the ideas being looked at is "sensor-rich feedback": you have a lot of complexity in the system, so there must be much information. A jet fighter has few sensory inputs and a powerful computer; perhaps insects have the opposite.

    Graham
  • Toby SeckshundToby Seckshund Posts: 2,027
    edited 2010-01-01 12:01
    I read something, about two years ago, on the subject of military research into the avoidance systems of the common fly. The baseline innate calculations that are performed have to be just that: innate. Thinking about it is futile.

    Navigation was the basis of my playing: I wanted to put a simple camera onto a small robot and form a map of its "world" from the local terrain. I wonder if anybody knows how to get at the pics off a webcam, before they get turned into USB.

    I feel a fresh case of rodenticide coming on.

  • Dr_AculaDr_Acula Posts: 5,484
    edited 2010-01-01 13:13
    Re "I wonder if anybody knows how to get at the pics off a webcam, before they get turned into USB."

    Well, yes, getting very close with that. I have data coming off a webcam using VB.NET, and from that I can send it to a Prop.

    Hardly anyone can do this - everyone wants to go USB but USB hubs are really expensive and complicated. To put it simply, you need bytes, in a simple array. Why is this so hard for us micro users?

    www.4dsystems.com.au/prod.php?id=76 is very new (they also do some Prop graphics displays). I got one just off the manufacturing line. I am emailing the designers daily and have some of the latest software that isn't on their website. It works very well.

    I was going to translate it to Spin but I ran out of longs. Heck, this is the entire reason I went for the Propeller: I'm sick of using micros that run out of memory. I want a micro that works and where I never have to worry about memory. Fortunately, heater and Cluso99 have come up with a solution with the CP/M emulation, which at least gives you 64k instead of 32k, and you can use SD cards and local RAM to get on with coding without worrying about running out of memory.

    Anyway, rant aside, the aim here is to capture some simple video for the Propeller. You don't need much; indeed, the complexity of 640x480 images complicates the processing. It is amazing what you can get from an 18x18 greyscale image, and certainly 80x60 has enough information to recognise a face.

    I think the key here is to copy biology. How does the vision system of a fly work? That involves avoidance and 3D processing, but probably not memory of images (though the fly that insisted on repeatedly landing on my temple while I was digging a road by hand in 42C heat today might render that observation null and void). A honeybee is not much bigger and can leave the hive and make its way back OK, as well as finding food on the way. How does it do that with such blurry images?

    I'd like to think the Prop is up to such things. We need parallel-processing analogues of calculus. I think such things do exist, and I don't think they involve huge complexities of math. I've played with the Neocognitron and got it to the point of recognising speech better than other digital processing solutions, and this was using algorithms modelled on the visual cortex of the cat. Vision ought to be similar. This is the algorithm that can take an 'A' and recognise it upside down. The secret is to start off with small features like edges, then combine them into bigger features like a '^' for the top of an A, and then combine those features into bigger patterns, allowing for errors by a certain percentage at each stage. So an 'A' becomes a '^' above a '/-' and a '-\'.
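
The small-features-first idea can be shown in miniature: detect a tiny feature, then pool over a neighbourhood so the next stage tolerates position error. This is only a toy illustration of the principle, not the Neocognitron itself:

```python
# Detect vertical edges with a 1x2 difference filter, then max-pool in x
# so a feature still matches if it has shifted a pixel. The pooling step
# is what buys the "allow for errors by a certain percent" tolerance.

def edge_map(img):
    """Horizontal gradient magnitude: |img[y][x+1] - img[y][x]|."""
    return [[abs(row[x + 1] - row[x]) for x in range(len(row) - 1)]
            for row in img]

def pool(fmap, r=1):
    """Max-pool each cell over a (2r+1)-wide window in x, so the next
    layer sees the feature even if it moved slightly."""
    return [[max(row[max(0, x - r):x + r + 1]) for x in range(len(row))]
            for row in fmap]

img = [[0, 0, 1, 1],   # a vertical edge between columns 1 and 2
       [0, 0, 1, 1]]
edges = edge_map(img)
print(edges[0])        # → [0, 1, 0]: edge fires at x = 1
print(pool(edges)[0])  # → [1, 1, 1]: tolerant to a ±1 pixel shift
```

Stacking a few such detect-then-pool layers, each combining the previous layer's features into larger ones, is exactly the edges-to-'^'-to-'A' progression described above.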

    What do you need for image recognition? Some sort of video capture, for a start. An 18x18 mouse video cam is very handy, but what happens if it moves more than 18 pixels out of range? Hmm - maybe a video capture that is slower but has a wider field?

    But rather than try to build a vision system modelled on the human brain, what could you do with 18x18 pixels? I think, quite a lot, if you capture them often enough and you have the right processing software.

    Ok, x,y displacement is already in the chip. Consider: what is the math involved in an image taken from a helicopter where the helicopter has risen by z units? All the pixels have moved inwards, and some new pixels have appeared at the outside. What are the formulas involved in that 18x18 image that could say the camera has moved up by z units? Is it just simple Pythagoras in the x dimension, maybe two Pythagoras equations for x and y?
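
A minimal sketch of that idea, under assumed pinhole-camera and flat-ground conditions: climbing scales the image toward the centre, so every flow vector points inward with magnitude proportional to its distance from the centre, and a least-squares fit of one scale factor recovers the "looming" rate without any per-pixel Pythagoras:

```python
# Fit the model (u, v) = -k * (x, y) to a set of measured flow vectors,
# where (x, y) is a block's position relative to the image centre and
# (u, v) is its flow. Least squares gives k = -sum(xu + yv) / sum(x^2 + y^2);
# k > 0 means the pixels moved inward, i.e. the camera climbed.

def divergence(flows):
    """flows: list of (x, y, u, v) tuples. Returns the fitted scale k."""
    num = sum(x * u + y * v for x, y, u, v in flows)
    den = sum(x * x + y * y for x, y, _, _ in flows)
    return -num / den

# Synthetic data: every point moved 10% toward the centre (k = 0.1).
pts = [(-8, -8), (8, -8), (-8, 8), (8, 8), (4, 0)]
flows = [(x, y, -0.1 * x, -0.1 * y) for x, y in pts]
print(divergence(flows))  # ≈ 0.1
```

Relating k back to actual height change needs the height itself (the same image scaling could come from a small climb near the ground or a large climb higher up), which is the classic scale ambiguity of monocular flow.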


    Post Edited (Dr_Acula) : 1/1/2010 1:23:52 PM GMT
  • Toby SeckshundToby Seckshund Posts: 2,027
    edited 2010-01-01 15:23
    "Visual cortex of a cat". Hmmm, two uses for the felines: one for venting frustrations on, and then a couple of cameras..... BONUS.

  • mwalimumwalimu Posts: 44
    edited 2010-01-01 16:14
    "Is it just simple pythagoras in the x dimension, maybe two pythagoras equations for x and y?"

    It all comes down to Pythagoras in the long run, I believe. However, I think there are some number-crunching algorithms that can make the calculations easier.

    Not that I know what those are.

    Trying to mimic anatomy would be futile. Neurons work very differently from transistors. If you examine the anatomy of a fly's sight, you'd find that much of the processing of a visual image is done in the eye itself; it works like a sieve, taking the raw visual images and translating them into data for the itty-bitty brain. Silicon works differently, using speed to crunch numbers. More than likely it's a statistical algorithm that can sort through the data coming off a camera.

    It does not take a huge amount of information to get a decent picture. I remember reading years ago about an MIT class where students had to build a computer that recognized a face (any face) and squirted it with water. They could only use 16 photoreceptors, I think. One of the reasons I wanted to use a mouse chip is that the data produced is large enough to be useful, but small enough to be analyzed quickly.

    One of the things I want to try is to subdivide the 324 pixels into groups of 6x6 or 3x3 arrays to develop an optical flow model.
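
The subdivision idea above can be sketched as straightforward block matching: for each 6x6 block of the current frame, search a small window of shifts against the previous frame and keep the shift with the lowest sum of absolute differences. The frame layout and search range here are illustrative assumptions:

```python
# Per-block motion by block matching on 18x18 frames. Each returned
# (dx, dy) points to where the block came FROM in the previous frame,
# so the block's motion is the negative of it.
import random

def block_flow(prev, curr, bs=6, search=2):
    size = len(curr)
    field = []
    for by in range(0, size, bs):
        for bx in range(0, size, bs):
            best = (0, 0, float("inf"))
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    # keep the shifted block inside the previous frame
                    if not (0 <= by + dy and by + dy + bs <= size and
                            0 <= bx + dx and bx + dx + bs <= size):
                        continue
                    sad = sum(abs(curr[by + y][bx + x] -
                                  prev[by + dy + y][bx + dx + x])
                              for y in range(bs) for x in range(bs))
                    if sad < best[2]:
                        best = (dx, dy, sad)
            field.append((best[0], best[1]))
    return field  # nine (dx, dy) vectors: 18x18 frame, 6x6 blocks

# Synthetic check: random texture shifted right by one pixel.
random.seed(1)
prev = [[random.randrange(256) for _ in range(18)] for _ in range(18)]
curr = [[prev[y][max(x - 1, 0)] for x in range(18)] for y in range(18)]
print(block_flow(prev, curr)[4])  # centre block came from one pixel left
```

Nine vectors instead of one is exactly what the inward-pointing-flow test for climb or descent needs, and the inner SAD loops are the kind of independent work that parallelises naturally across cogs.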
  • Toby SeckshundToby Seckshund Posts: 2,027
    edited 2010-01-01 16:37
    The stimulus for my mutilations of a mouse was an article in a mag that showed how to drag data arrays off a webcam and then use them to read a digital multimeter's display. This used HTA files (insidious little beasties). I got it to the point of letting me chase the Guinea Pigs around their cage with little white boxes. The data coming off the webcam was "only" 288 x something, but the processing time was horrendous (frame-wise). In the end I selected magic points around the perimeter to kick off a reaction, and then only a fraction of the central points for motion; only at the chosen part of the prog was the full resolution brought back in.

    Face recognition is based around the "T" of the eyes and nose/mouth pattern. Guinea pigs, cute as they may be, didn't fulfil this, and SHE used to get narked if I tracked her.

  • LeonLeon Posts: 7,620
    edited 2010-01-01 16:47
    Dr_Acula,

    I used to work for Irisys (www.irisys.co.uk/) and we got a lot out of a 16x16 array with Kalman filtering and so on. It did have 256 levels of grey-scale, though.
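
For reference, the scalar (one-dimensional) Kalman filter is the simplest member of the family Leon mentions, smoothing one noisy pixel or track value at a time. This is a generic textbook filter, not Irisys's actual pipeline, and the noise variances q and r are illustrative guesses:

```python
# Scalar Kalman filter: predict (variance grows by process noise q),
# then update (blend in each measurement by the gain k, which weighs
# estimate variance p against measurement noise r).

def kalman_1d(measurements, q=1e-3, r=0.5):
    x, p = measurements[0], 1.0   # initial state estimate and variance
    out = [x]
    for z in measurements[1:]:
        p += q                    # predict: uncertainty grows
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update toward the measurement
        p *= (1 - k)              # updated estimate variance
        out.append(x)
    return out

noisy = [10.0, 10.4, 9.7, 10.2, 9.9, 10.1]
smooth = kalman_1d(noisy)
print(smooth[-1])  # settles near the true level of ~10
```

Run per-pixel over a 16x16 array, this kind of filtering is cheap enough that even a small micro can keep up, which fits the low-resolution-but-well-processed theme of this thread.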

    Leon

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    Amateur radio callsign: G1HSM
  • EricEric Posts: 11
    edited 2010-01-04 18:38
    This looks really interesting... can you show your setup? Have you found a source for a lens?