Interfacing CMOS camera MT9V022 — Parallax Forums

Interfacing CMOS camera MT9V022

janbjanb Posts: 74
edited 2011-11-03 09:45 in Propeller 1
Hi,
I spent all evening studying the documentation of this image sensor from
http://download.micron.com/pdf/product_brief/mt9v022_product_brief.pdf
and it looks like I see how to connect it to the Propeller, except for one line: SYSCLK (pin A5, page 13).

The documentation says the MT9V022 needs 26.6 MHz.
In the Parallax documentation I see only the X0 pin connected to the crystal - this will give only 5 MHz (I have a PropStick).
Could you advise me how I should generate the proper clock to feed the MT9V022?

BTW, has anyone played with it already?
Thanks
Jan

Comments

  • mirrormirror Posts: 322
    edited 2007-06-02 03:53
    Generating the 26.6MHz is probably the easy part.

    Dealing with the avalanche of data that gets pumped out of one of these devices...

    Even if you used the divide-by-4 mode (i.e. throw away the other 3 pixels) you've still only got 12 processing clocks to deal with each pixel.

    ShortLoop   mov  a, ina       ' That's 4 clocks
     
                jmp  #ShortLoop   ' That's another 4 clocks
    

    So, basically that leaves 4 clocks (1 instruction) to do some meaningful work!

    I think it would probably need some sort of external hardware framebuffer to do anything useful.

    I don't mean to sound so no-can-do, but interfacing direct to this chip using a propeller is a tough ask.
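    A back-of-envelope check of that clock budget (just the arithmetic in Python, assuming the usual Propeller 1 figures: 80 MHz system clock, 4 system clocks per ordinary PASM instruction):

    ```python
    # Back-of-envelope check of the per-pixel clock budget (assumes an
    # 80 MHz Propeller system clock, 4 clocks per ordinary PASM instruction).
    sysclk = 80_000_000
    pixclk = 26_600_000            # MT9V022 master/pixel clock
    kept_rate = pixclk / 4         # divide-by-4 mode keeps 1 pixel in 4

    clocks_per_pixel = sysclk / kept_rate
    print(round(clocks_per_pixel))           # 12 clocks per retained pixel

    loop = 4 + 4                             # the mov + jmp loop above
    print(round(clocks_per_pixel) - loop)    # 4 clocks = one spare instruction
    ```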





    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    It's not all that hard to count the number of grains of sand on the beach. The hardest part is making a start - after that it just takes time. Mirror - 15 May 2007
  • janbjanb Posts: 74
    edited 2007-06-02 14:08
    Hi,
    My goal is to develop something like the CMUcam, but eventually increase the frame rate and be able to process all 3 colors at the same time.
    To start with something simpler, I'd like to get X & Y profiles for one color at moderate resolution and frame rate.

    Yes, I'm concerned about data throughput as well.
    The specifications of the MT9V022 are:
    - full resolution: 752H x 480V = 361 kpix
    - frame rate: 60 fps
    - ADC resolution: 10 bit
    So to read it all one would need 21.7 Mpix/s at 10 bits each. This is out of the question.

    To reduce it I was thinking of doing the following:
    - use parallel data transfer and reduce 10 bits to 8 bits (OR the 2 pairs of higher bits) so I can use an 8-bit word for computations, sent to 8 Propeller pins
    - degrade sensor resolution by 4 in X and by 4 in Y, to 188H x 120V
    - skip frames (or 8-bit words) and capture effectively only 30 fps
    - use 4 cogs in sequence to read subsequent columns of the image
    - use 1 cog to control the chip via 2-Wire
    - use another cog to generate this 20 MHz clock (this is what you have suggested, right?)

    If I did my math correctly, this would significantly reduce the throughput, to ~170 kHz of 8-bit data per cog.

    I expect to use 6 cogs and have ~350 clock ticks to accumulate 2 counters (for the X and Y sums) using the 8-bit input.
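    A quick Python sanity check of those rates (figures taken from the numbers above, 80 MHz Propeller system clock assumed):

    ```python
    # Sanity check of the data-rate estimates above.
    full_rate = 752 * 480 * 60                    # full-res pixel rate
    print(full_rate)                              # ~21.7 Mpix/s

    reduced_rate = (752 // 4) * (480 // 4) * 30   # 188 x 120 @ 30 fps
    per_cog = reduced_rate // 4                   # 4 cogs reading in rotation
    print(reduced_rate, per_cog)                  # 676800 pix/s, 169200 (~170 kHz) per cog

    clocks_per_pixel = 80_000_000 / per_cog       # system clocks per pixel per cog
    print(round(clocks_per_pixel))                # a few hundred ticks of headroom
    ```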

    Does it all make sense, or did I miss something important?
    Could I try to do it in Spin?

    Thanks for all suggestions so far
    Jan
  • simonlsimonl Posts: 866
    edited 2007-06-03 09:35
    This is way over my head, but I'm thinking you _may_ be able to reduce your cog count by one: I believe you can use one of the counters in one of your other cogs to get the sync frequency you're after...

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    Cheers,

    Simon
    www.norfolkhelicopterclub.co.uk
    You'll always have as many take-offs as landings, the trick is to be sure you can take-off again ;-)
    BTW: I type as I'm thinking, so please don't take any offense at my writing style :)
  • mirrormirror Posts: 322
    edited 2007-06-03 22:17
    Hi Jan,

    After I sent my post I got to doing some more thinking. I don't really have any need for this, but I do think those image grabbers are pretty cool.

    If you were going to do it, then cogs-per-column would be the way to do it. By the way, what do you want to do once you've got the data? Obviously the data will still be pumping through the chip at multiple megabytes per second.

    As to using Spin - not a chance. This will require some very tight assembly language. You could probably do the 2-wire interface in Spin, though.

    The 26.6 MHz requirement should be quite possible - look at the counters tutorial - it's in the sticky threads at the top of the forum. Using the PLL you should be able to get the correct frequency. I'd probably suggest trying 26.5 MHz instead - it wouldn't pull the sensor too far out of spec and is a closer multiple of 5 MHz (which would make it easier to sync your assembly code to the pixel clock).
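    For reference, the arithmetic behind cog-counter frequency synthesis (as in the Frequency Synthesizer demo object) can be sketched like this. Caveat: this is the plain NCO accumulator view; PLL mode adds a x16 VCO and output divider on top, but the accumulator arithmetic is the same:

    ```python
    # Sketch of cog-counter (NCO) frequency synthesis on an 80 MHz Propeller:
    # each system clock, FRQx is added into the 32-bit phase accumulator PHSx,
    # so f_out = clkfreq * frq / 2^32.
    clkfreq = 80_000_000

    def frq_for(f_target):
        """Closest 32-bit FRQ register value for a target output frequency."""
        return round(f_target / clkfreq * 2**32)

    for f in (26_600_000, 26_500_000, 20_000_000):
        frq = frq_for(f)
        achieved = clkfreq * frq / 2**32
        print(f, hex(frq), achieved)   # error is a tiny fraction of a hertz

    # Note: 20 MHz / 80 MHz = 1/4 exactly, so frq is exactly 2^30 and the
    # accumulator rolls over perfectly evenly; ratios that aren't neat binary
    # fractions produce fractional-accumulation jitter on the output edges.
    ```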

    Good luck.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    It's not all that hard to count the number of grains of sand on the beach. The hardest part is making a start - after that it just takes time. Mirror - 15 May 2007
  • Graham StablerGraham Stabler Posts: 2,510
    edited 2007-06-04 01:01
    I have a few of these chips. I intended to use the Propeller in a basic supervisor role while a CPLD (Xilinx CoolRunner-II) did the fast stuff, probably feeding the data into some RAM. I'm distracted by other projects at the moment, but I do have the CoolRunner starter kit, which is only $50.

    Graham
  • janbjanb Posts: 74
    edited 2007-06-08 02:34
    Hi Guys,
    thanks for all the advice. I did some more thinking and homework.
    1) I used the 'Frequency Synthesizer demo' object and (with the help of an expert) looked at the
    time jitter of different generated clock frequencies using a scope.

    freq (MHz)   jitter (ps)
    26.6         406
    26.5         167
    26.0          26
    20.0          40

    Since the MT9V022 writeup gives no lower limit on the clock frequency, I think
    it would be most convenient to reduce it from the recommended 26.6 MHz to 20 MHz:
    * it would reduce the jitter to a manageable level
    * I'd have exactly 4 clocks in a cog to read one value:
    >  mov a, INA      ' That's 4 clocks

    Attached are 2 diagrams:
    - the intended wiring between the image sensor & Propeller
    - a possible task division among cogs. Here I used the suggestion to generate the 20 MHz clock and run the 2-Wire connection on the same cog.
    So far it all looks like it may work.
    Any comments are welcome.
    Thanks
    Jan
  • Graham StablerGraham Stabler Posts: 2,510
    edited 2007-06-08 10:20
    You can read one value but then what? You don't have time to loop or increment a counter or anything.

    Graham
  • janbjanb Posts: 74
    edited 2007-06-09 00:11
    Hi,
    I'd use multiple cogs to sample pixels from multiple frames.

    With the resolution degraded by 4 in x & y the image sensor is sending a few hundred frames per second. I need only ~30.

    Let's assume one cog needs up to 24 clocks @ 80 MHz to process one pixel. During this time the image sensor would send 6 pixels @ 20 MHz. I can't get them all.

    Assume I devote 2 cogs (#1,#2) for pixel processing. Cog #0 is the master.

    As shown on the diagram below, I'd use cog0 to calculate CNT values matched to
    pixels 1 and 4 (of the current image frame) and place them in the hub memory.

    cog1
    ^^^^^
    would pick the value of cnt1 at the end of processing the previous pixel and
    suspend until the master cnt value matches cnt1. Then it would pick the current pixel data and process it for as long as it takes (assumed here to be below 24 ticks). At the end it would pick the new value of cnt1 (calculated in the meantime by cog0) and go dormant again.


    Cog2
    ^^^^^
    would do the same, except it would synchronize its wake-up with another value, cnt2.


    As you see, if cog0 calculates the new values of cnt1 and cnt2 with a predefined phase, cog1 & cog2 would pick certain pixels from the current frame with a predefined distance between pixels.

    For the next image frame cog0 would match cnt1 & cnt2 to other pixels. After several frames I'd have sampled all pixels of the image.

    I'd use 4 cogs to sample pixels, not just 2 as in this example.
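    The scheme can be modelled in a few lines of Python (a toy sketch - cog count, stride, and frame size here are illustrative, not from real code):

    ```python
    # Toy model of the scheme above: cog0 computes CNT wake-up targets, and
    # each sampling cog wakes at its target, grabs one pixel, then sleeps.
    COGS = 2                 # sampling cogs (cog1, cog2); the plan scales to 4
    STRIDE = 3               # pixels between neighbouring cogs' wake-ups
    PIXELS_PER_FRAME = 12    # tiny toy frame
    PIXEL_PERIOD = 4         # system clocks per pixel (20 MHz pixels, 80 MHz sysclk)

    def samples(offset):
        """Pixels captured in one frame when each cog wakes every COGS*STRIDE
        pixels, with the whole pattern shifted by a per-frame offset."""
        got = []
        for cog in range(COGS):
            p = offset + cog * STRIDE
            while p < PIXELS_PER_FRAME:
                got.append(p)
                p += COGS * STRIDE
        return sorted(got)

    # cog0 shifts the offset each frame, so successive frames fill the gaps:
    print(samples(0), samples(1), samples(2))   # together: all 12 pixels

    # The CNT targets cog0 would publish for frame 0 (relative to frame start):
    print([p * PIXEL_PERIOD for p in samples(0)])
    ```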

    Jan
  • rjo_rjo_ Posts: 1,825
    edited 2007-06-10 08:10
    Jan,

    I'm still trying to get the entire system in my head at one time...so, this is not exactly a "newbie" comment ... but close.

    I don't see why you don't use the same approach but with multiple Props ... all sync'd to the same clock. You would multiply the available RAM, etc., and have cogs left over for processing.
    AND if you offset the clocks to each Prop... it seems to me that you might be able to synthesize a full color video signal.

    Your approach seems entirely reasonable... and I am interested in seeing how far you can push it.

    Rich
  • deBoogledeBoogle Posts: 1
    edited 2011-11-02 07:50
    janb wrote: »
    Hi,
    I'd use multiple cogs to sample pixels from multiple frames. [...] I'd use 4 cogs to sample pixels, not just 2 as in this example.

    Jan

    This seems like a unique and great solution. BUT... I have been using the Omnivision camera SoCs for a while, and there is a lag between the clock fed to the sensor and the pixel clock coming out. Also, there are PLLs in the sensor that multiply the input clock up & down, giving a different clock out to what you put in, plus the lag. This can be accounted for - it's just an offset which can be determined empirically or mathematically.

    Where you will have a problem is in the fact that no PLL/OSC/CLK is 100% stable; the clocks drift, so you may not 'sample' exactly when you think you are going to sample. This is why the pixel clock is provided by the image sensor: it tells you exactly when to sample the pins to get the pixel data. That is 100% accurate.

    Maybe the solution would be to count the pixel clocks from the sensor and use a COG to sync other COGs for reading each pixel based on this count rather than an offset derived from the expected pixel arrival time.
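    The drift problem is easy to see with made-up numbers (a toy illustration, not measured data):

    ```python
    # Toy illustration of the drift point: suppose the sensor's clock runs
    # 100 ppm fast relative to the Propeller's idea of time (numbers made up).
    drift = 1.0001          # sensor clock ratio: 100 ppm fast
    period = 50.0           # nominal pixel period in ns (20 MHz pixel clock)

    def schedule_error_ns(n):
        """How far off a fixed-schedule sampler is by pixel n."""
        actual_arrival = n * period / drift   # when pixel n really arrives
        return n * period - actual_arrival    # predicted minus actual

    print(schedule_error_ns(100))      # still tiny after 100 pixels
    print(schedule_error_ns(10_000))   # ~a full pixel period: wrong pixel
    ```

    Sampling on the sensor's own PCLK edge sidesteps this entirely, since the reference and the data come from the same clock.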

    Hope this is a useful point.

    deBoogle
  • lanternfishlanternfish Posts: 366
    edited 2011-11-03 01:18
    The discussion in this thread may be helpful. It was around a (since abandoned) project I was working on that involved SVGA video capture.
  • KyeKye Posts: 2,200
    edited 2011-11-03 08:34
    Okay, this is what you do.

    1: Run the Propeller chip at 100 MHz or 96 MHz. You can then easily get a very low jitter 25 MHz or 24 MHz output using a cog counter - the counter will drive the camera.
    2: Use ASM code with waitpne/waitpeq aligned to the pixel clock output of the camera to capture data.
    3: Put the D0-D# lines from the camera on P0-P# on the Propeller chip.
    4: You then latch data like this:

    WAITPEQ pclk, pclk
    MOV temp, ina
    WAITPNE pclk, pclk

    This will wait for the clock to go high... sample the data... and then wait for the clock to go low. If you want faster code, drop the WAITPNE statement and never wait for the clock to go low. This works as long as the clock goes low before the next time you execute the WAITPEQ statement - this technique will work with an 8 MHz pixel clock. If you use a 4.8 MHz or lower pixel clock you must use the WAITPNE statement.

    The rest of the code is up to you. You will need to wrap all the control logic that you think you will need around the above code. Also note that at a high speed pixel clock you may need to drop pixels to get things to work.

    Below is some code I wrote to do this for an Omnivision 9665 sensor - it should get you thinking. The sensor has a PCLK and HREF line and D0-D7. The HREF line goes high when the data is valid and low when the data is not valid. The PCLK runs all the time. Data appears on the output bus when the PCLK line goes high.

    This code takes an 8 MHz PCLK at 640 pixels and downsamples it to 160 pixels, storing them in a buffer for future processing. Each pixel is made up of two bytes that appear right after each other on the data bus.

    ' //////////////////////Next Loop//////////////////////////////////////////////////////////////////////////////////////////////
                          
                            mov     LNRowCounter,           #_CAMERA_V_WIN               ' Setup for the next frame.
                            waitpeq LNVSYNCMask,            LNVSYNCMask                  '
                                                   
    LNGrabberLineLoop       mov     LNColumnCounter,        #(_CAMERA_H_WIN / 2)         ' Setup for the pixel storage loop.
                            movd    LNStorePixelModify,     #(LNPixelStorage - 1)        '
    ' //////////////////////Store Pixel Unlocked/////////////////////////////////////////////////////////////////////////////////// 
                            
                            waitpne LNHREFMask,             LNHREFMask                   ' Wait for frame end.
                            mov     phsa,                   #0                           '
    LNStorePixelLoop        waitpeq LNHREFMaskAndPCLKMask,  LNHREFMaskAndPCLKMask        ' Grab a pixel.
                            mov     LNFirstPartBuffer,      ina                          '
                            waitpeq LNHREFMaskAndPCLKMask,  LNHREFMaskAndPCLKMask        '
                            mov     LNSecondPartBuffer,     ina                          '
                                
                            waitpeq LNHREFMaskAndPCLKMask,  LNHREFMaskAndPCLKMask        ' Mask that pixel.
                            and     LNFirstPartBuffer,      #$FF                         '
                            waitpeq LNHREFMaskAndPCLKMask,  LNHREFMaskAndPCLKMask        ' 
                            and     LNSecondPartBuffer,     #$FF                         ' 
                                                           
                            waitpeq LNHREFMaskAndPCLKMask,  LNHREFMaskAndPCLKMask        ' Shift that pixel.
                            shl     LNFirstPartBuffer,      #16                          '
                            waitpeq LNHREFMaskAndPCLKMask,  LNHREFMaskAndPCLKMask        '
                            shl     LNSecondPartBuffer,     #8                           '
                            waitpeq LNHREFMaskAndPCLKMask,  LNHREFMaskAndPCLKMask        ' Combine that pixel.
                            or      LNFirstPartBuffer,      LNSecondPartBuffer           ' 
                            waitpeq LNHREFMaskAndPCLKMask,  LNHREFMaskAndPCLKMask        '
                            add     LNStorePixelModify,     LNDestinationIncrement       ' 
                            
                            waitpeq LNHREFMaskAndPCLKMask,  LNHREFMaskAndPCLKMask        ' Grab another pixel.
                            mov     LNThirdPartBuffer,      ina                          '
                            waitpeq LNHREFMaskAndPCLKMask,  LNHREFMaskAndPCLKMask        '
                            mov     LNFourthPartBuffer,     ina                          '
                            
                            waitpeq LNHREFMaskAndPCLKMask,  LNHREFMaskAndPCLKMask        ' Shift and mask another pixel.
                            and     LNThirdPartBuffer,      #$FF                         '
                            waitpeq LNHREFMaskAndPCLKMask,  LNHREFMaskAndPCLKMask        '
                            shl     LNFourthPartBuffer,     #24                          '
                            waitpeq LNHREFMaskAndPCLKMask,  LNHREFMaskAndPCLKMask        ' Combine those pixels.   
                            or      LNThirdPartBuffer,      LNFourthPartBuffer           '
                            waitpeq LNHREFMaskAndPCLKMask,  LNHREFMaskAndPCLKMask        '
                            or      LNFirstPartBuffer,      LNThirdPartBuffer            '
                            waitpeq LNHREFMaskAndPCLKMask,  LNHREFMaskAndPCLKMask        ' Store [4 1 2 3]. ROR 8 = [3 4 1 2].
    LNStorePixelModify      mov     (LNPixelStorage - 1),   LNFirstPartBuffer            ' 
                            waitpeq LNHREFMaskAndPCLKMask,  LNHREFMaskAndPCLKMask        '                   
                            djnz    LNColumnCounter,        #LNStorePixelLoop            '
                            
    ' //////////////////////Process Data///////////////////////////////////////////////////////////////////////////////////////////
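    The byte gymnastics in the store section can be modelled in a few lines of Python (a sketch to show the packing order, not part of the driver):

    ```python
    # Model of the byte packing in the PASM loop above: four successive bytes
    # from the data bus are packed into one long as [4 1 2 3], which a later
    # ROR 8 turns into [3 4 1 2], as the code comments note.
    def pack(b1, b2, b3, b4):
        """Mirror the shl/and/or sequence in the capture loop."""
        return ((b4 << 24) | (b1 << 16) | (b2 << 8) | b3) & 0xFFFFFFFF

    def ror32(x, n):
        """32-bit rotate right, like the PASM ROR instruction."""
        return ((x >> n) | (x << (32 - n))) & 0xFFFFFFFF

    word = pack(0x11, 0x22, 0x33, 0x44)
    print(hex(word))            # 0x44112233 -> bytes [4 1 2 3]
    print(hex(ror32(word, 8)))  # 0x33441122 -> bytes [3 4 1 2]
    ```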
    
    


    Thanks, good luck,
Sign In or Register to comment.