Specific analysis of video signal in real time, using prop, possible? — Parallax Forums


CuriousOneCuriousOne Posts: 931
edited 2013-05-05 05:29 in Propeller 1
I want to do the following: live video is fed into the prop via composite video in. There is a user-defined area in which changes in the image, compared to the whole frame area, should be tracked: changes in brightness, motion, or whatever. Is this possible, or is a more powerful chip needed?

Comments

  • kwinnkwinn Posts: 8,697
    edited 2013-05-04 23:44
    This might be possible for small areas at reduced frame rates, but the prop is not fast enough to do this at the full frame rate, nor does it have enough memory to store a full frame.
  • CuriousOneCuriousOne Posts: 931
    edited 2013-05-05 00:08
    Forgot to mention: the input video will be black and white, so no color analysis is needed. And regarding the area: say the monitored area is 32x32 pixels. To detect a change from the surroundings, it might be enough to analyse, say, a 48x48 pixel zone around the 32x32 one, so there is no need for full frame storage.
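    A minimal sketch of the zone comparison described above (all names and the threshold value are hypothetical, and this ignores how the pixels would actually be digitized on the prop): compare the mean brightness of the inner 32x32 window against the enclosing 48x48 zone and flag a difference when they diverge.

    ```python
    # Hypothetical sketch: flag a change when the inner 32x32 window's mean
    # brightness diverges from the surrounding 48x48 zone's mean. Pure
    # Python; a frame is a list of rows of 8-bit pixel values.

    def zone_mean(frame, x0, y0, size):
        """Average pixel value of a size x size square starting at (x0, y0)."""
        total = 0
        for y in range(y0, y0 + size):
            for x in range(x0, x0 + size):
                total += frame[y][x]
        return total / (size * size)

    def difference_detected(frame, x0, y0, threshold=16):
        """Compare the inner 32x32 window to the enclosing 48x48 zone."""
        inner = zone_mean(frame, x0 + 8, y0 + 8, 32)
        outer = zone_mean(frame, x0, y0, 48)
        return abs(inner - outer) > threshold
    ```

    Note the outer mean includes the inner pixels, which dilutes the contrast somewhat; comparing the inner window against only the surrounding ring would be more sensitive.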
  • frank freedmanfrank freedman Posts: 1,983
    edited 2013-05-05 00:22
    Actually, you may be able to do this, provided the prop is limited to control and parameterization of the video comparison system. Hope my math is right; I have not done this in over 10 years....

    Assuming NTSC composite video at normal frame rates (30 f/s, 2 fields/frame => 60 fields/s) => ~16.7 ms/field. 525 lines/frame => 262.5 lines/field => ~63.5 us/line; at roughly 525 samples/line that is ~121 ns/pixel, so about 8 MHz bandwidth. It is actually higher if you only sample the active video between the hsync pulses, which I lumped in with the line time. You also will not use all 525 lines, as vblank eats some of them. But that gives you about 10 clock cycles per pixel at best to directly interact with the incoming signal, fetch the stored previous frame, and compare them. Not with the Prop I.

    In the '90s I was training field engineers on a video processing system that was controlled by a microprocessor (M68K series). The boards in the video chain were 24" square or more of LSI/MSI glue logic and many ASICs developed for pipeline processing of high-line-rate video (1024 lines/frame at 30 f/s). The video would come in, be captured, and then be edge/contrast enhanced, subtracted from a mask (all in memory at real-time rates), motion detected and averaged to smooth the image, mixed with graphics, and redisplayed on the live monitors. The max capture rate on one system was 30 f/s; its companion for another application was 6 f/s. All through pipeline logic, all controlled by the MCU. Nowadays it can be done in near real time, but it still requires specialized circuits, still under CPU control.

    Sounds like a fun project, but one that may give the controller a fit if this is in the hobby budget....

    FF

    @CuriousOne, just saw your last note after posting. You may be able to get away with this by using the counters to set up and define the timing that determines the sampling area. Unfortunately, the timing remains the same as listed above if you are doing this in real time. One big challenge will be digitizing the samples to work with. Though it occurs to me that you could use the prop counters and some sample-and-hold circuits to capture an average of one zone, then compare it with, say, the inner sampled area (selected again using similar switching and hold) through a high-speed analog comparator and/or peak detector that sends the prop a "difference exists" signal. But to do bit-for-bit comparisons, take out your wallet and open wide... no, wider....... wwwwiiiiiiddddeeeeerrrr.... (Sorry, that last line was my smart @$$ streak showing itself very late at night. Forgive me if offended.)
  • CuriousOneCuriousOne Posts: 931
    edited 2013-05-05 03:43
    Well, my $30 quad processor does all that, and it uses 3 ICs (two are ASICs and one is 64 Mbit RAM), so I thought it should be easy for the prop, too.
  • Brian FairchildBrian Fairchild Posts: 549
    edited 2013-05-05 05:29
    CuriousOne wrote: »
    ...no need for full frame storage.

    But there is.

    Consider the case where your target area moves to the left in the frame; that is, in effect, earlier in time. As no one has invented the negative delay, this means you have to store the complete field/frame so that you can do the comparison (one field/frame minus the amount of movement) later.
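    The point above can be sketched in a few lines (names and the threshold are hypothetical): any change or motion detection is a comparison between the incoming field and a buffered copy of the previous one, and since leftward motion puts a pixel earlier in scan order, the reference data must already be in memory when the new pixel arrives.

    ```python
    # Minimal sketch: motion/change detection needs the previous field
    # buffered. A field is a list of scan lines of pixel values.

    def field_difference(prev_field, curr_field, threshold=16):
        """Count pixels whose brightness changed by more than threshold."""
        changed = 0
        for prev_line, curr_line in zip(prev_field, curr_field):
            for p, c in zip(prev_line, curr_line):
                if abs(p - c) > threshold:
                    changed += 1
        return changed
    ```

    Even restricted to a 48x48 zone, `prev_field` is a stored copy of that zone from the previous field; the comparison cannot be done against pixels that have not arrived yet.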