Any analog video summator schematics available? — Parallax Forums

Any analog video summator schematics available?

CuriousOneCuriousOne Posts: 931
edited 2013-05-10 02:22 in General Discussion
Hello.

I have bought, quite cheaply, four 1/4" Sharp CCD boards with lenses. They perform quite nicely, but I have an idea: align them all in one direction and sum the video signals to get better low-light visibility. I've checked the Maxim, Analog Devices, etc. websites but was unable to find an IC that can combine several analog video signals; only switches and muxes are available. Is there no such solution?

Comments

  • whickerwhicker Posts: 749
    edited 2013-05-09 00:30
    Reality doesn't work the way you're thinking.

    Although if you did want to use DSP practices to try to stitch together the overlapping images, it would be a fascinating project.
    Sort of like fly-eye vision... scratch that, sort of like spider-eye vision, to start with.
  • CuriousOneCuriousOne Posts: 931
    edited 2013-05-09 06:30
    Actually, I could design such a circuit from scratch, but it would take a while, which is why I asked; maybe there's a complete solution.

    DSP stacking is good, but it requires much more specialized knowledge.
  • LoopyBytelooseLoopyByteloose Posts: 12,537
    edited 2013-05-09 06:53
    Good luck, take your time. From what I've seen, it's really hard to find off-the-shelf chips for this kind of video application, and the devices often have no documentation.
  • kwinnkwinn Posts: 8,697
    edited 2013-05-09 09:09
    As whicker says in post 2, it really doesn't work that way. However, if you really wanted to sum video signals, you could run each signal through a resistor to the inverting input of a high-speed (video) op amp. The output would be the inverted sum of all the signals, including both the video and the DC levels, so it would have to be re-inverted and level-shifted to produce a proper video signal.

    For better low light sensitivity it may be simpler to use a lens that collects more light.
  • Phil Pilgrim (PhiPi)Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2013-05-09 09:55
    kwinn wrote:
    For better low light sensitivity it may be simpler to use a lens that collects more light.

    Ditto that.

    You can't just sum video signals and get a valid video signal as a result. First of all, they won't be synchronized with each other unless you spend a lot of money on cameras that accept an external sync. Second, even if the signals were in sync, only the video portion should be summed, not the syncs. And finally, you would ultimately be defeated by parallax, since the optical axes of the four lenses are not collinear. Once you adjusted things to get image convergence at one distance, subjects at other distances would be non-convergent.

    -Phil
  • LoopyBytelooseLoopyByteloose Posts: 12,537
    edited 2013-05-09 10:43
    Phil pointed out the fatal flaw: all the timing has to be the same, or you don't get a proper mix, since each camera provides information from a different location in its frame.
  • CuriousOneCuriousOne Posts: 931
    edited 2013-05-09 11:09
    Actually, the situation is not as bad as you think. First of all, all of these video processor ICs used in camera board modules have an external sync input, which is enabled either via an EEPROM config bit or via a specific pin. They also have a sync output, so one module can act as master for the others. And even sync separation, if desired, is not that hard to do; it has been done in TV sets for at least 50 years (the LM1880, just as an example). I think this is just my fault; I should have targeted a more analogue-oriented forum with such questions. Can anyone suggest one?
  • CuriousOneCuriousOne Posts: 931
    edited 2013-05-09 11:10
    Forgot to mention: I'm already using an F1.2 lens, the fastest lens available for CCTV.
  • LoopyBytelooseLoopyByteloose Posts: 12,537
    edited 2013-05-09 11:32
    Phil addressed several other issues besides the external sync.
  • Phil Pilgrim (PhiPi)Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2013-05-09 11:40
    The sync thing is still an issue. Although you can certainly sync whichever external processing chips you use, that doesn't help a bit if the cameras themselves are not in sync with each other. Therein lies the rub, and you simply cannot fix it with cheap board-level cameras unless you use expensive processor chips that include full-on RAM frame buffers with independent output clocking.

    -Phil
  • tonyp12tonyp12 Posts: 1,951
    edited 2013-05-09 11:48
    Sixteen lenses that, with software, can do many things, such as seeing a person through vegetation: because each lens views from a slightly different angle, the leaves can be eliminated.
    http://www.engadget.com/2013/05/02/pelican-imaging-array-camera-coming-2014/
  • jonesjones Posts: 281
    edited 2013-05-09 12:02
    Let me suggest a Watec 902H2 Ultimate monochrome video camera, which is a complete product, not terribly expensive and quite sensitive. If you can live with slower frame rate, the Samsung SCB-2000 is a color camera, allows for integration to achieve high sensitivity and costs even less. You're getting good advice, as you'll ultimately discover.
  • CuriousOneCuriousOne Posts: 931
    edited 2013-05-09 12:08
    What I'm trying to achieve is far beyond the capabilities of any relatively cheap (<$1000) imaging system. Currently, I've ordered a NOS intensifier tube from Russia; let's see how it arrives.
  • jonesjones Posts: 281
    edited 2013-05-09 13:13
    The cameras I mentioned are routinely used for astronomy with small telescopes and are surprisingly sensitive. What is it that you're trying to do?
  • CuriousOneCuriousOne Posts: 931
    edited 2013-05-09 13:17
    Long-distance day/nighttime wildlife observation without an IR illuminator or any artificial light source (most predators, unlike humans, are sensitive to IR and scared off by an IR beam). Astronomy widely uses image stacking and long exposures of up to 30 seconds, which is not acceptable for moving subjects such as wolves.
  • jonesjones Posts: 281
    edited 2013-05-09 20:53
    The Watec camera I mentioned has a sensitivity of roughly 0.0001 lux at f/1.2 and 30 fps. I can record 13th magnitude stars with a 14" telescope at f/4 with no stacking or integration. It's not enough for what you want to do, however, at least not without moonlight. Good luck with the intensifier and let us know how it turns out.
  • frank freedmanfrank freedman Posts: 1,983
    edited 2013-05-09 22:29
    <speculation>
    Depending on the actual CCD chip used, if they are externally clocked, you may be able to parallel them so that the charge in each bucket is clocked out of the same pixels at the same time, synchronously. You could sum these outputs in an op-amp circuit, but I think you would have to sum either the currents or convert the charge current to a voltage before summing in the op amp. That all assumes that none of the clock drivers is built into the CCD chip itself. It is one thing to sync multiple chips to get a higher-resolution scan of an image (with very specialized lens assemblies and software stitching trickery); it is quite another to try to overlap the same image. CCDs are discrete packets of charge laid out in rows and columns. If you are just a bit off in timing, the rows and columns will overlap, and I would guess the net effect would be the same as if you defocused an analog pickup tube and overscanned the same areas as you went.

    As for lenses, nightmare one would be lining up four separate lenses; Phil ran that one down. If you used one lens with a splitter to go four ways, odds are the losses would offset any gain from using four CCDs. Since the amount of light does not increase as it moves through the lens assembly, each CCD would see one quarter of the light entering the assembly. The result: a blurrier, lower-contrast image than one CCD with one lens.

    </speculation>

    The common clocking and lens assembly was one OEM's way to get higher resolution AND frame rates out of a medical-system camera based on two CCD chips.
  • Phil Pilgrim (PhiPi)Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2013-05-09 22:38
    Another thing to consider at low light is the signal-to-noise ratio. (Noise in a photosensor comes from thermal electrons.) Even assuming perfectly aligned images, four cameras will not yield four times the sensitivity at low light, since S/N improves only as the square root of the number of independent signal receptors. So the best ideal sensitivity increase with four cameras would be 2X, not 4X.

    You would probably get a better S/N improvement by cooling the sensor in one camera and amplifying the image portion of the video output.

    -Phil
  • CuriousOneCuriousOne Posts: 931
    edited 2013-05-10 02:22
    I've tried cooling. Sure, it helps, but it only gets really good at -40°C and below, which is quite power-consuming, dew-generating, and so on.

    Here's the datasheet for the CCD sensor used in normal-quality CCD camera boards:
    http://www.datasheetarchive.com/sony+icx639-datasheet.html
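kwinn's inverting-summer suggestion earlier in the thread can be sketched numerically. The resistor values below are illustrative assumptions, not a tested video design:

```python
# Ideal inverting summer, per kwinn's description: each input feeds
# the op amp's virtual-ground node through Rin, and
#   Vout = -(Rf / Rin) * (V1 + V2 + ... + Vn)
# Resistor values here are illustrative assumptions only.

def inverting_sum(inputs, r_in=1000.0, r_f=250.0):
    """Output of an ideal inverting summing amplifier (volts)."""
    return -(r_f / r_in) * sum(inputs)

# Four 0.7 V video levels; Rf/Rin = 1/4 keeps the sum in range,
# and a second inverting stage would restore normal polarity:
v_out = inverting_sum([0.7, 0.7, 0.7, 0.7])
print(v_out)  # about -0.7: inverted, so it needs re-inversion
```

As kwinn notes, a real circuit would also need level shifting, since the sync tips and blanking levels sum along with the video.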
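Phil's parallax objection can be quantified with a similar-triangles estimate. The baseline, focal length, and pixel pitch below are assumed values for illustration, not the specs of the actual boards:

```python
# Rough parallax estimate for side-by-side cameras. Disparity on the
# sensor is d = baseline * focal_length / subject_distance, converted
# to pixels via the pixel pitch. All numbers are assumptions.

def disparity_px(baseline_m, focal_m, distance_m, pixel_pitch_m):
    """Image misalignment, in pixels, between two parallel cameras."""
    return baseline_m * focal_m / distance_m / pixel_pitch_m

# Assume 40 mm lens spacing, 4 mm focal length, 6 um pixels:
for distance in (1.0, 5.0, 50.0):
    px = disparity_px(0.040, 0.004, distance, 6e-6)
    print(f"{distance:5.1f} m -> {px:6.1f} px of misalignment")
```

Under these assumptions, cameras converged at 50 m are misaligned by dozens of pixels for a subject at 1 m, which is Phil's point: convergence only holds at one distance.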
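frank's one-lens-plus-splitter scenario can also be checked with a simple photon budget. This is a shot-noise-only idealization; read noise, which would make the split case strictly worse, is ignored:

```python
import math

# Shot-noise-only budget for one lens + 4-way splitter feeding four
# CCDs whose summed outputs are compared against a single sensor
# receiving all the light. Idealized model, for illustration only.

def snr_single(photons):
    """One sensor receiving all the light; shot noise = sqrt(N)."""
    return photons / math.sqrt(photons)

def snr_split_summed(photons, n_sensors=4):
    """Light split n ways, the n sensor outputs summed afterwards."""
    per_sensor = photons / n_sensors           # splitter divides the light
    signal = per_sensor * n_sensors            # sums back to the original
    noise = math.sqrt(n_sensors * per_sensor)  # noises add in quadrature
    return signal / noise

print(snr_single(10_000))        # 100.0
print(snr_split_summed(10_000))  # 100.0 -- splitting buys nothing
```

So even before counting splitter losses, the four-CCD arrangement at best matches a single CCD behind the same lens, consistent with frank's conclusion.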
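Phil's square-root rule for independent receptors is easy to verify numerically: averaging four independent noisy measurements halves the residual noise, i.e. a 2X (not 4X) SNR gain. A minimal Monte Carlo sketch:

```python
import math
import random

# Monte Carlo check of the sqrt(N) rule for independent receptors.
# Purely illustrative; no camera model is implied.

random.seed(1)          # deterministic for reproducibility
N_SAMPLES = 50_000
SIGMA = 1.0             # per-camera noise, arbitrary units

def residual_noise(n_cams):
    """RMS noise left after averaging n_cams independent readings."""
    total = 0.0
    for _ in range(N_SAMPLES):
        avg = sum(random.gauss(0.0, SIGMA) for _ in range(n_cams)) / n_cams
        total += avg * avg
    return math.sqrt(total / N_SAMPLES)

one = residual_noise(1)
four = residual_noise(4)
print(f"noise ratio 1-cam / 4-cam: {one / four:.2f}")  # close to 2.0
```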