
PropCAM: A Propeller imaging camera AVAILABLE NOW!

Phil Pilgrim (PhiPi) Posts: 23,514
edited 2014-01-05 17:50 in Propeller 1
Attached are some photos taken of (and with) an image capture device I've developed for the Propeller. I call it the PropCAM. It's a black-and-white video camera having 128x96-pixel resolution and capable of acquiring up to 30 fps of 4-bit grayscale image data. It interfaces to the Propeller via ten pins: +3.3V, Gnd, and A0-A7.
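
For the curious, the per-frame RAM arithmetic works out neatly. A quick Spin CON block (the constant names here are just placeholders, not final driver names):

  CON
    IMG_W     = 128                       ' pixels per line
    IMG_H     = 96                        ' lines per frame
    BPP       = 4                         ' bits per pixel (16 gray levels)
    FRAME_LEN = IMG_W * IMG_H * BPP / 8   ' 6_144 bytes, i.e. 6KB per frame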

With the Spin and assembly code I'm developing for it, it is capable of capturing images in the following modes:
  • Single snapshot
  • Frame grab from a continuous exposure sequence
  • Capture at a fixed time interval (1/30 sec. or more)
  • Capture synchronized to video output (for a live, flicker-free display)
Exposure modes are as follows:
  • Fixed exposure time and gain
  • Auto exposure time and fixed gain
  • Fixed exposure time and auto gain
  • Auto exposure time and gain
Unlike most image sensors, which use "rolling" exposures, the PropCAM exposes all of its pixels simultaneously. This ensures an accurate representation of moving objects, without the tearing or skewing that rolling shutters usually produce. Also, the integration (exposure) time is independent of the exposure interval. In fact, the camera can idle indefinitely, waiting to synchronize a single exposure to an external event.
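
As a sketch of that last point, here's roughly how an event-synchronized grab might look from Spin. (The "propcam" object name and its snapshot method are hypothetical stand-ins for the driver, which is still in development; P16 is an arbitrary choice of trigger pin.)

  OBJ
    cam : "propcam"                       ' hypothetical driver object

  VAR
    byte frame[6144]                      ' 128 x 96 pixels at 4 bits each

  PUB wait_and_grab
    dira[16]~                             ' P16 as the trigger input
    waitpeq(|< 16, |< 16, 0)              ' idle indefinitely until the trigger goes high
    cam.snapshot(@frame)                  ' then expose and capture a single frame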

For display, the software uses a grayscale buffer which is overlaid upon output from another video driver, from which it derives its sync. The attached photos show capture output on top of text from Chip's TV_text object. Any video driver can theoretically be used, but memory usage will probably constrain one to the text display objects. In order to facilitate additional graphics, such as intensity histograms, the grayscale overlay area can be made larger than the camera's image size. The camera image can then be placed anywhere inside this grayscale "window", with the margin areas being used for additional graphics. It will also be possible to draw on top of the camera image itself.

The grayscale display can be shown at its native size or at double size (line-doubled and stretched). The attached photos show one example of each. In the native-scale display, the captured image is shown centered in a larger window with a gray surround. The grayscale display uses a separate cog, which runs CTRA in its DUTY mode and ORs the counter's output onto one of the video pins. (Thanks to Chip for the suggestion!) This will work with either the PropSTICK or the Demo Board and requires only the addition of a 220pF cap across the video output to filter the high-frequency hash from CTRA. The aspect ratio has been adjusted to come as close to square pixels as possible.
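
For those curious about the counter setup, here's a minimal sketch of the DUTY-mode trick on its own (my method names, not the display driver's; the real driver does much more, including the per-pixel timing):

  PUB start_gray(pin)
    ctra := %00110 << 26 | pin            ' counter A in DUTY mode (single-ended) on "pin"
    frqa := 0                             ' duty cycle = frqa / 2^32; start fully low
    dira[pin]~~                           ' drive the pin; the 220pF cap smooths the pulses

  PUB set_gray(level)
    frqa := level << 28                   ' map a 4-bit gray level (0-15) onto the duty range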

Yet to be written are the image processing routines that will turn the PropCAM+Propeller into a real machine vision system. They're coming, but it'll take time.

My plans are to have these units produced for sale, if there's sufficient demand. (Unlike the PropSTICK, they use SMD devices, so can't be made into a kit.) The boards are designed to plug into a product yet to be announced, but they'll be available prior to that with a cable similar to the one shown in the photos for easy interface to either the Demo Board or PropSTICK.

Cheers!
Phil
[Four photo attachments]

Comments

  • Mike Green Posts: 23,101
    edited 2006-06-17 23:23
    I'd be interested in one, please put me on your list.
    Michael
  • El Paisa Posts: 375
    edited 2006-06-18 01:14
    PhiPi:
    That's really, really good work.
    Keep us posted on further development.
    Please keep me on the list to buy your product.
  • Kaos Kidd Posts: 614
    edited 2006-06-18 03:45
    I'll be up for one!
    Awesome!

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    Just tossing my two bits worth into the bit bucket


    KK
  • cgracey Posts: 14,155
    edited 2006-06-18 05:23
    Phil,

    This is very neat!!! This will lead to a whole new class of applications. I remember the old thread "Can the Propeller show a human face on a TV?" (or something like that). Well, it can now. And it takes only 6KB of RAM per picture!

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔


    Chip Gracey
    Parallax, Inc.
  • Andrew Smith Posts: 18
    edited 2006-06-18 07:22
    Looks way Cool!!!

    Put me on the list for at least one!

    Thx!!!

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    Andrew
  • IAI-Captain Posts: 33
    edited 2006-06-18 07:55
    Yes! Thank you Phil!

    Wow!

    Nice and simple.

    I've been looking for a CMOS or CCD imaging device/IC to do visual processing. Stereo vision is what I've been hoping for, so!

    Sign me up for TWO (2).

    But I have a few Questions:

    Can more than one Propcam be interfaced at the same time?

    Are two or more CMOS devices too much for system resources?

    How many cogs are needed to get the data? (Or involved in the I/O?)

    Would I just need to start a new cog and run the same code using a second set of seven pins, say A8-A12? (If I were using two PropCams?)

    Once the image data is loaded into the Propeller, can the same cog(s) doing the I/O also do the image processing, or would additional cogs be needed?

    How critical is the timing, in terms of sync?

    But most of all: When? How much?
    Did I say when????? When! :)

    Oh yeah! What about color? Would a color version be lurking just over the next hill? Maybe with enough Propellers we can fly there!

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    IAI (Indigenous Alien Intelligence),
    The New View on Machine Intelligence.
    Because There is nothing Artificial about it!
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2006-06-18 17:23
    IAI-Captain said...

    Can more than one Propcam be interfaced at the same time?

    Two of them could likely be accommodated, but I haven't tried it yet.

    Are two or more CMOS devices too much for system resources?

    The RAM requirements double from 6K to 12K. But that's within reason.

    How many cogs are needed to get the data? (Or involved in the I/O?)

    It depends on the mode. In any event, two cogs are used to read the data and stuff it into RAM. If one of the continuous exposure modes is selected, another cog is launched to manage that.

    Would I just need to start a new cog and run the same code using a second set of seven pins, say A8-A12? (If I were using two PropCams?)

    No additional cogs would be necessary, but the data capture code would need to be altered slightly to read from both cameras. The data lines would need to be rearranged a bit so that the four data lines from each camera would reside at A0-A7 (i.e. A0-A3 for camera 1, and A4-A7 for camera 2). The four control lines could probably be paralleled, resulting in 12 lines total.
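
    To make that concrete, here's a sketch of the per-sample read under that arrangement (variable names are illustrative only):

      VAR
        byte left_pix, right_pix

      PUB sample_both | port
        port      := ina & $FF            ' one read of A0-A7 samples both cameras at once
        left_pix  := port & $0F           ' camera 1's nibble on A0-A3
        right_pix := port >> 4            ' camera 2's nibble on A4-A7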

    Once the image data is loaded into the Propeller, can the same cog(s) doing the I/O also do the image processing, or would additional cogs be needed?

    If you're only taking the occasional snapshot, the data capture cogs can be stopped. I doubt that this would be necessary, though, as there are still a couple cogs available to do image processing.

    How critical is the timing, in terms of sync?

    You determine the synchronization. In the scenario I've outlined above, both cameras would be exposed exactly simultaneously.

    But most of all: When? How much?

    Soon, I hope. I have to arrange for the board fab and assembly, which does take time. If it's done Stateside, this could happen quickly, but will be more expensive. If done offshore, it takes longer, but will be cheaper. I know that's pretty vague, but I'll try to flesh out a better answer as I know more...

    Oh yeah! What about color? Would a color version be lurking just over the next hill? Maybe with enough Propellers we can fly there!

    Sure, you can have any color you want! Edmund Scientific sells plastic filters that can be used over the lens to obtain contrast between different-colored objects. (Ohhh! You meant RGB color! Maybe later. If each pixel is assigned an RGB value, the memory requirements would triple. However, it may be possible to load just the raw Bayer-pattern data and infer the color of each pixel from its nearest neighbors. But at 128x96 pixels, this kind of interpolation might yield a pretty coarse image.)

    Just as interesting as capturing stereo images is displaying them. Using the gray-level display techniques developed so far, it would be rather easy to drive a pair of LCD shutter glasses with alternate interlaced video fields — without losing any resolution, since the data are already duped between fields. A red/blue or red/cyan anaglyphic display would take more work, since the CTRA method of displaying grayscale images can't generate individual colors. But other display methods are under consideration and could yield results. Stay tuned!

    -Phil

    Post Edited (Phil Pilgrim (PhiPi)) : 6/18/2006 7:30:12 PM GMT
  • Michael OBannon Posts: 19
    edited 2006-06-18 19:16
    Phil,

    Keep me on your list for the PropCam! I have just the application for several of these.

    Thanks!

    Michael
  • cgracey Posts: 14,155
    edited 2006-06-18 20:15
    Phil Pilgrim (PhiPi) said...

    Just as interesting as capturing stereo images is displaying them. Using the gray-level display techniques developed so far, it would be rather easy to drive a pair of LCD shutter glasses with alternate interlaced video fields — without losing any resolution, since the data are already duped between fields. A red/blue or red/cyan anaglyphic display would take more work, since the CTRA method of displaying grayscale images can't generate individual colors. But other display methods are under consideration and could yield results. Stay tuned!

    -Phil
    Phil,

    I've been playing with 3D on the Propeller for a few days now and I've come to the conclusion that red/blue anaglyph is the way to go for a few reasons:

    Shutter glasses are expensive and require CRT monitors, which are fast disappearing. LCD monitors buffer data and re-scan it at their own rate, so shutter glasses don't work on these. TVs are mainly CRTs, but you can only get 30Hz per eye, which is very flickery.

    Anaglyph works on all monitors, and even printed materials. It uses red and blue lenses to block blue and red light, respectively. There are also red/cyan anaglyphic combos which let more RGB color through to the brain (the brain sums the left and right eyes' colors). These are good for color, but if you want the optimal anaglyph experience, use red and blue in monochrome. That's the tack I'm taking with the Propeller. Red and blue plus dark grey, along with a white background, makes for some dramatic depth effects (minus the color). Since the Propeller has hardware support for 4-color mode (2 bits per pixel, where you can pick the color for each combo), we've got exactly what we need. It goes like this:

    %00 = white (background)
    %01 = blue (dark grey to left eye with red lens, invisible to right eye with blue lens)
    %10 = red (dark grey to right eye with blue lens, invisible to left eye with red lens)
    %11 = dark grey (dark grey to both eyes)

    Notice that to draw a 3D image, you just clear all pixels to %00, draw the left eye image by OR'ing %01 into pixels, then draw the right eye image by OR'ing %10 into pixels. The pixels that wind up with both bits set appear as dark grey to both eyes. Pretty simple!
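
    In code, with a packed 2-bit-per-pixel bitmap, the whole drawing step is one OR per pixel. A minimal sketch (the buffer size and names are just for illustration):

      CON
        SCRN_W = 256                      ' example display width in pixels

      VAR
        byte bitmap[SCRN_W * 192 / 4]     ' 2 bits per pixel, 4 pixels per byte

      PUB plot(x, y, eyebits) | i
        i := y * SCRN_W + x               ' linear pixel index
        bitmap[i >> 2] |= eyebits << ((i & 3) << 1)   ' OR %01 or %10 into the pixel's 2 bits

    Call plot(x, y, %01) for every left-eye pixel and plot(x, y, %10) for every right-eye pixel; anywhere the images overlap, the bits accumulate to %11 on their own.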

    Parallax will probably start selling good plastic red/blue 3D glasses for $5 a pair. If you have any glasses already, check out this site:

    www.anachrome.com

    I called there last Saturday at 10:00pm and the guy who runs the site, Allan Silliphant, talked to me for over three hours about anaglyphic 3D. He's made some novel strides to make 3D viable for theatres in the near-term, like sprocket refits for 35mm projectors and special glasses which are inexpensive and have slight positive diopters on the red lenses to help your brain focus the red better. Also, he uses cyan, which lets both green and blue through, resulting in better color conveyance to the viewer's brain. He is a super nice and down-to-earth guy. He even offered to come to Parallax and teach us how to get into 3D so that we could have 3D product shots on our website. He's on a 3D mission!

    The long-term 3D solution for theatres is dual digital projectors with complementary polarized filters, and matching glasses for the viewers. This would look like the Captain EO show at Disneyland, but all digital, with 4k pixels per line. Allan thinks this is still 7-10 years away.

    Anyway, I hope to have a simple, but stunning, 3D demo on the Propeller soon. The longer-term goal will be to make 3D world transforms so that you could animate 3D coordinates in real-time and view them in 3D (with red/blue glasses). What is the point of all this? To have fun and be able to explore sensor visualization in 3 dimensions! A dual PropCam would be a great testbed, but you could also explore 'seeing' ultrasound (or RF?) in 3D and who knows what else. Our eyes are the biggest inputs to our brains. Like our ears, they're wired straight in, but receive lots more data. We should use their 3D capacity to perceive phenomena as well.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔


    Chip Gracey
    Parallax, Inc.

    Post Edited (Chip Gracey (Parallax)) : 6/18/2006 8:19:46 PM GMT
  • Mike Green Posts: 23,101
    edited 2006-06-18 20:42
    Just a side note ... 3D vision is great for those that have it. Red/Blue does look strange otherwise.
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2006-06-18 21:24
    Chip,

    I'm glad to see you've come so far with the stereo 3D stuff! Oddly enough, a couple years ago I bought a set of plastic anaglyph glasses from another company I found on the web, Berezin Stereo Photography Products (www.berezin.com/3d/3dglasses.htm). They might be the same glasses that Anachrome is offering; I'm not sure. They're quite comfortable to wear over my regular glasses, and I keep a pair near each of my PCs -- mostly for looking at 3D photos from Mars (www.lyle.org/mars/synth/). One thing I did notice, though, is that the filtering on these glasses is not quite as good as on the cheap cardboard ones I got from the same company. The pricier ones have more ghosting, due to the wrong color getting through. It's most noticeable in high-contrast, computer-generated images and less so with low-contrast photos. I also prefer the red/cyan glasses to the red/blue ones. The eye is relatively insensitive to blue. By adding a bit of green to the non-red side, the left/right intensities seem to balance a little better than with blue alone. But that's an entirely subjective observation and more a matter of personal preference than anything else.

    I've also got some LCD shutter glasses that work on my TV's video out, switching between even and odd fields. You're right: they are headache-inducing. But I've also attended 3D IMAX showings that use these with no discomfort, likely the benefit of a much higher frame rate.

    What I'd really like to be able to do is display a grayscale image in 3D. I've figured out a way to do it anaglyphically using the ctra grayscale method, but it requires alternating fields. It relies on the fact that tv_text uses a lookup table for character and background colors. All you have to do is set the background color underlying the grayscale window to dark red for one field and change it in the lookup table to dark blue or cyan for the alternate field. Unfortunately, you'd end up with a double dose of headache inducement. If the color separation doesn't cause a migraine, the flicker certainly will!

    I think the best approach might stem from your earlier suggestion to use the video circuitry for generating both the duty pulses that now come from ctra and some chroma modulation. OTOH, there's no reason ctra can't be used for intensity in the high-order bit, with the video circuitry doing the chroma in the two low-order bits -- in the same cog. It'd take some fancy scheduling to coordinate waitvids with writes to frqa, but it might work.

    So much to try!

    -Phil
  • SSteve Posts: 808
    edited 2006-06-18 21:50
    That's some amazing and exciting stuff. I'm looking forward to an in-depth tutorial on how to use the counters.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    OS-X: because making Unix user-friendly was easier than debugging Windows

    links:
    My band's website
    Our album on the iTunes Music Store
  • Cliff L. Biffle Posts: 206
    edited 2006-06-18 23:53
    Nice work, Phil! That lens assembly looks familiar; are you working with the 6620 too?

    I'm working on interfacing the commercially-available 3088 module; since I'm not interested in selling hardware (and my jealous employer wouldn't take kindly to it anyway), I'm hoping to provide a mostly-software solution for the off-the-shelf modules. Depending on how much of your code you're interested in open-sourcing, perhaps we could share image processing code.
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2006-06-19 01:35
    Hi Cliff,

    Thanks!

    I'm working with a lower-resolution sensor than the 6620, and it's B/W, not color. But it has the advantage of true frame-mode sensing without needing a mechanical shutter during pixel transfer, which is critical for certain machine vision apps. The lens assembly came from my Korean lens supplier and is one I bought for a different project. It's not optimum for this camera, though, and I shall have to find a lens with a shorter focal length for best performance.

    Good luck on your 3088 project!

    -Phil
  • Cliff L. Biffle Posts: 206
    edited 2006-06-19 02:01
    Hey, nice; the frame mode on the 6620 is enough of a pain to be useless to me.

    I guess these lenses are pretty much stock. Say, do you know if yours has an IR cut filter? The cheap OmniVision lenses have an IR-reflective coating on the frontmost element, which in my case is bad.
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2006-06-19 02:20
    Hi Cliff,

    Yeah, for B/W imaging the IR-cut filters can be a pain if you need high sensitivity. The lenses I've been testing all have them, but I'm going to try finding one that doesn't. OTOH, cheap S-mount (M12x0.5) lenses don't come with an adjustable aperture. Using them in bright, IR-rich sunlight without an IR filter may swamp the sensor no matter how short the integration time. So there's definitely a tradeoff.

    -Phil
  • Mike A Posts: 1
    edited 2006-06-19 02:42
    Phil,
    This is exactly the product I've been looking for; sign me up for at least two!
    -Mike
  • Tim-M Posts: 522
    edited 2006-06-19 18:44
    Hi Phil,

    Just my two bits... Looks like a wonderful product that I'd be interested in too!

    Tim
  • bambino Posts: 789
    edited 2006-06-19 20:02
    Phil, this is what I've been talking about as well. The folks breathing down my neck actually wanted color, but they'll get what they get, eh?

    Is that model available with infrared? Or possibly as an option for your baseline model?
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2006-06-19 20:09
    Bambino,

    The half-max sensitivity point on the sensor's spectral response curve is around 800nm. At 900nm it drops to 10%, and at 950nm to zero.

    -Phil
  • bambino Posts: 789
    edited 2006-06-20 15:21
    Thank you, Phil. Currently my specialty is high-speed, remote data logging of accelerometers.
    The camera project is something my boss is going to send me to school for, so your reply really exposed my ignorance of the subject matter.
    However, if I can glean some proof that the Propeller will do the job, then he will likely get me all the Propeller equipment I can handle and I won't have to sweat it.
    But if I have to wait till Christmas, I'm getting one of these puppies!

    Way over my head, but I know the boss is going to throw me in head first!
  • bambino Posts: 789
    edited 2006-06-20 17:12
    All right, Phil, I've broken out my Prism and dusted it off.
    Thanks for the reply! 10 microns would be good for me, or maybe just a good algorithm to distinguish humans from inanimate objects blown by the wind!
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2006-06-20 17:24
    Bambino,

    10 microns (10,000nm) is well beyond the reach of most image sensors and ordinary optics. I'm not sure what to recommend, except perhaps one of those IR motion detectors.

    Good luck with your project!
    -Phil
  • bambino Posts: 789
    edited 2006-06-20 19:10
    Thanks Phil,
    I got that bit of info from http://imagers.gsfc.nasa.gov/ems/infrared.html.
    Quite possibly I'm comparing apples to oranges.
    I have the IR motion detectors; they're great. I just need the camera to confirm false alarms.
    Thanks for straightening me out!
    Keep up the good work.
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2006-06-20 19:37
    Bambino,

    I can certainly sympathize with the false alarm problem. My shop has a burglar alarm equipped with IR motion detectors. I kept getting alarms in the middle of the night. I'd go out to investigate — and nothing. 'Turns out, a squirrel was getting in through an unused flue opening and scampering out when he heard me unlock the door!

    It may be possible for a camera to disambiguate something like this, based on the number of pixels that change between exposures. It would require either a double buffer or a capture mode that made the comparison and wrote the difference in real time. This is an interesting problem!

    -Phil
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2006-06-20 20:47
    Bambino,

    The PropCAM uses two cogs for image acquisition: one for even lines, one for odd lines. I've added some code to select which lines to acquire in any given snapshot: even, odd, or both. This makes motion detection feasible, since it allows comparison of two successive images, interlaced into one. I'm not sure if this will help in your app, but your comments certainly inspired the program change!

    -Phil

    Update: Attached is an image illustrating the principle. The first exposure captured the even lines, when there was just a schematic in the field of view. The second exposure captured the odd lines, after a Sharpie pen tip had moved in. By comparing the even lines with the odd ones, the change (i.e. motion between frames) is readily apparent.
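
    For anyone wanting to experiment, here's a rough sketch of the comparison step in Spin. (The packed buffer layout, pixel accessor, and noise threshold are my assumptions for illustration, not final driver details.)

      PUB motion_count(p_frame) : changed | x, y, even, odd
        repeat y from 0 to 94 step 2                ' each even line and the odd line below it
          repeat x from 0 to 127
            even := get_pix(p_frame, x, y)          ' pixel from the first exposure
            odd  := get_pix(p_frame, x, y + 1)      ' same column, second exposure
            if ||(even - odd) > 2                   ' tolerate a little sensor noise
              changed++

      PRI get_pix(p_frame, x, y) : v | i
        i := y * 128 + x                            ' assumes 2 packed 4-bit pixels per byte
        v := byte[p_frame + (i >> 1)] >> ((i & 1) << 2) & $0F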

    Post Edited (Phil Pilgrim (PhiPi)) : 6/20/2006 9:09:33 PM GMT
    [Photo attachment: even/odd-line motion comparison]
  • jamma Posts: 33
    edited 2006-06-20 23:25
    Uncannily perfect fit for my planned Propeller project. Sign me up for two, please!
  • Vampyr Noire Posts: 13
    edited 2006-06-21 05:43
    Provided I have the funds when they become available, I am pretty certain I will have to get one! Good work!

  • SSteve Posts: 808
    edited 2006-06-21 13:46
    I'm interested. Any idea what the cost will be?

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    OS-X: because making Unix user-friendly was easier than debugging Windows

    links:
    My band's website
    Our album on the iTunes Music Store
  • TyreBiter Posts: 40
    edited 2006-06-21 14:45
    Phil:

    I am interested in at least one.