PropCAM Image Processing — Parallax Forums

PropCAM Image Processing

Phil Pilgrim (PhiPi) Posts: 23,514
edited 2014-09-07 08:16 in Propeller 1
As promised, I've completed documenting the 4-bit grayscale image processor object designed to work with the PropCAM-DB. I'll be posting examples in this thread of what it can do.

The object is designed to both analyze and draw grayscale images. The guts are written in PASM, so it's very fast at what it does -- virtually real-time for acquired images. It works with an iterator that can scan lines, rectangles, and the perimeters of irregular "blobs." It analyzes images over the iterated area by comparing pixel intensities to a 16-bit mask, such that each bit corresponds to a certain intensity. Those intensities with 1-bits in the mask are counted and their positions averaged. Optionally, pixels meeting and/or not meeting the mask conditions can be replaced with pixels of another value. This is how drawing is effected. For example, to draw a white box, all you have to do is define the box's opposite corner coordinates and specify that all pixels iterated over by the box's perimeter be set to 15.
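
To make the mask mechanics concrete, here's a rough C sketch of the mask-and-centroid idea (illustrative only: the names, types, and layout are mine, not the object's actual API, and the real version is PASM):

```c
#include <stdint.h>

/* Sketch of the 16-bit mask test: bit n of the mask selects gray level n
   (0..15).  Pixels whose level is selected are counted and their
   positions summed, which yields the centroid. */
typedef struct { long count, sum_x, sum_y; } MaskStats;

void mask_scan(const uint8_t *img, int w, int h, uint16_t mask, MaskStats *s)
{
    s->count = s->sum_x = s->sum_y = 0;
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            if (mask & (1u << (img[y * w + x] & 0x0F))) {  /* level has a 1-bit? */
                s->count++;
                s->sum_x += x;          /* centroid = sums / count */
                s->sum_y += y;
            }
}
```

Drawing works the same way, except that matching (or non-matching) pixels are rewritten with a replacement value instead of merely counted.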

In addition to the centroid (average position of 1-masked pixels), every iteration computes a histogram of all pixel intensities, plus a bounding box for the 1-masked pixels. Attached is a demo of the histogram capability. The width of the grayscale buffer has been extended to the right by 16 pixels to accommodate the histogram display, which ranges from black at the top to white at the bottom. Here's what the output looks like, as displayed by the Propeller Backpack:
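
The histogram itself is just a 16-bin tally, one bin per gray level; in C it would look roughly like this (again, the names are illustrative, not the object's API):

```c
#include <stdint.h>

/* One bin per 4-bit gray level; every iterated pixel bumps its bin. */
void histogram16(const uint8_t *img, int n, long bins[16])
{
    for (int i = 0; i < 16; i++) bins[i] = 0;
    for (int i = 0; i < n; i++) bins[img[i] & 0x0F]++;
}
```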

attachment.php?attachmentid=106212&d=1389908840

BTW, this is a photo of Jack London at the wheel of a tall ship in Port Townsend Bay.

-Phil

Comments

  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2014-01-16 19:36
    Identification by Size

    This demo discriminates between apples and tangerines based upon their comparative sizes. It does this by thresholding the brightness and counting the bright pixels in the image. Here are some images from the Propeller Backpack:
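
The core of the test is nothing fancier than this (a C sketch; the threshold and size cutoff are placeholders you'd calibrate for your own lighting and lens):

```c
#include <stdint.h>

/* Threshold the brightness and count the bright pixels. */
int count_bright(const uint8_t *img, int n, uint8_t thresh)
{
    int count = 0;
    for (int i = 0; i < n; i++)
        if (img[i] >= thresh) count++;
    return count;
}

/* An apple silhouette covers more pixels than a tangerine does. */
const char *classify(int bright_pixels, int cutoff)
{
    return bright_pixels > cutoff ? "apple" : "tangerine";
}
```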

    attachment.php?attachmentid=106221&d=1389929739

    attachment.php?attachmentid=106222&d=1389929739

    attachment.php?attachmentid=106220&d=1389929738

    attachment.php?attachmentid=106219&d=1389929738

    Note: The moiré effect in the above photos is an artifact of the TV monitor's LCD pixels beating against the sensor pixels of the camera that took the photos. It is not visible to the eye.

    I won't pretend that it's this easy in the field. There are a lot of variables that have to be considered when sorting produce, not the least of which is proper illumination. But at least this program shows what's possible under controlled, laboratory conditions.

    -Phil
  • Brian_B Posts: 842
    edited 2014-01-17 16:09
    Phil,
    I'm going to drag out my old washer-sizing conveyor and try it out. The line-scan camera was too tedious.

    Keep up the cool work!

    Brian
  • banjo Posts: 447
    edited 2014-01-17 23:52
    Phil,
    I've been playing around with neural networks for ~20 years. Over Christmas I used a Propeller (Activity Board) for the first time and was able to compile and run a 10-year-old NN 'runtime' in C.
    On my 'to-do' list for 2014 is to buy a PropCam and try some image matching with a NN. The heavy floating point number crunching would be done on a PC.
    What's the purpose of this? For me just to learn and prove it can be done, nothing more :-)

    - Thomas
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2014-01-19 14:01
    Motion Detection and Location

    In this example, the PropCAM has been programmed to detect and locate movement in its field of view. This could be used, for example, to aim a higher-resolution security camera or even to steer a photovoltaic array (with proper optical filtering, of course). It uses the same image processing object as the other examples and works in real time.

    The way this example operates is to sense only the even scan lines from the Kodak sensor. These scan lines get written alternately in the even and odd lines of the buffer. The image-processing object uses a filter labeled EODIF that works with the absolute values of the differences in pixel intensities between pairs of buffer lines. When the difference is above a certain threshold (IOW, when a pixel changes intensity by a certain amount), it's counted. If the total count of changed pixels is above 20, the program assumes some motion occurred and signals the fact by changing the color of the message stripe at the top of the image from green to orange. Since the image-processing object automatically computes the centroid position of the selected pixels, it knows where in the frame the motion occurred. The demo program marks this location onscreen by positioning a block character in the text overlay. Also, it's only when the 20-pixel change threshold is exceeded that scan lines are switched to the alternate buffer lines. This is done to ensure that slow movements do not escape the camera's attention.
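
In outline, the EODIF-style comparison works like this (a hedged C sketch of the even/odd line differencing; the buffer layout and names are simplified from what the PASM actually does):

```c
#include <stdlib.h>
#include <stdint.h>

/* Compare each even buffer line with the odd line beneath it, count
   pixels whose absolute intensity difference exceeds a threshold, and
   accumulate the centroid of those pixels.  Motion is assumed when the
   changed-pixel count exceeds 20, as in the demo. */
typedef struct { int changed; long sum_x, sum_y; } Motion;

Motion detect_motion(const uint8_t *buf, int w, int h, int thresh)
{
    Motion m = {0, 0, 0};
    for (int y = 0; y + 1 < h; y += 2)          /* even/odd line pairs */
        for (int x = 0; x < w; x++) {
            int d = abs((int)buf[y * w + x] - (int)buf[(y + 1) * w + x]);
            if (d > thresh) { m.changed++; m.sum_x += x; m.sum_y += y; }
        }
    return m;   /* motion assumed if m.changed > 20 */
}
```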

    Here's a video of the demo in action:

    [video=vimeo;84535780]

    Note that the vertical resolution of the still image is only half of what a normal PropCAM image would be, since the same pixels are written to both the even and odd display lines, so it looks very blocky. When motion occurs, you will notice a zebra-stripe effect where the two scan lines differ. This is what the image-processing object "sees" also.

    Attached is an archive with the demo code. (In the process of preparing this demo, I found a minor bug in the camera driver code. It's been fixed, and the driver in the archive is the most up-to-date.)

    -Phil
  • T Chap Posts: 4,223
    edited 2014-01-19 14:34
    That is amazing. The MoBug video cracks me up.
  • Tubular Posts: 4,702
    edited 2014-01-20 18:25
    These examples are terrific, Phil. Thank you.

    I did get the WS2812B based illumination mezzanine working too, but haven't hooked up to a backpack yet.

    I would also like to look at using one of Kuroneko's 128-column VGA character drivers with this. It would allow us to place a small ASCII character inside each 'pixel'. You could put a small 'c' for centroid, for example.
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2014-01-20 20:17
    Please count your change!

    This demo uses a backlight to silhouette randomly-placed coins against a bright background:

    attachment.php?attachmentid=106320&d=1390277050

    The objective is to locate, count, and compute the total value of the change placed on the diffuser glass. It uses the ImageProcess4's outline method to find the perimeters of the blobs it locates in the field of view and to assign a monetary value to each one. The procedure is this:
    1. Call outline over a grid of points in the field-of-view, 16 pixels apart in each direction.
    2. If the grid point happens to be a dark "interior" point, compute the perimeter and bounding box.
    3. If the bounding box has not been seen before (i.e., if a previous grid point was not also inside the coin shadow), record the bounding box and perimeter.
    4. Once the grid has been spanned, iterate over the list of found coins, and compute the value of each based on the size of its perimeter and its closest match with previously computed perimeters.
    5. Add up the values and display.
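
Step 4's matching is just a nearest-neighbor lookup against calibrated perimeters. A simplified C version (the reference table values here are invented for illustration; you'd calibrate them with the debug code mentioned below):

```c
#include <stdlib.h>

/* Classify a coin by the closest match between its measured perimeter
   and a table of previously calibrated perimeters. */
typedef struct { int perimeter; int cents; } CoinRef;

int coin_value(int perimeter, const CoinRef *refs, int n)
{
    int best = 0, best_err = 1 << 30;
    for (int i = 0; i < n; i++) {
        int err = abs(perimeter - refs[i].perimeter);
        if (err < best_err) { best_err = err; best = refs[i].cents; }
    }
    return best;    /* value of the nearest-perimeter reference coin */
}
```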

    Here's a scan of multiple coins before evaluation:

    attachment.php?attachmentid=106322&d=1390277051

    Here's a scan of the same coins after evaluation, showing the bounding boxes:

    attachment.php?attachmentid=106324&d=1390277600

    The ImageProcess4.spin object in the attached archive includes a bug fix made since my last post. Also, I've left some debug code in the demo program, which makes it easier to calibrate it against the coins one has at hand.

    BTW, I've excluded pennies from the demo, since it's difficult to discriminate them from dimes with a wide-angle lens. The 3.6mm lens provided with the PropCAM is defined as "wide-angle" since its field-of-view is larger than that of a comparable pinhole lens. As a consequence, it exhibits a tiny bit of "barrel distortion" which renders objects near the periphery a little smaller than the same objects would appear in the center. So a penny near the periphery might measure the same as a dime at the center. I could have calibrated this discrepancy out of the computations; but it would have complicated what is, otherwise, a simple demo.

    -Phil
  • ieeepv Posts: 4
    edited 2014-03-19 14:28
    Hello

    We are using the PropCAM-DB and DB Expander with the ActivityBot for our robotics competition. Our goal is to have the PropCAM identify 3 shapes (circle, square, triangle) and select one. Is this functionality possible? It seems to be similar to the Identification by Size code. Also, is the Propeller Backpack required?
    Any assistance would be appreciated.

    Thanks,
    Tiffany
    ieeepvrobotics@gmail.com
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2014-03-19 15:29
    Tiffany,

    Welcome to the Parallax forum!

    Identifying shapes is definitely possible with the PropCAM. Assuming the shapes are always rotated the same, you should be able to distinguish among circle, square, and triangle by measuring the widths of the candidate shape in three places: 25% from the top, 50% from the top, and 75% from the top. These three numbers will provide a signature for the shape that you can use to determine which it is.
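
In C-like form (illustrative only; calibrate the threshold for your own lighting, and the names are mine), the signature measurement might look like:

```c
#include <stdint.h>

/* Width of the bright run on one row: distance between the first and
   last above-threshold pixels. */
int row_width(const uint8_t *img, int w, int y, uint8_t thresh)
{
    int first = -1, last = -1;
    for (int x = 0; x < w; x++)
        if (img[y * w + x] >= thresh) { if (first < 0) first = x; last = x; }
    return first < 0 ? 0 : last - first + 1;
}

/* Widths at 25%, 50%, and 75% from the top: roughly equal for a square,
   widest in the middle for a circle, growing downward for a triangle. */
void shape_signature(const uint8_t *img, int w, int h, uint8_t thresh, int sig[3])
{
    sig[0] = row_width(img, w, h / 4,     thresh);
    sig[1] = row_width(img, w, h / 2,     thresh);
    sig[2] = row_width(img, w, 3 * h / 4, thresh);
}
```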

    As to the Propeller Backpack, no, it's not required with the ActivityBot if you're using the DB Expander.

    Good luck with your project!

    -Phil
  • ieeepv Posts: 4
    edited 2014-03-19 16:55
    Hi Phil,

    Thank you for the welcome; we are excited about building our first robot! Is there a way to get the Identification by Size .spin files in .c format? We tried to convert them for the SimpleIDE platform, but to no avail. Also, is there a library and a breakdown of the functions? We viewed the Product Guide from the Parallax site, but we still need assistance getting the camera to take a picture. Thanks

    Tiffany
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2014-03-19 17:02
    All of the PropCAM software to date is written in Spin/PASM. If you want to use it with ActivityBot C programs, you might want to consider the Backpack after all. That way, the PropCAM programming could be done in Spin on the Backpack, which would transfer info to the ActivityBot's C program via the three-pin servo cable.

    -Phil
  • ieeepv Posts: 4
    edited 2014-03-20 18:05
    Okay, to make sure I am understanding correctly: are you saying that the Propeller Backpack allows us to run the PropCAM .spin code while running the C code for the ActivityBot? I would just like to clarify before I make the purchase.

    Thanks,
    Tiffany
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2014-03-20 19:21
    ieeepv wrote:
    Okay, to make sure I am understanding correctly: are you saying that the Propeller Backpack allows us to run the PropCAM .spin code while running the C code for the ActivityBot?

    Yes! You will need to write or modify the higher-level Spin code for the Backpack to do your shape detection and communicate the results to the ActivityBot's C program. That would be more efficient than simply downloading raw pixel data to the ActivityBot for analysis there.

    -Phil
  • william chan Posts: 1,326
    edited 2014-03-24 16:33
    Hi Phil,

    What would be the fastest way to transfer a captured image down the SIG pin on the Propeller backpack and convert it to a BMP file?
    How do we implement full duplex serial communication with PropCam+BackPack module if there is only one SIG pin?
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2014-03-24 16:45
    What would be the fastest way to transfer a captured image down the SIG pin on the Propeller backpack and convert it to a BMP file?
    The Prop could do the BMP file conversion. There's code for that in my Spinneret demo. Then just send it down the pipe at 230400 baud.
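
In outline, the BMP wrapping amounts to roughly this (a generic C sketch of the 4bpp BMP layout, not the actual Spinneret code; the names are mine):

```c
#include <stdint.h>
#include <string.h>

/* Wrap a 4-bit grayscale frame as a 16-color BMP.  BMP rows are stored
   bottom-up and padded to 4-byte boundaries; the palette maps gray
   level n to intensity n*17.  img holds one 4-bit pixel per byte;
   w is assumed even. */
static void put32(uint8_t *p, uint32_t v) { p[0]=v; p[1]=v>>8; p[2]=v>>16; p[3]=v>>24; }

size_t bmp4_size(int w, int h) { return 14 + 40 + 16*4 + (size_t)((w/2 + 3) & ~3) * h; }

size_t bmp4_write(uint8_t *out, const uint8_t *img, int w, int h)
{
    size_t row = ((w / 2) + 3) & ~3u;           /* padded row length */
    size_t size = 14 + 40 + 16*4 + row * h;
    memset(out, 0, size);
    out[0] = 'B'; out[1] = 'M';
    put32(out + 2, (uint32_t)size);
    put32(out + 10, 14 + 40 + 16*4);            /* pixel data offset */
    put32(out + 14, 40);                        /* BITMAPINFOHEADER */
    put32(out + 18, (uint32_t)w);
    put32(out + 22, (uint32_t)h);
    out[26] = 1;                                /* planes */
    out[28] = 4;                                /* bits per pixel */
    for (int i = 0; i < 16; i++) {              /* grayscale palette */
        uint8_t g = (uint8_t)(i * 17);
        out[54 + i*4] = g; out[55 + i*4] = g; out[56 + i*4] = g;
    }
    uint8_t *px = out + 54 + 64;
    for (int y = 0; y < h; y++)                 /* bottom-up rows */
        for (int x = 0; x < w; x += 2) {
            uint8_t hi = img[(h - 1 - y) * w + x] & 0x0F;
            uint8_t lo = img[(h - 1 - y) * w + x + 1] & 0x0F;
            px[y * row + x / 2] = (uint8_t)((hi << 4) | lo);
        }
    return size;
}
```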
    How do we implement full duplex serial communication with PropCam+BackPack module if there is only one SIG pin?
    Use the PropPlug interface instead.

    -Phil
  • ieeepv Posts: 4
    edited 2014-03-26 12:26
    Hey Phil,

    We have a few questions, if you could help. #1. The Parallax product description states that we can use any NTSC video monitor plugged into the A/V connector for display, but is there a way to directly display on a laptop? Perhaps import the code serially, save it to the SD memory card on the ActivityBot, and output to the laptop?

    #2. How can we program the histogram functionality to manipulate the image into distinguishable black and white pixels for the shape detection?

    #3. Below is code we started to develop for our ActivityBot C program to access data from the PropCAM-DB via the Backpack. Does this C code prompt the PropCAM to turn on if we set the pin of the Backpack high, and if so, does the 'shift_in' function return the size in pixels from the Spin code?

    byte getData()
    {
      while(get_state(11) != 1)
      {
        return shift_in(pinDat, pinClk, mode, bits); // shift_in receives data from a synchronous serial device
      }
    }


    high(11); // the Backpack is on pin 11; set it high
    size = getData();
    ..

    #4. If not, can you suggest another way to communicate between the ActivityBot and the Backpack?


    Thanks,
    Tiffany
  • william chan Posts: 1,326
    edited 2014-03-29 21:03
    Hi Phil,

    How can I do metering based on a smaller rectangle (for example, 50,50 to 80,80) instead of the whole image?
    Thanks.
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2014-03-30 10:07
    ieeepv wrote:
    The parallax product description states that we can use any NTSC video monitor plugged into the A/V connector for display but is there a way to directly display on a laptop? Perhaps import the code serially, save it to the SD memory card on the ActivityBot and output to the laptop?
    That could certainly be done. There's Spin code in the webcam demo for converting the camera data to a BMP file. I'm also working on a way to display camera output from the PropPlug port directly in a web browser window.
    How can we program the histogram functionality to manipulate the image into distinguishable black and white pixels for the shape detection?
    This is done for you in the image processing object by thresholding. Keep in mind, though, that good, even lighting is extremely important for success when doing sizing and shape recognition. Ambient light will almost never cut it.
    Below is a code we started to develop for our Activitybot C code to access data from the Propcam DB via the Backpack. Does this C code prompt the propcam to turn on if we set the pin of the backpack to high, and if so does 'shift_in' function return the size of pixels from the spin code? ... If not, can you suggest another way to communicate to and from the activity bot and the backpack.
    No. shift_in is for synchronous serial I/O and uses two pins: data and clock. The Backpack can be programmed in Spin for asynchronous serial I/O using the Full Duplex Serial object. Your C program will need a similar routine to communicate with the Backpack.
    How can I do metering based on a smaller rectangle (for example, 50,50 to 80,80) instead of the whole image?
    'Not sure I understand the question. Please elaborate.

    -Phil
  • william chan Posts: 1,326
    edited 2014-03-30 15:21
    What I meant was,

    How can I do AutoExposure and AutoGain over a smaller rectangle (for example, 50,50 to 80,80) instead of the whole image?
    For example, a person's face against a bright background will appear dark. We need to auto-increase the exposure to be able to capture the facial features.
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2014-03-30 16:49
    William,

    In that case I would pick a target for the number of pixels in the top intensity bin(s) of the histogram, rather than using the average intensity. This could be done in Spin. Restricting the average computation to a smaller area in the center of the FOV would require digging into the PASM driver.

    Alternatively, you could use the filled_box method in the image analysis object to count the number of pixels greater than a certain brightness within your area of concern and use that to adjust the exposure time.
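
A sketch of the histogram-target idea in C (the bin choice, step size, and names are illustrative; you'd tune them for your setup):

```c
/* Pick a target count for the brightest histogram bin(s) and nudge the
   exposure toward it.  bins[] is a 16-bin intensity histogram. */
long top_bin_count(const long bins[16], int top_n)
{
    long c = 0;
    for (int i = 16 - top_n; i < 16; i++) c += bins[i];
    return c;
}

int adjust_exposure(int exposure, const long bins[16], long target)
{
    long bright = top_bin_count(bins, 2);       /* top two intensity bins */
    if (bright < target) exposure++;            /* too dark: expose longer */
    else if (bright > target) exposure--;       /* too bright: expose less */
    return exposure;
}
```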

    -Phil
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2014-04-01 14:58
    Tiffany,

    Take a look here for a program that lets you use your web browser to monitor the PropCAM's image capture and save the image files on your PC.

    -Phil
  • william chan Posts: 1,326
    edited 2014-04-17 11:49
    Hi Phil,

    Do you know where to purchase the female connector (solder-to-PCB type) that plugs directly into the PropCAM-DB's 12-pin header connector?
    Is this connector available from element14 or RS?
    Thanks.
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2014-04-17 12:45
    William,

    Here's a link to the mating receptacle:

    -Phil
  • KeithE Posts: 957
    edited 2014-04-20 21:17
    Phil,

    Do you think that this would be possible for the Propeller and PropCAM?

    - Capture a 64x64 greyscale 4bpp image about once a second.
    - For each capture, scan the columns from left to right and output up to 64 sinusoids with intensities proportional to the pixel value in each row. (See http://www.seeingwithsound.com/extra/ieeebme1992.pdf for the details, and perhaps http://www.seeingwithsound.com/hificode.c)

    Some people have been trying to get this to work on the Raspberry Pi (see http://www.raspberrypi.org/forums/viewtopic.php?t=49634&p=499149) but haven't managed it yet. Someone offered a $500 bounty on the Pi site too. You can get free Windows/Android software to demo this on the seeingwithsound.com site.

    Anyways I stumbled across this and thought it might be an interesting application for PropCAM.
  • william chan Posts: 1,326
    edited 2014-07-10 19:44
    Hi Phil,

    Where to buy lenses for PropCAM?
    I am looking for wider-angle lenses (maybe 1.8mm) than the standard lens, for macro photography with the object about 2 inches away.
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2014-07-10 22:57
    Try http://www.alaudoptical.com. That's where I buy mine, and they are amenable to providing samples, as long as you pay the shipping.

    -Phil
  • Duane Degn Posts: 10,588
    edited 2014-07-11 13:56
    I've purchased lenses from these guys. I don't know if they have lenses to work with the PropCAM but they were very helpful when I emailed them with questions about an automatic iris.
  • Jack Buffington Posts: 115
    edited 2014-07-11 14:19
    The Prop should be capable of it. I was working toward that very thing a couple of years ago, but I wanted to have a speech synthesizer in what I was developing. I was trying to massage Chip's synthesizer into shape, but it just couldn't do the percussive or fricative sounds very well. People weren't able to understand it. I'll probably take another stab at it using some new knowledge about signal processing in the next year or so.

    As far as generating the tones goes, that is definitely possible. You would just need to have multiple cogs working together to do it. I wrote a program that generated multiple simultaneous sine waves with variable volume. It maxed out somewhere around 13 to 16 sine waves using a single COG, if I remember correctly.

    I ended up stopping because I ran low on time, not because of any limitation with the Prop.
  • william chan Posts: 1,326
    edited 2014-07-12 03:40
    Hi Phil,

    If a dark room suddenly becomes bright, how many calls to cam.acquire() will it take for the PropCAM to arrive at near-optimum exposure?
    What range should avg_pix be in for an image to be considered correctly exposed (not overexposed or underexposed)?

    Is there a way for the PropCAM to do metering without acquiring images, before one perfect cam.acquire is called?

    Thanks for all your help.
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2014-07-12 09:08
    If a dark room suddenly becomes bright, how many calls to cam.acquire() will it take for the PropCAM to arrive at near-optimum exposure?
    Only one, if you're running in continuous mode with AUTOEXPOSURE or AUTOBOTH. Otherwise, it may take 10-15 calls, I would guess.
    What range should avg_pix be in for an image to be considered correctly exposed (not overexposed or underexposed)?
    I would set it to half of the maximum value to start (e.g. 8 for 4-bit, 32 for 6-bit). A lot depends upon the background. For example, if your subject is backlighted, a higher value will be necessary.
    Is there a way for the PropCAM to do metering without acquiring images, before one perfect cam.acquire is called?
    That's what the background modes are for. It will keep adjusting in the background whether you call acquire or not.

    You can also write your own autoexposure routine using a different metric (e.g. the number of saturated pixels) and adjust accordingly. It's just something you need to experiment with in the context of your own application.

    -Phil