Is it possible to process a 640x480 BW image with the Prop in real time or faster?
Anubisbot
Posts: 112
Hi,
It has been 2 years since I was active with a Prop, and back then I just did some small stuff.
But now I am looking for a solution where I can capture a BW picture with a cam, find just the blobs, and turn them into coordinates.
Does someone know if there is a cam sensor that could be hooked up to a Prop and do that task at 60 fps or more?
Like the Wii remote: it has a cam sensor that captures the BW image and then gets the coordinates of the blobs (IR light points).
Best regards Anubisbot...
Comments
Leon
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Amateur radio callsign: G1HSM
Suzuki SV1000S motorcycle
640 x 480 pixels * 60 fps totals 18.4 million pixels per second to process. With an 80 MHz clock the Prop executes 20 million (assembly) instructions per second per cog. So one cog could do basically one operation per pixel (like an addition), assuming there was no overhead for transferring data from the camera or even from hub RAM (which unfortunately there will be).
Assuming 8-bit monochrome, 640x480 needs 300 KB of memory to store the data, which is quite a bit more than the 32 KB total available to the Prop for code and variables.
Also, calculating centroids involves multiplications, which the Prop does not perform in hardware.
You will need to down-sample your data to a much smaller resolution as you read it in from the camera; from there it is a much easier problem. Also note that detecting the single brightest point/blob is relatively easy compared to finding the top N brightest points/blobs.
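For illustration, here is a minimal C sketch of the down-sample-then-centroid idea. The 80x60 resolution, the threshold value, and all names are assumptions made up for the example, not anything from this thread, and on the real Prop the per-pixel loop would live in PASM rather than C.

#include <stdint.h>
#include <stdio.h>

/* Assumed down-sampled frame: 80x60 instead of 640x480 (every 8th pixel and line). */
#define W 80
#define H 60
#define THRESHOLD 200              /* hypothetical brightness cutoff for "blob" pixels */

/* Centroid of all pixels at or above THRESHOLD.
   Returns 1 and writes (cx, cy) if any bright pixel was found, else 0. */
static int bright_centroid(uint8_t frame[H][W], int *cx, int *cy)
{
    uint32_t sum_x = 0, sum_y = 0, count = 0;
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            if (frame[y][x] >= THRESHOLD) {
                sum_x += (uint32_t)x;   /* only additions in the per-pixel loop */
                sum_y += (uint32_t)y;
                count++;
            }
        }
    }
    if (count == 0)
        return 0;
    *cx = (int)(sum_x / count);         /* just two divisions per frame */
    *cy = (int)(sum_y / count);
    return 1;
}

int main(void)
{
    static uint8_t frame[H][W];         /* zero-initialized test frame */
    int cx, cy;

    /* Fake a 3x3 bright spot centered on (20, 30) as a quick sanity check. */
    for (int y = 29; y <= 31; y++)
        for (int x = 19; x <= 21; x++)
            frame[y][x] = 255;

    if (bright_centroid(frame, &cx, &cy))
        printf("blob centroid at (%d, %d)\n", cx, cy);
    return 0;
}

Note that the unweighted centroid of thresholded pixels needs only additions in the per-pixel loop plus two divisions per frame, which sidesteps the missing hardware multiply; a brightness-weighted centroid would add one multiply per bright pixel. Finding the top N blobs would also need some way to tell the blobs apart (e.g. connected-component labeling), which is where the extra difficulty mentioned above comes from.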
Jonathan
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
lonesock
Piranha are people too.
Best regards Anubisbot
Leon
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Amateur radio callsign: G1HSM
Suzuki SV1000S motorcycle
Time and time again this topic arises...
The theory is simple (for NTSC; the same technique applies to VGA):
With this technique you reduce the detection area from 192 x 224 (an NTSC approximation): by sampling every 4th horizontal line and every 4th pixel per line, you get a virtual screen of 48 x 56 pixels. The processing savings are evident: 2,688 pixels to process per frame instead of 43,008. It seems like a good trade-off between object tracking and processing power, especially with the bottlenecks inherent in the Prop. I'm not knocking the Prop at all, but the reality is that even a Pentium III would have issues processing that amount of data. The Prop can do it, of this I'm sure, as long as the data being processed is scaled down so that the Prop has some "code time" headroom and the program doesn't crash or become unstable.
Also, if you are using a VGA camera, the HSYNC and VSYNC pulses are already there, and as an added bonus you get free filtering by monitoring only the RED channel, since you're using infrared LEDs (the camera has to be able to detect this; most do not).
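To make the decimation concrete, here is a rough C sketch of pulling every 4th pixel from every 4th line of a 192x224 buffer into the 48x56 virtual screen and thresholding it. The threshold value and all names are assumptions for illustration; on the Prop itself this would be done in PASM as the video is sampled, with pointer increments instead of index arithmetic.

#include <stdint.h>
#include <stdio.h>

/* Source resolution approximating the NTSC active area discussed above. */
#define SRC_W 192
#define SRC_H 224
#define STEP  4                     /* keep every 4th pixel on every 4th line */
#define VIRT_W (SRC_W / STEP)       /* 48 */
#define VIRT_H (SRC_H / STEP)       /* 56 */
#define IR_THRESHOLD 180            /* hypothetical cutoff for "IR-bright" pixels */

/* Decimate a full frame into the 48x56 virtual detection screen,
   storing 1 where the sampled pixel is bright and 0 otherwise. */
static void decimate_frame(const uint8_t *src, uint8_t virt[VIRT_H][VIRT_W])
{
    for (int vy = 0; vy < VIRT_H; vy++) {
        const uint8_t *line = src + (vy * STEP) * SRC_W;    /* every 4th line  */
        for (int vx = 0; vx < VIRT_W; vx++)
            virt[vy][vx] = (line[vx * STEP] >= IR_THRESHOLD) ? 1 : 0;  /* every 4th pixel */
    }
}

int main(void)
{
    static uint8_t src[SRC_H * SRC_W];      /* pretend the capture cog filled this */
    static uint8_t virt[VIRT_H][VIRT_W];

    src[100 * SRC_W + 96] = 255;            /* one fake IR point at x=96, y=100 */
    decimate_frame(src, virt);

    /* x=96 and y=100 are both multiples of 4, so the point lands on a sample. */
    printf("virtual cell (%d, %d) = %d\n", 96 / STEP, 100 / STEP, virt[100 / STEP][96 / STEP]);
    return 0;
}

That is 2,688 samples per frame instead of 43,008, matching the figures above; the trade-off is that an IR point spanning fewer than 4 pixels can fall between samples, so the LEDs need to appear large (or bright) enough in the image.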
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Quicker answers in the #propeller chat channel on freenode.net. Don't know squat about IRC? Download Pidgin! So easy a caveman could do it...
http://folding.stanford.edu/ - Donating some CPU/GPU downtime just might lead to a cure for cancer! My team stats.
Post Edited (RinksCustoms) : 1/17/2009 10:36:25 PM GMT