LCD data from Camera
zabador
Posts: 5
Hey all,
I'm looking to embed a camera into a PCB and have that data transmitted over ZigBee modules to an LCD display.
I'm new to the whole electronics field, but I've been programming for over 20 years, so I'm sure I can handle things on that end; I just need a place to start.
Does anyone know how I might start this project?
The LCD I've got supports both serial and parallel interfaces (if that makes any difference).
The graphics objects seem to only handle things like lines, strings, and such, not complex images/frames. (I don't mind programming this part myself; still, it'd be nice if it were already built in somewhere.)
Thanks in advance for all the help.
Comments
Let's assume your image is only 1/2 megapixel, pretty low-res. Let's assume also that it is monochrome, 16 levels from white to black, which also is not too swift. That means 4 bits (1/2 byte) per pixel.
Then each image will require 256 kilobytes of memory. The entire 32 KB of RAM in the Propeller will hold 1/8 of this wretched low-quality image, and that's if you use zero bytes for the program.
As I say, you're in for a challenge.
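The arithmetic is easy to check (a quick sketch in Python; the 32 KB hub-RAM figure is the Propeller 1's, and "1/2 megapixel" is taken as half of 2^20 pixels):

```python
# Back-of-the-envelope image memory math, matching the numbers above.
pixels = (1 << 20) // 2          # "1/2 megapixel" = 524,288 pixels
bits_per_pixel = 4               # 16 grey levels -> 4 bits per pixel
image_bytes = pixels * bits_per_pixel // 8

hub_ram_bytes = 32 * 1024        # Propeller 1 hub RAM

print(image_bytes // 1024, "KB per frame")                 # 256 KB
print(image_bytes // hub_ram_bytes, "x the available RAM")  # 8
```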
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
· -- Carl, nn5i@arrl.net
What if the LCD has memory for storing pixels, as many do? You could buffer slices of the captured image into Propeller hub memory with one or more cogs, and have another cog update an LCD attached to the same Propeller to build a "dynamic" composite image. If that works, you might get something "remotely" useful, assuming the ZigBee link can operate at near-SPI (serial) LCD rates.
I have a tiny CAM and an LCD with such memory ... might be worth some investigation. A challenge is good, even if success is improbable, provided you are willing to forgo other priorities for a time.
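The slice idea can be sketched like this (plain Python with made-up frame and slice sizes; the two helper functions stand in for cogs, not for any real camera or LCD driver):

```python
# Sketch: stream a frame through a small hub-RAM buffer in horizontal slices.
# All sizes here are illustrative, not from any real camera or LCD.
FRAME_W, FRAME_H = 128, 96       # hypothetical frame size
SLICE_ROWS = 8                   # rows held in hub RAM at a time

def capture_slice(y0, rows):
    """Stand-in for a cog reading rows y0..y0+rows-1 from the camera."""
    return [[(x + y) & 0x0F for x in range(FRAME_W)]   # fake 4-bit pixels
            for y in range(y0, y0 + rows)]

def push_to_lcd(lcd, y0, slice_rows):
    """Stand-in for a second cog writing the slice into the LCD's own memory."""
    lcd[y0:y0 + len(slice_rows)] = slice_rows

lcd_memory = [[0] * FRAME_W for _ in range(FRAME_H)]
for y in range(0, FRAME_H, SLICE_ROWS):
    push_to_lcd(lcd_memory, y, capture_slice(y, SLICE_ROWS))

# Only one slice ever sits in hub RAM at once, while the LCD's own
# memory accumulates the full composite image.
```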
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
--Steve
I was actually planning on creating some kind of buffering system where data would be written to a CF card or some such thing (does RAM come in separate modules the way EEPROM does?) and then drawn to the display from one or more cogs, using some kind of interlacing method: one cog handling even fields, another cog handling odd fields.
Any more input would be great.
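The even/odd split could look like this (a toy sketch in Python; the two "cogs" are simulated as two sequential passes, and the frame size is made up):

```python
# Sketch: interlaced update -- one worker draws even rows, another odd rows.
HEIGHT, WIDTH = 8, 8  # hypothetical tiny frame

def draw_field(display, frame, parity):
    """Each 'cog' handles only the rows where row % 2 == parity."""
    for y in range(parity, HEIGHT, 2):
        display[y] = frame[y][:]

frame = [[y * WIDTH + x for x in range(WIDTH)] for y in range(HEIGHT)]
display = [[0] * WIDTH for _ in range(HEIGHT)]

draw_field(display, frame, parity=0)  # "cog A": even field
draw_field(display, frame, parity=1)  # "cog B": odd field
# After both fields are drawn, display holds the complete frame.
```

On a real Propeller the two workers would run concurrently in separate cogs, each owning half the rows so they never touch the same memory.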
How would the camera data be formatted?
Does it come in on only one pin? If so, reading the data the camera sends would take a while (even at low res) and would limit frame rates.
Thanks for the help. Anyone have any more ideas?
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Need to make your Prop design easier or more secure? Get a PropMod: crystal, EEPROM, and programming header in a 40-pin DIP 0.7" pitch module, with uSD reader and RTC options.
What I'm doing with this is making a video post-it system of sorts: press record, record a note, and it gets sent over ZigBee to a server; press play, and the server streams the video back over ZigBee to be viewed.
For a relative newbie to this world I realize this is an ambitious undertaking; however, I'd love to learn more about how these things work, and the education kits will only take me so far.
Another question: how would I get microphone data? It seems to me that's an analog signal, and as far as I can tell the Propeller doesn't have a built-in ADC. Are there separate ICs I can buy for this function?
Thanks for all the help so far, everyone.
1- Resolution. How much detail do you need in the picture? The original VGA was 640x480 in 16 colors, which needed only 153.6 KB of RAM; 256 colors needed 307.2 KB.
2- Color or greyscale. The human eye can only distinguish about 64 shades of grey, but far more than 256 colors, so greyscale requires much less memory and bandwidth. If 256 colors are adequate, that only requires 33% more than 6-bit greyscale.
3- How fast is the image data coming in (resolution x frame rate)? If it is very slow (SSTV), it may not need to be buffered; it could be sent as it is received. If it comes in at 2x the transmission rate (or a bit less), it may be possible to store one pixel and transmit the next, so only half as much memory is required; then send the stored portion of the image afterward.
4- Can the data be compressed as it is received? If so, RLE (run-length encoding) may be an option, which reduces both memory and bandwidth requirements.
5- Can some image detail be sacrificed to reduce memory/bandwidth requirements?
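Point 4's run-length idea is simple to prototype (a sketch in Python; runs are capped at 255 so each would fit in two bytes, but the pair encoding itself is just one common variant):

```python
def rle_encode(pixels):
    """Run-length encode a sequence into (value, run_length) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p and runs[-1][1] < 255:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the pixel sequence."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

row = [0] * 20 + [15] * 3 + [0] * 9   # flat regions compress well
encoded = rle_encode(row)
# 32 pixels collapse to 3 runs: [(0, 20), (15, 3), (0, 9)]
```

RLE helps most on flat images (greyscale notes, text); noisy frames can actually grow, so a real link would want a per-frame fallback to raw pixels.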
This discussion of RAM chips and how to add buffered memory has been going on for a while. I am eager to see how the few projects using multiple Props and various RAM chips turn out. The last couple of weeks it seems like more of these projects are coming out.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Timothy D. Swieter, E.I.
www.brilldea.com - Prop Blade, LED Painter, RGB LEDs, 3.0" LCD Composite video display, eProto for SunSPOT
www.tdswieter.com
When playback is requested, the chips will be used as a buffer, and the data will be transformed back into analog signals and played back (the same process as above, but in reverse: from ZigBee into RAM with one cog, then read off by another cog and fed to the LCD and speaker).
In this way, data rates for the ZigBee link (250 kbit/s raw, and only under optimal conditions) become less important, since everything is buffered on board. The screen itself will be fairly small, so resolution and color depth matter less; perhaps these can be improved in future versions.
I'm working on a protocol right now that will test ping between the unit and the server, measure speeds, and then make quality determinations, increasing or decreasing quality to get the most out of the link without overrunning the buffer.
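That adaptive step could be as simple as a feedback rule on measured ping and buffer fill (a sketch in Python; every threshold and quality level here is invented for illustration, not part of any real protocol):

```python
# Sketch: pick a quality level from measured round-trip time and buffer fill.
# All thresholds and levels are illustrative placeholders.
QUALITY_LEVELS = [1, 2, 4, 6]    # hypothetical bits per pixel, worst to best

def choose_quality(rtt_ms, buffer_fill):
    """Drop quality when the link is slow or the buffer is nearly full."""
    level = len(QUALITY_LEVELS) - 1
    if rtt_ms > 100 or buffer_fill > 0.75:
        level -= 1                # link is struggling: step down once
    if rtt_ms > 250 or buffer_fill > 0.90:
        level -= 1                # link is bad: step down again
    return QUALITY_LEVELS[max(level, 0)]

print(choose_quality(rtt_ms=40, buffer_fill=0.2))    # healthy link -> 6 bpp
print(choose_quality(rtt_ms=300, buffer_fill=0.5))   # slow link -> 2 bpp
```

A real version would smooth the RTT over several pings so the quality doesn't flap on every measurement.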
Is anyone familiar with how the data will be sent and formatted? I'm presuming the camera data won't be exactly compatible with the LCD, so it can't be fed directly into the LCD without further formatting.
Could I manage the conversions using the counter FRQ registers, or is there something wrong with their accuracy?