Well, I have decided to try to do a decent B/W video recorder.
This new version is called the PropPixelator.
I have uploaded the code to the start page of this thread.
Its display uses a modified 1-pin GreyTV by Eric Ball.
I think much of the difficulty for many who have tried this program may have been the infrared interface, so this version uses the keyboard for operator entry.
A full-screen display is presented by capturing 240 lines by 64 pixels at 30 Hz, as well as the audio at the horizontal scan rate.
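As a rough back-of-the-envelope figure (a sketch only; it assumes one byte per pixel sample and one byte per audio sample, which may not match the actual .PXR layout):

```c
/* Rough data-rate estimate for the capture format described above.
 * Assumptions (may not match the actual .PXR layout): one byte per
 * pixel sample, one byte per audio sample at the NTSC line rate.
 */
#include <stdio.h>

int main(void) {
    const long lines_per_frame = 240;
    const long pixels_per_line = 64;
    const long frames_per_sec  = 30;
    const long audio_rate_hz   = 15734;   /* NTSC horizontal scan rate */

    long video_bps = lines_per_frame * pixels_per_line * frames_per_sec; /* 460,800 */
    printf("video: %ld bytes/s, audio: %ld bytes/s, total: %ld bytes/s\n",
           video_bps, audio_rate_hz, video_bps + audio_rate_hz);         /* ~476,500 */
    return 0;
}
```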
You are able to use 100 possible file names, from VIDEO_00.PXR to VIDEO_99.PXR, by adjusting the numbers displayed in the show mode.
This version uses an amazingly stupid synchronization circuit, which is just a DC-coupled transistor inverter.
It works extremely well with my cheap video camera, but not with my satellite receiver.
I just uploaded the file VIDEO12.ZIP; unzip that and place it on your SD card along with PLEASE.PXR.
PLEASE.PXR is a test pattern that is shown every time the program changes modes.
It is equivalent to that annoying display on some satellite receivers: "Please wait, we are processing your request"!
If you don't have any audio/video-in hardware you can still play the video files.
When you start the code you will see a test pattern with 00 overlaid on it.
Enter the digits 12 to select VIDEO_12.PXR and press the P/p key to see that video.
I have some other videos taken from the satellite with the LM1881, but they are too large to post on this forum.
Perry
This is some amazing code that you have written. I am mainly interested in the video capture. I have tried your code with a sync separator on a breadboard, and it works surprisingly well!
I would like to increase the horizontal resolution. I see that your code is configured for 6-bit A/D conversion, which gives 64 pixels. I have tried to set the conversion resolution to 5 bits, but it just gives me blank frames. After playing with your code for a while now, and changing code all over the place, I am stumped. I realise the code is very timing dependent, but I didn't realise how sensitive it was. Am I missing something obvious, or how can I get it to 5-bit sampling and increased horizontal resolution?
I realise the limits of the Propeller's memory come into play. I would like to store the A/D results as 4-bit numbers. This allows me to fit two pixels into every byte.
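Something like the sketch below is the kind of packing I have in mind (assuming the A/D results arrive one sample per byte; the names are just for illustration):

```c
/* Sketch: pack two 4-bit A/D results into each byte of the frame buffer.
 * Assumes the raw samples arrive one per byte, already reduced to 4 bits
 * (0..15).
 */
#include <stdint.h>
#include <stddef.h>

static void pack_nibbles(const uint8_t *raw, uint8_t *packed, size_t count)
{
    for (size_t i = 0; i + 1 < count; i += 2) {
        packed[i / 2] = (uint8_t)((raw[i] & 0x0F) | ((raw[i + 1] & 0x0F) << 4));
    }
}

static uint8_t unpack_nibble(const uint8_t *packed, size_t index)
{
    uint8_t b = packed[index / 2];
    return (index & 1) ? (b >> 4) : (b & 0x0F);   /* even index = low nibble */
}
```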
Thanks for your great work!
Edit: After reading through some of your other posts, it seems timing down to single instructions is very important. I guess I will try to count instructions.
"I believe that I can do color with not much more circuitry, but no one has called me on this assertion, or asked how it might be done."
Ok, I'm game!
I'm thinking old-school video capture here. At a simplistic level, the prop has 3 pins with a D to A producing color video with 8 levels, so if you can produce color video, can you not also capture it?
The bandwidth of an NTSC signal is 4.2 MHz. Maybe you can sample at that rate, though this article http://www.maxim-ic.com/app-notes/index.mvp/id/750 says you ought to go higher. Thinking of a sine wave at 4 MHz, this means you get several samples through that wave, rather than a single pulse. Yes, a single pulse that is low-pass filtered will resurrect the wave, but only if you happen to sample it at the right point. So it seems logical to get a few more samples.
Next thing - a fast A to D. With only 8 discrete levels, a series of 8 comparators might do the trick. An LM319 has a response time of 80 ns. That might get 3 samples per cycle for the fastest sine wave?
An A to D normally produces a binary output, but that adds delays and some extra complexity. Given there are only 8 levels, you could sample this into an SRAM chip as a "bargraph" series of 8 levels, rather than as a 3-bit binary value.
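To make that concrete, turning a "bargraph" (thermometer-code) sample back into a 3-bit level afterwards could look like the sketch below (the comparator-to-bit wiring is an assumption):

```c
/* Sketch: turn a "bargraph" (thermometer-code) sample back into a 3-bit level.
 * Assumption: the comparator for level N drives bit N, so a signal between
 * levels 2 and 3 reads as 0b00000111.  Counting the set bits gives the level.
 */
#include <stdint.h>

static uint8_t bargraph_to_level(uint8_t sample)
{
    uint8_t level = 0;
    while (sample) {            /* count how many comparators tripped */
        level += sample & 1;
        sample >>= 1;
    }
    return level;               /* 0..8; clamp to 7 if you need exactly 3 bits */
}
```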
I don't know if the prop is fast enough. Even if it is, a frame is going to be more than the prop's memory. So could you sample into an SRAM at, say, 12 MHz? For a 512 KB SRAM that might capture more than one frame, so then once the RAM is full, the prop could go through the values, work out the start of the frame, the colorburst, etc., and turn it into something like a bitmap.
Maybe something is wrong with the maths but I think it ought to be possible. I'm playing around with trying to get data out of an SRAM fast enough to produce a video picture, and I guess if that works out, the next experiment might be to get a video frame into an SRAM.
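Checking the maths (a quick sketch; 512 KB and 12 MHz as above, one byte per sample assumed):

```c
/* Quick check of the capture-window maths: a 512 KB SRAM filled at 12 MHz,
 * one byte per sample (assumed), against the ~33.4 ms NTSC frame time.
 */
#include <stdio.h>

int main(void) {
    const double sample_rate_hz = 12e6;
    const double sram_bytes     = 512.0 * 1024.0;
    const double frame_time_s   = 1.0 / 29.97;            /* one NTSC frame */

    double window_s = sram_bytes / sample_rate_hz;         /* ~43.7 ms      */
    printf("capture window: %.1f ms = %.2f NTSC frames\n",
           window_s * 1e3, window_s / frame_time_s);       /* ~1.31 frames  */
    return 0;
}
```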
Or, as your teaser says, have you already solved this one?!
I've been interested in video capture for a while. I've used Hanno's method with some success. Hanno uses an ADC08100 chip. This ADC chip has an 8-bit data bus.
I was just looking over the datasheet and it says it can sample at 100 MHz. I wonder if this would be fast enough for color. There is still the memory problem. I've wondered whether the 8 bits of data could be stored over another 8-bit bus to SRAM. I made a DIY 8-bit memory module in hopes of testing a setup like this.
If the ADC shared the 8-bit bus with the SRAM, the data wouldn't even need to reside in Prop memory at all. Of course the data would eventually need to be read into the Prop to be useful. I think this might be a good application for one of Jazzed's TetraProp boards. Maybe there are some vision algorithms that could be performed by multiple Props. I don't really know what I'd want to do with a Prop that could "see", but it's fun to think about.
I don't understand the way color works with NTSC. Maybe 100 MHz is fast enough to capture color, but would 8 bits be enough to read the color information in the signal?
Duane
"I believe that I can do color with not much more circuitry, but no one has called me on this assertion, or asked how it might be done."
There are two ways I can think of to extract color from the video signal:
1. Sample the signal at 4 times the colorburst frequency, which then gives you a Y+U, Y+V, Y-U, Y-V sequence; extracting Y, U & V is then trivial (see the sketch after this list).
2. Run the input through a set of filters & a color demodulator to output the analog signals, then sample each separately at a lower frequency. Not what I'd call "not much more circuitry" though.
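For option 1, the extraction is roughly the sketch below (assumptions: the sample clock is locked at 4x the colorburst frequency and phased to the burst, so each group of four samples really is Y+U, Y+V, Y-U, Y-V):

```c
/* Sketch of option 1: with the sample clock locked at 4x the colorburst
 * frequency and phased to the burst, each group of four samples is
 * (Y+U, Y+V, Y-U, Y-V), so Y, U and V fall out with adds and shifts.
 * Assumes 'samples' holds one active line whose length is a multiple of 4.
 */
#include <stdint.h>
#include <stddef.h>

static void extract_yuv(const uint8_t *samples, size_t count,
                        int16_t *y, int16_t *u, int16_t *v)
{
    for (size_t i = 0, o = 0; i + 3 < count; i += 4, o++) {
        int16_t s0 = samples[i],     s1 = samples[i + 1];
        int16_t s2 = samples[i + 2], s3 = samples[i + 3];
        y[o] = (int16_t)((s0 + s2) >> 1);   /* ((Y+U)+(Y-U))/2 = Y */
        u[o] = (int16_t)((s0 - s2) >> 1);   /* ((Y+U)-(Y-U))/2 = U */
        v[o] = (int16_t)((s1 - s3) >> 1);   /* ((Y+V)-(Y-V))/2 = V */
    }
}
```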
It's probably possible to create a digital comb filter by storing the samples from the previous line, then doing sum & difference to extract the Y and the modulated color signal. (This assumes each line has an extra half colorburst cycle, which may not be true.) You'd need to sample at least 3 times the colorburst frequency so you have something to demodulate. Demodulation could be done in software, although it would be tricky to avoid multiplication.
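A minimal sketch of that line sum-and-difference step (assuming two successive lines are already sampled and aligned, and that the subcarrier phase really does invert from line to line):

```c
/* Sketch of the digital comb filter idea: if the color subcarrier inverts
 * phase from one line to the next, then for vertically similar picture
 * content the sum of two lines is (mostly) luma and the difference is
 * (mostly) the still-modulated chroma.  Lines are assumed sampled and aligned.
 */
#include <stdint.h>
#include <stddef.h>

static void comb_split(const uint8_t *line_prev, const uint8_t *line_cur,
                       size_t count, int16_t *luma, int16_t *chroma)
{
    for (size_t i = 0; i < count; i++) {
        luma[i]   = (int16_t)((line_prev[i] + line_cur[i]) >> 1);  /* Y */
        chroma[i] = (int16_t)((line_cur[i] - line_prev[i]) >> 1);  /* +/-C, still modulated */
    }
}
```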
1. There must be a good audio channel.
2. Some video should be attached, preferably "frames per second", not seconds per frame.
So I still have much work to do to get the color speed up to snuff.
P.S. I also now have a VGA driver for this project, if anyone is interested.