P2 NTSC/PAL video input - Page 2 — Parallax Forums

P2 NTSC/PAL video input

Comments

  • Surac wrote: »
    I would like to build a Genlock.

    Superimpose computer image over Video. Like a Bluebox or Greenbox.
    I think a genlock application is easier. You don't need to acquire the video signal, just sync on it and output/mix when needed. The OSD was already done with P1.
    https://forums.parallax.com/discussion/92536/propeller-based-video-overlay-osd-module/p1
  • I built genlocks for Amiga computers for a living years ago. Would love to build one with a P2 again 🤗
  • The Amiga was built for this kind of thing. An A2000 was my first piece of video post-production equipment.
    The video section had external sync inputs, which reduced external genlock hardware to little more than a summing circuit, since the Amiga generated its video signals in sync with the external video source.
  • Rayman Posts: 14,744
    I wonder if one can use the RCA connectors on the A/V expansion board to test this out...

    Am I reading it right that you can capture and show black & white now with the P2?
    Also, that you can capture the color info, but not show it in real time yet?
  • Yes, the Amiga was nice for building a genlock. I dream of using a cog like a time base corrector: scan a TV input with its natural timing and output it with specific timing provided by an external source.

    I built this with an FPGA, but it would be a nice project for the P2 for long winters

    But it is a little nonsensical today too, as analog video is dying
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2020-07-09 19:22
    The colorburst is about 42 MHz, so there will be no color from these cameras with the P2 ADC.

    You don't need to sample the colorburst with the ADC. Just use the counters as an I/Q demodulator, as I laid out here for the P1:

    https://forums.parallax.com/discussion/135244/propeller-backpack-color-ntsc-capture/p1

    However, without PLLs, you'll have to pick a crystal frequency that gives you 42MHz exactly.

    -Phil
  • Rayman Posts: 14,744
    So, 21 MHz crystal and run at 210 MHz maybe?
  • Rayman wrote: »
    I wonder if one can use the RCA connectors on the A/V expansion board to test this out...

    Am I reading it right that you can capture and show black & white now with the P2?
    Also, that you can capture the color info, but not show it in real time yet?
    Yes. The code I posted uses the AV breakout, but it's not optimal. For my testing I use a purpose-built board. It has 75-ohm termination and AC coupling. This way I can use the 3.16x ADC gain mode, which has a 1.56 Vpp range.

    Here is a frame straight from the P2 memory. There are a few interlacing artifacts from not holding the camera perfectly still. The mesh texture is the chroma carrier. Not a P2 artifact.
    [image: p2_selfie.png, 784 x 525]
  • Hmm, it looks like a filter could take care of that chroma mesh artifact without too much trouble.

    In about 3 weeks the Mars 2020 Perseverance rover launches. From memory it has about 23 cameras on board. It'd be fun to make a model that had 23 cameras, captured using the P2.

    Color images from Mars almost seem monochromatic anyway; you'd just need a kind of redder version of 'sepia'.
  • Rayman Posts: 14,744
    Looks pretty good to me!
  • SaucySoliton Posts: 524
    edited 2020-07-11 02:49
    Tubular wrote: »
    Hmm, it looks like a filter could take care of that chroma mesh artifact without too much trouble.

    In about 3 weeks the Mars 2020 Perseverance rover launches. From memory it has about 23 cameras on board. It'd be fun to make a model that had 23 cameras, captured using the P2.

    Color images from Mars almost seem monochromatic anyway; you'd just need a kind of redder version of 'sepia'.
    Wow, that's a lot of cameras. But they could all be connected to a P2. It couldn't decode all of them at the same time, but it could switch between them with no extra hardware. Color isn't a big problem if we don't need high frame rate.

    I have a filter designed for that already. The NTSC standard specifies max -2 dB @ 1.3 MHz, min -20 dB @ 3.57 MHz. The vertical lines show those limits. This filter fails the first limit by just a tiny bit.
    taps=[.2 .3 .3 .2]
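    As a cross-check, the response of those taps at the two NTSC limit frequencies can be evaluated with a few lines of Python (a sketch: the taps and the ~12.273 MSPS square-pixel sample rate are from this thread, the rest is standard FIR evaluation):

```python
import cmath
import math

def fir_mag_db(taps, f_hz, fs_hz):
    # Magnitude of H(f) = sum_k h[k] * exp(-j*2*pi*f*k/fs), in dB
    w = 2 * math.pi * f_hz / fs_hz
    h = sum(t * cmath.exp(-1j * w * k) for k, t in enumerate(taps))
    return 20 * math.log10(abs(h))

taps = [0.2, 0.3, 0.3, 0.2]
fs = 12.273e6                # square-pixel NTSC sample rate used above

print(fir_mag_db(taps, 1.3e6, fs))       # luma limit: should be no worse than -2 dB
print(fir_mag_db(taps, 3.579545e6, fs))  # chroma carrier: should be at least -20 dB down
```

    This comes out around -2.1 dB at 1.3 MHz (just missing the -2 dB limit, matching the "fails by just a tiny bit" above) and deeply attenuated at the chroma carrier.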
    


    Rayman wrote: »
    Looks pretty good to me!
    Also, just ran the PC based color decoder on the above image. Overall, I'd say the quality is equal to a slightly noisy analog broadcast.
    The P2 is running at 280 MHz, so that means a 280 mbps raw data rate. But a bit of that is wasted on the blanking intervals and not filling the full ADC range.
  • The P2 is running at 280 MHz, so that means a 280 mbps raw data rate. But a bit of that is wasted on the blanking intervals and not filling the full ADC range.

    Is that one bit per P2 clock or one byte per P2 clock, @SaucySoliton ? Lower case mbps seems to imply bits. I'd hope it was bits otherwise it would be difficult to write at that byte rate into HyperRAM.
  • rogloh wrote: »
    The P2 is running at 280 MHz, so that means a 280 mbps raw data rate. But a bit of that is wasted on the blanking intervals and not filling the full ADC range.

    Is that one bit per P2 clock or one byte per P2 clock, @SaucySoliton ? Lower case mbps seems to imply bits. I'd hope it was bits otherwise it would be difficult to write at that byte rate into HyperRAM.

    One bit per clock. That is the rate from the ADC. The data I process is from the scope filter. That downsamples it to 12.273 megabytes per second or 98.184 megabits (rates chosen for square pixels on NTSC). But that could be doubled or quadrupled if using more than one ADC. All the screenshots I've posted are from one ADC.
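    The 12.273 MSPS figure and the "square pixels" remark fit the standard NTSC numerology; a quick Python check (the rate constants here are standard NTSC values, not from a P2 register):

```python
# NTSC square-pixel sample rate: 780 samples per ~63.556 us line,
# conventionally written as 12 + 3/11 MHz
fs = (12 + 3 / 11) * 1e6
print(fs / 1e6)                 # ~12.2727 MSPS (quoted above as 12.273)
print(fs * 8 / 1e6)             # ~98.18 Mbit/s at one byte per sample

# The color subcarrier, 315/88 MHz, lands at exactly 7/24 of this rate
fsc = 315e6 / 88
print(fsc / fs)                 # 0.291666... = 7/24
```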
  • rogloh wrote:
    Lower case mbps seems to imply bits.

    Also implied by lower case is millibits per second. :)

    -Phil
  • Gee, for something that never purposely set out to capture analog video, that's mighty useful

    Yes, one camera (or two, each in a separate cog) at a time would be plenty; the P2 could mux which input is active. It would be good to have two for stereoscopic vision.
  • rogloh Posts: 5,837
    edited 2020-07-11 03:57
    One bit per clock. That is the rate from the ADC. The data I process is from the scope filter. That downsamples it to 12.273 megabytes per second or 98.184 megabits (rates chosen for square pixels on NTSC). But that could be doubled or quadrupled if using more than one ADC. All the screenshots I've posted are from one ADC.

    Great. Then I think we'll be in business for video capture into HyperRAM at decent resolutions that would tax HUB RAM too much, and you could do some post processing from there. For colour decoding it would be ideal to come up with a filter algorithm that can work on bursts read from the HyperRAM into HUB and possibly written back to HyperRAM if needed. Sort of by scanline or groups of lines at a time. I'd imagine that would be how your filter would want to do it anyway, though I'm not much of a DSP guy. I wonder how many COGs will be needed in total for a single NTSC source and at what rate it could process the captured frames?
  • kwinn Posts: 8,697
    edited 2020-07-11 14:15
    Rayman wrote: »
    Looks pretty good to me!

    +1 - I have used a short length of Cat5 cable and standard 15-pin D connectors to connect an ODROID board (720p) and a Propeller board (1280x1024) to their individual monitors, with good results in both cases. Images were sharp, stable, and without artifacts.
  • Cluso99 Posts: 18,069
    edited 2020-07-13 20:10
    Tubular wrote: »
    Gee, for something that never purposely set out to capture analog video, that's mighty useful

    Yes, one camera (or two, each in a separate cog) at a time would be plenty; the P2 could mux which input is active. It would be good to have two for stereoscopic vision.

    The Mars 2020 rover has, IIRC, stereo vision specifically for determining 3D depth. I’m not sure if Curiosity has stereo but I suspect it has too. I’ll have to check.

    Checked it: yes, Curiosity has stereo cameras on the mast, and low-slung stereo cameras for navigation. The 2020 rover (Perseverance) has more and better cameras, as you’d expect for a design ~12 years newer.
  • pik33 Posts: 2,383
    This means a scandoubler for a retrocomputer is doable with P2.
  • Tharkun Posts: 67
    edited 2020-12-21 19:49
    dMajo wrote: »
    Surac wrote: »
    I would like to build a Genlock.

    Superimpose computer image over Video. Like a Bluebox or Greenbox.
    I think a genlock application is easier. You don't need to acquire the video signal, just sync on it and output/mix when needed. The OSD was already done with P1.
    https://forums.parallax.com/discussion/92536/propeller-based-video-overlay-osd-module/p1

    Then it should be possible to generate a colored overlay!?
  • I've got a fun demo for y'all. How many other microcontrollers can capture FOUR video signals simultaneously? The P2 can.

    The program also has the ability to display a single signal at full 640x480 resolution. The image can be displayed on a computer using the debug window. That takes 10 seconds, though.

    This is the same functionality as quad processors that were used to monitor security cameras. Devices with this functionality cost $249 back in 1999.

  • cgracey Posts: 14,206

    @SaucySoliton said:
    I've got a fun demo for y'all. How many other microcontrollers can capture FOUR video signals simultaneously? The P2 can.

    The program also has the ability to display a single signal at full 640x480 resolution. The image can be displayed on a computer using the debug window. That takes 10 seconds, though.

    This is the same functionality as quad processors that were used to monitor security cameras. Devices with this functionality cost $249 back in 1999.

    Nice, Saucy!

    How many cogs does it take per channel? Are you ganging ADC pins for faster conversions?

  • @cgracey said:

    @SaucySoliton said:
    I've got a fun demo for y'all. How many other microcontrollers can capture FOUR video signals simultaneously? The P2 can.

    The program also has the ability to display a single signal at full 640x480 resolution. The image can be displayed on a computer using the debug window. That takes 10 seconds, though.

    This is the same functionality as quad processors that were used to monitor security cameras. Devices with this functionality cost $249 back in 1999.

    Nice, Saucy!

    How many cogs does it take per channel? Are you ganging ADC pins for faster conversions?

    1 cog per channel. I sort of ran out of memory to do more. Theoretically it could go up to 7 channels, plus 1 other cog for the display. But going higher would require more downsampling or cropping of the video. Each channel uses about 14k of hub RAM to buffer unprocessed samples. I could switch that from longs to bytes to save a bunch. I think it would be possible to run 240x240 from each channel into a 720x480 frame. I had to squeeze some stuff just to be able to debug.

    1 pin per channel. It uses the scope filter and streamer to pick off samples at 12.7 MSPS. If I were to use multiple ADCs for the same signal, I would need to add together bytes from within the same long. The cycles to sum 3 or 4 bytes together becomes significant. I felt the video quality was good enough with 1 pin. Applications that need high quality should use an external ADC/decoder chip.
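    The "add together bytes from within the same long" step can be prototyped off-chip; here is a Python sketch of the per-long work a PASM loop would do (the little-endian byte packing order is an assumption):

```python
def sum_packed_bytes(value):
    # Sum the four 8-bit ADC samples packed into one 32-bit long
    return sum((value >> shift) & 0xFF for shift in (0, 8, 16, 24))

# Four parallel ADC readings of the same signal, one byte each
packed = 0x40413F42
print(sum_packed_bytes(packed))   # 0x40 + 0x41 + 0x3F + 0x42 = 258
```

    On the P2 this shift-mask-add sequence is exactly the kind of per-sample overhead that eats into the cycle budget, which is the cost being weighed above.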

  • cgracey Posts: 14,206

    Thanks for the explanation, Saucy.

  • This is interesting work Saucy. I wonder if it would be able to write to HyperRAM or PSRAM and read back and display from a frame buffer. What is the total memory bandwidth you need for doing that? I do have the ability to get multiple writers sharing the memory as well as graphics writes that can skip a programmed offset in memory per scanline written so you could put 4 source windows into a common framebuffer.

  • As seen on the live forum. With the fps limited by USB 1.1, color decoding became more practical.

    The composite color video decoding is written in Spin. Currently only tested on Linux, with the new 320x240 resolution.

    fifo_demo runs at 2.2 fps. The speedup is from doing the color decoding in a separate cog.

    The USB device demo runs at 1.6 fps.

    The USB port has bandwidth for 9 fps in NV12 format and 6.7 fps in UYVY format.

    Flexspin only.
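    Those fps figures are consistent with roughly 1 MB/s of usable USB full-speed bandwidth at 320x240; a quick sanity check (the bandwidth number is my assumption, not stated in the post):

```python
width, height = 320, 240
usable_bw = 1.024e6                    # ~1 MB/s usable over USB 1.1 (assumption)

nv12_frame = width * height * 3 // 2   # 12 bits/pixel -> 115200 bytes/frame
uyvy_frame = width * height * 2        # 16 bits/pixel -> 153600 bytes/frame

print(usable_bw / nv12_frame)          # ~8.9 fps (quoted above as 9)
print(usable_bw / uyvy_frame)          # ~6.7 fps (quoted above as 6.7)
```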

  • Rayman Posts: 14,744
    edited 2024-10-10 23:02

    That was an impressive demo yesterday …
    Being able to make the P2 appear as a camera is interesting.

    Could one go higher resolution with lower frame rate?

    Or vice versa..

  • @Rayman said:
    That was an impressive demo yesterday …
    Being able to make the P2 appear as a camera is interesting.

    Could one go higher resolution with lower frame rate?

    Or vice versa..

    Yes. I ran 780x240 during development as that is the size of the raw capture I used.

    There is 1 MB/sec of bandwidth available for whatever configuration we want to use. There is a possibility of using JPEG encoding. Doing an 8x8 DCT takes an estimated 32 clocks per pixel. With a generous allowance for the other parts of JPEG encoding, we would have a total of 80 clocks per pixel if written in PASM. So maybe one cog could compress 4 MB/sec into JPEG format. Or we could send YUV data in H.264 format to allow things like partial screen updates. https://www.cardinalpeak.com/blog/worlds-smallest-h-264-encoder
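    The "one cog could compress 4 MB/sec" estimate follows directly from the clock budget (the 280 MHz sysclock is quoted earlier in the thread; the 80 clocks/pixel figure is from the post above):

```python
clock_hz = 280e6           # P2 sysclock quoted earlier in the thread
clocks_per_pixel = 80      # ~32 for the 8x8 DCT plus allowance for the rest of JPEG

pixels_per_sec = clock_hz / clocks_per_pixel
print(pixels_per_sec / 1e6)   # 3.5 Mpixel/s, i.e. ~4 MB/s at one byte per pixel
```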

  • rogloh Posts: 5,837

    :smiley: I like the idea of JPEG compression Saucy, could be useful for my other capture stuff too.

  • Rayman Posts: 14,744

    Some of the camera modules have a JPG output option, so that might be an interesting mix…
