Simultaneous VGA and composite
tof
Posts: 10
Hi,
Is it possible to use the same Graphics (from the Graphics object) as a source for VGA and composite video? Will this tax the microcontroller a lot more?
Thank you.
Comments
If you want to do more than the basic drivers can, which is 2- or 4-color graphics with colors assignable per "tile" (a 16x16 or 16x32 pixel block on screen), then the color issue becomes more important, because the raw color data isn't directly compatible between TV and VGA.
Are you planning to use graphics.spin, TV.spin and VGA.spin?
Your program will run in a COG or two; one COG will run graphics.spin, and two other COGS will run TV.spin and VGA.spin. That's really the cost: one extra COG needed for the second display.
VGA resolutions are different from the TV's, and you will need to do some work setting up the display, number of tiles, etc... so that the VGA display shows the graphics in a fashion similar to the TV display.
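For what it's worth, here is a rough Spin sketch of that structure. The parameter-block layout follows Graphics_Demo.spin, and the idea of a matching vga_status block is an assumption — check the names against your copies of TV.spin and VGA.spin:

```
CON
  x_tiles = 16
  y_tiles = 12

VAR
  ' TV.spin parameter block, laid out as in Graphics_Demo.spin
  long  tv_status, tv_enable, tv_pins, tv_mode, tv_screen, tv_colors
  long  tv_hc, tv_vc, tv_hx, tv_vx, tv_ho, tv_vo, tv_broadcast, tv_auralcog

  word  screen[x_tiles * y_tiles]       ' one tile map, shared by both outputs

OBJ
  tv  : "tv"
  vga : "vga"
  gr  : "graphics"

PUB start
  ' fill in the rest of the TV parameters (longmove from a DAT table), then:
  tv_screen := @screen
  tv.start(@tv_status)                  ' one COG for the TV signal
  ' vga.start(@vga_status) would go here with its own parameter block,
  ' taking the second extra COG mentioned above
```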
Actually, I created an audiovisual show called Tvestroy two years ago. You can check out a 5-minute overview of the show here: http://www.youtube.com/watch?v=Z4Xp_Vwj__8 (sorry if it's a little noisy).
The video is generated through VGA by two computers running Pure Data. The audio consists entirely of the RGB lines of the VGA video signal connected to a sound mixer (the video literally is the sound).
I want to create an embedded version of the show, so I am working out a way to generate the audio from the video signal. Also, I want this version to work with a composite video signal. My possible solutions are as follows:
1) Create the Graphics, feed them to TV.spin to generate the composite video signal and also feed them to VGA.spin to generate the audio signal (from the VGA video signal).
2) Create the Graphics, feed them to TV.spin and create a program to serialize the video buffer on an output pin to generate the audio signal. This does not seem like the right solution.
3) Create the Graphics, feed them to TV.spin, use a couple of transistors to duplicate the composite video signal and use the duplicated signal as the audio source. The problem with this method is that a composite video signal does not "sound" as nice as a VGA signal.
What do you think?
I also started reading your Full Color Tile Driver thread. If I understand correctly, that driver allows the creation of tiles that use all 64 colors? Do you think your driver would be capable of creating real-time graphics similar to the ones seen near the end of the Tvestroy video? It probably is not an issue, since I think I could generate the same visuals with the reference objects, but it is still nice to know.
Thank you for your help.
Tvestroy looked great!
Linus (lft) had a demoscene project (Turbulence) that was coded on the Propeller, using VGA:
http://www.linusakesson.net/scene/turbulence/index.php
You might look at that project and see if there are techniques you might be able to borrow.
--trodoss
Am I correct in understanding that you don't actually need it to display on VGA? You just want the signals to get the sound?
As for the real-time graphics, lots can be done! The drivers I've been involved in are generally geared toward less abstract displays, using formal tiles and sprites. A lot of the real-time stuff I saw in your presentation could be done with loops, and data fed to the video generators in real time, or through a buffer. What I think you need is a framework where the video sync signals are all taken care of, operating properly, then various loops in the active pixel area that take advantage of the different pixel clock and color options, under software control from another COG.
What did your code look like for that demonstration? Maybe it would help some to understand what you did there.
The code was written in OpenGL with the free data-flow programming language called Pure Data (http://puredata.info/). I would create a few basic shapes and then animate them. For example, I would create a rectangle and then change its scale every second.
I am not worried by the translation of the abstract graphics into loops and such. As you said, I just want to have a good framework and good technique to start working with.
Basically what I am aiming for is a set of "scenes". The user can trigger between the scenes on the fly. The time it takes to switch between two scenes is not critical and can be used to setup a scene before running it.
Yes, that is interesting, but I think his code needs a special compiler.
The templates will throw up color bars for the active display area (all of it). From there you can code bitmaps and other things into loops, with branches that depend on data fetched from the HUB. A "scene", then, is a combination of that COG running video and doing various basic things, and SPIN playing traffic cop, controlling the parameters and objects seen.
That combination is fast enough to reproduce a lot of what I saw in your demo, though to do it, you will be on the more advanced end of propeller video, dynamically drawing things, instead of just buffering them in a bitmap, or something.
A single COG can do some basic things, like a bitmap, or text, perhaps some variations on that. To exceed that, you would use more COGS to feed what a lot of us call "the signal cog". In that mode, the video cog is drawing from a scan line buffer, which other COGS, or even SPIN fills. (SPIN can't do much though)
My latest 80 column text driver uses that technique to do color text, and several of the sprite drivers also use it. You might look one or more of those over (in my sig and blog, Eric Ball, Baggers, et al...) to see how it all comes together.
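A minimal Spin sketch of that scan-line-buffer handshake (every name and size here is hypothetical, and real drivers do the signal side in PASM, not Spin):

```
CON
  pixels_per_line = 256                 ' hypothetical active width

VAR
  long  scanline                        ' advanced by the signal cog each line
  byte  line_buffer[pixels_per_line]    ' the signal cog shifts this out

PRI render_loop | last
  ' a renderer cog stays one line ahead of the signal cog
  last := -1
  repeat
    repeat while scanline == last       ' wait for the signal cog to advance
    last := scanline
    bytefill(@line_buffer, 0, pixels_per_line)
    ' ... draw the pixels for the next line into line_buffer here ...
```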
If it were me, I would get a signal cog running with a graphics cog, doing a simple bitmap display. That vets your line buffer, video signal (so that you don't disturb it while building), and your graphics cog, handshaking and timing. From there, you can start to build your scenes.
We can load driver code from SD card, along with other data. You could make quite a presentation, a piece at a time, loading stuff, starting up the signal, doing it, then loading other stuff, and doing that, etc... fetching graphics data, signal types, etc....
In this way, Props are pretty unique, but it won't be all that easy. Should be fun though.
One other thing. If resolution isn't too high, you can use bitmaps, and the graphics.spin COG. That's a 4 color, tile or bitmap based system, and it's what is shipped with the Propeller tool. You get 4 colors per tile, and you get the ability to point the tiles at various places in the 32K HUB memory. That's really great for repeating patterns, simple shapes, etc...
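Concretely, "pointing the tiles" looks like the setup loop in Graphics_Demo.spin, where each screen word packs a tile's HUB address with a palette index (constants here match that demo; verify against your copy):

```
CON
  x_tiles = 16
  y_tiles = 12
  display_base = $5000                  ' bitmap location in HUB RAM

VAR
  word  screen[x_tiles * y_tiles]

PRI init_tiles | x, y
  ' each word: bitmap address >> 6 in the low bits, palette index above,
  ' so several tiles could just as well point at the same 64-byte block
  repeat x from 0 to x_tiles - 1
    repeat y from 0 to y_tiles - 1
      screen[y * x_tiles + x] := display_base >> 6 + y + x * y_tiles + ((y & $3F) << 10)
```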
I've got that commented in my signature, and I've also shown a single buffer approach in my blog. Single buffer may not work for you though, depending on what your expectations are for "real time".
That's a great place to start, because it is kind of sort of OGL like, in that you can define graphics window memory, coordinate systems, and execute various primitives, line, arc, box, shape at, etc...
Finally, I would get a scene working, then dedicate a cog to outputting the signal as digital data. I think it's going to work off the same line RAM that the TV cog does. You might also consider just bit-banging that data to some pins, which might sound quite nice compared to the composite, which has a clear buzz from the vertical refresh.
I want composite+VGA because I will probably produce a dozen of these and I want to hook them up to CRT television sets. For the same reason, I want the part count to be rather low.
@potatohead
Thank you for your help. I will try to digest it all and write some sample code today.
http://wiki.t-o-f.info/uploads/Propeller/propeller_graphics.zip
I based my test code on Graphics_Demo.spin.
I modified the following elements near the end; I found that this method syncs the code a lot better with the TV's refresh rate:
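The modified lines aren't quoted here, but for reference, the usual way to lock drawing to the refresh is to watch the status long that tv.spin maintains (0 = disabled, 1 = invisible/vertical blank, 2 = visible). A sketch of that pattern, assuming the Graphics_Demo.spin names:

```
PRI wait_vblank
  ' wait for the visible field, then for the blank that follows it,
  ' so the copy below always starts at the top of the blank
  repeat until tv_status == 2
  repeat until tv_status == 1

PRI frame
  gr.clear
  ' ... draw the scene into the back bitmap ...
  wait_vblank
  gr.copy(display_base)                 ' finish the copy inside the blank
```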
Also, instead of copying the buffer, I will probably just switch the buffer pointers. That should speed things up.
If I have time today, I will feed the Graphics to the VGA and check out how it sounds.
It works and sounds great if you take into account the different ways they display the colors.
The only problem is that I would want both the video and the audio output in sync with their respective vertical blanks.
I doubt there is enough space on the Propeller to store another "bitmap base".
Any suggestions? Maybe making two "screen" arrays (pointers to the tiles) and substituting "tv_screen" and "vga_screen" at the right moment?
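A hedged sketch of that idea, assuming tv.spin and vga.spin re-read their screen pointers each frame (if your copies latch them at start-up, you would rewrite the tile words in place instead):

```
VAR
  word  screen_a[x_tiles * y_tiles]     ' tile map pointing at bitmap A
  word  screen_b[x_tiles * y_tiles]     ' tile map pointing at bitmap B

PRI flip
  ' swap which tile map the drivers scan out; done here at the TV blank,
  ' so the VGA side may still tear unless you gate on its status too
  repeat until tv_status == 1
  if tv_screen == @screen_a
    tv_screen := @screen_b
    vga_screen := @screen_b
  else
    tv_screen := @screen_a
    vga_screen := @screen_a
```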