QVGA VIDEO IN VGA DEMO - now also plays full-screen & audio
Peter Jakacki
Posts: 10,193
Here's a short demo of playing a 320x257 video within a VGA buffer. I used VLC with a scene filter to output the frames as PNGs and then converted these to QVGA BMPs using XNCONVERT. I then placed all these frames in a folder, put them onto my SD card, wrote a couple of simple routines in TAQOZ, and presto!
pub VIEWFRM
    FRAMEBUF DUP 10 + W@ +                   --- point to the pixel data (BMP stores its offset at byte 10)
    DUP $400 - PALETTE 256 LMOVE             --- copy the 1kB palette (256 longs) that precedes the pixel data
    257 DUP 320 * ROT + SWAP                 --- 257 lines of 320 bytes; start from the far end of the data
    0 DO
      DUP SCR I vwin W@ + 640 * + hwin W@ +  --- dest = SCR + (I+vwin)*640 + hwin (the floating window)
      80 LMOVE                               --- copy one 320-byte line as 80 longs
      320 -                                  --- step back one source line
    LOOP DROP ;

pre WATCH ( <folder> -- )
    ROOT [C] CD                              --- change into the folder holding the frames
    " FRM00001.BMP" file$ $!                 --- start with the first frame's filename
    CLRSCR
    BEGIN file$ FOPEN$ @FILE WHILE           --- open the next frame; exit when no more files
      @FILE FRAMEBUF FSIZE SDRDS             --- read the whole BMP into the holding buffer
      SYNCH VIEWFRM                          --- wait for vsync, then blit the frame into the window
      FILE++                                 --- increment the filename to the next frame
    REPEAT ;
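For readers following along, here's a rough Python model of the BMP bookkeeping VIEWFRM relies on — a sketch assuming a standard 8bpp BMP with a full 256-entry palette (the function name is mine, not part of TAQOZ):

```python
import struct

def bmp_layout(hdr: bytes):
    """Model of what VIEWFRM reads from the BMP header: the pixel-data
    offset stored at byte 10, and the 1kB palette sitting just before
    the pixel data (256 RGBX longs = $400 bytes)."""
    pixel_offset = struct.unpack_from("<I", hdr, 10)[0]
    palette_offset = pixel_offset - 0x400
    return pixel_offset, palette_offset
```

For a typical 8bpp BMP (14-byte file header + 40-byte info header + 1024-byte palette), the pixel-data offset is 1078 and the palette starts at byte 54.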
Comments
Envy... Just kidding!
Henrique
Is there anything you cannot make TAQOZ do?
Is this TAQOZ identical to what will be in the P2+ ROM, or is this some expanded superset?
The playback window can be moved around, but I also have it bouncing around the screen just for the fun of it. I've been doing boring stuff lately and just wanted to play.
Now I need to add sound but it might be fun to write a small GUI media player.
Now you're just showing off
That was only a quick test demo; the video could have been much longer and in fact could have taken up the whole card, even my 128GB card. But it works out at around 34 mins of video for every 4GB.
Since then I have already combined the individual frames into one large file. Even though FAT32 is limited to a 4GB file size, TAQOZ looks at the card as one big virtual memory, so it can handle contiguous 4GB files as one very large file, meaning a 64GB card can play over 9 hours of video. I may come up with a more compact format for the frames, but then I would have to write some PC code to encode it in the first place. My next step is to add sound to the video as well: mono 16-bit wave files at 44.1kHz need 88,200 bytes/sec, which means I only need to add about 3.5kB per frame for audio, or just have a separate audio file that is read concurrently.
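The figures above are easy to sanity-check. Here's the back-of-envelope arithmetic as a sketch, assuming 320x240 8bpp frames with a 1kB palette each and a 25fps frame rate (the frame rate isn't stated in the post, so that part is my assumption):

```python
# Storage arithmetic for raw 8bpp QVGA video plus 16-bit mono audio.
FRAME_PIXELS = 320 * 240      # 76,800 bytes at 8 bits per pixel
PALETTE = 256 * 4             # 1,024-byte palette stored per frame
FPS = 25                      # assumed frame rate

bytes_per_sec = (FRAME_PIXELS + PALETTE) * FPS        # ~1.95 MB/s of video
minutes_per_4gb = 4_000_000_000 / bytes_per_sec / 60  # ~34 minutes per 4GB

audio_bytes_per_sec = 44_100 * 2                      # mono 16-bit = 88,200 B/s
audio_bytes_per_frame = audio_bytes_per_sec / FPS     # ~3.5 kB per frame
```

The ~34 min/4GB and ~3.5kB-of-audio-per-frame figures in the post both fall out of this directly.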
Note: that dark part of the screen seems to be a battle between the monitor's polarizer and the camera's.
Being an insta poster, she does know however to ask "did anyone comment?", to which I said "nah, not yet, maybe they don't like musicals (without the music)".
So when I get back home I will add the audio track, although I'm thinking about the best way of doing that, both in terms of encoding and decoding. The 44100Hz 16-bit wave audio would account for around 4% of the file, and at the moment I'm thinking I could add the full audio file to the start of the video file, or perhaps just have a separate audio file of the same name. Thoughts?
https://docs.microsoft.com/en-us/windows/desktop/directshow/dv-data-in-the-avi-file-format
There might be a way to get ffmpeg to convert to this format with uncompressed A/V.
Bravo!!!!!!
My P2 is a PC! I have a USB/PS/2 keyboard and the monitor so once TAQOZ loads up from SD I could get by without a PC.
So with regards to encoding and audio, I want to keep the encoding as simple as possible. At present I use VLC to convert a video to BMP frames, but there is no control to set it to 8-bit color, so I run those frames through xnconvert and then cat the files into one. It's a little awkward, but it works. I also use WINFF, the GUI version of ffmpeg, for all kinds of conversions, and especially to upload videos since it does a really good job of compressing; I see that @ke4pjw has been using ffmpeg for his OLED video. I wonder if I can get that to do what I want, although there is still the audio to incorporate. WINFF lets me view and edit the ffmpeg command line too. The 8-bit BMP format seems to keep the colors acceptable, and there is very little overhead with this format.
Now, 320x240 seemed like a good compromise since it only needs about 80kB for a second framebuffer, but the trouble is that monitors are 16:9, so we really need a resolution that matches that aspect ratio. If I keep it 640 wide, then I can drop down to 360 lines and simply triple each line to work with 1920x1080, so a full frame buffer need only be 230kB. So I might make my VGA driver switchable between these two aspect ratios, with no real difference elsewhere except the loss of some lines. I can still display 320x240 videos, or full-screen 640x360, which may not need a second framebuffer if I synch and time the update so as not to overwrite any data still being displayed.
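The buffer sizes being traded off above are just pixel counts at 8bpp; here's a quick sketch of the arithmetic (bytes, nothing assumed beyond the resolutions named in the post):

```python
# Framebuffer sizes at 8 bits per pixel, in bytes.
fb_4x3  = 640 * 480   # 307,200 B - the full 4:3 screen
fb_16x9 = 640 * 360   # 230,400 B - 16:9; each line tripled fills 1080 lines
qvga    = 320 * 240   #  76,800 B - the "about 80k" second buffer for video
```

The 640x360 mode maps onto 1920x1080 because 360 x 3 = 1080, with each pixel tripled horizontally as well by the display side.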
btw - are there any good examples or ideas of using any of the DAC modes to output 16-bit wave audio?
I wanted to make the QVGA video full-screen on 640x480. Getting it to repeat each scanline is easy, but I'm still trying to find an efficient routine for doubling up the pixels on each line. For now, the method I have, which simply reads a source byte, doubles it up into a word, and writes the word to the screen, works well enough.
Here's the pixel doubler code at present. I am looking at using setq2 to read in the source, double it up in LUT, and then write it to the screen. Suggestions welcome.
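The attached code isn't reproduced here, but the byte-to-word doubling it describes can be modeled in a few lines of Python (a sketch of the technique only, not the actual PASM; the function name is mine):

```python
def double_pixels(src: bytes) -> bytes:
    """Model of the pixel doubler: each 8bpp source byte is replicated
    into two adjacent output bytes, i.e. one word write per source byte."""
    out = bytearray(len(src) * 2)
    for i, p in enumerate(src):
        out[2 * i] = p        # left copy of the pixel
        out[2 * i + 1] = p    # right copy of the pixel
    return bytes(out)
```

A 320-byte source line comes out as a 640-byte screen line, so the doubled QVGA line exactly fills the 640-wide framebuffer.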
How could it be?
Even under your mind's and hands' direction, this motion picture never gets better than it was in 1953, when it was released (two years older than me), but your work on TAQOZ never gets worse compared to its launch-date version... both driven by the same mind and hands...
Another observation...
Did you notice the two huge SHARKS, circling at the middle right of the screen, stealing the scene while staying undercover, between 2:31 and 2:42?
You exposed them by switching the color palettes... Perhaps they were just the original location's tenants; or perhaps just guys from an ancient actors' tip-collecting team.
Congrats — making excellent work is your style.
P.S. As a side just-for-curiosity note: are you a left-handed person?
Henrique
btw - unfortunately I'm right handed, but my other half is left handed. Does that make me ambidextrous?
My fastest x2 scaler so far involves setq2 rdlong into lut, doubling up all the pixels into a holding buffer in lut, and then setq2 wrlong back into the framebuffer.
EDIT: Actually, a REPd unrolled version of the above is fast at 6.6ms (per frame)
But the method I wrote before, which simply wrote out each long as it created it, works almost as fast.
Also! Bouncing it around to screen? AMAZING!!
Looks great, Peter!
I have questions:
1) Is the entire screen bit-mapped? Or, is the text tile-mapped?
2) When you scroll, are you scrolling the memory or redrawing a new screen?
3) Is there any double-buffering going on?
I was observing the scrolling and wondering what was going on to make that happen. I suspect you are bit-mapped, because I think scrolling a text screen would be much faster.
Looks good!
Instead of repeating horizontal pixels, I just slowed down the pixel rate and halved the horizontal timing...
This tricks it into thinking it's 640 pixels when only really 320...
@cgracey - at present the framebuffer is fixed as a 640x320 8bpp bitmap, so when I watch a video it reads into a holding buffer first and then writes into the framebuffer at the current floating window position, which makes it very easy to make it bounce around. The VGA driver is set up to read in the palette and other hub parameters every vsynch, so when I play a video it updates the palette per frame.
The scrolling of the text is bitmapped, so this relies on the setq2 rdlong/wrlong method you recommended to move that memory around, which is also why I can set up a scroll region, like I do when I type 8 TERM and the text scrolling is confined to the bottom 8 lines.
But a 640x480x8 bitmap takes over 300kB, so there is no room to double-buffer the full screen. The 320x240 video does have a buffer, but full-screen has to upscale and write directly to the screen fast enough. I got the full-screen upscale render down to 6.6ms once the frame has been read in from SD using multi-block sector reads.
@Rayman - you've been doing great stuff, and I got a little bored so I wanted to have some fun too before I press on with some new hardware, which will also include HyperRAM!
While I have the option of changing the VGA resolution, I wanted to test out this software method instead. I will however be adding selectable 16:9/4:3 aspects to the VGA driver so I can switch it as required although I think I may only end up using 640x360x8 16:9 most of the time.
EDIT: the full-screen frame update now takes 3ms - I realized I could just upscale one line and write that out twice.
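The 3ms trick above — upscale each source line once, then write the doubled line out twice — can be sketched like this (a Python model of the technique, not the actual TAQOZ/PASM code; names and default sizes are mine):

```python
def upscale_frame(src: bytes, src_w: int = 320, src_h: int = 240) -> bytes:
    """Model of the optimized full-screen upscale: double each source
    line horizontally once, then emit that same doubled line twice,
    so the horizontal work is only done once per pair of scanlines."""
    out = []
    for y in range(src_h):
        row = src[y * src_w:(y + 1) * src_w]
        doubled = bytes(b for p in row for b in (p, p))  # 2x horizontal
        out.append(doubled)  # scanline 2*y
        out.append(doubled)  # scanline 2*y+1 - reused, not re-rendered
    return b"".join(out)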
Very nifty! You are leveraging almost every behind-the-scenes opportunity (literally) to refresh the information that will be displayed by the next (or already started) frame scan.
I was asking myself where such excellent scene-by-scene color matching came from; surely from not restricting the palette choices to a single one, but the underlying mechanism of your current work wasn't totally clear to me until you described it.
Since you are using an ancient moving picture to show your progress, the least I can say is... Bravo!
Henrique
When we discussed this last night I was suggesting double-scanning the same pixel line, but obviously didn't explain myself properly.
As for expanding the pixels, what Rayman did was best. I knew there was an efficient way: just change the freq setting in the streamer, which has virtually no penalty.