QVGA VIDEO IN VGA DEMO - now also plays full-screen & audio — Parallax Forums


Peter Jakacki Posts: 10,193
edited 2019-04-03 03:04 in Propeller 2
Here's a short demo of playing a 320x257 video within a VGA buffer. I used VLC with a scene filter to output the frames as PNGs and then converted these to QVGA BMPs using XNCONVERT. I then placed all these frames in a folder, put them onto my SD card, and simply wrote a couple of routines in TAQOZ and presto!

pub VIEWFRM
	FRAMEBUF DUP 10 + W@ +
	DUP $400 - PALETTE 256 LMOVE
	257 DUP 320 * ROT + SWAP
	0 DO DUP SCR I vwin W@ + 640 * + hwin W@ + 80 LMOVE 320 - LOOP DROP
	;

pre WATCH ( <folder> -- )
	ROOT [C] CD
	" FRM00001.BMP" file$ $!
	CLRSCR
	BEGIN
	  file$ FOPEN$
	  @FILE
	WHILE
	  @FILE FRAMEBUF FSIZE SDRDS
	  SYNCH VIEWFRM
	  FILE++
	REPEAT
	;
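For readers less familiar with TAQOZ, the pointer arithmetic in VIEWFRM can be mirrored in a short Python reference model (not part of the original demo; the fields follow the BMP layout the Forth code relies on, and note that a real BMP stores the pixel-data offset as a 32-bit field at offset 10, which W@ reads as its low word):

```python
import struct

def bmp_layout(frame):
    """Mirror VIEWFRM's pointer math on an 8bpp BMP held in memory:
    the word at offset 10 is the pixel-data offset, and the 1024-byte
    (256 x 4) palette sits immediately before the pixel data."""
    pix_off = struct.unpack_from("<H", frame, 10)[0]   # FRAMEBUF 10 + W@
    palette = bytes(frame[pix_off - 0x400:pix_off])    # DUP $400 - PALETTE 256 LMOVE
    pixels = bytes(frame[pix_off:])                    # rows copied to screen window
    return pix_off, palette, pixels
```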

Comments

  • Mr. Peter "What will he do next?" Jakacki...

    Envy... Just kidding! :lol:

    Henrique
  • Cluso99 Posts: 18,066
    Nice work as usual Peter :smiley:

    Is there anything you cannot make TAQOZ do?
  • Amazing!!! Super cool!
  • jmg Posts: 15,140
    Peter Jakacki wrote: »
    Here's a short demo of playing a 320x257 video within a VGA buffer. I used VLC with a scene filter to output the frames as PNGs and then converted these to QVGA BMPs using XNCONVERT. I then placed all these frames in a folder, put them onto my SD card, and simply wrote a couple of routines in TAQOZ and presto!

    Is this TAQOZ identical to what will be in P2+ ROM, or is this some expanded super set ?

  • Peter Jakacki Posts: 10,193
    edited 2019-03-26 23:05
    @jmg - the ROM version is a cut-down version that is totally compiled with p2asm (or PNut) and includes some SD and FAT32 support. The version I use does not have any SD support in the precompiled kernel, but the kernel compiles the source for all the extras including FAT32, disk utilities, VGA text and graphics, PS/2 keyboard, and FTP and HTTP servers, etc.


    The playback window can be moved around but I also have it bouncing around the screen just for the fun of it. I've been doing boring stuff lately and just wanted to play :)
    Now I need to add sound but it might be fun to write a small GUI media player.
  • Super nice, Peter!
    Peter Jakacki wrote: »
    The playback window can be moved around but I also have it bouncing around the screen just for the fun of it. I've been doing boring stuff lately and just wanted to play :)
    Now I need to add sound but it might be fun to write a small GUI media player.

    Now you're just showing off
  • Neat! How much video time can be played back?
  • Neat! How much video time can be played back?

    That was only a quick test demo; the video could have been much longer and in fact could have taken up the whole card, even my 128GB card. But it works out at around 34 minutes of video for every 4GB.

    Since then I have already combined the individual frames into one large file. Even though FAT32 is limited to a 4GB file size, TAQOZ looks at the card as one big virtual memory, so it can handle contiguous 4GB files as one very large file, meaning a 64GB card can play over 9 hours of video. I may come up with a more compact format for the frames, but then I will have to write some PC code to encode it in the first place. My next step is to add sound to the video as well: a mono 16-bit wave file at 44.1kHz needs 88,200 bytes/sec, which means I only need to add about 3.5kB per frame for audio, or just have a separate audio file that is read concurrently.
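A quick back-of-envelope check of those figures, assuming a 25 fps frame rate (the post does not state the rate):

```python
# Rough sanity check of the playback figures above; 25 fps is an assumption.
FRAME_BYTES = 320 * 240            # one raw 8bpp QVGA frame
FPS = 25
video_rate = FRAME_BYTES * FPS     # bytes per second of raw video
minutes_per_4gb = 4e9 / video_rate / 60          # ~34.7 minutes
audio_rate = 44_100 * 2            # mono 16-bit 44.1 kHz = 88,200 bytes/s
audio_per_frame = audio_rate // FPS              # ~3.5 kB of audio per frame
hours_per_64gb = 64e9 / video_rate / 3600        # just over 9 hours
```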

  • How is the pixel data arranged? RGB 8:8:8 or something else? (I really don't know how the video buffers are done for VGA, etc.) Is it interlaced?

  • Can this be connected to a larger VGA screen?
  • Peter Jakacki Posts: 10,193
    edited 2019-03-28 15:15
    I have now created a single bit-mapped video "BMV" file format which the TAQOZ video player automatically adjusts x and y resolutions for. Essentially the BMV is just all the frames concatenated together in one big file but each frame is aligned to 512 byte sectors. I think I can encapsulate an audio file in there too on my next attempt when I look at adding a GUI to the player as well. There are some controls I've added to pause, rewind, fast-forward, and adjust speed as well as bouncing the video frame around the screen (just for fun).
    Note: that dark part of the screen seems to be a battle between the monitor's polarizer and the camera's.
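The sector alignment described above comes down to rounding each frame's size up to a multiple of 512; a minimal sketch (the helper name is hypothetical):

```python
def sector_aligned(nbytes, sector=512):
    """Round a frame's byte count up to the next sector boundary, as the
    BMV format aligns each frame to 512-byte sectors (hypothetical helper)."""
    return -(-nbytes // sector) * sector  # ceiling division, then scale back up
```

A raw QVGA frame (320x240 = 76,800 bytes) happens to be an exact multiple of 512, so no padding is needed for that size.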


  • Just showed my wife this latest post (I do that occasionally), pointing out technical features to a rather non-technical spouse :)
    Being an Insta poster, she does know to ask "did anyone comment?" :smile: to which I said "nah, not yet, maybe they don't like musicals (without the music)".

    So when I get back home I will add the audio track, although I'm thinking of the best way of doing that both in terms of encoding and decoding. The 44100 16-bit wave audio would account for around 4% of the file, and at the moment I'm thinking I could add the full audio file to the start of the video file, or perhaps just have a separate audio file of the same name. Thoughts?
  • ke4pjw Posts: 1,065
    edited 2019-03-29 02:32
    Check out .avi format. It has support for uncompressed video. https://docs.microsoft.com/en-us/windows/desktop/directshow/avi-riff-file-reference

    https://docs.microsoft.com/en-us/windows/desktop/directshow/dv-data-in-the-avi-file-format

    There might be a way to get ffmpeg to convert to this format with uncompressed A/V.
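As a sketch of that idea, the command line could be assembled like this; the flags are standard ffmpeg options, but this exact recipe is unverified and should be checked against the ffmpeg documentation:

```python
def ffmpeg_uncompressed_avi(src, dst, fps=25):
    """Assemble a candidate ffmpeg command line for uncompressed A/V in
    an AVI container. The option names are real ffmpeg flags, but this
    particular combination is a guess, not a tested recipe."""
    return ["ffmpeg", "-i", src,
            "-c:v", "rawvideo", "-pix_fmt", "rgb24",   # uncompressed video
            "-c:a", "pcm_s16le", "-ar", "44100",       # uncompressed PCM audio
            "-r", str(fps), dst]
```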
  • I have now created a single bit-mapped video "BMV" file format which the TAQOZ video player automatically adjusts x and y resolutions for. Essentially the BMV is just all the frames concatenated together in one big file but each frame is aligned to 512 byte sectors. I think I can encapsulate an audio file in there too on my next attempt when I look at adding a GUI to the player as well. There are some controls I've added to pause, rewind, fast-forward, and adjust speed as well as bouncing the video frame around the screen (just for fun).
    Note: that dark part of the screen seems to be a battle between the monitor's polarizer and the camera's.


    Bravo!!!!!!
  • rogloh Posts: 5,122
    edited 2019-03-29 05:51
    Peter, looks good so far. When adding audio, and to help keep things in sync, you might just be able to have a frame's worth of audio samples interleaved with the video in your file at the end of each video frame's data block. It does mess up the file a bit for simple editing/creating of frames, but for a known audio sample rate like 44100 you will just skip some fixed number of bytes per video frame when displaying the video, making things easy. If your video file format is a fixed frame size (no variable compression), the audio can also skip some fixed number of bytes between reading its buffer samples.

    You may need a header of sorts to carry dimensions, sample rates, etc. if you want an extensible format. 44100 is nicely divisible by 25, 30, 50, and 60 fps (though not 24). A separate audio file could also work, but you need to be able to start both at the same time and then keep them both in sync during speed-up/slow-down etc., and if you are reading/seeking between two input files it will hit the SD card harder; probably still doable though if there is enough buffering.
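The divisibility point is easy to check; a tiny sketch (hypothetical helper) showing which frame rates divide 44,100 evenly:

```python
def samples_per_frame(fps, rate=44_100):
    """Whole audio samples per video frame, plus remainder; a zero
    remainder means the audio interleaves evenly at that frame rate."""
    return divmod(rate, fps)
```

For example, 25 fps gives exactly 1764 samples per frame, while 24 fps leaves a remainder.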
  • cgracey Posts: 14,133
    Peter, that looks great! I saw it this morning, but got sidetracked before I could comment. It's pretty neat that you've got all that working. I love the terminal display, too! This is like a PC from a while back.
  • Cluso99 Posts: 18,066
    +1 :smiley:
  • Peter Jakacki Posts: 10,193
    edited 2019-03-29 13:59
    cgracey wrote: »
    Peter, that looks great! I saw it this morning, but got sidetracked before I could comment. It's pretty neat that you've got all that working. I love the terminal display, too! This is like a PC from a while back.

    My P2 is a PC! I have a USB/PS/2 keyboard and the monitor so once TAQOZ loads up from SD I could get by without a PC.


    So with regards to encoding and audio I want to keep the encoding as simple as possible. At present I use VLC to convert a video to bmp frames but there is no control to allow me to set it to 8-bit color so I run those frames through xnconvert and then cat the files into one. It's a little awkward but it works. Now I also use WINFF, the GUI version of ffmpeg for all kinds of conversions and especially to upload videos since it does a really good job of compressing but I see that @ke4pjw has been using ffmpeg for his OLED video. I wonder if I can get that to do what I want although there is still the audio to incorporate. WINFF allows me to view and edit the ffmpeg command line too. Using 8-bit BMP format seems to keep the colors acceptable and there is very little overhead with this format.


    Now 320x240 seemed like a good compromise since it only needs about 80k for a second framebuffer but the trouble is that monitors are 16:9 so we really need a resolution that matches this aspect ratio. If I keep it 640 wide then that means I can drop down to 360 lines and simply triple each line etc to work with 1920x1080 and so a full frame buffer need only be 230kB. So I might make my VGA driver switchable for these two aspect ratios with no real difference elsewhere except for the loss of some lines. I can still display 320x240 videos or full screen 640x360 which may not need a second framebuffer if I synch and time the update to not overwrite any data still being displayed.
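The buffer arithmetic above works out as follows (plain Python, just restating the post's numbers):

```python
# Frame-buffer sizes for the resolutions discussed, at 8 bits per pixel.
sizes = {
    "320x240": 320 * 240,   # QVGA holding buffer, ~77 kB ("about 80k")
    "640x360": 640 * 360,   # 16:9 full frame, 230,400 bytes (~230 kB)
    "640x480": 640 * 480,   # 4:3 full frame, 307,200 bytes (over 300 kB)
}
```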

    btw - are there any good examples or ideas of using any of the DAC modes to output 16-bit wave audio?
  • Peter you are a wizzard, you never cease to amaze me with all you do with Forth! Great job Sir...
  • Peter Jakacki Posts: 10,193
    edited 2019-04-01 03:06
    FULL-SCREEN VIDEO
    I wanted to make the QVGA video full-screen at 640x480. Getting it to repeat each scanline is easy, but I'm still trying to find an efficient routine for doubling up the pixels on each line. For now, the method I have, which simply reads a source byte, doubles it up into a word, and writes the word to the screen, works well enough.



    Here's the pixel doubler code at present. I am looking at using setq2 to read in the source, double it up in LUT, and then write it to the screen. Suggestions welcome
    ' Double up each pixel  --- 640 ADO C@++ DUP 8<< OR I W! 2 +LOOP DROP
    DWIDTH
    			setq2	#9		' copy into LUT
    			rdlong	lmbuf,##DWLUT
    			jmp	#lmbuf+$200	' and run
    
    DWLUT			rep	#7,#320		' destination of 320 words
    .l0			rdbyte	fx,tos1		' read 1 byte of 320 byte source
    			add	tos1,#1		' update source ptr
    			mov	r0,fx		' double up byte as word
    			shl	fx,#8
    			or	fx,r0
    			wrword	fx,tos		' write double byte as word to screen
    			add	tos,#2		' update screen ptr
    			jmp	#DROP2
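For reference, a Python model of what the doubler computes (an illustration, not the PASM itself):

```python
def double_pixels(row):
    """Reference model of the PASM doubler above: each 8bpp source pixel
    is written twice, turning a 320-byte line into a 640-byte line."""
    return bytes(b for px in row for b in (px, px))
```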
    
  • Hi Peter

    How could it be?

    Even under your mind's and hands' direction, this motion picture never gets better than it was in 1953 when it was launched (two years older than me), but your work on TAQOZ never gets worse compared to its launch-date version... both driven by the same mind and hands...

    Another observation...

    Did you notice the two huge SHARKS, in a circle, at the right middle of the screen, stealing the scene, just under cover, between 2:31 and 2:42?

    You exposed them by switching the color palettes... Perhaps they were just the original location's tenants; perhaps just guys from an ancient actors' tip-collectors team.

    Congrats, it's your style making an excellent work.

    P.S. As a side just-for-curiosity note: are you a left-handed person?

    Henrique :smile:
  • Peter Jakacki Posts: 10,193
    edited 2019-04-01 07:39
    @Yanomani - Yeah, I SAW those SHARKS! Scary! :)
    btw - unfortunately I'm right handed, but my other half is left handed. Does that make me ambidextrous? :)



    My fastest x2 scaler so far involves setq2 rdlong into lut, doubling up all the pixels into a holding buffer in lut, and then setq2 wrlong back into the framebuffer.
    ' 8.6ms per frame @300MHz
    DWIDTH			setq2	#21-1		' copy into LUT
    			rdlong	lutfree,##DWLUT
    			jmp	#lutfree+$200	' and run
    
    DWLUT			setq2	#80-1
    			rdlong	srcbuf,tos1	' read all of source
    '
    			mov	r3,#srcbuf	' set lut src
    			mov	r1,#lmbuf	' set lut dst
    			mov	r2,#80
    .l0			rdlut	acc,r3		' read 4 bytes of source
    			add	r3,#1
    			call	#@DWSUB		' double up 2 pixels
    			call	#@DWSUB		' next 2 pixels
    			djnz	r2,#.l0
    '
    			setq2	#160-1		' write out result to screen dst
    			wrlong	lmbuf,tos
    			jmp	#DROP2
    
    DWSUB			getbyte	fx,acc,#0	' get byte
    			setbyte	fx,acc,#1	' double it up
    			shr	acc,#8		' next byte
    			setbyte	fx,acc,#2	' set it
    			setbyte	fx,acc,#3	' double it up
    			wrlut	fx,r1		' save in lut
    			shr	acc,#8
    		_ret_	add	r1,#1		' update dst ptr
    

    EDIT: Actually, a REP'd unrolled version of the above is faster at 6.6ms per frame.

    But the method I wrote before, which simply writes out each long as it is created, works almost as fast.
    ' 8.973ms per frame @300MHz
    DWIDTH			setq2	#17		' copy into LUT
    			rdlong	lutfree,##DWLUT
    			jmp	#lutfree+$200	' and run
    
    DWLUT			setq2	#80-1
    			rdlong	lmbuf,tos1	' read all of source
    			mov	r3,#lmbuf
    			mov	r2,#80
    .l0			rdlut	acc,r3		' read 4 bytes of source
    			add	r3,#1
    			call	#@DWSUB
    			call	#@DWSUB
    			djnz	r2,#.l0
    
    			jmp	#DROP2
    
    DWSUB			getbyte	fx,acc,#0
    			setbyte	fx,acc,#1
    			shr	acc,#8
    			setbyte	fx,acc,#2
    			setbyte	fx,acc,#3
    			wrlong	fx,tos
    			shr	acc,#8
    		_ret_	add	tos,#4		' update dst ptr
    
  • I noticed that the video did a lot of speeding up and slowing down. Is that intentional?

    Also! Bouncing it around the screen? AMAZING!!
  • cgracey Posts: 14,133
    Peter Jakacki wrote: »
    FULL-SCREEN VIDEO
    I wanted to make the QVGA video full-screen at 640x480. Getting it to repeat each scanline is easy, but I'm still trying to find an efficient routine for doubling up the pixels on each line. For now, the method I have, which simply reads a source byte, doubles it up into a word, and writes the word to the screen, works well enough.



    Here's the pixel doubler code at present. I am looking at using setq2 to read in the source, double it up in LUT, and then write it to the screen. Suggestions welcome
    ' Double up each pixel  --- 640 ADO C@++ DUP 8<< OR I W! 2 +LOOP DROP
    DWIDTH
    			setq2	#9		' copy into LUT
    			rdlong	lmbuf,##DWLUT
    			jmp	#lmbuf+$200	' and run
    
    DWLUT			rep	#7,#320		' destination of 320 words
    .l0			rdbyte	fx,tos1		' read 1 byte of 320 byte source
    			add	tos1,#1		' update source ptr
    			mov	r0,fx		' double up byte as word
    			shl	fx,#8
    			or	fx,r0
    			wrword	fx,tos		' write double byte as word to screen
    			add	tos,#2		' update screen ptr
    			jmp	#DROP2
    

    Looks great, Peter!

    I have questions:

    1) Is the entire screen bit-mapped? Or, is the text tile-mapped?
    2) When you scroll, are you scrolling the memory or redrawing a new screen?
    3) Is there any double-buffering going on?

    I was observing the scrolling and wondering what was going on to make that happen. I suspect you are bit-mapped, because I think scrolling a text screen would be even faster.

    Looks good!
  • Rayman Posts: 13,798
    I did a similar resolution for the 90's style 3D demo.
    Instead of repeating horizontal pixels, I just slowed down the pixel rate and halved the horizontal timing...
    This tricks it into thinking it's 640 pixels when it's really only 320...
  • Peter Jakacki Posts: 10,193
    edited 2019-04-01 14:01
    @AwesomeCronk - yep, that was intentional, since I'm demonstrating about a dozen media controls that can speed it up and down, skip forward/reverse, bounce, go full-screen, etc.

    @cgracey - at present the framebuffer is fixed at 640x320 8bpp bitmap, so when I watch a video it reads into a holding buffer first and then writes into the framebuffer at the current floating window position, so it is very easy to make it bounce around. The VGA driver is set up to read in the palette and other hub parameters every vsynch, so when I play a video it updates the palette per frame.

    The scrolling of the text is bitmapped, so this relies on the setq2 rdlong/wrlong method you recommended to move that memory around, which is also why I can set up a scroll region too, like I do when I type 8 TERM and the text scrolling is confined to the bottom 8 lines.

    But a 640x480x8 bitmap takes over 300kB, so there is no room for double buffering the full screen. The 320x240 video does have a buffer, but full-screen has to directly upscale and write to the screen fast enough. I got the full-screen upscale render down to 6.6ms once the frame has been read in from SD using multi-block sector reads.

    @Rayman - you've been doing great stuff, and I got a little bored so I wanted to have some fun too before I press on with some new hardware which will also include HyperRAM! :)
    While I have the option of changing the VGA resolution, I wanted to test out this software method instead. I will however be adding selectable 16:9/4:3 aspects to the VGA driver so I can switch it as required although I think I may only end up using 640x360x8 16:9 most of the time.

    EDIT: full-screen frame update now takes 3ms - I realized I could just upscale one line and write that out twice.
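A sketch of that optimization, assuming a plain 8bpp byte buffer (hypothetical helper, not Peter's code): each source line is pixel-doubled once and the doubled line is then emitted twice, halving the upscaling work.

```python
def upscale_2x(src, w, h):
    """Sketch of the 3 ms trick: double the pixels of one source line,
    then write that same upscaled line out twice (line doubling)."""
    out = bytearray()
    for y in range(h):
        line = bytes(b for px in src[y * w:(y + 1) * w] for b in (px, px))
        out += line * 2          # emit the upscaled line twice
    return bytes(out)
```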
  • Yanomani Posts: 1,524
    edited 2019-04-01 15:12
    Peter Jakacki wrote: »
    at present the framebuffer is fixed at 640x320 8bpp bitmap so when I watch a video it reads into a holding buffer first and then writes into the framebuffer at the current floating window position, so it is very easy to make it bounce around. The VGA driver is set up to read in the palette and other hub parameters every vsynch so when I play a video it updates the palette per frame.

    Very nifty! You are leveraging almost every behind-the-scenes opportunity (literally) to refresh the information that will be available to be displayed by the next (or already started) frame scan.

    I was asking myself where such excellent scene-by-scene color matching came from; surely by not restricting the palette choices to a single one, but the underlying mechanism used by your current work wasn't totally clear to me until you depicted it.

    Since you are using an ancient moving picture to show your progress, the least I can say is... Bravo! :lol:

    Henrique
  • You could support MPNG without tremendous effort, however you'd need to reimplement the inflate algorithm by hand. I wrote a PNG decoder for DOS that used zlib, since it was available for DOS, but libpng was not.
  • Very impressive work, Peter!
  • Cluso99 Posts: 18,066
    Peter, very impressive work as usual :smiley:

    When we discussed this last night i was suggesting double scanning the same pixel line but obviously didn’t explain myself properly.

    As for expanding the pixels, what Rayman did was the best. I knew there was an efficient way. Just change the freq setting in the streamer which has virtually no penalty.