Prop2 Texture Mapping
cgracey
Posts: 14,206
I got the texture mapping documented. It will undergo some refinement in the coming days, but all the information is there. There's also an example program:
Prop2_Docs.zip
I would probably never have known how to formulate a simple texture mapper, but Andre LaMothe spent a lot of time explaining it to me, and then Roy Eltham came around later and added pixel-based alpha blending and mirroring.
If any of you guys are very versed in 3D graphics, there should be all the info you need here to implement a 3D rendering engine.
Comments
The texture mapping sounds great.
Thanks.
Do you perhaps have the values to change from NTSC to PAL?
Thanks for the timely example. I've been wrestling with mode F and trying to get the data into the stack ram in between waitvids, and this example shows exactly how to do it. I had been going down a multi threaded path but this is much better. Very nice.
I'll make a VGA example, since that will be sharpest and work everywhere.
http://forums.parallax.com/attachment.php?attachmentid=99833&d=1362968699
Let's just say I'm a fan of the lower sweep rates.
Seems to me, something like WOLF3D can be done in hardware now, not a draw list like Baggers did. I just read that again and it sounds crappy. I don't mean that. The code Baggers posted is awesome for P1. Not supposed to be possible.
You probably could get that texture demo working on VGA, unless, of course, it's impossible.
:cool:
Here are the results from my test:
Displayed on a Hisense LED LCD TV Model H32K38E
(From left to right) Picture 1: NTSC 256 x 192 - luma/color bars, Picture 2: NTSC 256 x 192 - texture (the edges are actually square; the camera angle makes them look squished in on the bottom sides).
FWIW, I like NTSC component, not just for the nice, slow sweeps where there is the max time to do things, but also for the portability. I can setup an NTSC composite or S-video display and run that up to 640x400 or so. The composite / s-video will not render color at that detail, and pixels will be lost, etc... but overall the display works just fine, meaning I can capture it, work on the go, whatever. When I get to a better quality display, switch to the component and work at full detail.
This is darn cool. Other display mappings require different sweep timings which will eventually impact tighter code. At least with these two, it really is just a color space mapping and pin setup, little else. Just an FYI as to why I went this way right away. I can get full color resolution on the component, and a reasonable pixel density with no meaningful code changes. Nice. The colorburst and such are present in the signal, but ignored by the Y input, FYI.
Edit: Just thought of something. It's very likely that PAL-compatible sets with component inputs will display this just fine. Anyone have a device to test? If they won't, I'm thinking a simple change to 50 Hz will fix that, and it will render just fine. Meanwhile, many American sets will display 50 Hz signals, even when formatted NTSC. Some computers were capable of this; two I can think of were the C= Amiga and Tandy Color Computer 3, both able to output 50/60 Hz NTSC or PAL depending on where they were made and the software options selected.
A prop could output 50/60 Hz component signals and that signal might just display anywhere there are component inputs. If so, that's a near universal 640x200 or 400-420 line interlaced display.
Anyway, thought I'd give a heads up, as I ( very soon ) will be a DE2 owner and will be able to start joining in with the fun.
As for Wolfenstein, that could be a good starting point
I also live in PAL world, and also have component TVs so will be able to test this for you, once my boards arrive and are set up!
PS, it's great to be back!
Cheers,
Jim.
Cheers, and likewise Baggers.
I've always thought that 3d graphics was mostly all about polygon rendering.
Texture mapping onto the polygons is the next step up in quality from rendering with solid, possibly shaded colors.
What the texture mapper is doing is one step of an inner loop to a span rasterizer. When you do 3D graphics it all boils down to spans of pixels that comprise a triangle (usually) or polygon.
You will still need to do all the math and setup for a triangle/polygon, and then walk the edges. For each step along the edge(s) you would setup the texture mapper registers, and then loop across the screen pixels for the span.
You could also walk the edges using these instructions, but I'm not sure it's a win, because you would need to do one pass walking the edges and saving the values, then loop back over those values and run the spans.
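To make the edge-walking and span-loop structure Roy describes concrete, here is a minimal Python sketch (not the Prop2 code; the function names and the flat-bottomed-triangle restriction are mine for illustration). The inner loop, stepping (u, v) and fetching a texel per pixel, is the part the Prop2's texture-mapper registers would handle; the edge walking and setup stay in software.

```python
# Minimal sketch: rasterize a flat-bottomed triangle by walking its two
# edges, then interpolating texture coordinates (u, v) across each
# horizontal span. The inner loop is what the hardware texture mapper
# would replace.

def draw_flat_bottom_triangle(top, left, right, sample, plot):
    """top/left/right are (x, y, u, v) tuples; left/right share the base
    scanline below top. sample(u, v) fetches a texel; plot(x, y, color)
    writes a pixel."""
    x0, y0, u0, v0 = top
    x1, y1, u1, v1 = left
    x2, y2, u2, v2 = right
    height = y1 - y0
    for y in range(y0, y1 + 1):
        t = (y - y0) / height          # 0 at the apex, 1 at the base
        # Walk both edges: current span endpoints and their (u, v)
        xl = x0 + (x1 - x0) * t
        xr = x0 + (x2 - x0) * t
        ul, vl = u0 + (u1 - u0) * t, v0 + (v1 - v0) * t
        ur, vr = u0 + (u2 - u0) * t, v0 + (v2 - v0) * t
        span = max(1, round(xr) - round(xl))
        for x in range(round(xl), round(xr) + 1):
            s = (x - round(xl)) / span  # position within the span
            u = ul + (ur - ul) * s
            v = vl + (vr - vl) * s
            plot(x, y, sample(u, v))    # inner loop: one texel per pixel
```

A general triangle would be split at its middle vertex into a flat-bottomed and a flat-topped half, each rasterized this way.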
Roy
Roy, did the texture mapping docs make complete sense to you? (I ask Roy because he knows this stuff inside and out.)
Also, I keep forgetting that the registers aren't all mapped, so you can't use these to walk the edges, since you can't read back the values to "save" them at each edge step. So doing all the math for the edge walking is going to eat available cog memory. You'll probably want to use another cog for that, and just feed the rendering cog with the values.
Because you can't read them back, you'll have to compute terminal=initial+delta*steps on your own. That's three instructions per parameter.
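For anyone following along, here is a trivial Python sketch of the workaround Roy describes: since the stepped registers can't be read back, each parameter's end-of-edge value is computed arithmetically instead (the parameter names and example numbers are mine, purely for illustration).

```python
# Sketch of the workaround: compute the value each interpolated
# parameter would reach after `steps` increments of `delta`, rather
# than reading it back from a stepped register.

def edge_endpoint(initial, delta, steps):
    """terminal = initial + delta * steps"""
    return initial + delta * steps

# e.g. for each interpolated parameter along an edge:
params = {"x": (100, 2, 50), "u": (0, 1, 50), "v": (0, 3, 50)}
endpoints = {name: edge_endpoint(*p) for name, p in params.items()}
```

On the Prop2 that multiply-and-add per parameter is what costs the "three instructions per parameter" Roy mentions.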
Roy
BTW: Another way to do 3D is raytracing... It'd be interesting to see if Prop2 could do real-time raytracing at some low resolution...
Re: realtime ray tracing:
Seriously doubt it could be anything near realtime (even at very low res). Maybe if your scene was ultra simple (like no mesh data, just planes and spheres, and no textures), static, the resolution was extremely low (16x16?), and you only did like 1 maybe 2 ray bounces.
Modern GPUs barely achieve it with reasonable scenes in medium resolution, and they are using massively parallel processing (on the order of 512 to 2048 cores), resulting in a 2-4 TFLOP (TeraFLOP) throughput.
However, I bet we could get a Ray Tracer working that would produce a pleasing image in a few seconds.
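As a taste of what such a minimal tracer would spend its time on, here is a Python sketch of the core kernel, ray-sphere intersection (my own illustrative code, not anything from the Prop2 docs). A full tracer loops over pixels, builds a ray per pixel, and shades at the nearest hit; even this tiny kernel hints at why it's costly without hardware floating point.

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Distance along a ray (with unit-length direction) to the sphere,
    or None on a miss. Solves the quadratic |o + t*d - c|^2 = r^2."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c            # a == 1 for a unit direction
    if disc < 0:
        return None                 # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2  # nearer of the two roots
    return t if t > 0 else None
```

On a cog this would likely be done in fixed point, with the square root being the expensive step.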
This is some serious stuff here. Sorry, I have never done anything like this, so I am just enjoying reading what you guys are up to
I realized today that the next thing I must do is make a driver for the SDRAM chip that is on the DE0 and DE2 boards, as well as the Prop2 module we are building at Parallax. We need to get high-resolution bit-mapped displays going to graphically demonstrate a lot of the Prop2's features.
Is the output from GetPix always 32bit?
I.e., can it output 16-bit so it can then be fed into a bitmap area? Or would it have to be converted to 16-bit? If so, is there an instruction to do this quickly, or is it a case of shifts, ANDs, and ORs per word?
GETPIX always outputs $00_RR_GG_BB data, or 8:8:8 RGB. It's 24-bit, anyway, with 8 leading 0 bits.
I never thought to make it less than 8:8:8 because 5 bits per color produces obvious gradients. At 7 bits you can hardly see gradients and at 8 they disappear. So, I left it 8:8:8, only.
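For reference, packing the $00_RR_GG_BB output down to 16 bits really is just the shifts, ANDs, and ORs Baggers guessed at. A Python sketch (the 5:6:5 field layout is an assumption for illustration; it's the common 16-bit arrangement, not anything the Prop2 mandates):

```python
# Pack 8:8:8 RGB (as $00_RR_GG_BB) into 16-bit 5:6:5 RGB by dropping
# the low bits of each channel and repacking the fields.

def rgb888_to_rgb565(pix):
    r = (pix >> 16) & 0xFF
    g = (pix >> 8) & 0xFF
    b = pix & 0xFF
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)
```

That's a handful of operations per pixel in software, which is why an 8:8:8 framebuffer is the path of least resistance when memory allows.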
You might not need it for this, but have a look at the MovF bitfield mover, it's nice and quick and has some auto increment features.