
Running a Presentation from a Propeller - putting a BMP in VGA?


Comments

  • pedward Posts: 1,642
    edited 2012-05-22 23:51
    Phil Pilgrim (PhiPi) wrote: »
    Dr_A,

    I'm afraid it would have to be a lot smaller. I'm trying to get a 288 x 330-pixel image (144 wide pixels x 330 normal pixels) down to less than 13kB or so. The text can take up 9kB or more, and there still needs to be room for the SD reader and I'm not sure yet what else. Anyway, the smaller the better, since it keeps Ken's options open for slide complexity.

    BTW, I downloaded Gimp yesterday, in hopes that their UI had improved since the last time I tried it on Linux, thinking maybe Ken could use it to prepare the slides. The UI is still pretty hideous, though, with multiple floating windows, instead of one unified GUI. It's a powerful program, to be sure, but Corel PhotoPaint will do almost everything that Gimp will, including the palette optimization and FS dithering. However, when you tell it to palettize an image with n optimized colors, it won't start from an image that's already been palettized with, say, the 64 Prop VGA colors. Maybe I should take another look at Gimp, but I still prefer coding it in Perl, since I have complete control (issues :) ).

    -Phil

    http://www.makeuseof.com/tag/gimp-28-released-adds-singlewindow-view-updates/

    Give 2.8 a try. But yeah, the gaggle of windows is so annoying.
  • Dr_Acula Posts: 5,484
    edited 2012-05-23 00:16
    @PhiPi
    I'm afraid it would have to be a lot smaller. I'm trying to get a 288 x 330-pixel (144 wide pixels x 330 normal pixels) down to less than 13kB or so.

    You are a hard taskmaster!

    Ok, 13k it is. 151 pixels by 172 pixels for the picture, a 16-color optimised palette, FS dithering, 2 pixels per byte: that's 12986 bytes. Then take the picture and expand it so each original pixel is 2 wide and 2 high, so it is a 2x2 square on a 640x480 screen. Patch that back into your text, which is at a higher resolution.
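    If anyone wants to check the numbers, here is a rough Python sketch of that arithmetic and the 2x2 doubling (the tiny test image and names are just placeholders):

    W, H = 151, 172                      # picture size before doubling
    packed_bytes = (W * H + 1) // 2      # 2 pixels per byte at 16 colors
    print(packed_bytes)                  # -> 12986, the figure above

    def double_2x2(image):
        """Expand each pixel into a 2x2 square (nearest-neighbour upscale)."""
        out = []
        for row in image:
            wide = [p for p in row for _ in range(2)]   # double horizontally
            out.append(wide)
            out.append(list(wide))                      # and vertically
        return out

    print(double_2x2([[1, 2], [3, 4]]))  # a 2x2 test image becomes 4x4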

    Is the picture good enough or is it too grainy?
    [attachment: 640 x 480 image]
  • Dr_Acula Posts: 5,484
    edited 2012-05-23 04:41
    The challenge was to do this on a VGA or TV screen, so the post above is more relevant to that, and I am hoping that fitting the picture into 13k will be helpful in some way. But as an aside:

    Ken said
    Ultimately, the vibrance of a VGA display is much better, at least in my limited experience. Heck, we're six years into Propeller 1 and I'm still uncovering basic features.

    Yes, so am I :)

    So... just for fun, and not on a VGA but on a touchscreen, here is a little slideshow for the Propeller using those 5 slides. It is a bit blurry as I am not using a tripod, and the screenshot picture gives a better idea of the resolution.
    [attachments: two 800 x 600 photos]
  • ericball Posts: 774
    edited 2012-05-23 06:03
    Let me see if I can summarize what PIMP needs to accomplish:
    1. input is a 24 bit image, scaled to the desired resolution
    2. output is a 2bpp image with a palette of 4 RrGgBb colors for each group of 16 horizontal pixels (i.e. a total of 4bpp)

    First PIMP needs to reduce the 24-bit pixels down to 6-bit pixels, probably with some error diffusion. Might want to use a small kernel (e.g. 1/2 of the error to the pixel on the right, 1/4 to the pixel below and 1/4 below-right) to keep the color smear localized.

    Next take each group of 16 pixels and determine the best 4 colors. There are probably some algorithms for this from the EGA days. Something like the following (a rough sketch in code comes after the list):
    1. calculate the average color
    2. for each color calculate the distance to the average color multiplied by the number of pixels which use the color
    3. select the maximum value as the first color
    4. repeat for the next three colors
    5. use the nearest color for each pixel and push the error down to the next row
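    A rough Python sketch of steps 1-5, assuming the image has already been error-diffused down to 6-bit RrGgBb and arrives as rows of (r, g, b) tuples with each channel 0..3 (this is only an illustration, not PIMP itself):

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def pick_block_palette(block):
        """Steps 1-4: average color, then repeatedly take the color whose
        (distance to average) * (pixel count) is largest, up to four colors."""
        counts = {}
        for px in block:
            counts[px] = counts.get(px, 0) + 1
        avg = tuple(sum(px[i] for px in block) / len(block) for i in range(3))
        ranked = sorted(counts, key=lambda c: dist2(c, avg) * counts[c], reverse=True)
        return ranked[:4]

    def quantize_blocks(rows, block_w=16):
        """Step 5: map each pixel to the nearest of its block's four colors and
        push the quantization error straight down to the pixel below."""
        h, w = len(rows), len(rows[0])
        err = [[(0.0, 0.0, 0.0)] * w for _ in range(h + 1)]
        out = []
        for y in range(h):
            out_row = []
            for x0 in range(0, w, block_w):
                xs = range(x0, min(x0 + block_w, w))
                block = [tuple(rows[y][x][i] + err[y][x][i] for i in range(3)) for x in xs]
                palette = pick_block_palette([tuple(round(c) for c in px) for px in block])
                for px, x in zip(block, xs):
                    best = min(palette, key=lambda c: dist2(c, px))
                    err[y + 1][x] = tuple(px[i] - best[i] for i in range(3))
                    out_row.append(best)
            out.append(out_row)
        return out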
  • Rayman Posts: 14,827
    edited 2012-05-23 07:09
    July! Wow, that gives us all kinds of time to mull this over...

    If we can use Phil's compression with the data in flash, we should be able to use almost all of hub RAM for the image...
    Here's what our 6-bit VGA driver could show:

    [attachment: BoeBot_6bit.png, 184 x 208]
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2012-05-23 07:37
    ericball wrote:
    1. input is a 24 bit image, scaled to the desired resolution
    PIMP needs a little help here so far. Inferring photo vs. text is proving a bit difficult, so I'm using two images: one with the photo masked out via a 50% gray rectangle, and a separate file for the photo. 50% gray is not a Prop VGA color and not allowed in the text area, so it's easy to spot.
    First PIMP needs to reduce the 24 bit pixels down to 6 bit pixel, probably with some error diffusion.
    Error diffusion takes place only during the last color-reduction stage, just before the four colors are selected for each block. The errors are re-computed from the original 24-bit photo, based upon the actual colors used. This helps the result remain true to the original.
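    For the curious, spotting the masked region is about this simple; a sketch in Python, where the exact gray value (128, 128, 128) and the rows-of-tuples layout are just assumptions for illustration:

    GRAY = (128, 128, 128)   # "50% gray" -- not one of the 64 Prop VGA colors

    def find_mask_box(rows):
        """Return (left, top, right, bottom) of the gray rectangle, or None."""
        xs, ys = [], []
        for y, row in enumerate(rows):
            for x, px in enumerate(row):
                if px == GRAY:
                    xs.append(x)
                    ys.append(y)
        if not xs:
            return None
        return min(xs), min(ys), max(xs), max(ys)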

    -Phil
  • Circuitsoft Posts: 1,166
    edited 2012-05-23 10:21
    Gimp UI may not be great as-is, but if you're on Windows/Mac, try GimpShop. It has the same core as Gimp, but attempts to duplicate the PhotoShop UI. It may also be worth looking into either PerlMagick (the ImageMagick interface for Perl) or Script-Fu/Python-Fu (scripting Gimp). That way you can use Gimp as a platform to build your own apps/filters on. You could write an export filter for Gimp: open the image, separate the image and text into layers, then save as PropImg, with a wizard for doing the color reduction before the final save.
  • Dave Hein Posts: 6,347
    edited 2012-05-23 10:42
    I would suggest making dithering/error diffusion optional when converting images. Some images, such as computer-generated drawings, will look better by just selecting the nearest color. Of course, natural images will require error diffusion to get anything that looks close to the original. We should show off the strengths of the Prop and not the weaknesses, and rendering natural-looking images is definitely a weakness of the Prop.
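    For the computer-generated case, the no-dither path is just per-channel rounding into the Prop's 64-color palette. A minimal sketch, assuming that palette really is plain 2 bits per channel and pixels come in as (r, g, b) tuples, 0..255:

    def nearest_prop_color(px):
        """Round each channel independently to the nearest of 4 levels (0..3)."""
        return tuple(min(3, max(0, round(c / 85))) for c in px)

    def quantize_no_dither(rows):
        return [[nearest_prop_color(px) for px in row] for row in rows]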
  • Rayman Posts: 14,827
    edited 2012-05-23 10:48
    Here's a Windows tool that creates 6-bit photos with optional dithering for VGA, TV, or PTP2 palettes:

    http://www.rayslogic.com/Propeller/Products/PTP2/PTP2_ResourceEditor.zip

    Note: It provides 3 dithering options and also resizing, if desired.
  • Circuitsoft Posts: 1,166
    edited 2012-05-23 11:53
    I also forgot to mention that Gimp 2.8 adds a single-window mode.

    One advantage of using something like Gimp Script-Fu would be that you can have it load up the Propeller palette, then change the image to Indexed and give the user a choice of Floyd-Steinberg, error distribution, or no dithering in the dialog. You could also script it to split the image up into 16-pixel stripes or squares and do palette reduction on each of them, reusing the settings the user chose the first time.
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2012-05-23 12:49
    With 16-pixel blocks of x-doubled pixels and four colors per group, the pixels in the BoeBot image (288 x 330) require 11800 bytes. Using "dumb swapping," the BoeBot image requires 4433 color swaps to the four-color palette. This pushes the overall image size to 16.3kB, exclusive of a color swap index. (Each color swap byte can contain a 6-bit color, plus a two-bit index into the palette. But that does not provide information about which block the swap belongs to.)

    However, the number of color swaps can be reduced by observing that not all pixel blocks require four colors, so one to three vestigial colors can sometimes be carried forward to subsequent blocks that need them. Right now, color replacement is a random selection: any color that is not used in a block is a candidate for replacement by one that is used. In the BoeBot image, there are 1865 vestigial colors, which are potential placeholders for carrying colors over to subsequent palettes. So, in order to reduce the number of color swaps, I need to write a smarter swap assignment algorithm.
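    For reference, the swap byte itself is trivial to pack and unpack; a sketch, where putting the palette index in the top two bits is just one possible layout:

    def pack_swap(slot, color6):
        """slot: 0..3 palette entry to replace; color6: 6-bit RrGgBb value."""
        assert 0 <= slot <= 3 and 0 <= color6 <= 63
        return (slot << 6) | color6

    def unpack_swap(b):
        return b >> 6, b & 0x3F

    b = pack_swap(2, 0b101101)          # replace palette entry 2
    print(bin(b), unpack_swap(b))       # -> 0b10101101 (2, 45)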

    -Phil
  • pedward Posts: 1,642
    edited 2012-05-23 13:20
    What about 1 palette per 16x16 pixel area? The image could be decoded in realtime from memory.
  • Lawson Posts: 870
    edited 2012-05-23 13:48
    Can any compression be gained from the structure of the image data itself? My first thought is that the difference between successive lines of pixels would favor only a few common values. At that point, RLE or some other lossless coding should have a better compression ratio? Maybe try something a bit fancier and encode differences between a starting color and a fractal sub-division? (i.e. have a global average color plus 4 difference colors for the average colors of the 4 sub-areas. Then repeat this on each sub-area down to the pixel level. A quick compression gain would be to truncate any branches which don't change color?)
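    A minimal sketch of the first idea (row differences plus RLE), assuming one byte per pixel index and modulo-256 differences so it round-trips:

    def row_deltas(rows):
        """Replace each row after the first by its difference from the row above."""
        out = [list(rows[0])]
        for prev, cur in zip(rows, rows[1:]):
            out.append([(c - p) & 0xFF for p, c in zip(prev, cur)])
        return out

    def rle(data):
        """Simple (value, run length) coding; most deltas should be zero."""
        out = []
        for v in data:
            if out and out[-1][0] == v and out[-1][1] < 255:
                out[-1][1] += 1
            else:
                out.append([v, 1])
        return out

    rows = [[1, 1, 2, 2, 3], [1, 1, 2, 3, 3]]           # two nearly identical rows
    flat = [v for row in row_deltas(rows) for v in row]
    print(rle(flat))                                    # mostly zero runs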

    Lawson
  • Rayman Posts: 14,827
    edited 2012-05-24 08:15
    Just for fun, I took a quick look at what the stock VGA driver could do on this slide with 2 and 4 color bitmaps...
    It's not horrible if you're a few feet away from the screen:

    [attachment: KenSlide2a.JPG]


    BTW: I did the text very easily with Microsoft Paint (the one that comes with Windows).
    Just create a file and save it as a monochrome bitmap. Then, when you use the text tool, it draws the text without anti-aliasing...
  • potatohead Posts: 10,261
    edited 2012-05-24 08:50
    I've an idea for color. Haven't had quite the time I wanted to implement it. (stuff broke around the house, so...)

    I'll put it here, because it makes sense. I've got some TV driver stuff in the works that will eventually make use of it.

    Do relative color. So take the image, build a histogram containing X colors. Those colors go into a list. If every color is needed, the list contains all the colors. (Some drivers can have a lot of colors...)

    If not all the colors are needed, prioritize them, and make a shorter list. Could be just a few colors.

    Index this list, each color getting a value from 0 to x colors.

    Break the image into color and intensity. Maybe not, dithering could work too.

    On the color map, blur the image to reduce color detail, so that low resolutions can be used, if desired. On the intensity map, break the intensity down to a few levels.

    Video code draws the image, sourcing color from the list. Each color pixel is a nibble, or two bits, where the color associated with the pixel value is pulled from the list. A 00 value can be compressed away, and represents the same color. Signed values, or just absolute values, index into the list, fetching the colors needed. The limitation is not every color in the list can be selected. A "distance" must be travelled to get at the desired colors. The benefit is color can be referenced by only a few bits, but still allowing a high color diversity.

    Sort the list, so that the pixel values are optimal. The key here is the pixel values add. First pixel value indexes directly into the list. The next one is relative to that one, and so on. Won't work well for moving images, but will work very well for static ones, given they are processed and compiled some.
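    Here's a little sketch of what I mean by relative indexing, assuming nibble pixels (signed offsets of -8..+7 into the sorted color list); the clamp is the "distance must be travelled" limitation:

    def encode_relative(indices):
        """indices: absolute positions in the color list, one per pixel."""
        out, cur = [], 0
        for idx in indices:
            step = max(-8, min(7, idx - cur))   # clamp jumps that are too far
            cur += step
            out.append(step & 0xF)              # store as a two's-complement nibble
        return out

    def decode_relative(nibbles):
        out, cur = [], 0
        for n in nibbles:
            step = n - 16 if n >= 8 else n      # sign-extend the nibble
            cur += step
            out.append(cur)
        return out

    # A slow color ramp encodes to small steps and lots of zeros.
    print(decode_relative(encode_relative([0, 1, 1, 3, 3, 3, 7, 6])))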

    This is similar to the Amiga "Hold And Modify" HAM mode.

    For some images, it may make sense to use a longer list and simply repeat colors in it, so that the offsets can get to the right colors at the right time. The more colors in the image, the more challenging this is, but the savings are potentially greater as well.

    Overlay intensity bits at a higher resolution than the color. People key off of intensity first, then color. Color detail, particularly blue, can be very blurry and the image perception will still be very good.

    If pixels are nibbles, then a signed offset of 7/8 colors is possible. If it's two bits, then much less, obviously. Maybe don't make the two-bit values signed, and use a longer color list.

    One or two COGS will be needed to perform color look ups, depending on resolution. They fill a scan line buffer, along with the intensity COG doing the same.

    On VGA, intensity overlay is pretty easy. On TV, it's only realistic on some drivers. The one I'm working with can do this, but only at specific resolutions.

    If images are split in a graphics program, color reduction in terms of depth and spatial resolution can be done easily. Reduce to some color set, whatever it is. Then break into Red, Green and Blue. Significantly blur the blue. Marginally blur the Red and Green. Quantize down to some pixel size that makes sense. 160x96 or something. No dither on the color is likely optimal.

    Depth reduce and quantize the intensity for a two bit per pixel bitmap, allowing dithers.

    Output palettes, perform sort, produce data for Prop video system.
  • Dr_Acula Posts: 5,484
    edited 2012-05-24 15:58
    @Rayman
    Just for fun, I took a quick look at what the stock VGA driver could do on this slide with 2 and 4 color bitmaps...
    It's not horrible if you're a few feet away from the screen:

    I reckon that looks pretty good.

    Thinking about image compression, there are a myriad of algorithms out there. It is a matter of what you can decode on the fly, line by line; .jpg is probably the hardest, and in between are many others. As a quick test, look at how much smaller a file is as a .jpg versus a .bmp of the same image. I also took a look at the algorithm used by WinZip, which looks for repeating patterns. A .zip file of a .bmp is about 1/3 of the size.
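    The quick test is only a few lines if you have Python handy (deflate is the same family of algorithm WinZip uses; the filename is just a placeholder):

    import zlib

    with open("slide1.bmp", "rb") as f:    # any .bmp will do
        raw = f.read()

    packed = zlib.compress(raw, 9)
    print(len(raw), len(packed), round(len(packed) / len(raw), 2))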

    I'd be interested to hear what Ken thinks about the picture in post #195
  • Ken Gracey Posts: 7,400
    edited 2012-05-24 16:57
    Dr_Acula wrote: »
    I'd be interested to hear what Ken thinks about the picture in post #195

    Hmm, well, it looks fair at a small size but would look quite poor once displayed from a projector. I appreciate it from a technical perspective, but I know that people unfamiliar with the constraints might find it quite poor.

    I'm thinking I might need to add Rayman's external hardware to get something reasonably good if Phil doesn't burn a few more days. Isn't it frustrating when you can't have what you want? Chip, where's that darned P2?

    Ken Gracey
  • Rayman Posts: 14,827
    edited 2012-05-24 17:20
    Using the DVI Graphics Shield on the PropBOE would be extremely easy and would produce perfect-quality images.
    You can show a presentation from PowerPoint in 3 simple steps:

    1. In PowerPoint use "File"->"Save As" and then select "Device Independent Bitmap" from the drop down menu for file type.
    (this will save the slides as "bmp" files in the directory you picked with names "slide1.bmp", "slide2.bmp"...)
    2. Copy the slide BMP files from PC to SD card's root folder and then insert SD card into PropBOE.
    3. Run Slideshow app on PropBOE using PropTool.

    (Optional step: resize your bitmap files with the PC to 640x480 resolution for best results).
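    If you'd rather script the optional resize step than click through an image program, something like this would do it (assumes Python with the Pillow library installed, and that the exported slides sit in the current folder):

    import glob
    from PIL import Image

    for name in sorted(glob.glob("slide*.bmp")):
        Image.open(name).resize((640, 480)).save(name)   # overwrite in place
        print("resized", name)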
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2012-05-24 17:35
    Ken Gracey wrote:
    ...if Phil doesn't burn a few more days.
    'Got the 16-color image reduced small enough, including the color swaps and indexing. The pixel part of the image overlay is working, and this evening I'll tackle the PASM color swap code.

    -Phil
  • Rayman Posts: 14,827
    edited 2012-05-24 18:09
    One other thought... Perhaps using two SQI flash chips one could have the bandwidth to show a 640x480 image with 16 color palette.

    Just relocated the scrolling 16-color map demo for single Flashpoint on TV here:
    http://forums.parallax.com/showthread.php?130928

    That was 196 pixels wide with a 20 MHz dot clock...

    VGA is tougher with 25 MHz dot clock. But with two SQI in parallel to make an 8-bit bus, one should be able to double the throughput...

    Might not be enough for 16 colors over the whole screen though.
    As a fallback, 4 colors over the whole screen would work for sure.
    Or, drop to 320x240 resolution for the image, or restrict the image to a small area and use Phil's technique for the text...
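    Rough numbers behind that, for anyone checking (the 20 MHz flash clock is an assumption; real transfers also lose time to commands and addresses):

    def mbytes_per_s(bits_per_clock, clock_hz):
        return bits_per_clock * clock_hz / 8 / 1e6

    need_16c = mbytes_per_s(4, 25e6)   # 640x480 at 4 bpp, 25 MHz dot clock
    need_4c  = mbytes_per_s(2, 25e6)   # same screen at 2 bpp
    one_sqi  = mbytes_per_s(4, 20e6)   # one quad-SPI flash at an assumed 20 MHz
    two_sqi  = mbytes_per_s(8, 20e6)   # two chips in parallel (8-bit bus)

    print(need_16c, need_4c, one_sqi, two_sqi)   # -> 12.5 6.25 10.0 20.0 MB/s
    # Raw rates only; overhead and cog timing eat into what the flash can deliver.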
  • Dr_Acula Posts: 5,484
    edited 2012-05-24 19:45
    One other thought... Perhaps using two SQI flash chips one could have the bandwidth to show a 640x480 image with 16 color palette.

    Interesting thought.

    A few months back I tried doing that with an SRAM and an 8-bit data bus. Set off one cog displaying a line and another cog reading the next line. For TV it was not quite fast enough to pull data off the SRAM, but it could get just over half a screen off the SRAM. So it was possible to double the resolution with half the picture in hub and half in SRAM.

    VGA is faster than TV.

    But... if you have a 16-bit wide bus you could double the speed and theoretically could get all the data off the SRAM. Use counters for the address, as we are doing for the fast touchscreen refreshes. And, if you went for 2 pixels per byte, as shown in the last few days with a 16-color palette, that doubles the resolution. So that is a 4x improvement on what I was doing: 2x for going from an 8- to a 16-bit data bus and 2x for going from 1 pixel per byte to 2 pixels per byte.

    Intriguing...
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2012-05-24 20:06
    Here's a monochrome image from the Prop to whet your appetites, while I'm getting the color-fill to work:

    [attached image, 468 x 359]

    The color-fill looks like it's going to require a separate cog to process. I've got 32 VGA pixels' worth of time (about 25 instructions) to do possibly one rdlong and up to four color swaps to the palette before the next waitvid. That's not enough time.
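    The arithmetic behind that budget, assuming an 80 MHz system clock and 4 clocks per PASM instruction:

    dot_clock = 25e6      # VGA pixel rate
    cog_clock = 80e6      # system clock
    pixels    = 32        # one waitvid's worth of x-doubled pixels

    time_per_waitvid = pixels / dot_clock              # 1.28 microseconds
    instructions = time_per_waitvid * cog_clock / 4    # 4 clocks per instruction
    print(instructions)                                # -> 25.6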

    -Phil
  • pedward Posts: 1,642
    edited 2012-05-24 20:50
    Is that F-S or error diffusion?
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2012-05-24 21:04
    Neither. It's error propagation and correction on the vertical axis only. Methods involving a horizontal correction component work poorly with x-elongated pixels.
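    In sketch form, the propagation half looks like this for a monochrome target (Python for illustration; the correction against the original image isn't shown):

    def dither_vertical(rows, threshold=128):
        """rows: grayscale values 0..255.  Returns rows of 0/1 pixels; the full
        quantization error of each pixel is carried straight down, never sideways."""
        w = len(rows[0])
        carry = [0.0] * w
        out = []
        for row in rows:
            out_row = []
            for x, v in enumerate(row):
                want = v + carry[x]
                bit = 1 if want >= threshold else 0
                out_row.append(bit)
                carry[x] = want - (255 if bit else 0)   # error goes to the next row only
            out.append(out_row)
        return out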

    -Phil
  • Dr_Acula Posts: 5,484
    edited 2012-05-24 22:22
    Re post #203, that looks rather good. How many k does the black and white picture take?
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2012-05-24 22:36
    Just under 12kB.

    -Phil
  • Rayman Posts: 14,827
    edited 2012-05-25 06:03
    Since Ken is sounding receptive to the DVI Graphics Shield idea...

    I took the 800x600 example and resized it to 640x480 using Irfanview (free image software).
    Stuck the file on a uSD card and inserted in Prop system with DVI Graphics Shield attached.
    Modified the software to show the slide. This took about 3 minutes.
    Here's the result (it actually looks much better on the screen):

    [attachment: Ken_Slide1_VGA.jpg]


    One question one might ask is this: Is this presentation about the Propeller Chip itself and how neat it is, or about the PropBOE and all the wonderful things you can do with it? If it's about the Propeller Chip, then native Prop graphics might make sense, even if the quality isn't all that great. But, if it's about all the cool things you can do with PropBOE and how you can expand it with all kinds of things, then I would argue that using the DVI graphics shield is worth considering.
  • Dave Hein Posts: 6,347
    edited 2012-05-25 06:47
    Rayman, that's a great idea to use your DVI shield. Maybe the slide show could be run either directly from the Prop or using your shield. This would demonstrate the capabilities of the Prop without hardware assist, and what it can do when additional hardware is added. This might get around having to display dithered images, which might give a bad impression about the Prop.
  • Ken Gracey Posts: 7,400
    edited 2012-05-25 10:16
    Rayman wrote: »
    Since Ken is sounding receptive to the DVI Graphics Shield idea...

    I took the 800x600 example and resized it to 640x480 using Irfanview (free image software).
    Stuck the file on a uSD card and inserted in Prop system with DVI Graphics Shield attached.
    Modified the software to show the slide. This took about 3 minutes.
    Here's the result (it actually looks much better on the screen):

    [attachment: Ken_Slide1_VGA.jpg]


    One question one might ask is this: Is this presentation about the Propeller Chip itself and how neat it is, or about the PropBOE and all the wonderful things you can do with it? If it's about the Propeller Chip, then native Prop graphics might make sense, even if the quality isn't all that great. But, if it's about all the cool things you can do with PropBOE and how you can expand it with all kinds of things, then I would argue that using the DVI graphics shield is worth considering.

    Hey Rayman,

    It's a most appropriate question and one that warrants an explanatory answer. I also welcome any input on my approach.

    I'll tell you the direction I'm taking with my presentations for education, distributors and universities. I'm not focusing on the Propeller architecture, design, technical features and specifications directly. In the beginning, we'd present too many block diagrams and technical details while people yawned waiting for examples. This was partially a result of not having the huge code base we have today, which lets me promote the idea:

    It's all just a matter of high-level integration! Well, at least getting started should be. And that's important since I just want us to break the barrier of it being different, or being "hard" as I heard from a very capable engineer who has never used it, or that it uses a proprietary language that somebody can't learn.

    I'm taking an approach which shows the following:
    1. How easy it is to program the Propeller, using a very simple example first (with LEDs) but then going straight into examples that integrate Objects.
    2. Learn.parallax.com and what can be done very easily with the PropBOE and little accessory hardware (robot chassis, sensors, XBee, etc.). And showing how anyone (like the high school freshmen I'm teaching) can use our material to succeed with the Propeller.
    3. Spin, ASM and C, with a focus on the open[ing] tool chains.
    4. Impressive load-and-run demonstrations, like WAV file playback, video display with sensors, etc.
    The technical details are conveyed through the examples. Through each of them I show how the program runs in the architecture (number of cogs, objects used, etc.) along with any special Prop-specific features (counters, shared memory) and where the code comes from. There's a change I'm introducing in our Educational agenda related to this same method. The goal is that customers of different levels of skill (and interest) can jump into the Propeller. For example, a high school student should be able to load WAV files and run them from a robot without understanding how it all works. But an engineering student at a university could understand the design of the application from our same documentation. This lets us present the information first in a load-and-run (learn by doing) format, followed by an explanation of how it works for those who are interested.

    The commercial customers will receive a mix of the above with customer examples in renewable energy and robotics, from Parallax Semiconductor's technical and business perspective. They obviously program specifically for their product design.

    So, running a presentation from the Propeller itself is the ultimate show-and-tell, without actually selling a product. Although I'd love to show the raw Prop, I don't think either of these audiences would truly discount our demonstration if we tell them how it works with accessory hardware. Therefore I should certainly have a DVI adapter (do we have to call it a shield?). How do I order one? I should take a look for myself as the various solutions around these forums start to emerge.

    Perhaps the best example of what Parallax has to offer is conveyed by people posting on this thread: solutions, collaboration, and contributions from people who use our products.

    Ken Gracey
  • Rayman Posts: 14,827
    edited 2012-05-25 10:47
    Ken, since you're sending me a PropBOE, I can surely comp a DVI Graphics Shield.
    Also, I finally have the production analog VGA plugin boards, so I'll include one of those too, for cases where DVI is not available...

    We don't have to call it a shield, but I think that might be what's printed on the board...
    Of course, for $75 I can have ExpressPCB make you one with anything you want printed on it :)
    Actually, a couple stickers would be just as good, right?
    I'll make a sticker that says "PropBOE DVI Adapter"...