So, I might have an injured module... not dead, but maybe not healthy.
Jmg
FT2232H rev 1.1
You should be able to run a loop-back test to confirm it is still ok.
Did you try 2 stop bits ?
When streaming large continual blocks, that helps cover baud-creep effects.
Also check 2MBd, then 4MBd (I think that's valid), which is an exact baud fit.
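For reference, the "exact baud fit" claim can be sanity-checked against the usual model of FTDI's H-series baud generator: a 12 MHz base rate divided by an integer-plus-eighths divisor. Treat that model (and the special-cased top rates like 12 MBd, ignored here) as an assumption from FTDI's app notes. A quick sketch:

```python
# Sketch of FTDI H-series baud fitting, assuming the commonly documented
# model: 12 MHz base rate / divisor, where the divisor is an integer plus
# an eighth-step (0.125) fraction. Special top rates (e.g. 12 MBd) ignored.
BASE_HZ = 12_000_000

def is_exact_fit(baud):
    """True if the ideal divisor lands exactly on a 0.125 step."""
    d = BASE_HZ / baud
    return d * 8 == int(d * 8)

def actual_baud(baud):
    """Baud actually produced after rounding the divisor to 0.125 steps."""
    d = round(BASE_HZ / baud * 8) / 8
    return BASE_HZ / d
```

2 MBd, 3 MBd and 4 MBd all land on exact divisors (6, 4 and 3), while a rate like 921600 picks up roughly 0.16% of error from divisor rounding.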
I now have it working, but when I hook it up to my PC Program the sent characters from the PC have noise in them...
In the PC side that opens the COM port, the settings are usually along the lines of (eg) 115200,N,8,2
which configures for 8b UART, No Parity, and 2 Stop bits.
Rx, on the prop side, is still set at 1 stop bit.
My experiment is essentially a giant loop-through... The PC sends the P2 an image... the P2 displays the image out on NTSC... then I ask the P2 to send the image back to the PC. It sends the image back to the PC just fine... whatever is showing on the NTSC monitor ends up on my 32" Vizio.
But in the process of sending the image from the PC to the P2, through the mini-ft, it gets corrupted. Remember this is just the mini-ft board. Everything is just fine when I stick to the 1-2-3's USB board. To my mind, it seems the p2_tx functions fine but it looks like p2_rx doesn't like the signal coming from the mini-ft module... or I baked the module...
by the way, it is possible to restart the P123 board, just by putting a jumper wire underneath it and tapping the board a little;)
You should see my desk... looks like I left and some rats came in and were looking for food.
What chip does 1-2-3 use ? - That may not pack all bytes to 3Mbd, so your FT2232H may be a tougher test.
If you have a good frequency counter, stream 0x55 continually thru both boards, and measure Tx frequency.
FT2232H should be within some 10's of ppm from 1.50000MHz at 1 Stop bit.
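The 1.50000 MHz figure falls out of the bit pattern: 0x55 sent LSB-first, framed by a start bit (0) and a stop bit (1), makes the line a perfect 0/1 alternation, so a counter sees exactly half the baud rate. A small sketch of that counting (plain Python, just to show the arithmetic):

```python
def uart_frame(byte, stop_bits=1):
    """One UART frame: start bit (0), 8 data bits LSB-first, stop bit(s) (1)."""
    return [0] + [(byte >> i) & 1 for i in range(8)] + [1] * stop_bits

def counter_hz(baud, byte, stop_bits=1):
    """Rising-edge frequency a counter sees on a gapless stream of `byte`."""
    bits = uart_frame(byte, stop_bits)
    stream = bits * 2                        # two frames back to back
    rises = sum(stream[i] == 0 and stream[i + 1] == 1
                for i in range(len(bits)))   # rising edges per frame
    return rises * baud / len(bits)          # edges/frame * frames/second
```

At 3 MBd with one stop bit this gives 1.5 MHz; a second stop bit breaks the clean alternation and the reading drops to about 1.36 MHz.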
Ouch, I'd be very careful with FPGAs, as you do not want 3.3V or 5V to make it onto the core Vcc pins !
Make a protection back-plate for it.
Rich
Your FT2232H module is probably Ok.
Most of my experiments with this module have been Tx only.
In RS232 mode 12Mbaud in both channels works great!
Doing some tests last night, I am seeing the same Rx issue you're encountering.
One hack I did with the V7z image was to ignore any Rx bytes of value FF.
Doing this allowed all other data to be received fine.
The V8 image fixed this so noise looks like the culprit.
I will keep digging....
The A7 was documented as FT231X. The one on the A9 board isn't marked and I don't have a resource on it.
I'd say it will be the same.
Checking a FT231X here, I see it can manage 3Mbd simplex (PC TX only) to give 1.500MHz on counter, but in loop-back the wheels fall off a little with Rx count <> Tx Count, and MHz drops a little.
An FT232H can manage 1.500028MHz loopback with no missing Chars at 3MBd and 2.00037 MHz at 4MBd loopback
(ie that is continual, no-gaps sending, 1 stop bit)
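Those counter readings convert to clock error in the usual ppm form; the 3 MBd loopback figure above works out to roughly 19 ppm. A one-liner for the conversion:

```python
def ppm_error(measured_hz, ideal_hz):
    """Fractional frequency error in parts per million."""
    return (measured_hz - ideal_hz) / ideal_hz * 1e6

# Readings quoted above: 1.500028 MHz vs 1.5 MHz ideal (3 MBd loopback),
# and 2.00037 MHz vs 2.0 MHz ideal (4 MBd loopback).
```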
Remember... no problem if I use the 1-2-3's USB...
Ok Rich,
I've managed to replicate the issue.
It had me scratching my head why code worked perfectly on P123-A9 uart Rx but not FT2232H module Rx.
I went back to V7z because V8 has a better (more tolerant) uart.
If you have the following smartpin config, the fault appears for the FT2232H module. (P123-A9 local FTDI Ok)
pinsetm #%1_11111_1,#rx_pin
Removing the OUT control bit (bit 6) fixed the problem here.
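Taking the post's identification of bit 6 as the OUT control bit (that bit numbering is an assumption here, not confirmed from the Parallax docs), the fix amounts to clearing one bit of the mode word. In Python, just to show the bit arithmetic:

```python
OUT_CTRL_BIT = 6                    # per the post above; treat as an assumption

mode_bad  = 0b1_11111_1             # pinsetm #%1_11111_1,#rx_pin (fails on FT2232H Rx)
mode_good = mode_bad & ~(1 << OUT_CTRL_BIT)   # clear only the OUT control bit

# mode_good is %0_11111_1: identical to the original except for that one bit
```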
This is cool and all. But there is only one screen left in the house with VGA input. I can imagine that by the time the P2 arrives it will have failed and there will be none.
Sad but true.
This screen format/protocol thing always seems to bite me.
In another P2 demo "spinning fozzie" I used NTSC which seemed to be an issue for propheads too.
When my RPi arrived I had the opposite issue that the only screen that had HDMI was the households main viewing screen.
Adding to my everyday drama is that apparently I should be coding in "the other" language and using "another" OS. Arrgh!
Sadly my "flux capacitor" seems to be on permanent back order and my DeLorean replacement parts don't seem to fit, so I can't go back in time to try and attempt to adjust the time space continuum to ease our pain.
I'm not sure why ?
NTSC/PAL have a clear niche in low-cost camera systems, and I see many car backing LCD monitors available on eBay now have all 3 video formats supported - all done in one Realtek(?) chipset.
* NTSC / Composite
* VGA
* HDMI
The better ones of medium size have proper connectors on these.
Yep, the death of VGA is greatly exaggerated, I think. I have some little sub $10 driver boards with Realtek chips that happily drive VGA screens (at 800x480 native). I've said it before, but I think the P1/P2 are more suitable for smaller screens anyway. And if VGA is wrong now, then NTSC/PAL was likely just as wrong back in '06, yet both formats are still around. So, ozproddev, you just go on working your video voodoo. If you build it, they will come.
PS: Anyone got a pic of that sidewise (rotated) text? I'd like to see how well it rendered. And I'm wondering if it might look better at 640 horizontal (possibly stretched) resolution due to being wider (not as narrow).
Thanks ozpropdev, that fixed the 800x600 version too.
I should say that almost every PC monitor I've seen lately has a VGA port. Maybe one day they won't... But, I'm sure there will be VGA to HDMI converter boxes for cheap when that happens...
I just tested 800x600 and 1024x768 modes on a very widescreen monitor.
The 1024x768 didn't look too good. Was like a 1 pixel horizontal shift back and forth every 4 lines or so. Was stable, but lines were jagged instead of straight.
800x600 looks perfect though. I actually use this monitor rotated and the rotated text actually looks pretty good (see attached).
I just tested 800x600 and 1024x768 modes on a very widescreen monitor.
The 1024x768 didn't look too good. Was like a 1 pixel horizontal shift back and forth every 4 lines or so. Was stable, but lines were jagged instead of straight.
800x600 looks perfect though...
What is the 'native' resolution ?
I think there is a lot of sampling/remapping going on in these multiple-resolution monitors, and in testing an MCU with an RC oscillator as a video clock source, you get an interesting resolution-inversion effect.
Lower res screens look worse, as their sampling quanta is larger, and they can amplify any jitter.
Higher res screens are pretty much good enough, on the RC oscillator.
All this means there maybe should be some fine tune of Video Clock rates, to try to avoid those sampling-steps.
I've not seen a 'every 4 lines or so' effect - how was the clock generated for that ?
Could that be a NCO rounding effect ? - NCO can have quite low beat effects.
@Rayman
I'm getting the same effect here @ 1024x768 on my new displays.
I tried a different timing spec (VESA @75Hz) which fixed the edges but introduces some pixel jitter.
Here's a version that's a bit easier to try different modes/timing.
It looks like 1024x768 doesn't play nicely with these newer displays.
800x600 is rock solid here too.
@Rayman; Thanks for posting that pic. Looks good. I'm seeing 50 "rows" of text (displayed vertically) at 37.5 columns (800/50=16 and 600/16=37.5). I believe that means that the driver is using every other line (pixel row) of the standard font's normal 32 lines (the normal font being 32 pixel rows x 16 pixel columns when displayed normally/horizontally, of course). However, for all I know, perhaps you tweaked the driver to only use 16 out of 32 pixel lines of the font. Perhaps using the full 32 pixel rows makes the rotated text appear too tall (relative to its baseline)? Another reason could be that programmers like to get a lot of text on screen (even at the expense of resolution).

Anyway, if I understood you correctly, you use this monitor rotated vertically (unlike in the pic), the whole monitor physically rotated 90° to give portrait mode. Guess that lets you look at a lot of lines of code on the screen at once (kind of like how Chip views the P2's Verilog, if I recall correctly). I viewed Windows on a monitor oriented in portrait mode for a while, but ultimately switched back to landscape mode as it better matched my general usage. Again, thanks for the pic.
I like having two displays, with one of them in portrait mode...
Use portrait for secondary display, nice for web and full page text...
Anyway, the display in the photo is in portrait mode. My phone just decided to rotate the image for some reason..
It could also be that manufacturers make sure some old modes work well. For instance, I think 640x480 is always made to work on any monitor. Maybe they do this now for 800x600 too...
I think the native resolution is 1920x1080; it's a Samsung S24E450.
I think the Nx768 modes just use too much RAM...
I'm guessing a good balance is to have ~1/2 of HUB RAM for graphics and the rest for code.
For 4-bit mode, 800x600 does this.
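The budget arithmetic, as a sketch (512 KB of P2 hub RAM assumed here):

```python
HUB_RAM = 512 * 1024                       # assumed P2 hub RAM, in bytes

def framebuffer_bytes(width, height, bpp):
    """Packed framebuffer size in bytes."""
    return width * height * bpp // 8

fb_800  = framebuffer_bytes(800, 600, 4)   # 240,000 bytes, ~46% of hub
fb_1024 = framebuffer_bytes(1024, 768, 4)  # 393,216 bytes, 75% of hub
```

So 800x600 at 4 bpp sits just under the half-hub budget, while 1024x768 at the same depth blows through it.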
On the topic of Memory and ceilings.. today's press release on sampling MCUs for Automotive Clusters...
* ARM Cortex-R5 core at 240 MHz ( 40nm Flash Process)
* up to 4MB of high-density embedded flash, 512 KB RAM and 2 MB of Video RAM
* 2 x 12-pin HyperBus memory interfaces
* 50 channels of 12-bit Analog to Digital Converters (ADC),
* 12 channels of multi-function serial interfaces and I2S interfaces
* AUDIO DAC to output the complex, high-quality sounds
* TEQFP-208 and TEQFP-216, -40˚C to +105˚C
* Low-voltage Differential Signaling (LVDS) video output,
No price indications, but some serious resource indicated there.
In any case, 16 colors, well chosen, can be dithered at this resolution. Anything over about 320 pixels on smaller displays will dither well enough to be useful and good looking.
And, that's per scan line too. A small display list with per-line, or region-of-lines, palette entries could do a lot. That could also be extended to tiles, similar to how the P1 driver Chip wrote works.
We didn't see that one used to full potential very much. The color to palette to tile mapping was just complex enough to inhibit use. 2 bits was also just a bit too low for the better dither combinations.
Here, resolution, RAM, and available colors do change the game. A palette set is basically 16 longs. 16 of those ends up being 1 KB of color definition memory. This, mapped to 16 or 32 pixel tiles, each tile definition unique or shared, like the P1 driver, would yield a flexible and sharp-looking display, also capable of partial, or region-based, buffering.
Such a display could occupy 256 KB, or much less depending on the tile mapping done.
16 colors is also enough to reserve a few for roaming screen elements. Pointers, etc...
P2 is fast enough to dynamically draw, or worst case, do partial-buffer drawing, which makes that display useful. Still have half the HUB left, and a very good overall display, while presenting simple bitmap techniques and primitives to users. Could even put a palette optimizer in there. Let it shuffle colors and free the user from all that. They just ask for stuff, and if the dither or color allocation budget is exceeded, they can just make a different choice rather than dig into the guts of it.
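The palette numbers above check out as a quick sketch (4 bytes per long):

```python
LONG = 4                           # bytes per P2 long

palette_set   = 16 * LONG          # one palette set: 16 longs = 64 bytes
palette_bytes = 16 * palette_set   # 16 sets = 1,024 bytes (the "1 KB")

pixel_bytes = 800 * 600 * 4 // 8   # 4 bpp bitmap at 800x600: 240,000 bytes
# Bitmap plus all 16 palette sets together still leave over half of a
# 512 KB hub free for code.
```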
Here are some great 320 pixel, 16 fixed color images...
For GUI applications, a 4-bit palette of black, white, grey, dark grey, red, blue, green (bright), red, blue, green (dark) leaves 6 free-range colors. Most things can be done very nicely with that. Dither, or map multiple palettes per region, whichever makes the most sense. Or reduce free-range colors, add yellow, aqua, violet, and allow for icons, etc... These techniques do decrease effective resolution for broad color specifications, but still allow for fine detail where that's not needed.
When we get more than one COG able to access these things, another alternative exists when using component video.
I've been itching to give this a go:
Component video offers a single monochrome channel and two color-difference channels. Run the monochrome at high resolution and/or high depth, and run color lower. This delivers a mixed display with good color, lean on RAM.
One would trade complex palette schemes for two bitmap spaces with a consistent mapping between them. Depending on requirements, RAM economy could be very good while still presenting a robust color selection.
An example might be 200x200 full (or 8-bit) color with an 800x600 monochrome channel at 4 or 16 levels. The human eye is intensity dominant. (That's what NTSC depended on back in the day.)
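Using the example numbers from this post (the exact channel depths are illustrative), the RAM math looks like this:

```python
def fb_bytes(width, height, bpp):
    """Packed plane size in bytes."""
    return width * height * bpp // 8

luma   = fb_bytes(800, 600, 4)        # high-res monochrome plane: 240,000 B
chroma = 2 * fb_bytes(200, 200, 8)    # two low-res color-difference planes: 80,000 B
mixed  = luma + chroma                # 320,000 B total

full_rgb = fb_bytes(800, 600, 24)     # same resolution at 24 bpp: 1,440,000 B
```

So the mixed layout lands at under a quarter of the RAM of a straight 24 bpp frame at the same resolution.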
Comments
How do I use two stop bits? :)
No joke.
The only frequency counter I have is the one I haven't built yet... and the P1 & P2's :)
The P2 mode %01101 = count A-input positive edges.
Set for a 1 s capture repeat time, that should make a good enough counter for > 1 MHz.
Check your code and see if that bit is set.
I just tried changing pinsetm to WRPIN, but that doesn't seem to be enough...
I'm thinking that 800x600x4-bit is a good place to be...
You were so close.
The only other line requiring a change was the "dacmode" constant.
Maybe if driver was changed to output 1366 columns, it would work better.
Think I really want 4-bit color for a nice looking GUI...
Do you mean 4 bits going via a CLUT ?
Likely.
Here are some great 320 pixel, 16 fixed color images...
http://c64pixels.com/main.php
This is the palette:
https://upload.wikimedia.org/wikipedia/commons/6/65/Commodore64_palette.png
May be a sweet target for shared LUT mode COGS.
Comes in at an even 300 kB for 4bpp. Can do a decent job with photos too (see attached).