James,
You've missed the other half of that comment, where we were talking about the politics of the industry. It has always been a bugbear for me that VGA monitors, the first PC monitors that supported decent analogue signalling, didn't allow horizontal scan frequencies below 30 kHz, i.e. basic VGA 640x480 was the bottom, with no support for anything less.
At no point has this problem improved, even though there have been ample opportunities to do so. There have been many times I've had to tell someone to order $5000 monitor replacements for older equipment just because of this issue.
You can't even buy a universal scan converter that works in that region between 15 kHz and 30 kHz.
I've had success with a $20 converter I bought on eBay, a GBS8200. It does 15 and 24 kHz.
They are a bit noisy, but can be cleaned up with some foil tape shielding.
There is other new stuff still to test, e.g. Sinc2/Sinc3, Hann/Tukey windows, and the SCOPE instruction. I imagine Chip has a list that he is working through in order, and we'll have to be patient.
Yes, checking how this works with external ADCs will be interesting, as will the noise floor on the clock-gated P2.
I see ADI have some new parts in this area:
ADUM7701, ADUM7703: isolated modulators, 16-bit, external filter - broadly similar to others, with better dV/dt and better SFDR?
ADAU7112: Stereo PDM to I2S/TDM converter - looks like a sinc3 filter plus a time-slot allocator, so multiple PDM signals can aggregate onto a TDM bus.
This uses either edge of the CLK to sample stereo PDM, and the PDM_CLK is derived from the I2S clock/frame, so that needs to be present while the part decides its mode.
I think P2 can either replace many of these, or work with many of these, depending on which makes the most wiring sense.
To match the stereo feature, I think two smart pins would map onto PDM_CLK and PDM_DAT, with opposite clock polarity selected?
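If that mapping works, the channel split is easy to model. A minimal Python sketch (not P2 code; the rising-edge-equals-left convention is my assumption from the datasheet's description) of deinterleaving a both-edges stereo PDM stream:

```python
# Sketch: a stereo PDM stream carries one channel on the rising clock
# edge and the other on the falling edge, so the combined bitstream is
# simply interleaved L, R, L, R, ... Two counters/filters clocked on
# opposite phases each see only their own channel's bits.

def split_stereo_pdm(bits, first_channel_is_left=True):
    """Deinterleave a both-edges PDM bitstream into (left, right) lists."""
    left = bits[0::2] if first_channel_is_left else bits[1::2]
    right = bits[1::2] if first_channel_is_left else bits[0::2]
    return left, right

# Interleaved stream: L, R, L, R, ...
stream = [1, 0, 1, 1, 0, 0, 1, 0]
left_bits, right_bits = split_stereo_pdm(stream)
```

The two smart pins would each effectively perform one of the two slices above, selected by clock polarity.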
You might have skimmed the datasheet a little too quickly there, JMG. I2S is a specified PCM over SPI. The ADAU7112 converts PDM input to PCM output. The TDM naming is because of the PCM interleaving, which I guess must be an extension to I2S these days.
Oh, do you mean a prop2 being used with a number of stereo PDM parts, instead of using any ADAU7112? Wow, multiplexing the PDM is pretty out there. You do realise that most stereo ADCs are going to be SPI/I2S interfaces anyway. I certainly haven't seen a stereo isolated PDM part, kind of defeats the purpose of isolation.
Huh, there is a basic count mode that could do it maybe:
%01100 = Count A-input positive edges when B-input is high
B-input would be the PDM data pin. A-input would be the fed back external clock from another smartpin that also goes to the external ADCs. Like you say, two of those smartpins with inverted A-input on the second would seem to do it. EDIT: I have an AD7400 eval board but I didn't use this mode for it. I ran it asynchronously with mode %01111, which seemed fine.
I don't think the SINC2/3 modes have any edge+gating config though.
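The gated-count behaviour is easy to model in software. A hedged Python sketch of mode %01100's "count A-input positive edges when B-input is high": with B on the PDM data line, the count over a fixed window is just the number of 1-bits clocked out in that window, i.e. a first-order (sinc1) decimation of the bitstream.

```python
# Sketch (not silicon-accurate): smart pin mode %01100 counts positive
# edges on the A input, but only while the B input is high. With A fed
# the looped-back ADC clock and B fed the PDM data, each counted edge
# corresponds to one '1' bit in the sigma-delta stream.

def count_a_edges_while_b_high(a_samples, b_samples):
    """Count positive edges on A gated by B, over one sample window."""
    count = 0
    prev_a = a_samples[0]
    for a, b in zip(a_samples[1:], b_samples[1:]):
        if prev_a == 0 and a == 1 and b == 1:  # rising edge on A, B high
            count += 1
        prev_a = a
    return count
```

Reading and clearing the count once per output sample period gives the decimated value directly.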
Yup. My reading of the ADAU7112 is that it takes PDM (they seem to target microphones, but it could be any PDM front end), feeds that into a decimation filter, and then drops the result into a choice of time slots on the output pins, with examples showing 4 pairs of time-slot choices.
Their PDM_DAT samples on both clock edges for a stereo/2-channel system, so they must have 2 decimation filters.
Or, someone may choose to merge the sensors remotely with up to 4 x ADAU7112, then feed a single I2S/TDM channel back to the P2.
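A rough Python sketch of that aggregation idea. The per-device slot assignment here is my guess at how the pairing would work; the datasheet only says each device gets a choice among 4 pairs of time slots.

```python
# Sketch: up to four ADAU7112s sharing one TDM frame, each device
# dropping its two decimated channel words into its own pair of slots.
# The receiver (a P2 cog, say) then reads one frame and has all
# channels, instead of wiring each PDM pair back individually.

def build_tdm_frame(device_words, slots_per_frame=8):
    """device_words: list of (left, right) tuples, one per device."""
    frame = [0] * slots_per_frame
    for dev, (left, right) in enumerate(device_words):
        frame[2 * dev] = left       # assumed slot pairing: device n
        frame[2 * dev + 1] = right  # owns slots 2n and 2n+1
    return frame
```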
I'm fighting with TonyB_'s Z80 bytecode interpreter. What is that with $1F8-$1FF? The Google Doc is empty there.
Examples are sparse. I would like to use the first 4 bits as an index and the next 4 bits somehow as skip bits. I found a very nice doc about the Motorola 68000 that brought memories back; the P2 opcodes are so - boring.
The 68000 has names like ANDI, ILLEGAL, BRA(!), PFLUSH, PFLUSHR and FATAN.
I made a program tonight to check for any remaining race conditions between DIR and OUT signals, which caused pins to glitch low when going from high to float on the first silicon. On the new silicon, DIR is supposed to transition before OUT, to avoid glitching.
On the new silicon, pin 16 still exhibits a race condition, but only slightly. All the other pins are clean. I need to ask Wendy why this could be, since we made timing constraints to assure that DIR transitions before OUT on all the pins. It's not a big deal, but in future versions of P2 chips, we'll want this problem gone, all the way.
Here's my test code:
DAT             org
'
'
' Test for race conditions between DIR and OUT signals
'
                hubset  ##%1_000000_1111111111_1111_10_00       'configure PLL to top out
                waitx   ##20_000_000 / 200                      'allow crystal and PLL 5ms to stabilize
                hubset  ##%1_000000_1111111111_1111_10_11       'switch to PLL

'               wrpin   ##%0000_100_000_000_00_00000_0,apins    'enable clocked digital I/O
'               wrpin   ##%0000_100_000_000_00_00000_0,bpins

.loop           drvl    apins   'RACE CONDITION ON P16 WHEN DRIVEN HIGH THEN FLOATED - SPIKES DOWN
                drvl    bpins
                waitx   delay
                fltl    apins
                fltl    bpins
                waitx   delay
                drvl    apins
                drvl    bpins
                waitx   delay
                flth    apins
                flth    bpins
                waitx   delay
                drvh    apins
                drvh    bpins
                waitx   delay
                fltl    apins
                fltl    bpins
                waitx   delay
                drvh    apins
                drvh    bpins
                waitx   delay
                flth    apins
                flth    bpins
                waitx   delay
                jmp     #.loop

apins           long    31<<6 + 0       'P0..P31 (basepin 0, plus 31 more)
bpins           long    31<<6 + 32      'P32..P63 (basepin 32, plus 31 more)
delay           long    1000
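The failure mode being hunted here can be modelled abstractly: the pad drives the OUT level while DIR=1 and floats otherwise, so the order in which the two bits transition decides whether a high-to-float release glitches low. A small Python sketch of the two orderings (a model of the race, not of the silicon):

```python
# Sketch: pad output is the OUT bit when DIR=1, else high-impedance 'Z'.
# Floating a driven-high pin with FLTL ends at DIR=0/OUT=0. If OUT wins
# the race, the intermediate state is DIR=1/OUT=0 and the pad briefly
# drives low -- the spike seen on P16. If DIR wins, the release is clean.

def pad(dir_bit, out_bit):
    return out_bit if dir_bit else 'Z'

def states(start, end, out_first):
    """Return the pad level through start -> intermediate -> end."""
    d0, o0 = start
    d1, o1 = end
    mid = (d0, o1) if out_first else (d1, o0)
    return [pad(*start), pad(*mid), pad(*end)]

# drvh then fltl, OUT transitioning first: glitches low before floating
# states((1, 1), (0, 0), out_first=True)  -> [1, 0, 'Z']
# DIR transitioning first: clean release
# states((1, 1), (0, 0), out_first=False) -> [1, 'Z', 'Z']
```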
No, I meant with jumping to them with the bytecode interpreter; some demos use $1F8, not $1FF. Why?
Mike
The stack must be $1FF for XBYTE.
There is a debug ROM that hides in that space during a debug interrupt, though. The code there gets you in and out of a debug interrupt. $1F8..$1FC saves registers $000..$00F to high hub RAM, then loads new code from high hub RAM and jumps to it. $1FD..$1FF restores $000..$00F from high hub RAM and returns from the debug interrupt. This is not documented yet, but ozpropdev knows how it works.
If you get a chance, it would be good to know where the ADC rolls off
The ADC didn't change. It should act the same as it does on the first silicon. Or, did I not understand your question?
You mentioned a plan to divide the integrating capacitor by a factor of 8, to reduce the clumpiness of the sigma-delta stream. Was it solely that capacitor that changed? Or was the whole front end sped up?
I re-simulated the ADC quite a bit, but determined that shrinking the integrator cap was not much of a solution, since the front-end FETs are what's really slowing things down. I tried speeding them up, but it wasn't practical to do so. It would have helped a little to shrink the integrator cap, but a heavy at ON Semi didn't like me asking for layout changes, so I just did the PLL filter, which was the most critical.
In future chips, I want the ADC to always run independently of everything else in the pin, and I want to put in maybe 15 independent integrators to get a 4-bit reading per clock.
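My reading of that idea (an assumption about the intent, not a circuit description): 15 parallel integrators tripping at staggered points would produce a thermometer code each clock, and a 15-wide thermometer code spans 0..15, which is exactly a 4-bit sample, versus the single 0/1 of a one-integrator sigma-delta.

```python
# Sketch: why 15 integrators gives a 4-bit reading per clock. Each
# integrator contributes one comparator bit; the per-clock reading is
# the count of tripped comparators (a thermometer code), which covers
# the 16 values 0..15 and therefore fits in 4 bits.

def thermometer_to_binary(comparator_bits):
    """15 comparator outputs -> 4-bit sample (0..15)."""
    assert len(comparator_bits) == 15
    return sum(comparator_bits)
```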
Originally, XBYTE did need $1F8..$1FF on the stack, but Chip and I simplified and improved it, and now it is only $1FF, no matter what the bytecode table size. The doc explains how it works; the direct PDF download link is https://docs.google.com/document/d/1UnelI6fpVPHFISQ9vpLzOVa8oUghxpI6UpkXVsYgBEQ/export?format=pdf
(Note: $001F8.. should be deleted in the Debug Interrupt section on p.38)
... but a heavy at ON Semi didn't like me asking for layout changes, so I just did the PLL filter which was the most critical.
Right, Chip, there seems to be a discrepancy between this improved PLL and your recent claims of a PLL-limited 390 MHz sysclock. I'm thinking the sysclock limit is not from the PLL at all.
Ok, thanks for the ADC explanation. I think the current balance is quite useful for many things; it's kind of reassuring it hasn't altered.
I remember changing the VCO inverters' gate lengths while I was editing the layout, before I turned it over to ON Semi. Later, they massaged the layout to work with their tools and I had them change the filter resistor settings, which improved the PLL in the new silicon. I need to get the layout database from them so I'll have the latest layout on my end.
Anyway, I can give the PLL impossibly-high-frequency settings and it goes as fast as it can and becomes temperature dependent, since it can't lock. This makes me think that the VCO divider is faster than the VCO, which is better than the opposite, since things top out safely.
Right, so 390 MHz will just be the natural limit of the synthesised core logic/flops at room temp without active cooling. The v1 chip could get close to that, maybe 370 or 380 MHz, for about one second before it got too hot.
That 390 MHz is the limit of the analog VCO in the XI/XO pads. The VCO divider (10-bit), crystal divider (6-bit) and post divider (4-bit) are all custom logic in the XI/XO pad. The core just receives the final synthesized clock signal (or the crystal, RCFAST, or RCSLOW). We don't know how fast the core could run, given a faster clock signal. The only option for more speed is to drive a faster clock into XI and select oscillator mode %01 and source %10 (HUBSET #%0110).
The 4-bit post divider is outside the closed loop of the PLL, right? I've been using that divider, by setting it to /2, to test the top frequency of the PLL of the v1 chip. The result I got was around 415 MHz at room temp.
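For reference, the clock arithmetic implied by the mode word used in the test code above (%E_DDDDDD_MMMMMMMMMM_PPPP_CC_SS) can be sketched in Python. The PPPP decoding here (%1111 = VCO/1, otherwise divide by (15-PPPP)*2) is my reading of the silicon doc, so treat it as an assumption to verify.

```python
# Sketch of the P2 PLL arithmetic: the crystal is divided by D+1, the
# VCO multiplies that by M+1 inside the loop, and the 4-bit post
# divider PPPP sits outside the loop (which is why setting it to /2
# lets you probe the VCO's top frequency at half rate, as above).

def pll_sysclock(fxtal, d, m, pppp):
    """Sysclock in Hz from crystal frequency and PLL mode fields."""
    fvco = fxtal / (d + 1) * (m + 1)
    post = 1 if pppp == 0b1111 else (15 - pppp) * 2  # assumed encoding
    return fvco / post

# e.g. 20 MHz crystal, /1, x10, post /1 -> 200 MHz sysclock
```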
Those are registers:
1F8 = PTRA
1F9 = PTRB
1FA = DIRA
1FB = DIRB
1FC = OUTA
1FD = OUTB
1FE = INA
1FF = INB
Took me a while to find your comment:
https://forums.parallax.com/discussion/comment/1457679/#Comment_1457679
Does drvh followed by fltl do the same thing as drvh followed by flth?
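A quick model answers this at the register level, per my reading of the instruction set (DRVH: OUT=1, DIR=1; FLTL: OUT=0, DIR=0; FLTH: OUT=1, DIR=0). Both sequences leave the pin floating, but the latched OUT bit differs, which would matter for a later DIRH or OUTNOT on the same pin.

```python
# Sketch of the OUT/DIR effects of the pin instructions, per my reading
# of the P2 instruction set. The pad floats whenever DIR=0, so both
# drvh->fltl and drvh->flth end floated, but with OUT latched to
# different values.

EFFECTS = {
    'drvl': (0, 1), 'drvh': (1, 1),  # drive: (OUT, DIR)
    'fltl': (0, 0), 'flth': (1, 0),  # float, but latch OUT low/high
}

def run(seq):
    """Apply a sequence of pin instructions; return final (OUT, DIR)."""
    out = dir_ = 0
    for instr in seq:
        out, dir_ = EFFECTS[instr]
    return out, dir_

# run(['drvh', 'fltl']) -> (0, 0)   floated, OUT latched low
# run(['drvh', 'flth']) -> (1, 0)   floated, OUT latched high
```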