What is this diagonal line in layer M3 through some of the holes in the pad?
Set the top cell to PAD_IO, CTRL+scroll down until all layers above M3 are invisible, and then zoom in on the square pad on the top.
This next image is with only layer M3 visible, viewed from underneath. You can see it's not rectangular, not to mention the Z-buffer glitches due to coplanar polygons.
The problem could easily just be my computer, a 2009 MacBook running Gentoo Linux with an Intel GMA GPU that probably has bad drivers.
EDIT: If I set the top cell to PAD_BASIC, it goes away. But it consistently comes back if I look at PAD_IO.
The strange diagonal piece is likely some kind of data error, while that shimmering is due to Z-buffer/float uncertainty that I wish the UNUM idea would finally fix, once and for all. I noticed that in some cases, vias are missing 3 out of 6 of their planes. That stuff is annoying and I don't know how we could remedy it.
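For the coplanar-polygon shimmer specifically, a standard OpenGL mitigation is polygon offset, which biases one surface's depth values so stacked faces stop tying in the Z-buffer. A minimal sketch of the generic technique (drawViaPlanes() is a hypothetical stand-in for whatever pass renders the overlapping faces, not GDS3D's actual code):

#include <GL/gl.h>

void drawViaPlanes();  // hypothetical: the pass that renders the coplanar faces

void drawWithOffset() {
    // Push these polygons slightly deeper in the depth buffer so they
    // no longer tie with their coplanar twins during depth testing.
    glEnable(GL_POLYGON_OFFSET_FILL);
    glPolygonOffset(1.0f, 1.0f);  // factor scales with slope, units with depth resolution
    drawViaPlanes();
    glDisable(GL_POLYGON_OFFSET_FILL);
}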
.... We should see an improvement in the next layout, which will be done in two days. I'll post it here.
If you are doing a respin of the PAD ring, can you include a crystal CL fix to add a no-added-C choice?
I think there was a spare decode slot, and IIRC you have Nett=15pF and Nett=30pF lines, but no Nett=PinC for lowest-cap crystals, or clipped-sine feed cases.
More steps would be nice, but I can see that is more work; 3 should be simple.
Three CL steps also allow some corrective action, where you target the mid value, then measure and nudge the CL value up/down using the other two choices, to trim for long-term 'zero drift' (see the sketch below).
This can also allow deliberate clock modulation for sampling, to avoid exact-sync effects.
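A sketch of that trim idea, with setCL() and measureFreqErrorPPM() as hypothetical stand-ins for whatever register access a real part would expose (nothing here is actual P2 functionality):

// Hypothetical accessors -- illustrative stand-ins, not real P2 registers.
void setCL(int step);            // 0 = low CL, 1 = mid CL, 2 = high CL
double measureFreqErrorPPM();    // positive means the oscillator runs fast

// Start at the mid CL value, then nudge toward zero drift:
// more load capacitance pulls the crystal frequency down, less pulls it up.
void trimCL() {
    int step = 1;
    setCL(step);
    double err = measureFreqErrorPPM();
    if (err > 0 && step < 2) setCL(++step);        // running fast: add load C
    else if (err < 0 && step > 0) setCL(--step);   // running slow: remove load C
}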
Did you look at the PLL counters, and allowing (Crystal / M) -> PFD = VCO / N?
The present PFD = crystal is quite restrictive in VCO choices.
A 5-bit crystal divider keeps the PFD in the MHz+ region, and gives 8-9 bits of VCO divide (see the sketch after the examples).
Examples:
you can generate a USB-related 48, 96, or 144 MHz from a low-cost 19.2 MHz GPS clipped-sine TCXO
or, you can use 19.2 or 26 MHz GPS TCXOs to PLL to an exact 100.000 MHz
- That's not possible with PFD=Crystal.
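To make the arithmetic concrete, here is a brute-force search over the proposed divider ranges, assuming PFD = Fxtal/M = Fvco/N, i.e. Fvco = Fxtal * N / M. This just illustrates the scheme described above, not anything the P2 actually implements:

#include <cstdio>
#include <cmath>

int main() {
    const double fxtal = 19.2e6;     // GPS TCXO
    const double ftarget = 100.0e6;  // desired VCO frequency
    for (int m = 1; m <= 32; ++m) {        // 5-bit crystal divider
        for (int n = 1; n <= 511; ++n) {   // 9-bit VCO divider
            double fvco = fxtal * n / m;
            if (std::fabs(fvco - ftarget) < 1.0)   // within 1 Hz
                printf("M=%d N=%d PFD=%.1f kHz -> %.6f MHz\n",
                       m, n, fxtal / m / 1e3, fvco / 1e6);
        }
    }
    return 0;
}

For 19.2 MHz in, this finds M=24, N=125 (PFD = 800 kHz), giving exactly 100.000 MHz, which indeed has no solution when the PFD is locked to the crystal frequency.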
I don't really want to get that all changed around, at this point, as what we have was tested in silicon in '09, for the most part, and it passes all-corner simulation. It would be a week's diversion to revisit that enough to be confident in the outcome. I know it works with 20MHz crystals just fine, and the VCO has the range to go from 2x to 16x, no problem. I'm already over a barrel on the Verilog, but almost done. Next chip, though, we can put all these things in.
In this case, the layout guy just decided to use even metal layers for running wires one direction, while odd layers were used to run wires perpendicularly. You can see how, by breaking that rule, you could save a whole layer of metal. That's going to get optimized. ...
Does saving a layer on the pins area actually help any, if the logic area routing determines the total metal-layer count anyway?
It's true that the synthesized logic is going to eat 6 layers, including power grid, so there's no real point in conserving layers in the I/O pad, unless it means allowing for better power routing, which is what we are working on right now.
OK.
Meantime, users can always add an external clock-synthesis chip for those cases where the limited P2 granularity is not up to the task.
I see the OnSemi NB3V60113G and NB3H63143G in OTP, or the SiLabs Si5351A, which is I2C-programmable.
Adafruit have a nice Si5351 module https://www.adafruit.com/product/2045
Ah, that makes sense. More plane-related metal allows lower impedances, and more capacitance for local decoupling.
A s(t)imulation of this model showing the various wires and elements switching would be the next step, though if such things exist I suspect they are big money?
I don't know that such a simulation tool exists, though I'd like to make one. If only chip fabrication weren't so expensive, there would be a nice market for such a thing.
Somewhat related: http://visual6502.org/JSSim/expert.html
Not 3D or "general purpose".
An accurate hardware/graphical simulation of the Prop would be great.
But even 3D presentations of small sections of the chip would be very educational.
While very slow, Minecraft's redstone is pretty cool; lots of "computers" have been built in the game.
You can also (virtually) walk around "inside" the computers.
Mike
Very cool, but are you sure you're not "bending" a few NDAs by posting this? In my research, all publicly available posts of chip layouts are subject to restrictions on how much sizing information someone can glean from the images... GDS files don't leave much to the imagination...
I don't know. I don't think so, anyway.
All 180nm CMOS logic processes are nearly identical today, with very little variation on design rules. 180nm has been pretty much commoditized, at this point.
This layout was all from scratch, based on our schematic. There is no one else's "IP" in there, but our own. I suppose if those fabs wanted to find out specifics on the other guys' processes, they would have more direct means. Maybe they would be sensitive about their cutting-edge or specialty processes, but this basic 180nm process is 17 years old, already. So, I doubt there's any issue.
There isn't much on the web to look at, though, as real layouts go. That's for sure. I could never find anything that seemed really meaty and edifying. For someone curious, this layout and viewer would be a golden find.
It's too bad there isn't a GDSII <=> Region file format or .MCR or .MCA file conversion. That way anybody who can build using Minecraft could build IC layouts. Instead of crowdfunding, you would have crowd resourcing, and the Propeller 2 would be completed in a week.
I wish that (when the P2 is finished) Chip will go back to the P1 Verilog someday. That way maybe we can all completely understand every single line of the Verilog code, and be able to make our own modifications.
This way he will not be alone anymore.
I don't like the idea of a P3 with a bigger package, more power-hungry and bloated than the P2. In fact, I think it would be highly desirable to have a slightly improved P1 in a medium-sized package (44 | 64 | 80 pin).
Thanks for sharing the GDSII layout.
Can you give us more details about the ADC and DAC specifications (resolution/bits, samples per second, etc.)?
If you have the GDSII file then there should be some design specs for the ADC/DAC, right?
The THICK_DAC_SLOW is an 8-bit R-2R DAC that can feed the comparator for pin-level sensing. The comparator can also be used between pins, and positive or negative feedback can be output, as well, to form op-amps, filters, etc.
The THICK_DAC_FAST is an 8-bit sum-of-resistors DAC that has a 120-ohm mode (3ns to LSB settling, for RF/video) and a 1k-ohm mode (general purpose, audio).
The THICK_ADC is a first-order sigma-delta ADC that uses a current-balancing scheme to generate a bit stream. Summing 2^n samples yields an n-bit conversion (256 clocks = 8-bit sample). It has several scale modes, plus calibration modes.
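As a toy model of that summing scheme (a first-order modulator feeding a 2^n-sample accumulator; my own sketch of the principle, not the actual silicon):

#include <cstdio>

// First-order sigma-delta: integrate (input - feedback), output the sign.
// Summing the 1-bit stream over 2^n clocks yields an n-bit conversion.
int main() {
    const double vin = 0.7;    // normalized input, 0..1
    const int clocks = 256;    // 2^8 clocks -> 8-bit sample
    double integ = 0.0;
    int sum = 0;
    for (int i = 0; i < clocks; ++i) {
        int bit = (integ > 0.0) ? 1 : 0;   // comparator decision
        integ += vin - bit;                // current-balancing feedback
        sum += bit;
    }
    printf("8-bit result: %d (expect ~%d)\n", sum, (int)(vin * clocks));
    return 0;
}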
Some time ago, Chip, you mentioned 75% of the ADC range would cover 0 to 3.3 V, i.e. it could operate slightly below and above the rails.
Is that still the case, or has it changed now you're going through Treehouse?
That's still the case. It's about 1/8th duty at GND and 7/8th duty at 3.3V. You can pull those pins about 400mV above or below the power rails before the parasitic diodes start to clamp. There are modes where you can internally connect the ADC input to GND or 3.3V to calibrate it, through the same pathways that hooking the pin to GND or 3.3V externally would use. You should be able to get a 'y=ax+b' equation going where you compensate for scale and offset, and get very accurate readings.
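A minimal sketch of that y = ax + b correction, assuming you have captured raw sums with the input internally tied to GND and to 3.3 V (roughly the 1/8 and 7/8 duty points mentioned above; the numbers here are illustrative, not measured):

#include <cstdio>

// Two-point calibration: raw readings at the known 0 V and 3.3 V endpoints
// give scale and offset; everything in between then maps linearly.
int main() {
    const int raw_gnd = 32;    // ~1/8 of 256 clocks, input tied to GND
    const int raw_vio = 224;   // ~7/8 of 256 clocks, input tied to 3.3 V
    const int raw = 128;       // a pin reading to convert

    const double a = 3.3 / (raw_vio - raw_gnd);   // volts per count
    const double b = -a * raw_gnd;                // offset in volts
    printf("%.3f V\n", a * raw + b);              // prints 1.650 V for mid-scale
    return 0;
}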
It'll let us measure scaled bipolar signals too, if we're careful.
After installing everything GDS3D needs, make -C linux completed with no problems. The following step was:
./GDS3D -p onc18.txt -i Prop2_v1.gds
The results:
Opened process file "onc18.txt"
Opening GDS file "Prop2_v1.gds"..
Summary:
Paths: 6569
Boundaries: 251853
Boxes: 0
Strings: 936
Stuctures: 11691
Arrays: 1
Picking "DIE_100" as topcell.
Falha de segmentação (imagem do núcleo gravada)
How can I solve it? I am a newbie in Linux; I'd appreciate any advice.
If someone wonders what it means... segmentation fault!
Sadly that is a bit unspecific.
You may try valgrind; it is excellent at finding where it is that a program is using (most probably in this case) already-freed memory, null or invalid pointers, and so on.
The core image (imagem do núcleo) can be loaded into gdb for a similar purpose; I find valgrind a bit easier on the user.
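For reference, the usual invocations (generic valgrind/gdb usage, nothing GDS3D-specific; assuming the same working directory as the crash):

ulimit -c unlimited                              # allow the core image to be written
valgrind ./GDS3D -p onc18.txt -i Prop2_v1.gds    # re-run under valgrind
gdb ./GDS3D core                                 # or load the core image into gdb
(gdb) bt                                         # backtrace to the faulting frame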
Ale, I did not see your post until now. I installed Valgrind and got this, among other things:
==32031== Invalid read of size 8
==32031== at 0x4049B8: Wm_X11::main(int, char**) (in ...GDS/GDS3D-master/linux/GDS3D)
==32031== by 0x404EB3: main (in .../GDS/GDS3D-master/linux/GDS3D)
==32031== Address 0x8 is not stack'd, malloc'd or (recently) free'd
==32031==
==32031==
==32031== Process terminating with default action of signal 11 (SIGSEGV)
==32031== Access not within mapped region at address 0x8
==32031== at 0x4049B8: Wm_X11::main(int, char**) (in .../GDS/GDS3D-master/linux/GDS3D)
==32031== by 0x404EB3: main (in .../GDS/GDS3D-master/linux/GDS3D)
==32031== If you believe this happened as a result of a stack
==32031== overflow in your program's main thread (unlikely but
==32031== possible), you can try to increase the size of the
==32031== main thread stack using the --main-stacksize= flag.
==32031== The main thread stack size used in this run was 8388608.
==32031== 1 bytes in 1 blocks are possibly lost in loss record 1 of 528
==32031== at 0x4C2A9B5: calloc (vg_replace_malloc.c:711)
==32031== by 0x16D501CC: ??? (in /usr/lib64/dri/i965_dri.so)
==32031== by 0x16B9625B: ??? (in /usr/lib64/dri/i965_dri.so)
==32031== by 0x16E9D48E: ??? (in /usr/lib64/dri/i965_dri.so)
==32031== by 0x16E4CE73: ??? (in /usr/lib64/dri/i965_dri.so)
==32031== by 0x16E4CF54: ??? (in /usr/lib64/dri/i965_dri.so)
==32031== by 0x51B7A26: ??? (in /usr/lib64/libGL.so.1.2.0)
......
.....
How can I solve this?
Thanks in advance.
Vglib,
I've just gone back and run that again myself to see what is normally reported. Here's the early part of the output ...
==============================================================================
GDS3D v1.8, Copyright (C) 2013 IC-Design Group, University of Twente
Created by Jasper Velner and Michiel Soer, http://icd.el.utwente.nl
Based on code by Roger Light, http://atchoo.org/gds2pov/
==============================================================================
It continues on with a ton of object construction details. The important part that appears missing from your output is the activation of OpenGL and reporting of the driver and GPU in your system.
I'm guessing GDS3D doesn't check an import return code and tries to use a non-existent feature of your driver. What does the following command produce for you?:
This is what I have,
[root@ ]# glxinfo |grep version
server glx version string: 1.4
client glx version string: 1.4
GLX version: 1.4
OpenGL core profile version string: 3.3 (Core Profile) Mesa 11.2.2
OpenGL core profile shading language version string: 3.30
OpenGL version string: 3.0 Mesa 11.2.2
OpenGL shading language version string: 1.30
OpenGL ES profile version string: OpenGL ES 3.1 Mesa 11.2.2
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.10
Is something missing?
Thanks in advance.
Evanh,
I tried it out in Windows and it works right with onc18.txt and prop2_V1.gds. Now I am trying to run my own GDS file, made with 0.35 um technology, but I get error messages.
I've changed onc18.txt to an onc35 file according to the technology description of the AMS 0.35 um process, which I've attached too. Please, can you help me find where my error is?
Thanks in advance.
Hmm, doesn't look promising. I just tried downgrading my OpenGL support but it still worked.
$ LIBGL_ALWAYS_SOFTWARE=1 glxinfo |grep "version\|vendor"
server glx vendor string: SGI
server glx version string: 1.4
client glx vendor string: Mesa Project and SGI
client glx version string: 1.4
GLX version: 1.4
Max core profile version: 3.3
Max compat profile version: 3.0
Max GLES1 profile version: 1.1
Max GLES[23] profile version: 3.0
OpenGL vendor string: VMware, Inc.
OpenGL core profile version string: 3.3 (Core Profile) Mesa 17.2.0-rc5
OpenGL core profile shading language version string: 3.30
OpenGL version string: 3.0 Mesa 17.2.0-rc5
OpenGL shading language version string: 1.30
OpenGL ES profile version string: OpenGL ES 3.0 Mesa 17.2.0-rc5
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.00
==============================================================================
GDS3D v1.8, Copyright (C) 2013 IC-Design Group, University of Twente
Created by Jasper Velner and Michiel Soer, http://icd.el.utwente.nl
Based on code by Roger Light, http://atchoo.org/gds2pov/
==============================================================================
You could try that command yourself to see if software-only mode stops the crashes. If yours still crashes then I guess it's not an OpenGL problem at all.
EDIT: Also try: glxinfo |grep "version\|vendor"
EDIT: EDIT: Ah, the OGL option for software rendering only works when the GLX vendor is SGI/Mesa. I can achieve this when changing my graphics driver to Nouveau. When I revert back to the nVidia driver then setting that LIBGL_ALWAYS_SOFTWARE variable makes no diff.