I skimmed that first page. Much could be critiqued about how things have gone. That was before smart pins and a bunch of other developments.
Looking back, we should have pushed through P2-Hot, just to have something completed. The one-clock-per-instruction with automatic pointers was really nice.
What we have now is very nice, too, and I believe is still much lower-power than P2-Hot would have been.
A new test chip will be underway soon. Then, we'll make a full chip - barring the advent of a global economic crash or world war.
Wow.... re-reading this thread (well, the first few pages, anyhow) has been interesting! You know, P2 may have been a long time in the making, but when you go back and see just how much work has gone into making what we have today, it's actually quite impressive. It just underscores to me that "Propeller 2" doesn't do justice to the scope of changes and improvements in this new design.
Looking back, we should have pushed through P2-Hot, just to have something completed. The one-clock-per-instruction with automatic pointers was really nice.
They say that hindsight is 20:20, though perhaps that's only the case if one is wearing one's glasses, as we tend to overlook things in the past and blur lots of stuff together.
The P2-Hot didn't have the "Lazy Susan" memory scheme for sequential reads or writes at full speed, which seems super spiffy. And there must be a lot of other things it lacked (though it's a much different architecture from the current P2, which makes comparisons kind of dicey).
As another example, the P2-Hot didn't have the mechanisms to speed up byte-code processing several-fold, which has me excited about programming in SPIN. Having such speed available in a high-level language--even though interpreted--could really round out the chip. The only thing that could surpass that would be built-in hardware to specifically process SPIN, but that wouldn't work for other byte-code languages.
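For anyone who hasn't written an interpreter: the reason hardware assist matters so much is that a software bytecode interpreter spends a large share of its time just fetching, decoding, and dispatching opcodes. A minimal, purely illustrative sketch of that loop (invented opcodes, not the actual Spin interpreter) shows where the overhead lives:

```python
# Illustrative stack-machine bytecode interpreter. The fetch/decode/
# dispatch overhead in this loop is the part that hardware-assisted
# bytecode execution (e.g. the P2's XBYTE mechanism) is designed to
# shortcut. The opcodes here are invented for the example.

PUSH, ADD, MUL, HALT = range(4)

def run(code):
    stack, pc = [], 0
    while True:
        op = code[pc]                  # fetch
        pc += 1
        if op == PUSH:                 # decode + dispatch
            stack.append(code[pc])     # immediate operand follows opcode
            pc += 1
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == HALT:
            return stack.pop()

# (2 + 3) * 4
program = [PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT]
print(run(program))  # -> 20
```

Every pass through that `while` loop pays the fetch/decode cost before any useful work happens, which is why moving that part into hardware can speed an interpreter up several-fold.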
But yeah, a lot of water has sure passed over the dam. I think if Chip could really go back in time, he'd go back about seven years and figure out a way to have finished a version of the P1 with 64 I/O pins. That would have more likely impacted the world than a chip one could cook on.
But I might be wrong about that, as Cluso--I believe it is--has said many times that the P2-Hot should have been released (considering how complete the design was) and that it might not have been as hot as feared under many use cases. But it just seemed too risky at decision time to pour additional money (during those relatively high expenditure years) into a chip that wasn't designed with power conservation as a significant priority from the start.
So history went a different direction. Perhaps the P2-Hot was headed in the same direction that Chip's health was then (something I had wondered about around the time of the switch), and so radical changes were in order. I realize that the stress of the current P2 probably brought things to a boil, health-wise, but things must have been building for years before the move away from P2-Hot. So what we have now is a totally renovated P1 successor (which benefits greatly from the P2-Hot work) and a vastly rejuvenated Chip Gracey (thanks to his God, doctors, family and friends, as well as that amazing miracle diet he's on, with hopefully some exercise and other balancing factors thrown in). Yes, thank God that the course corrections came in time so that Chip may give the world the desires of his heart and care for his family.
A new test chip will be underway soon. Then, we'll make a full chip....
That's good to know and to be reassured of. It's also good that P2 development costs have gone down considerably as Parallax has ridden the learning curve. Still, I can't help but wonder if now might be the time to consider ramping up the spending again to do a full chip test instead of just (another) limited test chip. Probably not, as a conservative approach gives time to check and double-check and to tweak. But if it were just a matter of money, then potentially delaying the chip (to do the partial test) might cause it to fall out of its window of opportunity. Of course, I doubt that it's *just* a money consideration (that is, there's no doubt more design and testing to do), but I have asked myself if Parallax is waiting for education-related purchases to come in to secure financing before the final push. And if so, like others, I've also wondered if Parallax forum members couldn't contribute in terms of near-term financing. For example, if 100 forum members kicked in $1,000 USD, perhaps that could help with the funding to do a full chip test. In return, contributors could be "compensated" with an equal dollar amount of P2 chips when the chip comes out (or other Parallax merchandise if working silicon doesn't materialize). Maybe an extra $100K could make the difference. And, recently, Ken said that some sort of crowd-funding was within the realm of possibilities (or at least not a presently dismissed idea).
Based on what we've been told, a full test chip can be funded by Parallax itself when the design is ready (i.e., in due time). But maybe allowing forum members (or the world at large if going with a standard crowd sourcing site/service) would be another way to let people contribute prior to the P2 actually coming out. I guess what I'm saying is that I wouldn't take it as a sign of weakness to add some type of crowd funding into the mix, as it could be part of the overall strategy to seek input from others, and it might help with marketing somehow. But maybe I'm just coming up with arguments because I'm chomping at the bit to get silicon, and, if so, pardon my impatience, if that's what it is.
Actually, I'm willing to wait longer if other great improvements make their way into the chip, such as a package-on-package memory chip or perhaps 1MB of hub RAM (not possible in the present package) or what have you. But it seems we're near the end as to what can be added to the chip/package that would make a significant improvement and wouldn't entail significant redesign. Not to contradict my last statement, but I have wondered if Chip is overly (perhaps even stubbornly?) committed to the current package choice, as it seems like it'll be a bear to lay out and might be overly conservative in terms of power/ground pins (though I know nothing about chip design other than what I've read on this forum), but I guess he wants clean signal processing ability. And a lot of time and money has been put into the custom outer ring stuff, and the design decisions seem pretty solid. Anyway, I did wonder if the possibility of being able to use On Semi's fuses might affect the analysis in terms of the necessity (or not) of doing a partial test chip (and maybe even open up the package choices somehow). But I'm kind of just voicing pent up stuff now and have veered off course.
I will confess that I do hope that Parallax will consider publicly releasing at least a rough time plan for getting the chip into production (a best-case time scenario assuming that test chips go well and so on). I recall Ken doing so in the past. It's understandable that he hasn't done so recently (and that he may never do so until he has working chips in hand and a signed fabrication contract). But now that the design is nearly done, it's at least worth thinking about. I feel that most members want to know at least what year to expect a chip. I don't think that the recent thread about this (not the first, I'm sure) got any Parallax responses, which is their prerogative, and we certainly don't want them spending too much time on the forums or creating false expectations. But I don't think anyone was looking for promises, as such. You've got to understand that some have lost the faith, so to speak, and moved on. Obviously, Parallax has more than demonstrated their commitment to getting the P2 out when it's ready. But a skeptical (cautiously pessimistic?) person might wonder if a less-than-cash-flush Parallax is perhaps just waiting us out until we lose interest, or hoping that FPGA prices will continue to fall and just go that route. I don't believe either of those to be the case at all but could understand someone wondering along those lines.
However, it would at least be nice to know approximately when SPIN 2 will be ready enough to start coding for. Someone posted the other day that he was getting excited about the forthcoming chip now that SPIN 2 development was coming along. Me, too! I'm going to guess that Chip might have something runnable in two or three months, or perhaps by the time that a test chip comes back (barring unforeseen challenges), but I'm just guessing without any real knowledge of the situation or challenges involved. So, even if a timeframe for silicon can't be projected (at least for now), a guesstimate about SPIN 2 readiness would be much appreciated, and would also be another line of evidence (among many) that Parallax is not only committed but also on track (a new track but still a track) to bringing the design to fruition. But even without any time estimates, I'm convinced, since chip design and helping to bring inventions to life clearly are desires of Chip's heart (and mind), as supported/encouraged by Ken and Parallax. But now that significant progress has been made on the processing "engine" (so to speak) for SPIN 2, it should be easier to guess when something runnable might be ready.
Anyway, pardon me for spilling my guts (some of them, anyway). Overall, it really looks like several things are coming together all at the right time a few months down the line. At that time (I guess/expect), Parallax will be able to say more. The funding picture should be clearer and confidence in the design higher, and so on and so forth. My above talk about some kind of crowd funding possibly advancing the release of silicon is likely off base, then. But crowd funding still might be worth considering for reasons other than time-to-market and funding, so I'm leaving my comment. And my fears about laying out the P2 will likely disappear once silicon is realized and sample designs are available (though I'm hopeful that the P2 won't require a 4-layer board). But while I'm at it, I do wish that there was a published design for a module for it, like the one that was done for the P2-Hot (perhaps a module with optional HyperRAM, as regular DRAM really does take up too many pins for many uses). That could shed a lot of light on the situation. Anyway, I've put everything but the kitchen sink into this post, so I'll give it a rest for now. Hope I didn't step on any toes, as that was not my intention. I don't think I said anything too overboard, other than perhaps my parenthetical use of the word "stubbornly" (which I'm keeping only because I have a gut feeling that the package situation might merit some further consideration, despite the investments already made). Looking back at the last three years, this "baby elephant" (long gestation period) of a chip is about to be born. It seems like we're on the Concorde (okay, maybe an MD-80) and have nearly crossed the ocean and we're starting to reduce our airspeed to come in for a landing. It would be so great if we could pull up to the gate and unfasten our seatbelts by this time next year. Fingers crossed!
Actually, I'm willing to wait longer if other great improvements make their way into the chip, such as a package-on-package memory chip or perhaps 1MB of hub RAM (not possible in the present package) or what have you.
1MB of hub RAM is not going to happen in this process/package, and even the 512kB is an aspirational target that may yet bump into place-and-route and mm² realities.
Stacked die memory (SPI Flash) as an option might be easier, as other vendors do offer that, and it may be a matter of talking with OnSemi about bonding issues.
And my fears about laying out the P2 will likely disappear once silicon is realized and sample designs are available (though I'm hopeful that the P2 won't require a 4-layer board). But while I'm at it, I do wish that there was a published design for a module for it, like the one that was done for the P2-Hot (perhaps a module with optional hyperRAM, as regular DRAM really does take up too many pins for many uses).
HyperRAM is going to be important alongside P2, and it is a clear module candidate.
First modules I would expect to be 4 layer, simply because that is the safest way to approach it.
Once that is proven stable, someone brave can try 2 layer if they want.
At these small PCBs, the price difference between 2L and 4L is not great.
First modules I would expect to be 4 layer, simply because that is the safest way to approach it. Once that is proven stable, someone brave can try 2 layer if they want.
Thanks, jmg. That makes sense. Now if it were currently known that a 4-layer board was an absolute requirement (or at least highly desirable), then maybe Parallax could say so to set expectations realistically. Well, in a way, you've effectively done so for them. Anyway, obviously, in an ideal world, one would have separate planes to run power and ground (or at least ground). I guess I'm just gun shy because the prices for prototype or low-volume runs from the supplier I use (Seeed) do differ quite a bit depending on the number of layers.
Okay, I just checked, and, for example, a 150x150 mm board runs $55 for QTY 10, whereas a 4-layer version costs $137. I'd expect costs to approach at higher quantities, but QTY 50 of that size board runs $200 and $366 for 2L and 4L versions, respectively, and QTY 100 is quoted at $360 and $633 for 2L and 4L boards, respectively. However, these prices are from Seeed's prototyping service (where they combine various orders together), and not for standard runs (I don't know, but they may not actually offer regular runs because they outsource the actual manufacturing). Anyway, I assume you're right about the relatively small price difference for regular manufacturing runs.
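Working those quotes out per board makes the trend easier to see; this quick sketch uses only the Seeed figures quoted above (prototype-service pricing, 150x150 mm boards):

```python
# Per-board costs from the Seeed prototype quotes cited above:
# 150 x 150 mm boards; totals in USD at each quoted quantity.
quotes = {  # qty: (2-layer total, 4-layer total)
    10:  (55,  137),
    50:  (200, 366),
    100: (360, 633),
}

# Extra cost per board for going 4-layer, at each quantity.
premium = {qty: round(p4 / qty - p2 / qty, 2)
           for qty, (p2, p4) in quotes.items()}

for qty, extra in premium.items():
    print(f"qty {qty:3}: 4L costs ${extra:.2f} more per board")
```

The per-board 4-layer premium shrinks from $8.20 at qty 10 to $3.32 at qty 50 and $2.73 at qty 100, so even in prototype pricing the gap narrows noticeably with volume.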
And I'm sure that it's a lot more "professional" to use four layers as a standard design technique if one can afford it. I just haven't gone (and haven't needed to go) that route myself yet. But I haven't done any BGA designs (like the packaging for the HyperRAM, I believe), and if I had to, I'd probably want (or need) to use four layers. I probably just haven't gotten used to the 4L thinking yet, as my needs have been limited. But I sure appreciate your expertise on the above matter.
If we do need a 4L board, though, it is a bit of a shock to me because, a few years back, I recall seeing specs for an ARM system-on-a-chip (you know, the kind with a GPU built in and so on) that touted being two-layer friendly with a QFP package with as many or more pins than the proposed P2 (it was almost for sure done at a higher process density than the P2, and I don't recall if it had a thermal pad). Anyway, designing with the P2 is going to be somewhat of a different world than what many of us are used to with the P1 (in DIP and QFP forms), so I suppose we'd better start getting used to it. In many cases, a module will do the heavy lifting, but only if the price is right. ***Update*** My slight shock is no doubt mostly due to the fact that I'm comparing apples and oranges, as the ARM chip I'm remembering didn't have 64 configurable smart pins.
BTW, I'm glad that you're bullish on HyperRAM for the P2. I've seen your regular posts about testing it with the P2 (apparently it's yet to be demonstrated), and also your comments in the P1 section about the work that RJSM has done interfacing it to the P1. Maybe Chip could commit to producing a tentative HyperRAM module design for the P2 like the one he did for P2-Hot with DRAM (assuming the package type for the P2 is set in stone, I mean). Then again, he really has his hands full already, doesn't he! Well, in due time, then, someone at Parallax will do so.
Anyway, I assume you're right about the relatively small price difference for regular manufacturing runs.
And I'm sure that it's a lot more "professional" to use four layers as a standard design technique if one can afford it.
Keep in mind that 4L module can be better cooled, and smaller, and that counts for quite a bit.
If we take prices from a standard web quote form (https://www.pcbway.com/)
and scale those prices to a Pi Zero module size of 30 x 65 mm, we get
1.593 * 30 * 65 / (100 * 100) = $0.31 per 4L PCB at ~5k quantity
0.908 * 30 * 65 / (100 * 100) = $0.18 per 2L PCB at ~5k quantity
So you pay 13c more per PCB, for 4L, which in the price of a module, is not that significant.
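For anyone wanting to sanity-check that scaling, here is the same arithmetic as a tiny sketch, using only the two per-panel prices quoted above:

```python
# Reproduce the area scaling above: take a per-board price quoted
# for a 100 x 100 mm panel at ~5k quantity and scale it by area
# down to a 30 x 65 mm (Pi-Zero-sized) module.
def scale_by_area(price_100x100, width_mm, height_mm):
    return price_100x100 * (width_mm * height_mm) / (100 * 100)

four_layer = round(scale_by_area(1.593, 30, 65), 2)  # -> 0.31
two_layer  = round(scale_by_area(0.908, 30, 65), 2)  # -> 0.18
print(four_layer, two_layer, round(four_layer - two_layer, 2))
```

That confirms the figures above: about 13 cents extra per board for 4 layers at that size and volume, which is small against the total cost of a populated module.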
Now if it were currently known that a 4-layer board was an absolute requirement (or at least highly desirable),.....
Note here I am talking about P2 Modules, rather than any specific customer end use.
Modules by nature have to be conservative, and able to operate fully loaded (64 I/O) and clocked (~160MHz), and give the best analog specs, and users expect them to be small & packed.
Someone who only used a subset of pins, and those at low digital only speeds, may well get away with a 2 layer design.
I'd expect them to develop using a 4L module, so it should be clear if the 2L compromises things too much.
BTW, I'm glad that you're bullish on HyperRAM for the P2. I've seen your regular posts about testing it with the P2 (apparently it's yet to be demonstrated), and also your comments in the P1 section about the work that RJSM has done interfacing it to the P1. Maybe Chip could commit to producing a tentative HyperRAM module design for the P2 like the one he did for P2-Hot with DRAM (assuming the package type for the P2 is set in stone, I mean). Then again, he really has his hands full already, doesn't he! Well, in due time, then, someone at Parallax will do so.
HyperRAM is a good example of what should go onto a module, because it is hard for low volume users to handle, and it allows instant proof of operation of external memory.
Mouser shows HyperRAM for $1.63/1k, so it is not going to impact a P2 module price too much.
HyperRAM is a good example of what should go onto a module, because it is hard for low volume users to handle, and it allows instant proof of operation of external memory.
Mouser shows HyperRAM for $1.63/1k, so it is not going to impact a P2 module price too much.
And Chip has signaled being very interested in HyperRAM, too. He's just been tied up with getting everything else in place.
For those interested in pairing HyperRAM (or HyperFlash) with the P2, and also in good layout techniques that can lead to a judicious decision about whether to use two- or four-layer boards, the following links give some guidance and examples.
Yes, JRetSapDoog, HyperRAM on a Prop2 module would be something Parallax would make. Big RAM opens up lots of application possibilities.
The reason it's sound to prove the test chip out, before making a full chip, is because all those analog blocks are going to get mixed in with the digital design. The digital design's functionality is deterministic these days due to the advanced state of design tools, but the analog must be proven before adding into the mix.
Things are coming together nicely on all fronts to get a full chip built. Our current test chip will go into a shuttle in about 4 weeks. It usually takes 10 weeks to get back. By that time, the Prop2 digital design should be quite certain and proven. Notice that in the last release, there were no new instructions, just refinements to what already exists.
Work on the Spin2 interpreter has been going slowly lately, as I've been working on refining the main Verilog code and getting prepared for a China trip where we are taking a bunch of kids to help teach Chinese kids how to program our Scribbler 3 robots in Blockly. Blockly is a really neat idea for getting people started in programming. You could pull a random person off the street, show them a Blockly program, run it on some hardware, and they would probably be able to modify and add to it without any explanation. It's maybe too easy. I hope we don't wind up with a bunch of smart, but bored, Chinese kids. We'll need to get creative on giving them lots of project challenges. During the next few weeks, I may not be able to get onto the forum, since I'll be behind the Great (fire)Wall of China. I got a new FPGA release out, but am still needing to update the Google Doc to reflect new XBYTE behavior and, I just remembered this morning, the new smart pin input filtering. Am I missing anything?
... I got a new FPGA release out, but am still needing to update the Google Doc to reflect new XBYTE behavior and, I just remembered this morning, the new smart pin input filtering. Am I missing anything?
The smart pin modes DOCs could certainly do with an expansion, as there seems to be some variance between what the chip actually does, and what it should do to be field-useful.
The terse DOCs do not help, as readers are forced to guess what might/should/could be happening behind the scenes and Evanh's tests show some modes are not quite as one would expect or need...
My preference is to use equations (examples already given) over prose/sentences, as sentences always omit some vital detail.
My feeling is that things like Internal/External clock, Clock gating, and Clear on Capture should be individual config-bit accessible more than locked into a mode case statement.
Has OnSemi forwarded any news about the fuse layout yet?
Henrique
No, but your post made me ask again. Last we emailed them, somebody was going to send us a data sheet for a 128-bit fuse block that is available for the ONC18 process.
Has OnSemi forwarded any news about the fuse layout yet?
Henrique
No, but your post made me ask again. Last we emailed them, somebody was going to send us a data sheet for a 128-bit fuse block that is available for the ONC18 process.
Did you ask about FLASH, EEPROM, OTP ROM, etc., as well? You could use any of these for the fuses.
Has OnSemi forwarded any news about the fuse layout yet?
Henrique
No, but your post made me ask again. Last we emailed them, somebody was going to send us a data sheet for a 128-bit fuse block.
Thanks Chip
Wow
It will be a bit strange if they do have a datasheet addressing only the 128-bit fuse-block layout by itself.
For a ready-to-use (drop-in-place), seven-bit-addressable, matrix-organized 128-bit instance, with read-addressing and programming circuits, a datasheet does seem a must-have, though.
Otherwise, if the fuses are simultaneously and individually accessible, like the ones needed for analog trimming or for enabling/disabling optional logic-circuit behavior, and yet must be part of an instance meant to be spatially confined, with the fuses close to each other, perhaps you'll need more than two instances to avoid cluttering their vicinity with routing wires, even if that means leaving some of them unused.
We'll see what comes next on their part.
I've PM'ed you twice, about this subject and some other questions.
Just had a random thought: the P1 is one of a very small set of microcontrollers I can find on Digikey that are spec'd for the extended temperature range of -55 to 125 degC. I was wondering if the P2 is intended to have the same spec, or if tighter timing constraints will push this to a narrower range? (Or if it's simply too early to think about :P )
Note: Digikey lists the P1 as "-" in temperature range, so it could be that there are others missed there as well.
Thanks for your comments, Chip. Looking forward to a P2 module once the P2 is ready. Hope the testing for the analog blocks goes well.
Best of luck at the Scribbler 3 "camp" using Blockly to bring kids and bots together. Sounds like an "east meets west" kind of thing. It's great that Scribbler is easy to use with Blockly because the easier something is to use, the more one can add one's own creativity to it. The kids will enjoy that (if the adults guide without getting in the way).
And congrats on making it to "base camp" with Spin 2 development. It sounds like you've already discovered and added the P2 instructions needed to expedite Spin 2's main processing loop. So further development can be done during those relatively long fab waits, possibly allowing Spin 2 to be ready once the full chip comes out, if not before. Apologies for my over-eager time estimates.
Thanks for your comments, Chip. Looking forward to a P2 module once the P2 is ready. Hope the testing for the analog blocks goes well.
Best of luck at the Scribbler 3 "camp" using Blockly to bring kids and bots together. Sounds like an "east meets west" kind of thing. It's great that Scribbler is easy to use with Blockly because the easier something is to use, the more one can add one's own creativity to it. The kids will enjoy that (if the adults guide without getting in the way).
And congrats on making it to "base camp" with Spin 2 development. It sounds like you've already discovered and added the P2 instructions needed to expedite Spin 2's main processing loop. So further development can be done during those relatively long fab waits, possibly allowing Spin 2 to be ready once the full chip comes out, if not before. Apologies for my over-eager time estimates.
Thanks.
Hopefully, this Scribbler/Blockly camp goes well. The kids pick up Blockly almost without effort, so I finally realized yesterday that we need to come up with lots of challenges which invite them to think like a programmer, at a level or two above just understanding what the blocks will do. If we can come up with two or three good challenges a day, I think it will keep them going. Kids are quite spoiled nowadays by video games and internet stuff, and they're not used to deriving pleasure from honest challenges. Their attention is vied for by all kinds of forces on the internet. I've been teaching Blockly to the six kids we are taking on the trip and the more advanced ones will toggle over to their favorite apps as soon as they suppose they heard me and they sense a lull of any kind. If I move around so their screens would be in my view, a subtle keystroke toggles them back to Blockly before I can see anything. It makes me a little crazy. I don't know how teachers deal with this. And I really see the other side of "generation gap" nowadays. I remember thinking, as a kid, when a teacher would tell us what they liked, how boring it all sounded, like death or something. Now, I'm all boring and death-like.
Oh yes. When I was in school our old English teacher used to tell us teenagers "If you are bored, it's your own fault". That made no sense to us whilst stewing over the classic English poets and authors which seemed very boring at the time. It was years before I understood what he meant.
He also said, to the whole class, "When Heater brings his homework in, he is going to need a wheelbarrow". I had not done any for a year or two!
I can't imagine you can be boring Chip. There will be a few kids that get inspired, there will be a majority that are not. I suspect this ratio has not changed so much as you think since our day.
Oh yes. When I was in school our old English teacher used to tell us teenagers "If you are bored, it's your own fault". That made no sense to us whilst stewing over the classic English poets and authors which seemed very boring at the time. It was years before I understood what he meant.
He also said, to the whole class, "When Heater brings his homework in, he is going to need a wheelbarrow". I had not done any for a year or two!
I can't imagine you can be boring Chip. There will be a few kids that get inspired, there will be a majority that are not. I suspect this ratio has not changed so much as you think since our day.
We know a parent who tells her kids, "If you're bored, it's because YOU'RE boring".
Engineering is being over-prescribed these days. It's now for everybody. The reality is that maybe 2% of the population are inclined to be interested in it.
I'm going to try to be realistic about all this and accept that, yes, only a few kids are going to really get something out of it. For the rest, at least, we can try to make it a fun experience that they have good memories of.
Recalling school days, there were only a couple of us kids in the final year who were inclined to mess with electronics. We had a fascination with the newfangled technology becoming available, like LEDs or TTL chips or op-amps. Sadly there were no computers about, it being well before the microprocessor revolution started, but we read about and dreamed about that possibility. We were the same kids in the metal shop every lunch break turning parts for Stirling engines and such.
The others, well, they had a lot of time to talk about football...
As for the toggling to their favorite app... adults do this now too.
I have moved to a more interactive model. Set things up so a short, and I mean like 10 minute tops, lecture leads into an exercise.
Now the trick I use is to make the lecture itself interactive. I use the tool, only occasionally showing notes or static info. As I do the lecture, I do the thing.
Many will follow along, some of them will have questions. Great!
Once the thing is done, I pause, give free time to complete the exercise and go around the room troubleshooting, taking questions.
Sometimes someone does something novel, or worth sharing, so I do that.
When it's done, say in 30 to 40 minute chunks, I move onto the next one.
I provide material in finished stages, so people can jump in when they want to.
Every class has a few alphas. They do the stuff, ask about it all, and do so with vigor. Many do some of it, whatever they felt made sense for them. No worries.
The rest had a fun day and that's it. Good as it gets.
As for the toggling to their favorite app... adults do this now too.
I have moved to a more interactive model. Set things up so a short, and I mean like 10 minute tops, lecture leads into an exercise.
It's now over 3 years old!!!
But I might be wrong about that, as Cluso--I believe it is--has said many times that the P2-Hot should have been released (considering how complete the design was) and that it might not have been as hot as feared under many use cases. But it just seemed too risky at decision time to pour additional money (during those relatively high expenditure years) into a chip that wasn't designed with power conservation as a significant priority from the start.
So history went a different direction. Perhaps the P2-Hot was headed in the same direction that Chip's health was then (something I had wondered about around the time of the switch), and so radical changes were in order. I realize that the stress of the current P2 probably brought things to a boil, health-wise, but things must have been building for years before the move away from P2-Hot. So what we have now is a totally renovated P1 successor (which benefits greatly from the P2-Hot work) and a vastly rejuvenated Chip Gracey (thanks to his God, doctors, family and friends, as well as that amazing miracle diet he's on, with hopefully some exercise and other balancing factors thrown in). Yes, thank God that the course corrections came in time so that Chip may give the world the desires of his heart and care for his family.
That's good to be reassured of. It's also good that P2 development costs have gone down considerably as Parallax has ridden the learning curve. Still, I can't help but wonder if now might be the time to consider ramping up the spending again to do a full chip test instead of just (another) limited test chip. Probably not, as a conservative approach gives time to check and double-check and to tweak. But if it were just a matter of money, then potentially delaying the chip (to do the partial test) might cause it to fall out of its window of opportunity. Of course, I doubt that it's *just* a money consideration (that is, there's no doubt more design and testing to do), but I have asked myself if Parallax is waiting for education-related purchases to come in to secure financing before the final push. And if so, like others, I've also wondered if Parallax forum members couldn't contribute in terms of near-term financing. For example, if 100 forum members kicked in $1,000 USD, perhaps that could help with the funding to do a full chip test. In return, contributors could be "compensated" with an equal dollar amount of P2 chips when the chip comes out (or other Parallax merchandise if working silicon doesn't materialize). Maybe an extra $100K could make the difference. And, recently, Ken said that some sort of crowd-funding was within the realm of possibilities (or at least not a presently dismissed idea).
Based on what we've been told, a full test chip can be funded by Parallax itself when the design is ready (i.e., in due time). But maybe allowing forum members (or the world at large if going with a standard crowd sourcing site/service) would be another way to let people contribute prior to the P2 actually coming out. I guess what I'm saying is that I wouldn't take it as a sign of weakness to add some type of crowd funding into the mix, as it could be part of the overall strategy to seek input from others, and it might help with marketing somehow. But maybe I'm just coming up with arguments because I'm chomping at the bit to get silicon, and, if so, pardon my impatience, if that's what it is.
Actually, I'm willing to wait longer if other great improvements make their way into the chip, such as a package-on-package memory chip or perhaps 1MB of hub RAM (not possible in the present package) or what have you. But it seems we're near the end as to what can be added to the chip/package that would make a significant improvement and wouldn't entail significant redesign. Not to contradict my last statement, but I have wondered if Chip is overly (perhaps even stubbornly?) committed to the current package choice, as it seems like it'll be a bear to lay out and might be overly conservative in terms of power/ground pins (though I know nothing about chip design other than what I've read on this forum), but I guess he wants clean signal processing ability. And a lot of time and money has been put into the custom outer ring stuff, and the design decisions seem pretty solid. Anyway, I did wonder if the possibility of being able to use On Semi's fuses might affect the analysis in terms of the necessity (or not) of doing a partial test chip (and maybe even open up the package choices somehow). But I'm kind of just voicing pent up stuff now and have veered off course.
I will confess that I do hope that Parallax will consider publicly releasing at least a rough time plan for getting the chip into production (a best-case time scenario assuming that test chips go well and so on). I recall Ken doing so in the past. It's understandable that he hasn't done so recently (and that he may never do so until he has working chips in hand and a signed fabrication contract). But now that the design is nearly done, it's at least worth thinking about. I feel that most members want to know at least what year to expect a chip. I don't think that the recent thread about this (not the first, I'm sure) got any Parallax responses, which is their prerogative, and we certainly don't want them spending too much time on the forums or creating false expectations. But I don't think anyone was looking for promises, as such. You've got to understand that some have lost the faith, so to speak, and moved on. Obviously, Parallax has more than demonstrated their commitment to getting the P2 out when it's ready. But a skeptical (cautiously pessimistic?) person might wonder if a less-than-cash-flush Parallax is perhaps just waiting us out until we lose interest, or hoping that FPGA prices will continue to fall and just go that route. I don't believe either of those to be the case at all but could understand someone wondering along those lines.
However, it would at least be nice to know approximately when SPIN 2 will be ready enough to start coding for. Someone posted the other day that he was getting excited about the forthcoming chip now that SPIN 2 development was coming along. Me, too! I'm going to guess that Chip might have something runnable in two or three months, or perhaps by the time that a test chip comes back (barring unforeseen challenges), but I'm just guessing without any real knowledge of the situation or challenges involved. So, even if a timeframe for silicon can't be projected (at least for now), a guesstimate about SPIN 2 readiness would be much appreciated, and would also be another line of evidence (among many) that Parallax is not only committed but also on track (a new track but still a track) to bringing the design to fruition. But even without any time estimates, I'm convinced, since chip design and helping to bring inventions to life clearly are desires of Chip's heart (and mind), as supported/encouraged by Ken and Parallax. But now that significant progress has been made on the processing "engine" (so to speak) for SPIN 2, it should be easier to guess when something runnable might be ready.
Anyway, pardon me for spilling my guts (some of them, anyway). Overall, it really looks like several things are coming together all at the right time a few months down the line. At that time (I guess/expect), Parallax will be able to say more. The funding picture should be clearer and confidence in the design higher, and so on and so forth. My above talk about some kind of crowd funding possibly advancing the release of silicon is likely off base, then. But crowd funding still might be worth considering for reasons other than time-to-market and funding, so I'm leaving my comment. And my fears about laying out the P2 will likely disappear once silicon is realized and sample designs are available (though I'm hopeful that the P2 won't require a 4-layer board). But while I'm at it, I do wish that there was a published design for a module for it, like the one that was done for the P2-Hot (perhaps a module with optional HyperRAM, as regular DRAM really does take up too many pins for many uses). That could shed a lot of light on the situation. Anyway, I've put everything but the kitchen sink into this post, so I'll give it a rest for now. Hope I didn't step on any toes, as that was not my intention. I don't think I said anything too overboard, other than perhaps my parenthetical use of the word "stubbornly" (which I'm keeping only because I have a gut feeling that the package situation might merit some further consideration, despite the investments already made). Looking back at the last three years, this "baby elephant" (long gestation period) of a chip is about to be born. It seems like we're on the Concorde (okay, maybe an MD-80) and have nearly crossed the ocean and we're starting to reduce our airspeed to come in for a landing. It would be so great if we could pull up to the gate and unfasten our seatbelts by this time next year. Fingers crossed!
Stacked die memory (SPI Flash) as an option might be easier, as other vendors do offer that, and it may be a matter of talking with OnSemi about bonding issues.
HyperRAM is going to be important alongside P2, and it is a clear module candidate.
First modules I would expect to be 4 layer, simply because that is the safest way to approach it.
Once that is proven stable, someone brave can try 2 layer if they want.
At these small PCBs, the price difference between 2L and 4L is not great.
Thanks, jmg. That makes sense. Now if it were currently known that a 4-layer board was an absolute requirement (or at least highly desirable), then maybe Parallax could say so to set expectations realistically. Well, in a way, you've effectively done so for them. Anyway, obviously, in an ideal world, one would have separate planes to run power and ground (or at least ground). I guess I'm just gun shy because the prices for prototype or low-volume runs from the supplier I use (Seeed) do differ quite a bit depending on the number of layers.
Okay, I just checked, and, for example, a 150x150 mm board runs $55 for QTY 10, whereas a 4-layer version costs $137. I'd expect costs to approach at higher quantities, but QTY 50 of that size board runs $200 and $366 for 2L and 4L versions, respectively, and QTY 100 is quoted at $360 and $633 for 2L and 4L boards, respectively. However, these prices are from Seeed's prototyping service (where they combine various orders together), and not for standard runs (I don't know, but they may not actually offer regular runs because they outsource the actual manufacturing). Anyway, I assume you're right about the relatively small price difference for regular manufacturing runs.
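To make the comparison concrete, here is a quick sketch working those quotes out per board. It uses only the Seeed prototype prices quoted above; actual pricing varies by vendor and over time:

```python
# Per-board cost from the quoted Seeed prototype prices (150x150 mm board).
# These are the numbers quoted in the post above; current pricing will differ.
quotes = {  # (layers, qty): total batch price in USD
    (2, 10): 55,   (4, 10): 137,
    (2, 50): 200,  (4, 50): 366,
    (2, 100): 360, (4, 100): 633,
}

def per_board(layers, qty):
    """Unit price for a given layer count and batch size."""
    return quotes[(layers, qty)] / qty

for qty in (10, 50, 100):
    premium = per_board(4, qty) - per_board(2, qty)
    print(f"qty {qty:3}: 2L ${per_board(2, qty):5.2f}  "
          f"4L ${per_board(4, qty):5.2f}  4L premium ${premium:.2f}/board")
```

So at these prototype quantities, the 4-layer premium runs from roughly $2.73 down at QTY 100 to $8.20 per board at QTY 10, which stings far more for hobby runs than it would in volume manufacturing.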
And I'm sure that it's a lot more "professional" to use four layers as a standard design technique if one can afford it. I just haven't gone (and haven't needed to go) that route myself yet. But I haven't done any BGA designs (like the packaging for the HyperRAM, I believe), and if I had to, I'd probably want (or need) to use four layers. I probably just haven't gotten used to the 4L thinking yet, as my needs have been limited. But I sure appreciate your expertise on the above matter.
If we do need a 4L board, though, it is a bit of a shock to me because, a few years back, I recall seeing specs for an ARM system-on-a-chip (you know, the kind with a GPU built in and so on) that touted being two-layer friendly with a QFP package with as many pins as or more pins than the proposed P2 (it was almost for sure done at a higher process density than the P2, and I don't recall if it had a thermal pad). Anyway, designing with the P2 is going to be somewhat of a different world than what many of us are used to with the P1 (in DIP and QFP forms), so I suppose we'd better start getting used to it. In many cases, a module will do the heavy lifting, but only if the price is right. ***Update*** My slight shock is no doubt mostly due to the fact that I'm not comparing apples to apples, as the ARM chip I'm remembering didn't have 64 configurable smart pins.
BTW, I'm glad that you're bullish on HyperRAM for the P2. I've seen your regular posts about testing it with the P2 (apparently it's yet to be demonstrated), and also your comments in the P1 section about the work that RJSM has done interfacing it to the P1. Maybe Chip could commit to producing a tentative HyperRAM module design for the P2 like the one he did for P2-Hot with DRAM (assuming the package type for the P2 is set in stone, I mean). Then again, he really has his hands full already, doesn't he! Well, in due time, then, someone at Parallax will do so.
Keep in mind that a 4L module can be better cooled, and smaller, and that counts for quite a bit.
If we take standard web prices from:
https://www.pcbway.com/
and scale those prices to a PiZero module size of 30 x 65mm we get
1.593*30*65/(100*100) = $0.31 / 4L_PCB / ~5k
0.908*30*65/(100*100) = $0.18 / 2L_PCB / ~5k
So you pay 13c more per PCB, for 4L, which in the price of a module, is not that significant.
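The scaling above is just price-per-area arithmetic, and can be sketched as follows. The reference prices are the ~5k-quantity, per-100x100 mm figures quoted above, not current PCBWay pricing:

```python
# Scale a quoted price for a 100x100 mm board down to a 30x65 mm
# (PiZero-sized) module by board area. Reference prices are the
# ~5k-quantity numbers quoted in the post above.
def scaled_price(price_per_100x100_mm, width_mm, height_mm):
    """Area-proportional PCB price estimate in USD."""
    return price_per_100x100_mm * width_mm * height_mm / (100 * 100)

p4 = scaled_price(1.593, 30, 65)   # 4-layer: ~$0.31 per PCB
p2 = scaled_price(0.908, 30, 65)   # 2-layer: ~$0.18 per PCB
print(f"4L ${p4:.2f}  2L ${p2:.2f}  difference ${p4 - p2:.2f} per PCB")
```

At module scale, the 13-cent delta disappears into the bill of materials, which is jmg's point.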
Note here I am talking about P2 Modules, rather than any specific customer end use.
Modules by nature have to be conservative, and able to operate fully loaded (64io) and clocked (~160MHz), and give best Analog specs, and users expect them to be small & packed.
Someone who only used a subset of pins, and those at low digital only speeds, may well get away with a 2 layer design.
I'd expect them to develop using a 4L module, so it should be clear if the 2L compromises things too much.
HyperRAM is a good example of what should go onto a module, because it is hard for low volume users to handle, and it allows instant proof of operation of external memory.
Mouser shows HyperRAM for $1.63/1k, so it is not going to impact a P2 module price too much.
Understood. And Chip has signaled being very interested in HyperRAM, too. He's just been tied up with getting everything else in place.
I note at some stage the OSHpark 2L specs got updated, and 20 mil vias with 10 mil holes are now supported
https://we-online.com/web/en/index.php/show/media/04_leiterplatte/2013_1/webinare_1/signalintegritaet/Webinar_Signal_final_engl.pdf
cypress.com/file/278156/download
Hope it helps.
Henrique
P.S. Another one that could be of interest:
cypress.com/file/202451/download
The reason it's sound to prove the test chip out, before making a full chip, is because all those analog blocks are going to get mixed in with the digital design. The digital design's functionality is deterministic these days due to the advanced state of design tools, but the analog must be proven before adding into the mix.
Things are coming together nicely on all fronts to get a full chip built. Our current test chip will go into a shuttle in about 4 weeks. It usually takes 10 weeks to get back. By that time, the Prop2 digital design should be quite certain and proven. Notice that in the last release, there were no new instructions, just refinements to what already exists.
Work on the Spin2 interpreter has been going slowly lately, as I've been working on refining the main Verilog code and getting prepared for a China trip where we are taking a bunch of kids to help teach Chinese kids how to program our Scribbler 3 robots in Blockly. Blockly is a really neat idea for getting people started in programming. You could pull a random person off the street, show them a Blockly program, run it on some hardware, and they would probably be able to modify and add to it without any explanation. It's maybe too easy. I hope we don't wind up with a bunch of smart, but bored, Chinese kids. We'll need to get creative on giving them lots of project challenges. During the next few weeks, I may not be able to get onto the forum, since I'll be behind the Great (fire)Wall of China. I got a new FPGA release out, but am still needing to update the Google Doc to reflect new XBYTE behavior and, I just remembered this morning, the new smart pin input filtering. Am I missing anything?
The smart pin modes DOCs could certainly do with an expansion, as there seems some variance between what the chip actually does, and what it should do to be field useful.
The terse DOCs do not help, as readers are forced to guess what might/should/could be happening behind the scenes and Evanh's tests show some modes are not quite as one would expect or need...
My preference is to use equations (examples already given) over prose/sentences, as sentences always omit some vital detail.
My feeling is that things like Internal/External clock, Clock gating, and Clear on Capture should be individual config-bit accessible more than locked into a mode case statement.
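The distinction being drawn here can be illustrated abstractly: with a mode case statement, one mode number implies a fixed bundle of behaviors, whereas with individually accessible config bits, each behavior is its own flag and any combination is reachable. The register layout below is entirely hypothetical (it is not the actual smart pin register map), purely to show the idea:

```python
# Hypothetical illustration only -- NOT the real smart pin register layout.
# Each behavior gets its own config bit, so behaviors combine freely
# instead of being implied by a single mode number.
CLK_EXTERNAL     = 1 << 0   # 0 = internal clock, 1 = external clock
CLK_GATED        = 1 << 1   # enable clock gating
CLEAR_ON_CAPTURE = 1 << 2   # clear the accumulator when a capture occurs

def describe(config):
    """Decode a config word into the behaviors it enables."""
    return {
        "external_clock":   bool(config & CLK_EXTERNAL),
        "clock_gating":     bool(config & CLK_GATED),
        "clear_on_capture": bool(config & CLEAR_ON_CAPTURE),
    }

# Any combination is reachable, e.g. external clock with clear-on-capture
# but no gating, a combination a fixed mode table might simply not offer:
cfg = CLK_EXTERNAL | CLEAR_ON_CAPTURE
print(describe(cfg))
```

A mode case statement, by contrast, would enumerate only the handful of bit combinations the designer anticipated.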
Has OnSemi forwarded any news about the fuse layout yet?
Henrique
No, but your post made me ask again. Last we emailed them, somebody was going to send us a data sheet for a 128-bit fuse block that is available for the ONC18 process.
Did you ask about FLASH, EEPROM, OTP ROM, etc. as well? You could use any of these for the fuses.
THIS IS THE OLD P2 THREAD WHICH TERMINATED JUST AFTER THE "P2-HOT".
I revived it just for reference. New P2 discussions should be addressed in the later thread
forums.parallax.com/discussion/162298/prop2-fpga-files-updated-4-july-2017-version-20/p1
Thanks Chip
Wow
It will be a bit strange if they do have a datasheet addressing just the 128-bit fuse-block layout by itself.
For a ready-to-use (drop-in-place), seven-bit-addressable, matrix-organized 128-bit instance with read-addressing and programming circuits, though, a datasheet seems like a must-have.
Otherwise, if the fuses must be simultaneously and individually accessible, like the ones needed for analog trimming or for enabling/disabling optional logic circuit behavior, yet must still be part of a single instance, meant to be spatially confined and close to each other, perhaps you'll need more than two instances to avoid clobbering their vicinity with routing wires, even if you need to leave part of them unused.
We'll see what comes next on their part.
I've PM'ed you twice, about this subject and some other questions.
Henrique
Note: Digikey lists the P1 as "-" in temperature range so it could be there are others missed there as well.
Best of luck at the Scribbler 3 "camp" using Blockly to bring kids and bots together. Sounds like an "east meets west" kind of thing. It's great that Scribbler is easy to use with Blockly because the easier something is to use, the more one can add one's own creativity to it. The kids will enjoy that (if the adults guide without getting in the way).
And congrats on making it to "base camp" with Spin 2 development. It sounds like you've already discovered and added the P2 instructions needed to expedite Spin 2's main processing loop. So further development can be done during those relatively long fab waits, possibly allowing Spin 2 to be ready once the full chip comes out, if not before. Apologies for my over-eager time estimates.
Thanks.
Hopefully, this Scribbler/Blockly camp goes well. The kids pick up Blockly almost without effort, so I finally realized yesterday that we need to come up with lots of challenges which invite them to think like a programmer, at a level or two above just understanding what the blocks will do. If we can come up with two or three good challenges a day, I think it will keep them going. Kids are quite spoiled nowadays by video games and internet stuff, and they're not used to deriving pleasure from honest challenges. Their attention is vied for by all kinds of forces on the internet. I've been teaching Blockly to the six kids we are taking on the trip, and the more advanced ones will toggle over to their favorite apps as soon as they suppose they've heard me and sense a lull of any kind. If I move around so that their screens come into my view, a subtle keystroke toggles them back to Blockly before I can see anything. It makes me a little crazy. I don't know how teachers deal with this. And I really see the other side of the "generation gap" nowadays. I remember thinking, as a kid, when a teacher would tell us what they liked, how boring it all sounded, like death or something. Now, I'm all boring and death-like.
He also said, to the whole class, "When Heater brings his homework in, he is going to need a wheelbarrow". I had not done any for a year or two!
I can't imagine you can be boring, Chip. There will be a few kids who get inspired and a majority who don't. I suspect this ratio has not changed as much as you think since our day.
We know a parent who tells her kids, "If you're bored, it's because YOU'RE boring".
Engineering is being over-prescribed these days. It's now for everybody. The reality is that maybe 2% of the population are inclined to be interested in it.
I'm going to try to be realistic about all this and accept that, yes, only a few kids are going to really get something out of it. For the rest, at least, we can try to make it a fun experience that they have good memories of.
Recalling school days, there were only a couple of us kids in the final year who were inclined to mess with electronics. We had a fascination with the newfangled technology becoming available, like LEDs or TTL chips or op-amps. Sadly there were no computers about, it being well before the microprocessor revolution started, but we read about and dreamed about that possibility. We were the same kids in the metal shop every lunch break, turning parts for Stirling engines and such.
The others, well, they had a lot of time to talk about football...
As for the toggling to their favorite app... adults do this now too.
I have moved to a more interactive model. Set things up so a short, and I mean like 10 minute tops, lecture leads into an exercise.
Now the trick I use is to make the lecture itself interactive. I use the tool, only occasionally showing notes or static info. As I do the lecture, I do the thing.
Many will follow along, some of them will have questions. Great!
Once the thing is done, I pause, give free time to complete the exercise and go around the room troubleshooting, taking questions.
Sometimes someone does something novel, or worth sharing, so I do that.
When it's done, say in 30 to 40 minute chunks, I move onto the next one.
I provide material in finished stages, so people can jump in when they want to.
Every class has a few alphas. They do the stuff, ask about it all, and do so with vigor. Many do some of it, whatever they felt made sense for them. No worries.
The rest had a fun day and that's it. Good as it gets.
Excellent, Potato Head!
I have found a trend lately, and it is that people just don't read the documentation. They want to explore it, whatever the tech is, and discover, not read, process, then apply.
For years, I've taught advanced CAD classes. About 10 years ago, I began taking demonstration material intended for sales, breaking that down into little, useful bits that are flexible enough for discovery to happen.
Ditched the official training / education material about that time too. Didn't want to pay fees / royalties. (not that they were unreasonable, I just didn't see the value matching trends)
Rather than self-publish, because that takes a ton of time, I have those little bits both memorized and lightly documented.
Today, I can, on basically zero notice, deliver a fairly advanced set of material to a group of students and be successful. All I really need is a computer, projector and a little time to understand the students before I begin. If I'm denied that, which happens surprisingly often, I just weave doing that into the first sessions.
Early on, classes were probably half lecture in the formal way. You talk, they listen.
Today, it's all a conversation from beginning to end. The lecture bits where I really do need and/or want to just talk for a bit and have them listen are easy to get, because the rest of the time is as interactive as they want it to be.
Aggressive students can put 80, 90 percent of the class into hands on, if they want to.
I'll start each topic with, "open this file, and..." then I'll take a minute or two to frame up the why, link it to some other stuff they should know, "voice of experience" style, then actually deliver the material interactive.
One advantage to this is that, once a person has learned how to do while talking, or to support the talking in real time, the pace ends up in human sweet-spot limits.
Since the material is flexible, kind of "can't miss" type simple, digressions, questions, what if type things happen in the moment. I'll save the file right there, entertain whatever it is, load, return, continue. They can follow, or just wait, or whatever they want to do.
At no time do people get behind, in the sense that their choices never cost them more than about 30 minutes' worth of material. They have "at stage" saved files to pull from and can jump right in and be relevant, current with the happenings in the room.
One fact that always struck me was the reality of human comprehension and attention. People get some small percentage of lectures. The better ones take notes and get more, but that's not the norm in my experience.
But, when we exercise more senses, and employ kinesthetic engagement, those percentages go way up. It's like the old "read it, write it, say it, do it" idea for better memory. That stuff works!
People get a phone call, lose attention, need a break, or just get stuck, or fixated on something interesting, and it's fine. They can tune out on that segment, do it, deal with it, and then drop back in on the next one. File load... and it's all good.
I do limit "need to know" on each segment so they are largely atomic too.
Then, near end of day, I offer free time. Usually an hour, or for a demanding group, I'll stay over or late an hour. People can do anything they want, including leave, in that time.
This is where all the catch-ups, odd questions, and "I really need this" discussions happen! It's often the highest value of the day.
Amazingly, doing this gets super easy after one has built up the core chunks of material needed. Frankly, most of my new ones are derived from the questions and digressions that happen in class. I'll do one, and if it's worthy, I'll save that off, and in the evening repeat it, refine it, and add it to the library.
About a third of my stuff is just in my head. I'll have them open a new file, and we will just create right there, in the moment, keeping the whole class, minus a few, tracking along.
The rest is on a thumb drive.