
Propeller II update - BLOG


Comments

  • K2 Posts: 693
    edited 2013-12-07 18:35
    David Betz wrote: »
    Can someone explain where determinism is important? I understand that it's needed for things like UARTs, USB, PWM, and video but those are usually handled by dedicated hardware on other MCUs. Where is general determinism required? I'm not arguing that it isn't necessary. I'm just trying to understand where it is required that might not be handled by other processors like an ARM.

    Apologies for my slow response. Some excellent NCAA conference championship games are getting in the way of my geek time today!

    I can easily provide an example of conventional processors falling short wrt determinism:

    With careful engineering, and major reliance on assembly, you might get your pet ARM Cortex chip to generate a decent VGA interface for a picky LCD monitor. Now, how do you superimpose a second independent VGA port on this code? And a third? The customer will not tolerate even momentary artifacts on any of the displays.

    PWM facilities are an awkward fit and require too frequent processor intervention because of the poor fit. (You are constantly reverting to tricks to get edges anywhere near where they need to be.) There aren't enough PWM modules to go around, anyway.

    I'm not driving VGA displays, and the memory demand isn't as great, but the timing is actually more critical, and the total number of strobes is greater.

    It's a breeze with a Prop (edges always within 2 ns of expectation!), and a Byzantine nightmare with a monoprocessor. With the complexity of the monoprocessor code comes unreliability. Unreliability results in blown fuses and expensive downtime.

    Relegating certain timing functions to an external CPLD or a second processor has a significant cost. It also makes updates problematic.

    Were there a market for a million widgets and were updates not in the picture (in other words, strictly a disposable consumer product), a careful cost analysis would be in order (which would probably result in an ASIC, anyway). But with things the way they are, I'm entirely convinced that a cheaper solution, if it were possible, would be more expensive in the long run.
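
    To make "a breeze with a Prop" concrete, here's a minimal sketch of cog-deterministic PWM in P1 Spin (the pin, frequency, and duty cycle are arbitrary choices for illustration; a real driver would use PASM or a cog counter for edges tight to the clock, since each interpreted Spin statement costs a few hundred clocks):

        CON
          _clkmode = xtal1 + pll16x      ' 80 MHz from a 5 MHz crystal
          _xinfreq = 5_000_000

        PUB pwm25 | t, period
          dira[0]~~                      ' P0 as output
          period := clkfreq / 1_000     ' 1 kHz PWM period
          t := cnt
          repeat
            outa[0]~~
            waitcnt(t += period / 4)    ' high for 25% of the period
            outa[0]~
            waitcnt(t += period - period / 4)

    Because waitcnt releases at an exact system-counter value, the edges land at fixed offsets every period, regardless of what the other seven cogs are doing.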
  • David Betz Posts: 14,516
    edited 2013-12-07 18:58
    K2 wrote: »
    Apologies for my slow response. Some excellent NCAA conference championship games are getting in the way of my geek time today!

    I can easily provide an example of conventional processors falling short wrt determinism:

    With careful engineering, and major reliance on assembly, you might get your pet ARM Cortex chip to generate a decent VGA interface for a picky LCD monitor. Now, how do you superimpose a second independent VGA port on this code? And a third? The customer will not tolerate even momentary artifacts on any of the displays.

    PWM facilities are an awkward fit and require too frequent processor intervention because of the poor fit. (You are constantly reverting to tricks to get edges anywhere near where they need to be.) There aren't enough PWM modules to go around, anyway.

    I'm not driving VGA displays, and the memory demand isn't as great, but the timing is actually more critical, and the total number of strobes is greater.

    It's a breeze with a Prop (edges always within 2 ns of expectation!), and a Byzantine nightmare with a monoprocessor. With the complexity of the monoprocessor code comes unreliability. Unreliability results in blown fuses and expensive downtime.

    Relegating certain timing functions to an external CPLD or a second processor has a significant cost. It also makes updates problematic.

    Were there a market for a million widgets and were updates not in the picture (in other words, strictly a disposable consumer product), a careful cost analysis would be in order (which would probably result in an ASIC, anyway). But with things the way they are, I'm entirely convinced that a cheaper solution, if it were possible, would be more expensive in the long run.
    Thanks for your reply! I'm kind of discounting VGA because I don't use it myself and I wonder how many real-world MCU applications need to generate video.

    Your comments about PWM are interesting. I guess you're saying that even though some MCUs have dedicated PWM hardware, it isn't convenient to use and/or doesn't really do what is necessary. I guess that's a place where the Propeller fits well since you can create any kind of PWM driver that you need.
  • brucee Posts: 239
    edited 2013-12-07 19:13
    Well, you can buy an ARM with a built-in LCD controller easily capable of VGA, with a camera interface so you can overlay the display; throw in an SDRAM external interface and, for good measure, Ethernet and USB. Add to that 1 MB of flash and almost 200 KB of on-chip RAM. All that for $14, quantity 1. It's an LPC4357. Oh yeah, it has 2 cores running at 200 MHz, and floating-point hardware. ST has a similar device. That's before you step up to the Linux-capable ARMs that aren't much more.

    I'm not saying there isn't a place for a P2, it's just that there is a lot of competition out there, and those vendors are not sitting still, with each usually introducing a new family every 4 months or so. Yes, the P2 is simpler to pick up, and that is the attraction in the hobbyist and education markets. That is an area Parallax used to dominate, but for any volume application, say 10K units or more, it's hard to imagine a P2 being cost competitive, and if it really needs 1 A of power, forget the general market.
  • K2 Posts: 693
    edited 2013-12-07 19:19
    David Betz wrote: »
    Thanks for your reply! I'm kind of discounting VGA because I don't use it myself and I wonder how many real-world MCU applications need to generate video.

    Your comments about PWM are interesting. I guess you're saying that even though some MCUs have dedicated PWM hardware, it isn't convenient to use and/or doesn't really do what is necessary. I guess that's a place where the Propeller fits well since you can create any kind of PWM driver that you need.

    Like I mentioned partway down my potatoheadesque reply, I'm not doing VGA either. It was just an example of real-world complex timing.

    I'm hesitant to mention what I am doing because 1) it is an unorthodox use of a microcontroller and I don't care to get into a discussion with anyone over the propriety of what I'm doing, and 2) I don't need any more competition.

    PWM as provided by the average uC can be great! But it is far from a universal solution to all signal generation requirements.

    Edit: I hold potatohead in the highest regard and would never use this expression if I thought it would offend him!
  • cgracey Posts: 14,155
    edited 2013-12-07 19:48
    K2 wrote: »
    Apologies for my slow response. Some excellent NCAA conference championship games are getting in the way of my geek time today!

    I can easily provide an example of conventional processors falling short wrt determinism:

    With careful engineering, and major reliance on assembly, you might get your pet ARM Cortex chip to generate a decent VGA interface for a picky LCD monitor. Now, how do you superimpose a second independent VGA port on this code? And a third? The customer will not tolerate even momentary artifacts on any of the displays.

    PWM facilities are an awkward fit and require too frequent processor intervention because of the poor fit. (You are constantly reverting to tricks to get edges anywhere near where they need to be.) There aren't enough PWM modules to go around, anyway.

    I'm not driving VGA displays, and the memory demand isn't as great, but the timing is actually more critical, and the total number of strobes is greater.

    It's a breeze with a Prop (edges always within 2 ns of expectation!), and a Byzantine nightmare with a monoprocessor. With the complexity of the monoprocessor code comes unreliability. Unreliability results in blown fuses and expensive downtime.

    Relegating certain timing functions to an external CPLD or a second processor has a significant cost. It also makes updates problematic.

    Were there a market for a million widgets and were updates not in the picture (in other words, strictly a disposable consumer product), a careful cost analysis would be in order (which would probably result in an ASIC, anyway). But with things the way they are, I'm entirely convinced that a cheaper solution, if it were possible, would be more expensive in the long run.


    That's an excellent explanation of why determinism matters!

    Most embedded software practitioners these days have zero appreciation for determinism, because it's never been on the menu - not even hypothetically. They suppose everything is to be written in C and it will run "fast enough" on interchangeable chips that might as well all be the same. Whenever a need for determinism arises, a solution will immediately be sought via some on-chip peripheral. I know that mindset is heavily inertial and is reflexively perpetuated by employers' desire for interchangeable people, and by universities looking to produce relevant employees. It's a paradigm that busies lots of companies and users, but results in nihilism towards any other way.

    For 99% of the things I look forward to making, lack of determinism would be a total bust. I have about zero interest in programming chips that don't let me control when things happen, down to the clock edge - and that's just about every chip out there. FPGA's are the only programmable solution to the real-time problem, but they are very time-consuming to develop for. The Propeller chips are like FPGA's, in that they provide determinism, but they are configurable in software, which is much faster to develop in than a hardware description language is. The only people that can benefit from the Propeller chips are those that can mentally break out of the prevailing paradigm.
  • David Betz Posts: 14,516
    edited 2013-12-07 20:01
    Heater. wrote: »
    David Betz wrote:
    I'm currently working on a C-like bytecode compiler with a REPL that will run on the Propeller.
    Interesting...
    Well, it's a work in progress based on some code I wrote for Dr. Dobb's Journal many years ago. I created a repository on GitHub if you want to take a look at it.

    https://github.com/dbetz/bob

    I want to get rid of its use of malloc in the bytecode compiler and I also need to resurrect the documentation since that file seems to have gotten truncated somewhere along the way. I also plan on adding code to allow something like the "setwatch" feature of the JavaScript in the video you posted a while ago.
  • K2 Posts: 693
    edited 2013-12-07 20:34
    Chip,

    Couldn't agree more, and I'm so appreciative of all you've brought to the world of embedded control!!
  • dr hydra Posts: 212
    edited 2013-12-07 21:08
    Chip

    I completely agree...this is another reason why I love Propellers...

    I cannot stand programming PCs anymore...no matter how hard I try to write tight code...I am completely at the mercy of the operating system :(
  • cgracey Posts: 14,155
    edited 2013-12-07 21:24
    dr hydra wrote: »
    Chip

    I completely agree...this is another reason why I love Propellers...

    I cannot stand programming PCs anymore...no matter how hard I try to write tight code...I am completely at the mercy of the operating system :(

    Yes, the modern paradigm has sucked all the fun out of programming. Our computers now despise us, though they pretend to go along, in a limited sort of way.
  • potatohead Posts: 10,261
    edited 2013-12-07 21:31
    @K2 -- likewise!

    I'm wordy. Maybe it's the theater / music student in me that never quite went away. Best to be honest and make the most of who we are.

    I am not easily offended, and on the odd time it happens, it does not last long. Grudges make you die sooner.
  • potatohead Posts: 10,261
    edited 2013-12-07 21:57
    For 99% of the things I look forward to making, lack of determinism would be a total bust. I have about zero interest in programming chips that don't let me control when things happen, down to the clock edge - and that's just about every chip out there. FPGA's are the only programmable solution to the real-time problem, but they are very time-consuming to develop for. The Propeller chips are like FPGA's, in that they provide determinism, but they are configurable in software, which is much faster to develop in than a hardware description language is. The only people that can benefit from the Propeller chips are those that can mentally break out of the prevailing paradigm.

    This has remained a fairly consistent expression from Chip the entire time I've known him. It resonates with me. I'm sure quite a few of us feel that too.

    I would challenge the "only people" part of that, and instead say, "those who can break the paradigms will benefit the most from Propeller chips", because I think there are more general benefits when Propellers are packaged into products people can use.
  • evanh Posts: 15,920
    edited 2013-12-07 22:51
    potatohead wrote: »
    The end game on pairing is a 4 COG chip.

    Potato precisely summing up in a one-liner! Had a stroke or something lately, perhaps? :P

    I'm only just skimming the huge volume of input here, but this feature of hub slot reuse does seem to be more beneficial to a future 16-cog Propeller design.

    EDIT: Quote taken from post #3232 - http://forums.parallax.com/showthread.php/125543-Propeller-II-update-BLOG?p=1223389&viewfull=1#post1223389
  • potatohead Posts: 10,261
    edited 2013-12-07 23:29
    Needs more cowbell. Sorry about that. I'll try harder next time.
  • evanh Posts: 15,920
    edited 2013-12-08 00:03
    Dave Hein wrote: »
    Bean wrote: »
    I hate to say it but I see the P2 being the best microcontroller never made.
    Bean, I agree with you. However, the good news is that the P2 is being skipped entirely so that the P3 will come out sooner. :)

    Can't stop myself chuckling on this one. Got the giggles proper.

    I keep wanting to slap those complaining about delay when there is no delay, but this one was the perfect response. Dave, thanks for lightening it up a little.
  • evanh Posts: 15,920
    edited 2013-12-08 02:22
    Heater. wrote: »
    What our generation fell in love with was much smaller and nicer: C64s, BBC Micros, and so on.

    The survival of Apple and the Macs shows this. Sadly, only those with money were in a position to avoid the PC ugliness.

    I know this wasn't your point, but the survival of Apple was not due to the Mac's looks, nor OS engineering, nor architectural choices, nor hardware performance, nor enthusiasts with money to spare. Apple was on the same trajectory as every computer company that had already died from the clone invasion, with Sun not far behind.

    Apple's reason for survival comes down to the Web and one bold move that was begging to be made by anyone with a little money - a killer app and the matching hardware - iTunes.

    The Web has since spawned many further upstarts.
  • KC_Rob Posts: 465
    edited 2013-12-08 09:44
    potatohead wrote: »
    I am not easily offended, and on the odd time it happens, it does not last long. Grudges make you die sooner.
    So very true. :)
  • Leon Posts: 7,620
    edited 2013-12-08 10:09
    cgracey wrote: »
    That's an excellent explanation of why determinism matters!

    Most embedded software practitioners these days have zero appreciation for determinism, because it's never been on the menu - not even hypothetically. They suppose everything is to be written in C and it will run "fast enough" on interchangeable chips that might as well all be the same. Whenever a need for determinism arises, a solution will immediately be sought via some on-chip peripheral. I know that mindset is heavily inertial and is reflexively perpetuated by employers' desire for interchangeable people, and by universities looking to produce relevant employees. It's a paradigm that busies lots of companies and users, but results in nihilism towards any other way.

    For 99% of the things I look forward to making, lack of determinism would be a total bust. I have about zero interest in programming chips that don't let me control when things happen, down to the clock edge - and that's just about every chip out there. FPGA's are the only programmable solution to the real-time problem, but they are very time-consuming to develop for. The Propeller chips are like FPGA's, in that they provide determinism, but they are configurable in software, which is much faster to develop in than a hardware description language is. The only people that can benefit from the Propeller chips are those that can mentally break out of the prevailing paradigm.

    What about XMOS?
  • Dave Hein Posts: 6,347
    edited 2013-12-08 10:42
    cgracey wrote: »
    That's an excellent explanation of why determinism matters!

    Most embedded software practitioners these days have zero appreciation for determinism, because it's never been on the menu - not even hypothetically. They suppose everything is to be written in C and it will run "fast enough" on interchangeable chips that might as well all be the same. Whenever a need for determinism arises, a solution will immediately be sought via some on-chip peripheral. I know that mindset is heavily inertial and is reflexively perpetuated by employers' desire for interchangeable people, and by universities looking to produce relevant employees. It's a paradigm that busies lots of companies and users, but results in nihilism towards any other way.

    For 99% of the things I look forward to making, lack of determinism would be a total bust. I have about zero interest in programming chips that don't let me control when things happen, down to the clock edge - and that's just about every chip out there. FPGA's are the only programmable solution to the real-time problem, but they are very time-consuming to develop for. The Propeller chips are like FPGA's, in that they provide determinism, but they are configurable in software, which is much faster to develop in than a hardware description language is. The only people that can benefit from the Propeller chips are those that can mentally break out of the prevailing paradigm.
    Chip, I agree that determinism is required for certain things, such as a VGA driver or PWM, and it may even be possible that 99% of P2 applications will require determinism. However, not all the code in the application will require determinism. Basically, it's only the device drivers that require it, and their cogs must have dedicated hub slots. The rest of the code in the app could share the hub on a first-come-first-served basis, and use the hub bandwidth more efficiently that way.

    I proposed a method where P2 could support both deterministic timing on some cogs and non-deterministic timing on other cogs at the same time. I think it would make P2 a much more powerful chip. It might even make it a more interesting chip to mindless embedded software practitioners like me, and to their companies that want to develop products quickly and efficiently.
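
    Purely to make the shape of that proposal concrete (the register name and encoding below are invented for illustration - nothing like this exists on the P1 or in the current P2 design):

        ' hypothetical hub-arbitration setup, not real hardware:
        ' cogs 0-3 keep their dedicated slots (deterministic drivers),
        ' cogs 4-7 pool their slots, first-come-first-served
        HUBPOOL := %1111_0000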
  • potatohead Posts: 10,261
    edited 2013-12-08 10:47
    @Heater: On the JavaScript thread, you and I discussed testing and how it can resolve type conflicts by inference. I just thought of the perfect analogy for the market research and its hunger for data.

    The determining factors are derived by inference. Once the data is in, and it's correlated to some prospects and their business, one can think through those things, as if "checking" the validity of the business model, and often the metrics needed to compute price will fall out of that by inference. It's still squishy, because we have no equation for value, only norms and sometimes standardized expectations and rule-of-thumb kinds of things, but a lot of the "squishy" can be resolved away by inference and a good set of representative data that is inclusive enough to cover the most likely adopters, yet not too polluted by data from less likely, or niche, adopters. Maybe that helps some.

    @Leon: Well, XMOS ended up settling into some clear niches. It's my opinion the devices simply are not as accessible as Chip's designs appear to be. Some of our friends here went down that road, and I followed them into XMOS land, looking at code examples and watching their adventures. Here, we would knock something out like a display driver rather easily and flexibly. There, the same got done, but it was a greater effort requiring more mental "bandwidth" to manage more details, etc...

    Heater demonstrated similarities, and I'm not entirely sure everybody here buys his arguments, but I do, and so I think the difference came down to how accessible the technology is. I didn't get the "fun" sense from those devices that I do from both P1 and the P2 FPGA I've been working with. Truth is, Propellers overall are more LEAN in what it takes from a person to do things. Lean means more of a person's overall mindshare can be applied to the problem at hand, not the meta-problem of getting the device to perform on the problem at hand, if that makes any sense.

    XMOS has some niche wins. I think that's a very important element to examine, and if I were tasked with that analysis related to P2, XMOS and others that won niches by maximizing certain design attributes, or by presenting them in ways somewhat related to the P2, would be at the top of my list. High-volume niches, or even moderate-to-low-volume but high-margin niches, are where the P2 is very likely to see some success outside of the markets Parallax enjoys success with right now.
  • KC_Rob Posts: 465
    edited 2013-12-08 11:02
    Leon wrote: »
    What about XMOS?
    There is more than one way to skin a cat. Always...
    Sorry, Leon. I couldn't resist. =D
  • potatohead Posts: 10,261
    edited 2013-12-08 11:04
    I have a question!

    This: [changing HUB behavior]
    I think it would make P2 a much more powerful chip. It might even make it a more interesting chip to mindless embedded software practitioners like me, and to their companies that want to develop products quickly and efficiently.

    has been out there consistently for a long time. I was here shortly after P1 got released, and this same discussion was around then too.

    My question is:

    a) how much more powerful?

    If possible, explain with a use case or two where it could not be done with the traditional round robin scheme. And in particular, given so many HUB cycles, when does this COG have time to act on them?

    b) how much of this "power" is mere perception, given a problem is factored into a more parallel form best suited for the deterministic operation we know today?

    And I'm asking because this really needs to be resolved in some fashion or other, and ideally quantified a bit better, beyond the "but this spec could reach this really big number" kinds of assertions we see regularly today.

    In the "stealing" only case "A", there is the assumption that many cycles go "wasted" and that is somehow bad. So let's say one COG is written to take advantage of those "wasted" cycles, and to maximize it, other COGS are written to be extremely lean about their consumption of HUB cycles. Now you have a COG that gets more HUB cycles, with the provision that it be placed with other lean COGS that do not use the HUB much.

    In the allocate / gift / priority case "B", the same "waste" assumption is true, only now cycles can be dedicated to the COGS needing more HUB cycles, leaving other COGS to fail for lack of them, unless they are written to be lean in their HUB use.

    Seems to me, case "A" moves the potential failure / performance variation problem toward the needy COG, and thus the primary program. If it fails, the user must go and either refactor the problem to better use multiple COGS, or dilute the amount of resources other COGS need to perform, or improve the efficiency of their code.

    Case "B" moves the potential failure away from the needy COG and onto the lean COGS, and thus onto the peripherals. Now those may fail, and the user must either make them more lean or refactor those problems to better use more COGS, or improve that other code efficiency.

    The standard case is, of course, that all COGS perform as they perform, and thus there are only problem areas and options. So then, where there is a failure, the problem simply gets refactored to better use more COGS, or less HUB, and/or improve code efficiency in general.

    How do those differences translate into clear use cases that "we must have", as opposed to the negative perception of waste?

    This is what would sell me on moving away from the standard case. It may sway others, but I really struggle with the perception being "more powerful" as opposed to the reality very likely being just more power in one place, which ends up looking a lot more like those chips with a strong CPU and hardware peripherals.

    I honestly do not see the benefit, because I honestly don't see an order-of-magnitude improvement from changing hub cycles, and where that's true, isn't the problem merely improving code anyway?

    If so, and we don't have interrupts, what happens when the needy, strong COG needs to be managed better? Do we now interrupt it, or what?

    Seems to me, this desire is really rooted in making a Propeller look more like a single-core or dual-core machine with some definable peripherals than in any significant and material gain in overall power.

    Given that users gravitate toward that, couldn't the product end up less effective, with the speed crutch preventing people from really embracing multi-core?

    Jim Bagley made this argument a while back when we were talking about early P2 development and what parallelism means, and I find it compelling enough to put here, framed in terms of the current discussion. He's got experience with advanced multi-core things from the games industry, which means he's more multi-core than most people, and I can't shake this argument off despite the considerable advocacy here.

    @Baggers, hope you don't mind me name dropping here, but I wanted to give credit where credit is due. SKYPE me later, if you can.

    Finally, assuming no significant overall power argument can be made, wouldn't a similar gain in product appeal come from a more serious effort to educate people about multi-core?

    Really, the Propeller is a concurrent, deterministic multi-processor. That's a much different proposition than a multi-core CPU, in that the cores are isolated from one another. This really matters! And I'm highlighting this because better education on multi-core was extremely favorable in my personal experience. Lots of things became easier and I could get more out of the chip. Baggers' WOLF3D raycaster is parallelized, as an example, and maxing one COG would have meant not really using the others well, making that and similar exercises fail.

  • Dave Hein Posts: 6,347
    edited 2013-12-08 11:41
    If the cog code is not carefully crafted to hit its dedicated hub cycle, it will have 0 to 7 stall cycles. So the average is 3.5 stall cycles for every hub access. That's only 7% if the cog accesses the hub on average every 50 cycles. If none of the cogs need deterministic timing, and the other cogs rarely perform hub accesses, then the 7% stalls can be eliminated almost entirely by sharing the hub in a first-come-first-served manner.

    Now let's say our main cog needs to copy hub memory from one location to another. It does a read, followed by a write, and so on. During that period it will stall 75% of the time. Disabling dedicated slots would eliminate most of the stalls, and allow it to run 4 times faster.
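
    (Working the numbers: with eight cogs in rotation, an unaligned access waits anywhere from 0 to 7 slots, for a mean of (0+1+...+7)/8 = 3.5; at one hub access per 50 cycles, that's 3.5/50 = 7%. In the copy loop, the cog wants the hub essentially every instruction but only owns every eighth slot, so roughly 6 of every 8 cycles are spent waiting - about 75%.)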
  • potatohead Posts: 10,261
    edited 2013-12-08 11:43
    And do you have some clear use cases where that is an issue that could not be resolved by making better use of the COGS overall?
  • Bill Henning Posts: 6,445
    edited 2013-12-08 11:46
    Hi Dave,

    I wrote a detailed analysis earlier, which showed that compiled code that references the hub can expect a 2x-3x improvement in performance if it can use spare hub cycles (i.e. cycles not used by the cog they are allocated to - no need for stealing).

    To me, that is a very powerful argument for allowing "hungry" (in the words I used) hubexec cogs to use spare cycles.

    I do not want to take cycles from cogs that need them.
    Dave Hein wrote: »
    If the cog code is not carefully crafted to hit its dedicated hub cycle, it will have 0 to 7 stall cycles. So the average is 3.5 stall cycles for every hub access. That's only 7% if the cog accesses the hub on average every 50 cycles. If none of the cogs need deterministic timing, and the other cogs rarely perform hub accesses, then the 7% stalls can be eliminated almost entirely by sharing the hub in a first-come-first-served manner.

    Now let's say our main cog needs to copy hub memory from one location to another. It does a read, followed by a write, and so on. During that period it will stall 75% of the time. Disabling dedicated slots would eliminate most of the stalls, and allow it to run 4 times faster.
  • Leon Posts: 7,620
    edited 2013-12-08 11:51
    potatohead wrote: »

    XMOS has some niche wins. I think that's a very important element to examine, and if I were tasked with that analysis related to P2, XMOS and others in niches where some design attributes were maximized, or presented in ways somewhat related to the P2 would be at the top of my list. High volume niches or even moderate to low volume but high margin niches are where the P2 is very highly likely to see some success outside of the markets Parallax enjoys success with right now.

    I was really objecting to Chip claiming that the Propeller was the only deterministic device.
  • ctwardell Posts: 1,716
    edited 2013-12-08 12:03
    potatohead wrote: »
    And do you have some clear use cases where that is an issue that could not be resolved by making better use of the COGS overall?

    Why should such an answer be required?

    I don't mean that to be a jerk, but why not allow hub slot sharing and let users decide.

    If they can nicely partition a program into multiple cogs then don't use hub slot sharing, if not then they have the option of using hub slot sharing.

    I can also see cases where a program split across multiple cogs would benefit from hub slot sharing; instead of each of the cogs waiting its turn for a slot, the set of cogs can share a pool of slots.

    I don't think we should consider multicore and the shared round robin hub to be the same thing. The cogs make the prop multicore, the hub is a communication / shared storage medium.

    C.W.
  • Dave Hein Posts: 6,347
    edited 2013-12-08 12:11
    potatohead, I don't have any detailed analysis of dedicated slots versus shared slots.

    Bill, do you have a link to your analysis? I'd like to look at it. BTW, I prefer using the positive word "sharing" versus the negative word "stealing" when referring to sharing slots. :) The slots are certainly available for cogs that aren't running, but it's also reasonable to share the slot for a cog that rarely does hub accesses, such as a cog running a serial driver. Now a cog running a VGA driver would most likely need to keep its dedicated slot, and not share it.
  • Bill Henning Posts: 6,445
    edited 2013-12-08 12:22
    Happy to help!

    http://forums.parallax.com/showthread.php/125543-Propeller-II-update-BLOG?p=1225390&viewfull=1#post1225390

    I totally agree - how can you steal a slot that isn't used, or has even been given up freely (by not using it)?

    I feel that if a cog's slot is not needed, hungry cogs should be able to eat it :)
    Dave Hein wrote: »
    Bill, do you have a link to your analysis? I'd like to look at it. BTW, I prefer using the positive word "sharing" versus the negative word "stealing" when referring to sharing slots. :) The slots are certainly available for cogs that aren't running, but it's also reasonable to share the slot for a cog that rarely does hub accesses, such as a cog running a serial driver. Now a cog running a VGA driver would most likely need to keep its dedicated slot, and not share it.
  • potatohead Posts: 10,261
    edited 2013-12-08 12:23
    Not a Jerk at all.
    Why should such an answer be required?

    I don't mean that to be a jerk, but why not allow hub slot sharing and let users decide.

    I gave that answer in my post above. The product could be seen as less effective overall.

    I think cycle allocation would do this, and I'm opposed to it.

    I'm not sure the cycle stealing would, and have said so in the past. I'm on the fence about this, looking for some confirmation. And that's case "A" above.

    And I'm looking for cases people can envision to clarify. We've got people asking for everything from pairing, to first-come-first-served, to priority schemes...
  • Bill Henning Posts: 6,445
    edited 2013-12-08 12:29
    I would agree with you 100% that stealing a cycle from a cog that may need it is bad.

    Please check my post

    http://forums.parallax.com/showthread.php/125543-Propeller-II-update-BLOG?p=1225390&viewfull=1#post1225390

    I think you will find it a compelling (if a bit whimsical) argument for letting hubexec cogs use otherwise unused slots to greatly boost C / Spin / hubexec-pasm performance.
    potatohead wrote: »
    Not a Jerk at all.



    I gave that answer in my post above. The product could be seen as less effective overall.

    I think cycle allocation would do this. I'm not sure the cycle stealing would, and have said so in the past.