
HUB EXEC Update Here

Comments

  • Ale Posts: 2,363
    edited 2014-02-15 08:00
    Don't get me wrong, I like the P1; I like that its setup means almost no time and no extra components for UART, VGA, and a tightly integrated IDE... I have 3 XMOS boards, one of my own too, but I never got hooked on it; more power and everything, but getting your own custom board going with it is something else... The IDE works very well, though.
    Yes, PASM is the way(R), but with so much emphasis on C...
  • Bill Henning Posts: 6,445
    edited 2014-02-15 09:05
    Parallax needs to emphasize C to get more customers.

    C will be good for the "application/gui/business logic" code. Pasm2 for the interesting real time drivers, etc. Spin for experimenting and more.

    Once they are hooked, they will notice the great stuff "under the hood" :)
    Ale wrote: »
    Don't get me wrong, I like the P1; I like that its setup means almost no time and no extra components for UART, VGA, and a tightly integrated IDE... I have 3 XMOS boards, one of my own too, but I never got hooked on it; more power and everything, but getting your own custom board going with it is something else... The IDE works very well, though.
    Yes, PASM is the way(R), but with so much emphasis on C...
  • Kerry S Posts: 163
    edited 2014-02-15 09:29
    The amazing power of the P2 is that it will be able to handle both sides of a system... hard-core real-time control AND a full-featured user interface. While PASM is great for RT cog code, it is not something you want to expect customers to have to use for GUI features.

    Being 'able' to do something and actually being 'used' for it are two different things.

    A solid C framework is going to be critical to getting developers to explore using the P2 for the combined system designs where it will really shine. The fact that Chip and crew are well aware of that and are trying, within reason, to be prepared is just another indication of how well thought out the P2 really is.
  • jmg Posts: 15,173
    edited 2014-02-15 11:46
    Ale wrote:
    maybe just random thoughts... Now we need PLCC84 and we are all set
    I'd like a PLCC84 version! It would make working with P2 easier; but I'd hate to lose all those nice I/O's...

    Packaging and test NRE's would make a PLCC84 not really economic, but I did see a nifty module design a while back, where the designer effectively made a PLCC package with some careful drilling and routing of the square board's edge details.

    The TQFP128 would easily fit onto a PLCC84-sized PCB; you might even get other stuff on there too.

    Here is an example:
    http://www.hdl.co.jp/en/index.php?id=315

    Addit: I've just noticed they do Cyclone V models, 49K-LE size, which could emulate a P2.

    That device alone is ~$57 in 100+ quantity, and their Yen price works out to ~US$216, so they are not cheap.
    (Contrast the BeMicro CV at $34.81 @ Verical, 25K size.)
  • cgracey Posts: 14,151
    edited 2014-02-15 11:47
    Ahle2 wrote: »
    I have not been very active lately and have not yet caught up on all the new things regarding hubexec and multitasking etc., so this may be answered elsewhere and/or it may be a stupid question. Is there a way to read out PC+Z+C from a task? (Yes, I do have a reason for asking this.) I guess it would be possible by using trickery that involves such things as Calls, AUX, Push, Pop and some special code running in the task that you want to "monitor".

    /Johannes


    There is no current instruction to do that, but I could add one. Could you illustrate a compelling use case?
  • Electrodude Posts: 1,657
    edited 2014-02-15 12:00
    cgracey wrote: »
    There is no current instruction to do that, but I could add one. Could you illustrate a compelling use case?

    Please add this. It would be very useful for a debugger running in another task, so it could show the user the PC and flags of the task being debugged. It would also be good for running more than four tasks: one task, the kernel, which would rarely get a turn (SETTASK %%0120120120120123 makes it task 3), would be in charge of swapping the others out. It would sit in a djnz i, $ loop for a while and then, when the time comes to swap tasks, it would SETTASK to take all of the turns of the task being replaced (e.g. SETTASK %%3123123123123123 to swap out task 0), use Ahle2's instruction (GETPCZC dest, taskid?) to get and save the PC and flags of the old task, and then JMPTASK to the new task.

    electrodude
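
    In rough PASM2-style pseudocode, the kernel's swap might look something like this (a sketch only: GETPCZC is just the instruction proposed above, the SETTASK/JMPTASK operand forms are assumptions rather than confirmed syntax, and the labels and registers are made up):

        kernel   mov     i, quantum              'idle spins per time slice
        idle     djnz    i, #idle                'burn the kernel's rare slots until the slice is up
        swap     settask %%3123123123123123      'take over task 0's slots so it stops advancing
                 getpczc saved, #0               'proposed instruction: read task 0's PC+Z+C
                 wrlong  saved, old_ptr          'park the old thread's context in hub
                 rdlong  newpc, new_ptr          'fetch the saved PC of the thread to resume
                 jmptask newpc, #%0001           'point task 0 at the new thread
                 settask %%0120120120120123      'restore the normal slot rotation
                 jmp     #kernel                 'back to waiting (restoring the new thread's
                                                 ' Z/C is glossed over here)

        quantum  long    50000                   'arbitrary slice length for the sketch
        i        long    0
        saved    long    0
        newpc    long    0
        old_ptr  long    0                       'hub addresses of the saved-context slots,
        new_ptr  long    0                       ' set up elsewhere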
  • Ale Posts: 2,363
    edited 2014-02-15 12:09
    jmg: I saw those modules some time ago; they are nice but bloody expensive! A small module with regulator, crystal, Prop Plug & SDRAM would be great, especially because soldering 0.4 mm pitch is no walk in the park :(
  • cgracey Posts: 14,151
    edited 2014-02-15 12:11
    Please add this. It would be very useful for a debugger running in another task, so it could show the user the PC and flags of the task being debugged. It would also be good for running more than four tasks: one task, the kernel, which would rarely get a turn (SETTASK %%0120120120120123 makes it task 3), would be in charge of swapping the others out. It would sit in a djnz i, $ loop for a while and then, when the time comes to swap tasks, it would SETTASK to take all of the turns of the task being replaced (e.g. SETTASK %%3123123123123123 to swap out task 0), use Ahle2's instruction (GETPCZC dest, taskid?) to get and save the PC and flags of the old task, and then JMPTASK to the new task.

    electrodude


    I agree!
  • Heater. Posts: 21,230
    edited 2014-02-15 12:33
    Ale,
    I see that the Propeller 2 now has many more opcodes than originally thought; I just ask myself how useful all these new opcodes are.

    I too have been disturbed by the seemingly ever-growing size of the P2 instruction set. It has been discussed here a little bit from time to time.

    Overwhelmed, I pretty much gave up following the development of new instructions and all the changes going on, which is a shame because I have an FPGA card here to do testing on. Hopefully things are now pretty stable and it's time for me to get back to that.

    Here are a few thoughts I have about it:

    1) I hope the base set of "normal" instructions we are used to from the P1 and other micros (MOV, ADD, CMP, JMP, CALL, etc.) is still intact and works much as we are used to. In this way newbies and casual assembler programmers can get stuff done as easily as ever, although perhaps not as optimally as the P2 would allow. That is, I hope we don't need to know any of the new and "weird" instructions to make some use of the thing.

    2) That base set of "normal" instructions is of course what C and other high level languages will use to implement their compilers. That might only be 20% of the available opcodes! Luckily there has been input from compiler writers here and the P2 has gained better support for high level languages as a result.

    3) What about all those others? Well, if we don't have to use them, as hoped, and if they don't consume piles of silicon then there is no harm in it. Like all those hundreds of opcodes in the Z80 that were hardly ever used.

    4) Many of those instructions are dealing with P2 hardware specific things like video and cordic maths. We will need to use those. Compilers will wrap them up in macros/functions/intrinsics or whatever. Assembler programmers will need to understand them in the same way they have to understand the billion registers that control the peripheral hardware on an ARM, for example.

    5) I do worry that Chip may have made a problem for himself in the future. Prop users will expect these instructions to be carried forward to the P3 and any future Propellers. Does this instruction set scale? When the Prop 3 arrives with 16MB of HUB RAM, will those instructions smoothly accommodate it, or does it all need ripping out and redoing?

    6) What are you suggesting, Ale, anyway? Potentially we could imagine deciding that anything that compilers don't use can be removed. Well then the Prop is reduced to 8 normal CPU's and the whole effort is wasted.

    7) It would be interesting to maintain an analysis of P2 programs, in OBEX or wherever they are published, as they are developed and see when all the instructions are used.

    8) I'm hoping that some simple method of starting hardware scheduled threads is implemented in C and other compilers. Perhaps with pthreads or OMP.
  • MJB Posts: 1,235
    edited 2014-02-15 12:33
    cgracey wrote: »
    There is no current instruction to do that, but I could add one. Could you illustrate a compelling use case?
    You could have a larger number of tasks that a scheduler could resume by setting PC+Z+C, let each run for a while, then stop it and read PC+Z+C back to save in an array,
    so the task could be continued later at any time. Reminds me a little of actor languages in AI.

    EDIT: wow, 5 messages since my last refresh ...
  • jmg Posts: 15,173
    edited 2014-02-15 12:34
    Ale wrote: »
    jmg: I saw those modules some time ago; they are nice but bloody expensive! A small module with regulator, crystal, Prop Plug & SDRAM would be great, especially because soldering 0.4 mm pitch is no walk in the park :(

    Yes, there are P2 modules planned. The PLCC form factor appeals; the questions there become:
    * What size? PLCC68 or PLCC84?
    * Surface-mount module capability would increase market volume, but could be tricky.
    I see those FPGA PLCC modules have many small parts underneath.

    Maybe PLCC84 would allow a 'clean' underside, for a surface-mounting option?
  • potatohead Posts: 10,261
    edited 2014-02-15 12:50
    Prop users will expect these instructions to be carried forward to the P3 and any future Propellers. Does this instruction set scale?

    Well, we tossed P1 compatibility. Perhaps that should be on the table for P3, where appropriate. Seems to me, the real answer is to make darn sure P3 is capable enough to emulate P2 :)
  • Dave Hein Posts: 6,347
    edited 2014-02-15 13:00
    potatohead wrote: »
    Well, we tossed P1 compatibility. Perhaps that should be on the table for P3, where appropriate. Seems to me, the real answer is to make darn sure P3 is capable enough to emulate P2 :)
    P1 compatibility hasn't been tossed entirely. I was able to get the pfth interpreter to run on P2 by just changing a few instructions. Most of the code is unchanged from the P1 version. However, to take full advantage of P2 the programmer will have to use delayed jumps, cached reads and many of the other features of P2. One thing I've noticed is that P1 code running on P2 will encounter many more pipeline stalls and hub stalls than it does running on P1.
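
    In miniature, that difference looks something like this (a sketch only: DJNZD is assumed to be the delayed form of DJNZ in the current FPGA image, the exact mnemonic and number of delay slots may differ, and the registers are made up):

        ' P1-style loop, ported unchanged: the taken DJNZ flushes the pipeline,
        ' so every iteration pays a branch stall on P2.
        loop1    add     acc, x
                 shl     x, #1
                 xor     acc, mask
                 djnz    count1, #loop1

        ' The same loop rearranged for the delayed branch: the branch issues first
        ' and the real work runs in the slots behind it, hiding the branch latency.
        loop2    djnzd   count2, #loop2
                 add     acc, x
                 shl     x, #1
                 xor     acc, mask

        acc      long    0
        x        long    1
        mask     long    $A5A5A5A5
        count1   long    100
        count2   long    100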
  • ctwardell Posts: 1,716
    edited 2014-02-15 13:05
    Go see the LEGO Movie. It will make you think if you let it.

    Chip is nice enough to let us play with his version of a LEGO set.

    So put the 'kragle' away and let's have fun with this thing.

    I know there are business realities that must be met if Parallax is to keep churning out the neat stuff, but if we have to make it not fun anymore to meet those realities then what's the point...

    C.W.
  • Bill Henning Posts: 6,445
    edited 2014-02-15 13:06
    One could say that the P1 stalls on every instruction :)

    However, I mostly agree with you.

    Most P1 assembly code can be ported fairly easily by ignoring the new instructions and the delayed versions of instructions.

    Will that result in optimal code? Of course not.

    Will it be a pretty easy port? Yep!

    Besides, one can still buy P1's.

    Fortunately we are not in the x86 processor world, shackled by backward compatibility.

    I expect most people will use only a subset of the full potential (and features, instructions) of the P2 ... initially. But if they get serious, they will explore all the nooks and crannies.

    Even if they don't - they will still be able to take advantage of expert-level objects, libraries and code snippets.

    Everyone wins.

    Addendum:

    Even though the P2 now has regular-processor features, it still has all the special secret sauce for deterministic control in a cog at the pasm level. Best of both worlds.
    Dave Hein wrote: »
    P1 compatibility hasn't been tossed entirely. I was able to get the pfth interpreter to run on P2 by just changing a few instructions. Most of the code is unchanged from the P1 version. However, to take full advantage of P2 the programmer will have to use delayed jumps, cached reads and many of the other features of P2. One thing I've noticed is that P1 code running on P2 will encounter many more pipeline stalls and hub stalls than it does running on P1.
  • Heater. Posts: 21,230
    edited 2014-02-15 13:18
    Bill,
    Fortunately we are not in the x86 processor world, shackled by backward compatibility.
    Are you sure about that?

    Let's assume the P2 becomes a wild success, a billion eager young programmers latch on to it, and millions of lines of assembler are written for it.

    Could Parallax then say: "Hey, we've got this great new P3. Sorry guys, you have to write all your code again"?

    Of course redoing compilers to match is a lot of extra work.

    Aside: I would not blame Intel and the x86 for the shackles of backwards compatibility. They have tried very hard to introduce new architectures over the years: iAPX432, i860, Itanium. The shackles of backward compatibility are squarely down to MicroSoft and the ecosystem of closed source application developers.
  • Heater. Posts: 21,230
    edited 2014-02-15 13:28
    Bill,
    ...it still has all the special secret sauce for deterministic control in a cog at the pasm level. Best of both worlds

    Yes indeed. And that secret sauce is also available in C thanks to the ability of propgcc to compile code into COG. I'm hoping a COG will be able to run, for example, one large chunk of C code from HUB in one thread and a small piece of COG-resident code written in PASM in another. That PASM part may be half the speed of running single-threaded in a dedicated COG, but it still has the timing determinism.

    I sometimes think Parallax does not shout about the Propeller secret sauce enough. Easy, real-time, deterministic and multi-threaded (via COGs so far), and all without the complexity of interrupts.
  • Bill Henning Posts: 6,445
    edited 2014-02-15 13:55
    Yes, I am sure.

    Why?

    Because we are basically talking about the microcontroller world, not desktop CPU's. It is the nature of the beast. Look at all the different peripheral mixes on other MCU's.

    Look at the differences in different generations of PIC's, ARM's, etc. - it is to be expected.

    Regarding re-targeting compilers for each new (major difference) generation - it's the cost of doing business. Again, see PIC et al.

    We cannot afford to shackle ourselves with full backward compatibility.

    Frankly, I can see future P2.x versions being greatly compatible with P2 source code.

    I expect a whole new ball game with P3 - personally I am rooting for 64 bit cogs.
    Heater. wrote: »
    Bill,

    Are you sure about that?

    Let's assume the P2 becomes a wild success, a billion eager young programmers latch on to it, and millions of lines of assembler are written for it.

    Could Parallax then say: "Hey, we've got this great new P3. Sorry guys, you have to write all your code again"?

    Of course redoing compilers to match is a lot of extra work.

    Aside: I would not blame Intel and the x86 for the shackles of backwards compatibility. They have tried very hard to introduce new architectures over the years: iAPX432, i860, Itanium. The shackles of backward compatibility are squarely down to MicroSoft and the ecosystem of closed source application developers.
  • jmg Posts: 15,173
    edited 2014-02-15 13:55
    Heater. wrote: »
    Could Parallax then say: "Hey, we've got this great new P3. Sorry guys, you have to write all your code again"?

    Of course redoing compilers to match is a lot of extra work.

    Yet ARM and Microchip manage exactly this (and you cannot say either is a failure).

    The many ARMs are not binary compatible, and neither are the PICxx's, but it does show the importance of a common umbrella brand.

    Mostly, they try to have compatible IDEs and compilers at least from the same stable, so source-level changes are minimal.

    I think the Parallax/community efforts along the same lines are looking very good.
  • Bill Henning Posts: 6,445
    edited 2014-02-15 14:02
    Heater. wrote: »
    Yes indeed. And that secret sauce is also available in C thanks to the ability of propgcc to compile code into COG.

    I don't expect C to be able to use all the instructions, and I don't expect to be writing 1080p60 32bpp drivers in C.

    The highest performance drivers will be written in assembler for the foreseeable future.
    Heater. wrote: »
    I'm hoping a COG will be able to run, for example, one large chunk of C code from HUB in one thread and a small piece of COG-resident code written in PASM in another. That PASM part may be half the speed of running single-threaded in a dedicated COG, but it still has the timing determinism.

    The pasm code sharing a cog with C code (ie in a different task) cannot be fully deterministic due to hub accesses. It can be deterministic to a rough grain, but not to the level we are used to.

    Frankly, I expect cogs I use for C (and other compiled code) *NOT* to use hardware threading; there is only one dcache line, so there is a performance cliff. For C code, pthreads are a *MUCH* better choice (mostly due to cache performance issues) than native tasks.

    Now for pasm code - tasks will allow packing up to four drivers into a cog, and I can see TONS of great "packed cogs" - perhaps even packaged into objects - and due to the register remapping and hardware tasks, it will be possible to pick-and-choose among many drivers to assemble a semi-custom soft peripheral mix for a cog (a rough sketch follows after this post). There will be exceptions (ultra high performance display drivers, etc) but it will be wonderful.
    Heater. wrote: »
    I sometimes think Parallax does not shout about the Propeller secret sauce enough. Easy, real-time, deterministic and multi-threaded (via COGs so far), and all without the complexity of interrupts.

    I TOTALLY AGREE!
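
    The "packed cog" setup referred to above might look something like this in outline (the driver names are invented, and the JMPTASK/SETTASK operand forms are assumptions based on how those instructions are described in this thread, not confirmed syntax):

        ' Launch three more hardware tasks, then this code (task 0) becomes the
        ' fourth driver; the slot pattern gives each driver a quarter of the cycles.
                  jmptask #uart_drv, #%0010      'task 1 runs the serial driver
                  jmptask #spi_drv,  #%0100      'task 2 runs the SPI driver
                  jmptask #pwm_drv,  #%1000      'task 3 runs the PWM driver
                  settask %%0123012301230123     'rotate tasks 0,1,2,3 evenly
        i2c_drv   jmp     #i2c_drv               'placeholder loops: real driver code,
        uart_drv  jmp     #uart_drv              ' each written as ordinary PASM,
        spi_drv   jmp     #spi_drv               ' goes in place of these
        pwm_drv   jmp     #pwm_drv

    Register remapping, which is mentioned above but not shown here, is what would keep the four drivers' private registers out of each other's way.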
  • Bill Henning Posts: 6,445
    edited 2014-02-15 14:03
    jmg wrote: »
    Yet ARM and Microchip manage exactly this (and you cannot say either is a failure).

    The many ARMs are not binary compatible, and neither are the PICxx's, but it does show the importance of a common umbrella brand.

    Mostly, they try to have compatible IDEs and compilers at least from the same stable, so source-level changes are minimal.

    I think the Parallax/community efforts along the same lines are looking very good.

    TOTALLY AGREED!

    We overlapped writing basically the same thing!
  • Bill Henning Posts: 6,445
    edited 2014-02-15 14:05
    Followup to my post #381:

    Multiple compiled threads using hardware tasking would be far more realistic if we had at least four lines of dcache. Alas, we don't.
  • Heater. Posts: 21,230
    edited 2014-02-15 14:21
    Bill,

    I guess you are right. I would not want the P2+n to be shackled by backwards compatibility if it meant not being able to have the latest world changing feature that Chip or the forums come up with. Like 16MB of HUB RAM or 64 bit COGS.

    And yes, unlike the desktop world a lot of what we do here is opensource so we are not shackled by dependence on those "must have" binary only programs and libraries. Companies developing their own closed source code can rework it themselves as they do with all those other devices.

    On the other hand, the multiplicity of instruction sets in the ARM world has driven me mad in the past. As we speak I'm trying to get programs that work fine on the Raspberry Pi up and running on an IGEP board. Luckily that's only a recompile away, as long as I can find the right set of config options and compiler flags to get them to build. Hmm... why am I doing actual real work at midnight on a Saturday?

    Hey, 64 bit COGs was my suggestion a long while back. That gives 128 MBytes of address range in the src and dst fields of instructions!
    At the time we drooled over that as it enabled the expansion of COG space. But now it's even better, as we can seamlessly access COG and HUB in the same address space!
    Perhaps even Gigs of external SRAM can be mapped in as well in future.
  • Heater. Posts: 21,230
    edited 2014-02-15 14:41
    Bill,
    ...I don't expect to be writing 1080p60 32bpp drivers in C....The highest performance drivers will be written in assembler for the foreseeable future.
    Neither do I. But there are a lot of small drivers and real-time interfaces that don't need such extreme speed. I have written a C version of FullDuplexSerial that is totally resident in COG and runs at 115200 baud on the P1. It's in the propgcc examples if you want to check it out. The P2 will be capable of a lot more of that kind of thing.
    The pasm code sharing a cog with C code (ie in a different task) cannot be fully deterministic due to hub accesses
    Sure it can. As far as I understand, a thread can never stall another thread. Whilst there might be some jitter in execution time going on, there is a known upper bound on its execution time and hence there is full determinism within some knowable limits.
    It can be deterministic to a rough grain, but not to the level we are used to.
    Yeah, that. "Real-time" and "deterministic" do not mean I have to know within a nanosecond when everything happens. Many things can be done with far less stringent tolerances.
    Frankly, I expect cogs I use for C (and other compiled code) *NOT* to use hardware threading; there is only one dcache line, so there is a performance cliff.
    But in my desired mode of a big C program in HUB plus a small PASM driver in COG, there is no hitting that dcache-line performance cliff. It's like being able to combine the Spin code of FDS and its PASM to run on a single COG. What can be undesirable about that?
  • Ahle2 Posts: 1,179
    edited 2014-02-15 15:07
    Originally Posted by Ahle2
    I have not been very active lately and have not yet caught up on all the new things regarding hubexec and multitasking etc., so this may be answered elsewhere and/or it may be a stupid question. Is there a way to read out PC+Z+C from a task? (Yes, I do have a reason for asking this.) I guess it would be possible by using trickery that involves such things as Calls, AUX, Push, Pop and some special code running in the task that you want to "monitor".

    /Johannes

    There is no current instruction to do that, but I could add one. Could you illustrate a compelling use case?

    I want to make a microkernel with the ability to have an arbitrary number of tasks, not just limited to HW tasks. I think it would open up some pseudo interrupt handling and great debugging as well. Not that a microkernel or interrupt handling is really needed on the P2... but it's just "fun and games" for nerds like me.

    /Johannes
  • evanh Posts: 15,915
    edited 2014-02-15 17:43
    Heater. wrote: »
    Aside: I would not blame Intel and the x86 for the shackles of backwards compatibility. They have tried very hard to introduce new architectures over the years: iAPX432, i860, Itanium. The shackles of backward compatibility are squarely down to MicroSoft and the ecosystem of closed source application developers.

    That's prolly a bit judgemental - And I'm a strong proponent of the GPL for sound business reasons, and even generally agree with RMS's ideals as a social objective.

    Reality is binary backwards compatibility is important to the PC world. Alternative architectures could only be introduced as long as backwards compatibility was also intact. I'd say Intel didn't try very hard at any stage of the game. AMD seemingly effortlessly introduced a 64-bit mode that not only is a decent departure but is able to co-exist concurrently. That's the way to do it in such a world.

    Going further off topic ...

    The Internet has allowed the Web - which brought on some real change. Can "Free Software" maintain and grow its share? It's still very much in the balance. There is plenty of lobbying to restrict freedoms of all sorts or even make it illegal.

    E.g.: Labelling an activity as "addictive" is one such example of a lobbying tool to keep us whimpering in the corner. Addiction, as a label, not only has negative connotations, it also has medical and legal teeth that can be brought to bear with simple accusations. The more activities get labelled as an addiction, the easier it becomes to restrain you on a whim. The ultimate conclusion of this particular tool is that fun can be classified as addictive. Whatever happened to enthusiasm, to passion, to being in the zone, to rad, to obsessing even?

    Average people are abusing the word addiction, as a euphemism, far too much without realising its dangers as a legal tool.
  • Heater. Posts: 21,230
    edited 2014-02-15 19:26
    evanh,
    That's prolly a bit judgemental...
    Me, never :)
    Reality is binary backwards compatibility is important to the PC world.
    I think we agree that is how the PC world is. Assuming that by "PC world" we mean the x86 and Windows world.

    What I was getting at is: why is that? And whose fault is it?

    Clearly Intel had to maintain binary compatibility at every step of the x86 development. If that was broken, MSDOS & Windows and all the commercial apps would not run. If MS did not rebuild all its code for the new architecture, and if all the app developers did not provide new binaries, Intel would not be able to sell the new chip. Intel was shackled by Microsoft and the commercial app developers.

    I'd say Intel tried pretty hard. That's three completely different architectures they developed that fell by the wayside. (Except I think they are still pushing Itanium.)
    AMD seemingly effortlessly introduced a 64-bit mode that not only is a decent departure but is able to co-exist concurrently.
    AMD's 64 bit approach followed the pattern set by Intel as it moved from 8086 to 286 to 386. Keep the previous programming architecture intact and add a new operating mode that switches on the new stuff. Real mode, protected mode, 32 bit mode and so on. That way at least old binaries would still run, meaning you can sell your chip immediately, and new code would slowly arrive to make use of the new features. Emphasis on "slowly": it was ten years after Intel introduced 32 bit processors that MS managed to get a 32 bit OS out the door.

    AMD was lucky that the 64 bit extension was so compelling that MS supported it. If they had not it would have gone nowhere.

    It's not hardware binary compatibility that is the shackle here. It's the ability to run all those commercial closed source operating systems and app binaries. The Windows ecosystem as I said.
  • Ale Posts: 2,363
    edited 2014-02-15 20:54
    MS was very slow at supporting it. We had Linux running on x64 years before! And Intel didn't support it that graciously either... (Core2Duo limited to 4 GB RAM!)

    Going back to the P2, we have to see what we can do with it. Regarding many, many threads, we now have 32 (8 cogs x 4 hardware tasks)... that's quite a bit; not even XMOS has 32 threads anymore, as they tossed the G4s away... but there those 64 kbytes were kind of a barrier...
    I don't see the jumps/calls scaling beyond 256 kbytes... but I think that now that high-speed IO is there and many more IO pins are available, things will look different :)
  • evanh Posts: 15,915
    edited 2014-02-15 21:06
    Programs that relied on MSDOS were very important to the industry for a long time. In effect, the industry generically demanded backwards compatibility. Win95 support for DOS programs was not entirely stable and WinNT/2k dropped some features altogether. Single threaded busy-waiting was a normal behaviour. Full compatibility could only be achieved by running the CPU entirely in "Real" mode, which meant not co-existing with 32bit mode programs. 32bit OSes couldn't do the job. The transition was slow.

    It's different with 64bit; there isn't any visible transition as such. There are no OS design changes. Most users aren't even aware what mode an app or driver uses. Although, admittedly, this is probably due to every app and driver these days being written to use APIs and events/callbacks rather than simplistic CPU-burning bit-bashing and, as you pointed out, AMD maybe didn't do it hugely better than Intel. I guess that shows Windoze is an improvement on DOS. ;)

    That part is all historical. Modern OSes support source-code compatibility - M$ has been forced to document the Windoze API now ... occasionally something good happens.

    Our freedoms to roam and experiment and gain knowledge from others are a very current topic, however. The GPL is important in erecting a defence in the current environment of abundant abuses of licensing/patents making for predatory market control. It's similar to land and housing ownership - the longer the market is allowed to run amok, pushing up prices to insane levels, the more the thieves and bullies get away with before a clean-up is enacted. How many bubbles and crashes does it take for strong action to be taken?

    Such abuses are also becoming a serious threat to education. The GPL guarantees good education, via re-contribution by the businesses, while at the same time providing a viable business model - everyone is on equal footing.

    My conclusion: The GPL formed as a natural reaction to the current laws.
  • evanh Posts: 15,915
    edited 2014-02-15 21:10
    Ale wrote: »
    I don't see the jumps/calls scaling beyond 256 kbytes...

    Ya, that one will quickly create issues if a Prop3 has any plans for binary compatibility.