I wonder if you could swap memory in and out from SDRAM to HUB for each of these threads...
Maybe that would take too long, but if you could, you could then execute from a vast pool of memory...
I've considered that, but am not certain of the performance. I am sure one of us will play with the idea sooner or later.
The advantage to an LMM-like kernel is that it would be possible to integrate a software MMU... which makes many things possible...
Yeah, this was also one of the things I liked the idea of: being able to get code in from external RAM/flash storage into the hub and execute it there at high speed using hub exec. If you limited yourself to rather shortish function blocks of relocatable code with relative jumps, we might be able to do caching to hub RAM on a per-function basis and run from the hub. It is not close to paging or anything, but the "function fault" trigger is then all done in software, which knows whether the function has been loaded yet or not. A trampoline jump table per function call would allow this type of detection; each function call goes through a stub there at a fixed address. You would need compiler support for doing it and a dynamic hub memory allocation algorithm, and it would certainly not suit applications that are time-critical, but something interesting still might be achievable.
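To make the trampoline idea a bit more concrete, here is a rough C sketch of what such a software "function fault" might look like. Everything in it (the table layout, the xmem_read transfer, the bump allocator) is invented for illustration and isn't any real P2 mechanism:

#include <stdint.h>
#include <string.h>

typedef void (*fn_t)(void);

typedef struct {
    uint32_t xmem_addr;   /* where the relocatable function image lives externally */
    uint32_t size;        /* image size in bytes                                    */
    fn_t     hub_copy;    /* NULL until the function has been cached in hub RAM     */
} func_entry_t;

static uint8_t  xmem[32768];          /* stand-in for external RAM/flash            */
static uint8_t  hub_pool[8192];       /* hub RAM reserved for cached functions      */
static uint32_t hub_used;             /* a real allocator would also have to evict  */

static func_entry_t func_table[64];   /* one entry per overlay-able function        */

static void xmem_read(uint32_t src, void *dst, uint32_t len)
{
    memcpy(dst, &xmem[src], len);     /* pretend SPI/SDRAM transfer                 */
}

static void *hub_alloc(uint32_t len)
{
    void *p = &hub_pool[hub_used];    /* trivial bump allocator, no overflow check  */
    hub_used += (len + 3u) & ~3u;
    return p;
}

/* Every call site jumps to a fixed stub which lands here with the function's
   table index. If the function isn't resident yet, that's the "function fault":
   copy it into hub RAM first, then run it at hub-exec speed. */
void trampoline_call(unsigned idx)
{
    func_entry_t *f = &func_table[idx];
    if (f->hub_copy == NULL) {
        void *dst = hub_alloc(f->size);
        xmem_read(f->xmem_addr, dst, f->size);
        f->hub_copy = (fn_t)dst;      /* only works because the code is relocatable */
    }
    f->hub_copy();
}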
Another option is to try to switch quickly between an XMM-type execution mode (which uses an instruction subset) and hub exec mode per function call. The "fast" nominated functions are stored in hub RAM and run directly from the hub; the "slow" ones run exclusively from external RAM/ROM. If both shared the same register set there might be scope to go back and forth between modes on the fly, but again you would need special compiler support for such a thing.
We will see how that all pans out eventually... there are a few ways to do things here, and I'm sure people will play with these sorts of ideas when they run out of hub RAM for their applications.
I'm really confused now. This looks like what I had in mind, but I don't think the Txxxx instructions are for what I think they are. Do the new task switcher instructions just help with state saving, or do they do something else too? t3load and such just save state info, right? I'm sure the information's somewhere on the forums but I don't have nearly enough time to read everything.
Thanks,
electrodude
Right. T3LOAD loads up task3's state data from the WIDEs, while T3SAVE writes task3's state data to the WIDEs. Task3 would be dormant when either of these instructions is used.
OK, thanks. Are the WIDEs put somewhere other than cogram or what? Your instruction list says RDWIDE only has a destination register but WRWIDE has both a source and destination. How come? What's the task stack for?
I have a suggestion for something useful you can do with your preemptive multitasker so as not to waste the 1/16 of the cog that's being used by task0 sitting in a PASSCNT, but it would only work with/be necessary for the aux stack (as opposed to the task stack, which I assume can be used for calls and such). Make task0 watch the stack head and, if it gets too close to the stack tail (in danger of overflow or underflow), TLOCK and swap a bunch of stack longs into or out of hub RAM. It would probably be good to swap out a lot of stack (128 longs sounds good) so it doesn't need to happen often. This would only be useful/necessary for things that have very recursive functions (which I do all the time), but wouldn't be any slower for threads that didn't need it. A 'jmpcnt time, where' instruction would be nice for this: jump/loop if the timer hasn't expired.
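A rough C model of what that supervisor loop might look like, just to show the shape of the idea. The names (tlock/tfree, hub_push/hub_pop, aux_sp/aux_stack) only stand in for whatever the real cog mechanism turns out to be, and the stack is assumed to grow upward from index 0:

#include <stdint.h>

#define STACK_WINDOW   256                   /* longs of AUX stack kept on-chip          */
#define SPILL_CHUNK    128                   /* longs moved per spill/fill               */
#define HIGH_WATER     (STACK_WINDOW - 16)
#define LOW_WATER      16

extern volatile uint32_t aux_sp;             /* thread's current depth in the window     */
extern uint32_t aux_stack[STACK_WINDOW];     /* the on-chip window                       */
extern void tlock(void);                     /* stall the other tasks while we shuffle   */
extern void tfree(void);
extern void hub_push(const uint32_t *src, uint32_t nlongs);   /* spill to hub RAM        */
extern void hub_pop(uint32_t *dst, uint32_t nlongs);          /* fill back from hub RAM  */

static uint32_t hub_depth;                   /* longs currently parked in hub RAM        */

void stack_supervisor(void)
{
    for (;;) {
        uint32_t sp = aux_sp;                                  /* snapshot the depth      */
        if (sp > HIGH_WATER) {                                 /* near overflow: spill    */
            tlock();
            hub_push(&aux_stack[0], SPILL_CHUNK);              /* park the oldest longs   */
            for (uint32_t i = 0; i + SPILL_CHUNK < STACK_WINDOW; i++)
                aux_stack[i] = aux_stack[i + SPILL_CHUNK];     /* slide the window down   */
            aux_sp -= SPILL_CHUNK;
            hub_depth += SPILL_CHUNK;
            tfree();
        } else if (sp < LOW_WATER && hub_depth >= SPILL_CHUNK) {   /* near underflow: fill */
            tlock();
            for (uint32_t i = STACK_WINDOW; i-- > SPILL_CHUNK; )
                aux_stack[i] = aux_stack[i - SPILL_CHUNK];     /* make room at the bottom */
            hub_pop(&aux_stack[0], SPILL_CHUNK);
            aux_sp += SPILL_CHUNK;
            hub_depth -= SPILL_CHUNK;
            tfree();
        }
        /* else: idle until the next poll tick (the 'jmpcnt time, where' idea) */
    }
}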
Having this Preemptive Multithreading opens the door to running an OS like Android, right?
No. Not even close.
To run Android you need to be able to run a Linux kernel. That requires virtual memory, which requires a Memory Management Unit. The Linux kernel also likes interrupts. Not to mention the huge amounts of external RAM that would be required.
This is Parallaxia, where the impossible happens every day, so perhaps someone will figure out a way to mimic the required hardware support. But it would be very slow.
That's before we come to the other impossibility, creating the required graphics drivers.
As a reality check, the Raspberry Pi has everything required to run Android, in theory: a nice ARM processor, loads of RAM, a nice GPU, etc. So far Android does not run on the Raspi.
Heater, I think you posted this quote once: "Those who do not understand Unix are condemned to reinvent it, poorly."
Is it possible to 'reinvent' Unix in such a way that interrupts are not needed, while maintaining its robustness? This could have import for future processing paradigms where interrupts don't exist, couldn't it? I mean, there are probably a handful of concepts that make Unix what it is. Could they be liberated from what Unix has become to make a smaller system?
...there are probably a handful of concepts that make Unix what it is. Could they be liberated from what Unix has become to make a smaller system?
Does that sound familiar?
Yes indeed. There is a small handful of concepts, and for sure Unix started out on very small machines, perhaps even with less speed and RAM than a PII. It was driven by the motivation to simplify the huge operating systems that had been under development at the time, like Multics.
For fun, here is a two minute video on "What is Unix" from back in the day: http://www.youtube.com/watch?v=JoVQTPbD6UY or this slightly longer one: http://www.youtube.com/watch?v=tc4ROCJYbm0
What is Unix?
Basically we can think of:
1) The Utilities and programs people can run.
2) The shell or command line interface.
3) The kernel managing the hardware stuff.
From a user perspective we only need 1) and 2). Who cares what's underneath and how it gets done in hardware?
What about that shell? That's where we see the handful of UNIX concepts for the first time. Like:
A) I can run a program. Just type its name and give it some parameters.
B) I can run many programs as separate processes at the same time. Just put the "&" symbol on the end of the command.
C) Everything is a file: the keyboard, the terminal, the files, the network sockets.
D) Files are just simple strings of bytes. No complicated records or whatever.
E) I can redirect output to console, to file, to printer easily "myProg > someDevice"
F) I can connect the output of one process to the input of another process: "myProg | myOtherprog | myThirdProg > someFile" (a small C sketch of how a shell wires this up follows after this list).
And so on.
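For anyone curious what is going on under the hood when you type that, this is roughly how a shell builds a two-stage pipeline with nothing more than pipe/fork/dup2/exec (plain POSIX C, error handling trimmed):

#include <unistd.h>
#include <sys/wait.h>

static void run_pipeline(char *const left[], char *const right[])
{
    int fd[2];
    pipe(fd);                       /* fd[0] = read end, fd[1] = write end        */

    if (fork() == 0) {              /* first child: its stdout goes into the pipe */
        dup2(fd[1], STDOUT_FILENO);
        close(fd[0]); close(fd[1]);
        execvp(left[0], left);
        _exit(127);
    }
    if (fork() == 0) {              /* second child: its stdin comes from the pipe */
        dup2(fd[0], STDIN_FILENO);
        close(fd[0]); close(fd[1]);
        execvp(right[0], right);
        _exit(127);
    }
    close(fd[0]); close(fd[1]);     /* parent keeps neither end open              */
    wait(NULL); wait(NULL);
}

int main(void)
{
    char *const ls[] = { "ls", "-l", NULL };
    char *const wc[] = { "wc", "-l", NULL };
    run_pipeline(ls, wc);           /* equivalent to:  ls -l | wc -l              */
    return 0;
}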
What about those processes?
A) It's nice if they can all run at the same time. That implies some means of context switching when they are waiting on some input or time, or just on a regular time tick to keep them all going "round robin" style (a toy round-robin sketch follows after this list).
B) It's nice if they are not limited by memory. Many processes or users means we need space. Some means of paging things in and out to backing store is required.
C) Isolation between processes, so that one cannot crash out and scribble over the memory of another.
Arguably B) and C) are not required if you have enough RAM to hold everything that is running and you trust your programs to be well behaved.
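Here is item A) reduced to a toy C sketch. It is cooperative rather than preemptive (each "process" is just a function that does a slice of work and returns), but it shows the round-robin idea; the process names and the 10 ms tick are made up:

#include <unistd.h>

#define NPROC 3

typedef void (*proc_t)(void);

static void editor(void)  { /* do a small slice of work, then return */ }
static void printer(void) { /* ... */ }
static void shell(void)   { /* ... */ }

static proc_t ready[NPROC] = { editor, printer, shell };

void round_robin(void)
{
    unsigned current = 0;
    for (;;) {
        ready[current]();                  /* give this process its turn             */
        current = (current + 1) % NPROC;   /* everyone gets a slice, round robin     */
        usleep(10000);                     /* stand-in for the regular time tick     */
    }
}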
So the question is: can one create a UNIX-like system, as far as the programs and shell are concerned, on some other hardware that may not have interrupts?
I would say yes.
In the extreme we could imagine this:
i) Every Unix process spawned from the shell (or wherever) gets its own physical processor. We can have billions of transistors today, so why not hundreds or thousands of processors? They could all be simpler than the behemoths we have today.
ii) The communication between those processors could be simple pipes streaming bytes around: between processors, and between processors and I/O systems like files, network, and the user interface.
iii) Each processor would of course have its own private RAM space.
Basically, in this scheme we map all the UNIX concepts onto actual physical hardware, instead of having a horribly complicated kernel trying to simulate it, or create the abstraction, for us on a single processor.
Hey look, we don't need interrupts any more?
So here is what we need: A ton of Propeller II chips each with some external RAM. All hooked up with those serial pipes. And some means of getting from those user shell commands to a network of communicating processes.
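A minimal sketch of such an interrupt-free byte link: each end just polls a shared mailbox flag, much the way cogs poll hub RAM today. The mailbox layout is invented for illustration, and a real two-processor version would also have to think about memory ordering:

#include <stdint.h>

typedef struct {
    volatile uint32_t full;     /* 0 = empty, 1 = holds a byte                  */
    volatile uint8_t  data;
} link_t;

/* Sender blocks (busy-waits) until the slot is free, then writes. */
static void link_put(link_t *l, uint8_t b)
{
    while (l->full) { /* spin: no interrupt needed, just wait */ }
    l->data = b;
    l->full = 1;
}

/* Receiver blocks until a byte arrives, then takes it. */
static uint8_t link_get(link_t *l)
{
    while (!l->full) { /* spin */ }
    uint8_t b = l->data;
    l->full = 0;
    return b;
}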
This, of course, is what should have been happening with the Transputer chip back in the day, which was designed for exactly that kind of parallel processing, high-speed serial links and all. In fact I seem to recall such UNIX-like systems for the Transputer were under development. Sadly INMOS failed before they could get their 32 bit Transputers working.
Easy hey? :)
I gather what's kind of neat about this is that, via a shell, a user has the same kind of instantiation abilities as programs do. It lets the user get into the middle of everything if he wants to.
Thanks for all that explanation. I'll watch those videos.
Heater, that first video was really neat. I watched it twice. Those guys look exactly like half of the attendees at the West Coast Computer show in 1980 in San Francisco.
I like how they cited that Unix facilitated complex programs by allowing functions to be gracefully broken down and piped together. Pretty neat, and very simple.
A nice trip down memory lane.
The irony here is that you are intent on getting away from the enormous size and complexity of today's modern Unix/Linux and other operating systems and software. And that is exactly the same motivation that drove Ritchie and Thompson when they devised UNIX to start with! They were getting away from things like MULTICS which had taken hundreds of man years of development time, needed huge machines to run on and was not doing what anyone wanted. So, they rolled their sleeves up and made their own.
One of those engineers in the video said that what made Unix so capable was its underlying file system.
I think they said MULTICS took 5,000 man years to develop. Unix probably took just a few.
It seems that as something evolves, there are periodic junctures where it must be reassessed and simplified, and rebased on a higher level of abstraction, so that it can become manageable again, and not outside of a person's ability to understand.
The areas where this doesn't happen are things like government and industry, where entrenched interests are against any disruption. It sure would be great if some invention could render those forces docile.
The irony here is that you are intent on getting away from the enormous size and complexity of today's modern....
I thought, at first, you were going to continue, "...yet you are making the Prop2 too complicated." Maybe a little, but we'll have to see how it feels to program.
The areas where this doesn't happen are things like government and industry, where entrenched interests are against any disruption. It sure would be great if some invention could render those forces docile.
There is such an invention, it's called a dictatorship ... quite the brain teaser, ah?
Hehe, I know, you meant like what is happening right here on the forum, where a consensus is quickly nutted out and action is immediately taken without parties asking for a cut.
Is it possible to 'reinvent' Unix in such a way that interrupts are not needed, while maintaining its robustness? This could have import for future processing paradigms where interrupts don't exist, couldn't it? I mean, there are probably a handful of concepts that make Unix what it is. Could they be liberated from what Unix has become to make a smaller system?
Unix is oriented toward symmetrical processors. Preemption is oriented toward single core CPU with limited hardware support. Presumably Multics had grander ideas about what constitutes a computer.
Unix could, for example, have the kernel running in an independent supervisory core that doesn't have very many features at all. Not unlike how a hypervisor is organised. It could divvy up CPU utilisation in many ways. It would have a tiny footprint on the silicon, without impacting the performance of the heavy-lifting cores. Like how when an IT person recommends adding a stupid $50 graphics card to add into the PC even though the integrated GPU is more powerful! They do this because that graphics card has dedicated RAM for video display and therefore doesn't steal from the CPU's RAM accesses. It's only a tiny, tiny percentage, but they are happy knowing they're going to get 100% of what that CPU can achieve.
What is currently known as interrupts could become just inputs for state-machines, soft or hard ... more possible asymmetries. Events would be generated and so on ... A lot of the extra hardware for this type of approach has blossomed in recent years. The trend toward ASIC solutions reduces the need for instant CPU diversion to service "interrupts".
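A minimal sketch of that "interrupts as state-machine inputs" idea: a loop polls status flags and feeds events to per-device state machines instead of vectoring off to handlers. The flag names here are imaginary, and on a Prop the EV_NONE case could use a WAITxx instead of spinning:

#include <stdint.h>

typedef enum { EV_NONE, EV_UART_RX, EV_TIMER_TICK } event_t;

static volatile uint32_t uart_status;  /* imagine: hardware flag, bit 0 = byte waiting */
static volatile uint32_t timer_flag;   /* imagine: nonzero when the tick has elapsed   */

static event_t poll_events(void)
{
    if (uart_status & 1) return EV_UART_RX;
    if (timer_flag)      return EV_TIMER_TICK;
    return EV_NONE;
}

static void uart_machine(event_t ev)  { (void)ev; /* advance the UART state machine  */ }
static void timer_machine(event_t ev) { (void)ev; /* advance the timer state machine */ }

void scheduler(void)
{
    for (;;) {                          /* the whole "OS": poll, dispatch, repeat     */
        event_t ev = poll_events();
        switch (ev) {
        case EV_UART_RX:    uart_machine(ev);  break;
        case EV_TIMER_TICK: timer_machine(ev); break;
        case EV_NONE:       /* nothing pending; park the cog here if possible */ break;
        }
    }
}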
Let me know when you find some MCAD vendor certified $50 video cards!
C.W.
Vendors won't certify them, but... Often you can get the pro drivers to run on the more "game" or "consumer" oriented cards. Now, the right thing to do is step up and buy a real card. But, the CPU difference on some data sets can matter.
Funny thing about the onboard GPU: it is more powerful, but the driver software typically sucks. Another funny thing: CPU-only rendering, with good software, is better for MCAD than having a GPU! If you want to see your model with any kind of accuracy, anyway. Modern software manages level of detail for whatever graphics device you are using, which is slowly rendering those super-high-end cards less and less important every day.
One of my first troubleshooting things related to graphics is to force a software only path. I've been shocked at how well things continue to run these days. One, CPUs are fast. Two, optimized graphics has really advanced.
It's largely the software you pay for, unless you buy a very, very expensive and huge card! Also nVidia. ATI? Just run. Quick. All the really good people from SGI, who actually got most of this stuff just right, went to nVidia. ATI makes competitive hardware. Too bad they don't make software as good.
The other use case is virtual machines. I've done it. Though now, I don't care so much, and will just run a laptop and deal because it's portable. When I did care though...
Otherwise, yeah, it's stupid.
Thanks for all that explanation. I'll watch those videos.
In addition to those videos, recommended reading for those eager to learn more (words from Dennis M. Ritchie himself): http://cm.bell-labs.com/who/dmr/
Do not miss the links under the section "Unix papers and writings, approximately chronological".
The first Unix was written in assembler for the PDP-7: an 18-bit CPU with a 1 microsecond instruction cycle (1 MHz); some instructions took 2 cycles, others 3.
Later they bought a (more capable) PDP-11: 24K of memory (16K for the system, 8K for user programs). Files were limited to 64K bytes. At the beginning there was no memory protection. Around that time Ritchie also developed the C language, and Unix was rewritten in C.
I also recommend the "forbidden" book with original C code of UNIX and comments: "Lions' Commentary on UNIX 6th Edition, with Source Code" by John Lions (1976).
Here I have to disagree. UNIX was developed on single-processor machines, which by modern standards are very small: slow processor, not much RAM.
Preemption is oriented toward single core CPU with limited hardware support.
Yes, but that is the only way they could get what they wanted at the time.
I will argue that "UNIX" is not about the low level hardware implementation. UNIX is about how it fosters "community". That is something that sounds very "left field" and nebulous but it turns out to drive the whole design. As stated in those videos. I can expound on that at some length but that will have to wait for another post....
But yes, I agree, there are other ways to do the "UNIX Thing". Multiple processors, micro-kernels, I don't know what.
Funny you should mention the "...stupid $50 graphics card to add into the PC even though the integrated GPU is more powerful!". I have just done exactly that today. It's not installed yet but I'm hoping to get something more than the 5 FPS the integrated graphics gives for my webgl experiments. Mind you this is for an old AMD 64 box.
I will argue that "UNIX" is not about the low level hardware implementation. UNIX is about how it fosters "community". That is something that sounds very "left field" and nebulous but it turns out to drive the whole design.
I will want to read this, when you are so inclined. Yes, it's not about the low level stuff.
I like how they cited that Unix facilitated complex programs by allowing functions to be gracefully broken down and piped together. Pretty neat, and very simple.
I've recently refocused my career into the field of bioinformatics/genetics and it's amazing how some of the best tools were built by people that apparently didn't get this. Many of the programs are great and amenable to piping data into/out of them, but there's some ugly monolithic monsters in the toolbox. The very rich capabilities in the shell are also a huge strength for Unix in this field. Lots of lashing together simple tools with bash into pushbutton solutions.
As for the core Unix concepts, many of the originators of Unix (Pike, Thompson, even Ritchie a little) hit the reset button in the late 80's and built a new OS that fully committed to the great ideas in Unix but abandoned the hacks that were necessary on early machines. But that OS, "Plan 9 From Bell Labs", never gained traction. AmigaOS took some of the Unix ideas but went in a more lightweight tasks/message-passing direction, which BeOS carried even further. These days people are combining those ideas (lightweight tasks) with pure event-oriented systems such as Node.js.
Which brings me back to the interruptless Propeller 1 & 2. Between the renewed interest in event programming models, and excellent CSP languages such as Go, I think not having interrupts isn't as alien an idea as it might've felt a decade or two ago. Just so long as there's the opportunity to jump to the opposite extreme and create code that is deterministic and has dedicated computing resources for those times when it's necessary. This is why I find the Prop so fascinating and why I'm tracking the P2 development: it feels like it enables both ways of thinking (deterministic/machinistic vs. event/message-based) in a natural way, while having the discipline to not build the foundation on unfun things like interrupt handling.
(man, I could talk/type about this all day! I should tune out now so that I can get something done today)
I will want to read this, when you are so inclined. Yes, it's not about the low level stuff.
I have been wanting to make that post the whole day. Ever since I watched that video again and the word "community" cropped up. All of a sudden the whole plan, if indeed it was a plan, hit me in a way that it never did before. Even if I already knew it for many years already.
Problem is the whole story kept growing bigger and bigger in my mind, I have to organize it a bit first.
About that "stupid $50 graphics card to add into the PC". I just installed my $50 Euro Nvidia card. Frame rates up from 5 FPS from the on board Nvidia graphics to 60 FPS!
Just for the record on the AT&T/Bell Labs footage:
We have gotten several new chairs (a few)
We do have newer monitors/terminals (some)
There are a lot fewer people with beards
I checked the phone directory and none of the people are still around (obviously, for some)
I think if I were to go into the office and open up a drawer, I still have one of the beige desk phones
I think we're at about 200,000 workstations and 45,000+ Unix/Windows servers... unsure of the number of mainframes, but they are still around.
390PB of raw SAN storage across 82 data centers connected by around 1400 SAN switches.
They are interesting videos and the ideas of community, simplicity and tool smithing are fascinating topics. I wish I had been at Bell Labs when this was going on, I'm not smart enough to have contributed but I could have gotten coffee for them!!