DOS on P2?
Rayman
Posts: 14,566
Can it be done? Seems like it could.
MS-DOS source code is available. There are some other DOS versions out there too...
Seems P2 has RAM and speed enough...
Comments
So is the GEM desktop! (Atari ST)
A 286 emulator stands a good chance of running 10MHz+ equivalent on a P2
The CGA colors were awful (if you had color, instead of green screen), load times horrendous, squawking PC beeper used as crude speaker, some weird 55 ms timer (18.2 ticks a second? why?).
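The 55 ms tick actually isn't arbitrary: the PC derived all its timing from the 14.31818 MHz NTSC colorburst crystal, the 8253 timer ran at 1/12 of that, and the BIOS left channel 0 at its maximum 16-bit divisor of 65536. A quick sketch just to verify the arithmetic:

```python
# The original IBM PC derived timing from a 14.31818 MHz NTSC
# colorburst crystal; the 8253 timer's input clock was 1/12 of that.
crystal_hz = 14_318_180
pit_hz = crystal_hz / 12          # ~1.19318 MHz timer input

# The BIOS programmed timer channel 0 with the maximum 16-bit
# divisor (65536), which is where the famous 18.2 Hz tick comes from.
divisor = 65536
tick_hz = pit_hz / divisor        # ~18.2 ticks per second
tick_ms = 1000 / tick_hz          # ~55 ms per tick

print(f"{tick_hz:.4f} Hz, {tick_ms:.2f} ms")
```

So the "weird" 18.2 ticks a second falls straight out of reusing the cheapest crystal on the board and the largest divisor the counter could hold.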
I attempted to learn assembly at the time, and with such a crude platform and confusing CPU I almost gave it up forever.
DOS was a hideous OS and needed hackish TSRs to support even basic hardware, and the stuff I was interested in (games) always seemed to read and write the hardware directly and blindly.
So with so many hacks any emulator has to simulate the entire PC if any interesting "DOS" software is to run.
There are the DOS games and such. There is a lot of pain too. Nobody really misses those times. But there are a TON of systems that still run under DOS. A nice system that delivers solid emulation could open up a lot of doors in terms of upgrading, improving and connecting legacy systems.
http://zet.aluzina.org/images/d/d8/Pres.pdf
http://zet.aluzina.org/index.php/Zet_processor
Also, isn't emulating an 80286 really overkill for DOS? Protected mode is not necessary (or useful without a DOS extender like Phar Lap - I could be wrong), and a non-multiplexed address/data bus is not likely to add real value (64+ pins for SRAM leaves very little left for peripherals, again!).
I suppose if you get an 8086 emulator running, you could extend it though ... like Intel did. ;-)
I'm sure an 8086 emulator on a P2 is quite possible and might even run at original IBM PC speeds.
But then MS-DOS was a brain dead pile of junk as well and I can't imagine anyone would want to bother with it. Certainly not anyone who lived through having to use it back in the day.
He still uses that software, running from floppy on an old machine.
Talk about getting the most out of your investment.
BTW I don't think I charged him enough in the first place!
Yes. Working my resume backwards nowadays, I stumble across stuff like that a lot.
Not DOS, but I found a Windows 3.? still in an active environment, executing a COBOL program (a sort of cash register and inventory system) I wrote in 1992 in MicroFocus COBOL.
MicroFocus did a sort of funny thing at that time: getting text-based COBOL programs to run in the first MS GUI environment. They did quite a good job of using the same source for text- and GUI-based UIs.
I guess nobody here thinks that COBOL still has any value. But being forced to go back there, I was astonished. I met some old friends again, still working in COBOL. I needed some help after 20+ years of not touching the language.
You do not find much on the internet about it, but COBOL is still going strong. Tons of man-years invested in source. Huge programs, still running.
I am back to my daily job with C# now - the time travel with COBOL was short and just for a friend of mine, long ago.
But honestly - there is a market out there. Those COBOL programmers are either going to retire or simply die off. I got several job offers within a couple of weeks while helping out a friend.
Alas - you need to wear a suit and a tie.
Mike
It certainly didn't say much about M$'s vaunted technical prowess that they could only produce a crude OS like this while Commodore had the Amiga OS, which put it to shame.
Now DOS on a P2 doesn't make sense when there are a bunch of x86 boards out there like Intel Galileo that can do it better and faster than the P2 ever could. Porting is a lot easier too.
As for retrocomputing, it has been moving to FPGAs and/or using still-available 6502s, Z-80s and 68Ks and building your own system, etc.
Personally, if M$ got sucked into another dimension along with Gates, I'd be elated. That nasty little bugger held back computer innovation by decades with his monopolist practices and intellectual piracy. Embrace and extend my a**, it's always been steal and strangle with him.
MS and others on the other hand would rather that they had the "secret knowledge" and are keepers of the code whilst you humble maggots out there just pay to use it.
By the way "just a re-write of Unix" seems to be rather derogatory. That "just a rewrite" took far more work than ever went into MS-DOS starting from the creation of GCC by Richard Stallman in 1987. https://gcc.gnu.org/wiki/History and the subsequent efforts of thousands of hackers and companies.
But whilst we are here: the first ever .exe I ran on the first release of NT crashed it hard, despite all the pronouncements from MS about it being a multi-user, multi-tasking OS with memory protection, bla bla. I was a bit miffed, as the program had taken me a long time to write (it worked fine under DOS, despite being a 32-bit app). I knew at that moment it was all hopeless.
I do run GUI programs to read my email and browse the web, or to run Excel and Word. The GUI interface is great for organizing my photos. Other than that, I prefer to do everything else through the command line. I even avoid the Prop Tool and SimpleIDE, preferring to use vi along with BSTC and propeller-elf-gcc.
For me, it is easier to keep both hands on the home-row of the keyboard instead of switching back and forth between the keyboard and a mouse. I think the ability to touch-type makes that easier. For people who can't touch-type, and have to use the hunt-and-peck method it might be easier to use a GUI and mouse.
1. Syntax. It's possible to encode a very rich set of functionality into syntax, and it's equally possible for people to understand it and make use of it in flexible ways.
This is the UNIX way.
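That composability can be shown even outside the shell. Here is a toy sketch in Python of the UNIX idea: small single-purpose filters wired together in one expression. The function names and the sample log data are made up for illustration:

```python
# A toy illustration of the UNIX pipeline idea: small, single-purpose
# filters composed into one expression. Data and names are invented.

def grep(lines, needle):
    """Keep only lines containing needle (like `grep needle`)."""
    return (ln for ln in lines if needle in ln)

def sort_lines(lines):
    """Sort the whole stream (like `sort`)."""
    return iter(sorted(lines))

def uniq(lines):
    """Drop consecutive duplicate lines (like `uniq`)."""
    prev = object()
    for ln in lines:
        if ln != prev:
            yield ln
        prev = ln

log = ["boot ok", "disk error", "boot ok", "disk error", "net up"]

# Equivalent in spirit to: cat log | grep error | sort | uniq
result = list(uniq(sort_lines(grep(log, "error"))))
print(result)   # ['disk error']
```

Each filter knows nothing about the others; the uniform "stream of lines" convention is what lets a rich set of behaviors be encoded in a one-line composition.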
2. In a more general sense, it's about context changes. When we are keyboarding, there is one mental context. We are composing our input and we are delivering it. There is a flow here, and good typists can type while thinking, literally speaking to the computer via the keyboard. That's what I'm doing right now, and for the most part, these are a lot of the words you would hear if we were talking. (which is why I seriously wish we could improve voice just one more notch... Because I would use it, when I'm using the computer myself, not in a mix where conflicting voices would be a problem)
When we context switch, such as going to use the mouse, or some other input mode and or device, we lose flow, and with the loss of flow comes a higher mental burden to continue the line of thought. This resistance shows up as fatigue and or less cohesive / productive sessions. A great example of this is seen in mechanical CAD systems, where GUI input makes a lot of sense, and the UI has moved to presenting options that make sense for flow more than an attempt to present a structured syntax of sorts. Good innovation going on in GUI land right now. Doing the same sort of task command line would be extremely painful, as it is extremely painful with a GUI presenting a ton of options too.
A great example of syntax, where a GUI really doesn't add value, might be regular expressions.
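A few characters of regex syntax encode matching logic that would take a sprawling panel of checkboxes to express in a GUI. A quick Python sketch (the pattern and test strings are just examples):

```python
import re

# One line of syntax: an optional drive letter, then a DOS-style
# 8.3 filename. Expressing the same rule through GUI widgets would
# be far clumsier than just writing it down.
dos_name = re.compile(
    r"^(?:[A-Z]:\\)?[A-Z0-9_]{1,8}\.[A-Z0-9]{1,3}$",
    re.IGNORECASE,
)

print(bool(dos_name.match("C:\\COMMAND.COM")))     # True
print(bool(dos_name.match("AUTOEXEC.BAT")))        # True
print(bool(dos_name.match("toolongfilename.txt"))) # False: >8 chars
```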
Secondly, there is a form of hysteresis about this: regular and expected context switches are something people can work into flow. Some people are fairly rigid and present a considerable and varied reluctance to change context; their hysteresis is high. Others do it effortlessly. A lot of the GUI / command-line debate is driven by this difference among people, and it's just not often discussed. And it should be.
The thing is, a whole lot of people simply don't flow in the go deep sense. Your average knowledge worker may task a lot instead. For them, the GUI helps considerably, because they need to keep a lot of states handy, providing little bits of input here and there, with expected context switches. This is your power user with 30 windows open, who has not logged out for two weeks. Doing that command line can also be done, and the difference might be the secretary or marketing person running a GUI really hard, as opposed to the sysadmin who is touching 50 machines, etc...
The GUI paradigm works extremely well for the former. It can work for the latter, in a support role, again where there isn't flow, but where there is flow, the command line would empower that person to manage a lot of machines, data, requirements, scripting, etc... to just make things happen. They won't want to switch a lot, because they won't want to break flow and lose that more complex state enabling them to automate things successfully, for example.
I think that the reason why most people hate the command line is that the DOS shell in Windows is pretty crappy.
If you have tasks that you do a lot why not abstract them into buttons on the screen that you click? Or whatever GUI thing you like.
I cannot imagine life without a command line. By the time you have made a GUI comprehensive enough to do everything you can do from the shell it would be huge, massively complex and impossible to use. Not saying it can't be done but what would be the point?
I cannot imagine life without a GUI. Like this browser I'm typing into now.
Sometimes all that mousing around and touch/swiping is just making things more inconvenient rather than easier.
Got it in one. If your whole business model is based around getting people's hands off the (non-patentable) keyboard and onto your (patentable) mouse as often as possible, then naturally you are going to make the command line as awful to use as you can.
The Windows command line interpreter is simply appalling, even compared to the command line interpreters that preceded it by 20 or more years.
Did they do this deliberately? Of course they did!
Ross.
Thanks..