Fair! I am on mobile and will relax most rules. I could punctuate it. Not going to.
I will say you should upgrade your parser. Add an intent mode, and a switch for proper and/or detail mode.
Or maybe a "vocalize it" mode. When read aloud, most people will parse that correctly. I know; I've delivered text like that over the phone countless times. Context is vital.
Part of my life experience involves a lot of creative English, mostly working with non-native speakers. Over a year or two, I reached a point where I have few problems. Sometimes it's a real mess!
They say, technically, that intent cannot be derived from the written word. That it is subjective.
I think that's true, but... I can also say my brand of subjective hits the mark quite often. Most often I take an expression not as "what did they mean?" but as "what could they mean?" From that set of possibilities I make a best guess, an inference based on context and relevance.
If I then include something in the response that will only make sense, or is most likely to make sense, given that best guess, a few iterations track closer and closer to lucidity, and it's a meaningful dialog.
No worries, potatohead, I have a fairly robust parser.
It's just that it bails on such constructs in default mode. Then I have to load the context module, the American spelling/vocabulary/grammar module, the intent module, the potatohead module and restart it.
Best of all is the neural network module. Problem is it needs a lot of training data.
Don't forget:
Time flies like an arrow; fruit flies like a banana.
I have to take exception to those proposing the backup be hosted on GitHub/Dropbox/etc. This may be fine while developing. But it is not guaranteed secure storage.
These days of cloud storage pose a real problem for users. What guarantee do you have they will be there tomorrow? Absolutely none. Google have shut down some of their services, although they have given reasonable notice. There is a case where a cloud storage host was shut down by government authorities and its computers confiscated with no warning, because the authorities believed pirated software/movies were being hosted on the site. While likely true, the reality was that other legitimate users who paid for the cloud service were left high and dry. No access to their data!!!
You need to be in full control of the proper backup. You cannot, and should not, rely on a third party, who BTW is providing a free service and makes no guarantees that this service will be here tomorrow!!!
Your company's main backups (including documentation) should be held in two separate places, under your company's control. Company operating certifications (ISOxxxx) have been auditing these procedures for many years.
I have to take exception to those proposing the backup be hosted on GitHub/Dropbox/etc.
I totally agree with all points you are making but I think you are mixing up some different issues as if they were one. Namely: backup, source code management, security.
Source code management (git for example) is not backup (Dropbox for example). Security is totally another thing.
This may be fine while developing.
It's whilst developing that a source code management system is the most useful. If you are not developing, you no longer need it. But for much useful software, development never stops. Until it ceases to be useful.
But it is not guaranteed secure storage. ... What guarantee do you have they will be there tomorrow?
No it is not. And none. That is what backups are for.
You need to be in full control of the proper backup. You cannot, and should not, rely on a third party, who BTW is providing a free service and makes no guarantees that this service will be here tomorrow!!!
Exactly. Totally agree.
Your company's main backups (including documentation) should be held in two separate places, under your company's control.
I still agree. Except three or more places would be better. If you only have two and one gets corrupted/out of sync, how do you know which is the correct version?
Having said all that, I think you will agree that even just having a copy of your source and docs on Dropbox/GitHub/Bitbucket or whatever is already a vastly better situation than having a single copy on a USB stick. And then forgetting where it is!
Git by itself is not a backup. No matter Github, Bitbucket, whoever. But Git is a distributed system. Your git repo is easily cloned to as many places as you feel makes you "safe". Local or otherwise. With the advantage that you get a very useful source code management tool into the bargain.
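To make that concrete, a second copy is only a couple of commands away. A minimal sketch, where the repo name, the remote name "backup" and the USB drive path are all just examples:

    git clone --mirror myproject /mnt/usbdrive/myproject.git

Or, from inside an existing working copy:

    git remote add backup /mnt/usbdrive/myproject.git
    git push --mirror backup

Repeat that push whenever you want to refresh the copy, and point the same trick at a NAS, a second PC, another hosting service, whatever makes you feel "safe".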
There is some ambiguity in your use of "secure". Is that "secure storage" as in there is very little chance of losing anything? Or is it "secure" as in there is little chance of code leaking out to those you don't want to have it?
@rest, agreed, configuration is not backup; that is only a small part of it. Git may be great, but it is only source control. Major projects use more than one backup method, including off-site storage, so really you have three: local working, GitHub, and either Iron Mountain, a safe deposit box, a storage shed or a garage.

But the concept of CMM is far more. You may have every line and version of code you ever wrote, but your project may still be a brick. You need the documentation to tell you what you planned, what you did, and how to implement it. What tools did you use? If they are not controlled and saved, how do you recreate the same product down the road if needed? Sometimes a few years down the road, which seems to be decades in techno-years. This is why there are still craploads of COBOL out there, in spite of the '90s "learn C and save your job" mantra. You don't dump billions of dollars worth of tested code for the next thing.

And you may even need to consider hardware in the configuration management model. Do you still have a system with the proper OS (which should have been controlled along with your tool chain) to even try to recreate the last release, in order to maintain and release the next version? Test plans and tools also need documentation and control. The OP's apparently not doing this put an entire enterprise at risk.
For the pros out there, this is preaching to the choir; for the hobby developers it should give a hint as to the underlying reasoning for my last and some other posts in the past. For those hobbyists who want to do true pro-level projects, this should give an idea of some things to consider, or some ideas to begin exploring reliable, repeatable and maintainable projects.
After one experience I had, I can't stand everything being networked, especially when the server isn't local.
As soon as the server goes down, for whatever reason, you can't do anything, not even print to a printer that sits in front of you.
Computers come with CD/DVD drives, or you can easily buy a USB version, so just burn a few CDs.
Actually I was thinking of `merkin, after the character of U.S. President Merkin Muffley in the Peter Sellers movie "Dr Strangelove". But that is a joke I guess most `merkins did not understand and perhaps too crude for this forum.
You are right, CMM is a whole other can of worms. Building a system that will still be usable, and maintainable, in 5, 10, 20 and more years is not easy. Over the years I have seen many embedded systems become unmaintainable as the hardware and software tools they depend on become unavailable. Replacing/recreating them is a huge expense (work for me!).
I have seen projects floundering because the documentation has gone missing over the years. I have worked on new systems that had to be compatible with old systems for which the documentation and the source code has gone missing. A nightmare of reverse engineering!
Hence the drive to create cross-platform code using standardized languages like C and C++, in the expectation that compilers and other tools will be around well into the future and the code will be buildable for future hardware.
Otherwise you are stuck having to keep hold of binary executables and such of old tools and end up having to run the code in an emulator of some kind when the hardware is gone.
After one experience I had, I can't stand everything being networked, especially when the server isn't local. As soon as the server goes down, for whatever reason, you can't do anything, not even print to a printer that sits in front of you.
I sympathize.
Luckily, when using git for source code control, one can continue working if the project's git server is unavailable.
Git is a distributed system. The git repo on my local PC is quite equivalent to the one on the project's git server or any other clone a developer has anywhere in the world.
Work can continue; changes get committed to the local repo. They can be pushed to the project's repo later. Heck, the project's repo could vaporize and it would not be a big deal. In fact we would hardly notice.
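In day-to-day terms it looks something like this; the file name, commit message and branch are only examples:

    git add main.c
    git commit -m "Fix widget timing"
    git push origin master

Everything up to the push happens entirely on your own machine. If the server is down, the push simply waits until it is back; the commits are safe in your local repo in the meantime.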
I have/had an onsite fireproof safe - at home - as I have worked from home almost all of my working life. A secure backup copy (multiple incremental copies, including software tools) was always in the safe, and a regular additional copy was stored offsite, at a friend's house. He did similar work, so we swapped backups. When CD burners became available, we bought one between us.
I still have backups going back to the early '80s. Because of the way I back up, I can prove beyond any doubt that I wrote my code. Sadly, none of this old code is in use today.
Prior to my Windows 10 laptops, I have full logs of every piece of software installed on my PCs, including all settings, serial numbers, passwords, etc. The same for servers that I installed, including originals or copies of the software installed.
I have no need for git or any other source control because I have always had an extremely regimented, perhaps paranoid or OCD, backup methodology. I owe all this to a dear mentor and friend that I met in 1974 when I started to work on minicomputers.
Take my advice as you will. You truly need to own your own backup/security, whatever that may be. I have seen so many disasters from those that thought they had it covered, until they needed it, when they found out just how inadequate it was.
At the time, the title of the comment made sense to me; however, as more comments were posted it became clear that I was off track and the title change is fine with me.
I am an old timer who used to build and program computers that ran assembly code and interfaced with simple devices such as annunciators, switches, and the like, but the computers flew jet aircraft perfectly well. With the advent of Microsoft do-everything computers and software it is no longer the case, I just read from the comments, that writing code into a computer can be depended upon to be saved where it should be saved. Back in the days of Artificial Intelligence software I wrote programs that would modify code depending upon the results of tests performed on electronic modules. The modified code would set up the waveform analyzers and signal generators in the test station more efficiently than originally designed, to reduce factory test time. These computers were PDP 10s and VAXs. These machines were amazingly dumb; however, that was their best characteristic.
Since I cannot get a PDP 10 or VAX computer any more... my best solution is to disconnect the USB ports to the bank of BS2s and Propeller microcontrollers, since the microcontrollers do all the work autonomously. When changes are needed the USB port will be connected, the adjustment made, and then disconnected.
Back in the days of PDP 10 and VAX computers, software was created to serve the needs of its users. It was reliable. It did what you told it. It did that repeatably.
In this modern world of Win x, y, z, Apple whatever, Android this and that, the OS and the applications seem more geared toward serving the needs of their owners: MS, Apple, Google etc.
(We do not own the software, we only use it under license).
If it happens to be useful for us users that is incidental.
For this reason, among others, I am a strong supporter of Linux and Free and Open Source software. Better to have something we users can control.
This has gone quiet, so hopefully you found your data. If you need help, link me info on the "Eee laptop". I usually use Hiren's Boot disk, either CD or USB stick, to run a stand-alone copy of XP; it will see all files on the main disk. If this fails to find the files, then a recovery is the next try.
BTW I used to fix and program Friden/Singer/ICL System Ten mini-computers. They were also programmed in assembler. They were reliable. And the software and operating system were reliable, and well understood. Nothing happened without the users'/operators' permission. Upgrades to software and the OS could also be relied upon to work. The same applies to competing computers, which included the PDP 10, IBMs, etc.
Unfortunately, with the cheaper PCs, software and OSes of today, those days of reliable computing and control are long gone. What happened???
Since I cannot get a PDP 10 or VAX computer any more...
The PDP-10 was one of my favorite computers. I did a lot of work writing assembly code for the PDP-10 under TOPS-10. Of course, it's impossible to emulate the PDP-10 on the Propeller. :-)
Not really. The first emulator I wrote was for a PDP-11, which is a 16-bit machine, on a PDP-8, which is a 12-bit machine. There was a lot of shifting and masking required, but the emulator worked. Of course, now I'm saying it isn't actually impossible, so that probably means a PDP-10 emulator will never be done. :-(
Still, it would take quite an effort to come up with an emulator that ran at a reasonable speed and made efficient use of the available memory on a P1 or a P2.
The Prop would do a nice job at emulating the PDP10.
I have always thought about emulating the ICL System 25 on the Prop. It has an 80-bit instruction word, but it is decimally addressed, as are all instructions. I wrote an emulation on the PC with an 80486 @ 33MHz and it ran on average 3x faster. One of my customers did a full day's validation - the program initialisation that took 15 minutes took less than a minute!
I specifically targeted the 486 assembler instruction set.
Sorry, could not resist. This 'mercan speak trips me up all the time.
Hey, I have not been here much. Startup is intense. Missed seeing you in the States. Next time one of us travels, maybe.
It would be great to hook up in Euclidean space. Perhaps, possibly, maybe, I'll be over the pond for another project next year.
Thank you for all the information.
Discovery
Merry Christmas.
Thank you.
Discovery
All is not lost. We have Linux and BSD.
Discovery
Warning: We have an "Impossible On The Propeller" alert!
Well, the instruction set is here:
http://pdp10.nocrew.org/docs/instruction-set/pdp-10.html
Wouldn't 36-bit words be tough to emulate?
... and 36 is a nice multiple of 9, which happens to be a well-known number of bits when it comes to Propeller instructions!
Henrique
P.S.
Only after I posted my last comment did I take a first look at the instruction set linked above.
It's wonderful!
Both the P1 and the P2 could deal with an instruction set of this kind, both for decoding and simulating it.
Sounds like an interesting project should someone be so inclined. Could it be anything other than an academic project and a curiosity?