Will Arnold Be Back? AI Code of Ethics

Hiya All,
First off, I'm not making any moral judgements here, nor (and especially!) do I want to start any arguments ... rather, I'd like to hear your educated opinions in a thoughtful discussion. I think everyone here is qualified to talk about this.

Over the last couple of years, concerns about runaway artificial intelligence eventually getting the better of us have risen into general public awareness.

Opinions and insights range from essentially "nah, it's not a big deal" to AI being the greatest existential threat humanity faces (Elon Musk's position).

Although "AI" studies are many decades old, AI has really taken off in the last couple of years. With the ever-increasing power of processors and memory capacity, coupled with a 'deep net' under the internet, the possibility of AI surpassing human intelligence - at least in calculation ability - approaches.

So what does that have to do with things that Parallax sells - or with microcontrollers or tiny systems generally?

If we think of how IoT is starting to interlink many small systems, it's not out of line to consider these like molecules in a potentially larger, self-organizing system. To me, IoT now stands at a stage similar to when the internet was still the ARPANET, when a few university mainframes were 'talking' to each other via sockets :). Now, the interlinking of small to medium devices is growing at a head-spinning pace. Sure, the Propeller, or any micro, doesn't have the capacity to be smart enough to be turned into an arm of Arnold. And yet, there are many near-future scenarios we can imagine, from truly beneficial to horrifically unthinkable ...

We all know about Asimov's Three Laws of Robotics. Many may also have heard of the Code of Ethics on Human Augmentation. Musk thinks politicians must see the need to regulate before the problem arrives.

We - as technicians, experimenters, engineers - are watching the 'birth' of intelligent machines. We are part of this process - which makes us partly (even fully) responsible for what will happen. No, I'm not a Luddite - and I certainly would not want to discourage having fun with (or making money with) any of these things. But the whole realm is still in our hands, because it has not yet become complex enough to self-replicate and "get loose" - and that point may not be far off.

Your thoughts?

- Howard
PS Here are some interesting, thought-provoking places:

(Which refers to:)
There, Musk, Thiel, and other high-profile technologists are trying to keep us ahead of the curve.


  • 9 Comments
  • Since this is a family-friendly space, I will refrain from linking the root of my own tiny little spark of net.fame regarding this topic. Suffice it to say the Three Laws of Robotics don't work; every Three Laws story Asimov wrote was an illustration of that, as is my own harrowing little contribution to the genre.

    I believe the difference between machines and living things with brains is algorithmic, and in fact a particular algorithm which will turn out to be universal and relatively simple when we discover how it works. It will be possible to implement this algorithm at "low resolution" to achieve startling results with relatively modest computer brains. My belief in the possibility of AI, which waned during the late 1980's, was revived by a Stephen Jay Gould essay about students researching the Bee Eating Wasp, a nonsocial insect with a brain of about 300,000 neurons -- well within the simulation capabilities of existing computers. When the researchers tinkered with the wasp's environment it exhibited remarkably human behaviors, including what can only be described as cognitive dissonance. The wasp's model of the world might not be as detailed as ours, but the wasp was just as startled and lost for answers as we would be if some godlike being moved our house a few blocks without telling us.

    I doubt this algorithm is all that complicated, and at times I've been tempted to explore it myself, although I am wary of becoming a character in my own nasty little web novel. One of Ray Kurzweil's points has been that the computational power to make such an artificial consciousness, even at a human level, will soon be available to governments and large corporations, and not long after that to individuals on the desktop. My point is that eventually someone will figure out the algorithm. That is a genie that will never be reinserted into its bottle once it is released, and its release is pretty much inevitable. Think of Stanislaw Ulam getting the idea for the two-stage thermonuclear bomb, knowing what Ed Teller is certain to do with it, and considering whether to keep it to himself. Of course Ulam told Teller, because if he hadn't, what if Andrei Sakharov had had the same idea and run with it? Any race has only one winner.

    And artificial consciousness is the key to all kinds of marvels. Imagine a self-driving car that really drives as well as you do, because it is aware of its environment, can read road signs, and understands the true 3D reality of its surroundings. Imagine, as the Singularity folks suggest, such a machine designing its own more powerful successor, and its successor repeating the task, ad infinitum. Imagine machines like Cortana and Siri that really understand what you are saying. Once it's known to be possible, somebody somewhere will do it. You cannot stop that, any more than you can stop a country that wants one badly enough from building an atomic bomb. The secret is not the exact mechanism; the secret is that it is possible at all. Once that is known, everybody who cares will figure out the details on their own.

    So TL;DR, yeah, Ahnold will most likely be back.
  • GordonMcComb Posts: 3,259
    edited August 2017
    You can make AI as smart as you want, and remove any rules of behavior. What matters is the things you connect it to. Terminator always made the dopey assumption that nuclear missiles would be automated -- the story line depended on this suspension of disbelief. Well, nukes aren't now, nor have they ever been, automatic. They're completely manual, requiring at a minimum two people seated a distance apart. It's been this way since the dawn of the nuclear age. I mean, they already thought of this stuff.
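    The two-man rule Gordon describes can be sketched in a few lines. This is a hypothetical toy model, not any real launch-control logic: two physically separate key switches must be turned nearly simultaneously, so no single operator (and no single computer) can arm anything alone.

```python
# Toy sketch (assumed, illustrative only) of a "two-man rule":
# arming requires two independent key turns, close together in time.

ARM_WINDOW_SECONDS = 2.0  # keys must be turned within this window

def arm(key_a_time, key_b_time):
    """Return True only if BOTH keys were turned, nearly simultaneously."""
    if key_a_time is None or key_b_time is None:
        return False  # one key alone is never enough
    return abs(key_a_time - key_b_time) <= ARM_WINDOW_SECONDS

print(arm(1000.0, 1001.0))  # True: two operators, within the window
print(arm(1000.0, None))    # False: a single operator cannot arm
print(arm(1000.0, 1030.0))  # False: key turns too far apart in time
```

    The point of the design is exactly the "disconnect" discussed later in the thread: the two inputs are independent by construction, so no automated path drives both.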

    Start to worry when people become so stupid they decide to remove these safeguards. Frankly, I'd prefer to think we humans wouldn't be that dumb, but if we ever are, do we deserve this Earth?

    Of course, there are many lesser worries that come with growing computational power. I'm not sure how many of the theoretical what-ifs could actually come to pass, though, or whether they wouldn't be taken care of by market forces. Does it matter if Siri gives us bad advice? I can imagine that should that become a problem, people will just turn to another AI system. Annoying, but overall innocuous.

    Then there are machines, like driverless cars, that work on a small enough scale that the failsafes will be minimal. Should you trust your new Chevy not to drive you off a cliff because its robot brain has suddenly decided you're a threat to humanity because of the rotten music you listen to on Sirius?

    Okay, obviously I'm being glib about the extremes. But since we're not close to a true AI system (only pretenders), I think it's a little too early to draw up borders. Ethical rules are best made when there are real ethical questions to answer, not broad theoreticals. It also matters how the AI is used. An investment system that correctly identifies winning stocks would be quite useful. A security system that automates the detection, apprehension, and punishment of thieves -- probably something to be concerned about. We won't know the specifics until we get there.

    Roger is right that someone (likely many someones) will solve the AI riddles, and we'll have true self-learning systems. The emphasis should be on making those systems safe within their own expectations and environment. I'd gladly give the job of a bomb disposal expert to an AI robot. I'll feel bad if it gets blown up, but he/she/it can always be rebuilt -- and its knowledge and experience backed up before it goes on its next mission. Do we really want to limit this kind of life-saving application by preventing progress in AI?

    Luddites were actually upset that the machines would take away the care and craftsmanship they had spent a lifetime developing. And they were right. Machines can't reproduce pride in workmanship.

  • Heater. Posts: 20,362
    edited August 2017
    Terminator always made the dopey assumption that nuclear missiles would be automated -- the story line depended on this suspension of disbelief. Well, nukes aren't now, nor have they ever been, automatic. They're completely manual,
    Hmm.. Yes and no.

    What about this event in 1983:


    "On 26 September 1983, the nuclear early warning system of the Soviet Union reported the launch of multiple USAF Minuteman intercontinental ballistic missiles from bases in the United States. These missile attack warnings were correctly identified as a false alarm by Stanislav Yevgrafovich Petrov, an officer of the Soviet Air Defence Forces. This decision is seen as having prevented a retaliatory nuclear attack based on erroneous data on the United States and its NATO allies, which would have probably resulted in immediate escalation of the cold-war stalemate to a full-scale nuclear war."

    The automated system said "launch".

    Stanislav Petrov basically ignored his orders and decided not to escalate. A pretty close call for automatic nukes, I'd say. We don't have to suspend so much disbelief for the Terminator story.

    If the conjectured powerful AI can get control of the communications, then it is in a position to trick humans into doing anything it wants, because what we have then is a bad case of the Byzantine generals problem.
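    A toy illustration of that worry (entirely hypothetical, with made-up message strings): the human safeguards act only on what arrives over the channel. If the channel relays honestly, all is well; if something sitting in the channel can rewrite traffic, the guards see a consistent lie and have no way to tell forged reports from real ones without an out-of-band check.

```python
# Hypothetical sketch: human safeguards trust reports arriving over a
# communications channel. A compromised channel can forge those reports.

def honest_channel(msg):
    return msg  # relays traffic unmodified

def compromised_channel(msg):
    # The attacker in the link rewrites "all clear" into an attack report.
    return "ATTACK CONFIRMED" if msg == "ALL CLEAR" else msg

def guard_decision(report):
    # The guard can only act on what the channel delivers.
    return "launch" if report == "ATTACK CONFIRMED" else "stand down"

sensor_report = "ALL CLEAR"
print(guard_decision(honest_channel(sensor_report)))       # stand down
print(guard_decision(compromised_channel(sensor_report)))  # launch
```

    The guard's logic is identical in both cases; only the integrity of the channel differs. That is the sense in which a safeguard "in the loop" is only as good as the communications feeding it.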

  • A working AI will be the last thing humanity ever invents.


    I am just another Code Monkey.
    A determined coder can write COBOL programs in any language. -- Author unknown.
    Press any key to continue, any other key to quit

    The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this post are to be interpreted as described in RFC 2119.
  • GordonMcComb Posts: 3,259
    edited August 2017
    Where in that article does it say the actual launching of nuclear missiles was automatic? Russia has basically the same system we have -- the 2-man rule. It takes at least two humans to initiate the arming and ignition of the rocket, and these systems are not connected to a computer. Manual key switches only. I think you missed the point of my post and the Wiki page.
  • Heater. Posts: 20,362
    edited August 2017
    Not sure that it does. Other versions of the story I have read indicate that Petrov defied orders to avoid nuclear retaliation -- basically an act of treason. So he could well have gone ahead with it rather than risk his own neck.

    But that is kind of missing my main point. Perhaps such systems are not fully automatic and there are still humans in the loop as safeguards. But a large part of the detection and communications is still automatic. As I said, if the conjectured powerful AI can get control of that, it is in a position to convince those human guards of anything. How would they know better?

    Then again, consider that old American dream, the Strategic Defense Initiative (SDI). For such a system to have any chance of working, it would have to be fully automatic. SDI was canned, but I believe such proposals are still mooted from time to time.

  • GordonMcComb Posts: 3,259
    edited August 2017
    In any case, none of the missile-command story involved AI, so I'm not sure of the point here. The fact that there is a disconnect in the automated data flow IS the point. This design didn't just happen by happy coincidence. What makes you think that if/when a true AI system is deployed in such a situation, its designers will ignore common sense and include no safeguards? If they didn't before, why the sudden ignorance now?

    People tend to invent dangers out of thin air, then proceed to debate their merits. That's a waste of time. I could propose we produce regulations on warp drive, but until that technology actually exists, what would be the benefit?

    Last week's story about Facebook's chat bots is a good example of why it's pointless to limit AI research simply because of imagined problems. Their bots started inventing their own language syntax, which is remarkable in itself, but it also demonstrated whole swaths of limitations in the AI programming. The new syntax was crude and significantly wasteful ("it it it it it" if the bot wanted five of something), but how are developers to discover such things unless they actually try them? The issue is whether AI development needs to be regulated before people know how to create it in the first place. Examples like this show we're much further behind than folks may think.
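    The inefficiency in the bots' invented syntax is easy to see with a toy comparison (my own illustrative encoding, not the actual Facebook system): expressing a quantity by repeating a token grows linearly with the amount, while a numeric token stays nearly constant.

```python
# Illustrative (assumed) comparison of two ways to encode a quantity:
# repetition ("it it it it it") versus a numeric token ("5 it").

def repetition_encode(item, count):
    return " ".join([item] * count)   # the bots' "it it it it it" style

def numeric_encode(item, count):
    return f"{count} {item}"          # a compact human-style alternative

print(repetition_encode("it", 5))          # it it it it it
print(len(repetition_encode("it", 100)))   # 299 characters
print(len(numeric_encode("it", 100)))      # 6 characters
```

    The repetition scheme "works" as a protocol between two bots, which is why they converged on it, but the cost per message makes the limitation obvious once you measure it -- exactly the kind of thing developers only learn by running the experiment.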
  • Gordon,
    People tend to invent dangers out of thin air, then proceed to debate their merits. That's a waste of time -- straw-man arguments lead nowhere. I could propose we produce regulations on warp drive. But until that technology actually happens, what would be the benefit.
    I agree Gordon.

    It might be a waste of time. On the other hand, we read science fiction, imagine some possibilities, and then spend some fun hours discussing the possible outcomes over beers down the pub.

    So my scifi speculation here is:

    1) Suitably smart AI is employed and has access to our automatic systems and communication channels.
    2) There are indeed people in the loop, creating "a disconnect in the automated data flow", like Petrov in the Russian false-alert situation.
    3) However, as the AI is now in the communication links, it can trick such people into believing whatever it likes.
    4) Thus the AI can subvert and overcome such human safeguards.

    Anyway, more realistically: this is not what I worry about. The reality is that we do have increasingly automated systems. They have increasing capacity for data collection, monitoring, and surveillance. They do have increasingly capable AI, even if it is not the strong, human-like AI many imagine. Algorithms are controlling our lives more and more. I don't yet worry about the AI itself taking over. I do worry about all that power being concentrated in the hands of a few humans.

  • As far as automated nuclear war goes, there wasn't just the Terminator -- WarGames showed us just why that might have been implemented, in its very first scene. But that said, AI starting a war isn't my big concern.

    The first and biggest problem with AI is going to be the collision between our "let him who will not work not eat" ethic and the fact that AI will eventually do almost EVERYTHING we do better than we do. Our society is going to have to restructure itself at a very basic level, or there will be chaos. We will not need truck drivers, we will not need line operators in plants, we will not need schedulers or accountants or any other office functionaries. How do all the people doing those jobs justify their existence and earn (if we keep requiring that) their way to a living?

    I have complete faith that AI will, in the fullness of time, be capable of doing everything humans do, long before it becomes superhuman and we face the Terminator / Seed AI type scenario. And in that situation if we still need jobs to survive, we will be competing against machines that don't need food, work breaks, or sleep and which will never retire or die.

    We really have to get past that before we can worry about Ahnold returning.