
How to stop your intelligent robot killing you?

Seems you cannot. Especially if you fit it with an emergency stop button:

[Embedded video: Computerphile's discussion of the AI "stop button" problem]

Comments

  • erco Posts: 20,244
    Dude wastes our time with some very silly arguments. Asimov's 3 laws covered all this what, over 60 years ago?
  • WBA Consulting Posts: 2,933
    edited 2017-03-06 07:09
    Yawn, some very elementary arguments to a complex problem. Couldn't make it through the whole video, as I realized too soon it was wasting my time. One fundamental flaw is the assumption that a true "e-stop" for an AI machine implies the AI has any awareness of the e-stop. Any competent AI system would of course want to protect the e-stop from human intervention if the AI believes itself to be more intelligent than humans.

    If an AI system has the function of serving you tea, why would it think it has the right to believe "something easier" is a better option than that? Apparently his perception of AI is one where the AI is allowed to think its decisions outweigh any restrictions of courtesy, laws, human safety, etc. His arguments about the AI "finding out" about the e-stop are very weak in that they assume a path of self-learning in which an e-stop must exist, so the AI would eventually figure it out. Really!?!?
  • Robot B-9 had a little pack on its side that, when removed, stopped the robot cold. Seemed to work for the Space Family Robinsons, so good enough for me, too.

    I didn't watch this particular video, but there's a difference between AI and sentience. One you program with whatever control loops you want. End of story (though be careful of the corner-case bug). The other evolves into thinking patterns you have no control over. There's no evidence that we are anywhere close to developing sentient beings.

    The kind of danger AI poses is the "pinky finger" quagmire: as humans, the less we do with our hands the more at risk we are of losing digits to evolution, as they become less and less crucial for survival. AI imposes a risk of reduced critical thought -- if a machine can be made to perform complex decisions, humans will become weaker at developing this kind of critical thinking. Not to drive too sharp of a point, but there is a growing Siri-based population that has no clue what to do in a library. You know, places with books.
  • erco Posts: 20,244
    edited 2017-03-06 16:41
    Not to drive too sharp of a point, but there is a growing Siri-based population that has no clue what to do in a library. You know, places with books.

    Agreed. Smarter phones=dumber people. Who's this "Dewey Decimal" dude anyway? Stupid name, DUH!

    Nick Fury to disoriented young helicarrier pilot: "Is the sun coming up? Put it on the left!"

  • If we want to worry about this, then how about worrying about all of the people running around without emergency stop buttons? ;-)
  • This is why I recommend everyone carry a pocket EMP.
  • I do think that developing A.I. is not a good move anyway. As soon as you have software smarter than yourself and able to reproduce itself without your intervention, you will pretty soon no longer understand the source of the software.

    Stop button or not, you will lose the race. And not just you, but the whole of mankind.

    As a friend of mine from Singapore told me:

    "All apes are able to understand and can speak human languages. But they don't show this to us because they know that they would then have to work."

    Enjoy!

    Mike

  • Heater. Posts: 21,230
    Gordon,
    ..there's a difference between AI and sentience
    Yep. Agreed.
    One you program with whatever control loops you want. End of story
    Well no. I presume by "one" there you mean the first one mentioned, that is, AI. I think the point is that you don't program AI with "whatever control loops." What they are aiming for with AI is a thing that learns something for itself and can do some reasoning for itself. As such, its behavior is going to be a surprise. Not something you programmed in exactly.
    The other evolves into thinking patterns you have no control over.
    By "the other" there I presume you mean "sentience". Nobody is talking about sentience here. Not as we might find in a typical definition:

    "Sentience is the capacity to feel, perceive, or experience subjectively. Eighteenth-century philosophers used the concept to distinguish the ability to think (reason) from the ability to feel (sentience)"

    We are not talking about "feeling", "emotion", "consciousness", "self awareness", "desires", "motives", etc. Only about machines with some intelligence that have some function to maximize.
    There's no evidence that we are anywhere close to developing sentient beings.
    Very true. Given that we cannot even define "sentient" in any way that people agree on. I mean, for example: Burn my finger and I will feel pain. I am sentient at least as far as that. How can I define a machine that feels pain like me? Let alone build one. How would I know when it is really working? Really sentient, as opposed to just responding like it might be?
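    Stepping back to the "function to maximize" framing above, a toy sketch may make it concrete (every name and number here is invented for illustration; none of it is from the video). An agent that simply picks the highest-scoring action treats "let yourself be switched off" as just another outcome, worth exactly what its utility function says it is worth:

    ```python
    # Toy "function maximizer": the agent picks whichever action its utility
    # function scores highest. Being stopped is only acceptable to it if the
    # function happens to value that outcome. All values are hypothetical.

    def choose_action(actions, utility):
        """Return the action with the highest utility score."""
        return max(actions, key=utility)

    actions = ["make_tea", "block_stop_button", "allow_shutdown"]

    # A naive utility: only tea matters, so nothing discourages resisting the stop.
    naive_utility = {
        "make_tea": 10,            # tea gets made... unless a human presses the button first
        "block_stop_button": 11,   # tea is guaranteed; the function never penalizes this
        "allow_shutdown": 0,       # no tea, so the maximizer never picks it
    }.get

    print(choose_action(actions, naive_utility))   # -> "block_stop_button"
    ```

    Nothing here involves feeling or self-awareness; the behaviour falls straight out of whatever numbers the designer (or the learning process) attached to each outcome.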

  • Are thinking and reasoning the same thing, and are they a needed part of intelligence?

    Learning by experience is, basically, how neural networks get trained. If the trained neural network can then distinguish "good" apples or car parts from "bad" ones, could we call that process reasoning or thinking?

    How do we find out whether an A.I. has self-awareness, and is self-awareness needed for intelligence?

    What is sure is that intelligence is not needed for self-awareness. There are examples everywhere.

    Enjoy!

    Mike
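    A minimal sketch of the "learning by experience" Mike describes, using about the simplest trainable network there is (a single perceptron). The apple features, numbers, and training rule below are invented purely for illustration:

    ```python
    # Tiny "learning by experience" demo: a perceptron is shown labelled
    # examples and nudges its weights until it separates "good" apples
    # from "bad" ones. All data here is made up for illustration.

    # Each apple: (firmness, blemish_count); label 1 = good, 0 = bad.
    examples = [
        ((0.90, 0), 1), ((0.80, 1), 1), ((0.85, 0), 1),
        ((0.30, 5), 0), ((0.40, 4), 0), ((0.20, 6), 0),
    ]

    w = [0.0, 0.0]   # weights, adjusted by experience
    b = 0.0          # bias

    def predict(x):
        return 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0

    # Training loop: whenever the current guess is wrong, nudge the weights.
    for _ in range(20):
        for x, label in examples:
            error = label - predict(x)
            w[0] += 0.1 * error * x[0]
            w[1] += 0.1 * error * x[1]
            b    += 0.1 * error

    print(predict((0.95, 0)))   # 1: looks like a good apple
    print(predict((0.25, 7)))   # 0: looks like a bad apple
    ```

    Whether the arithmetic those fitted weights perform deserves to be called "reasoning" is exactly the question Mike is raising; the mechanics themselves are just repeated multiply-and-compare.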

  • Heater. wrote: »
    Not something you programmed in exactly.

    You do in fact program critical control parameters exactly, no matter what the system. The programming code to develop cognitive functions, self-learning algorithms, etc., for any AI system contains countless parameters that define normal operating conditions and behavior. It's seldom "open ended." An example would be a heat sensor connected to the machine's CPU. While a machine could learn the temperature right before it goes on the fritz (assuming it has some sort of stateless reboot routine), it's more productive to provide that kind of low-level information in the first place.

    Since protections, for itself as much as anything, are a sensible idea for any machine, it stands to reason a programmer will add these end-stops as deemed appropriate. Even humans learn from what they're told (the control parameter), not just from what they discover when they put their hand on a hot burner on the stove (the independently discovered parameter). Both are part of "learning," and one is not inherently better than the other.

    In science fiction, a machine might learn completely from scratch, but it's counter-productive and wasteful to do that in reality. The idea of AI is that it builds upon a bedrock of existing information.

    Sentience is simply the capacity to understand, which currently no known machine does. (If you think you see one that's understanding you, it's a parlor trick.) You can call it "feel" or "perceive" but these are subjective, assume some kind of emotional uptake (not necessary for sentience), and aren't measurable in a machine in any case.

    It's possible to create a system that will try to preserve itself, but it takes sentience to understand the concept of death. The machine could learn some of the methods of its own termination -- perhaps by reading its operating manual, or by watching another machine like it be turned off (though that alone doesn't necessarily prompt a self-preserving defense).

    The notion that some generic AI will somehow stumble upon the concept of its own mortality is (at least for now) a fiction. Google's self-driving cars are really good at driving. If you want one to play professional football, you need to start over. I haven't reviewed Google's code, but I'd be surprised if their cars would know how to deal with a quarterback sneak. Give it a pedestrian in the middle of the field, and it'll do its best to avoid it.
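    Gordon's heat-sensor example can be written down directly: a hard-coded limit (the exactly-programmed control parameter) that always overrides whatever the self-learning part wants to do. A minimal sketch; the threshold, names, and stand-in policy are invented for illustration and not taken from any real controller:

    ```python
    # Hard-coded "end-stop" overriding learned behaviour (illustrative only).
    # The limit is supplied by the programmer up front rather than discovered
    # the hard way, as Gordon suggests.

    CPU_SHUTDOWN_TEMP_C = 95.0   # fixed control parameter, never learned

    def next_action(learned_policy, sensor_temp_c):
        """Defer to the learned policy, but never past the hard limit."""
        if sensor_temp_c >= CPU_SHUTDOWN_TEMP_C:
            return "halt_and_cool_down"        # the end-stop wins, no negotiation
        return learned_policy(sensor_temp_c)   # otherwise, let learning decide

    # Stand-in for whatever the self-learning part has decided to do.
    def learned_policy(temp_c):
        return "run_faster" if temp_c < 90.0 else "run_slower"

    print(next_action(learned_policy, 72.0))   # -> "run_faster"
    print(next_action(learned_policy, 97.5))   # -> "halt_and_cool_down"
    ```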
  • Duane Degn Posts: 10,588
    edited 2017-03-07 03:51
    Guys, I think you're missing the point of this being a general AI. The AI would be capable of solving problems on its own, without a built-in program for how to do each task.

    Unless there is something magical/spiritual about the human brain, I don't see how one can maintain that machines can't be sentient. All each of us knows is that we ourselves think. We think we are sentient, but I'd say this is basically because we defined "sentient" to mean our own ability to recognize ourselves. "I think, therefore I am."

    The only proof anyone else is sentient is based on our experience of being sentient ourselves. This "proof" is based on a sample size of one. Who's to say other people are really sentient? Maybe they're just machines made of flesh and blood programmed really well to fool us into thinking they're sentient.

    If an AI behaved like a human, why would we not think it was sentient? Why would calling AI robots not sentient be any different from my calling you guys not sentient? After all, you're just machines made of flesh and blood, with firmware made of neurons instead of silicon. All I'm reasonably sure of is my own sentience.

    I agree, we're not close to having AI with general intelligence, but the questions posed by Computerphile aren't about our current state of AI; they're concerned with some future state. Once an AI is smarter than we are, it will be too late to ask these questions. Once AI is better at programming an AI than we are, the leap in intelligence will be huge. We will seem like ants to a super AI.

    [Embedded video: Sam Harris's talk on the risks of advanced AI]

    As Sam Harris suggests, it's hard to muster up any real worry about AI but a super AI could easily be mankind's greatest threat at some point in the future.

    I'll add a few more thoughts about AI. Any AI which we develop will have access to pretty much everything ever written about AI (including these forum posts). It will know we're concerned about an AI taking over the world. Surely some sort of super AI will know how to appease us and convince us we have nothing to worry about. In the meantime it could be programming all the latest technology to have some sort of hidden backdoor which it could use for its own purposes when it decides we ants are becoming a nuisance.

    I suggest the questions posed in the Computerphile video are very important.
  • Pretty much all of Asimov's Three Laws stories were about how the Three Laws fall apart in edge cases even though they look so reasonable. A lot of people have put hard thought into this idea, all of which is highly speculative until we know what actual AI ends up looking like. And of course there is the risk that it will end up looking like Skynet, and we really won't know that for sure until someone builds it.
  • Duane Degn wrote: »
    Guys, I think you're missing the point of this being a general AI. The AI would be capable of solving problems on its own, without a built-in program for how to do each task.

    Well, I think the point is this kind of AI doesn't exist, any more than my example of the self-driving car capable of doing more than driving a car. Worrying now about something that doesn't exist is rather pointless, because by the time it does, if it does, who knows what the science will be or what it will contain.

    I'm not a believer in the generational leap of "super AI," or that it's even been shown to be plausible. Reminds me of art-by-computer. Sure, some of it is interesting, but lacking an ability to appreciate the art keeps it from being more than mediocre. How does a computer know what a computer should do? I love that Roger posted above so I can make the reference ... Colossus is still just a book and a movie, and it's no closer to being a reality than it was in 1970.

  • Colossus is still just a book and a movie, and it's no closer to being a reality than it was in 1970.

    Did you see the Jeopardy playing computer named Watson? I wouldn't have thought such a thing was possible, but it easily defeated the former Jeopardy champions. I personally think this sort of machine intelligence is much closer to Colossus than the machines we had in 1970.

    After some of the trial questions the programmers were asked "how did it know that?" and the programmers didn't know how the program was able to figure out the answer (given as a question).
    I'm not a believer in the generational leap of "super AI," or that it's even been shown to be plausible.

    Our minds do this sort of thing. Unless there is something supernatural about the human mind, I don't see why an AI couldn't do anything a human could do.
    Worrying now about something that doesn't exist is rather pointless,

    I think the argument is if we wait until a super AI exists, it will be too late to worry about it.

    I personally have a hard time "worrying about it," but I thought Sam Harris' talk had some convincing arguments on why we should worry about it.

  • GordonMcComb Posts: 3,366
    edited 2017-03-07 06:33
    Duane Degn wrote: »
    Did you see the Jeopardy playing computer named Watson? I wouldn't have thought such a thing was possible, but it easily defeated the former Jeopardy champions. I personally think this sort of machine intelligence is much closer to Colossus than the machines we had in 1970.

    Watson isn't general AI, and its programming is so specialized it's really in a different category altogether. It is a deep database system that is "tuned" to respond to natural language queries, then do very fast permutation lookups to find what it thinks is the best answer, without actually "knowing" what the answer is. Google on steroids. It *is* machine intelligence, and it's quite remarkable, but just because it excels at computational calisthenics doesn't really mean it's a candidate to oversee SkyNet.

    Since we really don't know much about how the human mind works -- they're still trying to figure out what autism is -- it's impossible to know what the limits are in building a machine to duplicate it. It's pure conjecture. The world is already too full of really horrible things that are actually happening today for me to be overly concerned about a robot refusing to make my toast in the morning.

    Okay, so it's possible (and likely) some researchers will demonstrate a malignant AI system just to show it can be done, but that's still a matter of programming it to behave that way. (Reminds me of Jessica Rabbit's plaintive wail, "I'm not bad, I'm just drawn that way.") Mind you, I'm not saying we should dive into AI with blinders on, but we're so far from anything that can rise up against us, it just seems like even more hubris of the gods. Plus, it's a really old story -- recall one like it first published in 1818.
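    Gordon's one-line description of Watson -- fast lookups over a tuned database, scoring candidates without "knowing" anything -- corresponds to a simple retrieve-and-rank pattern. A deliberately crude sketch with a toy knowledge base and word-overlap scoring (real DeepQA is vastly more elaborate, and nothing here reflects IBM's actual code):

    ```python
    # Crude retrieve-and-rank sketch: score stored candidates against the clue
    # and return the best match, Jeopardy-style. Toy data, toy scoring.

    knowledge_base = {
        "Isaac Asimov": "author of the Three Laws of Robotics and the Foundation series",
        "HAL 9000": "the murderous computer in the film 2001: A Space Odyssey",
        "Colossus": "a fictional defense supercomputer from a 1966 novel and a 1970 film",
    }

    def answer(clue):
        """Return the candidate whose description shares the most words with the clue."""
        clue_words = set(clue.lower().split())

        def overlap(item):
            _name, description = item
            return len(clue_words & set(description.lower().split()))

        best, _ = max(knowledge_base.items(), key=overlap)
        return "What is " + best + "?"   # phrased as a question, without "knowing" anything

    print(answer("This 1970 film features a defense supercomputer that takes over"))
    # -> "What is Colossus?"
    ```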
  • erco Posts: 20,244
    Just saw this new rerelease trailer for 2001 which made me think of this thread. Certainly more ominous and exciting than the original trailer.

  • I don't think we need to worry about losing control of our robots.

    The Robot Whisperer will surely know how to control them.

  • Watson has two things going for him. One is a good sense of humor, and the other happens to be an excellent support system.

    According to author David Gerrold (whom I finally met last year...), he still needs to become sentient.

    Incidentally, erco, your crowd leaves for Oz on Friday.