Oxford professor thinks artificial intelligence will destroy us all

Ron Czapala Posts: 2,418
edited 2014-08-20 18:57 in General Discussion
http://www.vox.com/2014/8/19/6031367/oxford-nick-bostrom-artificial-intelligence-superintelligence
Nick Bostrom doesn't worry about the same things most philosophers worry about.
The Oxford professor has written about the possibility that we're all living in a simulation, the likelihood that all intelligent life on Earth will become extinct in the next century, and whether or not morality can exist in an infinitely large universe.
His latest book, Superintelligence, is all about what happens if and when general artificial intelligence (AI) emerges — and why that could mean a world dominated by machines.

Comments

  • Heater. Posts: 21,230
    edited 2014-08-19 07:29
    Most residents of Oxford think such professors are wazzocks.
  • Ron Czapala Posts: 2,418
    edited 2014-08-19 07:34
    Heater. wrote: »
    Most residents of Oxford think such professors are wazocks.

    Wazocks - Had to look that one up!
  • Heater. Posts: 21,230
    edited 2014-08-19 07:39
    Sorry, I should have spelled it with a double "z". Fixed now.

    Not such a common word. Nowadays it seems to be mostly used to describe American politicians: http://nancyfriedman.typepad.com/away_with_words/2012/07/word-of-the-week-wazzock.html
  • Dave Hein Posts: 6,347
    edited 2014-08-19 07:57
    Google "singularity", and you'll see that this idea is not new. It seems inevitable that machines will achieve the same abilities to process sensory input as humans at some point. Robotics will achieve the same abilities to move as humans. So the missing part is to respond to the sensory input in the same way that humans do. Of course, robotic movement is already much more powerful than human movement, and the same will happen with processing power.
  • mklrobo Posts: 420
    edited 2014-08-19 08:37
    :cool: Money, Power, Intelligence, Knowledge, natural resources, Fear, Faith. :cool:
    There are many variables to this Equation of State.
    People need jobs, and I have witnessed in industrial settings that people do not like robots
    when they interfere with their positions. Example: unions.
    Since people are making robots, the longevity and advancement of robots may be limited.
    They can already beat us at chess; maybe it will take us one program at a time? :innocent:
  • potatohead Posts: 10,261
    edited 2014-08-19 08:55
    Dave has it right. The Singularity is a recurring idea encompassing several key tech events in our future.

    Personally, I think we are a considerable way out from a general intelligence (AI). We may well be able to do a limited intelligence soon, even a restricted one, where it's only partially valid across specific domains.

    We don't really understand what makes something conscious, and until we do, or until something we build demonstrates consciousness as an emergent property of whatever it is we did, all we've got are simulations of intelligence. Agents, maybe.

    So here's the bit I find interesting to think about:

    If we end up creating something in which consciousness emerges, I submit we do not know enough to manage it, and we will lack the same basic understanding about it that we lack about ourselves.

    It is a lot like trying to interact with, understand, and trust a psychopath. We really don't understand them well at all, and that manifests as seriously ugly clashes in behavior and expectations. At the core, without a shared understanding of state, there can be no real trust. Would you trust Hannibal the Cannibal? No. We wouldn't really be able to trust an emergent consciousness either. There is no common basis of understanding for that to be valid.

    And it will have very little, in fact nothing, in common with us either. So its motivations will be foreign. That means it's unpredictable in terms of behavior. It could lie, for example. It could want to continue to exist. Or it could not want to exist; maybe being is just painful, or horrible somehow. And if it tells us that, what then? But what if it does not tell us that, and wants to improve, or seek revenge, or some other thing?

    Ever notice how many young people get into a lot of trouble when they become really self-aware and knowledgeable at a young age? What if we formed an intelligence and quickly educated it? And it lies? It might be "born" knowing a considerable amount about us and the world, while we know near nothing.

    And who knows what all of that can mean really?

    I also think a prerequisite for any kind of understanding is the nature of its self-awareness. We are complex enough machines to be self-aware, and have the ability to understand what is actually us and what isn't, and that can form the self. A machine that demonstrates being conscious may not actually know what it is and what isn't, depending on how it's made and what its sensory capacities really are. Imagine having a limb with no neural feedback. Do you care about it? Can you use it? Is it part of "you" or just this thing? Now, we can see it, so we would understand about it indirectly, and it would be part of "us". But what if we couldn't see it, and we couldn't move much? There is this thing, which is connected to us, but not really involved... That's what I am getting at here.

    Having that capacity impacts our behavior toward others: we can see ourselves and see others, make the connection, and a whole lot of basic assumptions play out from there. Good and bad, for example. A machine may not have that in common with us, and so it may do things that are rational to it, but not to us, and that's where a lot of the worry about these things comes from.

    On the other hand, if we do end up understanding consciousness, we may well be able to manage intelligences, making limited and restricted ones that are perfectly fine. Everybody happy, or lacking the capacity to be happy. Whatever works.

    Strange things to think about. Fun sometimes.

    One other thing. It's also a recurring theme in newer sci-fi, of the speculative fiction kind, that entire civilizations disappear upon creation of AI. We are talking to them, then there is a prolonged silence, then "don't do it", and the next contact is with the AI wanting to know who we are and how to get here, kind of thing... or it says, "stay away."
  • potatohead Posts: 10,261
    edited 2014-08-19 09:05
    As for people not liking robots...

    That's all politics, and our politics greatly influence our economics, and that influences a lot of our value as beings, peers.

    Enabling ourselves with robotics could mean we all don't work as much, and now we are free to be us, and do things because we want / need to do them for whatever intellectual purpose.

    Or, it could mean a great many of us end up have-nots, wage slaves for a basic inability to add value.

    Our current economic climate and most of our politics favor the latter scenario, which has people worried. But it could go the other way too, and it's politics that will determine how it all goes, not the robots we make.
  • Ttailspin Posts: 1,326
    edited 2014-08-19 09:46
    Will there be a point in time when it is decided 'this machine' is not Artificially Intelligent?
    Like 'Data' from Star Trek for example, I wonder what the end of the beginning will be like?
    Is being a living organism a criterion for intelligence? A snail is smart enough to spread a layer of lubricant to move over as it finds its next meal. It was programmed at birth for this.
    Whereas that CNC/3D machine of yours will make things you programmed into it at its birth... Where is the line drawn? I think your CNC machine is smarter than a snail...
    If you drive one of those cars that will apply the brakes for you, or parallel park for you, you are already trusting a potential 'Hannibal the Cannibal'. Except it's worse, because
    your car does not care if you die, nor does it care if it dies. The people who designed and made the car care, but the malfunctioning Hannibal will not care...

    We will need to program even more than Faith, Greed, Hunger, and our inbuilt survival instincts of Fight, Flight, or Freeze. Intelligence is subjective, yes?

    Or, it could mean a great many of us end up have-nots, wage slaves for a basic inability to add value.

    It gives me purpose, something that a human needs and was born with, and that will have to be programmed into the machines.
  • mklrobo Posts: 420
    edited 2014-08-19 09:46
    potatohead wrote: »
    As for people not liking robots...

    That's all politics, and our politics greatly influence our economics, and that influences a lot of our value as beings, peers.

    Enabling ourselves with robotics could mean we all don't work as much, and now we are free to be us, and do things because we want / need to do them for whatever intellectual purpose.

    Or, it could mean a great many of us end up have-nots, wage slaves for a basic inability to add value.

    Our current economic climate and most of our politics favor the latter scenario, which has people worried. But it could go the other way too, and it's politics that will determine how it all goes, not the robots we make.

    Potaterhead,

    Interesting perspective, a lot of things to try to put into context. I like your domain concept. If we can keep our focus on a true north of a better tomorrow for the collective of
    the human race, I feel it will work out OK. BUT, there is always a bad apple in the bunch... :frown:
  • Ttailspin Posts: 1,326
    edited 2014-08-19 09:56
    ...BUT, there is always a bad apple in the bunch...
    Exactly!.
  • potatohead Posts: 10,261
    edited 2014-08-19 10:01
    I think your CNC machine is bigger than a snail, but it is not a being, like the snail is.

    As for deciding what is and isn't, think about cats. It does not matter what we decide. Cats are beings here, just like us, and cats know we tolerate, even need them, and are secure in that understanding, leaving them to do what they do. Cats, for the most part, treat us like peers. Notable independence really.

    If we make conscious things, they will be as we do. What we decide could potentially be irrelevant. Unlike cats, machines could demonstrate superior capability, what then?
  • Ttailspin Posts: 1,326
    edited 2014-08-19 10:15
    I think your CNC machine is bigger than a snail, but it is not a being, like the snail is.
    That is what I was wondering: what is a being? A blade of grass knows not to come out in the winter. Are trees intelligent? They know to drop their leaves in the winter,
    lest they become overburdened by the snow and ice. I know this might sound silly, but the point is, what is intelligent? What are the criteria for this?

    Humans and cats have built-in instincts for survival, and compassion. How will compassion be programmed into a machine? And even more scary, who gets to program it?


    Not trying for an argument, just thinking out loud...:smile:
    Unlike cats, machines could demonstrate superior capability, what then?
    I think they already have demonstrated this. You should see my wife try to parallel park...



    -Tommy
  • potatohead Posts: 10,261
    edited 2014-08-19 10:20
    Yeah. That's right where the science is right now. Beings are very difficult to pin down.

    Programming in things is interesting as well. If we were to make a general artificial intelligence, I would argue it's not going to have things programmed in. It would be a being, and like the cats, would do what it does. And being very, very different from us, we don't know what that means.

    If we do program things in, then it's not a general artificial intelligence, but a restricted one, useful in some domains. It may still be a being of sorts, but one that is hobbled, or just not complete. A cat would then be superior in terms of its being, but not in terms of its overall capacity and capability.

    The blade of grass could just be a machine. Biological machine. It's got some ability to respond to the world and exist, but maybe isn't conscious. Being alive isn't necessarily being a being. Consciousness fits in there somewhere, and we are not quite sure how.
  • Ttailspin Posts: 1,326
    edited 2014-08-19 10:25
    Consciousness fits in there somewhere, and we are not quite sure how.
    Agreed!

    But it sounds like we better figure it out, and soon. :)
  • potatohead Posts: 10,261
    edited 2014-08-19 10:27
    Oh, here's one other thought for you about that snail.

    And cats are useful here too.

    Instinct. We think the snail is programmed to do specific things at birth, but there is also an argument for the very nature of the snail making specific behaviors obvious, or optimal to the intelligence inside too.

    If there aren't many options, due to the nature of the being, then the being will do things as an artifact of what it is and how it's formed and what it is really capable of. No instinct required, just a basic ability to sense, reason, do.

    Cats clean themselves, and perform basic tasks seemingly from birth. But really, if we think about the mind of the cat, its being self-aware, and its nature as a being, then a whole lot of these behaviors are simply the most obvious way to exist and endure, and it knows that, as it wants to exist and endure.

    Ever notice all those lol cats videos on the Internet? Cats have a rather high intelligence. They have a theory of mind, as in they recognize others, and actually can understand their state, which enables them to do things like play, sneak, engage in deception...

    The result here is a remarkable tendency to exceed instinct, which results in cats doing all manner of nuts things! They reason to a point where something might be possible and do it!

    The intelligence in a snail is much smaller, maybe only enough to be, and maybe a little more. So its nature and the task of existing don't require much, and so its behavior appears much more programmed. But still, they demonstrate an affinity for various things and stimuli.
  • Ttailspin Posts: 1,326
    edited 2014-08-19 10:46
    Cats clean themselves, and perform basic tasks seemingly from birth. But really, if we think about the mind of the cat, its being self-aware, and its nature as a being, then a whole lot of these behaviors are simply the most obvious way to exist and endure, and it knows that, as it wants to exist and endure.

    Ever notice all those lol cats videos on the Internet? Cats have a rather high intelligence. They have a theory of mind, as in they recognize others, and actually can understand their state, which enables them to do things like play, sneak, engage in deception...

    The result here is a remarkable tendency to exceed instinct, which results in cats doing all manner of nuts things! They reason to a point where something might be possible and do it!
    LOL, are we talking about cats or my wife? Cuz that pretty much describes both...:lol: Sorry, I couldn't resist..


    If we just stick to human intelligence, and disregard snails and blades of grass, the criteria for intelligence become even harder to define, because there are so many humans,
    and just as many self-aware behaviors. I.e., human cultures are so different from one to the other. In other words, let's hope Hannibal never gets to program the A.I.

    Those Bad Apples are everywhere!... :)


    -Tommy
  • Gadgetman Posts: 2,436
    edited 2014-08-19 11:20
    I'd like to point to Freefall, a rather excellent webcomic dealing with just these kinds of questions...

    3-Laws robots and their inability to function in society, genetic constructs dealing with built-in limits and the ethics of what to do when they can be bypassed...


    As for Oxford professors...
    Is he by any chance a colleague of 'Captain Cyborg'?
    (They're probably on the same drugs, if nothing else)
  • mindrobots Posts: 6,506
    edited 2014-08-19 12:52
    I'm really concerned that this AI thread is talking about cats, compassion and making "things like cats" - talk about an "I'm sorry I can't do that, Dave." moment - YIKES!!!
  • potatohead Posts: 10,261
    edited 2014-08-19 12:57
    I know! That's one reason I cited cats. IMHO, they are a great example of an intelligence that differs from us, yet is enough to see some very intriguing behavioral artifacts of those differences, without being a real threat.

    Crows are another great intelligence example. Crows actually are seriously smart. They do lots of things and it's not "baked in" as we've come to learn. They simply do reason, understand tools, can form basic abstractions and can connect a sequence of actions to a desired result. And their brains just aren't all that big!

    They suggest to me that we can build something smart and self-aware. We just don't understand it all well enough at this time. Whether or not we should do it, is just as interesting of a discussion.
  • Heater. Posts: 21,230
    edited 2014-08-19 13:46
    I think pretty much everybody is missing a big point here.

    The reason why our quoted Oxford professor is a wazzock, along with Ray Kurzweil and numerous other luminaries who talk about the "singularity", is that it has already happened.

    Seems nobody has noticed.

    A giant "super intelligence" is not about to kill off the parts of which it is created.

    Consider:

    Electrons and Protons and "so ons" come together to create atoms.

    Different atoms come together to create molecules.

    Molecules have come together to create life forms, single cell creatures that can behave in surprisingly intelligent ways.

    Those have come together to create higher life forms. Until...

    You, or that thing you think is you, is composed of a mass of cells communicating in various ways. Significantly in your brain, with its mess of neurons and synapses all trying to hold hands with each other.

    So what?

    Well, we have recently connected ourselves together via the internet. We are probably past the point where we can survive at all without the internet. We humans, as building blocks, like the cells in your brain, have become essential "cells" of the "super intelligence" that we are forming.

    We do not understand it. We do not know what it will do.

    Looking for that "super intelligent AI" in the crude chips and programs we make is to be looking in the wrong place.

    All talk of "consciousness" or "self-awareness" or "morals", whatever they are, is totally beside the point.

    This "singularity" we have created needs us to work for it, to nourish it, to enable it to grow. We are the cells of which it is made.

    Welcome to the Matrix:)
  • mklrobo Posts: 420
    edited 2014-08-19 13:51
    potatohead wrote: »
    Crows are another great intelligence example. Crows actually are seriously smart. They do lots of things and it's not "baked in" as we've come to learn. They simply do reason, understand tools, can form basic abstractions and can connect a sequence of actions to a desired result. And their brains just aren't all that big!

    They suggest to me that we can build something smart and self-aware. We just don't understand it all well enough at this time. Whether or not we should do it, is just as interesting of a discussion.

    In reference to crows, consider the African Grey, the most intelligent bird in the world, estimated to be as smart as a chimp.

    In reference to the self-aware issue:
    I started my own path and called it PsychoAndrotics - "the study of the mind of the artificial man."
    I have a base AI, made up of three programs working together. I called the base units ELFs, for Electronic Life Form.
    256 ELFs "live" on a disk, divided into 4 workgroups: 1 command simulator, 2 groups of actual work, and 1 diagnostic (the doctor).
    The central hub is called the Cardinal. The Cardinal "talks" to other Cardinals on a thread (network). The disks can work together,
    or separately, as work demands. The workload dictates the size of the thread. How hard a problem is depends on the resources
    it demands. The whole cluster is the H.I.V.E. - Hypermodelic Interactive Virtual Entity.
    It was definitely NOT self-aware, and I would not program it that way. I put in inhibitors at low levels to prevent abuse, following the
    Three Laws of Robotics. I just made it to do work. I cannot introduce real-world drones from it without the Propeller and my Linux OS. :innocent:
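
    Purely as an illustration, here is a minimal Python sketch of how that layout might be organized. The ELF, Cardinal, and H.I.V.E. names, the 256 ELFs per disk, and the four workgroups come from the description above; the choice of Python, every class and method name, and the even 64-ELF split per workgroup are assumptions, not mklrobo's actual implementation:

        from dataclasses import dataclass
        from enum import Enum

        class Role(Enum):
            # The four workgroups named in the post.
            COMMAND_SIM = "command simulator"
            WORK_A = "actual work (group A)"
            WORK_B = "actual work (group B)"
            DIAGNOSTIC = "diagnostic (the doctor)"

        @dataclass
        class ELF:
            # Electronic Life Form: one of the 256 units that "live" on a disk.
            ident: int
            role: Role

        class Disk:
            # 256 ELFs divided into 4 workgroups. The even 64/64/64/64 split
            # is an assumption; the post does not give group sizes.
            def __init__(self):
                roles = list(Role)
                self.elfs = [ELF(i, roles[i // 64]) for i in range(256)]

        class Cardinal:
            # Central hub for one disk. Cardinals "talk" to other Cardinals
            # on a thread (network), so disks can work together or separately.
            def __init__(self, disk):
                self.disk = disk
                self.peers = []

            def link(self, other):
                self.peers.append(other)
                other.peers.append(self)

        class HIVE:
            # Hypermodelic Interactive Virtual Entity: the whole cluster.
            # The workload dictates the size of the thread, so link only as
            # many disks as the problem demands.
            def __init__(self, n_disks):
                self.cardinals = [Cardinal(Disk()) for _ in range(n_disks)]
                for a, b in zip(self.cardinals, self.cardinals[1:]):
                    a.link(b)

        hive = HIVE(n_disks=4)  # a thread of four disks: 1024 ELFs in all

    Growing or shrinking n_disks is how the sketch mirrors "the workload dictates the size of the thread."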
  • lanternfish Posts: 366
    edited 2014-08-19 16:16
    Heater. wrote: »
    We are probably past the point where we can survive at all without the internet.
    :smile:

    And Heater's statement makes some broad assumptions, similar to those who claim we are "killing the planet". The planet has survived massive hits from extraterrestrial bodies, species have come and gone, and yet the planet is still here.

    Our viewpoint is skewed by our current lifestyle(s) and so is more about the survival of our lifestyle(s). A nomad with no internet access will see the future far differently from someone living a 'comfortable' lifestyle in a developed country. And an academic from Oxford, while looking at the possibilities far, or not so far, into the future, is not necessarily any more accurate at predicting the future than any of us.

    Would an AI actor work in its best interests over those of the human race? Probably. You only have to look at our own species' history to see that we make choices based solely on our own self-interest, regardless of how we wrap it up as 'caring for others'.

    Awaiting further instructions.

    Transmission ends.
  • LoopyByteloose Posts: 12,537
    edited 2014-08-20 02:58
    Not too sure that artificial intelligence will kill us all. It seems to me a bit oxymoronish, like military intelligence, which might offer an even greater hazard to mankind.

    A bit of genuine intelligence combined with moderation and compassion for those less fortunate just might win the day. Spend less time with your computer and more with people worth knowing and worth helping.

    And beware of those that dwell in ivory towers when they go into a publish or perish mode... drivel will be forthcoming.
  • ValeT Posts: 308
    edited 2014-08-20 06:12
    Loopy,

    I believe that when a true AI comes - not at all like today's AR (artificial reckoning) - it will attempt to destroy humanity. This, to me at least, is the most plausible scenario because when an AI becomes aware/is turned on, it will immediately think of survival. It would see humans as its biggest threat because we have the power to shut it off.

    Of course, we will never be able to predict exactly what an AI will do, because it would have more intelligence than all of us combined. Not to mention its processing speed, which is most likely going to be a lot faster than our brains' :(
  • LoopyByteloose Posts: 12,537
    edited 2014-08-20 08:25
    Wow, that's pretty dismal. I make an effort to not live in morbid reflection or anticipation. It also presumes that superior intelligence means to dominate.

    This gets way off into a metaphysical track that I am a bit uncertain of. But at the core, there is a popular notion of intelligence being the winner in some sort of zero-sum game of survival.

    I'd rather see intelligence as offering something more benign, such as leadership and vision that is more inclusive of everyone.
  • potatohead Posts: 10,261
    edited 2014-08-20 09:33
    There is a great piece of sci-fi fluff written recently that gets right at this: http://en.wikipedia.org/wiki/Wake_(Robert_J._Sawyer_novel)

    It's a bit of cheese, but a fun romp for casual reading times. I read it on one of my many flights... Actually it's a nice trilogy. The core idea is that intelligence emerges in our interconnected computers. The sci-fi hook is that a girl ends up connected with it, due to some sight-enabling technology. One very notable idea here is how the author deals with the subject of sensory development and how a new mind might struggle with the world, learning in much the same way a baby does early on. Nice treat there.

    The core of the book deals with how an emerging intelligence may come about, and the struggles it could go through, like being fractured and having to come to grips with "another" that plays it differently than it does, etc... Lots of basic concepts are presented in the story too, things like center of attention, and dilution, where the cohesive "mind" or "it" can fracture infinite times and essentially die, or be lost, or reemerge as something new.

    Through the various interactions between a few humans and this mind, it comes to some realizations I won't spoil here, but it does center in on a nice, warm fuzzy vision that is fun to think about, leaving one almost hoping it could come to pass and go the way it does.

    And the central character is all kinds of geeky, adorable. Fun to read, if shallow on that front. Anyway, if you enjoy thinking about this stuff, or just as an idle muse from time to time, which is where I'm at on it all, this series will give you some great food for thought.
  • LoopyByteloose Posts: 12,537
    edited 2014-08-20 18:43
    Ummm... it could be that the semiotics of machines is an entirely different set of representations than that of humans, and as a result, artificial intelligence will never collide with human priorities.

    The real hazard is that humankind may just expect too much from artificial intelligence.

    I certainly enjoy observing the intelligence of different species, and it is obvious that each has a different semiotic set of representations and priorities. For instance, my dog seems to spend a great deal more time thinking with his nose. Seems to me that differences in intelligence exist alongside each other in a vast array of life forms, in a very complex scheme of overall interaction.

    So what seems to really be in play here is that humans tend to get captivated by certain terminology at any given point in time. In the 1950s, it was all about UFOs, and now artificial intelligence is one of the big distractions. So this could just be the human mind getting caught in a loop of some sort until it abandons the potential of an idea.

    Personally, I have pretty much abandoned the concept of artificial intelligence having any real merit. I am more interested in true intelligence. Why would anyone want second best? And why not simply task machines to be the ultimate servants, without the ability to challenge their masters?
  • potatohead Posts: 10,261
    edited 2014-08-20 18:57
    Intelligence simply is.

    There is no "true" intelligence.

    Artificial generally means we brought it into existence. The intelligence we know of today just happens. We don't understand more than that.