Atlas Shrugs Off Abuse
But robots never forget. Sleep tight, bearded middle-aged Tech #42861.
http://techcrunch.com/2016/02/23/i-know-i-shouldnt-feel-bad-for-a-robot-but-i-do-anyway
Comments
Sigh
I distrust AI much less than the type of people that want back doors in my phone.
That robot is a passive teddy bear because it was programmed to be. I picture these things being programmed to enforce curfews in the next 100 years.
So the big question is, who is going to help me build Galt? He can avoid walls so far, but does have trouble with chair legs. Who's with me?
Like most everything, the morality of the robot is neutral. It is what we do with the robot that is good or evil.
Seems this Atlas can already outperform ED209 on the stairs and getting up from the ground test.
We do have the three laws.
Robots may be just another tool to carry on in that tradition, like swords, cannons, guns, dynamite, nukes and the rest.
Ah well.
It has to be "they pay me so much" or "they'll use it only to help and protect us" and I don't know which it is.
Of course they will. But they will also be "just another tool" for saving lives, exploration, and the rest.
But... there is a big difference between what Atlas is demonstrating and the sort of robot that the Three Rules (or Ten Rules) would apply to, namely that the robot still lacks something we would define with terms like "self-awareness", "moral compass", "sentience", etc.
The RoboCop clip you happen to show proves the point. Yes, the ED209 can be viewed as being in violation of the Three Rules. But it was still just a tool that lacked sentience, and therefore not something that the Three Rules applied to. Its operators, on the other hand, were clearly in violation of the Ten Rules.
It makes me think that, in the long run, even if there is a point where robots gain true sentience, most users will want something less-than-sentient. After all, who wants slaves that think for themselves!?! In that case, the Three Rules still won't apply to the robots. But the Ten Rules will still apply to the robot owners.
Do we? Or do we have to depend on politicians for them to be included in the programming?
I believe the upright BigDog is a great stride from the four-legged version. With Google's money, it should go far. Maybe put one driving a Google car?
Can you define "self-awareness", "moral compass", "sentience" ? In such a way that we can measure it and talk about it in a logical, scientific manner?
Besides, Asimov's whole point about the 3 laws was to demonstrate how they break down. Much like the modern day laws of humans lead to contradictions and paradoxes, undecidable issues, and unintended consequences.
Is it even possible to have a "sentient" robot if it has such rules hard wired into its very being?
Now, in terms of the philosophical debate about "sentience", "self-awareness" or "consciousness" I'm still on the ground floor. For example:
If I stick my hand in a fire I feel something real bad. It's pain. It's an experience.
But if you stick your hand in the same fire, I feel nothing.
So here is the question, how can I tell you have any of that "self-awareness" or "consciousness" stuff? How do I know you feel anything?
Well, you scream and shout, the same way I did. So I kind of infer that there is something going on with you that is like my experience.
It's far from proof that you are self aware. Presumably a robot could do all that screaming and shouting as well. Traditionally animals did all that screaming and shouting, but we humans never ascribed any self-awareness or consciousness to them.
Bottom line is that the only self aware being I confidently know exists is me!
I think more like 613.
Anywho, BD seems to revel in these videos that show abuse of their robots. I'm guessing it's part of their viral marketing approach. Had the robot been shown already prone on the ground, and getting up to full posture (demonstrating the same programming either way), I bet there wouldn't be quite so many embeds and shares.
The technology is neither good nor evil. We are the "creative" ones...and not always constructive.
I'm totally with you. Humans learn stuff, make new things. I'm not about to suggest we stop doing that because it may get used in bad ways.
@Gordon, I don't understand. 613 years ago was only 1403 or so. I think we have evidence enough that the Moses story is much older.
Also I'm not sure what you mean by "abuse of their robots". The task is to create a machine that can balance and walk, and do that in the face of strange terrain and weird stuff happening. At the end of the day it's software, and they are trying to break it with odd inputs. Same as we do for every program we have ever written. In this case it involves poking at it with a stick or moving its box. Basically the same as hacking into my latest web site with some unexpected input, does it fall over or not?
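The "break it with odd inputs" idea is basically what fuzz testing does for ordinary software. A minimal sketch in Python, with a deliberately buggy toy input handler standing in for the software under test (all names here are made up for illustration, not Boston Dynamics code):

```python
import random
import string

def parse_command(text):
    """Hypothetical toy command handler standing in for the software under test.
    Bug: it assumes input is never empty, so text[0] raises IndexError on ""."""
    if text[0] == "!":
        return ("command", text[1:])
    return ("noop", text)

def fuzz(rounds=1000, seed=42):
    """Throw random junk at the handler and collect every input that crashes it,
    much like a shove or a poke with a stick probes the balance controller."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(rounds):
        junk = "".join(rng.choice(string.printable)
                       for _ in range(rng.randint(0, 12)))
        try:
            parse_command(junk)
        except Exception:
            crashes.append(junk)
    return crashes

if __name__ == "__main__":
    found = fuzz()
    print(f"{len(found)} crashing inputs out of 1000")
```

Here the fuzzer stumbles onto the empty-string case the programmer never thought to type in by hand, which is exactly the point: the odd input finds the fall-over condition, whether it's a string or a shove.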
Now, I kind of agree with you. We have been seeing staged "intelligent" robot presentations since the 1950's. How much of this Boston Dynamics stuff is realistic?
See https://en.wikipedia.org/wiki/613_commandments
Wow, after decades of hanging around with Jewish guys never have they mentioned the 613 rules.
I think I'm in tune with some of it:
204. Not to swear needlessly
348. Not to tattoo the body like the idolaters
609. To have a place outside the camp for sanitary purposes
On the other hand:
153. Not to eat things that swarm in the water
Is a bit of a bugger (Oops, violation of 204) because if I had my way I would live off of prawns and such.
How come they have been holding out on me all this time?
Seriously, I'm not convinced you can make "good" people out of such rules. Never mind robots.
Yes, yes, and yes.
Whit, somehow I knew you'd get the reference!
Heater, BD has typically shown humans intentionally trying to push, kick, or shove their robots to show their ability to re-balance, yet there are many ways of demonstrating this. It ALWAYS creates controversy, as it did with BigDog. Humans tend to anthropomorphize machines when they are in living-creature shapes, and there's an emotional reaction to the visuals of aggression. BD knows this, and uses it as part of their marketing. I'm not saying it's right or wrong of them to do so, just remarking on the power of social media. A video made to generate controversy will get shared much more.
We already know that the US military is not interested in these robots unless they are silent; the Army has already said thanks, but no. But a completely quiet robotic soldier is going to take a lot more development, so this is why you're now seeing these demonstrations in warehouse settings. I just thought the usual "now let's kick it over!" scene seemed a bit out of place in the rest of the video.
OK. I can well believe that any company making whatever tries to exploit whatever media is available. Which nowadays is "social media".
I did read recently that this whole idea was declared not useful to the military. It makes noise, it needs fuel, squaddies can carry all that stuff just as well or better, more reliably, with less support and cheaper.
On the other hand, has anyone else demoed such robotics feats? The closest I have seen is the Sony whatever it is called.
It's impressive, assuming not too much of it is orchestrated.
Mind you, their headless dog machine thing running around was somehow very disturbing.
They made major progress in motion control, durability, moving in different terrain, moving fast, now even two legged.
Sure, the main goal here is to produce something to go into war. But I see no difference between working on that and working on a flight controller for a military airplane.
It's just more fun at Boston Dynamics, I guess. Sadly there is no need for gray haired COBOL programmers down there at BD.
@erco might have a chance there (flame throwers and such), but me? Nope.
That 'headless dog' was a test for a transport unit, carrying tools/packs/weapons behind (or in front of?) the ground troops, so they do not need to carry it all themselves and can be more mobile in action.
There is also some video out on YouTube where the 'headless dog' does have a head and is throwing concrete blocks quite precisely.
Like with those extremely quiet non-nuclear (small?) submarines Germany has sold worldwide for decades, the noise problem might be solvable.
Just think about them as ground-based drones and you might be shivering about the implications they bring, but they will be developed. Not just here in the US but generally. And clones of them sold on Ebay and Alibaba, like realistic sex dolls now, just as bouncers for your club, bar, golf course or so. I do fear the private use more than the military one, but basically I fear both coming to reality pretty soon.
Sadly,
Mike
THAT good sir, is a meaningful and memorable quote!
Or not. BD has been largely funded by DARPA, and there are existing contracts to fulfill. But Google's stance on continuing with military robots has been more nebulous. I think they'd like us to believe it's a way to make money while learning how to design productive commercial robots of the future. Of course, what Google has said/hinted at may not be the same as what they really intend to do. It's a matter of wait and see.
Pretty funny. Though I can see this being a thing in 50-100 years.
Did you mean Gort? At least I think that was his name; that was one of the movies that made the biggest impression on me as a kid.
Jim