The Five-Minute Forums

The Five-Minute Forums (http://www.fiveminute.net/forums/index.php)
-   Science Fiction (http://www.fiveminute.net/forums/forumdisplay.php?f=9)
-   -   Should real robots obey Asimov's Laws? (http://www.fiveminute.net/forums/showthread.php?t=1342)

Gatac 01-29-2007 07:35 PM

Quote:

Originally Posted by PointyHairedJedi (Post 72060)
Visions of Multivac, Colossus, Shalmaneser and Skynet are ultimately just fantasy, unrealisable because consciousness is not something that can be created, whole and complete, utterly constrained in everything it does by a set of arbitrarily imposed rules. Machines that think, if there ever are any, will be like us - blank slates that must be taught how to think from the ground up.

While I agree that creating consciousness is tricky, I believe it'll be relatively easy to "grow" once you have the hardware to get a decently-sized neural network going. Like I said, self-evolving systems have a lot of potential. Whether we can hardcode any "rules" is debatable, but there are already some suggestions, such as "benevolence", i.e. we must make sure that any intelligence we do end up creating is fundamentally friendly to mankind.

Quote:

Of course, I have my doubts as to whether we will ever manage such a feat, as first we must understand how consciousness works in humans. It's a problem that I don't think will be solved in any of our lifetimes - though we may attain a vastly more complete understanding of the functioning of our brains, that won't tell us much about self-consciousness and free will. We may get machines that can learn how to do a few things, but I doubt there will ever be anything that has the amazing capacity and range of the human mind.
The "ever" is a dangerous thing. Saying something can't be done because you can't do it is shortsighted at best. Also, I know that it is possible to create something intelligent - after all, there's, you know, *us*. Once we understand the chemistry of neurons well enough to emulate one and manage to pack in enough nodes into a system that they are roughly equal to the amount of neurons we have, we should be able to start writing the first primitive self-evolving software and have it undergo rapid evolution towards more complex behaviour.

I think the real question is not whether this is feasible - there are over six billion of us walking around with just that kind of computer in our skulls - but what kind of intelligence you will create this way. In a lot of ways, we are who we are because of what came before us - how can we tell what will result from feeding the budding AI on what we *think* is the best way to grow an intelligence? We might be breeding a true alien that remains completely incomprehensible to us - the problem shifts if it becomes intelligent enough to analyze and understand *us*, but then you have the boogeyman of an AI that's smarter than us. It will be able to talk to us, but we won't know what's going on inside of it. (AI that we can fully analyze and understand is likely to not be very useful, though... unless you're breeding artificial insects.)

That'll be a few interesting conversations, I think.

Quote:

Clearly you have never seen Mars Attacks!. :p
Oh, I have, it's one of my favorite movies. However, we cannot assume that alien malefactors have any specific anti-nuclear technology and just give up here, because nukes are still the most destructive weapons we have and therefore our best shot. If nukes don't work, we're probably screwed. I fully blame Hollywood for the "the aliens are immune to nuclear weapons" trope - we can't hope for some deus ex machina weakness in the attackers, therefore our best bet is brute force, and nukes are the best brute force weapons we have, plus we have a lot of them. If the aliens do happen to have a weakness we can exploit (and that seems reasonable; all the portrayals of superior aliens in sci-fi have dulled us to the fact that a human is an amazingly tough and resourceful animal, and likely to be far superior in at least one respect over the aliens) - more power to us.

Gatac

Nate the Great 01-29-2007 09:05 PM

Okay, although I concede this whole "self-evolving" thing might be the best bet for a self-aware computer, I assert that this is EXACTLY what you don't want to happen. Think of Deep Thought. He wasn't even fully hooked up and he already knew about rice pudding and income tax! What's to stop a self-teaching computer from reaching the point of "these dirty bags of mostly water are so self-contradictory that they're not worth obeying"? I have no problem with an assembly line robot being able to figure out the most efficient way to perform an assembly line task, but you don't just give a robot total Internet access and step back.

PointyHairedJedi 01-29-2007 11:28 PM

Quote:

Originally Posted by Gatac (Post 72063)
The "ever" is a dangerous thing. Saying something can't be done because you can't do it is shortsighted at best. Also, I know that it is possible to create something intelligent - after all, there's, you know, *us*. Once we understand the chemistry of neurons well enough to emulate one and manage to pack in enough nodes into a system that they are roughly equal to the amount of neurons we have, we should be able to start writing the first primitive self-evolving software and have it undergo rapid evolution towards more complex behaviour.

I concede your point on "ever". However, you're making the assumption that neurons + chemistry = sentience (or their digital equivalents). Neuroscience has discovered many things to date about the way our brains work - from the broad sweep of which regions influence what, to the small end of the scale like neurotransmitter chemistry and the function of different cell types. But, and this is the huge but, none of it tells us anything more about consciousness than philosophy has been able to in the last two and a half thousand years.

The main question it comes down to is this: is consciousness purely a function of the brain? On the face of it, yes, but think a little harder. If we take it to be the case, then which part of the brain is responsible, exactly? Is it somehow the case that 'consciousness' only happens when an animal with a big enough brain comes along? If so, why? Is consciousness instead not primarily a biological function, but more of a learned one? Or is our memory the primary factor?

The point is, no-one really has a clue, or for that matter any idea how to find out. Conversely, perhaps it'll be attempts to create machine intelligences that'll give us some handle on how we ourselves think, but like I said, I doubt that it'll happen in any of our lifetimes.


As an aside, I find it interesting that no-one has thus far touched upon the ethics of creating machine intelligence. One of the things about Trek that has consistently bugged me in nearly every show is what an incredibly laissez-faire approach the otherwise fanatically ethical Federation takes toward the creation of artificial lifeforms (in the form of holograms, mostly). We had that whole thing with Data being judged to be 'human' legally, but what of Vic Fontaine and the EMH? To be fair though, it's not something that much SF covers at all, but it seems like it should.

Hejira 01-30-2007 03:00 AM

Quote:

Originally Posted by Infinite Improbability (Post 72064)
Okay, although I concede this whole "self-evolving" thing might be the best bet for a self-aware computer, I assert that this is EXACTLY what you don't want to happen. Think of Deep Thought. He wasn't even fully hooked up and he already knew about rice pudding and income tax! What's to stop a self-teaching computer from reaching the point of "these dirty bags of mostly water are so self-contradictory that they're not worth obeying"? I have no problem with an assembly line robot being able to figure out the most efficient way to perform an assembly line task, but you don't just give a robot total Internet access and step back.

You don't give a human child total Internet access and step back, either.

If computers/robots/machines/Tamagotchis ever reach the level of complexity that they can be self-aware, IMO they'd be as good as human, only with more batteries and less pooping. As such, anyone with a self-aware AI would pretty much be a parent, and some parents just suck. Others, though, are totally awesome.

Good parents teach their kids about morals, responsibility, and all that other stuff that stops most humans from going BSI* and killing everyone. I guess I just don't see the robotic sentience issue as any different from organic sentience.

*B = Bat, and I = Insane.

PointyHairedJedi 01-30-2007 10:32 AM

A good point, following on from that, is that any machine intelligence would by necessity be patterned after our own; after all, what other model do we have?

GreenFire1 01-30-2007 06:15 PM

Um, self-aware robots? I thought this topic was about Asenion robots (viz. those that follow Asimov's laws). The biggest thing about Asimov robots is that they're objects. You can use them stupidly or evilly, or you can use them for productivity or comfort. An Asenion robot makes no decisions for itself - every single action is not only based on an order (or law), but can be mathematically predicted based on the situation and the nature of its active orders.

I've actually considered the value of another system of robot safety (a robot doesn't have morals, any more than a knife does) based on "standing orders." I'm not quite sure where I got the idea, but it's basically this - the robot's only intrinsic motivation is to follow orders. Now here's the neat part. Every robot is programmed to recognize all humans as having given a set of default orders, stuff like "don't harm me," "don't harm my property," et cetera. That's the basic idea of it. Anyone have their own ideas for robot security?
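
To make that a bit more concrete before anyone answers: purely as a sketch, with the class and method names invented for the example and the genuinely hard part (deciding whether an action actually conflicts with an order) left as a stub, it might look something like this:

Code:

class StandingOrderRobot:
    # Orders every recognised human is assumed to have already given;
    # they are checked alongside any explicitly spoken orders.
    DEFAULT_ORDERS = ("do not harm this human",
                      "do not harm this human's property")

    def __init__(self):
        self.explicit_orders = []   # orders actually issued by humans

    def receive_order(self, human, order):
        self.explicit_orders.append((human, order))

    def orders_in_force(self, humans_present):
        # Standing (default) orders from everyone present come first,
        # then whatever has been explicitly commanded.
        standing = [(h, o) for h in humans_present for o in self.DEFAULT_ORDERS]
        return standing + self.explicit_orders

    def may_perform(self, action, humans_present):
        # An action is allowed only if it conflicts with no order in force.
        return all(not conflicts(action, order)
                   for _, order in self.orders_in_force(humans_present))

def conflicts(action, order):
    # Placeholder: deciding whether an action violates an order is the
    # real problem, and nothing in this sketch solves it.
    return False

robot = StandingOrderRobot()
robot.receive_order("Susan", "sweep the floor")
print(robot.may_perform("sweep the floor", ["Susan", "Mike"]))   # True with the stub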

Gatac 01-30-2007 06:41 PM

Well, I think Asimov's rigidly-constructed robots are possible, but only after we've used self-evolving systems to get a better understanding of how workable AI organises itself. I'm sure military and government contractors will take an Asimov model - after all, they are completely predictable, or should be - but the real world needs a cheaper, faster and smarter solution, even if it comes with some risks.

I like the idea of standing orders. Especially since the "human" in Asimov's Laws should really be corrected to "sapient being".

Gatac

Nate the Great 01-30-2007 07:56 PM

One wonders what amounts to a valid Turing Test in the 24th century. I think that a key requirement would be the creation of a process that the computer didn't already know.

Gatac 02-07-2007 12:55 PM

Actually, I think the chief question is, where does "cheating the Turing test" end and "actually being sapient" begin?

Gatac

Derek 02-07-2007 07:22 PM

When it can't be turned off.

Gatac 02-07-2007 09:18 PM

I can turn off any sapient being; the matching tool is called "gun".

Gatac

mudshark 02-07-2007 11:22 PM

There's another way, of course...

Nate the Great 02-08-2007 03:52 AM

Okay, dead does not equal inactive.

Define sentience.

Derek 02-08-2007 12:45 PM

Quote:

Originally Posted by Gatac (Post 72142)
I can turn off any sapient being; the matching tool is called "gun".

You know, even as I posted my statement, I figured someone was going to say that you can turn off humans. But there's a difference between destroying a sapient being and turning off a specific application on a fully functional, working computer.

How about this: "When it isn't affected."

Gatac 02-08-2007 01:43 PM

Ah. I concede that point, then.

Still, going transhuman, won't we be able to make a human brain capable of safe shutdown and restart? Admittedly, this will likely involve cybernetic implants or, at the very least, advanced medical treatment, and even then it'll probably be metastable. (Cryogenics and whatnot.)

Gatac

Nate the Great 02-08-2007 05:59 PM

Talk about drifting topics...

Okay, cryogenics/carbonite/instant dehydration cubes and whatnot are topics for their own thread. This thread is about robotics and machine intelligence.

Last time I checked, the poll was fifty-fifty. Any comment? Expected? Unexpected? Surprising? Not surprising?

PointyHairedJedi 02-10-2007 07:03 PM

I suppose on the face of it, it may be taken as surprising that a bunch of nerds such as us wouldn't overwhelmingly say "yes", but then anyone who actually knows this site knows what a fractious bunch we really are. ;)

Derek 02-10-2007 09:30 PM

No, we aren't.

Sa'ar Chasm 02-10-2007 09:38 PM

2/5 of us are.

Chancellor Valium 02-10-2007 11:56 PM

Fractious? The people on this site are about as fractious as two sleeping Trakenites.

In any case, I think this discussion has reached a state of decay, and we have come full circle to the questions I myself raised on page one...

