The Five-Minute Forums

The Five-Minute Forums (http://www.fiveminute.net/forums/index.php)
-   News (http://www.fiveminute.net/forums/forumdisplay.php?f=8)
-   -   January 5 (http://www.fiveminute.net/forums/showthread.php?t=633)

Zeke 01-06-2005 01:11 PM

January 5
Today we have a link that Kira recommended: The Edge's World Question Centre. Every year, a group called the Edge Foundation poses a question and then archives some of the better-known respondents' answers, generally from scientists and other members of publisher John Brockman's "third culture." This year's question is a good one: "What do you believe is true even though you cannot prove it?" I don't usually do comment threads for link days, but I think it might be interesting to see what sort of answers 5MV's readers give for this question. (I'm certainly as interested in hearing from you guys as I am in hearing from this third culture thing.)

Flixibixi 01-06-2005 01:32 PM

I believe that with infinite realities with infinite possibilities, everything exists in infinite forms. Meaning that in infinite realities, what happens on this board is a TV show or a movie or a novel or the basis of a lullaby.

Of course, the reverse would be true. In infinite realities, everything that's happened in Star Trek actually happened in real life, even the bits that contradicted other bits.

So every time we create something creative, we are actually crossing dimensions and translating the information we receive from them into a format we can then understand.

...oh god, then every unspeakably bad fanfic actually happened...in INFINITE REALITIES.

Just think about that a bit more. It'll all sink in, and you'll feel all-powerful and all-guilty at the same time. :twisted:

Michiel 01-06-2005 02:48 PM

I believe, but cannot prove, that the human mind/brain is nothing more than a biological/chemical computer and that true AI is possible.

In other words, I do not believe in a spirit/soul that makes human intelligence unique. The reason we have not yet been able to reproduce it is that it is a very complex system, one that took billions of years of evolution to develop.

However, some day, we will be able to. (Let's just hope the AIs will not discover what useless and inefficient creatures humans are. ;))

Derek 01-06-2005 03:35 PM

Quote:

Originally Posted by Michiel
I believe, but cannot prove, that the human mind/brain is nothing more than a biological/chemical computer and that true AI is possible.

That's funny because I believe, but cannot prove, that AI will never be like the human brain.

I think a computer is very good at logic and rule-following, but I see no basis for a computer to have emotions, a subconscious, or any sort of intuition. Because of this lack, giving an AI an ego will be very hard, as will giving it the ability to choose to be loyal, kind, or brave. Sure, you can program it to be any of those things, but then it will just be following the rules you have set up for it.

Also, I don't think an AI will ever have the sort of instincts that a human has. I don't see an AI actually having a self-preservation instinct or a mothering instinct or anything like that, at least, not without them being programmed in, and that doesn't count.

On the other hand, I do believe that AI will get very sophisticated. It will eventually be able to understand human language and respond in kind. It will be able to recognize faces and read expressions. It will be able to figure out what you're trying to do in a word processor and help you far more effectively than any paperclip ever could. Basically, I think computers will meet and surpass the level of the Enterprise-D's computer, maybe even to the point where people will call the computer alive, but I don't think it will ever be like the human brain.

Gatac 01-06-2005 04:01 PM

I believe that AI is possible on the basis of neural net theory. However, I believe it will be vastly different from human intelligence; in the end, it will understand our language, but we have no guarantee that it will understand the concepts attached to the language. However, true, sentient AI will develop several goals we can comprehend: primarily, self-preservation and procreation. It will act on the basis of these goals.

Incidentally, I also believe that the constant disenfranchising of people in favor of concepts and organisations will lead to humanity's downfall.

Gatac

evay 01-06-2005 05:12 PM

The problem will not be sentient AIs; the problem will be stopping them from wiping us out as ugly bags of mostly water. We are inefficient in a way they will not be. The only hope for us is to give the AIs superego, conscience, and "souls" so that they see us as part of IDIC and not an impediment to their rule of the planet.


I believe but cannot prove there's intelligent life on planets other than this one. The universe is just too honkin' big for us to be that unique.

I firmly believe that when you do a kindness with no expectation or receipt of reward, it will come back to you in the form of someone else doing you a kindness when you least expect it. That's happened to me often enough that I could practically call it "proof."

I believe it's time for lunch. --No wait, I can prove that one.

mudshark 01-06-2005 05:56 PM

Quote:

Originally Posted by Derek
Also, I don't think an AI will ever have the sort of instincts that a human has. I don't see an AI actually having a self-preservation instinct or a mothering instinct or anything like that, at least, not without them being programmed in, and that doesn't count.

Hmm. Without programming, how could you ever have AI to begin with?

Concerning self-preservation, at least, Asimov's Third Law would seem to have this covered.
Quote:

1. A robot may not harm a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
Not saying that AI would necessarily have to be governed by these very Laws, but you'd have to establish some sort of framework, wouldn't you?
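mudshark's point about needing "some sort of framework" can be sketched as a strict priority ordering: each proposed action is checked against the laws from highest priority down. A minimal sketch in Python, purely illustrative (all function and field names here are made up for this example, not from any real robotics system):

```python
# Illustrative sketch of a Three-Laws-style priority framework.
# An "action" is just a dict of hypothetical flags describing its effects.

def harms_human(action):
    return action.get("harms_human", False)

def disobeys_order(action):
    return action.get("disobeys_order", False)

def endangers_self(action):
    return action.get("endangers_self", False)

# Lower index = higher priority, mirroring how the Second and Third Laws
# explicitly yield to the laws above them.
LAWS = [harms_human, disobeys_order, endangers_self]

def first_violation(action):
    """Return the number (1-3) of the highest-priority law the action
    violates, or None if the action is permitted."""
    for i, violates in enumerate(LAWS, start=1):
        if violates(action):
            return i
    return None
```

The ordering is the whole framework: a robot weighing an action consults the laws top-down, so a self-preservation concern (Law 3) can never override an order (Law 2) or human safety (Law 1).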

Quote:

Originally Posted by evay
The problem will not be sentient AIs; the problem will be stopping them from wiping us out as ugly bags of mostly water. We are inefficient in a way they will not be. The only hope for us is to give the AIs superego, conscience, and "souls" so that they see us as part of IDIC and not an impediment to their rule of the planet.

See again the "Three Laws of Robotics" above. The addition of the "Zeroth Law" would probably be a whole 'nother question.
Quote:

I believe but cannot prove there's intelligent life on planets other than this one. The universe is just too honkin' big for us to be that unique.
I'd be in complete agreement with you on that one -- it's statistically unlikely, to say the least, that we'd be alone in the Universe.


Edit: spelling brain-fade. Any other errors in this post are entirely my fault.

MaverickZer0 01-06-2005 06:49 PM

It would be next to impossible to program those laws into a truly sentient AI. If, through whatever fluke, a robot could actually think, it could simply choose not to follow the laws.

Of course, then the debate over AI/robot souls would start, which is probably the only reason they haven't started working on them yet.

I believe in the capability for a true AI (obviously), and in alternate dimensions as well. Therefore, somewhere out there, there are bioroids.

That could be either a good thing or a bad thing, depending on that 'infinite possibilities' deal.

evay 01-06-2005 07:27 PM

Quote:

Originally Posted by MaverickZer0
It would be next to impossible to program those laws into a truly sentient AI. If, through whatever fluke, a robot could actually think, they could simply choose not to follow the laws.
Of course, then the debate over AI/robot souls would start,

Which is the reason I said that giving an AI a soul/conscience/superego is the best way to keep it/them from turning on us. About.com's Julia Houston once wrote: "It's a simple Trek truth: create something sentient and it will do what it wants, not what you want." As human beings, we are all sentient, and we have the ability to do what we want. The reason we don't all embark on murderous rampages hourly is that we've been taught that it's bad -- we've learned empathy, we've learned conscience.

I posit that to prevent, for example, V.I.K.I., the AI from I, Robot which decided Robot was superior to Man, we would have to rear the AIs as though we were rearing children -- love them, teach them, guide them, discipline them. Sociopaths have no empathy. That's why serial killers are usually sociopaths. They have no concept of the emotions of others, and don't care. I think our best hope to keep AIs from becoming sociopathic by definition is to teach them empathy. Make them more human, in other words.

Sa'ar Chasm 01-06-2005 07:28 PM

Short answer: nothing.

Silly answer: the world is secretly run by a group of super-intelligent lemmings.

Derek 01-06-2005 07:33 PM

Quote:

Originally Posted by mudshark
Hmm. Without programming, how could you ever have AI to begin with?

Heh. True. My point was that an AI has to be true to its programming, and so I didn't think that an AI would just randomly adopt new instincts that it hadn't been programmed with.

In other words, that clichéd plot where the computer becomes intelligent and the first thing it does is destroy everyone who tries to pull the plug seems a bit hokey to me, since very often the computer wouldn't have been programmed for self-preservation (at least, that's not a priority in the programs I write).

Similarly, evay, I don't believe an AI would try wiping out all of humanity because it is wasteful and inefficient unless it were programmed to do so, or at least, programmed to think wastefulness and inefficiency were bad. And I don't believe it would have any ambition to rule the planet unless it were designed to do so or at least programmed to think that was a desirable goal.

Kira 01-06-2005 08:17 PM

(Sa'ar is totally this guy.)

Not surprisingly, most of mine are closely related to my field of research.

I believe that gene therapy will be a reality in my lifetime for diseases such as diabetes and cystic fibrosis.

I believe that we will have genetically-based cancer therapies in the next ten to twenty years that will be far more specific and effective than current chemotherapy (and I intend to be among those helping that process along). Our knowledge of the massive range of genetic anomalies that lead to cancer is constantly expanding, and combined with advances in genetics and gene therapy, I believe therapies targeted to particular genotypes of cancer are within our reach.

To the AI debate... what he said. I believe that true artificial intelligence is possible, though it may be biological rather than technical. Our brains are a complex network of (relatively) simple parts, like neurons. We are only limited by our understanding of the brain and neuronal communication and concepts such as memory and personality, but eventually I believe science will catch up and, in some form, we will be able to create an artificial neural network. Whether it will be "sentient"... only time will tell.
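Kira's "complex network of (relatively) simple parts" can be made concrete with a toy feedforward network: each artificial neuron is just a weighted sum passed through a squashing function, and wiring layers of them together gives the network. A minimal sketch in Python (everything here is illustrative; the weights are random, so this net computes nothing meaningful until some learning procedure adjusts them):

```python
import math
import random

def sigmoid(x):
    # Squashing function: maps any real number into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # One simple part: weighted sum of inputs, then squash.
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

def layer(inputs, weight_rows, biases):
    # A layer is just many neurons reading the same inputs.
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

random.seed(0)
# 2 inputs -> 3 hidden neurons -> 1 output neuron, with random weights.
hidden_w = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
hidden_b = [0.0, 0.0, 0.0]
out_w = [[random.uniform(-1, 1) for _ in range(3)]]
out_b = [0.0]

def forward(x):
    return layer(layer(x, hidden_w, hidden_b), out_w, out_b)[0]
```

The individual pieces really are simple; all the interesting behavior comes from scale and from how the weights are tuned, which is exactly where the open questions about memory, personality, and sentience live.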

I also agree with several of the answers in the article: namely, that cells can be changed from one type to another, that life in the universe is ubiquitous and we will discover that microbial life exists elsewhere in the galaxy, that software is limiting computers, that my dog has feelings, that cattle prods support the existence of electrons, and that we are all climatically screwed.

Chancellor Valium 01-06-2005 09:07 PM

Short Answer: God.
Alternative answer: That David Icke is a charlatan - no, wait, that's just obvious :P
Silly Answer: that the Estonian Government are smuggling cheesecake into New Mexico as part of an evil scheme, possibly involving the Daleks :mrgreen:
Yet another answer: that the problem with AI is when we give them personalities - no, wait, look at Marvin in the HH's Guide, or Eddie, on the Heart of Gold. Scary, but possibly true. Oh, and I agree with evay.
Yes, I did decide to rip off Sa'ar. So what? I can't be bothered to be original; that takes effort :P

Sa'ar Chasm 01-06-2005 09:57 PM

Quote:

(Sa'ar is totally this guy.)
Please. I couldn't grow a beard like that if I tried until I was his age.

I'm only mostly that guy.

Worst thing about being an atheist: nobody to blame when life craps in your face.

MaverickZer0 01-06-2005 10:23 PM

Quote:

Originally Posted by Sa'ar Chasm
Worst thing about being an atheist: nobody to blame when life craps in your face.

Well, you can blame your parents for making you be born.

And now that I'm not at school and have had more time to think about it, I can give another answer too. I believe that genetic engineering, as in breathing life into a chemically created sequence of DNA, is or will be possible. That also scares me, since even if we do bring them up as humans, they might be inclined to, well....

Pull a Khan.

As it were.

Gatac 01-06-2005 11:00 PM

Actually, Sa'ar, one thing that gives me great solace as an Agnostic/borderline Atheist is that if something goes wrong, I do not have to rationalise it. The universe is, to me, intrinsically amoral; if I believe otherwise, how do I explain it when the universe rewards or punishes me for no apparent reason? If it's random, it can do whatever the hell it wants; if it's not, then why does it not do what can be inferred from the laws it's supposed to follow?

Gatac

Michiel 01-06-2005 11:38 PM

Exactly.

Quote:

Originally Posted by Marcus (Babylon 5)
You know, I used to think it was awful that life was so unfair. Then I thought, wouldn't it be much worse if life were fair, and all the terrible things that happen to us come because we actually deserve them? So, now I take great comfort in the general hostility and unfairness of the universe.


Xeroc 01-07-2005 12:02 AM

What do I believe but can't prove?

That we as humans will eventually improve ourselves, and fix all the problems of society and our world.


In Response To Kira:

I agree entirely with the ideas of advanced medicine; in fact, I'm in that field: I'm currently a Biotechnology student!


In Response To The Whole AI Debate:

I believe in something entirely different.
We don't really need AI; what I think would be most powerful is a combination of life and computers, to create a being more powerful than either alone. I believe that by integrating technology into ourselves we will be able to overcome many problems and accomplish more than was ever possible, or even imaginable.
(And by powerful, I don't mean physically or empirically alone.)
If this is accomplished, it eliminates the entire notion of machines conquering us. If we are the machinery, we have nothing to worry about! (Additionally, it would be kind of shortsighted to make something that destroyed you, now wouldn't it?)


In Response To Sa'ar's (and other's) Comments About Athiesm, etc.:

I am not an atheist.
But I do not believe in God.
I believe the universe is made of more than just what we see now.
But I don't believe it is fair, either.
I believe we have a life-force that extends beyond us, and we exist after we die.
But I don't believe in heaven, hell, or in a classical "soul".
I do not believe the universe is entirely random.
But I don't believe it is entirely logical, either.
I don't believe we have anyone to blame for the actions of the universe.
But I don't think we need anyone to blame.
I don't believe we have entire control over our lives, or that we deserve everything put upon us.
But I <u>do</u> believe we should use all the control we have, because I believe it is enough to accomplish anything.

Hotaru 01-07-2005 12:37 AM

I believe I was too lazy to read all that you posted :P

Short: I believe in God.

Long: I believe in God, the father almighty, creator of heaven and earth. I believe in Jesus Christ, his only son, our lord. I believe in the Holy Spirit, the holy Catholic church, the communion of saints, the forgiveness of sins, the resurrection of the body, and life everlasting.

and that's the Apostles' Creed with the history of Jesus cut out.

KillerGodMan 01-07-2005 01:37 AM

Simple Answer: God

Not so simple and quite silly answer: The Screen Actors Guild rules the United States

Another answer: Someone who isn't God is watching and controlling us


All times are GMT.

Powered by vBulletin® Version 3.8.2
Copyright ©2000 - 2024, Jelsoft Enterprises Ltd.