The Five-Minute Forums  

#1
01-06-2005, 01:11 PM
Zeke
The lens that flares in the night
Administrator
Join Date: Apr 2004
Location: Ottawa, ON
Posts: 3,398
January 5



Today we have a link that Kira recommended: The Edge's World Question Centre. Every year, a group called the Edge Foundation poses a question and then archives some of the better-known respondents' answers, generally from scientists and other members of publisher John Brockman's "third culture." This year's question is a good one: "What do you believe is true even though you cannot prove it?" I don't usually do comment threads for link days, but I think it might be interesting to see what sort of answers 5MV's readers give for this question. (I'm certainly as interested in hearing from you guys as I am in hearing from this third culture thing.)
__________________
FiveMinute.net: because stuff is long and life is short

[03:17] FiveMinZeke: Galactica clearly needs the advanced technology of scissors, which get around the whole "yanking on your follicles" problem.
[03:17] IJD: cylons can hack any blades working in conjunction

#2
01-06-2005, 01:32 PM
Flixibixi
Member
Join Date: Nov 2004
Posts: 38

I believe that with infinite realities and infinite possibilities, everything exists in infinite forms. Meaning that in infinite realities, what happens on this board is a TV show or a movie or a novel or the basis of a lullaby.

Of course, the reverse would also be true. In infinite realities, everything that's happened in Star Trek actually happened in real life, even the bits that contradicted other bits.

So every time we create something creative, we are actually crossing dimensions and translating the information we receive from them into a format we can then understand.

...oh god, then every unspeakably bad fanfic actually happened...in INFINITE REALITIES.

Just think about that a bit more. It'll all sink in, and you'll feel all-powerful and all-guilty at the same time. :twisted:
__________________
http://www.ilovebeets.com/

Because bees would suffocate after being canned and labelled.

#3
01-06-2005, 02:48 PM
Michiel
Last of the ko fighters
Member
Join Date: Mar 2003
Location: Netherlands
Posts: 523

I believe, but cannot prove, that the human mind/brain is nothing more than a biological/chemical computer and that true AI is possible.

In other words, I do not believe in a spirit/soul that makes human intelligence unique. The reason we have not yet been able to reproduce it is that it is a very complex system, one that took billions of years of evolution to develop.

However, some day, we will be able to. (Let's just hope that the AIs will not discover what useless and inefficient creatures humans are. )
__________________
The strength of a civilization is not measured by its ability
to wage wars, but rather by its ability to prevent them.
- Gene Roddenberry

#4
01-06-2005, 03:35 PM
Derek
Dean of misderektion
Senior Staff
Join Date: Mar 2003
Location: Sector 001
Posts: 1,106

Quote:
Originally Posted by Michiel
I believe, but cannot prove, that the human mind/brain is nothing more than a biological/chemical computer and that true AI is possible.
That's funny because I believe, but cannot prove, that AI will never be like the human brain.

I think a computer is very good at logic and rule-following, but I see no basis for a computer to have emotions, a subconscious, or any sort of intuition. Because of this lack, giving an AI an ego will be very hard, as will giving it the ability to choose to be loyal, kind, or brave. Sure, you can program it to be any of those things, but it will just be following the rules you have set up for it.

Also, I don't think an AI will ever have the sort of instincts that a human has. I don't see an AI actually having a self-preservation instinct or a mothering instinct or anything like that, at least, not without them being programmed in, and that doesn't count.

On the other hand, I do believe that AI will get very sophisticated. It will eventually be able to understand human language and respond in kind. It will be able to recognize faces and read expressions. It will be able to figure out what you're trying to do in a word processor and help you far more effectively than any paperclip ever could. Basically, I think computers will meet and surpass the level of the Enterprise-D's computer, maybe even to the point where people will call the computer alive, but I don't think it will ever be like the human brain.
__________________
"Please, Aslan," said Lucy, "what do you call soon?"
"I call all times soon," said Aslan; and instantly he vanished away and Lucy was alone with the Magician.

#5
01-06-2005, 04:01 PM
Gatac
Man in the iron mask
Member
Join Date: Feb 2004
Location: Magdeburg, Germany
Posts: 667

I believe that AI is possible on the basis of neural net theory. However, I believe it will be vastly different from human intelligence; in the end, it will understand our language, but we have no guarantee that it will understand the concepts attached to the language. That said, true, sentient AI will develop several goals we can comprehend: primarily, self-preservation and procreation. It will act on the basis of these goals.

Incidentally, I also believe that the constant disenfranchising of people in favor of concepts and organisations will lead to humanity's downfall.

Gatac
__________________
Katy: Can I have the skill 'drive car off bridge and have parachute handy'?
Justin: It's kind of a limited skill.
Greg: Depends on how often you drive off bridges.
- d02 Quotes

#6
01-06-2005, 05:12 PM
evay
But if you put the hammer in an elevator...
Member
Join Date: Apr 2004
Location: Deck Four, Section Seven
Posts: 522

The problem will not be sentient AIs; the problem will be stopping them from wiping us out as ugly bags of mostly water. We are inefficient in a way they will not be. The only hope for us is to give the AIs superego, conscience, and "souls" so that they see us as part of IDIC and not an impediment to their rule of the planet.


I believe but cannot prove there's intelligent life on planets other than this one. The universe is just too honkin' big for us to be that unique.

I firmly believe that when you do a kindness with no expectation or receipt of reward, it will come back to you in the form of someone else doing you a kindness when you least expect it. That's happened to me often enough that I could practically call it "proof."

I believe it's time for lunch. --No wait, I can prove that one.
__________________
Any truth is better than indefinite doubt. — Sherlock Holmes
"The Adventure of the Yellow Face," Arthur Conan Doyle

#7
01-06-2005, 05:56 PM
mudshark
Is he ever gonna hit Krazy Kat, or what?
Member
Join Date: Mar 2003
Location: UMRK
Posts: 1,738

Quote:
Originally Posted by Derek
Also, I don't think an AI will ever have the sort of instincts that a human has. I don't see an AI actually having a self-preservation instinct or a mothering instinct or anything like that, at least, not without them being programmed in, and that doesn't count.
Hmm. Without programming, how could you ever have AI to begin with?

Concerning self-preservation, at least, Asimov's Third Law would seem to have this covered.
Quote:
1. A robot may not harm a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
Not saying that AI would necessarily have to be governed by these very Laws, but you'd have to establish some sort of framework, wouldn't you?
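
Just to make the "framework" idea concrete, here's a toy sketch (this is not how Asimov imagined it working, and every name and predicate below is invented purely for illustration): treat the laws as an ordered list of checks, and have the robot pick whichever candidate action violates the highest-priority law the least.
Code:
# A toy sketch of a priority-ordered rule framework, not Asimov's actual Laws
# (which have subtler exception clauses). Every name here is invented for
# illustration. Each rule returns True if the candidate action violates it.
RULES = [
    ("harms a human",       lambda action: action.get("harms_human", False)),
    ("disobeys an order",   lambda action: action.get("disobeys_order", False)),
    ("endangers the robot", lambda action: action.get("endangers_self", False)),
]

def violation_profile(action):
    """Violations in priority order; tuples of booleans compare lexicographically."""
    return tuple(check(action) for _, check in RULES)

def choose(candidates):
    """Pick the candidate whose highest-priority violation is least serious."""
    return min(candidates, key=violation_profile)

# Example: shielding a human endangers the robot (Third Law), but standing
# by lets the human come to harm (First Law), so the robot shields.
candidates = [
    {"name": "shield the human", "endangers_self": True},
    {"name": "stand by",         "harms_human": True},
]
print(choose(candidates)["name"])   # -> shield the human
The point of the ordering is that a Third Law concern ("endangers the robot") can never outweigh a First Law concern ("harms a human"), which is exactly the kind of built-in hierarchy the Laws describe.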

Quote:
Originally Posted by evay
The problem will not be sentient AIs; the problem will be stopping them from wiping us out as ugly bags of mostly water. We are inefficient in a way they will not be. The only hope for us is to give the AIs superego, conscience, and "souls" so that they see us as part of IDIC and not an impediment to their rule of the planet.
See again the "Three Laws of Robotics" above. The addition of the "Zeroeth Law" would probably be a whole 'nother question.
Quote:
I believe but cannot prove there's intelligent life on planets other than this one. The universe is just too honkin' big for us to be that unique.
I'd be in complete agreement with you on that one -- it's statistically unlikely, to say the least, that we'd be alone in the Universe.


Edit: spelling brain-fade. Any other errors in this post are entirely my fault.
__________________
Methinks Ted Sturgeon was too kind.

'Yes, but I think some people should be offended.'
-- John Cleese (on whether he thought some might be offended by Monty Python)

#8
01-06-2005, 06:49 PM
MaverickZer0
Suuuuuper genius
Member
Join Date: Feb 2004
Location: On Beach Street, in a Dimensional Area
Posts: 745

It would be next to impossible to program those laws into a truly sentient AI. If, through whatever fluke, a robot could actually think, it could simply choose not to follow the laws.

Of course, then the debate over AI/robot souls would start, which is probably the only reason they haven't started working on them yet.

I believe in the capability for a true AI (obviously), and in alternate dimensions as well. Therefore, somewhere out there, there are bioroids.

That could be either a good thing or a bad thing, depending on that 'infinite possibilities' deal.
__________________
Sig v8.2.2

No, I don't know what I'm doing, but I'm going to go and do it anyway.

*pokes avatar* Made by a good LJ friend. Thanks Ani!

Dark Blues: I'm going to kill you!
Enzan: Not if I kill me first!
Dark Blues: You...are aware my goal is accomplished either way, right?
Enzan: ...Yeah...

#9
01-06-2005, 07:27 PM
evay
But if you put the hammer in an elevator...
Member
Join Date: Apr 2004
Location: Deck Four, Section Seven
Posts: 522

Quote:
Originally Posted by MaverickZer0
It would be next to impossible to program those laws into a truly sentient AI. If, through whatever fluke, a robot could actually think, it could simply choose not to follow the laws.
Of course, then the debate over AI/robot souls would start,
Which is the reason I said that giving an AI a soul/conscience/superego is the best way to keep it/them from turning on us. About.com's Julia Houston once wrote: "It's a simple Trek truth: create something sentient and it will do what it wants, not what you want." As human beings, we are all sentient, and we have the ability to do what we want. The reason we don't all embark on murderous rampages hourly is that we've been taught that it's bad -- we've learned empathy, we've learned conscience.

I posit that to prevent, for example, another V.I.K.I. (the AI from I, Robot that decided Robot was superior to Man), we would have to rear the AIs as though we were rearing children: love them, teach them, guide them, discipline them. Sociopaths have no empathy; that's why serial killers are usually sociopaths. They have no concept of the emotions of others, and don't care. I think our best hope to keep AIs from becoming sociopathic is to teach them empathy. Make them more human, in other words.
__________________
Any truth is better than indefinite doubt. — Sherlock Holmes
"The Adventure of the Yellow Face," Arthur Conan Doyle

#10
01-06-2005, 07:28 PM
Sa'ar Chasm
Our last, best hope for peace
Staff
Join Date: Mar 2003
Location: Sitting (in Ottawa)
Posts: 3,425

Short answer: nothing.

Silly answer: the world is secretly run by a group of super-intelligent lemmings.
__________________
The first run through of any experimental procedure is to identify any potential errors by making them.

#11
01-06-2005, 07:33 PM
Derek
Dean of misderektion
Senior Staff
Join Date: Mar 2003
Location: Sector 001
Posts: 1,106

Quote:
Originally Posted by mudshark
Hmm. Without programming, how could you ever have AI to begin with?
Heh. True. My point was that an AI has to be true to its programming, and so I didn't think that an AI would just randomly adopt new instincts that it hadn't been programmed with.

In other words, that cliched plot where the computer becomes intelligent and the first thing it does is destroy everyone who tries to pull the plug on it seems a bit hokey to me since very often the computer wouldn't have been programmed for self-preservation (at least, that's not a priority in the programs I write).

Similarly, evay, I don't believe an AI would try wiping out all of humanity because it is wasteful and inefficient unless it were programmed to do so, or at least, programmed to think wastefulness and inefficiency were bad. And I don't believe it would have any ambition to rule the planet unless it were designed to do so or at least programmed to think that was a desirable goal.
__________________
"Please, Aslan," said Lucy, "what do you call soon?"
"I call all times soon," said Aslan; and instantly he vanished away and Lucy was alone with the Magician.

#12
01-06-2005, 08:17 PM
Kira
Annoy, tiny blonde one! Annoy like the wind!
Former Staff
Join Date: Feb 2003
Location: Mars
Posts: 780

(Sa'ar is totally this guy.)

Not surprisingly, most of mine are closely related to my field of research.

I believe that gene therapy will be a reality in my lifetime for diseases such as diabetes and cystic fibrosis.

I believe that we will have genetically-based cancer therapies in the next ten to twenty years that will be far more specific and effective than current chemotherapy (and I intend to be among those helping that process along). Our knowledge of the massive range of genetic anomalies that lead to cancer is constantly expanding, and combined with advances in genetics and gene therapy, I believe therapies targeted to particular genotypes of cancer are within our reach.

To the AI debate... what he said. I believe that true artificial intelligence is possible, though it may be biological rather than technical. Our brains are a complex network of (relatively) simple parts, namely neurons. For now we are limited by our understanding of the brain: of neuronal communication, and of concepts such as memory and personality. But eventually, I believe, science will catch up and, in some form, we will be able to create an artificial neural network. Whether it will be "sentient"... only time will tell.
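
If it helps to picture what "a network of simple parts" means, here is about the smallest possible sketch (purely illustrative, with hand-picked weights rather than anything learned from data): a single artificial neuron that fires when the weighted sum of its inputs crosses a threshold.
Code:
# The smallest possible sketch of an artificial neuron: a weighted sum
# followed by a threshold. Weights here are hand-picked for illustration;
# a real network stacks many of these and learns the weights from data.
def neuron(inputs, weights, threshold):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With these weights this one neuron behaves like a logical AND gate:
# it only "fires" when both inputs are active.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], weights=[0.6, 0.6], threshold=1.0))
The hard part, of course, and the part nobody can do yet at anything like brain scale, is wiring billions of these together and getting the weights right.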

I also agree with several of the answers in the article: namely, that cells can be changed from one type to another, that life in the universe is ubiquitous and we will discover that microbial life exists elsewhere in the galaxy, that software is limiting computers, that my dog has feelings, that cattle prods support the existence of electrons, and that we are all climatically screwed.
__________________
\"It\'s all fun and games until one of you gets my foot up your ass.\"
--Veronica Mars

#13
01-06-2005, 09:07 PM
Chancellor Valium
Reasonably priced male pills
Member
Join Date: Sep 2004
Location: Rhen Var, sitting on a radiator...
Posts: 4,595

Short Answer: God.
Alternative answer: That David Icke is a charlatan - no, wait, that's just obvious :P
Silly Answer: that the Estonian Government are smuggling cheesecake into New Mexico as part of an evil scheme, possibly involving the Daleks :mrgreen:
Yet another answer: that the problem with AI is when we give them personalities - no, wait, look at Marvin in the HH's Guide, or Eddie, on the Heart of Gold. Scary, but possibly true. Oh, and I agree with evay.
Yes, I did decide to rip off Sa'ar. So what? I can't be bothered to be original; that takes effort :P
__________________
O to be wafted away
From this black aceldama of sorrow;
Where the dust of an earthy today
Is the earth of a dusty tomorrow!

#14
01-06-2005, 09:57 PM
Sa'ar Chasm
Our last, best hope for peace
Staff
Join Date: Mar 2003
Location: Sitting (in Ottawa)
Posts: 3,425

Quote:
(Sa'ar is totally this guy.)
Please. I couldn't grow a beard like that if I tried until I was his age.

I'm only mostly that guy.

Worst thing about being an atheist: nobody to blame when life craps in your face.
__________________
The first run through of any experimental procedure is to identify any potential errors by making them.

#15
01-06-2005, 10:23 PM
MaverickZer0
Suuuuuper genius
Member
Join Date: Feb 2004
Location: On Beach Street, in a Dimensional Area
Posts: 745

Quote:
Originally Posted by Sa'ar Chasm
Worst thing about being an atheist: nobody to blame when life craps in your face.
Well, you can blame your parents for making you be born.

And now that I'm not at school and have had more time to think about it, I can give another answer too. I believe that genetic engineering, as in breathing life into a chemically created sequence of DNA, is or will be possible. That also scares me, since even if we do bring them up as humans, they might be inclined to, well....

Pull a Khan.

As it were.
__________________
Sig v8.2.2

No, I don't know what I'm doing, but I'm going to go and do it anyway.

*pokes avatar* Made by a good LJ friend. Thanks Ani!

Dark Blues: I'm going to kill you!
Enzan: Not if I kill me first!
Dark Blues: You...are aware my goal is accomplished either way, right?
Enzan: ...Yeah...

#16
01-06-2005, 11:00 PM
Gatac
Man in the iron mask
Member
Join Date: Feb 2004
Location: Magdeburg, Germany
Posts: 667

Actually, Sa'ar, one thing that gives me great solace as an Agnostic/borderline Atheist is that if something goes wrong, I do not have to rationalise it. The universe is - to me - intrinsically amoral; if I believe otherwise, how do I explain it if it rewards or punishes me for no apparent reason? If it's random, it can do whatever the hell it wants, if it's not, then why does it not do what can be inferred from the laws it's supposed to follow?

Gatac
__________________
Katy: Can I have the skill 'drive car off bridge and have parachute handy'?
Justin: It's kind of a limited skill.
Greg: Depends on how often you drive off bridges.
- d02 Quotes

#17
01-06-2005, 11:38 PM
Michiel
Last of the ko fighters
Member
Join Date: Mar 2003
Location: Netherlands
Posts: 523

Exactly.

Quote:
Originally Posted by Marcus (Babylon 5)
You know, I used to think it was awful that life was so unfair. Then I thought, wouldn't it be much worse if life were fair, and all the terrible things that happen to us come because we actually deserve them? So, now I take great comfort in the general hostility and unfairness of the universe.
__________________
The strength of a civilization is not measured by its ability
to wage wars, but rather by its ability to prevent them.
- Gene Roddenberry

#18
01-07-2005, 12:02 AM
Xeroc
Not to be confused with Kodax
Member
Join Date: Aug 2004
Location: The Universe
Posts: 4,230

What do I believe but can't prove?

That we as humans will eventually improve ourselves, and fix all the problems of society and our world.


In Response To Kira:

I agree entirely with the ideas about advanced medicine; in fact, I'm in that field, as I'm currently a biotechnology student!


In Response To The Whole AI Debate:

I believe in something entirely different:
that we don't really need AI. What I think would be most powerful is a combination of life and computers, creating a being more powerful than either alone. I believe that by integrating technology into ourselves, we will be able to overcome many problems and accomplish more than was ever possible, or even imaginable.
(And by powerful, I don't mean physically or empirically alone.)
If this is accomplished, it eliminates the entire notion of machines conquering us. If we are the machinery, we have nothing to worry about! (Additionally, it would be kind of shortsighted to make something that destroyed you, now wouldn't it?)


In Response To Sa'ar's (and other's) Comments About Athiesm, etc.:

I am not an atheist.
But I do not believe in God.
I believe the universe is made of more than just what we see now.
But I don't believe it is fair, either.
I believe we have a life-force that extends beyond us, and we exist after we die.
But I don't believe in heaven, hell, or in a classical "soul".
I do not believe the universe is entirely random.
But I don't believe it is entirely logical, either.
I don't believe we have anyone to blame for the actions of the universe.
But I don't think we need anyone to blame.
I don't believe we have entire control over our lives, or that we deserve everything put upon us.
But I do believe we should use all the control we have, because I believe it is enough to accomplish anything.
__________________
Truer words were never spoken.

Xeroc Central

5MChat: PHP/JS Chat 2.0
Click here to view the chat in progress!

#19
01-07-2005, 12:37 AM
Hotaru
Lone Ranger
Member
Join Date: Nov 2003
Location: Where it's cold in winter and hot in summer, yet not warm in spring.
Posts: 1,041

I believe I was too lazy to read all that you posted :P

Short: I believe in God.

Long: I believe in God, the Father almighty, creator of heaven and earth. I believe in Jesus Christ, his only Son, our Lord. I believe in the Holy Spirit, the holy catholic Church, the communion of saints, the forgiveness of sins, the resurrection of the body, and life everlasting.

...and that's the Apostles' Creed with the history of Jesus cut out.

#20
01-07-2005, 01:37 AM
KillerGodMan
More something than this cat
Member
Join Date: Sep 2004
Location: Ontario
Posts: 1,689

Simple Answer: God

Not so simple and quite silly answer: The Screen Actors Guild rules the United States

Another answer: Someone who isn't God is watching and controlling us
__________________
-KillerGM

Well I guess I'll just live WITHOUT an avatar then!