Newcomb's paradox
Timothy Chow
post Sep 17, 2009, 07:15 PM
Post #1


Given that it seems possible to detect someone's intention to, say, push a button a fraction of a second before he or she is "aware" of having that intention, has anyone attempted to carry out Newcomb's paradox in the lab?

In Newcomb's paradox, there are two boxes, Box A and Box B. Box A visibly contains $100 (for example). Box B is opaque and contains either $0 or $200 (the way in which the contents of Box B will be determined will be explained shortly).

There is a clock that counts down to zero. As soon as the clock shows zero, the subject must press one of two buttons, Button 1 or Button 2. If the subject presses Button 1, then the subject earns the contents of both Box A and Box B (i.e., either $100 or $300 depending on what happens to be in Box B). If the subject presses Button 2, then the subject earns the contents of Box B only, and forgoes the $100 in Box A. If the subject does not press either button (or presses both buttons) before the deadline passes, then the subject gets nothing.

Now for the twist. Whether Box B contains $0 or $200 is decided by a machine that, at the instant the clock shows zero, puts $0 into Box B if it predicts that the subject will press Button 1, and puts $200 into Box B if it predicts that the subject will press Button 2.

The tricky part of the setup is to adjust the length of the deadline. It must be long enough that the subject has the impression that he or she is able to make a decision after the clock shows zero, but it must be short enough so that the machine can reliably predict which button the subject is about to press.
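
For concreteness, here is a minimal sketch of the trial logic in Python. Everything in it is a stand-in: the function name, the 0.9 accuracy figure, and especially the coin flip, which substitutes for the machine's actual prediction from preparatory neural activity.

```python
import random

BOX_A = 100  # the visible amount in Box A

def run_trial(subject_choice, accuracy=0.9):
    """Simulate one trial. subject_choice is 1 (Button 1: take both boxes)
    or 2 (Button 2: take Box B only). `accuracy` is the assumed probability
    that the machine predicts the button press correctly."""
    correct = random.random() < accuracy
    predicted = subject_choice if correct else 3 - subject_choice  # flip 1 <-> 2
    box_b = 0 if predicted == 1 else 200  # $0 if it predicts Button 1, else $200
    return BOX_A + box_b if subject_choice == 1 else box_b
```

Averaging run_trial over many trials for each choice makes the stakes of the deadline calibration explicit: the more reliable the machine's prediction, the better pressing Button 2 pays.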

I believe that this experiment would be a very powerful contribution to our understanding of voluntary action, and judging from how many papers have been published on Newcomb's paradox, I'm sure it would be very interesting to philosophers. I tried emailing Patrick Haggard, urging him to perform this experiment, but since I'm a nobody, not surprisingly he ignored my email.
Rick
post Sep 21, 2009, 11:24 AM
Post #2


Why not do the experiment yourself? Google NSF grants and write a proposal.
Hey Hey
post Sep 21, 2009, 12:07 PM
Post #3


decided by a machine that ....... if it predicts

How will it predict/decide? Based on what?
Rick
post Sep 21, 2009, 02:18 PM
Post #4


Based on machine-detectable neural activity. That's the premise: that a person decides before he is aware of it. The paradox is that one might be able to train himself to consciously decide against what he has already detectably decided (or something like that).
astroidea
post Oct 05, 2009, 10:30 AM
Post #5


QUOTE(Rick @ Sep 21, 2009, 03:18 PM) *

Based on machine-detectable neural activity. That's the premise: that a person decides before he is aware of it. The paradox is that one might be able to train himself to consciously decide against what he has already detectably decided (or something like that).


I don't get how he could train himself to do that, given that there is a machine that can detect the decision before the decider is conscious of it.

In fact, I don't get Newcomb's paradox at all. I just read about it on Wikipedia and a few other sites after reading this thread. Why would anyone choose anything but box B? All of the arguments for choosing both boxes seem to conveniently ignore the premise that the predictor is never wrong.

I'm confused; would someone care to explain to me how choosing both boxes can be just as rationally sound as choosing just box B at maximizing winnings?

Hey Hey
post Oct 05, 2009, 10:37 AM
Post #6


Surely this whole "delayed awareness" issue comes down to neuronal transmission speeds being slower than those of electronic or photonic devices. Haven't we all been expecting an enhanced (higher-velocity) transmission system to be integrated into the brain somehow in the future, if only via memory chips at first, then expanding into a fully inorganic brain with wifi (telepathy and universal knowledge and intelligence) and whatever else your imagination can muster? Then again, that would be a transhuman and not a human, since a human has the property of being organic. Would that creature still be considered a "lifeform"? But I digress.
Rick
post Oct 06, 2009, 09:05 AM
Post #7


Human-ness is in your actions and beliefs, not in your body. Most of those Homo sapiens walking around have not yet become human.
Hey Hey
post Oct 06, 2009, 01:09 PM
Post #8


QUOTE(Rick @ Oct 06, 2009, 06:05 PM) *

Human-ness is in your actions and beliefs, not in your body. Most of those Homo sapiens walking around have not yet become human.
Rick, I can't agree. Every thought we have, every action we take, is due to our body. Whether due to neurotransmitters, hormones, drugs, pollutants, toxins, etc., we have a response that is due to them. And as they all influence the organic, our body IS our actions, beliefs, and everything else. Unless you were alluding to some superorganic characteristic? But that would be going down the path of the supernatural, wouldn't it? And I know you're not into that stuff. Unless you've changed your mind.
Rick
post Oct 06, 2009, 01:50 PM
Post #9


So it's the old "your mind is your body" trick. I can't refute that. So maybe I should modify my statement to say that your humanness is in your brain, not your DNA. OK?
Hey Hey
post Oct 06, 2009, 04:09 PM
Post #10


QUOTE(Rick @ Oct 06, 2009, 10:50 PM) *

So it's the old "your mind is your body" trick. I can't refute that. So maybe I should modify my statement to say that your humanness is in your brain, not your DNA. OK?
:)
Enki
post Oct 08, 2009, 04:20 AM
Post #11


QUOTE(Timothy Chow @ Sep 17, 2009, 07:15 PM) *

I believe that this experiment would be a very powerful contribution to our understanding of voluntary action, and judging from how many papers have been published on Newcomb's paradox, I'm sure it would be very interesting to philosophers. I tried emailing Patrick Haggard, urging him to perform this experiment, but since I'm a nobody, not surprisingly he ignored my email.


I recommend that you write to him once again. Tell him that Enki from BrainMeta recommended you re-send the letter.

>I'm a nobody, not surprisingly he ignored my email.

Why?! To me, you are Timothy Chow. For that matter, Patrick Haggard is a nobody to me if he didn't care to respond to your letter.
Paul King
post Aug 21, 2010, 12:47 PM
Post #12


QUOTE(astroidea @ Oct 05, 2009, 11:30 AM) *
In fact, I don't get Newcomb's paradox at all. I just read about it on Wikipedia and a few other sites after reading this thread. Why would anyone choose anything but box B? All of the arguments for choosing both boxes seem to conveniently ignore the premise that the predictor is never wrong.

In the original version of Newcomb's paradox, the predictor is only right most of the time, say 90%.

If the predictor is always right and people witness this themselves, most everyone would just pick box B as you say.

If the predictor is only right 90% of the time (or even just 51%), many people would say "the die has already been cast": whatever prediction the predictor made, taking both boxes still adds the $100 from Box A. However, the expected-value math can still say box B alone is better; how accurate the predictor needs to be depends on the payoff amounts (see the sketch below).
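
To make "the math" concrete, here is the expected-value arithmetic as a short Python sketch. This is a back-of-the-envelope calculation using this thread's $100/$200 amounts, which are much smaller than the classic $1,000/$1,000,000 version.

```python
def expected_values(p):
    """Expected winnings against a predictor that is right with probability p,
    with $100 visible in Box A and $200 or $0 in Box B."""
    one_box = p * 200                          # predicted Button 2: Box B holds $200
    two_box = p * 100 + (1 - p) * (100 + 200)  # predicted Button 1: Box B is empty
    return one_box, two_box

for p in (0.51, 0.75, 0.90):
    print(p, expected_values(p))
# 0.51 -> (102.0, 198.0)   two-boxing wins
# 0.75 -> (150.0, 150.0)   break-even
# 0.90 -> (180.0, 120.0)   one-boxing wins
```

With these amounts the break-even accuracy is 75%; with the classic $1,000 visible and $1,000,000 hidden, the same formulas put the break-even just above 50%, which is why even a 51% predictor is said to favor box B alone.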

Some people want to "fake out" the predictor by imagining that they will take both boxes, but at the last minute changing their mind. But wouldn't the predictor predict this too?

Newcomb's paradox is really about conflicting beliefs on the nature of causality and agency. Do you trust the world as you believe it to work and take both boxes? Or do you trust what appears to be working even though it contradicts your beliefs and take the one box?
