Archive for the ‘Lectures’ Category

Pascal Continued, and Review

December 7, 2011

Pascal Continued

Recall that Pascal is looking for good reasons to believe that God exists.  Normally in this class we’ve considered good reasons to be either a priori or a posteriori.  Both of these are kinds of evidential reasons.  Pascal realizes that there is basically no evidential reason to believe that God exists, but he thinks he can still come up with a good reason to believe.  He does this by looking at prudential reasons.

A reason that appeals to what’s in your best interest is a prudential reason.  A reason that appeals either to a priori or a posteriori evidence is an evidential reason.

Prudential reasons are offered in support of actions.  Pascal says, “Hey, let’s think of believing as a kind of action, and see if there are prudential reasons in favor of it!  That would be neat, because then there would be good reasons to believe in God, even without evidence!”

So, Pascal’s argument is distinct among arguments for God in that it grants the atheist that there is no evidence for God’s existence.

Review

Expected Utility (p. 303-304)

The general formula for finding Expected Utility (or Expected Monetary Value) is:

Pr(win) x (net gain)  –  Pr(lose) x (net loss)

The probability that you win is calculated based on the rules of probability, Ch. 11, and the nature of the bet.  So, if you are betting on what two cards you will draw consecutively and without returning the first, then you use Rule 2G, Conjunction in General.  If you are betting on which single card you will draw in one shot, then you simply have a probability of winning equal to 1/52.  If you are betting that you draw one card or another, then you use Rule 3, Disjunction with Exclusivity.

Calculate the probability that you lose by doing 1 – Pr(win).  This is Rule 1, Negation.

The net gain is the payoff of the bet minus the cost of the bet.  So, if the payoff is $26, and the bet is $1, then the net gain is $25.

The net loss is usually just the cost of the bet.  If you make a $1 bet, and lose, then the net loss is just $1.
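
If you want to check the arithmetic by computer, here is a minimal Python sketch. It is not part of the lecture; it just plugs the example numbers above (a $1 bet on drawing one specific card, with a $26 payoff) into the formula.

```python
# Minimal sketch (not from the lecture): the Expected Utility formula
# EU = Pr(win) x (net gain) - Pr(lose) x (net loss)

p_win = 1 / 52            # betting on one specific card in a single draw
p_lose = 1 - p_win        # Rule 1, Negation

payoff = 26               # what you are paid if you win
cost = 1                  # what the bet costs you
net_gain = payoff - cost  # $25
net_loss = cost           # $1

expected_utility = p_win * net_gain - p_lose * net_loss
print(round(expected_utility, 2))  # about -0.5: on average this bet loses 50 cents
```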

Bayes Theorem (p. 291 – 297)

In order to calculate the Pr(h|e), you need three other numbers.  Recall that Pr(h|e) is the probability that some belief/hypothesis is true, given some evidence/observation.  For example, if you get a positive test result for cancer, then you want to know the probability that you have cancer, h, given a positive test result, e:  Pr(h|e).

The three numbers you need to know:

1. Pr(h): The base rate.  This is the probability that any random asymptomatic person in your age range has cancer.  No test result information is included in this number.

2. Pr(e|h): The sensitivity or reliability of the test.  This is a fact about the test.  Assuming some person has cancer, how probable is it that the test will deliver a positive result?  This is the true positive rate.  You can also calculate this number if you are given the false negative rate.  The false negative rate is the Pr(~e|h), the probability that the test comes back negative, given that the person has cancer.  1 – the false negative rate = the true positive rate.

3. Pr(e|~h): The false positive rate.  Also known as the doozy.  This is when the test tells you that you have cancer, but you don’t actually have cancer.

With these three numbers, you can calculate, or better yet estimate with pretty good accuracy, the chance that you have cancer, given a positive test result.  This does not apply only to cancer, though.  It applies to any sort of similar reasoning.  For example, this same Bayes Theorem can help one reason about positive results to a home pregnancy test, a DNA test, and even legal cases.

With the above three numbers, you can use the tree method to estimate the Pr(h|e).  Begin with Pr(h).  It will usually be a pretty small number, like .007.  This number can be expressed by saying “seven thousandths”.  This means 7 out of 1,000.  So, let’s imagine that we have a sample size of 1,000 people.

1,000

h: 7                                                      ~h: 993

Next, we look at Pr(e|h).  It tells us, of the people that have cancer (or whatever), how many will get a positive test result?  It is usually a pretty high number, like .9, but it doesn’t have to be.  Suppose it is .9.  This means the test is 90% reliable.  That’s pretty good, but it’s not the whole story, as we’ll see.  Of the 7 with the cancer, about 6 will get a positive test result, where 6 is roughly 90% of 7.  This means the other person will get a false negative.

+: 6                -: 1

Then, we use the Pr(e|~h), the false positive rate.  Of the people who do not have the cancer, how many will get a positive test result anyway?  This is usually a pretty small number, like .03.  So, of the 993 who do not have cancer, about 30 of them will get a positive test result. The rest, 993 – 30 = 963, of the people without cancer will get a negative result.

+: 6                 -: 1                       +: 30                   -: 963

Now, we wanted to know the chance we have cancer, given a positive test result.  So, of all the people who get positive test results, how many actually have cancer?  6 people out of 30 + 6 = 36, or 6/36, or 1/6.  So, with those three numbers that we were given, our chance of having cancer, given a positive test result, is 1/6, or about 17%.  So, Pr(h|e) = 1/6.  That’s pretty low, so there’s no reason to freak out yet.  The thing to do is get a second test done.  Two false positives in a row are extremely unlikely, and three in a row are even more unlikely.
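
If you would like to double-check the tree with a computer, here is a minimal Python sketch (not part of the lecture) that redoes the same calculation with the illustrative numbers used above: Pr(h) = .007, Pr(e|h) = .9, Pr(e|~h) = .03.

```python
# Minimal sketch (not from the lecture): the tree/frequency method above.
population = 1000
base_rate = 0.007       # Pr(h): the base rate
sensitivity = 0.9       # Pr(e|h): the true positive rate
false_positive = 0.03   # Pr(e|~h): the false positive rate

have = population * base_rate          # about 7 people with the condition
not_have = population - have           # about 993 without it

true_pos = have * sensitivity          # about 6 of the 7 test positive
false_pos = not_have * false_positive  # about 30 of the 993 test positive anyway

p_h_given_e = true_pos / (true_pos + false_pos)
print(round(p_h_given_e, 2))           # about 0.17, i.e. roughly 1 in 6
```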

Will I need a calculator?   NO.

Can I use one anyway?  Sure.

Remember that all the key concepts from earlier exams, like validity, soundness, IBE, induction, philosophical arguments, informal fallacies, etc., are fair game on this exam.

The exam is structured like this: Part 1: Bayes Theorem, Expected Utility, Pascal’s Wager. 13 questions, multiple choice, with some worth 2 points where you show your work.

Part 2: Cumulative.  20 questions, mostly multiple choice, a few true/false, a couple fill in the blanks, and a couple worth 2 where you show your work.

 

Overall Grading Scale:
97.5-100 A+

92.5-97 A

90-92 A-

Etc.

Categories: Lectures, Seth Kurtenbach

Pascal’s Wager Lecture

December 1, 2011

Pascal’s Wager.

Blaise Pascal (1623-1662): French mystic and mathematician.  After his work on conditional probability, he decided to avoid gambling and moved on to other questions, such as: is it rational (logos) to believe in the existence of God?

Pascal’s way of reasoning becomes influential not only among philosophers of religion but also as a foundation of the modern theory of expected value/probabilistic decision theory.

Preliminaries:

I. Beliefs: Evidence vs. prudence.

The case of the briefcase:

I have a briefcase with $1 million.  I have another briefcase with a loaded machine gun.  To Leo I say, “I know you have no evidence that President Obama is juggling candy bars at this very moment.  Nevertheless, I want you to get yourself to believe that it’s true (with conviction).  You can use any psychological means possible (brainwashing, hypnosis, little colored pills, whatever).  Here’s the deal: if you succeed, then I’ll give you the $1 million.  Otherwise, I use my machine gun.”

Leo’s situation: he has no evidence that President Obama is juggling candy bars at this very moment.  Nevertheless he has GOOD REASON to believe it.  That good reason is not evidential (obviously) but “prudential” (because he values money and his life).

Pascal’s question: even if we have no evidence that God exists, do we have other reasons to determine whether we should believe?  Answer: yes, prudential reasons…

II. There are sometimes reasons to bet even when the outcome is improbable.

Motivation: Pascal believes there is very little evidence that God actually exists.  Nonetheless, is it reasonable to believe in his existence?

Answer: yes, when the payoff is big enough.

The case of the gambling game.

Sheng finds $1 on the floor.  Decides to play a game that Jenny devised.

Game: pay $1 to play.  If you draw the ace of spades, you get $1 million.  If you do not, you lose $1.
Odds of winning are small, 1/52.  Should Sheng still play?  Yes, of course.   How do we tell?

Expected Monetary value (p. 303-304).  (I’ll call it “expected utility” for this case)

= (the probability of winning) x (the net gain in utiles of winning) – (probability of losing) x (the net loss in utiles of losing)

Sheng’s case: 1/52 x $999,999 – 51/52 x $1 ≈ $19,230

Question for later: how to interpret this result.
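
Here is a minimal Python sketch (not part of the lecture notes) that reproduces Sheng’s calculation:

```python
# Sheng's game: pay $1 to play; win $1,000,000 on the ace of spades.
p_win = 1 / 52
p_lose = 1 - p_win

net_gain = 1_000_000 - 1   # payoff minus the $1 cost of playing
net_loss = 1               # if you lose, you are just out your $1

expected_value = p_win * net_gain - p_lose * net_loss
print(round(expected_value, 2))  # about 19229.77, i.e. roughly $19,230 per play
```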

Pascal’s Wager (in sum): Asks, suppose I have little evidence that God exists.  Nevertheless, do I have any “good reasons” to believe that God exists?  Answer: yes.  a. I have prudential (as opposed to evidential) reasons.  b. The payoff is so big.

Pascal’s wager:

The pay-off heavily favors believing in God because the reward is so good and the punishment so bad.
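
To tie this back to expected utility, here is a minimal illustrative sketch in Python, not from the lecture: the probability and the finite payoffs are hypothetical placeholders, and infinity is just one standard way of modeling a reward “so good” and a punishment “so bad.”

```python
# Hypothetical numbers only: illustrating why the payoff structure favors belief.
p_god = 0.001                 # placeholder: any nonzero probability will do
reward = float("inf")         # infinitely good reward if you believe and God exists
cost_of_belief = -10          # placeholder finite worldly cost of believing
worldly_gain = 10             # placeholder finite worldly gain of not believing
punishment = float("-inf")    # infinitely bad outcome if you disbelieve and God exists

eu_believe = p_god * reward + (1 - p_god) * cost_of_belief
eu_disbelieve = p_god * punishment + (1 - p_god) * worldly_gain

print(eu_believe, eu_disbelieve)  # inf -inf: believing wins for any nonzero p_god
```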

Categories: Lectures

Bayes Theorem Lecture

December 1, 2011

Bayes’s Theorem (pages 291–297)
It is a sad state when even physicians in the United States lack the tools to reason well. Consider this (from G. Gigerenzer’s Calculated Risks): “The probability that a woman of age 40 has breast cancer is about 1 percent. If she has breast cancer, the probability that she tests positive on a screening mammogram is 90 percent. If she does not have breast cancer, the probability that she nevertheless tests positive is 9 percent. What are the chances that a woman who tests positive actually has breast cancer?” Do you know the answer? Most doctors who were presented with this common medical situation got the answer wildly wrong (off by 80 percent!). Providing poor information leads to poor choices. Consider the possibility that a woman would take unnecessary invasive action on the basis of her or her physician not knowing how to perform this simple probability calculation. The fact is many people have made poor medical choices. So, we could even say that what you will learn in this course might save your life!
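
Here is a minimal Python sketch (not part of the lecture) that works the quoted problem with the numbers Gigerenzer gives: a 1 percent base rate, a 90 percent true-positive rate, and a 9 percent false-positive rate.

```python
# Gigerenzer's mammogram problem, worked with natural frequencies per 1,000 women.
women = 1000
with_cancer = women * 0.01             # 10 women actually have breast cancer
without_cancer = women - with_cancer   # 990 do not

true_pos = with_cancer * 0.90          # 9 of the 10 test positive
false_pos = without_cancer * 0.09      # about 89 of the 990 also test positive

answer = true_pos / (true_pos + false_pos)
print(round(answer, 3))                # about 0.092: roughly a 9 percent chance
```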

A personal story: when my spouse was pregnant with our first of two daughters, she took a standard screening test for Down syndrome and got a positive result. The doctor told us that the next step was to perform a more invasive examination. The doctors were very clear why the invasive examination was required—it carried a lower “false positive” rate. But what, we wanted to know in our nervous state, was the chance that our baby had Down syndrome? The nurse had no answer for us. We were given a pamphlet that told us not to worry because the first test had a high “false positive” rate. There’s that term again! We knew what it meant—it meant that the test sometimes indicates that there is a condition when there really isn’t. The question was: how does that rate affect the calculation for determining whether the fetus has Down? No answer. We consulted a “genetic specialist” who pulled out charts, threw some numbers at us, etc. We were confused and very frustrated. Why couldn’t someone tell us a straight answer? It was then and there, nearly ten years ago, that I decided to learn more about probability theory. When I found out that the calculation is relatively easy, I was determined to teach it to all my critical thinking classes. Our daughter, now nine, does not have Down, and the chances that she did, even with that first positive screening, were very low. I don’t recall the specific conditions that would allow us to calculate the numbers, but it is the procedure that is important, and that’s what I’ll teach you, following the method laid out on textbook pages 294–296, which involves simple charts.

As your authors indicate on page 292, the reason why even seasoned doctors fail to correctly answer questions like the breast cancer or Down syndrome case is that people tend to focus too much on the rate of true positives and ignore the rarity of the condition in the first place. In both the breast cancer and Down syndrome cases there were no other signs or symptoms that the patient had the condition, so the patient was no more likely than anyone in the general public to have it. Now, had there been other signs—for example, had the woman been given a mammogram because she felt a lump—then the initial probability (the base rate) would have been higher. Nevertheless, the important point is that determining one’s chance of having the condition given the result of the screening test requires us to use all the information available and not focus exclusively on one particular value (like the rate of true positives). Another important point is that the calculation involves a conditional probability: it is the chance that, say, our daughter would have Down’s given that she tested positive in the screening test.

On pages 292 and 293 the authors give us a little history—the theorem that allows us to calculate conditional probabilities where information changes our initial rates is due to some old guy born in the early eighteenth century. They also reveal to us the ugly formula associated with the old guy’s name. Never mind all that (unless you are a history buff). Let’s just get to the simple system that allows us to make accurate assessments (starting on page 294).

The authors do an excellent job of showing us how the simple system works. They use an example to guide us. The example concerns Wendy, who has received a positive result from colon cancer screening. The initial numbers are given on page 292. Notice that the probability that a person in the general population has colon cancer is very low, 0.003 or 0.3% or (better for quick calculations) 300/100,000. Wendy is assumed to have no other symptoms of having colon cancer—so she falls within the “general population” parameters. Had she had one of the symptoms, alas, the initial probability would have been higher.

Notice the labeling of the charts on pages 294–295. The trick is to consider all the relevant probabilities, not just one. At the very bottom of page 294 you will already get a sense for why Wendy’s chances are low. The total number of people getting colon cancer is only 300 out of 100,000. Because this is a total value for the column “Colon Cancer,” the two numbers we will plug into the cells above on the same column will add up to 300. Right away you should know that because colon cancer is rare, the chance of Wendy having colon cancer is low regardless of whether she tests positive or not. This is what my spouse and I didn’t really understand when we took the screening test for Down’s.  Use the methods described in the book to calculate the probability.  The answer should come out as roughly 9.2%, or .092-ish.

 

Categories: Lectures

Fallacies of Vacuity

Circular Reasoning

  • To remember what a circular argument is, think of a P.I.E.
  • In circular arguments, a Premise Is Equivalent to the conclusion.
  • Definition: Circular Argument
  • An argument is circular if and only if there is a premise of the argument that is equivalent to, or simply is, the conclusion of that argument.

Examples of Circular Arguments:

  • 1. Drugs that make people hallucinate should be banned.
  • Therefore, hallucinogenic drugs should be banned.
  • 2. Individuals that use the threat of violence in order to make others succumb to their demands should be tortured.
  • Therefore, terrorists should be tortured.

Question Begging

  • To remember what question begging is, think of a P.A.C.
  • In question begging arguments, Premises Assume Conclusions.
  • Definition: Question Begging
  • An argument is question begging if and only if there is a premise of the argument that assumes the conclusion of that argument without any independent reasons for accepting that premise.

Example of Begging the Question

  • Background: a debate over whether alcoholic beverages should be banned.
  • Foods and beverages that make people intoxicated should be banned.
  • Therefore, alcoholic beverages should be banned.

Self-Sealing

  • To remember what self-sealing is, think of N.E.A.T.
  • For self-sealing arguments, there is No Evidence Against Them.
  • Definition: Self-Sealing
  • An argument (position) is self-sealing if and only if no evidence can possibly be brought against it no matter what.

Three Ways to be Self-Sealing

  • 1. By universal discounting.
  • 2. By going upstairs.
  • 3. By definition.

Universal Discounting

  • dismiss every possible objection, usually in an ad hoc or arbitrary way.

Example of Self-Sealing: Universal Discounting

  • Conspiracy Theorists:
  • Suppose someone thinks that a select group of universities controls all NCAA football.
  • As evidence in support of their position, they point to the select group of universities that get ranked in the top ten year after year despite not having won a national championship within the last decade.
  • Of course, that select group has allowed some non-members to be ranked in the top ten. For example, they let Boise State into the top ten. However, that’s just to conceal their total domination over NCAA football.

Going Upstairs

  • dismiss objections as an indication that the objector is not in a position to grasp the argument, or that by objecting, the objector actually provides evidence that the argument is on the right track.

Example of Self-Sealing: Going Upstairs

  • Psychoanalysis
  • Suppose that Joe meets Fred the Freudian.
  • Fred tells Joe, “you want to sleep with your mother and kill your father.”
  • Joe replies, “That’s absurd!”
  • Fred responds, “You just aren’t aware of your Oedipus complex yet.”
  • Fred tells Joe, “all of this just shows that you really do want to sleep with your mother and kill your father.”
  • Joe replies, “Tell that to my wife!”
  • Fred responds, “maybe someday you’ll come to terms with your Oedipus complex, but your responses indicate that today is definitely not that day.”

By Definition

  • Make a substantive claim. Then, cleverly redefine a crucial term in a way that guarantees that the claim will be true. Doing so deprives the claim of any substantive content.

Example of Self-Sealing: By Definition

  • Selfishness
  • Suppose that someone claims that all human actions are selfish.
  • This is an interesting claim, but let’s try to think of some counterexamples involving self-sacrifice.
  • In response to proposed counterexamples based on self-sacrifice, a defender of the claim that all human actions are selfish might respond by saying that, in performing an act of self-sacrifice, one is simply doing what one wants to do (namely, to help others). Hence, even acts of self-sacrifice are ultimately selfish.

 

Categories: Lectures

Lecture on Relevance

November 9, 2011

Fallacies of Relevance:  Ch. 15

Premises that have no bearing on the truth of the conclusion are irrelevant.
Sometimes called Red Herrings.
A Red Herring is a fish that fox hunters would drag across a fox’s path in order to train their hounds.  The smell of the fish distracts the hounds from the smell of the fox.

Irrelevant reasons are offered to mislead or divert attention from the real issue.

A reason is relevant when it has some bearing on the truth value of the conclusion.

KINDS of IRRELEVANCE FALLACIES

1. Ad Hominem

Literally, an attack “against the person” making a claim, rather than against the claim itself.

Three subtypes of Ad Hominem:  1) Denier; 2) Silencer; 3) Dismisser

A. Denier

Denies the truth of what is claimed based on something about the person making the claim.

Some deniers are justified: “Louie is a jailhouse snitch who gets paid to testify and always perjures himself, so his testimony is probably false now.”

Others are unjustified: “The OWS folks look like a bunch of bums, so they are probably wrong.”

Ask: Does the information about the person give me reason to think the claim might be false?

If yes, then it is a justified denier – not ad hominem fallacy.
If no, then it is an unjustified denier – ad hominem fallacy.

B. Silencer

Silencers question a person’s right to speak without denying the truth of the claim.
Justified:
Random Guy: “No nukes! No nukes!”
Senator: “Would the gentleman please GTFO of the Senate Chamber?  We are in session.”

Unjustified:
Senator: “We should raise taxes!”
Other Senator: “Would the ‘esteemed colleague’ STFU?! He is the junior Senator from Wyoming!”

C. Dismisser

Dismiss the speaker as a reliable source of good information.  These do not deny the claim, but seek to undermine its support.

Justified: When the speaker lacks integrity and stands to gain from his claim.

Unjustified: When the speaker has integrity.

2. Appeal to Authority

Remember Ethos?
Usually it is okay to appeal to authority to support a minor claim.
When it is abused, it is a fallacy.

So when is it abused?  Ask yourself:
1. Is the cited authority really an expert in the appropriate field?
2. Is this the kind of question that an expert can settle?
3. Has the authority been cited correctly?
4. Can the cited authority be trusted to tell the truth?
5. Why is an appeal to authority even being made?

If the answers to (1)-(4) are “yes”, then the appeal may be relevant and justified.  But, it is still weak most of the time.

If one answers “no” to any of the above (1)-(4), then the appeal is irrelevant and unjustified.

3. Appeal to Popularity

Lots of people believe X, so X is (probably) true.

Ask: Is the opinion actually widely held?

Ask: Is popular opinion likely to be right about this sort of thing?

Ask: Why appeal to popular opinion at all?

4. Appeal to Emotion

P makes me Angry/Sad/Afraid…. So, not P.

Emotions generally cloud judgment, and often have no bearing on the truth of a proposition.  Often, they are irrelevant.

 

Categories: Lectures, Seth Kurtenbach

Fallacies of Ambiguity – Lecture Notes

November 3, 2011

Ambiguity:  Occurs when a word or expression is misleading or potentially misleading because it’s hard to tell which of a number of possible meanings is intended in the context. (pg. 333)

 

Ambiguity vs. Vagueness

Vagueness:
There is a range of borderline cases where it isn’t clear if a concept applies.   There’s no clear place where the line should be drawn.

Ambiguity:  
An expression has more than one distinct meaning and it isn’t clear, given the context, which meaning applies.
Context can clarify ambiguity.  For example, consider the ambiguous claim, “Give me five!”

In the context immediately following a great and powerful achievement, this may mean “Give me a high five for the purpose of celebration!”  (That’s how cyborgs always say it.  Very unambiguous, cyborgs.)

 

However, in the context of a deli, in which the guy behind the counter says, “You want any slices of Swiss cheese?”, it may mean, “Good sir, please give me five pieces of Swiss cheese, and no fewer!”


Fallacy of Equivocation

Occurs when an argument uses the same expression in different senses in different places, and the argument is ruined as a result. (pg. 337)

(1) Six is an odd number of legs for a horse.
(2) Odd numbers cannot be divided by two.
(3) Therefore, Six cannot be divided by two.

“Odd”: “unusual” versus “not even”

–  Using the same meaning of the word in all premises… 

(1) Six is an unusual number of legs for a horse.
(2) Unusual numbers cannot be divided by two.
(3) Therefore, Six cannot be divided by two.

UNSOUND:  Premise (2) false

(1) Six is an uneven number of legs for a horse.
(2) Uneven numbers cannot be divided by two.
(3) Therefore, Six cannot be divided by two.

UNSOUND:  Premise (1) false

–  Using the intended meaning of the word in each premise…

(1) Six is an unusual number of legs for a horse.
(2) Uneven numbers cannot be divided by two.
(3) Therefore, Six cannot be divided by two.

INVALID:  Conclusion does not follow necessarily from premises.

Example from real life philosophy:

John Stuart Mill (1806 – 1873)

(1)  If something is desired, then it is desirable.
(2)  If something is desirable, then it is good.
(3)  If something is desired, then it is good.

“Desirable”:
“capable of being desired” versus “worthy of being desired”

– Using the same meaning of the expression in each premise…

(1) If something is desired, then it is capable of being desired.

(2) If something is capable of being desired, then it is good.

(3) Therefore, If something is desired, then it is good.

UNSOUND: Premise 2 is false.

(1) If something is desired, then it is worthy of being desired.

(2) If something is worthy of being desired, then it is good.

(3) Therefore, If something is desired, then it is good.

UNSOUND: Premise 1 is false.

 

– Using the intended meaning in each premise…

(1) If something is desired, then it is capable of being desired.

(2) If something is worthy of being desired, then it is good.

(3) Therefore, If something is desired, then it is good.

INVALID: Conclusion does not follow from the premises.

So, no matter what, an argument that equivocates is unsound:  it is either valid with a false premise, or invalid!

What to do if you suspect an argument commits the fallacy of EQUIVOCATION:

1.  Distinguish possible meanings.

2. Restate the argument using each possible meaning, in various combinations.

3.  Evaluate each restated argument and ask if the premises are false or if the argument is now invalid.

 

Definitions

1. Lexical Definitions:  Dictionary type definitions.  For example,

bank, n 1. a long pile or heap; mass.
2. an institution for receiving, lending, and safeguarding money and transacting other financial business.

2. Disambiguating Definitions:  Tell us which dictionary definition is intended in a particular context (semantic disambiguation).  Example,

      “. . . By ‘bank’ I mean a place where you would deposit money, not a river bank.”

Or, to remove syntactic ambiguity…

     “When I say “All of my friends are not students” I mean not all of my friends are students.  I’m not saying that none of my friends are students.”

3. Stipulative Definitions: assign meaning to a new term or a new meaning to a familiar term.  For example,

“. . . By ‘hangry’ I mean ‘happy and angry at the same time.’”

4. Precising Definitions: used to resolve vagueness.  For example,

Specifying the maximum annual income one can have while still being considered “poor”, for the purposes of government reporting.

5.  Systematic/Theoretical Definitions:  introduced to give systematic order to a subject matter.  For example,

Using primitive notions to define more complex secondary notions (as in science and mathematics).


Schedule before Exam 3

October 31, 2011

Nov. 1 (Tuesday): Chapter 13, Vagueness.
Nov. 3 (Thursday): Chapter 14: Ambiguity.
Nov. 8 (Tuesday): Chapter 15: Relevance and Vacuity
Nov. 10 (Thursday): Chapter 16: Vacuity

Nov. 15 (Tuesday): Review

Nov. 17 (Thursday): Exam 3

Categories: Lectures, Seth Kurtenbach