Archive for the ‘Seth Kurtenbach’ Category

Pascal Continued, and Review

December 7, 2011

Pascal Continued

Recall that Pascal is looking for good reasons to believe that God exists.  Normally in this class we’ve considered good reasons to be either a priori or a posteriori.  Both of these are kinds of evidential reasons.  Pascal realizes that there is basically no evidential reason to believe that God exists, but he thinks he can still come up with a good reason to believe.  He does this by looking at prudential reasons.

A reason that appeals to what’s in your best interest is a prudential reason.  A reason that appeals either to a priori or a posteriori evidence is an evidential reason.

Prudential reasons are offered in support of actions.  Pascal says, “Hey, let’s think of believing as a kind of action, and see if there are prudential reasons in favor of it!  That would be neat, because then there would be good reasons to believe in God, even without evidence!”

So, Pascal’s argument is distinct among arguments for God in that it grants the atheist that there is no evidence for God’s existence.

Review

Expected Utility (p. 303-304)

The general formula for finding Expected Utility (or Expected Monetary Value) is:

Pr(win) x (net gain)  –  Pr(lose) x (net loss)

The probability that you win is calculated based on the rules of probability, Ch. 11, and the nature of the bet.  So, if you are betting on what two cards you will draw consecutively and without returning the first, then you use Rule 2G, Conjunction in General.  If you are betting on which single card you will draw in one shot, then you simply have a probability of winning equal to 1/52.  If you are betting that you draw one card or another, then you use Rule 3, Disjunction with Exclusivity.

Calculate the probability that you lose by doing 1 – Pr(win).  This is Rule 1, Negation.

The net gain is the payoff of the bet minus the cost of the bet.  So, if the payoff is $26, and the bet is $1, then the net gain is $25.

The net loss is usually just the cost of the bet.  If you make a $1 bet, and lose, then the net loss is just $1.
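If you like to see the arithmetic in code, here is a minimal Python sketch of the card-bet example above (the function name and layout are just for illustration; none of this is needed for the exam):

```python
# Illustrative sketch: expected value of a simple card bet.
def expected_value(p_win, payoff, cost):
    """Pr(win) x (net gain) - Pr(lose) x (net loss)."""
    net_gain = payoff - cost    # payoff minus the cost of the bet
    net_loss = cost             # usually just the cost of the bet
    p_lose = 1 - p_win          # Rule 1, Negation
    return p_win * net_gain - p_lose * net_loss

# Bet $1 on drawing one specific card in one shot: Pr(win) = 1/52, payoff $26.
print(expected_value(p_win=1/52, payoff=26, cost=1))   # -0.50, a losing bet on average
```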

Bayes Theorem (p. 291 – 297)

In order to calculate the Pr(h|e), you need three other numbers.  Recall that Pr(h|e) is the probability that some belief/hypothesis is true, given some evidence/observation.  For example, if you get a positive test result for cancer, then you want to know the probability that you have cancer, h, given a positive test result, e:  Pr(h|e).

The three numbers you need to know:

1. Pr(h): The base rate.  This is the probability that any random asymptomatic person in your age range has cancer.  No test result information is included in this number.

2. Pr(e|h): The sensitivity or reliability of the test.  This is a fact about the test.  Assuming some person has cancer, how probable is it that the test will deliver a positive result?  This is the true positive rate.  You can also calculate this number if you are given the false negative rate.  The false negative rate is Pr(~e|h), the probability that the test comes back negative, given that the person has cancer.  1 – the false negative rate = the true positive rate.

3. Pr(e|~h): The false positive rate.  Also known as the doozy.  This is when the test tells you that you have cancer, but you don’t actually have cancer.

With these three numbers, you can calculate, or better yet estimate with pretty good accuracy, the chance that you have cancer, given a positive test result.  This does not apply only to cancer, though.  It applies to any sort of similar reasoning.  For example, this same Bayes Theorem can help one reason about positive results to a home pregnancy test, a DNA test, and even legal cases.

With the above three numbers, you can use the tree method to estimate the Pr(h|e).  Begin with Pr(h).  It will usually be a pretty small number, like .007.  This number can be expressed by saying “seven thousandths”.  This means 7 out of 1,000.  So, let’s imagine that we have a sample size of 1,000 people.

1,000

h: 7                                                      ~h: 993

Next, we look at Pr(e|h).  It tells us, of the people that have cancer (or whatever), how many will get a positive test result?  It is usually a pretty high number, like .9, but it doesn’t have to be.  Suppose it is .9.  This means the test is 90% reliable.  That’s pretty good, but it’s not the whole story, as we’ll see.  Of the 7 with the cancer, about 6 will get a positive test result, where 6 is roughly 90% of 7.  This means the other person will get a false negative.

h branch →  +: 6                -: 1

Then, we use the Pr(e|~h), the false positive rate.  Of the people who do not have the cancer, how many will get a positive test result anyway?  This is usually a pretty small number; suppose here it is about .027.  So, of the 993 who do not have cancer, about 27 of them will get a positive test result.  The rest, 993 – 27 = 966, of the people without cancer will get a negative result.

h branch →  +: 6                 -: 1                       ~h branch →  +: 27                   -: 966

Now, we wanted to know the chance we have cancer, given a positive test result.  So, of all the people who get positive test results, how many actually have cancer?  6 people out of 27 + 6 = 33, or 6/33, or 2/11.  So, with those three numbers that we were given, our chance of having cancer, given a positive test result, is 2/11.  So, Pr(h|e) = 2/11.  That’s pretty low, so there’s no reason to freak out yet.  The thing to do is get a second test done.  Two false positives in a row are extremely unlikely, and three in a row are even more unlikely.
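For anyone who wants to check the tree against the exact formula, here is a small Python sketch of the same numbers (using .027 as the false positive rate so that about 27 of the 993 test positive; the variable names are just illustrative):

```python
# Illustrative sketch: the cancer-test example, done two ways.
base_rate   = 0.007   # Pr(h): 7 in 1,000
sensitivity = 0.9     # Pr(e|h): true positive rate
false_pos   = 0.027   # Pr(e|~h): false positive rate

# Exact Bayes Theorem: Pr(h|e) = Pr(h)Pr(e|h) / [Pr(h)Pr(e|h) + Pr(~h)Pr(e|~h)]
posterior = (base_rate * sensitivity) / (
    base_rate * sensitivity + (1 - base_rate) * false_pos
)
print(posterior)                                              # about 0.19

# Tree-method estimate with a sample of 1,000 people
people          = 1000
with_cancer     = round(people * base_rate)                   # 7
true_positives  = round(with_cancer * sensitivity)            # about 6
false_positives = round((people - with_cancer) * false_pos)   # about 27
print(true_positives / (true_positives + false_positives))    # 6/33 = 2/11, about 0.18
```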

Will I need a calculator?   NO.

Can I use one anyway?  Sure.

Remember that all the key concepts from earlier exams (validity, soundness, IBE, induction, philosophical arguments, informal fallacies, etc.) are fair game on this exam.

The exam is structured like this: Part 1: Bayes Theorem, Expected Utility, Pascal’s Wager.  13 questions, multiple choice, with some worth 2 points where you show your work.

Part 2: Cumulative.  20 questions, mostly multiple choice, a few true/false, a couple fill-in-the-blanks, and a couple worth 2 points where you show your work.

 

Overall Grading Scale:
97.5-100 A+

92.5-97 A

90-92 A-

Etc.

Categories: Lectures, Seth Kurtenbach

Lecture on Relevance

November 9, 2011

Fallacies of Relevance:  Ch. 15

Premises that have no bearing on the truth of the conclusion are irrelevant.
Sometimes called Red Herrings.
A Red Herring is a fish that fox hunters would drag across a fox’s path in order to train their hounds.  The smell of the fish distracts the hounds from the smell of the fox.

Irrelevant reasons are offered to mislead or divert attention from the real issue.

A reason is relevant when it has some bearing on the truth value of the conclusion.

KINDS of IRRELEVANCE FALLACIES

1. Ad Hominem

Literally, an attack “against the person” making a claim, rather than against the claim itself.

Three subtypes of Ad Hominem:  1) Denier; 2) Silencer; 3) Dismisser

A. Denier

Denies the truth of what is claimed based on something about the person making the claim.

Some deniers are justified: “Louie is a jailhouse snitch who gets paid to testify and always perjures himself, so his testimony is probably false now.”

Others are unjustified: “The OWS folks look like a bunch of bums, so they are probably wrong.”

Ask: Does the information about the person give me reason to think the claim might be false?

If yes, then it is a justified denier – not ad hominem fallacy.
If no, then it is an unjustified denier – ad hominem fallacy.

B. Silencer

Silencers question a person’s right to speak without denying the truth of the claim.
Justified:
Random Guy: “No nukes! No nukes!”
Senator: “Would the gentleman please GTFO of the Senate Chamber?  We are in session.”

Unjustified:
Senator: “We should raise taxes!”
Other Senator: “Would the ‘esteemed colleague’ STFU?! He is the junior Senator from Wyoming!”

C. Dismisser

Dismissers reject the speaker as a reliable source of good information.  They do not deny the claim, but seek to undermine its support.

Justified: When the speaker lacks integrity and stands to gain from his claim.

Unjustified: When the speaker has integrity.

2. Appeal to Authority

Remember Ethos?
Usually it is okay to appeal to authority to support a minor claim.
When it is abused, it is a fallacy.

So when is it abused?  Ask yourself:
1. Is the cited authority really an expert in the appropriate field?
2. Is this the kind of question that an expert can settle?
3. Has the authority been cited correctly?
4. Can the cited authority be trusted to tell the truth?
5. Why is an appeal to authority even being made?

If the answers to (1)-(4) are “yes”, then the appeal may be relevant and justified.  But, it is still weak most of the time.

If one answers “no” to any of the above (1)-(4), then the appeal is irrelevant and unjustified.

3. Appeal to Popularity

Lots of people believe X, so X is (probably) true.

Ask: Is the opinion actually widely held?

Ask: Is popular opinion likely to be right about this sort of thing?

Ask: Why appeal to popular opinion at all?

4. Appeal to Emotion

P makes me Angry/Sad/Afraid…. So, not P.

Emotions generally cloud judgment, and often have no bearing on the truth of a proposition.  Often, they are irrelevant.

 

Categories: Lectures, Seth Kurtenbach

Fallacies of Ambiguity – Lecture Notes

November 3, 2011

Ambiguity:  Occurs when a word or expression is misleading or potentially misleading because it’s hard to tell which of a number of possible meanings is intended in the context. (pg. 333)

 

Ambiguity vs. Vagueness

Vagueness:
There is a range of borderline cases where it isn’t clear whether a concept applies.  There’s no clear place where the line should be drawn.

Ambiguity:  
An expression has more than one distinct meaning and it isn’t clear, given the context, which meaning applies.
Context can clarify ambiguity.  For example, consider the ambiguous claim, “Give me five!”

In the context immediately following a great and powerful achievement, this may mean “Give me a high five for the purpose of celebration!”  (That’s how cyborgs always say it.  Very unambiguous, cyborgs.)

 

However, in the context of a deli, in which the guy behind the counter says, “You want any slices of Swiss cheese?”, it may mean, “Good sir, please give me five pieces of Swiss cheese, and no fewer!”

Fallacy of Equivocation

Occurs when an argument uses the same expression in different senses in different places, and the argument is ruined as a result. (pg. 337)

(1) Six is an odd number of legs for a horse.
(2) Odd numbers cannot be divided by two.
(3) Therefore, Six cannot be divided by two.

“Odd”: “unusual” versus “not even”

–  Using the same meaning of the word in all premises… 

(1) Six is an unusual number of legs for a horse.
(2) Unusual numbers cannot be divided by two.
(3) Therefore, Six cannot be divided by two.

UNSOUND:  Premise (2) false

(1) Six is an uneven number of legs for a horse.
(2) Uneven numbers cannot be divided by two.
(3) Therefore, Six cannot be divided by two.

UNSOUND:  Premise (1) false

–  Using the intended meaning of the word in each premise…

(1) Six is an unusual number of legs for a horse.
(2) Uneven numbers cannot be divided by two.
(3) Therefore, Six cannot be divided by two.

INVALID:  Conclusion does not follow necessarily from premises.

Example from real life philosophy:

John Stuart Mill (1806 – 1873)

(1)  If something is desired, then it is desirable.
(2)  If something is desirable, then it is good.
(3)  Therefore, if something is desired, then it is good.

“Desirable”:
“capable of being desired” versus “worthy of being desired”

– Using the same meaning of the expression in each premise…

(1) If something is desired, then it is capable of being desired.

(2) If something is capable of being desired, then it is good.

(3) Therefore, If something is desired, then it is good.

UNSOUND: Premise 2 is false.

(1) If something is desired, then it is worthy of being desired.

(2) If something is worthy of being desired, then it is good.

(3) Therefore, If something is desired, then it is good.

UNSOUND: Premise 1 is false.

 

– Using the intended meaning in each premise…

(1) If something is desired, then it is capable of being desired.

(2) If something is worthy of being desired, then it is good.

(3) Therefore, If something is desired, then it is good.

INVALID: Conclusion does not follow from the premises.

So, no matter what, an argument that equivocates is unsound:  it is either valid with a false premise, or invalid!

What to do if you suspect an argument commits the fallacy of EQUIVOCATION:

1.  Distinguish possible meanings.

2. Restate the argument using each possible meaning, in various combinations.

3.  Evaluate each restated argument and ask if the premises are false or if the argument is now invalid.

 

Definitions

1. Lexical Definitions:  Dictionary type definitions.  For example,

bank, n 1. a long pile or heap; mass.
2. an institution for receiving, lending, and safeguarding money and transacting other financial business.

2. Disambiguating Definitions:  Tell us which dictionary definition is intended in a particular context (semantic disambiguation).  Example,

      “. . . By ‘bank’ I mean a place where you would deposit money, not a river bank.”

Or, to remove syntactic ambiguity…

     “When I say “All of my friends are not students” I mean not all of my friends are students.  I’m not saying that none of my friends are students.”

3. Stipulative Definitions: assign meaning to a new term or a new meaning to a familiar term.  For example,

“. . . By ‘hangry’ I mean ‘happy and angry at the same time.’”

4. Precising Definitions: used to resolve vagueness.  For example,

Specifying the maximum annual income one can have while still being considered “poor”, for the purposes of government reporting.

5.  Systematic/Theoretical Definitions:  introduced to give systematic order to a subject matter.  For example,

Using primitive notions to define more complex secondary notions (as in science and mathematics).

Philosophy Club Tryouts

October 31, 2011

Fellow Badasses,

Let it be known that the Philosophy Club is having an organizational meeting on Wednesday 11/9 from 7:30-9:30 in room 221 Strickland.   Students who are interested in learning more about the club should contact one of our majors, Kyle Hendricks (klh6x2@mail.missouri.edu).

 

Crazy things happen in philosophy club.  Great things.  Terrible things.  But great things.

Categories: Seth Kurtenbach

Schedule before Exam 3

October 31, 2011

Nov. 1 (Tuesday): Chapter 13: Vagueness
Nov. 3 (Thursday): Chapter 14: Ambiguity
Nov. 8 (Tuesday): Chapter 15: Relevance and Vacuity
Nov. 10 (Thursday): Chapter 16: Vacuity

Nov. 15 (Tuesday): Review

Nov. 17 (Thursday): Exam 3

Categories: Lectures, Seth Kurtenbach

Lecture 10/20: More on Chances

October 22, 2011

Probability Theory (Chapter Ten)

Continued from last time:
E. Availability Heuristic
Number of 7-letter words ending in -ing vs. the number of 7-letter words with ‘n’ as the sixth letter (_ _ _ _ _ n _).  Because we can think of more words ending in -ing than we can (non -ing) words with ‘n’ in that position, we think the former will be more numerous than the latter.  This is wrong, however, because every 7-letter -ing word has ‘n’ as its sixth letter, so there must be at least as many of the latter.

Another example: which team has a better batting average, the NY Yankees or the Boston Red Sox?  Many will think of the superstars and forget that the whole team contributes to the overall batting average of the team.  The less famous players are not “available” to you, in the sense that you cannot think of them off the top of your head.

Rules of Probability

We write the probability of h, for ‘hypothesis’, as Pr(h).  The Pr(h) = the number of outcomes favorable to h over the number of total outcomes; favorable/total.

1.  Negation:  Pr(~h) = 1 – Pr(h).  The probability that a hypothesis is false is equal to 1 minus the probability that h is true.  If the Pr(h) = .4, then the Pr(~h) = 1 – .4 = .6.

2. Conjunction with Independence:  Pr(h1 & h2) = Pr(h1) x Pr(h2).  Given two independent events, the probability of both occurring is figured by conjunction with independence.  Independence refers to whether the outcome of one event gives you any information about the outcome of the other event.  For example, if you draw a card from a normal deck, put it back and shuffle, then the outcome of the next draw is independent of the first; the probability of drawing a king is 1/13 on each draw.  However, if you draw a card, keep it out, and draw a second card, then the information from the first event tells you something about the outcome of the second event.  The probability of drawing two kings, by drawing a card, putting it back and shuffling, and drawing another, is:  Pr(h1 & h2) = Pr(h1) x Pr(h2) = 1/13 x 1/13 = 1/169.

2G. Conjunction in General:  To extend the rule to cover events that are not independent, we need the idea of Conditional Probability.  This is the probability that something will happen, given that some other thing happens, i.e., dependent on something else happening.  If we want the probability of h2, given that h1 happened, we write Pr(h2|h1).  For example, we may want to know the probability that we draw a king (h2), given that we just drew the king of diamonds (h1).  Conditional probability is figured by considering the outcomes where both h1 and h2 are true, divided by the total h1 outcomes.  The rule for Conjunction in General is:  Pr(h1 & h2) = Pr(h1) x Pr(h2|h1).  The probability that you draw two kings in a row without replacing the first is 4/52 x 3/51 = 1/221.  The probability that you draw a king, given that you’ve just drawn a king, is the conditional probability.  It is 3/51, because there are 3 favorable outcomes when you’ve already drawn a king, over 51 total outcomes where you’ve already drawn a king.  Conjunction with independence is a special case of conjunction in general.
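A quick Python sketch of the two-kings calculations, with and without replacement (purely illustrative, not exam material):

```python
# Illustrative sketch: Rules 2 and 2G on drawing two kings.

# Rule 2, Conjunction with Independence: replace and reshuffle between draws.
p_with_replacement = (4/52) * (4/52)               # = 1/169

# Rule 2G, Conjunction in General: keep the first card out.
p_first_king = 4/52
p_second_king_given_first = 3/51                   # Pr(h2|h1): 3 kings left in 51 cards
p_without_replacement = p_first_king * p_second_king_given_first   # = 1/221

print(p_with_replacement, p_without_replacement)
```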

3. Disjunction with Exclusivity:  Pr(h1 or h2) = Pr(h1) + Pr(h2).  The probability that one of two mutually exclusive events occurs is the sum of the probabilities of each.  The probability that you roll a 5 or an 8 with two dice (Jumanji reference!) is Pr(roll a 5) + Pr(roll an 8) = 4/36 + 5/36 = 9/36 = 1/4.  Pretty decent chances of getting out of the jungle.

3G. Disjunction in General: Of course, not all either/or statements are exclusive.  Many are inclusive, meaning that it is possible for both to occur.  Thus, we need a general formula for figuring out disjunctive probabilities.  It is Pr(h1 or h2) = Pr(h1) + Pr(h2) – Pr(h1 & h2).  Suppose half the class are male, and half female, and that half are over 19, and half are under or equal to 19.  If we want to know the chances that someone is either female or over 19, we figure the Pr(h1) = 2/4, plus the Pr(h2)= 2/4, minus the Pr(h1 & h2) = 1/4.  2/4 + 2/4 – 1/4 = 3/4.  So, the probability that someone is either female or over 19 is 3/4.  Disjunction with exclusivity is a special case of disjunction in general.
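And the two disjunction examples above, again as a small illustrative Python sketch:

```python
# Illustrative sketch: Rules 3 and 3G.

# Rule 3, Disjunction with Exclusivity: rolling a 5 or an 8 with two dice.
p_5_or_8 = 4/36 + 5/36                                  # = 1/4; the sums can't both happen

# Rule 3G, Disjunction in General: female or over 19, where both can hold.
p_female, p_over_19, p_both = 2/4, 2/4, 1/4
p_female_or_over_19 = p_female + p_over_19 - p_both     # = 3/4

print(p_5_or_8, p_female_or_over_19)
```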

4. At Least:  The probability that an event will occur at least once in a series of n independent trials is 1 – Pr(~h)^n.  What are the chances of tossing heads at least once in 8 independent flips of a fair coin?  Restate the question so that rules 1 and 2 can be used.  First, what are the chances that we don’t flip at least one heads?  That is 1 – Pr(flip at least one heads).  It is the same as the probability of flipping 8 tails in a row.  That’s Pr(tails) x Pr(tails) x … x Pr(tails) = Pr(tails)^8 = 1/256.  So, 1 – Pr(flip at least one heads) = 1/256.  Those are the chances we don’t flip at least one heads.  So, Pr(flip at least one heads) = 255/256.  Pretty good chances!  So, to calculate ‘at least x’, you start by asking the chances that you DON’T get at least x: this is 1 – Pr(at least x).  That is the same as asking the chances that the alternatives to x happen n times in a row, which is just an application of rule 2 or 2G above, depending on whether the trials are independent or not.  Then, remember to reconvert to the original question by subtracting whatever you got from 1.
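The same ‘at least once’ rule as a tiny Python function (illustrative only; the function name is made up):

```python
# Illustrative sketch: Rule 4 for independent trials.
def p_at_least_once(p_single, n):
    """1 - Pr(~h)**n: the chance the event happens at least once in n trials."""
    return 1 - (1 - p_single) ** n

print(p_at_least_once(0.5, 8))   # heads at least once in 8 fair flips = 255/256
```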

Lecture 10/18: Taking Chances

October 18, 2011

Notes on taking chances

Guy using fallacious reasoning.

Probability Theory (Chapter Ten)
I. Gambler’s fallacy and the law of large numbers
A. Examples
B. Law of Large numbers:  The difference between the observed value of a sample and its true value will diminish as the number of observations in the sample increases.

Applications

O: after a billion flips of a coin, we counted 48% heads, 52% tails.
H1: Fair coin
H2: coin is weighted towards tails

Which hypothesis is predicted by the law of large numbers?
answer:  H2, due to the Law of Large Numbers

O: after ten flips of a coin, we counted 6 heads and 4 tails
H1: Fair coin
H2: Coin is weighted towards heads?

Which hypothesis is predicted by the law of large numbers?

answer: predictively equivalent (H1 = H2)
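If you want to see the Law of Large Numbers in action, here is a tiny Python simulation (illustrative only): the observed frequency of heads wanders around early on and settles toward the true value, 0.5, as the number of flips grows.

```python
# Illustrative sketch: observed frequency converging to the true value.
import random

flips = [random.choice("HT") for _ in range(100_000)]   # simulate a fair coin
for n in (10, 100, 1_000, 100_000):
    freq = flips[:n].count("H") / n
    print(n, round(freq, 4))   # drifts toward 0.5 as n increases
```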

Random sequences:
A. 1, 1, 1, 1, 1, 1, 2, 1, 1, 2
B. 1, 2, 1, 2, 2, 1, 2, 1, 1, 2
C. 1, 2, 2, 2, 2, 1, 2, 1, 1, 2
D. 1, 2, 2, 2, 1, 2, 1, 2, 2, 1

Which one was generated by a “randomizer”?  Answer: A
http://www.randomizer.org—form.htm

Question: Suppose you flip a fair coin and get three heads in a row.  What is the probability that a head will come up a fourth time?
answer: 1/2

...because it's FAIR.

C. Misapplication of law of large numbers
Example 1 and 2:
law of large numbers does not support the idea that a gambler will experience runs of good luck after a run of bad luck.  For coins and casino machines the probability of any outcome is independent of the number of trials you have experienced.  All bets are off if the trials are dependent rather than independent (but then no one would play at a casino where the outcomes are rigged).

D. Examples outside of gambling

– Hot streaks in basketball:  give the ball to the guy who has made a bunch of shots in a row.  But hitting three or four shots in a row is statistically insignificant.

– “market beaters” in fund managing:  you swap out of your underperforming funds and into the hot fund.  But, given that the market is pretty efficient, past performance is not a good guide to future performance (there will be streaks for any fund over a long enough period of time).

II. Common judgments and their fallacious foundations

The Path of Folly.

A. Confirmation bias:  you are convinced beforehand that a stock picker or basketball player can “get hot” (due to media attention or your own feelings about the person).  So, you ignore the fact that streaks are likely in the short term (given law of large numbers).

B. Over optimism
How often does a college basketball team that is trailing at halftime come back to win?
answer: (less than 20%) (people typically guess 30%-60%)
data: 3300 games in Nov-Jan.

Why are we often wrong?
We are optimistic and media gives most attention to comeback victories.

C. Irrationality due to desire to win
Suppose there is a 50% chance of scoring on a two-point shot and a 33% chance on a three-point shot.  A team is down by two points and it has time for one last shot.  What play should the coach call?
answer: if the team makes the two-point shot, it still has to play overtime, where its chances of winning are 50%.  It has to win two 50% gambles in a row: 0.5 x 0.5 = 25% overall.  Since 33% is better than 25%, the coach should go for three points.
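The arithmetic behind that answer, as a quick illustrative Python check:

```python
# Illustrative sketch: comparing the coach's two options.
p_win_going_for_two   = 0.50 * 0.50   # make the two, then win a 50/50 overtime = 0.25
p_win_going_for_three = 0.33          # make the three and win outright
print(p_win_going_for_three > p_win_going_for_two)   # True: go for three
```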

Apply this to stocks: many investors shy away from stocks because of the potential for short-term sting (like the sting of losing on a 3-point shot at the buzzer).  But, in the long run, stocks have historically been the best investment.

D. Representative heuristic
question #1 on Tversky teasers.
People tend to say that Hand #2 is much more unlikely than Hand #1. But, each is equally likely in a fair game.
Representative heuristic: the unimpressive hand looks more “representative” of an ordinary deal, so it seems more likely, even though any specific hand is just as improbable as any other.

question #2 on “teasers”
89% of students said that it is more likely that Linda was both a bank teller and a feminist than that she was simply a bank teller.
That can’t be right: the probability that two things are both true can never be higher than the probability that just one of them is true (if both are true, then each one is).
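One last illustrative Python sketch (the numbers are made up) showing why the Linda judgment can never be right: by Rule 2G, the conjunction can’t be more probable than either conjunct.

```python
# Illustrative sketch: Pr(teller & feminist) <= Pr(teller), whatever the numbers.
p_teller = 0.05                   # made-up number
p_feminist_given_teller = 0.8     # made-up number; at most 1
p_both = p_teller * p_feminist_given_teller     # Rule 2G
print(p_both <= p_teller)         # True for any choice of numbers
```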