5 Inductive Reasoning

Learning Objectives

After reading this chapter, you should be able to:

1. Define key terms and concepts in inductive logic, including strength and cogency.

2. Differentiate between strong inductive arguments and weak inductive arguments.

3. Identify general methods for strengthening inductive arguments.

4. Identify statistical syllogisms and describe how they can be strong or weak.

5. Evaluate the strength of inductive generalizations.


6. Differentiate between causal and correlational relationships and describe various

types of causes.

7. Use Mill’s methods to evaluate causal arguments.

8. Recognize arguments from authority and evaluate their quality.

9. Identify key features of arguments from analogy and use them to evaluate the strength

of such arguments.

When talking about logic, people often think about formal deductive reasoning. However, most of the

arguments we encounter in life are not deductive at all. They do not intend to establish the truth of the

conclusion beyond any possible doubt; they simply try to provide good evidence for the truth of their

conclusions. Arguments that intend to reason in this way are called inductive arguments. Inductive

arguments are not any worse than deductive ones. Often the best evidence available is not final or

conclusive but can still be very good.

For example, to infer that the sun will rise tomorrow because it has every day in the past is inductive

reasoning. The inference, however, is very strongly supported. Not all inductive arguments are as strong

as that one. This chapter will explore different types of inductive arguments and some principles we can

use to determine whether they are strong or weak. The chapter will also discuss some specific methods

that we can use to try to make good inferences about causation. The goal of this chapter is to enable you

to identify inductive arguments, evaluate their strength, and create strong inductive arguments about

important issues.

[Image caption: Weather forecasters use inductive reasoning when giving their predictions. They have tools at their disposal that provide support for their arguments, but some arguments are weaker than others.]

5.1 Basic Concepts in Inductive Reasoning

Inductive is a technical term in logic: It has a precise definition, and that definition may be different from

the definition used in other fields or in everyday conversation. An inductive argument is one in which

the premises provide support for the conclusion but fall short of establishing complete certainty. If you

stop to think about arguments you have encountered recently, you will probably find that most of them

are inductive. We are seldom in a position to prove something absolutely, even when we have very good

reasons for believing it.

Take, for example, the following argument:

The odds of a given lottery ticket being the winning ticket are extremely low.

You just bought a lottery ticket.

Therefore, your lottery ticket is probably not the winning ticket.

If the odds of each ticket winning are 1 in millions, then this argument gives very good evidence for the

truth of its conclusion. However, the argument is not deductively valid. Even if its premises are true, its

conclusion is still not absolutely certain. This means that there is still a remote possibility that you

bought the winning ticket.

Chapter 3 discussed how an argument is valid if our premises guarantee the truth of the conclusion. In

the case of the lottery, even our best evidence cannot be used to make a valid argument for the

conclusion. The given reasons do not guarantee that you will not win; they just make it very likely that

you will not win.

This argument, however, helps us establish the likelihood of its conclusion. If it were not for this type of

reasoning, we might spend all our money on lottery tickets. We would also not be able to know whether

we should do such things as drive our car because we would not be able to reason about the likelihood

of getting into a crash on the way to the store. Therefore, this and other types of inductive reasoning are

essential in daily life. Consequently, it is important that we learn how to evaluate their strength.

Inductive Strength

Some inductive arguments can be better or worse than

others, depending on how well their premises increase the

likelihood of the truth of their conclusion. Some arguments

make their conclusions only a little more likely; other

arguments make their conclusions a lot more likely.

Arguments that greatly increase the likelihood of their

conclusions are called strong arguments; those that do not

substantially increase the likelihood are called weak

arguments.

Here is an example of an argument that could be considered

very strong:

A random fan from the crowd is going to race (in a 100-meter dash) against Usain Bolt.

Usain Bolt is the fastest sprinter of all time.

Therefore, the fan is going to lose.

[Image caption: Context plays an important role in inductive arguments. What makes an argument strong in one context might not be strong enough in another. Would you be more likely to play the lottery if your chances of winning were supported at 99%?]

It is certainly possible that the fan could win—say, for example, if Usain Bolt breaks an ankle—but it

seems highly unlikely. This next argument, however, could be considered weak:

I just scratched off two lottery tickets and won $2 each time.

Therefore, I will win $2 on the next ticket, too.

The previous lottery tickets would have no bearing on the likelihood of winning on the next one. Now

this next argument’s strength might be somewhere in between:

The Bears have beaten the Lions the last four times they have played.

The Bears have a much better record than the Lions this season.

Therefore, the Bears will beat the Lions again tomorrow.

This sounds like good evidence, but upsets happen all the time in sports, so its strength is only moderate.

Considering the Context

It is important to realize that inductive strength and weakness are relative

terms. As such, they are like the terms tall and short. A person who is short

in one context may be tall in another. At 6’0”, professional basketball player

Allen Iverson was considered short in the National Basketball Association.

But outside of basketball, someone of his height might be considered tall.

Similarly, an argument that is strong in one context may be considered

weak in another. You would probably be reasonably happy if you could

reliably predict sports (or lottery) results at an accuracy rate of 70%, but

researchers in the social sciences typically aim for certainty upward of 90%.

In high-energy physics, the goal is a result that is supported at the level of 5 sigma—a probability of more than 99.99997%!

The same is true when it comes to legal arguments. A case tried in a civil

court needs to be shown to be true with a preponderance of evidence,

which is much less stringent than in a criminal case, in which the defendant

must be proved guilty beyond reasonable doubt. Therefore, whether the

argument is strong or weak is a matter of context.

Moreover, some subjects have the sort of evidence that allows for extremely

strong arguments, whereas others do not. A psychologist trying to predict

human behavior is unlikely to have the same strength of argument as an

astronomer trying to predict the path of a comet. These are important

things to keep in mind when it comes to evaluating inductive strength.

Strengthening Inductive Arguments

Regardless of the subject matter of an argument, we

generally want to create the strongest arguments we can. In

general, there are two ways of strengthening inductive

arguments. We can either claim more in the premises or

claim less in the conclusion.

[Image caption: The strength of an inductive argument can change when new premises are added. When evaluating or presenting an inductive argument, gather as many details as possible to have a more complete understanding of the strength of the argument.]

Claiming more in the premises is straightforward in theory,

though it can be difficult in practice. The idea is simply to

increase the amount of evidence for the conclusion. Suppose

you are trying to convince a friend that she will enjoy a

particular movie. You have shown her that she has liked

other movies by the same director and that the movie is of

the general kind that she likes. How could you strengthen

your argument? You might show her that her favorite actors

are cast in the lead roles, or you might appeal to the reviews

of critics with which she often agrees. By adding these

additional pieces of evidence, you have increased the

strength of your argument that your friend will enjoy the

movie.

However, if your friend looks at all the evidence and still is

not sure, you might take the approach of weakening your conclusion. You might say something like,

“Please go with me; you may not actually like the movie, but at least you can be pretty sure you won’t

hate it.” The very same evidence you presented earlier—about the director, the genre, the actors, and so

on—actually makes a stronger argument for your new, less ambitious claim: that your friend won’t hate

the movie.

It might help to have another example of how each of the two approaches can help strengthen an

inductive argument. Take the following argument:

Every crow I have ever seen has been black.

Therefore, all crows are black.

This seems to provide decent evidence, provided that you have seen a lot of crows. Here is one way to

make the argument stronger:

Studies by ornithologists have examined thousands of crows on every continent where they live, and they have all been black.

Therefore, all crows are black.

This argument is much stronger because there is much more evidence for the truth of the conclusion

within the premise. Another way to strengthen the argument—if you do not have access to lots of

ornithological studies—would simply be to weaken the stated conclusion:

Every crow I have ever seen has been black.

Therefore, most crows are probably black.

This argument makes a weaker claim in the conclusion, but the argument is actually much stronger than

the original because the premises make this (weaker) conclusion much more likely to be true than the

original (stronger) conclusion.

By the same token, an inductive argument can also be made weaker either by subtracting evidence from

the premises or by making a stronger claim in the conclusion. (For another way to weaken or strengthen

inductive arguments, see A Closer Look: Using Premises to Affect Inductive Strength.)

A Closer Look: Using Premises to Affect Inductive Strength

Suppose we have a valid deductive argument. That means that, if its premises are all true, then its

conclusion must be true as well. Suppose we add a new premise. Is there any way that the

argument could become invalid? The answer is no, because if the premises of the new argument

are all true, then so are all the premises of the old argument. Therefore, the conclusion still must

be true.

This is a principle with a fancy name; it is called monotonicity: Adding a new premise can never

make a deductive argument go from valid to invalid. However, this principle does not hold for

inductive strength: It is possible to weaken an inductive argument by adding new premises.

The following argument, for example, might be strong:

99% of birds can fly.

Jonah is a bird.

Therefore, Jonah can fly.

This argument may be strong as it is, but what happens if we add a new premise, “Jonah is an

ostrich”? The addition of this new premise just made the argument’s strength plummet. We now

have a fairly weak argument! To use our new big word, this means that inductive reasoning is

nonmonotonic. The addition of new premises can either enhance or diminish an argument’s

inductive strength.

An interesting “game” is to see if you can keep adding premises that flip the inductive argument’s degree of strength back and forth. For example, we could make the

argument strong again by adding “Jonah is living in the museum of amazing flying ostriches.”

Then we could weaken it again with “Jonah is now retired.” It could be strengthened again with

“Jonah is still sometimes seen flying to the roof of the museum,” but it could be weakened again

with “He was seen flying by the neighbor child who has been known to lie.” The game

demonstrates the sensitivity of inductive arguments to new information.
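To make the flip-flopping concrete, here is a minimal Python sketch (my own illustration, not from the text; the flying rates are invented). Each added premise narrows the reference class for Jonah, and the support for the very same conclusion swings up and down:

```python
# A toy model of nonmonotonicity. All rates below are hypothetical.
flying_rate = {
    ("bird",): 0.99,
    ("bird", "ostrich"): 0.001,
    ("bird", "ostrich", "museum flying ostrich"): 0.95,
}

def strength(premises):
    """Support for 'Jonah can fly', given the narrowest class the premises pick out."""
    return flying_rate[tuple(premises)]

premises = ["bird"]
print(strength(premises))   # 0.99  -> strong argument

premises.append("ostrich")
print(strength(premises))   # 0.001 -> same conclusion, now weak

premises.append("museum flying ostrich")
print(strength(premises))   # 0.95  -> strong again
```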

Thus, when using inductive reasoning, we should always be open to learning more details that

could further serve to strengthen or weaken the case for the truth of the conclusion. Assessing inductive strength is a never-ending process of gathering and evaluating new and relevant information. For

scientists and logicians, that is partly what makes induction so exciting!

Inductive Cogency

Notice that, like deductive validity, inductive strength has to do with the strength of the connection

between the premises and the conclusion, not with the truth of the premises. Therefore, an inductive

argument can be strong even with false premises. Here is an example of an inductively strong argument:

Every lizard ever discovered is purple.

Therefore, most lizards are probably purple.

Of course, as with deductive reasoning, for an argument to give good evidence for the truth of the

conclusion, we also want the premises to actually be true. An inductive argument is called cogent if it is

strong and all of its premises are true. Whereas inductive strength is the counterpart of deductive

validity, cogency is the inductive counterpart of deductive soundness.

5.2 Statistical Arguments: Statistical Syllogisms

The remainder of this chapter will go over some examples of the different types of inductive arguments:

statistical arguments, causal arguments, arguments from authority, and arguments from analogy. You

will likely find that you have already encountered many of these various types in your daily life.

Statistical arguments, for example, should be quite familiar. From politics, to sports, to science and

health, many of the arguments we encounter are based on statistics, drawing conclusions from

percentages and other data.

In early 2013 American actress Angelina Jolie elected to have a preventive double mastectomy. This

surgery is painful and costly, and the removal of both breasts is deeply disturbing for many women. We

might have expected Jolie to avoid the surgery until it was absolutely necessary. Instead, she had the

surgery before there was any evidence of the cancer that normally prompts a mastectomy. Why did she

do this?

Jolie explained some of her reasoning in an opinion piece in the New York Times.

I carry a “faulty” gene, BRCA1, which sharply increases my risk of developing breast cancer and

ovarian cancer.

My doctors estimated that I had an 87 percent risk of breast cancer and a 50 percent risk of

ovarian cancer, although the risk is different in the case of each woman. (Jolie, 2013, para. 2–3)

As you can see, Jolie’s decision was based on probabilities and statistics. If these types of reasoning can

have such profound effects in our lives, it is essential that we have a good grasp on how they work and

how they might fail. In this section, we will be looking at the basic structure of some simple statistical

arguments and some of the things to pay attention to as we use these arguments in our lives.

One of the main types of statistical arguments we will discuss is the statistical syllogism. Let us start

with a basic example. If you are not a cat fancier, you may not know that almost all calico cats are

female—to be more precise, about 99.97% of calico cats are female (Becker, 2013). Suppose you are

introduced to a calico cat named Puzzle. If you had to guess, would you say that Puzzle is female or male?

How confident are you in your guess?

Since you do not have any other information except that 99.97% of calico cats are female and Puzzle is a

calico cat, it should seem far more likely to you that Puzzle is female. This is a statistical syllogism: You

are using a general statistic about calico cats to make an argument for a specific case. In its simplest

form, the argument would look like this:

99.97% of calico cats are female.

Puzzle is a calico cat.

Therefore, Puzzle is female.

Clearly, this argument is not deductively valid, but inductively it seems quite strong. Given that male

calico cats are extremely rare, you can be reasonably confident that Puzzle is female. In this case we can

actually put a number to how confident you can be: 99.97% confident.

Of course, you might be mistaken. After all, male calico cats do exist; this is what makes the argument

inductive rather than deductive. However, statistical syllogisms like this one can establish a high degree

of certainty about the truth of the conclusion.

Form

If we consider the calico cat example, we can see that the general form for a statistical syllogism looks

like this:

X% of S are P.

i is an S.

Therefore, i is (probably) a P.

There are also statistical syllogisms that conclude that the individual i does not have the property P.

Take the following example:

Only 1% of college males are on the football team.

Mike is a college male.

Therefore, Mike is probably not on the football team.

This type of statistical syllogism has the following form:

X% of S are P.

i is an S.

Therefore, i is (probably) not a P.

In this case, for the argument to be strong, we want X to be a low percentage.
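Both forms can be captured in a short sketch. The function below is a hypothetical illustration rather than any standard routine: it applies a statistical syllogism and reports the strength of the result, treating percentages near 50 as weak in either direction.

```python
def statistical_syllogism(percent, individual, category, trait):
    """Apply 'percent% of category are trait; individual is a category'.

    High percentages support 'i is P'; low percentages support 'i is not P';
    values near 50 yield a weak argument either way.
    """
    if percent >= 50:
        conclusion, strength = f"{individual} is probably {trait}", percent
    else:
        conclusion, strength = f"{individual} is probably not {trait}", 100 - percent
    print(f"{percent}% of {category} are {trait}; {individual} is one of them.")
    print(f"Therefore, {conclusion} (strength: about {strength}%).")

statistical_syllogism(99.97, "Puzzle", "calico cats", "female")
statistical_syllogism(1, "Mike", "college males", "on the football team")
```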

Note that statistical syllogisms are similar to two kinds of categorical syllogisms presented in Chapter 3

(see Table 5.1). We see from the table that statistical syllogisms become valid categorical syllogisms

when the percentage, X, becomes 100% or 0%.

Table 5.1: Statistical syllogism versus categorical syllogism

Example

Statistical syllogism: 99.97% of calico cats are female. Puzzle is calico. Therefore, Puzzle is female.

Similar valid categorical syllogism: All calico cats are female. Puzzle is calico. Therefore, Puzzle is female.

Form

Statistical syllogism: X% of S are P. i is an S. Therefore, i is (probably) P.

Similar valid categorical syllogism: All M are P. S is M. Therefore, S is P.

Example

Statistical syllogism: 1% of college males are on the football team. Mike is a college male. Therefore, Mike is probably not on the football team.

Similar valid categorical syllogism: No college males are on the football team. Mike is a college male. Therefore, Mike is not on the football team.

Form

Statistical syllogism: X% of S are P. i is an S. Therefore, i is (probably) not P.

Similar valid categorical syllogism: No M are P. S is M. Therefore, S is not P.

When identifying a statistical syllogism, it is important to keep the specific form in mind, since there are

other kinds of statistical arguments that are not statistical syllogisms. Consider the following example:

85% of community college students are younger than 40.

John is teaching a community college course.

Therefore, about 85% of the students in John’s class are under 40.

This argument is not a statistical syllogism because it does not fit the form. If we make i “John” then the

conclusion states that John, the teacher, is probably under 40, but that is not the conclusion of the

original argument. If we make i “the students in John’s class,” then we get the conclusion that it is 85%

likely that the students in John’s class are under 40. Does this mean that all of them or that some of them

are? Either way, it does not seem to be the same as the original conclusion, since that conclusion has to

do with the percentage of students under 40 in his class. Though this argument has the same “feel” as a

statistical syllogism, it is not one because it does not have the same form as a statistical syllogism.

Weak Statistical Syllogisms

There are at least two ways in which a statistical syllogism might not be strong. One way is if the

percentage is not high enough (or low enough in the second type). If an argument simply includes the

premise that most of S are P, that means only that more than half of S are P. A probability of only 51%

does not make for a strong inductive argument.

Another way that statistical syllogisms can be weak is if the individual in question is more (or less) likely

to have the relevant characteristic P than the average S. For example, take the reasoning:

99% of birds do not talk.

My pet parrot is a bird.

Therefore, my pet parrot cannot talk.

The premises of this argument may well be true, and the percentage is high, but the argument may be

weak. Do you see why? The reason is that a pet parrot has a much higher likelihood of being able to talk

than the average bird. We have to be very careful when coming to final conclusions about inductive

reasoning until we consider all of the relevant information.

5.3 Statistical Arguments: Inductive Generalizations

In the example about Puzzle, the calico cat, the first premise said that 99.97% of calico cats are female.

How did someone come up with that figure? Clearly, she or he did not go out and look at every calico cat.

Instead, he or she likely looked at a bunch of calicos, figured out what percentage of those cats were

female, and then reasoned that the percentage of females would have been the same if they had looked

at all calico cats. In this sort of reasoning, the group of calico cats that were actually examined is called

the sample, and all the calico cats taken as a group are called the population. An inductive

generalization is an argument in which we reason from data about a sample population to a claim

about a large population that includes the sample. Its general form looks like this:

X% of observed Fs are Gs.

Therefore, X% of all Fs are Gs.

In the case of the calico cats, the argument looks like this:

99.97% of calico cats in the sample were female.

Therefore, 99.97% of all calico cats are female.

Whether the argument is strong or weak depends crucially on whether the sample population is

representative of the whole population. We say that a sample is representative of a population when the

sample and the population both have the same distribution of the trait we are interested in—when the

sample “looks like” the population for our purposes. In the case of the cats, the strength of the argument

depends on whether our sample group of calico cats had about the same proportion of females as the

entire population of all calico cats.

There is a lot of math and research design—which you might learn about if you take a course in applied

statistics or in quantitative research design—that goes into determining the likelihood that a sample is

representative. However, even with the best math and design, all we can infer is that a sample is

extremely likely to be representative; we can never be absolutely certain it is without checking the

entire population. However, if we are careful enough, our arguments can still be very strong, even if they

do not produce absolute certainty. This section will examine how researchers try to ensure the sample

population is representative of the whole population and how researchers assess how confident they

can be in their results.

Representativeness

The main way that researchers try to ensure that the sample population is representative of the whole

population is to make sure that the sample population is random and sufficiently large. Researchers also

consider a measure called the margin of error to determine how similar the sample population is to the

whole population.

Randomness

Suppose you want to know how many marshmallow treats

are in a box of your favorite breakfast cereal. You do not

have time to count the whole box, so you pour out one cup.

You can count the number of marshmallows in your cup and then reason that the box should have the same proportion of marshmallows as the cup. You found 15 marshmallows in the cup, and the box holds eight cups of cereal, so you figure that there should be about 120 marshmallows in the box. Your argument looks something like this:

A one-cup sample of cereal contains 15 marshmallows.

The box holds eight cups of cereal.

Therefore, the box contains 120 marshmallows.

[Image caption: To ensure a sample is representative, participants should be randomly selected from the larger population. Careful consideration is required to ensure selections truly represent the larger population.]

What entitles you to claim that the sample is

representative? Is there any way that the sample may not

represent the percentage of marshmallows in the whole

box? One potential problem is that marshmallows tend to be

lighter than the cereal pieces. As a result, they tend to rise to the top of the box as the cereal pieces settle

toward the bottom of the box over time. If you just scoop out a cup of cereal from the top, then, your

sample may not be representative of the whole box and may have too many marshmallows.

One way to solve this problem might be to shake the box. Vigorously shaking the box would probably

distribute the marshmallows fairly evenly. After a good shake, a particular piece of marshmallow or

cereal might equally end up anywhere in the box, so the ones that make it into your sample will be

largely random. In this case the argument may be fairly strong.
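Here is a toy version of the cereal-box example in Python (the quantities are invented). random.shuffle plays the role of the vigorous shake, giving every piece an equal chance of landing in the sampled cup; without it, scooping from the "top" of this list would return nothing but marshmallows.

```python
import random

box = ["marshmallow"] * 120 + ["cereal"] * 680  # 800 pieces = 8 cups of 100
random.shuffle(box)                             # the vigorous shake

cup = box[:100]                                 # scoop one cup off the top
found = cup.count("marshmallow")
estimate = found * 8                            # generalize from one cup to eight

print(f"Marshmallows in the cup: {found}; estimated in the box: {estimate}")
```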

In a random sample, every member of the population has an equal chance of being included.

Understanding how randomness works to ensure representativeness is a bit tricky, but another example

should help clear it up.

Almost all students at my high school have laptops.

Therefore, almost all high school students in the United States have laptops.

This reasoning might seem pretty strong, especially if you go to a large high school. However, is there a

way that the sample population (the students at the high school) may not be truly random? Perhaps if

the high school is in a relatively wealthy area, then the students will be more likely to have laptops than

random American high schoolers. If the sample population is not truly random but has a greater or

lesser tendency to have the relevant characteristic than a random member of the whole population, this

is known as a biased sample. Biased samples will be discussed further in Chapter 7, but note that they

often help reinforce people’s biased viewpoints (see Everyday Logic: Why You Might Be Wrong).

The principle of randomness applies to other types of statistical arguments as well. Consider the argument about John’s community college class. The argument, again, goes as follows:

85% of community college students are younger than 40.

John is teaching a community college course.

Therefore, about 85% of the students in John’s class are under 40.

Making Inferences From Statistics

From Title: Evidence in Argument: Critical Thinking
(https://fod.infobase.com/PortalPlaylists.aspx?wID=100753&xtid=49816)

One must be careful when making inductive generalizations based on statistical data. Consider the examples in this video. Raw numbers can sound more alarming than percentages. Likewise, rate statistics can be misleading.

Critical Thinking Questions

1. The characteristics of the sample are an important consideration when drawing inferences from statistics. Before reading on, what qualities do you think an ideal sample possesses?

2. How can one ensure that one is making proper inferences from evidence?

3. What is the danger of expressing things using rates? What example is given that demonstrates this danger?

Since 85% of community college

students are younger than 40, we

would expect a sufficiently large

random sample of community college

students to have about the same

percentage. There are several ways,

however, that John’s class may not be

a random sample. Before going on to

the next paragraph, stop and see how

many ways you can think of on your

own.

So how is John’s class not a random

sample? Notice first that the

argument references a course at a

single community college. The

average student age likely varies

from college to college, depending on

the average age of the nearby

population. Even within this one

community college, John’s class is not

random. What time is John’s class?

Night classes tend to attract a higher

percentage of older students than

daytime classes. Some subjects also

attract different age groups. Finally,

we should think about John himself.

His age and reputation may affect the kind of students who enroll in his classes.

In all these ways, and maybe others, John’s class is not a random sample: There is not an equal chance that every community college student might be included. As a result, we have little reason to think that John’s class will be representative of the general population of community college students, so we cannot use his class to reliably predict what the population will look like, nor can we use the population to reliably predict what John’s class will look like.

Everyday Logic: Why You Might Be Wrong

People are often very confident about their views, even when it comes

to very controversial issues that may have just as many people on the

other side. There are probably several reasons for this, but one of

them is due to the use of biased sampling. Consider whether you think your views about the world are shared by many people or by only a few. It is not uncommon for people to think that their views are more widespread than they actually are. Why is that?

[Image caption: Confirmation bias, or the tendency to seek out support for our beliefs, can be seen in the friends we choose, books we read, and news sources we select.]

Think about how you form your opinion about how much of the nation

or world agrees with your view. You probably spend time talking with

your friends about these views and notice how many of your friends

agree or disagree with you. You may watch television shows or read

news articles that agree or disagree with you. If most of the sources

you interact with agree with your view, you might conclude that most

people agree with you.

However, this would be a mistake. Most of us tend to interact more with people and information

sources with which we agree, rather than those with which we disagree. Our circle of friends

tends to be concentrated near us both geographically and ideologically. We share similar

concerns, interests, and views; that is part of what makes us friends. As with choosing friends, we

also tend to select information sources that confirm our beliefs. This is a well-known

psychological tendency known as confirmation bias (this will be discussed further in Chapter 8).

We seem to reason as follows:

A large percentage of my friends and news sources agree with my view.

Therefore, a large percentage of all people and sources agree with my view.

We have seen that this reasoning is based on a biased sample. If you take your friends and

information sources as a sample, they are not likely to be representative of the larger population

of the nation or world. This is because rather than being a random sample, they have been

selected, in part, because they hold views similar to yours. A good critical thinker takes sampling

bias into account when thinking about controversial issues.

Sample Size

Even a perfectly random sample may not be representative, due to bad luck. If you flip a coin 10 times,

for example, there is a decent chance that it will come up heads 8 of the 10 times. However, the more

times you flip the coin, the more likely it is that the percentage of heads will approach 50%.
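A quick simulation makes the point (a sketch of my own, not from the text): small runs of coin flips often stray far from 50% heads, while large runs rarely do.

```python
import random

random.seed(1)  # make the demonstration repeatable
for n in (10, 100, 10_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>6} flips: {100 * heads / n:.1f}% heads")
```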

The smaller the sample, the more likely it is to be nonrepresentative. This variable is known as the

sample size. Suppose a teacher wants to know the average height of students in his school. He randomly

picks one student and measures her height. You should see that this is not a big enough sample. By

measuring only one student, there is a decent chance that the teacher may have randomly picked

someone extremely tall or extremely short. Generalizing on an overly small sample would be making a

hasty generalization, an error in reasoning that will be discussed in greater detail in Chapter 7. If the

teacher chooses a sample of two students, it is less likely that they will both be tall or both be short. The

more students the teacher chooses for his sample, the less likely it is that the average height of the

sample will be much different than the average height of all students. Assuming that the selection

process is unbiased, therefore, the larger the sample population is, the more likely it is that the sample

will be representative of the whole population (see A Closer Look: How Large Must a Sample Be?).

A Closer Look: How Large Must a Sample Be?

In general, the larger a sample is, the more likely it is to be representative of the population from

which it is drawn. However, even relatively small samples can lead to powerful conclusions if

they have been carefully drawn to be random and to be representative of the population. As of

this writing, the population of the United States is in the neighborhood of 317 million, yet Gallup,

one of the most respected polling organizations in the country, often publishes results based on a

sample of fewer than 3,000 people. Indeed, its typical sample size is around 1,000 (Gallup, 2010).

That is a sample size of less than 1 in every 300,000 people!

Gallup can do this because it goes to great lengths to make sure that its samples are randomly

drawn in a way that matches the makeup of the country’s population. If you want to know about

people’s political views, you have to be very careful because these views can vary based on a

person’s locale, income, race or ethnicity, gender, age, religion, and a host of other factors.

There is no single, simple rule for how large a sample should be. When samples are small or

incautiously collected, you should be suspicious of the claims made on their basis. Professional

research will generally provide clear descriptions of the samples used and a justification of why

they are adequate to support their conclusions. That is not a guarantee that the results are

correct, but they are bound to be much more reliable than conclusions reached on the basis of

small and poorly collected samples.

For example, sometimes politicians tour a state with the stated aim of finding out what the people

think. However, given that people who attend political rallies are usually those with similar

opinions as the speaker, it is unlikely that the set of people sampled will be both large enough and

random enough to provide a solid basis for a reliable conclusion. If politicians really want to find

out what people think, there are better ways of doing so.

Margin of Error

It is always possible that a sample will be wildly different than the population. But equally important is

the fact that it is quite likely that any sample will be slightly different than the population. Statisticians

know how to calculate just how big this difference is likely to be. You will see this reported in some

studies or polls as the margin of error. The margin of error can be used to determine the range of

values that are likely for the population.

For example, suppose that a poll finds that 52% of a sample prefers Ms. Frazier in an election. When you

read about the result of this poll, you will probably read that 52% of people prefer Ms. Frazier with a

margin of error of ±3% (plus or minus 3%). This means that although the real number probably is not

52%, it is very likely to be somewhere between 49% (3% lower than 52%) and 55% (3% higher than

52%). Since the real percentage may be as low as 49%, Ms. Frazier should not start picking out curtains

for her office just yet: She may actually be losing!
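For readers who want the arithmetic, the following sketch uses the standard formula for the margin of error of a sample proportion at 95% confidence. The 52% figure is from the Ms. Frazier example; the sample size of 1,000 is an assumption added for illustration.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a sample proportion; z = 1.96 gives 95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

p, n = 0.52, 1000                   # 52% support; assumed sample of 1,000
moe = margin_of_error(p, n)
print(f"52% plus or minus {100 * moe:.1f} points")                        # about 3.1
print(f"Likely range: {100 * (p - moe):.0f}% to {100 * (p + moe):.0f}%")  # 49% to 55%
```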

Confidence Level

We want large, random samples because we want to be confident that our sample is representative of

the population. The more confident we are that our sample is representative, the more confident we can

be in conclusions we draw from it. Nonetheless, even a small, poorly drawn sample can yield informative

results if we are cautious about our reasoning.

If you notice that many of your friends and acquaintances are out of work, you may conclude that

unemployment levels are up. Clearly, you have some evidence for your conclusion, but is it enough? The

answer to this question depends on how strong you take your argument to be. Remember that inductive

arguments vary from extremely weak to extremely strong. The strength of an argument is essentially the

level of confidence we should have in the conclusion based on the reasons presented. Consider the

following ways you might state your confidence that unemployment levels were up, based on noting

unemployment among your friends and acquaintances.

a. “I’m certain that unemployment is up.”

b. “I’m reasonably sure that unemployment is up.”

c. “It’s more likely than not that unemployment is up.”

d. “Unemployment might be up.”

Clearly, A is too strong. Your acquaintances just are not likely to represent the population enough for

you to be certain that unemployment is up. On the other hand, D is weak enough that it really does not

need much evidence to support it. B and C will depend on how wide and varied your circle of

acquaintances is and on how much unemployment you see among them. If you know a lot of people and

your acquaintances are quite varied in terms of profession, income, age, race, gender, and so on, then

you can have more confidence in your conclusion than if you had only a small circle of acquaintances and

they tended to all be like each other in these ways. B also depends on just what you mean by “reasonably

sure.” Does that mean 60% sure? 75%? 85%?

Most reputable studies will include a “confidence level” that indicates how confident one can be that the study’s conclusions are supported by the reasons given. The degree of confidence can vary quite a bit,

so it is worth paying attention to. In most social sciences, researchers aim to reach a 95% or 99%

confidence level. A confidence level of 95% means that if we did the same study 100 times, then in 95 of

those tests the results would fall within the margin of error. As noted earlier, the field of physics

requires a confidence level of about 99.99997%, much higher than is typically required or attained in

the social sciences. On the other end, sometimes a confidence level of just over 50% is enough if you are

only interested in knowing whether something is more likely than not.
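The meaning of a 95% confidence level can also be checked by simulation (again a sketch with invented numbers): run the same poll many times against a known population value and count how often the sample lands within the margin of error.

```python
import math
import random

random.seed(2)
TRUE_P, N, Z = 0.52, 1000, 1.96     # true support, sample size, 95% z-score
moe = Z * math.sqrt(TRUE_P * (1 - TRUE_P) / N)

trials = 1_000
hits = sum(
    abs(sum(random.random() < TRUE_P for _ in range(N)) / N - TRUE_P) <= moe
    for _ in range(trials)
)
print(f"Samples within the margin of error: {100 * hits / trials:.1f}%")  # near 95%
```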

Applying This Knowledge

Now that we have learned something about statistical arguments, what can we say about Angelina Jolie’s

argument, presented at the beginning of the prior section? First, notice that it has the form of a statistical

syllogism. We can put it this way, written as if from her perspective:

87% of women with certain genetic and other factors develop breast cancer.

I am a woman with those genetic and other factors.

Therefore, I have an 87% risk of getting breast cancer.

We can see that the argument fits the form correctly. While not deductive, the argument is inductively

strong. Unless we have reason to believe that she is more or less likely than the average person with

those factors to develop breast cancer, if these premises are true then they give strong evidence for the

truth of the conclusion. However, what about the first premise? Should we believe it?

In evaluating the first premise, we need to consider the evidence for it. Were the samples of women

studied sufficiently random and large that we can be confident they were representative of the

population of all women? With what level of confidence are the results established? If the samples were

small or not randomized, then we may have less confidence in them. Jolie’s doctors said that Jolie had an

87% chance of developing breast cancer, but there’s a big difference between being 60% confident that

she has this level of risk and being 99% certain that she does. To know how confident we should be, we

would need to look at the background studies that establish that 87% of women with those factors

develop breast cancer. Anyone making such an important decision would be well advised to look at

these issues in the research before acting.

Practice Problems 5.1

Which of the following attributes might negatively influence the data drawn from the

following samples? Click here

(https://ne.edgecastcdn.net/0004BA/constellation/PDFs/PHI103_2e/Answers_PracticeProblems5.1.pdf)

to check your answers.

1. A teacher surveys the gifted students in the district about the curriculum that should be

adopted at the high school.

a. sample size

b. representativeness of the sample

c. a and b

d. There is no negative influence in this case.

2. A researcher for Apple analyzes a large group of tribal people in the Amazon to

determine which new apps she should create in 2014.

a. sample size

b. representativeness of the sample

c. a and b

d. There is no negative influence in this case.

3. A researcher on a college campus interviews 10 students after a yoga class about their

drug use habits and determines that 80% of the student population probably smokes

marijuana.

a. sample size

b. representativeness of the sample

c. a and b

d. There is no negative influence in this case.

[Image caption: Sufficient conditions are present in classroom grading systems. If you need a total of 850 points to receive an A, the sufficient condition to receive an A is earning 850 points.]

5.4 Causal Relationships: The Meaning of Cause

It is difficult to say exactly what we mean when we say that one thing causes another. Think about

turning on the lights in your room. What is the cause of the lights turning on? Is it the flipping of the

switch? The electricity in the wires? The fact that the bulb is not broken? Your initial desire for the lights

to be on? There are many things we could identify as a plausible cause of the lights turning on. However,

for practical purposes, we generally look for the set of conditions without which the event in question

would not have occurred and with which it will occur. In other words, logicians aim to be more specific

about causal relationships by discussing them in terms of sufficient and necessary conditions. Recall that

we used these terms in Chapter 4 when discussing propositional logic. Here we will discuss how these

terms can help us understand causal relationships.

Sufficient Conditions

According to British philosopher David Hume, the notion of cause is based

on nothing more than a “constant conjunction” that holds between

events—the two events always occur together (Morris & Brown, 2014). We

notice that events of kind A are always followed by events of kind B, and we

say “A causes B.” Thus, to claim a causal relationship between events of type

A and B might be to say: Whenever A occurs, B will occur.

Logicians have a fancy phrase for this relationship: We say that A is a

sufficient condition for B. A factor is a sufficient condition for the

occurrence of an event if whenever the factor occurs, the event also occurs:

Whenever A occurs, B occurs as well. Or in other words:

If A occurs, then B occurs.

For example, having a billion dollars is a sufficient condition for being rich;

being hospitalized is a sufficient condition for being excused from jury duty;

having a ticket is a sufficient condition for being able to be admitted to the

concert.

Often several factors are jointly required to create sufficient conditions. For

example, each state has a set of jointly sufficient conditions for being able to

vote, including being over 18, being registered to vote, and not having been

convicted of a felony, among other possible qualifications.

Here is an example of how to think about sufficient conditions when thinking about real­life causation.

We know room lights do not go on just because you flip the switch. The points of the switch must come

into contact with a power source, electricity must be present, a working lightbulb has to be properly

secured in the socket, the socket has to be properly connected, and so forth. If any one of the conditions

is not satisfied, the light will not come on. Strictly speaking, then, the whole set of conditions constitutes

the sufficient condition for the event.

We often choose one factor from a set of factors and call it the cause of an event. The one we call the

cause is the one with which we are most concerned for some reason or other; often it is the one that

represents a change from the normal state of things. A working car is the normal state of affairs; a hole in the radiator tube is the change to that state of affairs that results in the overheated engine. Similarly, the electricity and lightbulb are part of the normal state of things; what changed most recently to make the light turn on was the flipping of the switch.

[Image caption: Although water is a necessary condition for life, it is not a sufficient condition for life because humans also need oxygen and food.]

Necessary Conditions

A factor is a necessary condition for an event if the event would not occur in the absence of the factor.

Without the necessary condition, the effect will not occur. A is a necessary condition for B if the

following statement is always true:

If A is not present, then neither is B.

This statement happens to be equivalent to the statement that if B is present, then A is present. Thus, a

handy way to understand the difference between necessary and sufficient conditions is as follows:

“A is sufficient for B” means that if A occurs, then B occurs.

“A is necessary for B” means that if B occurs, then A occurs.
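These two definitions can be checked mechanically against a table of observed cases. The sketch below is my own illustration (the case data are invented): A is sufficient for B if no case has A without B, and necessary for B if no case has B without A.

```python
# Each case records whether factor A and event B occurred (invented data).
cases = [
    {"A": True,  "B": True},
    {"A": False, "B": False},
    {"A": False, "B": True},   # B occurred without A
]

def is_sufficient(cases):
    """A is sufficient for B: whenever A occurs, B occurs."""
    return all(case["B"] for case in cases if case["A"])

def is_necessary(cases):
    """A is necessary for B: whenever B occurs, A occurs."""
    return all(case["A"] for case in cases if case["B"])

print("A sufficient for B:", is_sufficient(cases))  # True
print("A necessary for B:", is_necessary(cases))    # False, per the third case
```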

Let us take a look at a real example. Poliomyelitis, or polio,

is a disease caused by a specific virus. In only a small

minority of those with poliovirus does the virus infect the

central nervous system and lead to the terrible condition

known as paralytic polio. In the large majority of cases,

however, the virus goes undetected and does not result in

paralysis. Thus, infection with poliovirus is not a sufficient

condition for getting paralytic polio. However, because one

must have the virus to have that condition, being infected

with poliovirus is a necessary condition for getting paralytic

polio (Mayo Clinic, 2014).

On the other hand, being squashed by a steamroller is a

sufficient condition for death, but it is not a necessary

condition. Whenever someone has been squashed by a

steamroller, that person is quite dead. However, it is not the

case that anyone who is dead has been run over by a

steamroller.

If our purpose in looking for causes is to be able to produce an effect, it is reasonable to look for

sufficient conditions for that effect. If we can manipulate circumstances so that the sufficient condition is

present, the effect will also be present. If we are looking for causes in order to prevent an effect, it is

reasonable to look for necessary conditions for that effect. If we prevent a necessary condition from

materializing, we can prevent the effect.

The eradication of yellow fever is a striking example. Research showed that being bitten by a certain

type of mosquito was a necessary condition for contracting yellow fever (though it was not a sufficient

condition, for some people who were bitten by these mosquitoes did not contract yellow fever).

Consequently, a campaign to destroy that particular species of mosquito through the widespread use of

insecticides virtually eliminated yellow fever in many parts of the world (World Health Organization,

2014).

Necessary and Sufficient Conditions

The most restrictive interpretation of a causal relationship consists of construing “cause” as a condition

both necessary and sufficient for the occurrence of an event. If factor A is necessary and sufficient for the

occurrence of event B, then whenever A occurs, B occurs, and whenever A does not occur, B does not

occur. In other words:

If A, then B, and if not­A, then not­B.

For example, to produce diamonds, certain very specific conditions must exist. Diamonds are produced if

and only if carbon is subjected to immense pressure and heat for a certain period of time. Diamonds do

not occur through any other process. If all of the conditions exist, then diamonds will result; diamonds

exist only when all of those conditions have been met. Therefore, carbon subjected to the right

combination of pressure, heat, and time constitutes both a necessary and sufficient condition for

diamond production.

This construction of cause is so restrictive that very few actual relationships in ordinary experience can

satisfy it. However, some scientists think that this is the kind of invariant relationship that scientific laws

must express. For instance, according to Newton’s law of gravitation, objects attract each other with a

force proportional to the product of their masses and the inverse of the square of the distance between them. Therefore, if we know the force of

attraction between two bodies, we can calculate the distance between them (assuming we know their

masses). Conversely, if we know the distance between them, we can calculate the force of attraction.

Thus, having a certain degree of attraction between two bodies constitutes both a necessary and

sufficient condition for the distance between them. It happens frequently in math and science that the

values assigned to one factor determine the values assigned to another, and this relationship can be

understood in terms of necessary and sufficient conditions.
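As a worked illustration, the standard form of Newton's law can be solved for either quantity:

```latex
F = G\,\frac{m_1 m_2}{r^2}
\qquad\Longleftrightarrow\qquad
r = \sqrt{\frac{G\,m_1 m_2}{F}}
```

Holding the masses fixed, a given force F obtains if and only if the bodies are at the corresponding distance r; each value is a necessary and sufficient condition for the other.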

Other Types of Causes

The terms necessary condition and sufficient condition give us concrete and technical ways to describe

types of causes. However, in everyday life, the factor we mention as the cause of an event is rarely one

we consider sufficient or even necessary. We frequently select one factor from a set and say it is the

cause of the event. Our aims and interests, as well as our knowledge, affect that choice. Thus, practical,

moral, or legal considerations may influence our selection. There are three principal considerations that

may lead us to choose a single factor as “the cause,” although this is not an exhaustive listing.

Trigger cause. The trigger cause, or the factor that initiates an event, is often designated the cause of the

event. Usually, this is the factor that occurs last and completes a causal chain—the set of sufficient

conditions—producing the effect. Flipping the switch triggers the lights. All the other factors may be

present and as such constitute the standing conditions that allow the event to be triggered. The trigger

factor is sometimes referred to as the proximate cause since it is the factor nearest the final event (or

effect).

Unusual factor. Let us suppose that someone turns on a light and an explosion follows. Turning on the

light caused an explosion because the room was full of methane gas. Now being in a room is fairly normal, turning on lights is fairly normal, having oxygen in a room is fairly normal, and having an unsealed light switch is fairly normal. The only condition outside the norm is the presence of a large quantity of explosive gas. Therefore, the presence of methane is referred to as the cause of the explosion. What is unusual, what is outside the norm, is the cause. If we are concerned with fixing moral or legal responsibility for an effect, we are likely to focus on the person who left the gas on, not the person who turned on the lights.

[Image caption: Variables, such as buffalo and White men, can be correlated in two ways—directly and inversely. Which type of correlation is being discussed in this cartoon?]

Controllable factor. Sometimes we call attention to a controllable factor instrumental in producing the

event and point out that since the factor could have been controlled, so could the event. Thus, although

smoking is neither a sufficient nor a necessary condition for lung cancer, it is a controllable factor.

Therefore, over and above uncontrollable factors like heredity and chance, we are likely to single out

smoking as the cause. Similarly, drunk driving is neither a sufficient nor a necessary condition for getting

into a car accident, but it is a controllable factor, so we are likely to point to it as a cause.

Correlational Relationships

In the cases of both smoking and drunk driving, neither was a necessary nor a sufficient condition for the subsequent event in question (lung cancer and car accidents). Instead, we would say that both are highly

correlated with the respective events. Two things can be said to be correlated, or in correlation, when

they occur together frequently. In other words, A is correlated with B, so B is more likely to occur if A

occurs, and vice versa. For example, having gray hair is correlated with age. The older someone is, the

more likely he or she is to have gray hair, and vice versa. Of course, not all people with gray hair are old,

and not all old people have gray hair, so age is neither a necessary nor a sufficient condition for gray

hair. However, the two are highly correlated because they have a strong tendency to go together.

Two things that vary in the same direction are said to be

directly correlated or to vary directly; the higher one’s age,

the more gray hair. Things that are correlated may also vary

in opposite directions; these are said to vary inversely. For

example, there is an inverse correlation between the size of

a car and its fuel economy. In general, the bigger a car is, the

lower its fuel economy is. If you want a car that gets high

miles per gallon, you should focus on cars that are smaller.

There are other factors to consider too, of course. A small

sports car may get lower fuel economy than a larger car

with less power. Correlation does not mean that the

relationship is perfect, only that variables tend to vary in a

certain way.
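Direct and inverse correlation can be made precise with the standard Pearson correlation coefficient, as in the sketch below (the data points are invented for illustration).

```python
import math

def pearson(xs, ys):
    """Pearson r: +1 is perfectly direct, -1 is perfectly inverse."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

ages = [30, 45, 60, 75]
gray_hair_pct = [5, 25, 60, 85]       # rises with age: direct correlation
car_size_tons = [1.2, 1.6, 2.0, 2.5]
mpg = [40, 33, 27, 20]                # falls as size rises: inverse correlation

print(f"age vs. gray hair: {pearson(ages, gray_hair_pct):+.2f}")  # near +1
print(f"size vs. mpg:      {pearson(car_size_tons, mpg):+.2f}")   # near -1
```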

You may have heard the phrase “correlation does not imply

causation,” or something similar. Just because two things

happen together, it does not necessarily follow that one

causes the other. For example, there is a well­known

correlation between shoe size and reading ability in

elementary children. Children with larger feet have a strong

tendency to read better than children with smaller feet. Of

course, no one supposes that a child’s shoe size has a direct

effect on his or her reading ability, or vice versa. Instead,

both of these things are related to a child’s age. Older children tend to have bigger feet than younger

children; they also tend to read better. Sometimes the connection between correlated things is simple, as

in the case of shoe size and reading, and sometimes it is more complicated.

Whenever you read that two things have been shown to be linked, you should pay attention to the

possibility that the correlation is spurious or possibly has another explanation. Consider, for example, a

study showing a strong correlation between the amount of fat in a country’s diet and the amount of

certain types of cancer in that country (such as K. K. Carroll’s 1975 study, as cited in Paulos, 1997). Such

a correlation may lead you to think that eating fat causes cancer, but this could potentially be a mistake.

Instead, we should consider whether there might be some other connection between the two.

It turns out that countries with high fat consumption also have high sugar consumption—perhaps sugar

is the culprit. Also, countries with high fat and sugar consumption tend to be wealthier; fat and sugar are

expensive compared to grain. Perhaps the correlation is the result of some other aspect of a wealthier

lifestyle, such as lower rates of physical exercise. (Note that wealth is a particularly common

confounding factor, or a factor that correlates with the dependent and independent variables being

studied, as it bestows a wide range of advantages and difficulties on those who have it.) Perhaps it is a

combination of factors, and perhaps it is the fat after all; however, we cannot simply conclude with

certainty from a correlation that one causes the other, not without further research.

Sometimes correlation between two things is simply random. If you search through enough data, you

may find two factors that are strongly correlated but that have nothing at all to do with each other. For

example, consider Figure 5.1. At first glance, you might think the two factors must be closely connected.

But then you notice that one of them is the divorce rate in Maine and the other is the per capita

consumption of margarine in the United States. Could it be that by eating less margarine you could help

save the marriages of people in Maine?

Figure 5.1: Are these two factors correlated?

Although it may seem like two factors are correlated, we sometimes have to look harder

to understand the relationship.

Source: www.tylervigen.com.

On the other hand, although correlation does not imply causation, it does point to it. That is, when we

see a strong correlation, there is at least some reason to suspect a causal connection of some sort

between the two correlates. It may be that one of the correlates causes the other, a third thing causes

them both, there is some more complicated causal relation between them, or there is no connection at

all.

However, the possibility that the correlation is merely accidental becomes increasingly unlikely if the

sample size is large and the correlation is strong. In such cases we may have to be very thoughtful in

seeking and testing possible explanations of the correlation. The next section discusses ways that we

might find and narrow down potential factors involved in a causal relationship.

5.5 Causal Arguments: Mill’s Methods

Reasoning about causes is extremely important. If we can correctly identify what causes a particular

effect, then we have a much better chance of controlling or preventing the effect. Consider the search for

a cure for a disease. If we do not understand what causes a particular disease, then our chances of being

able to cure it are small. If we can identify the cause of the disease, we can be much more precise in

searching for a way to prevent the disease. On the other hand, if we think we know the cause when we

do not, then we are likely to look in the wrong direction for a cure.

A causal argument—an argument about causes and effects—is almost always an inductive argument.

This is because, although we can gather evidence about these relationships, we are almost never in a

position to prove them absolutely.

The following four experimental methods were formally stated in the 19th century by John Stuart Mill in

his book A System of Logic and so are often referred to as Mill’s methods. Mill’s methods express the

most basic underlying logic of many current methods for investigating causality. They provide a great

introduction to some of the basic concepts involved—but know that modern methods are much more

rigorous.

Used with caution, Mill’s methods can provide a guide for exploring causal connections, especially when

one is looking at specific cases against the background of established theory. It is important to

remember that although they can be useful, Mill’s methods are only the beginning of the study of

causation. By themselves, they are probably most useful as methods for identifying potential subjects for

further study using more robust methods that are beyond the scope of this book.

Method of Agreement

In 1976 an unknown illness affected numerous people in Philadelphia. Although it took some time to

fully identify the cause of the disease, a bacterium now called Legionella pneumophila, the first step in

the investigation was to find common features of those who became ill. Researchers were quick to note

that sufferers had all attended an American Legion convention at the Bellevue-Stratford Hotel. As you

can guess, the focus of the investigation quickly narrowed to conditions at the hotel. Of course, the

convention and the hotel were not themselves the cause of the illness, but it was no mere coincidence

that all of the ill had attended the convention. By finding the common elements shared by those who

became ill, investigators were able to quickly narrow their search for the cause. Ultimately, the

bacterium was located in a fountain in the hotel.

The method of agreement involves comparing situations in which the same kind of event occurs. If the

presence of a certain factor is the only respect in which the situations are the same (that is, they agree),

then this factor may be related to the cause of the event. We can represent this with something like

Table 5.2. The table indicates whether each of four factors was present in a specific case (A, B, or C) and,

in the last column, whether the effect manifested itself (in the earlier case of what is now known as

Legionnaires’ disease, the effect we would be interested in is whether infection occurred).

Table 5.2: Example of method of agreement

Case Factor 1 Factor 2 Factor 3 Factor 4 Effect

A No Yes Yes No Yes


B No No Yes Yes Yes

C Yes Yes Yes Yes Yes

The three cases all resulted in the same effect but differed in which factors were present—with the

exception of Factor 3, which was present in all three cases. We may then suspect that Factor 3 is causally related to the effect. Our notion of cause here is that of a necessary condition: the common factor is the only one present in every case in which the effect occurred, and any factor absent from even one such case could not be necessary for the effect.
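As a minimal sketch (ours, not Mill's own formulation), the search for the common factor in Table 5.2 can be expressed in Python as a set intersection:

# Factors present in each case in which the effect occurred (Table 5.2).
cases = {
    "A": {"Factor 2", "Factor 3"},
    "B": {"Factor 3", "Factor 4"},
    "C": {"Factor 1", "Factor 2", "Factor 3", "Factor 4"},
}

# The method of agreement keeps only the factors present in every case.
common = set.intersection(*cases.values())
print(common)  # {'Factor 3'} -- the candidate cause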

In general, the method of agreement works best when we have a large group of cases that is as varied as

possible. A large group is much more likely to vary across many different factors than a small group.

Unfortunately, the world almost never presents us with situations that are wholly unlike except for a single factor. More often we have three or more situations that are similar in many respects. For example, all of the afflicted in

the 1976 outbreak were members of the American Legion, all were adults, all were men, all lived in

Pennsylvania. Here is where we have to use common sense and what we already know. It is unlikely that

merely being a member of an organization is the cause of a disease. We expect diseases to be caused by

environmental factors: bacteria, viruses, contaminants, and so on. As a result, we can focus our search on

those similarities that seem most likely to be relevant to the cause. Of course, we may be wrong; that is a

hallmark of inductive reasoning generally. But by being as careful and as reasonable as we can, we can

often make great progress.

Method of Difference

The method of difference involves comparing a situation in which an event occurs with similar

situations in which it does not. If the presence of a certain factor is the only difference between the two

kinds of situations, it is likely to be causally related to the effect.

Suppose your mother comes to visit you and makes your favorite cake. Unfortunately, it just does not

turn out. You know she made it in the same way she always does. What could the problem be? Start by

looking at differences between how she made the cake at your house and how she makes it at hers.

Ultimately, the only difference you can find is that your mom lives in Tampa and you live in Denver.

Since that is the only difference, it is likely to be causally related to the effect. In fact,

Denver is both much higher and much drier than Tampa. Both of these factors make a difference in

baking cakes.

Let us suppose we are interested in two cases, A and B, in which A has the effect we are interested in

(the cake not turning out right) and B does not. This is outlined in Table 5.3. If we can find only one

factor that is different between the two cases—in this case, Factor 1—then that factor is likely to be

causally related to the effect. This does not tell us whether the factor directly causes the effect, but it

does suggest a causal link. Further investigation might reveal just exactly what the connection is.

Table 5.3: Example of method of difference

Case Factor 1 Factor 2 Factor 3 Factor 4 Effect

A Yes No No Yes Yes

B No No No Yes No

In this example, Factor 1 is the one factor that is different between the two cases. Perhaps the presence

of Factor 1 is related to why Case A had the effect but Case B did not. Here we are seeing Factor 1 as a sufficient condition for the effect: Factor 4 is present even in the case without the effect, so it cannot be sufficient, and Factor 1 is the candidate that remains.
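Again as a rough sketch of ours, the comparison in Table 5.3 amounts to listing the factors on which the two cases differ:

# Factors present in each case (Table 5.3).
case_a = {"Factor 1", "Factor 4"}  # effect occurred
case_b = {"Factor 4"}              # effect did not occur

# The method of difference reports the factors found in one case but not the other.
candidates = case_a ^ case_b       # symmetric difference
print(candidates)  # {'Factor 1'} -- the candidate cause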

The method of difference is employed frequently in clinical trials of experimental drugs. Researchers

carefully choose or construct two situations that resemble each other in as many respects as possible. If

a drug is employed in one but not the other, then they can ascribe to the drug any change in one

situation not matched by a change in the other. Note that the two groups must be as similar as possible,

since variation could introduce other possible causal links. The group in which change is expected is

often referred to as the experimental group, and the group in which change is not expected is often

referred to as the control group.

The method of difference may seem obvious and its results reliable. Yet even in a relatively simple

experimental setup like this one, we may easily find grounds for doubting that the causal claim has been

adequately established.

One important factor is that the two cases, A and B, have to be as similar as possible in all other respects

for the method of difference to be used effectively. If your 8-year-old son made the cake without

supervision, there are likely to be a whole host of differences that could explain the failure. The same

principle applies to scientific studies. One thing that can subtly skew experimental results is

experimental bias. For example, if the experimenters know which people are receiving the experimental

drug, they might unintentionally treat them differently.

To prevent such possibilities, so-called blind experiments are often used. Those conducting the

experiment are kept in ignorance about which subjects are in the control group and which are in the

experimental group so that they do not even unintentionally treat the subjects differently.

The experimenters, therefore, do not know whether they are injecting distilled water or the actual drug. In

this way the possibility of a systematic error is minimized.

We also have to keep in mind that our inquiry is guided by background beliefs that may be incorrect. No

two cases will ever be completely the same except for a single factor. Your mother made the cake on a

different day than she did at home, she used a different spoon, different people were present in the

house, and so on. We naturally focus on similarities and differences that we expect to be relevant.

However, we should always realize that reality may disagree with our expectations.

Causal inquiry is usually not a matter of conducting a single experiment. Often we cannot even control

for all relevant factors at the same time, and once an experiment is concluded, doubts about other

factors may arise. A series of experiments in which different factors are kept constant while others are

varied one by one is always preferable.

Joint Method of Agreement and Difference

The joint method of agreement and difference is, as the name suggests, a combination of the methods

of agreement and difference. It is the most powerful of Mill’s methods. The basic idea is to have two

groups of cases: One group shows the effect, and the other does not. The method of agreement is used

within each group, by seeing what they have in common, and the method of difference is used between

the two groups, by looking for the differences between the two. Table 5.4 shows how such a chart would

look if we were comparing three different cases (1, 2, and 3) across two groups (A and B).

Table 5.4: Example of joint method of agreement and difference

Case/group Factor 1 Factor 2 Factor 3 Factor 4 Effect

1/A Yes No No Yes Yes

2/A No No Yes Yes Yes

3/A No Yes No Yes Yes

1/B No Yes Yes No No

2/B Yes Yes No No No

3/B Yes No Yes No No

As you can see, within each group the cases agree only on Factor 4 and the effect. But when you compare

the two groups, the only consistent differences between them are in Factor 4 and the effect. This result

suggests the possibility that Factor 4 may be causally related to the effect in question. In this method, we

are using the notion of a necessary and sufficient condition. The effect happens whenever Factor 4 is

present and never when it is absent.
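Here, too, a minimal sketch of ours shows how the pattern in Table 5.4 can be checked mechanically: a candidate factor must be present in every case that shows the effect and absent from every case that does not.

# Factors present in each case (Table 5.4).
effect_cases = [
    {"Factor 1", "Factor 4"},  # case 1/A
    {"Factor 3", "Factor 4"},  # case 2/A
    {"Factor 2", "Factor 4"},  # case 3/A
]
no_effect_cases = [
    {"Factor 2", "Factor 3"},  # case 1/B
    {"Factor 1", "Factor 2"},  # case 2/B
    {"Factor 1", "Factor 3"},  # case 3/B
]

present_in_all = set.intersection(*effect_cases)
candidates = present_in_all - set.union(*no_effect_cases)
print(candidates)  # {'Factor 4'} -- candidate necessary and sufficient condition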

The joint method is the basis for modern randomized controlled experiments. Suppose you want to see

if a new medicine is effective. You begin by recruiting a large group of volunteers. You then randomly

assign them to either receive the medicine or a placebo. The random assignment ensures that each

group is as varied as possible and that you are not unknowingly deciding whether to give someone the

medicine based on some common factor. If it turns out that everyone who gets the medicine improves

and everyone who gets the placebo stays the same or gets worse, then you can infer that the medicine is

probably effective.

In fact, advanced statistics allow us to make inferences from such studies even when there is not perfect

agreement on the presence or absence of the effect. So, in reading studies, you may note that the

discussion talks about the percentage of each group that shows or does not show the effect. Yet we may

still make good inferences about causation by using the method of concomitant variation.

Method of Concomitant Variation

The method of concomitant variation is simply the method of looking for correlation between two

things. As we noted in our discussion of correlation, this cannot conclusively establish that one thing causes the other, but it does suggest that some causal connection may exist between the two. Stronger evidence can be found through further scientific study.
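As a minimal sketch with made-up monthly figures (ours, echoing the asthma example in Practice Problems 5.2), the method amounts to checking whether two quantities rise and fall together:

from statistics import correlation  # standard library, Python 3.10+

# Hypothetical monthly figures for one city.
pollution_index = [30, 45, 60, 80, 55, 25]
asthma_er_visits = [110, 150, 210, 290, 190, 95]

# A Pearson coefficient close to +1 indicates strong concomitant variation.
print(correlation(pollution_index, asthma_er_visits))  # close to +1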

You may have noticed that, in discussing causes, we are trying to explain a phenomenon. We observe

something that is interesting or important to us, and we seek to know why it happened. Therefore, the

study of Mill’s methods, as well as correlation and concomitant variation, can be seen as part of a

broader type of reasoning known as inference to the best explanation, the effort to find the best or most

accurate explanation of our observations. Because this type of reasoning is sometimes classified as a

separate type of reasoning (sometimes called abductive reasoning), it will be covered in Chapter 6.

In summary, Mill’s methods provide a framework for exploring causal relationships. It is important to

remember that although they can be useful, they are only the beginning of this important field: their chief value lies in flagging candidate causes for further study with more rigorous methods.

Practice Problems 5.2

Identify which of Mill’s methods discussed in the chapter relates to the following

examples. Click here

(https://ne.edgecastcdn.net/0004BA/constellation/PDFs/PHI103_2e/Answers_PracticeProblems5.2.pdf)

to check your answers.

1. After going to dinner, all the members of a family came down with vomiting. They all had

different entrées but shared a salad as an appetizer. The mother of the family determines

that it must have been the salad that caused the sickness.

a. method of agreement

b. method of difference

c. joint method

d. method of concomitant variation

2. A couple goes to dinner and shares an appetizer, entrée, and dessert. Only one of the two

gets sick. She drank a glass of wine, and her husband drank a beer. She believes that the

wine was the cause of her sickness.

a. method of agreement

b. method of difference

c. joint method

d. method of concomitant variation

3. In a specific city, the number of people going to emergency rooms for asthma attacks

increases as the level of pollution increases in the summer. When the winter comes and

pollution goes down, the number of people with asthma attacks decreases.

a. method of agreement

b. method of difference

c. joint method

d. method of concomitant variation

4. In the past 15 years there has never been a safety accident in the warehouse. Each day

for the past 15 years Lorena has been conducting the morning safety inspections.

However, today Lorena missed work, and there was an accident.

a. method of agreement

b. method of difference

c. joint method

d. method of concomitant variation

5. Since we have hired Earl, productivity in the office has decreased by 20%.

a. method of agreement

b. method of difference

c. joint method

d. method of concomitant variation

6. In the past, lead was put into many paints. It was found that the number of infant

fatalities increased in relation to the amount of exposure these infants had to lead-based

paints that were used on their cribs.

a. method of agreement

b. method of difference

c. joint method

d. method of concomitant variation

7. It appears that the likelihood of catching the Zombie virus increases the more one is

around people who have already been turned into zombies.

a. method of agreement

b. method of difference

c. joint method

d. method of concomitant variation

8. In order to determine how a disease was spread in humans, researchers placed two

groups of people into two rooms. Both rooms were exactly alike. However, in one room

they placed someone who was infected with the disease. The researchers found that

those who were in the room with the infected person got sick, whereas those who were

not with an infected person remained well.

a. method of agreement

b. method of difference

c. joint method

d. method of concomitant variation

9. In a certain IQ test, students in a specific group performed at a much higher level than

those of the other groups. After analyzing the group, the researchers found that the high-performing students all smoked marijuana before the exam.

a. method of agreement

b. method of difference

c. joint method

d. method of concomitant variation

badahos/iStock/Thinkstock
The ability to think critically about an authority’s argument will allow you to distinguish reliable sources from unreliable ones, which can be quite helpful when writing research papers, reading news articles, or taking advice from someone.

5.6 Arguments From Authority

An argument from authority, also known as an appeal to authority, is an inductive argument in which

one infers that a claim is true because someone said so. The general reasoning looks like this:

Person A said that X is true.

Person A is an authority on the subject.

Therefore, X is true.

Whether this type of reasoning is strong depends on the issue discussed and the authority cited. If it is

the kind of issue that can be settled by an argument from authority and if the person is actually an

authority on the subject, then it can indeed be a strong inductive argument.

Some people think that arguments from authority in general are fallacious. However, that is not

generally the case. To see why, try to imagine life without any appeals to authority. You could not believe

anyone’s statements, no matter how credible. You could not believe books; you could not believe

published journals, and so on. How would you do in college if you did not listen to your textbooks,

teachers, or any other sources of information?

Even in science class, you would have to do every

experiment on your own because you could not believe

published reports. In math, you could not trust the book or

teacher, so you would have to prove every theorem by

yourself. History class would be a complete waste of time

because, unless you had a time machine, there would be no

way to verify any claims about what happened in the past

without appeal to historical records, newspapers, journals,

and so forth. You would also have a hard time following

medical advice, so you might end up with serious health

problems. Finally, why would you go to school or work if

you could not trust the claim that you were going to get a

degree or a paycheck after all of your efforts?

Therefore, in order to learn from others and to succeed in

life, it is essential that we listen to appropriate authorities.

However, since many sources are unreliable, misleading, or

even downright deceptive, it is essential that we learn to

distinguish reliable sources of authority from unreliable

ones. Chapter 7 will discuss how to distinguish between

legitimate and fallacious appeals to authority.

Here are some examples of legitimate arguments from authority:

“The theory of relativity is true. I know because my physics professor and my physics textbook

teach that it is true.”

“Pine trees are not deciduous; it says so right here in this tree book.”

“The Giants won the pennant! I read it on ESPN.com.”

“Mike hates radishes. He told me so yesterday.”

All of these inferences seem pretty strong. For examples of arguments from authority that are not as strong, or even downright fallacious, see Chapter 7.

5.7 Arguments From Analogy

An argument from analogy is an inductive argument that draws conclusions based on the use of

analogy. An analogy is a comparison of two items. For example, many object to deficit spending (when

the country spends more money than it takes in) based on the reasoning that debt is bad for household

budgets. Such an argument depends on an analogy that compares the national budget to a

household budget. The two items being compared may be referred to as analogs (or analogues,

depending on where you live) but are referred to technically as cases. Of the two analogs, one should be

well known, with a body of knowledge behind it, and so is referred to as the familiar case; the second

analog, about which much less is known, is called the unfamiliar case.

The basic structure of an argument from analogy is as follows:

B is similar to A.

A has feature F.

Therefore, B probably also has feature F.

Here, A is the familiar case and B is the unfamiliar case. We make an inference about B based on its

similarity to the more familiar A.

Analogical reasoning proceeds from this premise: Since the analogs are similar either in many ways or in

some very important ways, they are likely to be similar in other ways as well. If there are many

similarities, or if the similarities are significant, then the analogy can be strong. If the analogs are

different in many ways, or if the differences are important, then it is a weak analogy. Conclusions arrived

at through strong analogies are fairly reliable; conclusions reached through weak analogies are less

reliable and often fallacious (the fallacy is called false analogy). Therefore, when confronted with an

analogy (“A is like B”), the first question to be asked is this: Are the two analogs very similar in ways that

are relevant to the current discussion, or are they different in relevant ways?

Analogies occur in both arguments and explanations. As we saw in Chapter 2, arguments and

explanations are not the same thing. The key difference is whether the analogy is being used to give

evidence that a certain claim is true—an argument—or to give a better understanding of how or why a

claim is true—an explanation. In explanations, the analogy aims to provide deeper understanding of the

issue. In arguments, the analogy aims to provide reasons for believing a conclusion. The next section

provides some tips for evaluating the strength of such arguments.

Evaluating Arguments From Analogy

Again, the strength of the argument depends on just how much A is like B, and the degree to which the

similarities between A and B are relevant to F. Let us consider an example. Suppose that you are in the

market for a new car, and your primary concern is that the car be reliable. You have the opportunity to

buy a Nissan. One of your friends owns a Nissan. Since you want to buy a reliable car, you ask this friend

how reliable her car is. In this case you are depending on an analogy between your friend’s car and the

car you are looking to buy. Suppose your friend says that her car is reliable. You can now make the

following argument:

The car I’m looking at is like my friend’s car.

My friend’s car is reliable.

Therefore, the car I’m looking at will be reliable.

How strong is this argument? That depends on how similar the two cases are. If the only thing the cars

have in common is the brand, then the argument is fairly weak. On the other hand, if the cars are the

same model and year, with all the same options and a similar driving history, then the argument is

stronger. We can list the similarities in a chart (see Table 5.5). Initially, the analogy is based only on the

make of the car. We will call the car you are looking at A and your friend’s car B.

Table 5.5: Comparing cars by make

Car Make Reliable?

B Nissan Yes

A Nissan ?

The make of a car is relevant to its reliability, but the argument is weak because that is the only

similarity we know about. To strengthen the argument, we can note further relevant similarities. For

example, if you find out that your friend’s car is the same model and year, then the argument is

strengthened (see Table 5.6).

Table 5.6: Comparing cars by make, model, and year

Car Make Model Year Reliable?

B Nissan Sentra 2000 Yes

A Nissan Sentra 2000 ?

The more relevant similarities there are between the two cars, the stronger the argument. However, the

word relevant is critical here. Finding out that the two cars have the same engine and similar driving

histories is relevant and will strengthen the argument. Finding out that both cars are the same color and

have license plates beginning with the same letter will not strengthen the argument. Thus, arguments

from analogy typically require that we already have some idea of which features are relevant to the

feature we are interested in. If you really had no idea at all what made some cars reliable and others not

reliable, then you would have no way to evaluate the strength of an argument from analogy about

reliability.

Another way we can strengthen an argument from analogy is by increasing the number of analogs. If you

have two more friends who also own a car of the same make, model, and year, and if those cars are

reliable, then you can be more confident that your new car will be reliable. Table 5.7 shows what the

chart would look like. The more analogs you have that match the car you are looking at, the more

confidence you can have that the car you’re looking at will be reliable.

Table 5.7: Comparing multiple analogs

Car Make Model Year Reliable?

B Nissan Sentra 2000 Yes

C Nissan Sentra 2000 Yes

D Nissan Sentra 2000 Yes

A Nissan Sentra 2000 ?

In general, then, analogical arguments are stronger when they have more analogous cases with more

relevant similarities. They are weaker when there are significant differences between the familiar cases

and the unfamiliar case. If you discover a significant difference between the car you are looking at and

the analogs, that reduces the strength of the argument. If, for example, you find that all your friends’ cars

have manual transmission, whereas the one you are looking at has an automatic transmission, this

counts against the strength of the analogy and hence against the strength of the argument.

Another way that an argument from analogy can be weakened is if there are cases that are similar but do

not have the feature in question. Suppose you find a fourth friend who has the same model and year of

car but whose car has been unreliable. As a result, you should have less confidence that the car you are

looking at is reliable.
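As a minimal sketch (our own illustration, not a method given in the chapter), this weighing of analogs can be made explicit: each analog that matches the unfamiliar case on the relevant features and has the target feature adds support, while a matching analog that lacks it counts against.

# Relevant features of the car being considered (the unfamiliar case).
target = {"make": "Nissan", "model": "Sentra", "year": 2000}

# The familiar cases: friends' cars and whether each proved reliable.
analogs = [
    ({"make": "Nissan", "model": "Sentra", "year": 2000}, True),
    ({"make": "Nissan", "model": "Sentra", "year": 2000}, True),
    ({"make": "Nissan", "model": "Sentra", "year": 2000}, True),
    ({"make": "Nissan", "model": "Sentra", "year": 2000}, False),  # the fourth friend
]

# Matching analogs with the feature support the conclusion; matches without it weaken it.
support = sum(1 if reliable else -1
              for features, reliable in analogs
              if features == target)
print(support)  # 2 -- the analogy still favors reliability, but less strongly

A real evaluation would also weight how relevant each similarity is, which a simple tally like this ignores.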

Here are a couple more examples, with questions about how to gauge the strength.

“Except for size, chickens and turkeys are very similar birds. Therefore, if a food is good for

chickens, it is probably good for turkeys.”

Relevant questions include how similar chickens and turkeys are, whether there are significant

differences, and whether the difference in size is enough to allow turkeys to eat things that would be too

big for chickens.

“Seattle’s climate is similar in many ways to the United Kingdom’s. Therefore, this plant is

likely to grow well in Seattle, because it grows well in the United Kingdom.”

Just how similar is the climate between the two places? Is the total amount of rain about the same? How

about the total amount of sun? Are the low and high temperatures comparable? Are there soil

differences that would matter?

“I am sure that my favorite team will win the bowl game next week; they have won every game

so far this season.”

This example might seem strong at first, but it hides a very relevant difference: In a bowl game, college

football teams are usually matched up with an opponent of approximately equal strength. It is therefore

likely that the team being played will be much better than the other teams played so far this season. This

difference weakens the analogy in a relevant way, so the argument is much weaker than it may at first

appear. It is essential when studying the strength of analogical arguments to be thorough in our search

for relevant similarities and differences.

Analogies in Moral Reasoning

Analogical reasoning is often used in moral reasoning and moral arguments. Examples of analogical

reasoning are found in ethical or legal debates over contentious or controversial issues such as abortion,

gun control, and medical practices of all sorts (including vaccinations and transplants). Legal arguments

are often based on finding precedents—analogous cases that have already been decided. Recent

arguments presented in the debate over gun control have drawn conclusions based on analogies that

compare the United States with other countries, including Switzerland and Japan. Whether these and

similar arguments are strong enough to establish their conclusions depends on just how similar the

cases are and the degree and number of dissimilarities and contrary cases. Being aware of similar cases

that have already occurred or that are occurring in other areas can vastly improve one’s wisdom about how best to address the topic at hand.

Jupiterimages/BananaStock/Thinkstock
Retailers such as bookstores commonly use arguments from analogy when they suggest purchases based on their similarity to other items.

The importance of analogies in moral reasoning is sometimes captured in the principle of equal

treatment—that if two things are analogous in all morally relevant respects, then what is right (or

wrong) to do in one case will be right (or wrong) to do in the other case as well. For example, if it is right

for a teacher to fail a student for missing the final exam, then another student who does the same thing

should also be failed. Whether the teacher happens to like one student more than the other should not

make a difference, because that is not a morally relevant difference when it comes to grading.

The reasoning could look as follows:

Things that are similar in all morally relevant respects should be treated the same.

Student A was failed for missing the final exam.

Student B also missed the final exam.

Therefore, student B should be failed as well.

It follows from the principle of equal treatment that if two things should be treated differently, then

there must be a morally relevant difference between them to justify this different treatment. An example

of the application of this principle might be in the interrogation of prisoners of war. If one country wants

to subject prisoners of war to certain kinds of harsh treatments but objects to its own prisoners being

treated the same way by other countries, then there need to be relevant differences between the

situations that justify the different treatment. Otherwise, the country is open to the charge of moral

inconsistency.

This principle, or something like it, comes up in many other types of moral debates, such as about

abortion and animal ethics. Animal rights advocates, for example, say that if we object to people harming

cats and dogs, then we are morally inconsistent to accept the same treatment of cows, pigs, and

chickens. One then has to address the question of whether there are differences in the beings or in their

use for food that justify the differences in moral consideration we give to each.

Other Uses of Analogies

Analogies are the basis for parables, allegories, and forms of

writing that try to give a moral. The phrase “The moral of

the story is . . .” may be featured at the end of such stories, or

the author may simply imply that there is a lesson to be

learned from the story. Aesop’s Fables are one well­known

example of analogy used in writing. Consider the fable of the

ant and the grasshopper, which compares the hardworking,

industrious ant with the footloose and fancy-free

grasshopper. The ant gathers and stores food all summer to

prepare for winter; the grasshopper fiddles around and

plays all summer, giving no thought for tomorrow. When

winter comes, the ant lives warm and comfortable while the

grasshopper starves, freezes, and dies. The fable argues that

we should be like the ant if we want to survive harsh times.

The ant and grasshopper are analogs for industrious people

and lazy people. How strong is the argument? Clearly, ants


and grasshoppers are quite different from people. Are the

differences relevant to the conclusion? What are the

relevant similarities? These are the questions that must be

addressed to get an idea of whether the argument is strong or weak.

Practice Problems 5.3

Determine whether the following arguments are inductive or deductive. Click here

(https://ne.edgecastcdn.net/0004BA/constellation/PDFs/PHI103_2e/Answers_PracticeProblems5.3.pdf)

to check your answers.

1. All voters are residents of California. But some residents of California are Republican.

Therefore, some voters are Republican.

a. deductive

b. inductive

2. All doctors are people who are committed to enhancing the health of their patients. No

people who purposely harm others can consider themselves to be doctors. Therefore,

some people who harm others do not enhance the health of their patients.

a. deductive

b. inductive

3. Guns are necessary. Guns protect people. They give people confidence that they can

defend themselves. Guns also ensure that the government will not be able to take over its

citizenry.

a. deductive

b. inductive

4. Every time I turn on the radio, all I hear is vulgar language about sex, violence, and drugs.

Whether it’s rock and roll or rap, it’s all the same. The trend toward vulgarity has to

change. If it doesn’t, younger children will begin speaking in these ways and this will

spoil their innocence.

a. deductive

b. inductive

5. Letting your kids play around on the Internet all day is like dropping them off in

downtown Chicago to spend the day by themselves. They will find something that gets

them into trouble.

a. deductive

b. inductive

6. Many people today claim that men and women are basically the same. Although I believe

that men and women are equally capable of completing the same tasks physically as well

as mentally, to say that they are intrinsically the same detracts from the differences

between men and women that are displayed every day in their social interactions, the

way they use their resources, and the way in which they find themselves in the world.

a. deductive

b. inductive

7. Too many intravenous drug users continue to risk their lives by sharing dirty needles.

This situation could be changed if we were to supply drug addicts with a way to get clean

needles. This would lower the rate of AIDS in this high­risk population as well as allow

for the opportunity to educate and attempt to aid those who are addicted to heroin and

other intravenous drugs.

a. deductive

b. inductive

8. I know that Stephen has a lot of money. His parents drive a Mercedes. His dogs wear

cashmere sweaters, and he paid cash for his Hummer.

a. deductive

b. inductive

9. Dogs are better than cats, since they always listen to what their masters say. They also

are more fun and energetic.

a. deductive

b. inductive

10. All dogs are warm-blooded. All warm-blooded creatures are mammals. Hence, all dogs

are mammals.

a. deductive

b. inductive

11. Chances are that I will not be able to get in to see Slipknot since it is an over-21 show, and

Jeffrey, James, and Sloan were all carded when they tried to get in to the club.

a. deductive

b. inductive

12. This is not the best of all possible worlds, because the best of all possible worlds would

not contain suffering, and this world contains much suffering.

a. deductive

b. inductive

13. Some apples are not bananas. Some bananas are things that are yellow. Therefore, some

things that are yellow are not apples.

a. deductive

b. inductive

14. Since all philosophers are seekers of truth, it follows that no evil human is a seeker after

truth, since no philosophers are evil humans.

a. deductive

b. inductive

15. All squares are triangles, and all triangles are rectangles. Therefore, all squares are

rectangles.

a. deductive

b. inductive

16. Deciduous trees are trees that shed their leaves. Maple trees are deciduous trees.

Therefore, maple trees will shed their leaves at some point during the growing season.

a. deductive

b. inductive

17. Joe must make a lot of money teaching philosophy, since most philosophy professors are

rich.

a. deductive

b. inductive

18. Since all mammals are cold-blooded, and all cold-blooded creatures are aquatic, all

mammals must be aquatic.

a. deductive

b. inductive

19. I felt fine until I missed lunch. I must be feeling tired because I don’t have anything in my

stomach.

a. deductive

b. inductive

20. If you drive too fast, you will get into an accident. If you get into an accident, your

insurance premiums will increase. Therefore, if you drive too fast, your insurance

premiums will increase.

a. deductive

b. inductive

21. The economy continues to descend into chaos. The stock market still moves down after it

makes progress forward, and unemployment still hovers around 10%. It is going to be a

while before things get better in the United States.

a. deductive

b. inductive

22. Football is the best sport. The athletes are amazing, and it is extremely complex.

a. deductive

b. inductive

23. We should go to see Avatar tonight. I hear that it has amazing special effects.

a. deductive

b. inductive

24. Pigs are smarter than dogs. It’s easier to train them.

a. deductive

b. inductive

25. Seventy percent of the students at this university come from upper-class families. The

school budget has taken a hit since the economic downturn. We need funding for the

three new buildings on campus. I think it’s time for us to start a phone campaign to raise

funds so that we don’t plunge into bankruptcy.

a. deductive

b. inductive

26. Justin was working at IBM. The last person we got from IBM was a horrible worker. I

don’t think that it’s a good idea for us to go with Justin for this job.

a. deductive

b. inductive

27. If she wanted me to buy her a drink, she would’ve looked over at me. But she never

looked over at me. So that means that she doesn’t want me to buy her a drink.

a. deductive

b. inductive

28. Almost all the people I know who are translators have their translator’s license from the

ATA. Carla is a translator. Therefore, she must have a license from the ATA.

a. deductive

b. inductive

29. The economy will not recover anytime soon. Big businesses are struggling to keep their

profits high. This is due to the fact that consumers no longer have enough money to

purchase things that are luxuries. Most of them buy only those things that they need and

don’t have much left over. Those same businesses have been firing employees left and

right. If America’s largest businesses are losing employees, then there won’t be any jobs

for the people who are already unemployed. That means that these people will not have

money to pump back into the system, and the circle will continue to descend into

recession.

a. deductive

b. inductive

Determine which of the following forms of inductive reasoning are taking place.

30. The purpose of the ancient towers that were discovered in Italy is unknown. However,

similar towers were discovered in Albania, and historical accounts in that country

indicate that the towers were used to store grain. Therefore, the towers in Italy were

probably used for the same purpose.

a. argument from analogy

b. statistical syllogism

c. inductive generalization

d. causal argument

31. After the current presidential administration passes a bill that increases the amount of

time people can be on unemployment, the unemployment rate in the country increases.

Economists studying the bill claim that there is a direct relation between the bill and the

unemployment rate.

a. argument from analogy

b. statistical syllogism

c. inductive generalization

d. causal argument

32. When studying a group of electricians, it was found that 60% of them did not have

knowledge of the new safety laws governing working on power lines. Therefore, 60% of

the electricians in the United States probably do not have knowledge of the new laws.

a. argument from analogy

b. statistical syllogism

c. inductive generalization

d. causal argument

33. In the state of California, studies found that violent criminals who were released on

parole had a 68% chance of committing another violent crime. Therefore, a majority of

violent criminals in society are likely to commit more violent crimes if they are released

from prison.

a. argument from analogy

b. statistical syllogism

c. inductive generalization

d. causal argument

34. Psilocybin mushrooms cause hallucinations in humans who ingest them. A new species of

mushroom shares similar visual characteristics to many forms of psilocybin mushrooms.

Therefore, it is likely that this form of mushroom has compounds that have neurological

effects.

a. argument from analogy

b. statistical syllogism

c. inductive generalization

d. causal argument

35. A recent survey at work indicates that 60% of the employees believe that they do not

make enough money for the work that they do. It is likely that a majority of the people

that work for this company are unhappy in their jobs.

a. argument from analogy

b. statistical syllogism

c. inductive generalization

d. causal argument

36. A family is committed to buying Hondas because every Honda they have owned has had

few problems and been very reliable. They believe that all Hondas must be reliable.

a. argument from analogy

b. statistical syllogism

c. inductive generalization

d. causal argument

Summary and Resources

Chapter Summary

The key feature of inductive arguments is that the support they provide for a conclusion is always less

than perfect. Even if all the premises of an inductive argument are true, there is at least some possibility

that the conclusion may be false. Of course, when an inductive argument is very strong, the evidence for

the conclusion may still be overwhelming. Even our best scientific theories are supported by inductive

arguments.

This chapter has looked at four broad types of inductive arguments: statistical arguments, causal

arguments, arguments from authority, and arguments from analogy. We have seen that each type can be

quite strong, very weak, or anywhere in between. The key to success in evaluating their strength is to be

able to (a) identify the type of argument being used, (b) know the criteria by which to evaluate its

strength, and (c) notice the strengths and weaknesses of the specific argument in question within the

context that it is given. If we can perform all of these tasks well, then we should be good evaluators of

inductive reasoning.

Critical Thinking Questions

1. What are some ways that you can now protect yourself from making hasty generalizations

through inductive reasoning?

2. Can you think of an example that relates to each one of Mill’s methods of determining causation?

What are they, and how did you determine that it fit with Mill’s methods?

3. Think of a time when you reasoned improperly about correlation and causation. Have you seen

anyone in the news or in your place of employment fall into improper analysis of causation?

What did they do, and what errors did they make?

4. Learning how to evaluate arguments is a great way to empower the mind. What are three forms

of empowerment that result when people understand how to identify and evaluate arguments?

5. Why do you believe that superstitions are so prevalent in many societies? What forms of

illogical reasoning lead to belief in superstitions? Are there any superstitions that you believe

are true? What evidence do you have that supports your claims?

6. Think of an example of a strong inductive argument, then think of a premise that you can add

that significantly weakens the argument. Now think of a new premise that you can add that

strengthens it again. Now find one that makes it weaker, and so on. Repeat this process several

times to notice how the strength of inductive arguments can change with new premises.

Web Resources

http://austhink.com/critical/pages/stats_prob.html

This website offers a number of resources and essays designed to help you learn more about statistics

and probability.

http://www.nss.gov.au/nss/home.nsf/pages/Sample+size+calculator

The Australian government hosts a sample size calculator that allows users to approximate how large a

sample they need.

http://www.gutenberg.org/ebooks/27942

Read John Stuart Mill’s A System of Logic, which is where Mill first introduces his methods for identifying

causality.

Key Terms

appeal to authority

See argument from authority.

argument from analogy

Reasoning in which we draw a conclusion about something based on characteristics of other similar

things.

argument from authority

An argument in which we infer that something is true because someone (a purported authority) said

that it was true.

causal argument

An argument about causes and effects.

cogent

An inductive argument that is strong and has all true premises.

confidence level

In an inductive generalization, the likelihood that a random sample from a population will have

results that fall within the estimated margin of error.

correlation

An association between two factors that occur together frequently or that vary in relation to each

other.

inductive arguments

Arguments in which the premises increase the likelihood of the conclusion being true but do not

guarantee that it is.

inductive generalization

An argument in which one draws a conclusion about a whole population based on results from a

sample population.

joint method of agreement and difference

A way of selecting causal candidates by looking for a factor that is present in all cases in which the

effect occurs and absent in all cases in which it does not.

margin of error

A range of values above and below the estimated value in which it is predicted that the actual result

will fall.

method of agreement

A way of selecting causal candidates by looking for a factor that is present in all cases in which the

effect occurs.

method of concomitant variation

A way of selecting causal candidates by looking for a factor that is highly correlated with the effect in

question.

method of difference

A way of selecting causal candidates by looking for a factor that is present when the effect occurs and

absent when it does not.

necessary condition

A condition for an event without which the event will not occur; A is a necessary condition of B if A

occurs whenever B does.

population

In an inductive generalization, the whole group about which the generalization is made; it is the group

discussed in the conclusion.

proximate cause

See trigger cause.

random sample

A group selected from within the whole population using a selection method such that every member

of the population has an equal chance of being included.

sample

A smaller group selected from among the population.

sample size

The number of individuals within the sample.

statistical arguments

Arguments involving statistics, either in the premises or in the conclusion.

statistical syllogism

An argument of the form X% of S are P; i is an S; Therefore, i is (probably) a P.

strong arguments

Inductive arguments in which the premises greatly increase the likelihood that the conclusion is true.

sufficient condition

A condition for an event that guarantees that the event will occur; A is a sufficient condition of B if B

occurs whenever A does.

trigger cause

The factor that completes the causal chain resulting in the effect. Also known as proximate cause.

weak arguments

Inductive arguments in which the premises only minimally increase the likelihood that the conclusion

is true.
