Timothy Williamson's talk at the Workshop will be loosely based on the following paper:

Vagueness, Indeterminacy and Social Meaning

Timothy Williamson

University of Oxford

Timothy.Williamson@philosophy.oxford.ac.uk

This paper constitutes an informal introduction to a cluster of problems in the philosophy of logic and language associated with the three elements of its title. In the course of it I will dispute the widespread view that the principles underlying the calculi of symbolic logic pioneered by Frege, Russell and others are strictly applicable only to the kind of perfectly precise artificial language to which some mathematicians aspire. I will argue that they apply equally well to the massively vague language inherent in social life.

**1. Vagueness**

In philosophy, the term
`vagueness' is a label for the phenomenon of *borderline cases*. An expression or concept is vague if and only if
it has borderline cases, that is, actual or potential cases in which it neither
clearly applies nor clearly fails to apply. For example, a borderline case for
the term `tall' is someone who is neither clearly tall nor clearly not tall.
Even when one can see the person in question without difficulty, one cannot decide
whether the term `tall' applies - or perhaps one decides it one way while other
speakers equally familiar with English and with an equally good view of the
person decide it the other way. There seems to be no room for a scientific
investigation to resolve the question. However carefully we measure the
person's height, and even the height of other people in the relevant comparison
class, the matter remains undecided. The more prone a term is to such
borderline cases, the vaguer it is. To the extent to which a term is not vague,
it is precise.

Vagueness
must be distinguished from several phenomena with which it is sometimes
confused: in particular, from *unspecificity*,
*ambiguity* and *context-dependence*.

A
term is unspecific if it applies to a variety of cases. For example, if you ask
me what proportion of the conference participants were women and I reply `It
was not between 49% and 51%', my answer is highly unspecific - it leaves open
too many diverse possibilities (for example, 1% and 99%) to give you much
information about the conference - but that by itself does not make it at all
vague. It may in fact be slightly vague - for example, it might be unclear
whether to count some people as participants - but that is not a consequence of
the unspecificity. A variety of cases in which a term clearly applies does not
amount to even one case in which it neither clearly applies nor clearly fails
to apply. If I had said `Roughly 40%', my answer would have been vaguer but more
specific; I would have given you useful information about the conference, but it
would not be clear whether my answer applied to the case in which the exact proportion
of women was 35%. The difference between vagueness and unspecificity emerges
vividly when one negates a term. The negation of a vague term is just as vague
as the original term, since it has the same borderline cases. By contrast, the
less specific a term is, the *more*
specific its negation is. For example, the negation of the highly unspecific
answer `Not between 49% and 51%' is `Between 49% and 51%', which is highly
specific, although of course not perfectly specific.

A term is ambiguous if it has more than one meaning. A term can be vague but unambiguous if it has a single vague meaning. Equally, a term might in principle be ambiguous between several precise meanings. Probably the best way to think of ambiguity is as a case in which several linguistic expressions can be realized in physically the same form. For example, we count `bank' in `financial bank' and `bank' in `river bank' as distinct words. Vagueness and precision are properties of the disambiguated linguistic expression itself.

A term is context-dependent if its application depends on the context in which it is used. In practice, vague terms tend to be context-dependent and context-dependent terms tend to be vague, but theoretically the two phenomena are quite distinct. For example, the reference of the word `me' depends on who is speaking, but that does not constitute any vagueness, since it does not in itself imply any unclarity in the application of the term in any given context (which is not to deny that such unclarity can arise). Equally, a vague term could in principle be context-independent. For example, if a term had in every context the application which the term `tall' has in this present context, then that term would be context-independent but still vague, because `tall' as used in this present context has borderline cases.

**2. The Sorites Paradox**

Vagueness is responsible for one
of the great philosophical paradoxes, the *sorites
paradox*. It was known already to the classical Greeks. Imagine a heap of
sand, and suppose that one grain is subtracted with minimal collateral damage.
Is what is left still a heap? If one dissents, one is committed to the idea
that a single grain can make the difference between a heap and a non-heap. But
`heap' is a vague word; when one learns it, one does not learn a precise
cut-off point for its application. Our vague use of `heap' seems to validate a *tolerance principle* to the effect that
if *n* grains can make a heap then *n*‑1 grains can make a heap. Thus
if we start with the obvious fact that 10,000 grains can make a heap and we
apply the tolerance principle, we deduce that 9,999 grains can make a heap,
from which we deduce that 9,998 grains can make a heap, and eventually that one
grain can make a heap, and that 0 grains can, which is absurd.
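The mechanical character of the deduction can be illustrated in a short sketch (the function and variable names are mine, introduced only for illustration): repeated modus ponens on instances of the tolerance principle carries the premise about 10,000 grains down to the absurd conclusion about 0 grains.

```python
def sorites(start=10_000):
    """Run the sorites deduction from `start` grains down to zero."""
    can_make_heap = {start}  # premise: 10,000 grains can make a heap
    n = start
    while n > 0:
        # tolerance principle instance: if n grains can make a heap,
        # then n - 1 grains can make a heap; apply modus ponens
        if n in can_make_heap:
            can_make_heap.add(n - 1)
        n -= 1
    return 0 in can_make_heap

print(sorites())  # True: the deduction "proves" that 0 grains can make a heap
```

Each pass through the loop is one application of modus ponens; nothing in the reasoning is exotic, which is what makes the paradox pressing.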

With
a little thought one can construct a similar paradox for any other vague term.
Consequently, one can construct a sorites paradox for just about any term as it
is used in practice, for language as it is used in practice is just about
always vague. In some legal and scientific contexts we can use stipulations to
make our language much more precise than it usually is, but it never becomes
perfectly precise, not least because those stipulations must themselves be made
in language that is not already itself perfectly precise. For practical
purposes, it is sufficient if one resolves those borderline cases that actually
arise; it does not matter that *recherché*
hypothetical borderline cases remain. But even a small amount of such residual
vagueness will enable one to construct a sorites paradox, perhaps by making the
individual steps smaller. For example, one can dismantle the heap molecule by
molecule rather than grain by grain.

Differences between speakers in their standards for the application of terms are one source of vagueness, and therefore of sorites paradoxes, but they are not the only source. Speakers do not fix even private cut-off points for their terms, and could not do so with perfect precision even if they tried, for the reason just noted. And most speakers have no reason to try. Of course a given speaker will cease to apply the term at some point or other, but that point will not be constant even for a single speaker. Moreover, there will typically be a border zone for each speaker in which they hesitate, shrug their shoulders or otherwise refuse to give straight answers to the question.

Sorites
paradoxes are a threat to the application of formal logic to language as it is
used in practice. By very elementary logic, one can deduce from the premise
`10,000 grains can make a heap' and instances of the tolerance principle the
conclusion `0 grains can make a heap'. One need only make repeated use of the
standard rule of *modus ponens*, which
allows one to infer from premises of the form `A' and `If A then B' the
conclusion `B'. *Modus ponens* is
usually regarded as the most basic rule governing the use of conditional
statements. To reject it is to challenge standard systems of logic in a
fundamental way. Alternatively, one might argue that the tolerance principle
has a false instance. But then the question arises: for which number *n* does the tolerance principle fail? To
answer that question is to specify a cut-off point, which is just what we are
not in a position to do. Yet another idea is to deny that all instances of the
tolerance principle are true while also denying that it has any false instance
in particular: but that combination constitutes another radical departure from
the classical position.

Because
the sorites paradox threatens central principles of logic, it was studied by
the great Stoic logician Chrysippus. The beginning of modern formal logic is
usually associated with the publication in 1879 of Gottlob Frege's *Begriffsschrift*, subtitled `a formula
language, modeled upon that of arithmetic, for pure thought'. In it Frege
specifically mentions the sorites paradox as a threat to the interpretation of
his theorems (§27) and rules that sentences with vague terms such as `heap' do
not express judgements in the sense in which logic is concerned with
judgements. For Frege, the principles of logic properly apply only to thought
purified of all vagueness, the kind of thought to which science and in
particular mathematics aspires. If we reason in language that is not perfectly
precise, incoherence is an occupational hazard. In a paper published in 1923,
Bertrand Russell initiated the study of vagueness as a philosophical topic in
its own right, rather than simply as something to be cleared out of the way
before serious work could begin. But Russell too thought that the principles of
his logic were perfectly correct only as applied to perfectly precise language.
Moreover, he held that we can never attain perfect precision; we can only
imagine it. Thus we can only imagine speaking a language for which the
principles of his logic would be correct.

Russell and Frege leave us in an almost intolerable position, in which language and thought as they actually occur are a realm of anarchy over which the principles of logic have no authority. Surely we need principles to distinguish good from bad reasoning in practice. If their logic is not correct for vague language, should we not look for another logic that is? Since Frege and Russell, a major research effort has therefore gone into the construction of non-classical logics that are supposed to be appropriate for vague languages. Nevertheless, in both logic and mathematics the dominant formal systems in practice are still classical ones, like those of Frege and Russell; they embody the very assumptions that vagueness is supposed to make problematic.

**3. Fuzzy Logic**

The intended interpretation of classical logic relies on the principle of bivalence: every formula expresses either a truth or a falsehood (and not both). Bivalence is assumed by the standard explanation of the meanings of logical connectives such as `not', `if', `and' and `or' by means of truth-tables which specify the truth-value (truth or falsity) assigned to a complex formula consisting of such a connective applied to simpler formulas in terms of the truth-values assigned to those simpler formulas. For example, we explain `not' by saying that if `A' expresses a truth then `Not A' expresses a falsehood and if `A' expresses a falsehood then `Not A' expresses a truth. If the formula `A' expressed neither a truth nor a falsehood, any such explanation would be incomplete; the truth-table would not say whether `Not A' expressed a truth, a falsehood or neither. The core of classical logic is the system known as the propositional calculus, and the principles of classical logic are usually justified by reference to the truth-tables. The sorites paradox can be represented by reasoning within the propositional calculus, and therefore strikes at the core of classical logic. The truth-tables are also central to modern formal semantics in its standard truth-conditional form; the paradox is therefore a threat to such referential semantic theories too.
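The two-valued truth-tables just described can be sketched as Boolean functions; this is only an illustration of the classical scheme, with function names of my own choosing.

```python
# Classical truth-tables as two-valued functions. Bivalence is built in:
# every formula is mapped to exactly one of True or False.

def NOT(a):
    return not a

def AND(a, b):
    return a and b

def OR(a, b):
    return a or b

def IF(a, b):
    # material conditional: `If A then B' is false only when A is true and B false
    return (not a) or b

# The table for `not': truth maps to falsehood, falsehood to truth.
for a in (True, False):
    print(a, NOT(a))
```

If a formula could express neither a truth nor a falsehood, these functions would be undefined on it; that is the incompleteness in the explanation which the text describes.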

One
natural reaction to the sorites paradox is to propose that the dichotomy of
truth and falsity should be replaced by a continuum of degrees of truth. The
idea is that the statement that there is a heap is perfectly true when there
are 10,000 grains and perfectly false when there are 0 grains, but that in
between its degree of truth gradually declines as grain after grain is
subtracted, rather than dropping from all to nothing at some mysterious point.
Degrees of truth are in some ways like probabilities. They are often thought of
as running from 1 (perfect truth) to 0 (perfect falsity). If the probability of
`A' is *x* then the probability of `Not
A' is 1‑*x*; similarly, if the
degree of truth of `A' is *x* then the
degree of truth of `Not A' is 1‑*x*.
If the degree of truth of `A' is ½ then the degree of truth of `Not A' is also
½: thus `A' is half-true; its assertion and its denial are equally correct, so
`A' is a perfect borderline case. Vagueness is analysed as a phenomenon by
which statements take intermediate degrees of truth. There is a crucial
difference between probabilities and degrees of truth. Probability is usually
understood as a measure of ignorance. If you have tossed a coin and I am trying
to guess the result, I assign a probability of ½ to the statement that it came
up heads because, from my perspective, that statement and its negation are
equally likely; but I assume that one or other of them is in fact true. I
simply do not know which. By contrast, the degree theorist holds that the
degree of truth is the bottom line in a borderline case; there is no underlying
truth or falsity of which one is ignorant.

One
approach based on degrees of truth is *fuzzy
logic*. Indeed, fuzzy logic has been so successful in its public relations
that in popular discussion the phrase is used almost synonymously with `logic
suitable for vague language'. In reality, fuzzy logic embodies one theory of
vagueness amongst many - it is not even the only approach to use degrees of
truth - and its technical defects make it quite unsuitable as a logic for vague
language.

The most distinctive feature of fuzzy logic is its attempt to generalize the classical truth-tables from two truth-values to a continuum of degrees of truth.

Let us consider the connective `and' as an example. The classical semanticist explains the meaning of `and' by specifying the truth-value of the conjunction `A and B' as a function of the truth-value of `A' and the truth-value of `B'. According to the classical truth-table, if `A' and `B' are both true then the conjunction `A and B' is true; if `A' is false or `B' is false then `A and B' is false. Analogously, the fuzzy logician attempts to explain the meaning of `and' by specifying the degree of truth of `A and B' as a function of the degree of truth of `A' and the degree of truth of `B'. This is a striking departure from the probabilistic model, for the probability of `A and B' is not a function of the probability of `A' and the probability of `B'. To see this, suppose that all you know about `A' and `B' is that they each have a probability of 0.5, and ask yourself what the probability of `A and B' is. Your data simply do not determine an answer to the question; the probability of `A and B' could be anywhere between 0 and 0.5. For example, if `A' and `B' both turn out to be `The coin came up heads', then the probability of `A and B' is also 0.5; but if `A' turns out to be `The coin came up heads' while `B' turns out to be `The coin did not come up heads' then the probability of `A and B' is 0. By contrast, fuzzy logicians assume that the degree of truth of `A and B' is determined solely by the degree of truth of `A' and the degree of truth of `B'. Typically, they postulate that the degree of truth of `A and B' is the minimum of the degree of truth of `A' and the degree of truth of `B'. That postulate agrees with the classical truth-table on the extremal values 0 and 1 and smoothly fills in the intervening gaps. But many anomalous consequences follow from any assumption the fuzzy logician makes about which function the degree of truth of a conjunction is of the degrees of truth of its conjuncts. 
For example, suppose that Fred and Ted are identical twins of exactly the same height. No matter what that height is, the following statement seems perfectly false: `Fred is tall and Ted is not tall'. For that statement to have a non-zero degree of truth, Fred would have to be at least slightly taller than Ted, which he is not. But now suppose that Fred is a perfect borderline case for `tall'; thus the degree of truth of `Fred is tall' is 0.5. Consequently, the degree of truth of `Ted is tall' is also 0.5. By the rule for negation, the degree of truth of `Ted is not tall' is 0.5 too. Thus the conjuncts of `Fred is tall and Ted is not tall' have the same degree of truth as the conjuncts of the repetitive conjunction `Fred is tall and Fred is tall', so the fuzzy logician is committed to assigning them the same degree of truth simply by the principle that their degrees of truth are determined by the degrees of truth of their conjuncts. But since mere repetition of a conjunct should make no difference to degree of truth, `Fred is tall and Fred is tall' should have the same degree of truth as the simple `Fred is tall', that is, 0.5. Therefore, the fuzzy logician has to assign a degree of truth of 0.5 to the conjunction `Fred is tall and Ted is not tall'. But that is entirely inappropriate, for, as already noted, that conjunction is not evenly poised between perfect truth and perfect falsity; it is at the very least far closer to the latter.
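The objection can be checked by direct computation. The following sketch implements the two fuzzy rules mentioned in the text, negation as 1 − x and conjunction as the minimum of the conjuncts' degrees of truth; the function names are mine.

```python
# Fuzzy rules: negation as 1 - x, conjunction as minimum.

def fnot(x):
    return 1 - x

def fand(x, y):
    return min(x, y)

fred_tall = 0.5  # Fred is a perfect borderline case for `tall'
ted_tall = 0.5   # Ted has exactly the same height, so the same degree

repetitive = fand(fred_tall, fred_tall)        # `Fred is tall and Fred is tall'
contradictory = fand(fred_tall, fnot(ted_tall))  # `Fred is tall and Ted is not tall'

print(repetitive, contradictory)  # 0.5 0.5
```

Both conjunctions come out at 0.5, although the second should be perfectly false; that is the anomaly, and it arises for any choice of conjunction function, since the inputs to the function are identical in the two cases.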

Degrees of truth are sometimes given a social interpretation. On this view, roughly speaking, the degree of truth of `A' is the probability that a competent speaker chosen at random would assent to `A', given a forced choice between assent and dissent. Such an interpretation merely reinforces the objection to fuzzy logic. For suppose that when competent speakers are forced to decide, 50% assent to `Fred is tall' and 50% assent to `Ted is not tall'. It certainly does not follow that the percentage assenting to `Fred is tall and Ted is not tall' will be even approximately the same as the percentage assenting to `Fred is tall and Fred is tall', as fuzzy logic on this interpretation would require. Under natural simplifying assumptions, 50% of competent speakers will assent to `Fred is tall and Fred is tall' and 0% will assent to `Fred is tall and Ted is not tall'. In practice the results would no doubt be messier than that, but not in ways helpful to fuzzy logic.
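The point can be illustrated with a toy model of the forced-choice survey. Each simulated speaker here has a private cut-off for `tall' and judges the identically tall twins alike; the specific numbers (a height of 180 cm, cut-offs uniform between 170 and 190 cm) are my illustrative assumptions, not the paper's.

```python
import random

random.seed(0)
height = 180.0  # Fred and Ted are exactly this tall
# each speaker's private cut-off for `tall', drawn uniformly
speakers = [random.uniform(170, 190) for _ in range(10_000)]

def assents(judge):
    """Fraction of speakers whose cut-off makes the sentence come out true."""
    return sum(judge(c) for c in speakers) / len(speakers)

p_fred_tall = assents(lambda c: height >= c)
p_repetitive = assents(lambda c: height >= c and height >= c)
p_contradictory = assents(lambda c: height >= c and not height >= c)

print(round(p_fred_tall, 2))           # roughly 0.5
print(p_repetitive == p_fred_tall)     # True: repeating a conjunct changes nothing
print(p_contradictory)                 # 0.0: no speaker assents to the conjunction
```

Since each speaker applies one cut-off to both twins, assent to `Fred is tall and Ted is not tall' is impossible, while assent to the repetitive conjunction matches assent to its conjunct exactly; the two conjunctions could hardly be further apart, contrary to what the fuzzy rules require.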

The social interpretation of degrees of truth is objectionable on other grounds. Competent speakers may be ignorant of relevant information. For example, they may not have seen Fred and Ted properly. They may be under various misapprehensions about average heights, and so on. When they became aware of further evidence, they might themselves insist that their earlier judgements had been mistaken. But even their judgements made in possession of complete statistics about height are likely to suffer from errors in the interpretation of the data. The social interpretation measures degrees of acceptance, not degrees of truth. In general they are not the same. For example, the existence of Holocaust-deniers gives the statement `The Holocaust never took place' a non-zero degree of acceptance but not a non-zero degree of truth; it would not become any less false if Holocaust-denial became more widespread. But distinguishing degrees of truth from degrees of acceptance will not save fuzzy logic, since its rules apply correctly neither to degrees of truth nor to degrees of acceptance.

**4. Indeterminacy and
Ignorance**

The failure of fuzzy logic does
not imply the failure of the project of replacing classical logic and semantics
by something supposedly better suited to vague language. There are other ways
of attempting to execute the project. I will concentrate here on the very
popular idea that vagueness involves a kind of *indeterminacy*. The idea is that in a borderline case it is simply
indeterminate whether the term applies; it neither determinately applies nor
determinately fails to apply. On this view, there is no fact of the matter,
nothing to be ignorant of. The question `Is there a heap?' sometimes has no
right answer. Indeed, vagueness is often regarded as providing the clearest and
least controversial examples of indeterminacy.

The
notion of indeterminacy must be distinguished from other notions with which it
is sometimes confused, in particular from *undecidability*
and *indeterminism*.

As a result of the incompleteness theorems of Kurt Gödel and others, we know that in most areas of mathematics every consistent formal system will leave some formulas undecided: they are neither provable nor refutable within the system. In that sense they are undecidable. But it does not follow that they are indeterminate in truth-value. It does not even follow that their truth-value cannot be known. In fact, the very formula of arithmetic that Gödel showed to be neither provable nor refutable in the formal system with which he was concerned (granted that the system is consistent, or, to be more accurate, has the property known as ω-consistency, which it certainly has) was also revealed to be true by the nature of his proof, as Gödel knew. It is not at all indeterminate in truth-value. Indeed, every formula of arithmetic is decidable once we add a certain clearly valid rule of inference (a rule known as the ω-rule); it is just that the result does not count as a formal system, because the rule requires infinitely many premises and formal procedures are by definition finite. No result of mathematical logic establishes any indeterminacy whatsoever.

Indeterminacy is also not a consequence of indeterminism, the thesis that the present does not causally determine the future: the thesis, in other words, that more than one state of the universe at a given time is consistent with the state of the universe at an earlier time and the laws of nature. For to say that it is not yet causally determined whether there will be a sea-battle tomorrow is quite consistent with saying that the question `Will there be a sea-battle tomorrow?' has a right answer of which we are ignorant. Quantum mechanics is often interpreted as postulating a kind of indeterminacy, but that does not follow merely from its indeterminism, and seems to be a structurally different phenomenon from the one with which we are concerned.

The idea of indeterminacy can be applied to the problem of vagueness in several technically different ways. For example, it may or may not be combined with a theory of degrees of truth; perhaps determinacy comes in degrees. I will focus on some general problems with the very idea of indeterminacy. Suppose, for example, that Fred is a borderline case for `tall'. On the indeterminacy view, the sentence `Fred is tall' is therefore indeterminate in truth-value. That is, it is neither determinately true nor determinately false; it is not determinately true and it is not determinately false. But what exactly does that claim mean? In particular, what is the word `determinately' adding?

The simplest suggestion is that the word `determinately' is present only for emphasis: for logical purposes, the indeterminacy claim is simply that `Fred is tall' does not express a truth and does not express a falsehood. On this view, the indeterminacy thesis is just the negation of the principle of bivalence, the foundation of classical logic and semantics, which can look so problematic in borderline cases. In other words, indeterminacy consists of truth-value gaps.

It is of course extremely plausible that some linguistic utterances neither express a truth nor express a falsehood simply because they express no proposition at all. A greeting such as `Hi!' is an obvious example. The question of truth or falsity does not even arise. But utterances of vague declarative sentences in borderline cases are not like that. For even if `Fred is tall' is borderline, one could in the same circumstances express a truth by saying `Fred could have been tall' (for example, if he had been fed differently as a child). But the possibility claim attributes possibility to a proposition, and is true only if there is a proposition for possibility to be correctly attributed to. Therefore, `Fred is tall' expresses a proposition - the proposition that Fred is tall. The question is whether a truth-value gap can arise when a sentence does express a proposition.

The fundamental logical principle governing the notion of truth for propositions is given by the minimalist schema `The proposition that P is true if and only if P'. Instances of the schema result from substituting the same declarative sentence for both occurrences of `P'. This is the very principle (already grasped by Aristotle) that prevents the notion of truth from becoming a free-floating metaphysical myth. To say that the proposition that the Holocaust occurred is true boils down to saying simply that the Holocaust occurred. Similarly, the fundamental principle governing the notion of falsity for propositions is the minimalist schema `The proposition that P is false if and only if not P'. For example, the proposition that it is raining is false if and only if it is not raining. Thus if Fred is tall then the proposition that Fred is tall is true and the sentence `Fred is tall' expresses a truth. If Fred is not tall then the proposition that Fred is tall is false and the sentence `Fred is tall' expresses a falsehood. But the indeterminacy thesis as we are currently interpreting it claims that the sentence does not express a truth and does not express a falsehood. Therefore, indeterminacy theorists must presumably deny that Fred is tall and deny that Fred is not tall, since they deny propositions equivalent to those. Now to deny that Fred is tall is to assert that Fred is not tall and to deny that Fred is not tall is to assert that Fred is not not tall. Thus indeterminacy theorists are by implication asserting that Fred is not tall and not not tall. But that is a contradiction, for `not not tall' contradicts `not tall'. I assume that an adequate account of vagueness must at least avoid self-contradiction. Therefore, something is wrong with the indeterminacy thesis when interpreted as the denial of bivalence.
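The argument of this paragraph can be compressed into a short derivation. The symbolization is mine, not the paper's: p abbreviates `Fred is tall', and T and F abbreviate `expresses a truth' and `expresses a falsehood'.

```latex
\begin{align*}
&T(p) \leftrightarrow p          && \text{minimalist schema for truth}\\
&F(p) \leftrightarrow \neg p     && \text{minimalist schema for falsity}\\
&\neg T(p) \wedge \neg F(p)      && \text{denial of bivalence for } p\\
&\neg p \wedge \neg\neg p        && \text{from the three lines above: a contradiction}
\end{align*}
```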

Naturally, there is far more to be said about the logical issues in play here. I will not attempt to describe the devices by which opponents of bivalence have attempted to make their position consistent. In outline, what they need to do is to make the proposition that `Fred is tall' expresses a truth more committing than the claim that Fred is tall, so that they can deny that `Fred is tall' expresses a truth without denying that Fred is tall. Alternatively, they might try to make the claim that `Fred is tall' expresses a falsehood more committing than the claim that Fred is not tall, so that they can deny that `Fred is tall' expresses a falsehood without denying that Fred is not tall. In attempting to carry out the task, opponents of bivalence try to pack more into the notions of truth and falsity than is given by the straightforward minimalist schemas mentioned above. For example, they invoke a notion of correspondence or failure of correspondence to the facts in a sense metaphysical enough not to reduce to those schemas. But we cannot clarify the phenomenon of vagueness by mystifying the notions of truth and falsity.

Alternatively,
the indeterminacy theorist may assert indeterminacy in some other sense without
denying bivalence. But if the claim that something is indeterminate in
truth-value is compatible with its being true or false, what does the claim
mean? There no longer seems to be any substance to the claim that there is no
fact of the matter, no right answer to the question. Rather, the idea of
indeterminacy collapses into the idea of *ignorance*:
in borderline cases there is a fact of the matter, a right answer to the
question, but we are not in a position to find out what it is. When a heap is
dismantled grain by grain, there is a point at which a grain is subtracted from
a heap and no heap remains, but we cannot identify that point. That is the
epistemic account of vagueness defended in my book *Vagueness* and by other writers too.

One can arrive at epistemicism simply by adhering to classical logic and semantics, with all its benefits of simplicity and strength, and noting its consequences. But one can also arrive at it by carefully working through the implications of the alternative theories of vagueness, and discovering that they make the phenomenon even more mysterious than epistemicism does - for example, by mystifying the notions of truth and falsity. Indeed, the ignorance which epistemicism postulates is explicable on the basis of principles about human knowledge and its limits - in particular, about its need of margins for error - which are motivated independently of vagueness. In this paper, there has been space only for some brief hints in those directions. In the rest of the paper I will consider two objections to epistemicism, one concerning the relation between meaning and use, the other concerning the possibility of communication.

**5. Meaning and Use**

A natural objection to the epistemic account of vagueness is that it mystifies the relation between the meaning of expressions in a language and the use which speakers make of those expressions in practice. In a borderline case, speakers are either agnostic about the application of a term or disagree about its application. Overall, their use of the term is symmetric between truth and falsity. For example, in a borderline case `Fred is tall' may attract equal amounts of assent and dissent. According to epistemicism, it is either true or false. `Tall' means something that is either true or false of Fred, given his height and the heights of people in the relevant comparison class. But then the meaning of `tall' seems to be floating free of competent speakers' practice in using it. How can that avoid making the notion of meaning utterly mysterious?

There are strong reasons to think that the meaning of an expression in a language is not determined solely by factors internal to the brains or bodies of speakers, but also depends on features of the external environment in which the language is used. If so, and use were confined to factors internal to the brains or bodies of speakers, then use would not determine meaning. In what follows I will take the notion of use more broadly to include such features of the external environment. I will also include in the use of an expression relevant features of the linguistic environment in which it is used. Furthermore, `use' will be taken in a dispositional sense, to include not just those uses which speakers actually make of their language but also the uses which they are disposed to make of it in counterfactual circumstances. The point of all that is to give a sense in which it is a plausible constraint on the meaning of an expression that it should be determined by the use of that expression. These extra features do not seem to break the symmetry in a borderline case. For example, nothing in our external environment establishes any obvious line between `tall' and `not tall'. There are many places where a line could be drawn, none more natural than all the rest.

What
is meant by the statement `Use determines meaning', when use is understood as
above? The statement can be construed as making the modest claim that meaning
supervenes on use, in the sense that there cannot be a difference in meaning
between expressions without any difference in use. More formally, the
supervenience claim is that for any two possible situations *s*₁ and *s*₂
and expressions *e*₁ and *e*₂, if *e*₁ has the same use in *s*₁ as *e*₂
has in *s*₂, then *e*₁ has the same meaning in *s*₁ as *e*₂ has in
*s*₂. Of course, the exact significance of this claim will
depend on how the notions of meaning and use are elaborated. They are being
used here in a fairly schematic way. For the crucial point does not depend on
exactly how the schemas are filled in. What matters is that epistemicism about
vagueness is entirely consistent with the supervenience of meaning on use as
just defined. The epistemicist holds that `Fred is tall' expresses a truth or
falsehood in a borderline case, but it in no way follows (as a violation of
supervenience would imply) that there is another possible situation in which
the words have the same use and the relevant heights are the same but the
truth-value is reversed (which would require the meaning to be different).

The epistemicist can consistently maintain that meaning is a function of use. What the epistemicist should deny is that meaning is a *transparent* function of use - that is, a function that enables us simply to deduce the cut-off point for a vague term from some canonical description of the use. For we have no idea how to make any such deduction. But we have grounds independent of vagueness for denying that meaning is a transparent function of use. For example, even if every competent speaker assents to a sentence, it does not follow that the sentence expresses a truth. If Holocaust denial became universal, and everyone assented to `The Holocaust did not occur', they would not thereby be speaking the truth. Even if we were to deny epistemicism about vagueness, we would still have no defensible theory that would enable us to deduce meaning from use in the way required. Thus the failure of epistemicism to provide such a theory does not constitute an objection to epistemicism.

Some theorists are of course tempted at this point to advert to the use of the expression by ideally rational speakers in full possession of the facts. But the epistemicist replies that a speaker in full possession of the facts *would* know whether Fred was tall, and could assent to or dissent from the sentence `Fred is tall' accordingly. For according to the epistemicist, it is either true or false that Fred is tall; therefore either it is a fact that Fred is tall or it is a fact that Fred is not tall. The anti-epistemicist may object that such facts are not of the kind intended, but the implicit invocation of a privileged class of facts is a retreat in the direction of the kind of correspondence theory of truth that mystifies truth and falsity by rejecting the minimalist schemas mentioned above. Alternatively, if the anti-epistemicist adverts to the use of the expression by ideally rational speakers in full possession of the *precise* facts, then the problem is that the facts knowable by speakers anything like us are always to some extent vague.

The unavailability of a transparent function from use to meaning does not mean that nothing informative can be said about the way in which use determines meaning. But the rough and ready remarks available to non-epistemic theories of vagueness are also available to the epistemic theory. The same general factors will be relevant. Epistemicism may have some kind of principle to the effect that in cases of perfect symmetry of the kind imagined above, where everything else is entirely equal (which it almost never is), one of the two truth-values (falsity, say) is preferred. A non-epistemic theory may have a different default principle to the effect that in the same circumstances a neutral status is preferred; it will then need further principles to adjudicate between neutral and non-neutral status. In these respects, there is little to choose between the theories.

**6. Communication and Social Meaning**

One objection to epistemicism raised by Stephen Schiffer and others is that it makes successful communication a miracle. The basic idea is that different speakers, equally familiar with the language, inevitably use vague words in slightly different ways. For example, some will be slightly more liberal in their use of `tall' than others. But then, it is argued, if meaning supervenes on use, the sharp cut-off points postulated by epistemicism are bound to come at different points for different speakers, so that each speaker of the language uses each vague expression with at least a slightly different meaning from every other speaker. In that case, speakers are never fully successful in communication, because what the hearer understands is never exactly the same as what the speaker means.

The first point to notice about this objection is that it is not specific to epistemicism; it arises for every account of the semantics of vague language, provided that it satisfies the constraint that meaning supervenes on use. For example, suppose that the meaning of a vague word as used in some specified context is given by a spectrum of degrees of truth rather than by an all-or-nothing cut-off point. If meaning supervenes on use, then that spectrum supervenes on different patterns of use for different speakers, and there is the same argument as before for doubting that two speakers ever mean exactly the same by their words. Indeed, since non-epistemic theories multiply the possibilities for the semantic status of an utterance, they multiply the possible meanings available for words and therefore seem to make coincidence in meaning between speakers even more of a miracle. No reason has been given for supposing that epistemicism is more vulnerable to the problem than any other theory of vagueness.

One response to the problem is simply to bite the bullet and agree that speakers and hearers usually do mean something different by the same words. The pill might be sugared by the insistence that they often mean *approximately* the same, and that that is all that matters for practical purposes. As Schiffer argues, however, that view has some unattractive consequences. In particular, on most accounts of the semantics of propositional attitude attributions, it will have the effect that most third-person statements of the form `S said that P' will turn out to be false, because the reference of the reporter's terms will be slightly different from the reference of the terms in the reported utterance. Devices to avoid that consequence look *ad hoc* and tend to have worse repercussions elsewhere.

A better response appeals to a phenomenon the importance of which has been emphasized by Hilary Putnam: in his phrase, the division of linguistic labour. Putnam has argued persuasively that the phenomenon is crucial to the role of natural kind terms such as `water', `gold', `lemon' and `tiger' in public languages. By ourselves, most of us cannot detect the difference between gold and fool's gold. Nevertheless, we acknowledge that there is a difference: we know that not everything to which we should be inclined to apply the term `gold' on the basis of its appearance really is gold. We rely on scientists and others to detect the difference between gold and fool's gold for us. We defer to the experts. Thus the relation of reference between the term `gold' as used by an individual speaker and the element gold is mediated by the linguistic community. Speakers typically do not establish the reference of their terms autonomously. They employ words in a public language with the reference established for those words in the public language. They do so not by learning the public language perfectly but simply by deferring to it - for example, in the way in which they accept communal correction when their use is out of line. They let the speech community as a whole do the referential work for them. By this mechanism, speakers with varying abilities to recognize gold when they see it can all use the word `gold' with exactly the same reference. Of course, *some* knowledge of the public meaning is needed; a speaker who thought that `gold' meant *accountant* might be deemed to be insufficiently in touch with the public meaning to have a participant's rights of reference. But a rough and ready acquaintance with the public meaning is sufficient. Putnam proposes the `hypothesis of the universality of the division of linguistic labor':

Every linguistic community [...] possesses at least some terms whose associated `criteria' are known only to a subset of the speakers who acquire the terms, and whose use by the other speakers depends upon a structured cooperation between them and the speakers in the relevant subsets.

Putnam's insight has been extended and developed by Tyler Burge, who has shown that it is not restricted to natural kind terms, and that it applies to the reference of the mental concepts expressed by words in a public language.

Although the division of linguistic labour was first studied independently of debates on vagueness, it provides an immediate solution to Schiffer's problem about communication with vague language. Both speaker and hearer use their words as words of a public language, with whatever reference they have in that language. They can use the words with exactly the same reference as each other even if their individual use of the words does not match exactly, for their reference is determined socially, not individually. This solution is available to almost any theory of vagueness, for almost any such theory can be formulated at the social level.

Schiffer has objected to this solution. He claims that it does not help when (as commonly) the reference of the vague expressions is sensitive to contextual factors, and in particular to the speaker's intentions, since those factors are not part of the public language. For example, the reference of the vague expression `a long time' is not the same when I say `The Romans ruled Spain for a long time' as when I say `The photocopier takes a long time to warm up'. Note that this objection too is not specific to the epistemic theory of vagueness; it is relevant to any theory which appeals to the division of linguistic labour to solve the problem of communication.

Schiffer's objection can be defeated by an application of the idea of the division of linguistic labour at the local rather than global level. In a particular conversational situation, the speaker and hearer form a temporary speech sub-community. To use Schiffer's example, if he says `I worked for a little while yesterday', then in thinking to myself `He said that he worked for a little while yesterday', I allow the reference of the vague expression `a little while' as I used it in this context to be determined parasitically on its reference as he used it in this context, provided that our current dispositions to use it do not diverge too radically. Perhaps I do misunderstand him if he would consider five minutes too long to count as `a little while'; but the existence of a narrow band of disagreement about the applicability of the phrase is consistent with exact agreement in reference, because it is consistent with the assumptions conditional on which I allow my reference to be parasitic on his - without knowing exactly what his reference is. Similarly, if you see someone whom I cannot see, and you say `That man is acting suspiciously', then I can ask you `What is he doing?', where the reference of `he' as I use it in this context is anaphorically dependent on the reference of `that man' as you use it in this context. In a given context, one person's reference can lock on to another's if their use is in sufficient but not perfect agreement. Thus the division of linguistic labour solves Schiffer's problem of communication even when reference depends on context - as of course it almost always does. Once again, the solution is available to almost any theory of vagueness. In particular, it is available to the epistemic theory.

It would be a misunderstanding to suppose that the emphasis on the social use of language in the final two sections of this paper represents some kind of alternative to referential semantics. For just what has been at issue is the social determination of reference. The division of linguistic labour is one of many ways in which the level of social use cannot be properly understood as autonomous from the level of reference, because what we find in social use are mechanisms to determine reference. Certainly a theory of vagueness formulated only in terms of use, without reference to reference, would miss the point. Vagueness, the phenomenon, is indeed a critical test of standard referential semantics, and one which it has often been assumed to fail. The vindication of the epistemic theory demonstrates the robustness and depth of referential semantics.

**Note on Further Reading**

I describe the history of the problem of vagueness, critically assess the main accounts and defend the epistemic theory in my book *Vagueness* (London and New York: Routledge, 1994). Two up-to-date anthologies of papers on vagueness are R. Keefe and P. Smith (eds.), *Vagueness: A Reader* (London and Cambridge MA: MIT Press, 1996) and D. Graff and T. Williamson (eds.), *Vagueness* (Aldershot: Ashgate, forthcoming 2000). All three volumes contain extensive bibliographies. Recently, several journals have had special issues on vagueness: *The Southern Journal of Philosophy*, vol. 33, supplement (1995); *The Monist*, vol. 81, no. 2 (1998); *Acta Analytica*, vol. 14, issue 23 (1999). For the latest exchange between Stephen Schiffer and me see his `The epistemic theory of vagueness' and my `Schiffer on the epistemic theory of vagueness', both in J. Tomberlin (ed.), *Philosophical Perspectives 13: Epistemology* (Oxford and Boston MA: Blackwell, 1999). Hilary Putnam's views on the division of linguistic labour can be found in his paper `The meaning of "meaning"', reprinted in his *Mind, Language and Reality: Philosophical Papers, Volume 2* (Cambridge: Cambridge University Press, 1975); the quotation in the text is at p. 228. Tyler Burge's seminal discussion is `Individualism and the mental', in P. French, T. Uehling and H. Wettstein (eds.), *Studies in Metaphysics: Midwest Studies in Philosophy, Volume 4* (Minneapolis MN: University of Minnesota Press, 1979); see also his `Intellectual norms and foundations of mind', *Journal of Philosophy*, vol. 83 (1986): 697-720.