Project Description

A Unified Theory of Lexical Pragmatics

Funded by AHRB

 

 

Lexical pragmatics is a rapidly developing branch of linguistics which investigates the processes by which linguistically-specified (‘literal’) word meanings are modified in use. Well-studied examples include lexical narrowing (e.g. drink used to mean ‘alcoholic drink’), approximation (e.g. square used to mean ‘squarish’) and metaphorical extension (e.g. battleaxe used to mean ‘frightening person’).

There is increasing evidence that such processes apply automatically, and that a word rarely conveys exactly its literal meaning. Currently, there is little interaction between formal pragmatists (who are mainly interested in simplifying semantic description) and cognitive pragmatists (who are interested in the mechanisms underlying verbal comprehension).

We aim to create the foundations for interdisciplinary research by developing a framework in which the results of different approaches may be integrated. Typically, narrowing, loosening and metaphorical extension have been seen as distinct pragmatic processes and studied in isolation from each other. We will investigate the hypothesis that they are outcomes of a single pragmatic process which fine-tunes the interpretation of virtually every word. Our objectives are as follows:

(1a) To develop a unified, cognitively plausible account of lexical-pragmatic processes.
(1b) To compare this account with alternative accounts currently being developed.
(1c) To consider how far lexical-pragmatic processes are governed by general pragmatic principles which apply at both word and sentence level.
(1d) To investigate whether creative, occasion-specific uses (often found in literary works) involve the same processes as more regular, conventional uses.
(1e) To consider the implications of our account for the traditional notion of literal meaning.

 

Research Context

A striking feature of existing research on lexical pragmatics is that narrowing, approximation and metaphorical extension tend to be seen as distinct processes which lack a common explanation. Thus, narrowing is standardly treated as a case of I-implicature (governed by an Informativeness-principle, ‘What is expressed simply is stereotypically exemplified’) and analysed using default rules (Horn 1984, 2000; Levinson 2000; Blutner 1998, 2002). Approximation is often treated as a case of pragmatic vagueness involving different contextually-determined standards of precision (Lewis 1979; Lasersohn 1999). Metaphor is standardly seen as involving blatant violation of a pragmatic maxim of truthfulness, with resulting implicature (Grice 1975; Levinson 1983).

Typically, such accounts do not generalise: metaphors are not analysable as rough approximations, narrowings are not analysable as blatant violations of a maxim of truthfulness, and so on. The standard analyses have also been questioned on descriptive and theoretical grounds: for example, there is both theoretical and experimental evidence against the standard view of metaphor (Gibbs 1994; Recanati 1995, forthcoming; Glucksberg 2001). We aim to develop an alternative account by combining our own approach to pragmatics (Carston 2002; Wilson & Sperber 2002b) with results from recent experimental work on concepts and categorisation which suggest that understanding a word in context may involve the construction of an ‘ad hoc’ concept or occasion-specific sense (Barsalou 1989, 1992; Franks 1995; Sanford 2002; see also Recanati 2002).

In preliminary work, we have suggested that the process of ad-hoc concept construction may be constrained by general pragmatic principles (Carston 1997; Sperber & Wilson 1998a), and that the crucial pragmatic factor may be a comprehension procedure developed in our work on Relevance Theory (Sperber & Wilson 1986/1995; Carston 2002). On this approach, hearers bring to the interpretation of utterances a general expectation of relevance (defined in a technical sense, cf. Wilson & Sperber 2002b), and search for the most accessible interpretation which satisfies this expectation, fine-tuning the linguistically-specified word meaning where necessary. This account needs to be developed in detail, applied to a range of concrete data and tested against alternative accounts.


Research Questions and Methods

(3a) What is the role of ‘default rules’ in lexical pragmatics?

The notions of default interpretation and default rule are widely used in analyses of lexical narrowing: e.g. bachelor is seen as understood by default to mean ‘eligible bachelor’, drink to mean ‘alcoholic drink’ and secretary to mean ‘female secretary’ (Lascarides & Copestake 1998; Levinson 2000). On this approach, the default interpretation should be the first to be tested, and should arise automatically in the absence of contrary evidence.

On our approach (based on ad hoc concept construction constrained by expectations of relevance), the narrowing process is much more flexible than default analyses predict, and the ‘default’ interpretation is not necessarily the first to be tested. We will compare these approaches using a combination of theoretical argument, corpus analysis and experimental investigation.

Methods

(i) Using word-sets standardly cited in the literature on default narrowing (e.g. bachelor, drink, secretary), we will search the Bank of English (a 450 million word corpus of contemporary British English) and WordNet (which provides synonym sets) for actual examples which we will use in developing our own account, and test it by comparing the relative frequencies of default vs flexible interpretations; a schematic illustration of this kind of query is sketched after (ii) below.

(ii) Using a well-established experimental paradigm developed by the psycholinguist Ira Noveck for measuring reaction times to context-sensitive meanings (Noveck & Posada 2002), we will test the claim that default interpretations are automatically assigned, and abandoned only if they result in inconsistency. (Noveck will run the experiments at the Cognitive Science Institute, Lyon, using word-sets supplied by us.)
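For concreteness, here is a minimal sketch of the kind of lexical-database query and frequency comparison described in (i). It is purely illustrative and not part of the project plan: it assumes a Python environment with NLTK and its WordNet data available, the concordance lines and their ‘default’/‘flexible’ labels are invented, and the Bank of English itself is not consulted here.

# Hypothetical illustration only: query WordNet for the target word-set and
# tally hand-annotated interpretations. Assumes NLTK is installed and the
# WordNet data has been fetched (e.g. via nltk.download('wordnet')).
from collections import Counter
from nltk.corpus import wordnet as wn

TARGET_WORDS = ["bachelor", "drink", "secretary"]

# List the candidate senses (synonym sets) WordNet records for each word.
for word in TARGET_WORDS:
    for synset in wn.synsets(word):
        print(word, synset.name(), "-", synset.definition())

# Toy concordance lines, hand-labelled as receiving the putative 'default'
# narrowed interpretation or a more 'flexible', context-specific one.
annotated_lines = [
    ("He poured himself a drink and sat down.", "default"),   # 'alcoholic drink'
    ("Have a drink of water before the race.", "flexible"),
    ("My secretary typed the letter.", "default"),
    ("The press secretary briefed the minister.", "flexible"),
]

# Relative frequency of default vs flexible interpretations.
counts = Counter(label for _, label in annotated_lines)
total = sum(counts.values())
for label, n in counts.items():
    print(f"{label}: {n}/{total} ({n / total:.0%})")

In the study itself, the annotated lines would of course come from Bank of English concordances rather than invented examples, and the annotation scheme would be developed alongside the theoretical account.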


(3b) Is it possible to develop a unified account of approximation and metaphor?

Metaphor is standardly seen as a blatant violation of a pragmatic maxim, while approximation is not. Our hypothesis is that approximation and metaphor are both varieties of loose use, involving the construction of an ad hoc concept with a broader denotation than the linguistically-specified meaning (Wilson & Sperber 2002a). Our account predicts that it should be possible to find a gradient of cases between literal use, approximation and metaphor; such cases would present a problem for the standard account. We will compare these approaches using a combination of theoretical argument and corpus analysis.

Methods

We will develop and test our hypotheses using two types of word which are strictly defined but often loosely or metaphorically used: (a) geometric terms (e.g. square, flat, round); (b) negatively-defined terms (e.g. painless, silent, raw). Using corpus searches based on the Bank of English, WordNet, the Pragglejaz Metaphor site (which rates words in a corpus for literal vs metaphorical status) and lexical databases such as HECTOR (which provides manually-tagged senses; Atkins 1993), we will assess the claim that a gradient of cases exists, and test the ability of our account to deal with both conventional and creative uses (Kilgarriff 2001).

 

(3c) Can we give a unified account of all three lexical pragmatic processes?

Our account predicts that narrowing, approximation and metaphorical extension may combine, so that ‘bachelor’ might be simultaneously narrowed to eligible bachelor and loosened (as when a married man says ‘I’m a bachelor tonight’), and ‘silent’ might be loosely used to mean almost silent and simultaneously narrowed to denote a certain type of silence (e.g. speechlessness). Such cases seem to present problems for standard accounts which treat narrowing, loosening and metaphor in isolation from each other.

Methods

Using a combination of theoretical argument and corpus analysis, we will search our corpora for examples of this type, assess their implications for standard accounts of narrowing and loosening, and consider how far the analyses of narrowing and loosening developed under (3a) and (3b) can explain these examples.

(3d) Does the explicit/implicit distinction apply at lexical as well as sentential level?

Lexical-pragmatic processes are standardly seen as contributing to implicit communication (implicatures) rather than explicit communication (Blutner 1998; Levinson 2000). Our hypothesis is that they contribute to explicit content (what is asserted; Carston 2002; Wilson & Sperber 2002a). The issue is partly terminological, but becomes substantive when combined with the claim that explicit and implicit communication involve distinct pragmatic processes (cf. Grice 1989; Recanati 1995, 2002; Levinson 2000).

Methods

Using a combination of theoretical argument and data drawn from our previous corpus searches, we will develop and test the hypothesis that (a) there is a worthwhile explicit/implicit distinction to be drawn at the lexical level, and (b) the relevance-theoretic comprehension procedure applies in the same way at both lexical and sentential levels.

 

(3e) Should the traditional notion of literal word meaning be abandoned?

We will end by considering the implications of our account for the traditional notion of literal word meaning, which is currently being criticised from a variety of perspectives (Gibbs 1994; Sperber & Wilson 1998; Carston 2002; Recanati forthcoming). We will investigate two hypotheses which are not necessarily mutually exclusive: (a) that some words do encode literal meanings which provide a starting point for inferential comprehension; (b) that some words encode not literal meanings but ‘pro-concepts’, place-holders into which pragmatically-constructed concepts are inserted. We expect each hypothesis to shed light on some of our examples, and will explore both using the data gathered under (3a)-(3d).


