message from George Lakoff

From: Dick Hudson (dick@linguistics.ucl.ac.uk)
Date: Wed Nov 27 2002 - 09:53:27 GMT


    Dear Relevance,
    As an interested lurker on many lists (including this one), I thought you
    might be interested in this message which has just gone out over the
    cogling list.
    Dick

    =================
    Date: Tue, 26 Nov 2002 16:59:18 -0800 (PST)
    From: George Lakoff <lakoff@cogsci.berkeley.edu>
    To: cogling@ucsd.edu
    Subject: Re: Convention and metaphor
    Reply-To: George Lakoff <lakoff@cogsci.berkeley.edu>

                                           Grice, Lewis, and Metaphor Theory

            It is nice to see good ol' topics from the 60's - Paul
    Grice's implicatures and David Lewis' conventionality - taken up
    again. The phenomena need to be reconsidered seriously within the
    cognitive linguistics context. But when Sherman Wilcox writes "I
    admit to knowing not a stitch of Davidson," I fear that he isn't the
    only one, and that most folks in the cognitive linguistics tradition
    may not know the context of Grice's and Lewis' work either.
    Since I shared a history with them (they were friends of mine back
    when I was working on logic), I think a bit of that history might be
    useful - especially since it is relevant to the current discussion.
    Their work cannot now be taken at face value and has to be thought of
    in historical perspective, for reasons that will become clear below.

            Paul Grice's lectures on implicature (Logic and
    Conversation) were given as the William James Lectures at Harvard in
    1967. I was teaching there at the time and I attended. David Lewis
    was a grad student there and, I believe, he was in the room too.
    Grice's intent was conservative. Strawson had given lots of examples
    showing the inadequacy of Russell's symbolic logic in general and his
    Theory of Descriptions in particular. Grice was defending Russell.
    His argument was that you could keep Russellian logic for semantics
    and truth conditions, while getting the real natural language
    examples right by adding a theory of conversation on top of the logic.
    Since I was trying to incorporate logic and pragmatics into
    linguistics at the time (1967), I became enamored of Paul's work. He,
    however, refused to publish it. I managed to get a copy and
    distributed over 1,000 copies through the linguistic underground by
    1973, and also managed to get chapter 2 published in the Cole-Morgan
    volume on Speech Acts in 1975. (The story involves a bar in Austin,
    Texas.)

            Paul was an objectivist who insisted that all meaning was
    literal. Nonetheless, much of Paul's work was insightful - although
    his one metaphor example was pitifully analyzed. The only way Paul's
    theory could deal with metaphor was to claim that metaphors had a
    literal meaning conveyed via implicature. Searle later tried applying
    this idea in his paper on metaphor in the Ortony volume, a disastrous
    attempt.

            During the 70's, Paul's work came to be taken very seriously by
    those trying to keep formal logic as a theory of thought - with the
    result that it got reinterpreted - for good reason. Gazdar did a
    formalization within logic of the maxim of quantity in his
    dissertation. Grice's student Deirdre Wilson (she had typed his
    manuscript) realized that all the maxims could be seen as instances
    of relevance. Her theory of relevance also tried to preserve formal
    logic as a theory of semantics. When Fillmore formulated frame
    semantics, I realized that relevance - and with it Gricean
    implicature - could be handled via frame-based inference within a
    cognitive linguistics framework. The formal mechanism for doing this
    precisely did not exist then (the 70's), though it does now -
    Narayanan's simulation semantics within NTL. It would be a great
    thesis topic for someone to work out the technical details now that a
    technical mechanism is available.

            David Lewis' Harvard dissertation on Convention was a product
    of the same era - 1968, if I remember correctly. David was also an
    objectivist - of the most extreme variety. It's worth taking a look
    at his essay in the Davidson-Harman volume, Semantics of
    Natural Language, where he argues that meaning has nothing to do with
    psychology - neither mind nor brain. For David, meaning could only be
    a correspondence between formal symbols and the objective world,
    where the objective world was taken as being modeled via
    set-theoretical models. The symbols were to be linked to the
    world-models via some mathematical function. For human languages,
    that function, he claimed, was determined by convention - which is why
    he wrote his thesis on the topic. But "convention" could not be a
    matter of human psychology for David; it had to be objective
    as well. David's idea was to use the economic theory of his time -
    utility theory - to provide what he took as an objectivist account of
    convention, since utility was seen as something objective in the
    world. The irony here, of course, is that Danny Kahneman, my former
    cognitive science colleague at Berkeley - now at Princeton - just won
    the Nobel Prize in economics for proving that such a view of
    economics cannot be maintained. The examples he used were cases that
    revealed how people really reason: by prototype, frame, and metaphor
    - the staples of cognitive linguistics.

            David's work, like Paul's, was insightful, despite the
    objectivist intellectual tradition in which it was embedded. They
    were both super-smart people who transcended the theories they were
    brought up with. Both theories were exemplary products of their time,
    the late 60's (a period I enjoyed and am particularly fond of). But
    the intellectual tradition in which the theories were embedded cannot
    be taken seriously today, and so the work cannot be taken at face
    value. The theories were formulated before the age of cognitive
    science and neuroscience. We now know from those fields that
    objectivism is false (see the survey in Women, Fire, and Dangerous
    Things and the update in Philosophy in the Flesh). We know that every
    aspect of thought and language works through human brains, which are
    structured to run bodies and which create understandings that are not
    objectively true of the world.

            Metaphor is an important part of this story. The neural
    theory of metaphor (see PITF) explains how the system of conceptual
    metaphor is learned, why certain conceptual metaphors are universal
    and others are not, why the system is structured around primary
    metaphor, why metaphor acquisition works as it does, why conceptual
    metaphors preserve image-schemas, why metaphorical inference works as
    it does, and why conceptual metaphors tend to take sensory-motor
    concepts as conceptual source domains and non-sensory-motor concepts
    as targets.

            Convention also makes sense only in neural terms. What each
    of us takes as conventional must be instantiated in our synapses. The
    question is, what is the mechanism? In some cases, the usage-based
    theories of gradual entrenchment may make sense. For other cases,
    they don't. Metaphor is a case where those theories make no sense, as
    I pointed out in my previous note. The old entrenchment theories
    simply cannot explain what the neural theory of metaphor explains.

            Bill Croft asks, "How can a linguist decide whether a metaphor is
    conventional?" and he claims, "There is no easy way, and little or no research
    that I know of on the topic (please direct me to any!)." It is true
    that there is no easy way. The work is hard. But there is a huge
    amount of research on the topic. I refer him to chapter 6 of
    Philosophy in the Flesh (pp. 81-87), where nine forms of convergent
    evidence are listed - and to the references at the end of the book,
    where massive literature on the research is cited. Croft himself, for
    all his many accomplishments, is, to my knowledge, not a metaphor
    researcher. For those who are, there's a lot to know.

            In summary: Cognitive linguistics is committed to being
    consistent with what is known about the brain and the mind. That
    changes over time, and cognitive linguistics must change with it.
    Entrenched ideas about entrenchment may have to change as well. The
    ideas of Paul Grice and David Lewis from the 60's cannot just be
    taken over into cognitive linguistics as they were formulated. They
    cannot be taken at face value. They have to be rethought on the basis
    of what has been learned since. This is not just true of Grice and
    Lewis. My old work on generative semantics from the 60's had lots of
    neat insights as well. But they too have to be rethought. Some can be
    translated into cognitive linguistics - others cannot. None of this
    is easy or obvious. It is important to know the history of all this
    work. Those who do not know history are doomed to repeat it.

    Best wishes to all,

    George

    Richard (= Dick) Hudson

    Phonetics and Linguistics, University College London,
    Gower Street, London WC1E 6BT.
    +44(0)20 7679 3152; fax +44(0)20 7383 4108;
    http://www.phon.ucl.ac.uk/home/dick/home.htm


