10.00-10.40 | George Powell | The semantics and pragmatics of proper names
10.40-11.00 | coffee break |
11.00-11.40 | Richard Horsey | The art of chicken sexing
11.40-12.20 | Tim Wharton | Bee-dances, smiles and other natural codes
12.20-13.30 | lunch |
13.30-14.10 | Dmitry Sityaev | Phonetic and phonological correlates of broad, narrow and contrastive focus
14.10-14.50 | Gordon Hunter | Studies on the statistical modelling of dialogue
14.50-15.30 | Piers Messum | Characteristics and consequences of speech breathing for English
15.30-16.00 | tea break |
16.00-16.40 | Ann Law | Cantonese sentence-final particles and the CP system
16.40-17.20 | Dick Hudson & Amela Camdzic | Second position clitics in Word Grammar
Much philosophical ink has been spilled over the semantics of natural language proper names. In particular, there is still fundamental disagreement over the question of whether proper names are descriptive or referential, i.e. whether their contribution to truth-conditional content is quantificational or whether they merely contribute their referents. In this paper I shall argue that the failure to reach consensus on this question can be put down to the question being unanswerable, since it falsely presupposes that proper names are the sort of thing that is either descriptive or referential.
The truth-conditional approach to meaning in natural language depends on two complementary assumptions: that the meaning of a natural language expression can be explicated in terms of the relations that hold between that expression and things in the world; and that, since linguistic meaning is constant across contexts, so must be the relations that link an expression to the world (leaving to one side the complications introduced by indexicality). It is the combination of these assumptions that leads to the conclusion that proper names are either quantificational or referential. Since, however, there seems to be convincing evidence on both sides, both positions have been extensively defended in the literature. The referentialist stance receives its strongest support from the behaviour of proper names in modal contexts, while the descriptivist stance relies heavily on the kind of puzzles made familiar by Frege and Russell, in particular puzzles concerning co-referential and empty proper names. Each camp has, in turn, expended much ingenuity in attempting to explain away the evidence that most strongly supports its rival: the primary strategy on both sides has been to identify the object of the opposing intuition not with the proposition expressed but with some extra proposition communicated. And why might we want to make such a move? Because, on the assumption that proper names must be either descriptive or referential, compelling evidence for the descriptive use of names forces the conclusion that names cannot contribute their referents to truth-conditional content, and vice versa.
In this paper, I intend to explore the implications for an analysis of proper names of abandoning the two foundational assumptions above. Rather than assuming that the semantics of proper names can be explicated directly in terms of the relations that hold between names and the things for which they stand (be those things complex or simple), I shall propose a two-step analysis of the truth-conditional contribution of names, the first step being that between linguistic expressions and mental representations and the second, that between mental representations and external objects. The way in which I shall develop this analysis will also undermine the second foundational assumption, that the truth-conditional links between linguistic expressions and the external things for which they stand are constant: on the account I shall advocate, proper names are semantically mandated to correspond to concepts with particular functional profiles, not to concepts with particular truth-conditional profiles. It is thus not proper names themselves that are either referential or descriptive, but their uses.
It's a little-known fact that the world's best chicken sexers come almost exclusively from Japan. Poultry owners once had to wait until chicks were five to six weeks old before differentiating male from female (the sex became visible when adult feathers started appearing, on the basis of which cockerels and pullets could easily be distinguished). But for commercial egg producers it's important to identify the females as soon as possible, to avoid unnecessary feeding of unproductive male chicks. Enter the Zen-Nippon Chick Sexing School, which began two-year courses training people to accurately discriminate the sex of day-old chicks on the basis of very subtle cues. If you ask the expert chicken sexers themselves, they'll tell you that in many cases they have no idea how they make their decisions. They just look at the rear end of a chick, and 'see' that it is either male or female. This is somewhat reminiscent of those expert chess players, often cited in the psychological literature, who can just 'see' what the next move should be; expert wine tasters, who have the uncanny ability to identify wines and vintages; and medical experts who can diagnose diseases on the basis of subtle information. All of these skills are hard-earned and not accessible to introspection.
But is there really anything unusual about the chicken sexer, the chess grand master, the wine buff or the medical expert? I will argue that there is not. Granted, it takes these people a lot of time and effort to learn their skills; they are highly accurate, can generally reach a decision quickly (in the case of chicken sexers, at a rate of over 1000 chicks per hour) and do so subconsciously. In fact, though, we are all constantly making categorisations of this sort: we are highly accurate at categorising natural kinds, substances, artefacts, and so on. We do so quickly and subconsciously, and the process is completely inaccessible to introspection (on what basis do you decide to classify something as a chair or as a tiger, for example?). The only difference seems to be that we learnt these skills as infants, and don't recall how much time or effort it took. This paper investigates chicken sexing from a cognitive perspective. The investigation sheds new light on the processes underlying our categorisation abilities, and suggests new ways of tackling the issue of conceptual content within a broadly Fodorian framework.
Sentences are rarely uttered in a behavioural vacuum. We colour and flavour our speech with a variety of natural vocal and facial gestures, which indicate our internal state by conveying attitudes to the propositions we express or information about our emotions or feelings. Such behaviours are often beyond our conscious control: they are involuntary, spontaneous. Almost always, however, understanding an utterance depends to some degree on their interpretation.
On the whole, the approach favoured by linguists is to abstract away from such behaviours: to sift out extraneous, non-linguistic phenomena, and focus on the rule-based grammar - the code. There are two reasons, however, why the pragmatist should cast a broader net. Firstly, thanks to the influential work of Paul Grice (1989), it is now increasingly recognised that verbal comprehension is more than a simple coding-decoding process. Any attempt to characterise linguistic communication should reflect the fact that it is an intelligent, intentional activity. Secondly, the aim of a pragmatic theory is to explain how utterances are understood; the task, therefore, of describing and explaining precisely what certain natural behaviours indicate, and how they are interpreted, would appear to fall squarely within the domain of pragmatics.
In his groundbreaking paper 'Meaning' (1957), Paul Grice drew a distinction between natural (N) and non-natural (NN) meaning, and showed how the latter might be characterised in terms of intentions and the recognition of intentions. For Grice, what is meant-NN is what is intentionally communicated. I argue that there are two respects in which Grice's natural/non-natural dichotomy is not exhaustive. Firstly, from the fact that they have not been deliberately produced, it does not follow that spontaneously occurring natural behaviours cannot be intentionally shown to provide evidence of an intention to inform, and hence used in intentional communication. Secondly, some of these natural behaviours appear to be inherently communicative. Many things in the world carry information, or 'indicate': tree-rings, footprints in snow, the scent of ripe fruit. However, only a sub-set of these indicators are ever exploited, and only a sub-set of those have an indicating function, that is, owe their continued existence to the fact that they indicate. These distinctions hold for natural behaviours too, behaviours that Grice regarded as carrying natural meaning. Some do not have an indicating function; I suggest that the interpretation of these is governed entirely by inference. Others, however, do. I propose that these behaviours have a coded element and are best analysed as natural codes.
In this talk I will focus on the second of these points: the existence of natural codes. The discussion will involve exploring an interesting point of contact between the philosophical, psychological and ethological literatures. It is hoped that research into natural codes will go some small way towards accommodating natural behaviours within a satisfactory pragmatic framework.
References:
Grice, P. (1957) Meaning. Philosophical Review 66: 377-388. Reprinted in Grice (1989): 212-223.
Grice, P. (1989) Studies in the Way of Words. Cambridge, MA: Harvard University Press.
The current study investigates the phonetic and phonological correlates of what is known in intonational phonology as "broad", "narrow" and "contrastive" focus. It has long been argued that sentences like "She broke her LEG", with an accent on the word "leg", are ambiguous between a broad focus reading ("What happened?") and a narrow focus reading ("What did she break?"), and that there are no reliable phonetic cues for disambiguating the two readings. Corrective-contrastive focus (e.g. "She broke her neck, right?" - "No, she broke her LEG."), on the other hand, has been claimed to have a different status from broad/narrow focus as far as its prosodic realisation is concerned (Brown, Currie and Kenworthy, 1980; Pierrehumbert and Hirschberg, 1990; Bartels and Kingston, 1994).
Two experiments were conducted to search for phonetic and phonological correlates of broad, narrow and contrastive focus. The original hypothesis was that contrastive focus would have a phonetic and phonological realisation different from that of broad or narrow focus: contrastive accents were expected to show a higher F0 peak and later peak alignment. It was also expected that contrastive focus would block downstep in data containing two pitch accents where focus falls on the object (e.g. "Anna rang LENNY"). In Experiment 1, the phonetic correlates of broad, narrow and contrastive focus were investigated. A production experiment was conducted in which 5 speakers were presented with auditory stimuli and visual prompts to elicit broad, narrow and contrastive focus readings of sentences containing one pitch accent (e.g. "My HEAD aches"). The results revealed that focus condition had no significant effect on the fundamental frequency of the peak or on peak alignment. However, word duration was found to vary systematically as a function of focus condition: words under narrow and contrastive focus tended to be longer than the same words under broad focus.
In Experiment 2, the phonological correlates of broad, narrow and contrastive focus were investigated. A production experiment was conducted in which 3 speakers were presented with auditory stimuli and visual prompts to elicit utterances with two pitch accents under broad focus and under narrow/contrastive focus on the object (e.g. "MELANIE will lean on MERRYLIN"). All three focus conditions tended to be associated with downstep. However, when the ratios between peak2 and peak1 were compared, sentences with contrastive focus were found to undergo a lesser degree of downstep than the same sentences with broad or narrow focus. A phonological analysis of the intonation of the sentences with broad focus is also offered.
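As an illustration of the kind of measurement this comparison involves, here is a minimal Python sketch that computes peak2-to-peak1 ratios and mean word durations per focus condition. The function names and the numbers in the example call are invented placeholders for exposition, not data from the experiments reported here.

```python
# Illustrative sketch only: the numbers below are invented placeholders,
# not measurements from the experiments described in this abstract.

from statistics import mean

def downstep_ratio(peak1_hz: float, peak2_hz: float) -> float:
    """Ratio of the second F0 peak to the first; values well below 1.0
    indicate downstep, values closer to 1.0 indicate reduced downstep."""
    return peak2_hz / peak1_hz

def summarise(tokens):
    """Group (condition, peak1, peak2, duration_ms) tuples by focus
    condition and report mean peak ratio and mean word duration."""
    by_condition = {}
    for condition, p1, p2, dur in tokens:
        by_condition.setdefault(condition, []).append((downstep_ratio(p1, p2), dur))
    return {
        cond: {
            "mean_peak2_to_peak1": round(mean(r for r, _ in vals), 3),
            "mean_duration_ms": round(mean(d for _, d in vals), 1),
        }
        for cond, vals in by_condition.items()
    }

# Hypothetical tokens for the three focus conditions:
example_tokens = [
    ("broad",       210.0, 160.0, 310),
    ("narrow",      215.0, 165.0, 345),
    ("contrastive", 220.0, 190.0, 350),
]
print(summarise(example_tokens))
```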
In conclusion, the findings reveal a difference between the realisation of broad and non-broad focus from both a phonetic and a phonological point of view. Further analysis is needed to establish whether there are any differences between narrow and contrastive focus. The results obtained so far seem to be in line with Rooth's (1992) theory of focus, which provides a uniform treatment of the notion of focus: any focused constituent is viewed as evoking a set of contrastive elements, no matter whether the focus is intended to be narrow or contrastive.
One area of major current interest within the field of speech technology is that of dialogue systems - automated systems which facilitate human-machine interaction by allowing the user to talk to a computer in a relatively natural manner, rather than relying on pre-defined keywords for instructions. An example of such a system is AT&T's "How may I help you?" (Gorin et al., 1997), which attempts to reduce the frustration caused to customers by automated "push button" menu call-reception systems ("For other options, press the star key"). However, current dialogue systems are mostly highly domain-specific and seem to rely heavily on keywords and "salient phrase fragments". There appears to have been very little work on tailoring the language model - the statistical model used to predict the a priori probability of words and utterances - used by such a system to the general nature of dialogue. Although it has been recognised that the performance of a language model is quite sensitive to mismatches between the material used for training and the material encountered in testing or use (Rosenfeld 2000), little work has actually been done on the statistical modelling of dialogue in its own right. A system trained on even a very large corpus of TV or radio news broadcasts, for example, will perform worse on informal telephone conversations than an otherwise comparable system trained on a much smaller corpus of actual phone conversations, even when the data quality is equally good in both cases.
What is it that makes dialogue distinctive - as it clearly is, differing in nature and style from both written text and monologue speech? Can its distinctive characteristics, such as turn-taking and word associations both within the speech of a single speaker (intra-speaker) and across speakers (inter-speaker), be exploited within a statistical language model? If so, such a model could be of great benefit to dialogue systems for human-machine interaction, and could also give a different perspective on the nature of dialogue, based on statistics derived from a large corpus of real, spontaneous speech. This talk will describe the dialogue data within the British National Corpus (BNC) (Burnard 1995) and experiments on the application of statistical language modelling techniques to this data.
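To make the notion of a statistical language model concrete, the sketch below shows a minimal bigram model with add-one smoothing, evaluated by perplexity on held-out text. It is purely illustrative: the toy sentences are invented, not BNC dialogue data, and none of the dialogue-specific features discussed in the talk (turn-taking, inter-speaker associations) are modelled.

```python
# Minimal bigram language model with add-one (Laplace) smoothing, evaluated
# by perplexity on held-out text. The toy sentences are invented placeholders,
# not BNC dialogue data, and no dialogue-specific features are modelled.

import math
from collections import Counter

def train_bigram(sentences):
    """Collect unigram and bigram counts with sentence boundary markers."""
    unigrams, bigrams = Counter(), Counter()
    for sent in sentences:
        tokens = ["<s>"] + sent.lower().split() + ["</s>"]
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams, set(unigrams)

def bigram_prob(w_prev, w, unigrams, bigrams, vocab):
    """P(w | w_prev) with add-one smoothing, so unseen bigrams get some mass."""
    return (bigrams[(w_prev, w)] + 1) / (unigrams[w_prev] + len(vocab))

def perplexity(sentences, unigrams, bigrams, vocab):
    """Exponentiated average negative log-probability per predicted token."""
    log_prob, n_tokens = 0.0, 0
    for sent in sentences:
        tokens = ["<s>"] + sent.lower().split() + ["</s>"]
        for w_prev, w in zip(tokens, tokens[1:]):
            log_prob += math.log(bigram_prob(w_prev, w, unigrams, bigrams, vocab))
            n_tokens += 1
    return math.exp(-log_prob / n_tokens)

train = ["how may I help you",
         "I would like to check my balance",
         "you can press star for more options"]
test = ["how may I check my options"]
unigrams, bigrams, vocab = train_bigram(train)
print(round(perplexity(test, unigrams, bigrams, vocab), 2))
```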
'Speech breathing' (SB) is the traditional name for the use of the respiratory system to create the aerodynamic conditions necessary for speech. Claims made by scholars in the past of a correspondence between events in SB and phonetic events have not been confirmed in instrumental studies, so the topic is no longer given much importance in phonetics or phonology. There are, however, good reasons to re-evaluate the significance of SB, particularly for English and other Germanic languages.
I will explain how consideration of 'effort' and aerodynamics in child speech development sheds light on a number of phenomena: the so-called rhythm of English, the cluster of properties associated with the 'tense' and 'lax' vowel classes, and some other perennial issues. I will go on to describe how these claims can be tested.
Cantonese sentence-final particles (SFPs) are bound forms attached to the end of utterances and constitute an important grammatical category in the language. They express a wide range of meanings such as aspect, focus, modality, mood, temporal order and conditional reasoning. Most early studies of sentence-final particles aimed either at documenting the full inventory of particles (Cheung 1972, Kwok 1984, Leung 1992, Matthews and Yip 1994) or at analysing their semantics and conversational functions (Chan 1998, Fung 2000, Lee and Yiu 1998a, 1998b, 1999, Luke 1990). There has been only sporadic research on the syntax (Law 1990, Lee and Yiu 1998a, 1998b, 1999, Tang 1998) and phonology (Law 1990) of a subset of them, and on the acquisition of sentence-final particles by Cantonese-speaking children (Lee et al. 1996, Lee and Law 2000, 2001).
As most sentence-final particles take clausal scope, they are generally assumed to occupy some position in C. Law (1990) proposes three positions for SFPs: [Spec, CP] for question particles; C0 for ge3 ("assertion"), as it is argued to occur in relative clauses, noun-complement clauses and the hai ... ge3 construction; and a position within VP for tim1 ("also"/"even"), which is claimed to be part of the discontinuous zung … tim1 construction. While hers is one of the first syntactic studies of sentence-final particles, only a few SFPs are discussed and it is unclear whether the same positions also host other SFPs. Lee and Yiu (1998a, 1998b, 1999) examine only two particles, lei4 ("recent past" or "assertion") and ge3 ("assertion"), which are said to be VP-final. Tang (1998) suggests that zaa3 ("only") is generated in T0, as it has a focusing function.
My paper attempts to give a more comprehensive syntactic analysis of Cantonese sentence-final particles. Adopting Rizzi's (1997) split-CP system, I propose two positions for SFPs, one base-generated in the Force field and the other lower than the higher Topic. The CP domain of Cantonese that I argue for is represented schematically below.
Force [SFP1] Topic SFP2* Focus Topic …
Since SFP1 encodes Force, only one particle can be generated in this field. This position typically hosts particles that express speech acts such as interrogative, imperative and declarative, with additional mood and speaker-oriented evidential or epistemic meanings. SFP2 is a head located lower than the higher Topic. This head may iterate, as indicated by the asterisk, so more than one sentence-final particle of this class can be generated, forming a cluster. This set of particles includes those that express restrictive and additive focus. SFP1 and SFP2 are differentiated by the morphological feature [Q]: SFP1 is either [+Q] (in yes-no questions) or [-Q] (in non-questions), whereas SFP2 lacks this feature. These configurations should capture the co-occurrence restrictions and cluster ordering of SFPs, as well as the scopal interactions of some sentence-final particles with other quantificational elements.
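As a toy illustration of how the two-position proposal constrains particle clusters, the sketch below checks that a sentence-final cluster contains at most one Force (SFP1) particle while allowing SFP2 particles to iterate. The particles listed and their assignment to the two classes are placeholders chosen for exposition, not claims made in the paper.

```python
# Toy illustration of the two-position proposal: at most one SFP1 (Force,
# carrying [+/-Q]) per clause, while SFP2 particles may freely iterate.
# The particle-to-class assignments below are placeholders for exposition.

SFP1 = {"me1": "+Q", "aa3": "-Q"}     # hypothetical Force (SFP1) particles
SFP2 = {"zaa3", "tim1", "ge3"}        # hypothetical lower (SFP2) particles

def check_cluster(particles):
    """Accept a sentence-final cluster if it contains at most one SFP1
    particle and otherwise only (freely iterating) SFP2 particles."""
    force = [p for p in particles if p in SFP1]
    unknown = [p for p in particles if p not in SFP1 and p not in SFP2]
    if unknown:
        return False, f"unclassified particle(s): {unknown}"
    if len(force) > 1:
        return False, "only one Force (SFP1) particle is licensed per clause"
    q_value = SFP1[force[0]] if force else None
    return True, f"well-formed (clause [Q] value: {q_value})"

print(check_cluster(["ge3", "zaa3", "me1"]))   # SFP2 cluster plus one SFP1
print(check_cluster(["me1", "aa3"]))           # rejected: two Force particles
```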
In this talk we discuss Wackernagel (second-position) clitics in Serbo-Croat-Bosnian (SCB). SCB has clitic pronouns, auxiliaries and a question particle li which cluster in the second position within the clause. The second position is defined as the position after the first phrase (1a) or after the first word (1b).
(1a) Moj brat mu ga je dao.
     my brother-nom dat-3sg acc-3sg aux-3sg give-participle
     'My brother gave it to him.'

(1b) Moj mu ga je brat dao.
     my dat-3sg acc-3sg aux-3sg brother-nom give-participle
     'My brother gave it to him.'
Our proposal is developed in the framework of Word Grammar. We argue for a morphological account of cliticisation, in which the clitic cluster and the clitics' host form a word - a "hostword". The clitics and the word to which they attach are related to the hierarchically higher hostword by a part-whole relationship, as well as by syntactic dependencies. The hostword can have only one pre-dependent, which explains the second position of the cluster.
We shall explain how the Word Grammar analysis works for straightforward cases, as well as for some apparent deviations from this pattern. In a nutshell, we argue that the structure of the SCB clause is flat as a result of generalised raising. The consequence of generalised raising is that the dependencies between sentential items are raised to the level of the hostword. The "primary" sentential links, such as those between the verb and its arguments, are raised obligatorily, while other links may be raised optionally. This accounts for the patterns in (1a) and (1b). Our analysis also explains straightforwardly the pattern known from work in the GB/Minimalist framework as Long Head Movement, illustrated in (2b), where the lexical verb is displaced from its base position (2a). This pattern raises problems because it shows properties of head movement but also of phrasal movement, an example of the latter being the apparent violation of the Head Movement Constraint.
(2a) Ivan ga je vidjeo.
     Ivan-nom acc-3sg aux-3sg see-participle
     'Ivan has seen him.'

(2b) Vidjeo ga je.
     see-participle acc-3sg aux-3sg
     'He has seen him.'
We show that our Word Grammar analysis predicts the existence of the pattern in (2b). Generalised raising gives priority, in terms of word order, to additional dependencies at the level of the hostword. This frees the verb to be displaced on its own without dragging its arguments along, giving rise to (2b).
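As a purely descriptive illustration of the generalisation in (1), the sketch below checks that a clitic cluster is contiguous and begins immediately after either the first word or the first phrase of the clause. The bracketing of the first phrase is supplied by hand, and none of the Word Grammar machinery (hostwords, generalised raising) is modelled; the final example simply shows a placement the generalisation excludes.

```python
# Descriptive check of the generalisation in (1): the clitic cluster is
# contiguous and starts immediately after the first word or the first phrase
# of the clause. The "first phrase" length is supplied by hand; the Word
# Grammar machinery (hostwords, generalised raising) is not modelled.

CLITICS = {"mu", "ga", "je"}   # the pronominal and auxiliary clitics in (1)

def cluster_is_second_position(words, first_phrase_len):
    """True if all clitics form one contiguous cluster starting right after
    the first word or right after the first phrase."""
    positions = [i for i, w in enumerate(words) if w in CLITICS]
    if not positions:
        return True
    contiguous = positions == list(range(positions[0], positions[-1] + 1))
    return contiguous and positions[0] in (1, first_phrase_len)

# (1b): cluster after the first word of the subject phrase "moj brat"
print(cluster_is_second_position("moj mu ga je brat dao".split(), 2))   # True
# (1a): cluster after the whole first phrase
print(cluster_is_second_position("moj brat mu ga je dao".split(), 2))   # True
# A placement the generalisation excludes (cluster clause-finally)
print(cluster_is_second_position("moj brat dao mu ga je".split(), 2))   # False
```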