Completed Research Projects

This is an incomplete list of past projects.

Pragmatics Projects

    A Unified Theory of Lexical Pragmatics

    Objectives include: (a) to develop a unified, cognitively plausible account of lexical-pragmatic processes and compare it with some alternative accounts; (b) to consider how far lexical-pragmatic processes are governed by general pragmatic principles which apply at both word and sentence level; (c) to consider the implications of our account for the traditional notion of literal meaning; (d) to investigate whether creative, occasion-specific uses (often found in literary works) involve the same processes as more conventional uses.

    Dates: 2003-2007. Funded by: AHRB. Duration: 3 years.

    Researchers: Deirdre Wilson, Robyn Carston

    Experimental Investigations of Semantic-Pragmatic Inferences

    This programme of research has two main objectives. The first is to investigate empirically whether widespread intuitive assumptions at the semantics/pragmatics interface actually enjoy psycholinguistic validity. Since groundbreaking work in the 1970s, certain fundamental assumptions about scalar implicatures and presuppositions have been uncontroversially adopted by theories of semantics and pragmatics. These assumptions are based on reflective judgements and introspection. Our first goal is therefore to test whether these assumptions, which are the building blocks of all major linguistic theories, are empirically supported. Evidence that they are not psychologically valid would have a direct impact on semantics and pragmatics, since accounting for the data would then require a quite different class of theories from those currently available.

    The second aim is to contribute to the current landscape of linguistic theorising by testing theory-specific hypotheses that are not amenable to reflective judgement and introspection. Current theories of semantics and pragmatics adhere to three basic modes of explanation: Grammatical, Pragmatic-Semantic, and Pragmatic. These models make sophisticated and elaborate predictions about the default status with which implicatures are generated and presuppositions are accommodated, as well as the degree of context- and structure-dependency of these inferences. The hypotheses concerning the generation of implicatures and the accommodation of presuppositions are too fine-grained to be assessed by introspection. At this level of detail, the only relevant data come from investigations of the real-time processing of these inferences, measuring reading times, reaction times and accuracy in time windows of less than a second. Our second aim is thus to produce theory-critical data by employing standardised behavioural methodologies from experimental psychology that have not previously been applied to semantics and pragmatics.

    Overall, this research will contribute to building a body of theory-neutral, experimentally generated data on implicatures and presuppositions, and will put to the test the major current theories of semantics and pragmatics that aim to account for these inferences in a unified way. In doing so, it will also bring greater linguistic insight into psycholinguistic research, which has tended to investigate issues in core grammar, syntax, phonology and morphology while neglecting semantic and pragmatic issues in the interpretation of grammatical constructions.

    Dates: 2007-2010. Funded by: AHRC. Duration: 30 months.

    Researchers: Richard Breheny

    Inferential Processes in literal and figurative lexical interpretation: A comparative psycho-pragmatic study of healthy, autistic and schizophrenic subjects

    Dates: 2006-2009. Funded by: European Commission.

    Researchers: Robyn Carston, Paula Rubio

    Inferential processes in lexical interpretation: A psycho-pragmatic study of healthy, autistic and schizophrenic participants

    Dates: 2005-2008. Funded by: British Academy.

    Researchers: Paula Rubio

Syntax Projects

    A Flexible Theory of Topic and Focus Movement

    The basic idea of the project is that topic and focus movement do not take place in order to check a syntactic feature [+focus] or [+topic] in the specifier of a designated functional projection. In fact, we believe that there is evidence that such features and such projections do not exist. Instead, we argue that topic and focus movement take place in order to facilitate the mapping between syntax and information structure.

    Dates: 2006-2009. Funded by: AHRC. Duration: 3 years.

    Researchers: Michael Brody, Hans van de Koot, Ad Neeleman, Reiko Vermeulen

Phonetics Projects

    Centre for Law Enforcement Audio Research (CLEAR)

    The CLEAR project aims to create a centre of excellence in tools and techniques for the cleaning of poor-quality audio recordings of speech. The centre is initially funded by the U.K. Home Office for a period of five years and will be run in collaboration with the Department of Electrical and Electronic Engineering at Imperial College.
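    As a rough illustration of what cleaning a poor-quality recording can involve, the sketch below applies classical spectral subtraction to a noisy speech file. It is only a generic baseline under stated assumptions (the file name and the assumption that the first half-second is noise-only are hypothetical), not the CLEAR centre's actual tools or techniques.

```python
# Minimal spectral-subtraction sketch (illustrative baseline only, not CLEAR's method).
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft, istft

rate, noisy = wavfile.read("noisy_speech.wav")   # hypothetical input file, assumed mono
noisy = noisy.astype(np.float64)

# Short-time spectrum of the noisy recording.
f, t, spec = stft(noisy, fs=rate, nperseg=512)
mag, phase = np.abs(spec), np.angle(spec)

# Estimate the noise spectrum from an assumed speech-free leading segment (first 0.5 s).
noise_mag = mag[:, t <= 0.5].mean(axis=1, keepdims=True)

# Subtract the noise estimate, keeping a small spectral floor to limit artefacts.
clean_mag = np.maximum(mag - noise_mag, 0.05 * mag)

# Resynthesise with the original phase and save.
_, cleaned = istft(clean_mag * np.exp(1j * phase), fs=rate, nperseg=512)
wavfile.write("cleaned_speech.wav", rate, cleaned.astype(np.int16))
```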

    Dates: 2007-2012. Funded by: U.K. Home Office. Duration: 5 years.

    Researchers: Mark Huckvale

    Local and global pitch contours in intonation

    With Douglas Honorof, Haskins Laboratories, USA. The goal of this research is to understand the relation between underlying pitch targets and the articulatory constraints on pitch production, and the relation between local and global aspects of intonation. We will investigate, in both Chinese and English, three factors important in forming surface F0 contours: (a) the nature of underlying local pitch targets, (b) the articulatory constraints under which pitch targets are implemented, and (c) the interaction between global intonation patterns and local pitch targets.
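    Surface F0 contours of this kind are typically measured frame by frame from recordings. The sketch below shows a minimal autocorrelation-based pitch tracker purely as a generic illustration of how such contours can be extracted; the file name, frame sizes and 75-500 Hz search range are assumptions, and this is not the analysis method used in the project.

```python
# Minimal autocorrelation-based F0 tracker (generic illustration only).
import numpy as np
from scipy.io import wavfile

rate, x = wavfile.read("utterance.wav")          # hypothetical input file, assumed mono
x = x.astype(np.float64)

frame_len, hop = int(0.040 * rate), int(0.010 * rate)   # 40 ms frames, 10 ms hop
lag_min, lag_max = int(rate / 500), int(rate / 75)      # assumed 75-500 Hz pitch range

f0 = []
for start in range(0, len(x) - frame_len, hop):
    frame = x[start:start + frame_len]
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[frame_len - 1:]   # autocorrelation, lags >= 0
    if ac[0] <= 0:
        f0.append(0.0)                           # silent frame: treat as unvoiced
        continue
    ac = ac / ac[0]                              # normalise by zero-lag energy
    lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
    f0.append(rate / lag if ac[lag] > 0.3 else 0.0)   # crude voicing decision
f0 = np.array(f0)                                # one F0 estimate per 10 ms frame; 0 = unvoiced
```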

    Dates: 1999-2004. Funded by: NIH.

    Researchers: Yi Xu

    ProSynth: An integrated prosodic approach to device-independent, natural-sounding speech synthesis

    This collaborative project between Linguistics departments in Cambridge, London and York aimed to construct a model of computational phonology that integrates and extends modern metrical approaches to phonetic interpretation and to apply this model to the generation of high-quality speech synthesis. The three focal areas of research were intonation, morphological structure and systematic segmental variation. Integrating these is a temporal model that provides a linguistic structure or 'data object' upon which phonetic interpretation is executed and which delivers control information for synthesis.

    Dates: 1997-2001. Funded by: EPSRC. Duration: 4 years.

    Researchers: Jill House, Mark Huckvale

    Role of sensory feedback in speech production as revealed by the effects of pitch- and amplitude-shifted auditory feedback

    With Charles Larson and colleagues, Northwestern University, USA. The overall goal of this research project is to understand the function of sensory feedback in the control of voice fundamental frequency (F0) and intensity through the technique of reflex testing. The specific aims of the project are: to determine whether the magnitudes of the pitch-shift and loudness-shift reflexes depend on the vocal task; to determine whether the direction of the pitch-shift and loudness-shift reflexes depends on the reference used for error correction; and to investigate mechanisms of interaction between kinesthetic and auditory feedback in voice control. The overall hypothesis is that sensory feedback is modulated according to the specific vocal tasks in which subjects are engaged. By testing reflexes in different tasks, we will learn how sensory feedback is modulated in those tasks. We also hypothesize that auditory reflexes, like reflexes in other parts of the body, may reverse their direction depending on the vocal task. The mechanisms controlling such reflex reversals will be investigated, and this information will be important for understanding some voice disorders. It is also hypothesized that kinesthetic and auditory feedback interact in their control of the voice. Applying a temporary anesthetic to the vocal folds while simultaneously testing auditory reflexes will provide important information on the brain mechanisms that govern the interaction between these two sources of feedback.

    Dates: 2004-2009. Funded by: NIH. Duration: 5 years.

    Researchers: Yi Xu

    SIPhTra: System for Interactive Phonetics Training & Assessment

    An innovative method known as "Analytic Listening" (AL) has been developed at UCL as a tool for auditory training in phonetics. Its analytic approach formalises and sets a standard for good practice in this area. It is a flexible tool which can be adapted to class teaching or self-paced study, and which enables well-defined objective assessment of student attainment. Popular with users, it builds student confidence in an area often considered difficult. A major advance is now in progress as a result of the structured combination of AL with multimedia techniques which will support the incorporation of phonetic symbols and graphical displays.

    Dates: 1997-2000. Funded by: HEFCE/DENI FDTL. Duration: 33 months.

    Researchers: John Maidment, Jill House

    Signalling Focus in Sicilian Italian

    Purpose of project: an experimental investigation into the intonation-syntax interface in the signalling of different types of focus in Sicilian Italian.

    Dates: 2006-2007. Funded by: British Academy. Duration: 1 year.

    Researchers:

Speech & Hearing Science Projects

    Acoustic and visual enhancement of speech for computer-based auditory training

    Dates: 2000-2003. Funded by: EPSRC.

    Researchers: Valerie Hazan, Andrew Faulkner

    Clarifying the speech perception deficits of dyslexic children

    This project will investigate how children with specific reading difficulties (SRD, or dyslexia) and children who are reading normally perceive the sounds of speech. To decode speech, listeners need to be able to ignore ‘irrelevant’ variation in the speech signal that is linked to differences in speaker, speaking style, accent, etc. It is claimed that children with SRD are more sensitive to these variations than other children. We will examine this claim using tests in which we can manipulate specific acoustic patterns within the word. We will then test children’s perception of many different consonants to try to better understand what makes some more difficult to identify than others. Finally, we will test children’s ability to adapt to different speakers and speaking styles.

    Dates: 2005-2008. Funded by: Wellcome Trust. Duration: 3 years.

    Researchers: Stuart Rosen, Valerie Hazan, Souhila Messaoud-Galusi

    HearCom: Hearing in the communication society

    HearCom is an integrated project under the FP6 ICT programme. It involves 30 partners from 12 countries and is coordinated by Tammo Houtgast and Marcel Vlaming from the VU University Medical Center in Amsterdam. Our society is strongly and increasingly communication-oriented. Since much of this communication relies on sound and speech, many people experience severe limitations in their activities, caused either by hearing loss or by poor environmental conditions. The HearCom project aims to reduce these limitations in auditory communication.

    Dates: 2004-2009. Funded by: CEC (EU).

    Researchers: Andrew Faulkner

    Optimisation of voice pitch information in cochlear implant speech processing

    Main aim: to improve the transmission of pitch-related temporal information through a cochlear implant. Importance and timeliness: current cochlear implant speech processing methods have been optimised for speech intelligibility in deafened adults. They provide very limited information for signalling variations in the pitch of speech, especially over the range of pitch that is significant for the deaf child both in communication and in the development of spoken language. Cochlear implants are now being provided to deaf children in increasing numbers, yet there has been minimal attention to processing methods adapted to their needs.
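    One reason current processing conveys so little pitch information is that implant-style processing typically reduces each frequency band to a slowly varying amplitude envelope. The sketch below is a generic noise-excited vocoder of the kind often used to simulate such processing in listening experiments; the channel count, band edges, 30 Hz envelope cutoff and file name are assumptions, and it is not the processing scheme developed in this project.

```python
# Generic 8-channel noise vocoder (illustration of envelope-based processing only).
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

rate, x = wavfile.read("sentence.wav")           # hypothetical input file, assumed mono
x = x.astype(np.float64)

band_edges = np.geomspace(100, 5000, 9)          # 8 analysis bands between 100 Hz and 5 kHz
env_lp = butter(2, 30, btype="lowpass", fs=rate, output="sos")   # 30 Hz envelope smoothing
noise = np.random.randn(len(x))                  # broadband noise carrier
out = np.zeros(len(x))

for lo, hi in zip(band_edges[:-1], band_edges[1:]):
    band = butter(4, [lo, hi], btype="bandpass", fs=rate, output="sos")
    envelope = sosfiltfilt(env_lp, np.abs(sosfiltfilt(band, x)))       # slow amplitude envelope
    out += np.clip(envelope, 0.0, None) * sosfiltfilt(band, noise)     # re-impose on noise band

out *= np.max(np.abs(x)) / np.max(np.abs(out))   # roughly match the original peak level
wavfile.write("vocoded.wav", rate, out.astype(np.int16))
```

    Because each band's envelope is smoothed well below typical voice F0, the resynthesised signal preserves intelligibility cues while conveying little of the pitch variation discussed above.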

    Dates: 2005-2008. Funded by: RNID. Duration: 3 years.

    Researchers: Andrew Faulkner, Stuart Rosen, Tim Green

    SYNFACE: Synthesised talking face derived from speech for hearing disabled users of voice channels

    The main purpose of the SYNFACE project is to increase the possibilities for hard-of-hearing people to communicate by telephone. Many people use lip-reading during conversations, and this is especially important for hard-of-hearing people; however, it is clearly not possible over the telephone. This project aims to develop a talking face controlled by the incoming telephone speech signal. The talking face will facilitate speech understanding by providing lip-reading support. This method works with any telephone and is cost-effective compared with video telephony and text telephony, which require compatible equipment at both ends.

    Dates: 2001-2004. Funded by: CEC Framework V. Duration: 3 years.

    Researchers: Andrew Faulkner

    Second Language Vowel Perception

    This project examines vowel perception and plasticity during second-language (L2) learning by adults. The study evaluates whether individuals learn to 'perceptually switch' between their first-language (L1) and L2 vowel systems, and assesses the role of fine-grained phonetic detail in the learning process. Study 1 will use a new method to generate phonetically detailed L1 and L2 perceptual vowel maps for native speakers of Norwegian, German, Spanish, and French. Study 2 will train matched groups of German and Spanish learners to identify English vowels and examine how their L1 and L2 vowel spaces change over time. Study 3 will train French speakers with varying English-language experience. The research will contribute to our scientific understanding of phonetic perception and plasticity, introduce methodological innovations, and help guide the development of new computer-based phonetic training methods.

    Dates: 2005-2008. Funded by: ESRC. Duration: 3 years.

    Researchers: Paul Iverson

    Speaker-controlled variability in connected discourse: acoustic-phonetic characteristics and impact on speech perception

    This project investigates why certain speakers are easier to understand than others. Speech production is highly variable both across and within speakers. This is partly due to differences in vocal tract anatomy and partly under the control of the speaker. This project examines whether clearer speakers are more extreme in their articulations (as measured from the acoustic properties of their speech) or whether they are more consistent in their production of speech sounds. In order to better model natural communication, the speech to be analysed is recorded using a new task aimed at eliciting spontaneous dialogue containing specific keywords. The first study investigates whether 'inherent' speaker clarity is consistent across different types of discourse and whether speaker clarity is more closely correlated with cross-category differences or within-category consistency in production. The second study investigates whether clearer speakers show a greater degree of adaptation to the needs of listeners. This research has implications for models of speech perception. Understanding what makes a 'clear speaker' will also be informative for applications requiring clear communication, such as teaching, speech and language therapy, and the selection of voices for clinical testing and for speech technology applications.

    Dates: 2008-2011. Funded by: ESRC. Duration: 3 years.

    Researchers: Valerie Hazan

    Speech processors for combined electrical and acoustic hearing

    A substantial number of cochlear implant users have considerable residual hearing in the unimplanted ear and recent studies have demonstrated that the use of a contralateral hearing aid often provides significantly improved speech perception, particularly in noise. The factors responsible for bimodal benefits are not well understood, though it appears likely that they result mostly from the provision of complementary information across modalities, rather than true binaural interactions. The proposed work will examine factors likely to be important in optimising the bimodal transmission of speech spectral information, focusing on three aspects of place-coding. This research will both clarify our understanding of factors underlying bimodal benefits and help to develop clinically applicable methods for optimally combining an implant and a contralateral hearing aid, thus providing a highly cost-effective way to improve everyday perceptual performance in many users of cochlear implants.

    Dates: 2006-2009. Funded by: RNID.

    Researchers: Andrew Faulkner

    The Effects of Pulse Rate in Cochlear Implants

    Dates: 2003-2007. Funded by: RNID.

    Researchers: Stuart Rosen

    The effect of speaker variability on speech perception in children

    Dates: 1999-2002. Funded by: EPSRC.

    Researchers: Valerie Hazan

    The effects of phoneme discrimination and semantic therapies for speech perception deficits in aphasia

    Dates: 2005-2008. Funded by: The Stroke Association.

    Researchers:

 
