Valerie Hazan
Status: BSc, MA, PhD, Professor in Speech Sciences
Address: Room 315, Chandler House, UCL, Wakefield Street, London
Phone: +44 (0)20 7679 4076
Email: v.hazan@ucl.ac.uk
Home page: http://www.phon.ucl.ac.uk/home/val/home.html
Primary Dept.: Speech, Hearing and Phonetic Sciences
RAE Research group: Speech and Hearing
Interests: Development of phoneme categorisation in normally-hearing and hearing-impaired children and in second-language learners; effects of listener- and speaker-related factors on speech intelligibility; effects of auditory and auditory-visual training for second-language learners.
Research Projects
- Speaker-controlled variability in connected discourse: acoustic-phonetic characteristics and impact on speech perception
(2008-2011)
This project investigates why certain speakers are easier to understand than others. Speech production is highly variable both across and within speakers; this variability is partly due to differences in vocal tract anatomy and partly under the speaker's control. The project examines whether clearer speakers are more extreme in their articulations (as measured from the acoustic properties of their speech) or whether they are more consistent in their production of speech sounds. To better model natural communication, the speech to be analysed is recorded using a new task designed to elicit spontaneous dialogue containing specific keywords. The first study investigates whether 'inherent' speaker clarity is consistent across different types of discourse and whether speaker clarity is more closely correlated with cross-category differences or with within-category consistency in production (two acoustic measures of this kind are sketched after the project list below). The second study investigates whether clearer speakers show a greater degree of adaptation to the needs of their listeners. This research has implications for models of speech perception. Understanding what makes a 'clear speaker' will also be informative for applications requiring clear communication, such as teaching, speech and language therapy, and the selection of voices for clinical testing and for speech technology applications.
- A UCL Video Data Archive for Human Communication
(2007-2008)
- Clarifying the speech perception deficits of dyslexic children
(2005-2008)
This project will investigate how children with specific reading difficulties (SRD, also known as dyslexia) and those who are reading normally perceive the sounds of speech. To decode speech, listeners need to be able to ignore ‘irrelevant’ variation in the speech signal that is linked to differences in speaker, speaking style, accent, etc. It is claimed that children with SRD are more sensitive to these variations than other children. We will test this claim using tasks in which we can manipulate specific acoustic patterns within the word. We will then test children’s perception of many different consonants to better understand what makes some more difficult to identify than others. Finally, we will test children’s ability to adapt to different speakers and speaking styles.
- Acoustic and visual enhancement of speech for computer-based auditory training
(2000-2003)
Does seeing the speaker help in learning tricky aspects of a new language? A synthetic face is used to support modelling of troublesome phonetic gestures by second-language learners.
- The effect of speaker variability on speech perception in children
(1999-2002)
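The contrast drawn in the first project between cross-category differences and within-category consistency can be made concrete with simple acoustic measures. The sketch below is an illustration only, not the project's actual analysis: it uses hypothetical per-token F1/F2 measurements and computes two proxies common in clear-speech research, vowel space area (from the convex hull of vowel category means) and mean within-category dispersion of tokens around those means.

# A minimal sketch (assumed measures, hypothetical data), in Python.
import numpy as np
from scipy.spatial import ConvexHull

def clarity_proxies(tokens):
    """tokens: dict mapping vowel category -> array of shape (n, 2),
    each row a per-token (F1, F2) measurement in Hz.
    Returns (vowel_space_area, mean_within_category_dispersion)."""
    means = {v: f.mean(axis=0) for v, f in tokens.items()}

    # Cross-category proxy: area of the convex hull of the category means
    # in the F1-F2 plane (a larger area suggests more extreme, better
    # separated vowel articulations).
    hull = ConvexHull(np.array(list(means.values())))
    vowel_space_area = hull.volume  # for 2-D points, .volume is the area

    # Within-category proxy: average distance of each token from its
    # category mean (a smaller value suggests more consistent productions).
    dists = [np.linalg.norm(f - means[v], axis=1).mean()
             for v, f in tokens.items()]
    dispersion = float(np.mean(dists))

    return vowel_space_area, dispersion

# Example with made-up formant values for four vowel categories:
rng = np.random.default_rng(0)
tokens = {
    "i":  rng.normal([300, 2300], 40, (20, 2)),
    "a":  rng.normal([750, 1300], 40, (20, 2)),
    "u":  rng.normal([320, 800],  40, (20, 2)),
    "ae": rng.normal([650, 1800], 40, (20, 2)),
}
area, disp = clarity_proxies(tokens)
print(f"vowel space area: {area:.0f} Hz^2, "
      f"within-category dispersion: {disp:.1f} Hz")

Comparing such measures across speakers ranked by intelligibility would indicate whether clarity tracks cross-category separation, within-category consistency, or both; the function name and the formant values above are invented for the example.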