Speech Processing by Computer

 

LECTURE 6b

NEURAL NETWORKS

 

 

Objectives

 

By the end of the session you should:

•       be able to describe three applications of neural networks in speech and language

•       have a sense of how neural network models may be used to process language data, and how they may be trained and tested

•       be able to describe some strengths and weaknesses of the three applications, particularly with reference to symbolic alternatives

 

Outline

 

1.      Learning past-tense morphology (see the first sketch below)

•       Generating the phonological representation of the past-tense form of given verb roots

•       Learned from examples

•       Tested for generalisation ability

2.      Lexical access (see the second sketch below)

•       Identifying words from phonetic transcription

•       Lexical effect, lexical segmentation, time course of lexical access

3.      Phoneme recognition (see the third sketch below)

•       Identifying phoneme probabilities from acoustic input

•       Learned from a large annotated speech corpus

4.      Issues

•       Generalisation abilities of networks

•       Predictive power of networks as cognitive models

•       Networks as practical models of some cognitive processes
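
To make topic 1 concrete, here is a minimal Python sketch of a single-layer pattern associator trained with the delta rule, in the spirit of Rumelhart & McClelland's past-tense model but not their actual Wickelfeature network: verb roots and past-tense forms are encoded as sets of character trigrams, the weights are learned from a handful of example pairs, and generalisation is probed with a novel root. The tiny word list, the trigram encoding and all learning constants are illustrative assumptions.

# Minimal sketch, not Rumelhart & McClelland's model: a single-layer
# associator mapping a crude trigram encoding of a verb root to a
# trigram encoding of its past-tense form.
import numpy as np

PAIRS = [("walk", "walked"), ("jump", "jumped"), ("look", "looked"),
         ("play", "played"), ("call", "called"), ("sing", "sang")]

def trigrams(word):
    w = "#" + word + "#"                       # '#' marks word boundaries
    return {w[i:i + 3] for i in range(len(w) - 2)}

# Feature inventories built from the training pairs.
in_feats = sorted({t for root, _ in PAIRS for t in trigrams(root)})
out_feats = sorted({t for _, past in PAIRS for t in trigrams(past)})

def encode(word, feats):
    present = trigrams(word)
    return np.array([1.0 if f in present else 0.0 for f in feats])

X = np.stack([encode(root, in_feats) for root, _ in PAIRS])
Y = np.stack([encode(past, out_feats) for _, past in PAIRS])

# One weight matrix, sigmoid output units, trained with the delta rule.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(len(in_feats), len(out_feats)))
for epoch in range(2000):
    pred = 1.0 / (1.0 + np.exp(-X @ W))
    W += 0.5 * X.T @ (Y - pred) / len(PAIRS)

def past_tense_features(root, threshold=0.5):
    """Output trigram features strongly activated by a (possibly novel) root."""
    p = 1.0 / (1.0 + np.exp(-encode(root, in_feats) @ W))
    return [f for f, a in zip(out_feats, p) if a > threshold]

# Generalisation probe with a root not seen in training; a real model would
# decode these features back into a phoneme string.
print(past_tense_features("talk"))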
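
For topic 2, the following Python sketch is a crude activation-and-competition model loosely inspired by TRACE (it is not the TRACE model, and it omits TRACE's top-down feedback): word units accumulate bottom-up support from a phonetic transcription presented one segment at a time and compete through lateral inhibition, so the time course of lexical access can be read off the changing activations. The lexicon, the pseudo-phonetic symbols and all constants are invented for illustration.

# Minimal activation-and-competition sketch for lexical access.
LEXICON = {"cat": "k@t", "cap": "k@p", "can": "k@n", "bat": "b@t"}

def lexical_access(phonemes, steps_per_phoneme=5,
                   excite=0.2, inhibit=0.05, decay=0.1):
    act = {w: 0.0 for w in LEXICON}
    for t, ph in enumerate(phonemes):
        for _ in range(steps_per_phoneme):
            total = sum(act.values())
            new = {}
            for w, pron in LEXICON.items():
                # Bottom-up support if this word predicts phoneme ph at position t.
                support = excite if t < len(pron) and pron[t] == ph else 0.0
                competition = inhibit * (total - act[w])   # lateral inhibition
                new[w] = max(0.0, act[w] + support - competition - decay * act[w])
            act = new
        # Time course: candidate set narrows as more of the input arrives.
        print(f"after /{ph}/:", {w: round(a, 2) for w, a in act.items()})
    return max(act, key=act.get)

print("recognised:", lexical_access("k@t"))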
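
For topic 3, this Python sketch shows the general shape of recurrent phone-probability estimation, not Robinson's architecture: each acoustic frame updates a hidden state and a softmax layer outputs a probability for every phoneme class. The layer sizes, the random weights and the random "acoustic" frames are placeholders; a real system learns the weights from a large annotated speech corpus with backpropagation through time.

# Minimal sketch of per-frame phoneme probability estimation with an
# Elman-style recurrent net; weights here are random, for shape only.
import numpy as np

N_FEATS, N_HIDDEN, N_PHONES = 13, 32, 40   # assumed sizes (e.g. 13 cepstral features)
rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(N_HIDDEN, N_FEATS))
W_rec = rng.normal(scale=0.1, size=(N_HIDDEN, N_HIDDEN))
W_out = rng.normal(scale=0.1, size=(N_PHONES, N_HIDDEN))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def phone_posteriors(frames):
    """Return one probability distribution over phoneme classes per frame."""
    h = np.zeros(N_HIDDEN)
    posteriors = []
    for x in frames:                        # one acoustic feature vector per frame
        h = np.tanh(W_in @ x + W_rec @ h)   # hidden state carries left context
        posteriors.append(softmax(W_out @ h))
    return np.stack(posteriors)

frames = rng.normal(size=(50, N_FEATS))     # 50 random stand-in frames
P = phone_posteriors(frames)
print(P.shape, P[0].argmax(), P.sum(axis=1)[:3])   # (50, 40); rows sum to 1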

 

Reading

 

Recommended:

•       “Connectionism”, Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/entries/connectionism/

 

Background:

•       D. E. Rumelhart & J. L. McClelland, “On learning the past tenses of English verbs”, in Parallel Distributed Processing, Vol. 2, Chapter 18, MIT Press, 1986, pp. 216-271.

 

•       J. L. McClelland & J. L. Elman, “Interactive processes in speech perception: the TRACE model”, in Parallel Distributed Processing, Vol. 2, Chapter 15, MIT Press, 1986, pp. 58-121.

 

•       A. J. Robinson, “An application of recurrent nets to phone probability estimation”, IEEE Transactions on Neural Networks, 5 (1994), pp. 298-305.