Session "Reality for the Brain": Do Phonological Features have any Reality for the Brain?
Reality for the Brain-1
SOME MEG CORRELATES FOR DISTINCTIVE FEATURES
William J. Idsardi, Department of Linguistics and Program in Neuroscience and Cognitive Science, University of Maryland
This presentation reviews the use of distinctive features for the mental representation of speech sounds, briefly considering three bases for feature definition: articulatory, auditory and translational. We then review several recent neuroimaging studies examining distinctive features using magnetoencephalography (MEG). Although this area of research is still relatively new, we already have interesting findings regarding vowel place, nasality and consonant voicing. While this research is not yet definitive, some refinements of these experiments can be expected to yield important results for feature theory, and more generally for our understanding of the neural computations that underlie the transformations between articulatory and auditory space necessary to produce and perceive speech.
Reality for the Brain-2
REPRESENTATION OF PHONOLOGICAL FEATURES IN THE BRAIN: EVIDENCE FROM MISMATCH NEGATIVITY
Carsten Eulitz, Department of Linguistics, University of Konstanz
The representation of phonological features in the mental lexicon has been examined using event-related brain responses such as the mismatch negativity (MMN; an automatic auditory change-detection response in the brain) or the P350 component (a correlate of lexical activation). This presentation will summarize MMN studies that provide support for (i) models proposing abstract underspecified representations in the mental lexicon, i.e. not all phonological features are stored; and (ii) a top-down influence of the language-specific phonological system on the fine structure of phonological representations. Constraints on using the MMN for investigations of phonological representations will also be discussed.
Reality for the Brain-3
PHONOLOGICAL ASPECTS OF AUDIOVISUAL SPEECH PERCEPTION
Ingo Hertrich, Department of General Neurology, University of Tuebingen; Werner Lutzenberger, MEG Center, University of Tuebingen; Hermann Ackermann, Department of General Neurology, University of Tuebingen
Based on magnetoencephalographic measurements, this contribution delineates a sequence of processing stages engaged in audiovisual speech perception, giving rise, finally, to the fusion of phonological features derived from auditory and visual input. Although the two channels interact even within early time windows, the definite percept appears to emerge relatively late (> 250 ms after speech onset). Most notably, our data indicate that visual motion is encoded as categorical information even prior to audiovisual fusion, as demonstrated by a non-linear visual /ta/-/pa/ effect. Our findings indicate, first, that modality-specific sensory input is transformed into phonetic features prior to the generation of a definite phonological percept and, second, that cross-modal interactions extend across a relatively large time window. Conceivably, these integration processes during speech perception are susceptible not only to visual input but also to other supramodal influences such as top-down expectations and interactions with lexical data structures.
Reality for the Brain-4
SPEECH SOUND PERCEPTION AND NEURAL REPRESENTATIONS
Maija S. Peltola, Department of Phonetics and the Centre for Cognitive Neuroscience, University of Turku
This commentary reviews some of the main findings in speech sound perception obtained with brain imaging techniques and comments briefly on the recent findings by the session contributors. The main emphasis is on the experimental settings used in these studies. The aim is to demonstrate how the search for the neural correlates of abstract linguistic units has resulted in various types of experimental designs, and how stimulus selection may play a crucial role in the findings. It seems that the experimental settings are becoming increasingly elaborate, thus offering access to the abstract levels of representation.
Reality for the Brain-5
BEHAVIOR REFLECTS THE (DEGREE OF) REALITY OF PHONOLOGICAL FEATURES IN THE BRAIN AS WELL
Holger Mitterer, Max Planck Institute for Psycholinguistics
To assess the reality of phonological features in language processing (as opposed to language description), one needs to specify the distinctive claims of distinctive-feature theory. Two of the more far-reaching claims are compositionality and generalizability. I will argue that a recent behavioral paradigm provides some evidence for the first claim and evidence against the second. Highlighting the contribution of a behavioral paradigm also counterpoints the use of brain measures as the only way to elucidate what is "real for the brain". The contributions of the speakers exemplify how brain measures can help us understand the reality of phonological features in language processing. The evidence is, however, not convincing for (a) the claim of underspecification of phonological features, which has to contend with counterevidence from behavioral as well as brain measures, and (b) the claim of position independence of phonological features.