Commentary on Hurford

Abstract: 59 words

Main Text: 1007 words

References: 277 words

Total Text: 1405 words

 

The neural representation of spatial predicate-argument structures in sign language

 

Bencie Woll

Department of Language and Communication Science

City University London

Northampton Square

London EC1V 0HB

UK

+44-20 7040 8354

b.woll@city.ac.uk      

www.city.ac.uk/lcs

 

Abstract

 

Evidence from studies of the processing of topographic and classifier constructions in sign language sentences provides a model of how mental scene description can be represented linguistically but also raises questions about how this can be related to spatial linguistic descriptions in spoken languages and their processing. This in turn provides insights into models of the evolution of language.

 

Hurford’s paper proposes a ‘wormhole’ between formal logic and empirical neuroscience, identifying PREDICATE(x) as a schematic representation of the brain’s integration of the location of an arbitrary referent object, mapped in parietal cortex, with the analysis of that referent’s properties by other systems. A single point will be raised here for consideration in relation to this proposal. On this model, it might be expected that the parietal lobes would be involved in linguistic comprehension tasks, especially those which demand spatial representational resources. In non-linguistic contexts, a very wide range of spatial functions is associated with the parietal lobes (see Culham & Kanwisher 2001 for a review). However, even when space is referred to in spoken language, there is little evidence that these parietal systems, specialised for spatial processing, are specifically activated.

 

Although parietal regions may be involved in tasks such as solving spatial syllogisms (Carpenter et al 1999) or the generation of spatial prepositions in response to visual images (Damasio et al 2001), this involvement does not appear to be mandatory (Goel et al 1998; Reichle, Carpenter & Just 2000). Indeed, there is evidence that the parietal involvement in the Damasio et al study may arise from the processing of the visual image rather than from the linguistic task itself. Since Hurford claims that the mapping of ‘scenes’ by the parietal cortices underlies the subsequent creation of linguistic structures, this general absence of parietal involvement in spoken language processing needs to be explained.

 

In relation to this point, data from sign language research are of interest. Although the sign languages of contemporary Deaf communities do not provide direct evidence relating to the evolution of human language, since they have arisen in humans with ‘language-ready’ brains, they do provide insight into what an earlier ‘wormhole’ might have looked like.

 

In sign languages, space serves several functions. All signing occurs in ‘sign space’, an area in front of the signer. This space may be regarded in different ways: from a phonological perspective, it serves simply as a region for the execution of signs. At a higher level, entirely abstract sentence meanings can be represented spatially. In the BSL translation of the sentence ‘Knowledge influences belief’, one location in the space in front of the signer is assigned to ‘knowledge’, a second location to ‘belief’, and the verb ‘influence’ moves from the location assigned to ‘knowledge’ towards that assigned to ‘belief’. Such sentences may be regarded as exemplifying referential use of space, in which spatial relations are used to differentiate grammatical classes and semantic roles. In such sentences, and even in less abstract BSL examples, such as ‘The woman keeps hitting the man’, the locations of events in sign space do not represent and are not constrained by ‘real-life’ spatial relations. However, in addition to these functions, BSL sentences can be constructed topographically. In topographic sentences, “the linguistic conventions used in this spatial mapping specify the position of objects in a highly geometric and non-arbitrary fashion by situating certain sign forms (e.g. classifiers) in space such that they maintain the topographic relations of the world-space being described” (Emmorey, Corina & Bellugi, 1995, pp. 43-44).

 

Because of this link between real-world spatial representations and language in such constructions, there has been recent interest in how sign languages may make use of cortical systems specialised for spatial processing, and how this may differ fundamentally from what is found in spoken language (Campbell & Woll, 2003). Two recent functional imaging studies of sign language processing (MacSweeney et al 2002; Emmorey et al 2002) have cast some light on the question.

 

MacSweeney et al (2002) used fMRI to explore the extent to which increasing the topographic processing demands of BSL signed sentences was reflected in the differential recruitment of parietal regions. Enhanced activation was observed in left inferior and superior parietal lobules during processing of topographic BSL sentences (e.g. ‘The pen is to the left of the book on the table’) in contrast to non-topographic sentences (e.g. ‘The brother is older than his sister’). The left inferior parietal lobule is known to be activated in biological action recognition, and in processing the precise configuration and location of hands in space to represent objects, agents and actions. It has also been shown in other studies to be involved in hand movement imagery when contrasted with actual hand movement (Gerardin et al, 2000), and in imagery of hand rotation (Kosslyn et al, 1998). It is not activated in speech comprehension.

 

Emmorey et al (2002) found similar areas of activation in a PET study investigating classifier predicates in ASL. Deaf signers viewed drawings depicting spatial relations between two objects and were asked either to produce a construction using classifiers (e.g. CURVED-OBJECT (the classifier for CUP) signed above FLAT-OBJECT (the classifier for TABLE)) or to produce a sentence using an ASL preposition (CUP ON TABLE). In this study the same parietal cortical region was activated as in MacSweeney et al, but analogous right-sided parietal activation was observed as well. Task differences are likely to have driven the different activation patterns in these two studies, since in Emmorey et al participants had to create sentences in response to images of objects in spatial relations, while in MacSweeney et al participants had only to detect semantically anomalous sentences.

 

Both studies indicate that some aspects of sign language processing require the contribution of cortical regions not associated with spoken language comprehension. Importantly, in MacSweeney et al, no differential activation in these regions was observed when hearing people heard and saw English translations of topographic BSL sentences. Since the visual medium affords the identification of objects and their spatial locations as a function of their forms and locations on the retina and sensory cortex, it is not surprising that cortical systems specialised for such mappings are utilised when sign languages capture these relationships. The absence of such features in spoken language processing suggests that loss of the parietal link relates to the development of speech, rather than to the development of language, and thus provides indirect support for Hurford’s proposal.

 

References

 

Campbell, R., & Woll, B. (2003). Space is special in sign. Trends in Cognitive Sciences, 7(1), 5-7.

Carpenter, P.A., Just, M.A., Keller, T.A., Eddy, W.F., & Thulborn, K.R. (1999). Time course of fMRI activation in language and spatial networks during sentence comprehension. NeuroImage, 10, 216-224.

Culham, J.C., & Kanwisher, N.G. (2001). Neuroimaging of cognitive functions in human parietal cortex. Current Opinion in Neurobiology, 11, 157-163.

Damasio, H., Grabowski, T.J., Tranel, D., Ponto, L.L.B., Hichwa, R.D., & Damasio, A.R. (2001). Neural correlates of naming actions and naming spatial relations. NeuroImage, 13, 1053-1064.

Emmorey, K., Corina, D., & Bellugi, U. (1995). Differential processing of topographic and referential functions of space. In K. Emmorey & J. Reilly (Eds.), Language, Gesture and Space. Hillsdale, NJ: Lawrence Erlbaum Associates.

Emmorey, K., Damasio, H., McCullough, S., Grabowski, T., Ponto, L.L.B., Hichwa, R.D., & Bellugi, U. (2002). Neural systems underlying spatial language in American Sign Language. NeuroImage, 17, 812-824.

Gerardin, E., Sirigu, A., Lehericy, S., Poline, J.P., Gaymard, B., Marsault, C., Agid, Y., & Le Bihan, D. (2000). Partially overlapping neural networks for real and imagined hand movements. Cerebral Cortex, 10, 1093-1104.

Goel, V., Gold, B., Kapur, S., & Houle, S. (1998). Neuroanatomical correlates of human reasoning. Journal of Cognitive Neuroscience, 10, 293-302.

Kosslyn, S.M., DiGirolamo, G.J., Thompson, W.L., & Alpert, N.M. (1998). Mental rotation of objects versus hands: neural mechanisms revealed by positron emission tomography. Psychophysiology, 35, 151-161.

MacSweeney, M., Woll, B., Campbell, R., Calvert, G.A., McGuire, P.K., David, A.S., Simmons, A., & Brammer, M.J. (2002). Neural correlates of British Sign Language processing: Specific regions for topographic language? Journal of Cognitive Neuroscience, 14(7), 1064-1075.

Reichle, E.D., Carpenter, P.A., & Just, M.A. (2000). The neural bases of strategy and skill in sentence-picture verification. Cognitive Psychology, 40, 261-295.