Abstract
Multimodality approaches representation and communication as something more than language. It attends to the complex repertoire of semiotic resources and organizational means through which people make meaning: image, speech, gesture, writing, three-dimensional forms, and so on. Strictly speaking, then, multimodality refers to a field of application rather than a theory. A variety of disciplines and theoretical approaches can be used to explore different aspects of the multimodal landscape. Psychological theories can be applied to look at how people perceive different modes, or to understand the impact of one mode over another on memory, for example. Sociological and anthropological theories and interests could be applied to examine how communities use multimodal conventions to mark and maintain identities. The term 'multimodality' is, however, strongly linked with social semiotic theory and is widely used to stand for 'multimodal social semiotics'. This is how multimodality is used in this chapter.

Multimodality is concerned with signs and starts from the position that, like speech and writing, all modes consist of sets of semiotic resources: resources that people draw on and configure in specific moments and places to represent events and relations. From this perspective, the modal resources a teacher or student chooses to use (or is given to use) are significant for teaching and learning. In this way, a multimodal approach rejects the traditional, almost habitual, conjunction of language and learning. Using a multimodal approach means looking at language as it is nestled and embedded within a wider social semiotic, rather than a decision to 'side-line' language. Examining multimodal discourses across the classroom makes more visible the relationship between teachers' and students' use of semiotic resources and the production of curriculum knowledge, student subjectivity, and pedagogy. Multimodality is to some extent an eclectic approach.
Linguistic theories, in particular Halliday's social semiotic theory of communication (Halliday, 1978) and developments of that theory (Hodge and Kress, 1988), provided the starting point for multimodality. Some saw a linguistic model as wholly adequate to investigate all modes, while others set out to expand and re-evaluate this frame of reference by drawing on other approaches (e.g. film theory, musicology, game theory). The influence of cognitive and sociocultural research on multimodality is also present, particularly Arnheim's work on visual communication and perception (1969). Many of the concerns that underpin multimodality also build on anthropological and social research, specifically the work of Barthes (1993), Bateson (1977), Foucault (1991), Goffman (1979), and Malinowski (2006), among others.

By the mid to late 1990s, a few books and papers on multimodality were starting to be published. The primary focus of this work was visual communication and the relationship between image and writing. The work of Gunther Kress and Theo van Leeuwen (1996), the New London Group (1996), and Michael O'Toole (1994) was particularly significant for multimodal research within education. This work challenged the notion that learning is primarily a linguistic accomplishment, sketched key questions for a multimodal agenda, and began to define conceptual tools for thinking about teaching and learning beyond language. The call to understand pedagogy as multimodal was radical when it was first made. A key design element of a future pedagogy of multiliteracies was heralded as 'designs for other modes of meaning' (New London Group, 1996).

In part, this call was a response to the social and cultural reshaping of the communicational landscape (related to globalization, new technologies, and new demands for work). In a sense, the conclusion that reading this 'new' multimedia, multimodal landscape for its linguistic meanings alone is not enough was inevitable. A special issue of Linguistics and Education on multimodality was an important publication (and one of the first) to provide tools for educational researchers wanting to undertake multimodal research (Lemke, 1998).

Attempting to understand the relationship between image and text was central to the development of research on multimodality. The view that 'non-linguistic' modes are redundant was argued against, and the idea that the meanings of modes are incommensurable was key. Reading Images (Kress and van Leeuwen, 1996) opened the door for multimodality through its discussion of key concepts such as composition, modality, and framing. This work offers a framework for describing the semiotic resources of images and analyses how these resources can be configured to design interpersonal meaning, to present the world in specific ways, and to realize coherence. It demonstrates and generates a series of semiotic network maps showing the semiotic resources of image in play and how discourses are articulated visually through the design of these resources.

[Carey Jewitt. In M. Martin-Jones, A. M. de Mejia and N. H. Hornberger (eds), Encyclopedia of Language and Education, 2nd Edition, Volume 3: Discourse and Education, 357–367. © 2008 Springer Science+Business Media LLC.]
Citation
Jewitt, C. (2017). Multimodal Discourses Across the Curriculum. In Language, Education and Technology (pp. 31–43). Springer International Publishing. https://doi.org/10.1007/978-3-319-02237-6_4