The Cognitive Theory Of Multimedia Learning: What It Is And What It Proposes


When we think back to our years in school, high school, or any other stage of education, most of us agree that an illustrated book or a documentary shown in class was far more enjoyable than plain notes filled with nothing but words and more words.

It is not that an image is worth a thousand words, but images combined with words, whether read or heard, seem to make the information to be learned more powerful and more easily assimilated.

This is what the cognitive theory of multimedia learning defends: it argues that combining information that activates both the verbal and the visual channels helps us achieve deeper learning. Let's look at it below.

What is the cognitive theory of multimedia learning?

Producing multimedia content for pedagogical purposes requires professionals who know both how to design it and how the human mind works. Pedagogues, psychologists, designers, illustrators, programmers and communicators should all take part in designing these resources, since multimedia in itself does not encourage learning; it is the way the material is designed that results in better acquisition of the content taught.

The designer, whatever their field, must know how to take advantage of new technologies and adapt the contents so that the combination of visual and auditory elements supports the didactic objectives of the academic curriculum. Planning and processing the information must be done carefully, since converting it into multimedia elements is not an easy task and requires an investment of time and effort.

With all this in mind, we arrive at the central premise of the cognitive theory of multimedia learning: a model which maintains that information is learned more deeply when it is presented as words and images rather than words alone. That is, transforming classic content, traditionally in written format, into something with visual or auditory support leads to better learning.

This idea comes from Richard Mayer, who in 2005 proposed the cognitive theory of multimedia learning. It is based on the idea that there are three types of memory storage (sensory memory, working memory and long-term memory) and maintains that individuals have two separate channels for processing information, one for verbal material and the other for visual material. Each channel can only process a small amount of information at a time, and learning is supported by presenting content in two different and complementary ways.


Meaningful learning from a multimedia element is the result of the learner's activity when the information presented activates both channels, building ordered and integrated knowledge. Since working memory has a rather limited capacity, presenting too many elements of the same type at once can overload it, exceeding its processing capacity and causing some of the content not to be processed satisfactorily. Thus, to reduce the load, it is better to activate two different channels moderately than a single channel excessively.

Richard Mayer’s Multimedia Learning

Within the cognitive theory of multimedia learning, Richard Mayer maintains that, to reduce the cognitive load on working memory, content should be presented in multimedia format, that is, by activating both ways of receiving information: visual and verbal. His principles of multimedia learning are directly related to the ideas of John Sweller's cognitive load theory.

It is worth clarifying what is meant by multimedia content. We speak of multimedia content when information is presented, for example in a presentation or communication, that includes words and images aimed at promoting learning. Starting from this idea and based on his scientific research, Mayer formulated eleven principles that serve as a guide when designing multimedia material and that focus on facilitating learning, whether or not the learner has prior knowledge related to the new information.

Thus, the cognitive theory of multimedia learning holds that by understanding how a learner's mind processes information, the acquisition of content can be optimized to the maximum. With this in mind, guides can be designed for managing and designing multimedia content, with the intention of making it easier for the student to build mental schemas of the new content, automate it, and store it in long-term memory.

The three foundations of the theory

The theory rests on three foundations that justify its central premise: that content is learned more deeply when it is presented as a combination of words and images.

1. Images and words are not equivalent

The saying that a picture is worth a thousand words is not quite true. Images and words are not equivalent and do not provide the same information; rather, they complement each other. Through words we can better understand an image, and through images we can form a clearer idea of what is stated in a text.

2. Verbal and visual information are processed through different channels

As we have already suggested, verbal or auditory information and visual or pictorial information are retained and processed in different channels. Processing information in more than one channel gives us advantages in capacity, encoding, and retrieval. In this way, the memory trace and its storage in long-term memory are strengthened.


3. Integrating words and images produces deeper learning

Integrating a word with an accompanying image, or a verbal representation with a pictorial one, in working memory involves some cognitive effort and processing. At the same time, it becomes easier to relate this new information to previous learning, which produces deeper learning that remains in long-term memory and can be applied to solve problems in other contexts.

Model of multimedia learning and memory

As we said, the model is based on the idea that our brain works with two information-processing systems, one for visual material and the other for verbal material. The advantage of using both channels is not quantitative but qualitative since, as mentioned before, visual and auditory information complement each other; they neither replace each other nor are they equivalent. Deep understanding occurs when the learner can build meaningful connections between verbal and visual representations.

When multimedia material is presented, the information received in the form of words will be heard by the ears or read by the eyes, while the images will be seen by the eyes. In both cases, the new information will first pass through sensory memory, where it will be briefly retained in the form of visual stimuli (images) and auditory stimuli (sounds).

In working memory the individual will carry out the main activity of multimedia learning, since it is the space of our memory where we will process new information as long as we keep it conscious. This memory has a very limited capacity and, as we have mentioned, tends to be overloaded. On the other hand, long-term memory has almost no limits and, when information is deeply processed, it ends up being stored in this last space.

In working memory, sounds and images are selected and the information is organized into coherent mental representations; that is, we create a verbal mental model and a pictorial mental model based on what we have read, heard and seen. Meaning is given to the information by integrating the visual representations with the verbal ones and relating them to prior knowledge. As all this suggests, people are not passive recipients of new content; we process it actively.

Taking all this into account, we can summarize this point in the three assumptions below.

1. Dual channel assumption

This model assumes that people process information through two separate channels: one for auditory or verbal information and the other for visual or pictorial information.

2. Limited capacity assumption

The two channels described above are held to have limited capacity. People's working memory can retain only a limited number of words and images at the same time.


3. Active processing assumption

It is argued that people are actively involved in learning, attending to relevant new incoming information. This selected information is organized into coherent mental representations, and those representations are integrated with prior knowledge.

The 11 principles of multimedia learning

Having examined the cognitive theory of multimedia learning in depth, we can finally look at the eleven principles to keep in mind when designing multimedia material to optimize learning. These are principles that should be considered in any classroom or course that aims to be adapted to the 21st century, especially if you want to get the most out of new technologies and multimedia and online resources.

1. Multimedia principle

People learn best when the content is displayed as images combined with words instead of words alone. This principle is the main premise of the entire cognitive theory of multimedia learning.

2. Principle of contiguity

We learn best when images and words that refer to the same content are placed close to each other.

3. Principle of temporality

People learn best when words and their corresponding images are displayed on the screen simultaneously.

4. Modality principle

People learn better when multimedia content takes the form of images with spoken narration rather than images with on-screen text.

5. Redundancy principle

We learn better when the images used are explained either through narration or through text, but not with both modalities at the same time. That is to say, presenting an image along with text and narrating it as well is rather a waste of time and resources, since the effect is neither cumulative nor multiplicative beyond the use of two supports.

6. Principle of coherence

People learn better when images, words or sounds that have no direct relationship with the content to be taught are removed from the screen.

7. Signaling principle

People learn better when cues are added that indicate where we should direct our attention.

8. Segmentation principle

We learn best when the content presented to us is divided into small sections and when we can navigate through them freely and easily.

9. Pre-training principle

We learn better when we are pre-trained in the key concepts before seeing the content developed. That is to say, it is better to be briefly introduced to, or given an "abstract" of, what we are going to see before starting the syllabus itself, giving us the opportunity to recall prior knowledge, bring it into working memory, and relate it to the lesson as it is explained.

10. Personalization principle

When presenting multimedia material, whether text with images or narration with images, it is better to present it in a close, conversational tone; we learn more this way than when the tone is overly formal.

11. Principle of voice

If the chosen modality is an image with spoken narration, we learn best when a human voice is used in digital resources rather than a voice generated by software that reads the text in a robotic tone.