Metacognition (Flavell)

Metacognition is defined in simplest terms as “thinking about your own thinking.” The root “meta” means “beyond,” so the term literally suggests thinking that goes beyond thinking itself. Specifically, it encompasses the processes of planning, tracking, and assessing your own understanding or performance.

The term was coined by American developmental psychologist John H. Flavell in 1979, and the theory was developed throughout the 1980s by researchers working with young children in early stages of cognitive development.

(more…)

Situated Cognition (Brown, Collins, & Duguid)

Summary: Situated cognition is the theory that people’s knowledge is embedded in the activity, context, and culture in which it was learned. It is also referred to as “situated learning.”

Originators & proponents: John Seely Brown, Allan Collins, Paul Duguid

Keywords: activity, authentic domain activity, authentic learning, cognitive apprenticeship, content-specific learning, context, culture, everyday learning, knowledge, legitimate peripheral participation, socio-cultural learning, social construction of knowledge, social interaction, teaching methods

Situated cognition is a theory which emphasizes that people’s knowledge is constructed within and linked to the activity, context, and culture in which it was learned[1][2].

Learning is social rather than isolated: people learn while interacting with one another through shared activities and through language, as they discuss, share knowledge, and solve problems during these tasks.

For example, language learners can study a dictionary to increase their vocabulary, but this largely solitary work teaches only the basics of a language. When learners talk with a native speaker, they also learn how those words are used in the speaker’s home culture and in everyday social interactions.

(more…)

Albert Bandura Biography

In 2014, Canadian psychologist Albert Bandura was ranked first on a list of the Top 100 Eminent Psychologists of the Modern Era, published in the Archives of Scientific Psychology.[7] A former president of the American Psychological Association, winner of numerous awards and more than sixteen honorary degrees, and widely held as one of the most influential psychologists alive today, Albert Bandura is among the most prolific psychologists in history.

(more…)


Also check out:

Expertise Theory (Ericsson, Gladwell)

Expertise theory describes how talent develops across particular fields or domains, focusing on cognitive task analysis (to map the domain), instruction and practice, and clearly specified learning outcomes against which the development of expertise can be objectively measured.

Anders Ericsson, a professor at Florida State University, is the leading figure in the field of expertise theory. However, many others are associated with it as well: Robert Sternberg (Cornell University), Richard Clark (University of Southern California), Benjamin Bloom (late of the University of Chicago), Herbert Simon (late of Carnegie Mellon University), and Mihaly Csikszentmihalyi (Claremont Graduate University). Another notable figure is Malcolm Gladwell, whose work has served to popularize the theory.

Keywords: expertise, practice, instruction, cognitive task analysis

(more…)

Cognitive Tools Theory (Egan)

Summary: There exist five kinds of understanding (or cognitive tools) that individuals usually master in a particular order during the course of their development; these have important educational implications.

Originator: Kieran Egan, a Professor at Simon Fraser University, proposed his theory of cognitive tools as part of a sustained program of writing and research on the role of imagination in learning, teaching, and curriculum.

Keywords: Cognitive, Stages, Imagination, Ironic, Literacy, Memes

(more…)

E-Learning Theory (Mayer, Sweller, Moreno)

E-learning theory consists of cognitive science principles that describe how electronic educational technology can be used and designed to promote effective learning.


History

The researchers started from an understanding of cognitive load theory to establish the set of principles that compose e-learning theory. Cognitive load refers to the amount of mental effort imposed on working memory, and it falls into three categories: germane, intrinsic, and extraneous[1].

Germane cognitive load describes the effort involved in understanding a task and accessing it or storing it in long-term memory (for example, seeing an essay topic and understanding what you are being asked to write about). Intrinsic cognitive load refers to effort involved in performing the task itself (actually writing the essay). Extraneous cognitive load is any effort imposed by the way that the task is delivered (having to find the correct essay topic on a page full of essay topics).
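These three categories can be made concrete with a toy sketch. The class, effort units, and capacity value below are illustrative assumptions, not part of cognitive load theory itself:

```python
from dataclasses import dataclass

@dataclass
class TaskLoad:
    """Toy model of the three cognitive load categories (arbitrary effort units)."""
    germane: float     # understanding the task and encoding it to long-term memory
    intrinsic: float   # performing the task itself
    extraneous: float  # imposed by how the task is delivered

    def total(self) -> float:
        return self.germane + self.intrinsic + self.extraneous

    def overloads(self, capacity: float = 10.0) -> bool:
        # Germane and intrinsic load are necessary, so design focuses on
        # trimming extraneous load to keep the total within capacity.
        return self.total() > capacity

essay = TaskLoad(germane=3, intrinsic=5, extraneous=4)
print(essay.total())      # 12
print(essay.overloads())  # True
```

With extraneous load trimmed to 1 (say, a cleaner topic page), the same task would fit within the assumed capacity.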


Key Concepts

Mayer, Moreno, Sweller, and their colleagues established e-learning design principles that are focused on minimizing extraneous cognitive load and introducing germane and intrinsic loads at user-appropriate levels[2][3][4][5][6]. These include the following empirically established principles:

Multimedia principle (also called the Multimedia Effect)

Using any two of audio, visuals, and text promotes deeper learning than using just one, or all three.

Modality principle

Learning is more effective when visuals are accompanied by audio narration versus onscreen text. There are exceptions for when the learner is familiar with the content, is not a native speaker of the narration language, or when printed words are the only things presented on screen. Another exception to this is when the learner needs to use the material as reference and will be going back to the presentation repeatedly.

Coherence principle

The less learners know about the presentation content, the more easily they are distracted by unrelated content. Irrelevant video, music, graphics, etc. should be cut out to reduce the cognitive load imposed by processing unnecessary content. Learners with some prior knowledge, however, might find that unrelated content increases motivation and interest.

Contiguity principle

Learning is more effective when related information is presented close together. Relevant text should be placed near the graphics it describes, and feedback and responses should closely follow any answers that the learner gives.

Segmenting principle

More effective learning happens when learning is segmented into smaller chunks. Breaking down long lessons and passages into shorter ones helps promote deeper learning.

Signaling principle

Using arrows or circles, highlighting, and pauses in speech are all effective methods of signaling the important aspects of a lesson. It is also effective to end a lesson segment after presenting important information.

Learner control principle

For most learners, being able to control the pace at which they learn helps them learn more effectively. Having just play and pause buttons can help more than having an array of controls (back, forward, play, pause). Advanced learners may benefit from having the lesson play automatically with the ability to pause when they choose.

Personalization principle

A more informal, conversational tone, conveying more of a social presence, helps promote deeper learning. Beginning learners may benefit from a more polite tone of voice, while learners with prior knowledge may benefit from a more direct tone. Computer characters can help reinforce content by narrating the lesson, pointing out important features, or illustrating examples for the learner.

Pre-training principle

Introducing key concepts and vocabulary before the lesson can aid deeper learning. This principle seems to apply more to learners with low prior knowledge than to those with high prior knowledge.

Redundancy principle

Explaining graphics with both audio narration and on-screen text creates redundancy. The most effective method is to accompany visuals with either audio narration or on-screen text, not both.

Expertise effect

Instructional methods that are helpful to learners with low prior knowledge may not be helpful at all, or may even be detrimental, to learners with high prior knowledge.
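As a rough illustration of how a designer might apply a few of these principles, here is a sketch that flags possible issues in a lesson segment’s media mix. The function name, media labels, and simplified checks are illustrative assumptions, not an established tool:

```python
def review_segment(media, audience="novice"):
    """Flag possible violations of three principles for one lesson segment.

    `media` is a set drawn from {"graphics", "narration", "text"}; each
    check is a deliberate simplification of the corresponding principle.
    """
    warnings = []
    # Redundancy principle: don't pair graphics with BOTH narration and text.
    if {"graphics", "narration", "text"} <= media:
        warnings.append("redundancy: use narration OR on-screen text, not both")
    # Modality principle: for novices, prefer narration over text with graphics.
    if media == {"graphics", "text"} and audience == "novice":
        warnings.append("modality: consider narration instead of on-screen text")
    # Multimedia principle: a single medium misses the multimedia effect.
    if len(media) == 1:
        warnings.append("multimedia: pair the single medium with a second one")
    return warnings

print(review_segment({"graphics", "narration", "text"}))
# ['redundancy: use narration OR on-screen text, not both']
print(review_segment({"graphics", "narration"}))  # []
```

A graphics-plus-narration segment passes all three checks, matching the guidance above.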


Additional Resources and References


References

  1. Mayer, R. E., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38(1), 43-52.
  2. Mayer, R. E. (1997). Multimedia learning: Are we asking the right questions? Educational Psychologist, 32(1), 1-19.
  3. Moreno, R., & Mayer, R. (2007). Interactive multimodal learning environments. Educational Psychology Review, 19(3), 309-326.
  4. Low, R., & Sweller, J. (2005). The modality principle in multimedia learning. The Cambridge Handbook of Multimedia Learning, 147-158.
  5. Mayer, R. E. (2003). Elements of a science of e-learning. Journal of Educational Computing Research, 29(3), 297-313.
  6. Clark, R. C., & Mayer, R. E. (2016). E-learning and the Science of Instruction: Proven Guidelines for Consumers and Designers of Multimedia Learning. John Wiley & Sons.

Information Processing Theory

Information processing theory discusses the mechanisms through which learning occurs. Specifically, it focuses on aspects of memory encoding and retrieval.



Contributors

  • George A. Miller (1920-2012)
  • Atkinson and Shiffrin (1968)
  • Craik and Lockhart (1972)
  • Bransford (1979)
  • Rumelhart and McClelland (1986)

Key Concepts

The basic idea of information processing theory is that the human mind is like a computer or information processor, in contrast to behaviorist notions that people merely respond to stimuli.

These theories liken thought to computation: the mind receives input, processes it, and delivers output. Information gathered from the senses (input) is stored and processed by the brain, and finally brings about a behavioral response (output).

Information processing theory has been developed and broadened over the years. Most notable in the inception of information processing models is Atkinson and Shiffrin’s ‘stage theory,’ which presents the sequential input-processing-output method discussed above[2]. Though influential, the linearity of this theory understated the complexity of the human brain, and various later theories were developed to assess the underlying processes more fully.
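The stage theory’s sequential flow can be sketched as a simple pipeline. The function and its filtering rules are an illustrative simplification, not Atkinson and Shiffrin’s own formalism:

```python
def stage_model(stimuli, attended, rehearsed):
    """Toy walk-through of stage theory: stimuli register briefly in sensory
    memory; only attended items reach working memory; only rehearsed items
    are encoded into long-term memory."""
    sensory = list(stimuli)                              # input: all stimuli register
    working = [s for s in sensory if s in attended]      # attention filters
    long_term = [s for s in working if s in rehearsed]   # rehearsal encodes
    return working, long_term

wm, ltm = stage_model(
    stimuli=["phone buzz", "lecture point", "hallway noise"],
    attended={"lecture point"},
    rehearsed={"lecture point"},
)
print(wm)   # ['lecture point']
print(ltm)  # ['lecture point']
```

Unattended stimuli (the phone buzz, the hallway noise) decay out of sensory memory and never reach the later stages.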

Following this line of thought, Craik and Lockhart proposed the ‘levels of processing’ model[3]. They emphasize that information is expanded upon (processed) in various ways (perception, attention, labelling, and meaning) that affect the ability to access the information later on. In other words, the degree to which information is elaborated upon affects how well it is learned.

Bransford broadened this idea by adding that information will be more easily retrieved if the way it is accessed is similar to the way in which it was stored[4]. The next major development in information processing theory is Rumelhart and McClelland’s connectionist model, which is supported by current neuroscience research[5]. It states that information is stored simultaneously in different areas of the brain and connected as a network. The number of connections a single piece of information has affects the ease of retrieval.
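The connectionist idea that more connections make retrieval easier can be sketched as a toy network. The concepts, links, and connection-count scoring below are illustrative assumptions, not part of the model itself:

```python
# Toy associative network: each concept maps to its linked concepts.
network = {
    "paris": {"france", "eiffel tower", "seine", "louvre"},
    "vaduz": {"liechtenstein"},
}

def retrieval_ease(concept):
    """Crude proxy for connectionist retrieval: more links, easier recall."""
    return len(network.get(concept, set()))

print(retrieval_ease("paris"))  # 4
print(retrieval_ease("vaduz"))  # 1
```

“Paris,” embedded in a dense web of associations, is on this toy measure easier to retrieve than the sparsely connected “Vaduz.”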

The general model of information processing theory includes three components:

Sensory memory

In sensory memory, information is gathered via the senses through a process called transduction. Through receptor cell activity, it is altered into a form the brain can process. These memories, usually unconscious, last for a very short time, up to about three seconds. Our senses are constantly bombarded with large amounts of information, and sensory memory acts as a filter, focusing on what is important and forgetting what is unnecessary. Sensory information catches our attention, and thus progresses into working memory, only if it is seen as relevant or is familiar.

Working memory/short term memory

Baddeley (2001) proposed a model of working memory consisting of three components[6]. The executive control system oversees all working memory activity, including selection of information, method of processing, meaning, and finally deciding whether to transfer it to long-term memory or forget it. Its two subsystems are the phonological loop, where auditory information is processed, and the visuospatial sketchpad, where visual information is processed. Sensory memories transferred into working memory last for 15-20 seconds, with a capacity of 5-9 pieces or chunks of information. Information is maintained in working memory through maintenance or elaborative rehearsal: maintenance refers to repetition, while elaboration refers to organizing the information (such as chunking or chronology).
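The capacity and duration limits described above can be sketched as a small buffer. The class, the displacement rule, and the exact numbers are illustrative assumptions drawn loosely from the passage:

```python
import time
from collections import deque

class WorkingMemory:
    """Sketch of working memory limits: roughly 7 +/- 2 chunks, held about
    15-20 seconds unless rehearsed (parameters are illustrative)."""

    def __init__(self, capacity=7, lifetime=20.0):
        self.capacity = capacity
        self.lifetime = lifetime
        self.items = deque()  # (chunk, time stored)

    def store(self, chunk):
        if len(self.items) >= self.capacity:
            self.items.popleft()  # oldest chunk is displaced
        self.items.append((chunk, time.monotonic()))

    def rehearse(self, chunk):
        # Maintenance rehearsal: refresh the chunk's timestamp so it persists.
        self.items = deque((c, time.monotonic() if c == chunk else t)
                           for c, t in self.items)

    def contents(self):
        now = time.monotonic()
        # Unrehearsed chunks older than the lifetime have decayed.
        self.items = deque((c, t) for c, t in self.items
                           if now - t < self.lifetime)
        return [c for c, _ in self.items]

wm = WorkingMemory()
for digit in range(9):       # try to hold nine chunks at once
    wm.store(digit)
print(wm.contents())         # [2, 3, 4, 5, 6, 7, 8]
```

Storing nine chunks into a seven-chunk buffer displaces the first two, mirroring why phone numbers are chunked rather than memorized digit by digit.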

The processing that occurs in working memory is affected by a number of factors. Firstly, individuals have varying levels of cognitive capacity, the amount of mental effort they can engage at a given moment, due to individual characteristics and intellectual capacities. Secondly, information that has been repeated many times becomes automatic and thus requires few cognitive resources (e.g. riding a bike). Lastly, depending on the task at hand, individuals use selective processing to focus attention on information that is highly relevant and necessary.

Long term memory

Long term memory includes various types of information: declarative (semantic and episodic), procedural (how to do something), and imagery (mental images).

As opposed to the previous memory constructs, long term memory has unlimited space. The crucial factor in long term memory is how well organized the information is. This is affected by proper encoding (elaboration processes while transferring to long term memory) and retrieval processes (scanning memory for the information and transferring it into working memory so that it can be used). As emphasized in Bransford’s work, the degree of similarity between the way information was encoded and the way it is being accessed will shape the quality of retrieval. In general, we remember far less information than is actually stored in long term memory.
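Bransford’s point about the encoding-retrieval match can be sketched as cue overlap. The Jaccard-style score below is an illustrative stand-in, not Bransford’s own formalism:

```python
def retrieval_quality(encoding_cues, retrieval_cues):
    """Toy stand-in for encoding-retrieval match: overlap between the cues
    present at encoding and the cues present at retrieval (0.0 to 1.0)."""
    if not encoding_cues or not retrieval_cues:
        return 0.0
    shared = encoding_cues & retrieval_cues
    return len(shared) / len(encoding_cues | retrieval_cues)

# Studying words by sound ("does it rhyme?") then being tested by sound is
# a far better match than being tested by meaning, so retrieval is better.
print(retrieval_quality({"rhyme", "sound"}, {"rhyme", "sound"}))  # 1.0
print(retrieval_quality({"rhyme", "sound"}, {"meaning"}))         # 0.0
```

The higher the overlap score, the better the toy model predicts retrieval will go, echoing the transfer-appropriate processing result cited below[4].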


Additional Resources and References


References

  1. Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81.
  2. Atkinson, R. C., & Shiffrin, R. M. (1968). Human memory: A proposed system and its control processes. Psychology of Learning and Motivation, 2, 89-195.
  3. Craik, F. I., & Lockhart, R. S. (1972). Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior, 11(6), 671-684.
  4. Morris, C. D., Bransford, J. D., & Franks, J. J. (1977). Levels of processing versus transfer appropriate processing. Journal of Verbal Learning and Verbal Behavior, 16(5), 519-533.
  5. Rumelhart, D. E., McClelland, J. L., & the PDP Research Group. (1986). Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 1). MIT Press.
  6. Baddeley, A. D. (2001). Is working memory still working? American Psychologist, 56(11), 851.

Theory of Mind, Empathy, Mindblindness (Premack, Woodruff, Perner, Wimmer)

Summary: Theory of mind refers to the ability to perceive the unique perspective of others and its influence on their behavior – that is, other people have unique thoughts, plans, and points of view that are different from yours.

Originators and key contributors:

  • Jean Piaget (1896-1980), a Swiss psychologist, described the inability of young children to perceive others’ points of view due to ‘egocentrism.’
  • David Premack and Guy Woodruff developed the term Theory of Mind (1978) as applied to their studies on chimpanzees.[1]
  • Josef Perner and Heinz Wimmer (1983) extended Theory of Mind to the study of child development.[2]

Keywords: Social cognition, child development, false-belief, Autism spectrum disorders, mindblindness

(more…)

Cognitive Theory of Multimedia Learning (Mayer)

Summary: A cognitive theory of multimedia learning based on three main assumptions: there are two separate channels (auditory and visual) for processing information; there is limited channel capacity; and that learning is an active process of filtering, selecting, organizing, and integrating information.

Originator: Richard Mayer (1947-)

Key terms: dual-channel, limited capacity, sensory, working, long-term memory

(more…)

Cognitivism

The cognitivist paradigm essentially argues that the “black box” of the mind should be opened and understood. The learner is viewed as an information processor (like a computer).





Key Concepts

The cognitivist revolution replaced behaviorism in the 1960s as the dominant paradigm. Cognitivism focuses on inner mental activities: opening the “black box” of the human mind is valuable and necessary for understanding how people learn. Mental processes such as thinking, memory, knowing, and problem-solving need to be explored. Knowledge can be seen as schema or symbolic mental constructions. Learning is defined as change in a learner’s schemata[1][2].

In response to behaviorism, cognitivism holds that people are not “programmed animals” that merely respond to environmental stimuli; people are rational beings whose learning requires active participation, and whose actions are a consequence of thinking. Changes in behavior are observed, but only as an indication of what is occurring in the learner’s head. Cognitivism uses the metaphor of the mind as a computer: information comes in, is processed, and leads to certain outcomes.


Additional Resources and References

References

  1. Ertmer, P. A., & Newby, T. J. (1993). Behaviorism, cognitivism, constructivism: Comparing critical features from an instructional design perspective. Performance Improvement Quarterly, 6(4), 50-72.
  2. Cooper, P. A. (1993). Paradigm shifts in designed instruction: From behaviorism to cognitivism to constructivism. Educational Technology, 33(5), 12-19.