Consciousness in Metacognition

One fascinating aspect of metacognition is the degree of conscious awareness with which a person approaches their learning. There is considerable debate here: some researchers argue that metacognition requires conscious reflection, while others (Veenman et al., 2002) contend that the concept must also include subconscious contributors, such as a person's neurological makeup or environmental factors that support self-regulation.

Recent theoretical work proposes that metacognition may be broader than traditionally conceived: not merely a conscious, intentional process, but an instance of a larger class of representational re-description processes that occur unconsciously and automatically ("Higher order thoughts in action: consciousness as an unconscious re-description process"). Research suggests that while working memory involves many unconscious processes, metacognitive control becomes conscious only at the top of the brain's cognitive hierarchy, where our beliefs concern complex, abstract entities and estimates of their precision are more uncertain and malleable (Frith, 2023).

Groundbreaking research has shown that humans can unconsciously learn to use hidden representations in their own brains to earn rewards, with metacognitive confidence tracking the learning process. This demonstrates that reward-maximizing choices can be derived from intrinsic, high-dimensional neural information represented below the threshold of consciousness (Cortese et al., 2020).
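
To make the logic of that finding concrete, the sketch below casts it as a toy reinforcement-learning loop: a "hidden state" stands in for a decoded pattern of brain activity, choices that match it are rewarded, and a confidence proxy rises as the policy sharpens. Everything here (the two-state setup, the softmax policy, the learning rate, the confidence proxy) is an illustrative assumption, not the actual paradigm or analysis of Cortese et al. (2020).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sketch: "state" stands in for a decoded pattern of brain
# activity that the participant cannot consciously report. Choosing the
# action associated with that pattern earns a reward, so over trials the
# learner comes to exploit information below the threshold of awareness.
n_trials, alpha = 300, 0.1
q = np.zeros((2, 2))             # value of each action in each hidden state

hits, confidences = [], []
for _ in range(n_trials):
    state = rng.integers(2)                             # decoded (unconscious) state
    probs = np.exp(q[state]) / np.exp(q[state]).sum()   # softmax policy
    action = rng.choice(2, p=probs)
    reward = 1.0 if action == state else 0.0            # assumed reward rule
    q[state, action] += alpha * (reward - q[state, action])
    hits.append(reward)
    confidences.append(probs.max())                     # crude confidence proxy

print(f"early hit rate: {np.mean(hits[:50]):.2f}")
print(f"late hit rate:  {np.mean(hits[-50:]):.2f}")
print(f"confidence proxy, early vs late: "
      f"{np.mean(confidences[:50]):.2f} -> {np.mean(confidences[-50:]):.2f}")
```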

Self-Reflection and Learning

Most researchers describe their personal practice of metacognition as a process of bringing otherwise unconscious elements of learning into conscious awareness and examination. Research has consistently found that routines promoting self-reflection are among the most impactful educational interventions (Dunlosky, 2013; Hattie, 2012).

Recent studies demonstrate that students who receive metacognitive interventions show superior understanding of complex concepts and better metacognitive processing than control groups: metacognitive activities enable them to monitor their understanding, evaluate their problem-solving approaches, and adjust their strategies as needed ("The impact of a metacognition-based course on school students' metacognitive skills and biology comprehension"). Contemporary research emphasizes that metacognition involves the conscious recognition and analysis of one's own learning and cognitive processes, including the examination of one's own awareness and the reevaluation, recollection, and re-perception of outcomes (Chou et al., 2023).

One potential reason for the effectiveness of self-reflective routines is that they are modeled on an underlying, fundamental quality of information processing that lies at the core of learning, so educational programs and techniques that harness these natural processes support it better. Domingos and others have proposed that a master algorithm is at work behind human cognition, and the best evidence for this theory has been the tremendous progress in computation, and specifically in artificial intelligence (AI).

Self-referential programs built on simple heuristics, rather than complex linear programs, have been found to produce more stable and flexible computer interfaces and robots (Domingos, 2015; Kelly, 1995; Hofstadter, 1979). This emergent, computational view suggests that human-level conscious behavior results from the moment-to-moment integration of vast stores of information by the brain. By this line of thinking, higher levels of conscious behavior arise from larger stores of knowledge and experience.
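
A minimal way to picture the contrast is a controller whose heuristic refers to its own recent performance and adjusts its parameters, rather than executing a fixed linear script. The sketch below is a generic illustration; the task, the 20-trial reflection window, and the adjustment rule are all assumptions, not code from the works cited above.

```python
import random

random.seed(1)

def noisy_task(step_size):
    """Stand-in task: success is more likely when step_size is close to the
    (hidden) value that suits the environment."""
    return random.random() < 1.0 - abs(step_size - 0.3)

def self_monitoring_controller(n_steps=200):
    step_size = 0.9              # deliberately poor initial setting
    window = []                  # recent outcomes the controller reflects on
    for _ in range(n_steps):
        window.append(noisy_task(step_size))
        window = window[-20:]
        # Self-referential heuristic: when its own recent success is low,
        # the controller changes its parameter instead of repeating itself.
        if len(window) == 20 and sum(window) / 20 < 0.6:
            step_size = max(0.05, step_size - 0.1)
            window = []          # gather fresh evidence after adjusting
    success = sum(noisy_task(step_size) for _ in range(200)) / 200
    return step_size, success

step, success = self_monitoring_controller()
print(f"adapted step_size = {step:.2f}, success rate afterwards = {success:.2f}")
```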

The Challenge of AI Metacognition

However, programming a computer that can execute metacognition, even in a rudimentary way, has proven exceedingly difficult. Bill Gates recently highlighted that metacognition, a system's ability to think about its own thinking, represents a critical development in AI's evolution, noting that current AI systems have only trivial levels of metacognitive capability (PYMNTS.com). At its core, metacognition in AI refers to a system's capacity to monitor, evaluate, and potentially modify its own cognitive processes, going beyond simple decision-making to assess performance, recognize limitations, and adjust approaches based on self-reflection ("The Dawn of Self-Aware AI: How Metacognition Could Reshape Commerce", PYMNTS.com).
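
Even a rudimentary version of that monitor-evaluate-adjust loop can be sketched in code. The example below wraps a fast solver with a confidence check and a fallback strategy; the function names, the confidence threshold, and the toy solvers are hypothetical, intended only to show the shape of the idea rather than any deployed system.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Attempt:
    answer: str
    confidence: float            # the system's own estimate, in [0, 1]

def metacognitive_solve(question: str,
                        solver: Callable[[str], Attempt],
                        fallback: Callable[[str], Attempt],
                        threshold: float = 0.7) -> str:
    """Monitor: obtain an answer plus a self-assessed confidence.
    Evaluate: compare that confidence against a threshold.
    Adjust: switch to a slower strategy, or defer, when confidence is low."""
    first = solver(question)
    if first.confidence >= threshold:
        return first.answer
    second = fallback(question)
    best = max((first, second), key=lambda a: a.confidence)
    if best.confidence < threshold:
        return "not confident enough to answer"      # recognize limitations
    return best.answer

# Toy stand-ins for a fast heuristic solver and a careful fallback.
fast = lambda q: Attempt(answer="42", confidence=0.4)
careful = lambda q: Attempt(answer="six times seven is 42", confidence=0.9)

print(metacognitive_solve("What is six times seven?", fast, careful))
```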

Recent research has begun to investigate whether large language models can monitor and control their internal neural activations, finding that these models can monitor only a subset of their neural mechanisms, which aligns with evidence that they internally encode more factual knowledge than they externally express (Panigrahi et al., 2025).
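
The gap between what is encoded internally and what is expressed is often studied with linear probes on hidden activations. The sketch below illustrates the idea on synthetic data: a probe reads a latent property off stand-in "activations" more accurately than a noisier simulated self-report. The data, the probe, and the noise level are assumptions, not the method of Panigrahi et al. (2025).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for hidden activations: 500 items, 64-dimensional states,
# with one latent direction encoding a property of interest.
n, d = 500, 64
latent = rng.integers(2, size=n)
direction = rng.normal(size=d)
acts = rng.normal(size=(n, d)) + np.outer(latent, direction)

# A linear probe can typically read the property straight off the activations.
probe = LogisticRegression(max_iter=1000).fit(acts[:400], latent[:400])
probe_acc = probe.score(acts[400:], latent[400:])

# Simulated "self-report": the property is verbalized with extra noise,
# standing in for knowledge that is encoded but not fully expressed.
report = np.where(rng.random(n) < 0.75, latent, 1 - latent)
report_acc = float((report[400:] == latent[400:]).mean())

print(f"probe accuracy on held-out activations: {probe_acc:.2f}")
print(f"accuracy of the simulated self-report:  {report_acc:.2f}")
```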

Studies show that generative AI systems impose significant metacognitive demands on users, requiring a high degree of metacognitive monitoring and control for tasks like prompting, evaluating outputs, and optimizing workflows (Tankelevitch et al., 2024). Research has also revealed a troubling disconnect: while AI assistance improves task performance, it simultaneously reduces users' metacognitive accuracy, leading them to overestimate their performance and over-rely on AI systems, which weakens their ability to critically monitor outcomes (Holm et al., 2024).
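
"Metacognitive accuracy" here can be read as how well a person's confidence discriminates their correct answers from their incorrect ones. The sketch below computes one common proxy for this (a type-2 AUROC) on simulated data in which AI assistance raises accuracy but inflates confidence across the board; the numbers and conditions are invented for illustration and are not Holm et al.'s data or measure.

```python
import numpy as np

rng = np.random.default_rng(0)

def metacognitive_auroc(correct, confidence):
    """Probability that a randomly chosen correct answer got higher confidence
    than a randomly chosen incorrect one (a type-2 AUROC style proxy)."""
    pos = confidence[correct == 1]
    neg = confidence[correct == 0]
    diffs = pos[:, None] - neg[None, :]
    return (diffs > 0).mean() + 0.5 * (diffs == 0).mean()

n = 300
# Working alone (invented numbers): moderate accuracy, confidence tracks it.
solo_correct = rng.binomial(1, 0.65, n)
solo_conf = np.clip(0.5 + 0.3 * solo_correct + rng.normal(0, 0.15, n), 0, 1)

# AI-assisted (invented numbers): higher accuracy, but confidence is inflated
# uniformly, so it no longer separates right answers from wrong ones.
ai_correct = rng.binomial(1, 0.80, n)
ai_conf = np.clip(0.85 + rng.normal(0, 0.10, n), 0, 1)

print(f"solo:        accuracy {solo_correct.mean():.2f}, "
      f"metacognitive AUROC {metacognitive_auroc(solo_correct, solo_conf):.2f}")
print(f"AI-assisted: accuracy {ai_correct.mean():.2f}, "
      f"metacognitive AUROC {metacognitive_auroc(ai_correct, ai_conf):.2f}")
```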

The Abstraction and Binding Problems

The chief difficulty is the mind's ability to formulate new, conceptually more abstract representations of phenomena that appear unrelated at a lower level of understanding (Hulbig, 2018). This is an expansion of a perceptual problem known as the binding problem (Revonsuo & Newman, 1999), and it poses a considerable challenge to the computational study of metacognition, since it is through this abstractive process that new understandings are generated.

Leading AI researcher Melanie Mitchell argues that while neural networks have made enormous progress in processing human language, they have made astonishingly little progress in forming concepts and abstractions, the fundamental units of understanding that are crucial to unlocking AI's full potential (Mitchell, 2023). Although deep learning systems can outperform the average human on tests of abstract reasoning such as Raven's Progressive Matrices, research reveals that they accomplish this not by learning humanlike concepts but by finding shortcuts (Mitchell, 2023).
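
Shortcut learning is easy to reproduce in miniature. In the toy sketch below, the "abstract" rule is to continue an arithmetic progression, but in the training data the correct option also happens to be the larger number; a simple classifier latches onto that superficial cue and collapses when the bias is removed. The task, features, and bias are all contrived for illustration and have nothing to do with the actual Raven's-style benchmarks.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_items(n, shortcut_aligned=True):
    """Toy 'continue the sequence' items. True rule: the correct option extends
    an arithmetic progression. Shortcut: in the biased training data the
    correct option is also always the larger number."""
    X, y = [], []
    for _ in range(n):
        start, step = rng.integers(1, 10), rng.integers(1, 4)
        last = start + 2 * step
        correct = last + step
        if shortcut_aligned:
            wrong = correct - rng.integers(1, 3)   # wrong option is smaller
        else:
            wrong = correct + rng.integers(1, 3)   # the magnitude cue misleads
        for option, label in ((correct, 1), (wrong, 0)):
            # Deliberately shallow features: the step size is not provided,
            # so raw magnitude is the easiest signal in the biased data.
            X.append([last, option, option - last])
            y.append(label)
    return np.array(X, dtype=float), np.array(y)

X_biased, y_biased = make_items(2000, shortcut_aligned=True)
X_debiased, y_debiased = make_items(500, shortcut_aligned=False)

clf = LogisticRegression(max_iter=1000).fit(X_biased, y_biased)
print(f"accuracy while the shortcut holds: {clf.score(X_biased, y_biased):.2f}")
print(f"accuracy once it is removed:       {clf.score(X_debiased, y_debiased):.2f}")
```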

Contemporary neural networks fall short of human-level generalization because of their inability to dynamically and flexibly bind information distributed throughout the network. This binding problem limits their capacity to acquire a compositional understanding of the world in terms of symbol-like entities, which is crucial for generalizing in predictable and systematic ways (Greff et al., 2020).
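
The contrast at stake can be caricatured in a few lines of code: a scene represented as a set of symbol-like "slots" keeps each object's properties bound together and recombines compositionally, while a flattened feature-presence vector loses exactly that binding. This is a schematic illustration only, not the architectures analyzed by Greff et al. (2020).

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Slot:
    """A symbol-like entity: its properties are explicitly bound together."""
    shape: str
    color: str

# Object-centric view: each object's features stay bound to that object.
scene = {Slot("circle", "red"), Slot("square", "blue")}

# Compositional recombination is trivial: any shape can pair with any color,
# including combinations never observed together before.
novel = {Slot(s, c) for s, c in product(("circle", "square", "triangle"),
                                        ("red", "blue", "green"))} - scene
print(f"well-formed objects never seen in the scene: {len(novel)}")

# Contrast: a flattened feature-presence vector records which features occur
# somewhere, but cannot say whether the circle is red or blue - the classic
# binding failure that blocks systematic, compositional generalization.
flattened = {"circle": 1, "square": 1, "red": 1, "blue": 1}
print("flattened encoding of {red circle, blue square}:", flattened)
print("flattened encoding of {blue circle, red square}:", flattened)
```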

Computers are very good at putting things into categories, but they are not very good at developing the categories themselves; the ability to create new categories is more abstract than the categories' constituent parts. Mitchell emphasizes that the essence of abstraction and analogy is few-shot learning, noting that if the goal is to create AI systems with humanlike abstraction abilities, it makes no sense to train them on tens of thousands of examples (Mitchell, 2023).
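
Few-shot abstraction can be pictured, at its most minimal, as a search over candidate rules that keeps only those consistent with a couple of demonstrations. The hypothesis space and examples below are toy assumptions, far simpler than anything Mitchell has in mind, but they show how two examples can suffice when the learner searches over abstractions rather than memorizing instances.

```python
# A small hypothesis space of candidate string transformations (assumed).
hypotheses = {
    "reverse":     lambda s: s[::-1],
    "double_last": lambda s: s + s[-1],
    "drop_first":  lambda s: s[1:],
    "uppercase":   lambda s: s.upper(),
}

# Few-shot abstraction: only two demonstrations of the hidden rule.
examples = [("abc", "cba"), ("food", "doof")]

# Keep every rule that is consistent with all of the demonstrations.
consistent = {name: fn for name, fn in hypotheses.items()
              if all(fn(x) == y for x, y in examples)}
print("rules consistent with two examples:", sorted(consistent))

# Apply the induced abstraction to a novel input it has never seen.
rule = next(iter(consistent.values()))
print("generalization to 'metacognition':", rule("metacognition"))
```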

This ability to create novel categories and abstract representations allows behavior to grow more advanced over time rather than merely repeating itself; it is a hallmark of metacognitive processing and remains one of AI's greatest challenges. Recent frameworks for metacognitive AI encompass transparency (the ability to check the veracity of information), reasoning (how systems synthesize information and produce decisions), adaptation (accommodating new environments), and perception (recognizing entities in the environment) (Wei & Shakarian, 2024).
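
One way to picture such a framework in software terms is as an interface that any metacognitive agent would have to implement, with one method per dimension. The method names and signatures below are illustrative assumptions, not an API from Wei and Shakarian (2024).

```python
from abc import ABC, abstractmethod

class MetacognitiveAgent(ABC):
    """Illustrative interface mirroring the four dimensions named above.
    The method names and signatures are assumptions, not the authors' API."""

    @abstractmethod
    def check_veracity(self, claim: str) -> float:
        """Transparency: estimate how trustworthy a piece of information is."""

    @abstractmethod
    def reason(self, evidence: list[str]) -> str:
        """Reasoning: synthesize evidence into a decision."""

    @abstractmethod
    def adapt(self, feedback: str) -> None:
        """Adaptation: revise strategy for a new or changed environment."""

    @abstractmethod
    def perceive(self, observation: str) -> list[str]:
        """Perception: recognize the entities present in the environment."""
```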

The gap between human metacognitive abilities and AI capabilities underscores the profound complexity of conscious self-reflection and abstract thought. While AI continues to advance rapidly, systems that can truly reflect on their own thinking, generate novel conceptual categories, and flexibly adapt their cognitive strategies remain an open frontier at the intersection of neuroscience, cognitive psychology, and artificial intelligence.


References

Chou, C. Y., et al. (2023). Metacognition in educational contexts. Educational Psychology Review.

Cortese, A., et al. (2020). Unconscious reinforcement learning of hidden brain states supported by confidence. Nature Communications, 11, 4206.

Frith, C. D. (2023). Consciousness, (meta)cognition, and culture. Perspectives on Psychological Science, 18(4), 821-833.

Greff, K., et al. (2020). On the binding problem in artificial neural networks. arXiv preprint arXiv:2012.05208.

Holm, E. A., et al. (2024). Performance and metacognition disconnect when reasoning in human-AI interaction. arXiv preprint arXiv:2409.16708.

Mitchell, M. (2023). Abstraction and reasoning in AI systems. Communications of the ACM.

Panigrahi, A., et al. (2025). Language models are capable of metacognitive monitoring and control of their internal activations. arXiv preprint arXiv:2505.13763.

Tankelevitch, L., et al. (2024). The metacognitive demands and opportunities of generative AI. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24), 1-24.

Wei, H., & Shakarian, P. (2024). Metacognitive Artificial Intelligence. Cambridge University Press.
