Artificial intelligence (AI) is reshaping the landscape of knowledge production and dissemination with unprecedented power. It provides information, generates text, and even simulates reasoning with astonishing efficiency, bringing revolutionary opportunities for academic research and information services. However, beneath this wave of technological optimism, a deeper and less perceptible challenge is quietly emerging: as obtaining "answers" becomes easier than ever, are we losing our ability to truly "understand"? An "Illusion of Explanatory Depth" born from technological fluency is becoming the core paradox faced by the knowledge interface in the AI era, forcing us to reevaluate the nature of information, knowledge, and understanding, and delineating new coordinates for the future mission of library science and information science.
I. The Roots of Thought: An Ancient Warning from "Writing" to "Algorithms"
Concerns that external information carriers may weaken human intrinsic understanding are not new to the AI era. As early as ancient Greece, Plato, through the voice of Socrates in "Phaedrus," warned of the potential dangers of "writing": it makes people reliant on external reminders rather than internal memory, thus acquiring "the appearance of wisdom" rather than "true wisdom." More than two thousand years later, generative AI, especially large language models (LLMs), can be seen as the ultimate form of this "external memory" and "external understanding." It can integrate information, organize language, and quickly generate seemingly perfect answers with unparalleled fluency, easily leading users to the illusion that they have mastered knowledge.
This phenomenon is known in cognitive psychology as the "fluency illusion": when information is presented clearly, coherently, and in an easily processed form, people tend to overestimate how well they have grasped it. AI is a powerful catalyst for this illusion. What it presents is not fragmented data but highly organized, rhetorically optimized "information products." In interacting with AI, users skip the friction-laden stages of traditional knowledge exploration, such as laborious literature searches, comparison of multi-source information, weighing of conflicting viewpoints, and active construction of a knowledge system. AI's "one-click generation" bypasses these necessary cognitive efforts and presents the endpoint directly, but in doing so it deprives users of the valuable journey toward that endpoint. Users may "have" the answer yet "lose" a deep understanding of the complex logic, underlying assumptions, and limitations behind it.
II. A Shift in Practice: Redefining the Core Mission of Library Science
In the face of the challenge posed by the "Illusion of Explanatory Depth," the core values and practical paths of library science and information science need to be reshaped. Our mission is no longer merely to serve as intermediaries or providers of information—an area where AI has demonstrated strong capabilities—but to become facilitators and guardians of deep human understanding.
First, this means that information literacy education must evolve into "critical AI literacy." Traditional information literacy follows the adage of "teaching people to fish": showing users how to find, evaluate, and use information. In the AI era, we must also teach them to think critically about the "fishing" itself, that is, to cultivate an understanding of how AI works (on statistical probability rather than causal reasoning), an awareness of its limitations (such as "hallucinations" and bias), and the habit of critically assessing its output. The core of education should shift from "how to find answers" to "how to question answers," guiding users to treat AI as a tool for stimulating thought rather than a substitute for thinking, and thereby guarding against the intellectual inertia of "cognitive outsourcing."
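The claim that language models operate on probabilities rather than causal understanding can be made concrete with a toy sketch of next-token sampling. The vocabulary and probability values below are invented purely for illustration; a real model scores tens of thousands of tokens with learned weights, but the mechanism is the same: it samples a likely continuation, with no internal model of truth.

```python
import random

# Toy next-token distribution: the model assigns every candidate
# token a probability, then samples one. It models likelihood of
# text, not correctness of facts.
# (Vocabulary and probabilities are invented for illustration.)
next_token_probs = {
    "Paris": 0.90,   # statistically dominant continuation
    "Lyon": 0.07,
    "Berlin": 0.03,  # fluent but wrong continuations still get mass
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick a token in proportion to its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "The capital of France is"
print(prompt, sample_next_token(next_token_probs))
```

Note that even a 3% probability on "Berlin" means the wrong answer is occasionally produced with the same fluent confidence as the right one, which is precisely why output must be questioned rather than trusted on presentation alone.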
Second, the role of librarians must evolve from "information navigators" to "knowledge curators" and "understanding guides." In an AI-driven information ecosystem, our professional value lies in the expert selection, evaluation, and organization of vast amounts of AI-generated content, providing users with trustworthy, high-quality AI tools and information sources. More importantly, we must guide users beyond the surface answers provided by AI by designing research paths, organizing thematic discussions, and offering in-depth consultations, exploring the multidimensional perspectives and deeper logic behind the questions, thereby promoting the true internalization of knowledge.
Finally, library science should actively practice the concept of "IRM4AI" (Information Resource Management for AI), leveraging our discipline's deep foundations in knowledge organization, data governance, and information ethics to participate in the construction of "trustworthy AI." By providing high-quality, unbiased training data for AI models, constructing rigorous domain knowledge graphs to enhance their reasoning capabilities, and establishing quality assessment standards for AI-generated content, we can improve the reliability of AI from the source and mitigate its potential negative impacts.
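The idea of grounding AI output in a rigorous domain knowledge graph can be sketched minimally as a set of subject-predicate-object triples against which a claim extracted from generated text is checked. The entities and relations below are hypothetical examples, and real systems use far richer ontologies and inference, but the principle of curated, verifiable structure is the same.

```python
# Minimal sketch: a domain knowledge graph as (subject, predicate,
# object) triples, used to verify a claim extracted from
# AI-generated text. All triples here are hypothetical examples.
knowledge_graph = {
    ("Plato", "authored", "Phaedrus"),
    ("Phaedrus", "discusses", "writing"),
    ("Socrates", "appears_in", "Phaedrus"),
}

def is_supported(claim: tuple[str, str, str]) -> bool:
    """A claim counts as supported only if it matches a curated triple."""
    return claim in knowledge_graph

print(is_supported(("Plato", "authored", "Phaedrus")))  # in the graph
print(is_supported(("Plato", "authored", "Poetics")))   # not in the graph
```

A check like this is deliberately conservative: it flags anything the curated graph cannot confirm, which is exactly the quality-assessment posture the discipline can bring to AI-generated content.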
III. The Fundamental Challenge: Can AI "Understand" and How Do We "Seek Knowledge"?
The paradox of the "Illusion of Explanatory Depth" ultimately leads us to a fundamental philosophical inquiry: Can AI truly "understand"? And how should we redefine "seeking knowledge" in the AI era?
Currently, AI's "intelligence" is primarily based on pattern recognition and statistical associations from vast amounts of data; it lacks the "embodied understanding" unique to humans, which is based on embodied experience, emotions, intentions, and values. AI can manipulate symbols but cannot experience the real world that those symbols refer to. Therefore, AI's "explanation" is fundamentally different from human "understanding." Acknowledging this fundamental difference is a prerequisite for avoiding the "illusion."
Thus, AI should not be positioned as a "cognitive substitute," but rather as a "cognitive enhancer." Its value lies in processing complexities and scales that are difficult for humans to reach, thereby uncovering hidden patterns, providing novel perspectives, and inspiring innovation. However, the ultimate construction of meaning, value judgment, and critical reflection must be completed by human subjects. The future challenge lies in designing AI systems that can clearly reveal their limitations, encourage users to engage in deep exploration, and promote human-machine collaboration rather than one-sided dependence.
Ultimately, the arrival of the AI era forces us to rethink the true meaning of "knowledge acquisition." It should not be simplified to the rapid input of information but rather encompass a complete process of active exploration, critical evaluation, deep thinking, associative construction, and innovative application. Safeguarding and empowering this process is the irreplaceable value of library science in the future wave. In an era where everyone can easily obtain "answers," cultivating the desire and ability to pursue "understanding" will be our most enduring contribution to society.
Reference: AI4IRM and IRM4AI: The Double Helix Engine Driving the Development of Information Resource Management.