The Limitations of AI in Teaching and Research
Artificial intelligence (AI) has transformed many aspects of higher education, from data analysis to personalized learning experiences. However, there are crucial moments that require human judgement and involvement, particularly in teaching and research. Qin Zhu, an associate professor at Virginia Tech, highlights the importance of recognizing AI’s limitations in these areas.

Teaching: The Human Element
Teaching is a fundamentally human endeavor that relies on empathy, intuition, and inspiration. While AI-powered platforms can customize lesson plans and provide instant feedback, they fall short in addressing the emotional and relational aspects of education. Educators interpret the unspoken dynamics of a classroom, adapting their approach to create an environment where students feel seen, heard, and motivated. Human interaction is essential for stimulating growth and transferring knowledge effectively.
In situations where students require encouragement, guidance, or understanding, AI’s limitations become apparent. Machines cannot discern the complexities of a student’s emotional state or provide the reassurance that comes from human interaction. Setting AI aside is therefore essential when the goal is to cultivate personal connections, inspire curiosity, and nurture students’ holistic development.
Research: The ‘What If’ Test
AI excels at processing vast datasets and identifying patterns, but it is less effective at abstract ‘thinking’ and at defining new directions. AI can suggest correlations, but it lacks the broader context and the ability to ask the ‘what if?’ questions that lead to paradigm shifts. Human imagination and philosophical questioning are essential for conceptual leaps, such as those that led to quantum mechanics or the structure of DNA.
Researchers must maintain a cautious approach, validating AI-generated results rigorously to uphold research integrity. Ethical considerations, particularly regarding bias in AI models and responsible data use, are integral to research practice. Relying solely on AI for literature searches may lead to missing critical papers due to outdated or incomplete datasets. Moreover, AI tools are not value-neutral and may have biases embedded in their data or algorithms.
Ethical Decision-Making: The Human Context
Ethical decision-making is an area where AI faces significant limitations. While AI can assist in diagnosis or support clinical decision-making, it functions best when complementing human expertise rather than replacing it. In highly sensitive areas, such as end-of-life care or organ allocation, decisions involve ethical considerations that transcend data-driven logic. AI lacks the capacity for empathy, introspection, or understanding of broader human contexts, which are pivotal in making compassionate and morally sound decisions.
Conclusion
AI is a powerful partner in education and research, but it is not a replacement for human judgement and involvement. Its ability to process vast amounts of data and perform repetitive tasks is unmatched, but it lacks the emotional intelligence, creativity, and moral reasoning that define humanity. As we integrate AI into more facets of life, it is crucial to acknowledge its strengths while recognizing its limitations.
Qin Zhu is an associate professor in the Department of Engineering Education at Virginia Tech, specializing in ethics and policy of computing technologies and robotics.