AI Proficiency in Education: Opportunities, Risks, and a Framework

Jiliang Tang

Professor, Michigan State University
AI & Machine Learning Expert

Every opportunity comes paired with risks and challenges. In this keynote, Prof. Jiliang Tang presents a comprehensive framework for building AI proficiency - not learning to code, but understanding how to use AI responsibly, critically, and effectively in education.

The Core Message

"There is no free lunch. Every opportunity is paired with risks and challenges. We need to build AI proficiency - not to code, not to build models, but to use AI in the right way: for inquiry, creation, critical thinking, and responsibility."

The Opportunities: AI as an Equalizer

Breaking Down Barriers

AI can be an interactive partner that co-constructs knowledge with teachers and empowers them to think, create, and engage deeply with students. Most importantly, it can provide equal access to educational resources for everyone.

AI opens doors that were previously closed:

  • Students with limited resources can access the same educational infrastructure
  • Students with disabilities can share classrooms with all students
  • A student in Africa could join a classroom in Korea
  • AI acts as an intellectual partner, not just a tool

Jiliang Tang: "AI provides new opportunities for everything. There are many students with very limited resources. There are many students with disabilities. With AI, they can share the same educational resources, the same educational infrastructure with normal students. You can imagine in the future, one day, a student in Africa can join a classroom in Korea to share the same educational resources with Korean students."

The Risks: No Free Lunch

The Hard Truth

AI is fundamentally a "guessing machine" - a probabilistic statistical model that predicts the next word. This creates hallucinations that are undetectable by the model itself.

Hallucination: The Confident Lie

In Prof. Tang's example, give the AI "Georgia National" and it predicts "University"; give it "Georgia State" and it also predicts "University" - yet only one of those completions names an institution that actually exists. The model simply continues the most probable pattern; it cannot verify its own outputs.

This creates serious problems in education:

  • AI generates "comfortable but inaccurate content"
  • In domains where procedure matters, small errors compound
  • A mathematical error or flawed scientific explanation can damage learning
  • Students without verification skills will absorb misinformation
  • There's no mechanism in the model itself to detect these errors

Jiliang Tang: "Nowadays, AI is a probability statistical model. It's a guessing machine. If you give AI 'Georgia National,' AI will predict 'University.' That's good - Georgia National University exists. But if you give AI 'Georgia State,' AI will also predict 'University.' But Georgia State University does not exist. That is a hallucination. The worst thing is this hallucination is undetectable. There is no mechanism from the model itself to verify this output."
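The "guessing machine" idea can be made concrete with a toy next-token model. Everything below is illustrative - the corpus is invented and vastly smaller than real training data - but the failure mode is the same: the model emits the statistically likely continuation with no mechanism to check whether the result names something real.

```python
from collections import Counter, defaultdict

# A hypothetical, deliberately tiny "training corpus" (not real data).
corpus = [
    "georgia national university",
    "georgia state university",
    "georgia state university",
    "michigan state university",
    "ohio state university",
]

# "Training" is just counting which word follows each two-word context.
next_counts = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for i in range(len(words) - 2):
        next_counts[(words[i], words[i + 1])][words[i + 2]] += 1

def predict(context):
    """Return the most frequent next word - a pure statistical guess.

    Nothing here consults the world: the model cannot tell whether the
    completion names a real institution, only what was common in the data.
    """
    counts = next_counts[tuple(context.split())]
    return counts.most_common(1)[0][0]

print(predict("georgia national"))  # -> "university"
print(predict("georgia state"))     # -> "university", whether or not
                                    #    such an institution exists
```

Real language models replace the count table with billions of learned parameters, but the core operation - predict the likely next token, with no built-in verification step - is the same, which is why hallucination cannot be detected from inside the model.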

Bias and Privacy

AI is trained on internet data that carries biases and stereotypes. Models inherit these problems. Meanwhile, AI continuously collects data - but who owns it? How is it stored? How is it used?

Critical questions that remain unanswered:

  • Training data from the internet carries society's biases
  • Models inherit stereotypes from their training data
  • Who owns the data collected from students and teachers?
  • How is data stored and secured?
  • Will your data be used to train the next model?
  • We don't have education-specific AI ethics guidelines yet

Jiliang Tang: "How do you get the data? We collect the data from the internet. Unfortunately, the data collected from the internet carries bias. It carries stereotype. The model trained on this data will inherit those biases. In Europe, the US, China, we have some guidelines on how to use AI ethically. But for education, we need specific and dedicated efforts. Unfortunately, today, we don't have this kind of guidelines."
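How a model inherits bias from its data can also be shown with a minimal sketch. The corpus below is hypothetical and deliberately skewed; the point is that "training" on skewed counts reproduces the skew, because the model has no notion of fairness, only frequency.

```python
from collections import Counter

# Hypothetical, deliberately skewed training pairs (not real data):
# each pair associates an occupation with the pronoun seen near it.
training_data = [
    ("nurse", "she"), ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
    ("engineer", "he"), ("engineer", "he"), ("engineer", "he"), ("engineer", "she"),
]

# "Training" is again just counting: the model IS the table of counts.
model = {}
for occupation, pronoun in training_data:
    model.setdefault(occupation, Counter())[pronoun] += 1

def predict_pronoun(occupation):
    """The model reproduces whatever skew the training data carried."""
    return model[occupation].most_common(1)[0][0]

print(predict_pronoun("nurse"))     # -> "she"
print(predict_pronoun("engineer"))  # -> "he"
```

No one programmed a stereotype into this model; it emerged entirely from the distribution of the data - which is exactly how internet-scale training data transmits society's biases into real models.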

A Real Example: The Over-Reliance Problem

From Prof. Tang's Class

An undergraduate course at Michigan State University with ~200 students. Assessment had two parts: take-home homework (AI allowed) and closed-book in-class exams.

  • ~200 students enrolled
  • Near-perfect scores on take-home homework (AI allowed)
  • 80% failed the closed-book in-class exam

The Calculator Problem, Amplified

If you outsource too much thinking to AI, you lose the opportunity to build fundamental knowledge. It is like over-reliance on calculators, which eroded students' basic mathematical intuition - but worse, because AI can take over far more of the thinking.

The implications are serious:

  • Students performed perfectly on take-home work (using AI)
  • 80% failed when they couldn't use AI
  • They outsourced the learning, not just the work
  • Like calculators - students lost the ability to do basic math
  • This affects their capacity for advanced learning

Jiliang Tang: "I just finished all the final grades. Students performed perfectly on take-home homework. But 80% of them failed the in-class exam. Why? Over-reliance and misuse of AI. If you rely on AI too much, you're outsourcing too much thinking to AI. You lose the opportunity to train your own fundamental knowledge. Many people compare current AI to a calculator. Students who over-relied on calculators lost the ability to learn advanced mathematics."

The AI Proficiency Framework

What AI Proficiency Is NOT

AI proficiency doesn't mean learning to code. It doesn't mean learning advanced machine learning. It doesn't mean building models from scratch. It means using AI correctly.

Three Dimensions of AI Proficiency

  • AI Literacy: Understand how AI works. Know that AI is not perfect - it's a probabilistic model that guesses.
  • AI Practice: Use AI for inquiry, creation, and critical thinking. Don't trust AI completely, and don't outsource your thinking to it completely.
  • AI Access: Ensure safety, transparency, privacy, and human centrality when using AI.

Three Levels for Students

  • Fundamental (All Students): Basic understanding - what is a model? What is data? Why does AI hallucinate?
  • Disciplinary-Specific: Different requirements for different fields
  • Higher-Order Skills: Creativity, critical evaluation, human-AI collaboration

Fundamental Questions Every Student Should Answer

What is a model? What is data? Where does the data come from? Why does AI hallucinate? Where does bias come from? If you can't answer these, you can't use AI responsibly.

Basic AI literacy questions:

  • What is a model? (You've heard of GPT, ChatGPT - but what IS a model?)
  • What is data? How is it collected?
  • Why is the model biased? (If you don't know where data comes from, you can't understand bias)
  • Why does AI hallucinate? What causes errors?
  • Why can't AI verify its own outputs?

Jiliang Tang: "When you use AI, you need to have some basic understandings. We know they are AI models. But ask yourself - what is a model? AI is data-centric. What is data? How do we get the data? If you don't know where the data is from, you cannot understand why the model is biased."

Higher-Order Skills: What Makes Us Human

AI should enhance, not replace, higher-order skills. Creativity remains uniquely human. Critical evaluation of AI outputs is a new essential skill. Human-AI collaboration is a new competency we must develop.

Skills to enhance and develop:

  • Creativity: Even the best AI has very limited creativity - this is uniquely human
  • Critical Evaluation: New skill - ability to evaluate AI outputs since hallucination can't be eliminated
  • Human-AI Collaboration: Not just human-to-human - learning to collaborate with AI agents
  • Problem Composition: Breaking down problems for effective AI assistance

Jiliang Tang: "AI should enhance instead of weakening our higher-order skills. Some of these skills are a gift to our human beings. Creativity - even for the best AI model currently, they have very limited ability for creativity. We need to critically evaluate AI output instead of 100% trusting it. In the future, AI agents will interact with human beings. We need to learn how to do this kind of collaboration. Human-AI collaboration - there are new opportunities, new skills. We need to learn."

For Teachers: New Challenges, New Skills

The Challenge Ahead

"In the future classroom, AI will become an intellectual companion. That means it will challenge our teachers dramatically."

Supporting Teachers in the AI Era

Teachers need new PCK (Pedagogical Content Knowledge) for AI. They must learn to evaluate AI outputs in student work. Pedagogy itself needs adjustment when AI is part of the classroom.

What teachers need:

  • Professional development on evaluating AI outputs
  • New PCK specifically for AI in teaching
  • Adjusted pedagogical strategies for AI-integrated classrooms
  • Clear guidance on safety and ethics
  • Time for training (currently lacking)
  • Infrastructure support (hardware and software)

Jiliang Tang: "We must support our teachers from many perspectives. I'm working with my colleague Yasemin about PCK - Pedagogical Content Knowledge. We also need to develop professional development on how to evaluate AI. In the future, students' homework and assignments - probably part of them are from AI. Teachers must have the ability to verify. We need to adjust our pedagogy because AI is part of the classroom now."

The Goal

Enjoy the advantages, mitigate the risks, make it possible. Build AI proficiency so that students and teachers can use AI responsibly, critically, and effectively - keeping human beings at the center.