
What does it mean to be "AI literate" in education? As large language models and generative AI tools become ubiquitous, the University of Michigan School of Education has been grappling with this fundamental question. This presentation explores the faculty initiatives, new courses, and broader concerns surrounding AI proficiency in STEM education.

Faculty Initiatives at Michigan

The School of Education faculty is divided on AI in education - some excited, some fearful. Rather than avoiding the tension, Michigan has embraced it through workshops and collaborative exploration.

Summer Faculty Workshop

Throughout the summer, faculty gathered weekly to explore AI together. The sessions ranged from fundamentals to advanced applications:

  • Understanding basics: What is a large language model? What is generative AI?
  • Advanced use cases: Faculty sharing how they use AI in their own classes and research
  • Project brainstorming: Developing use cases for university and K-12 educators

"It was fun. We had a blast this summer. I don't always say that when I'm hanging out at the office in the summer, but we had fun."

Eileen Weiser Center for Learning Sciences

The newly opened center serves as a hub for incubating interdisciplinary research projects. Current plans include:

  • AI workshops and talk series throughout the year
  • Point-counterpoint discussions: Bringing together AI enthusiasts and skeptics to articulate strengths, weaknesses, and ways forward

The goal is not to create fights, but to have substantive discussions about how educators should navigate these powerful tools.

Emerging AI Courses

Michigan is developing a suite of new courses to prepare students for the AI-transformed educational landscape.

Course Offerings

Emerging Technologies for Learning (Spring semester):

  • Focus on AI and extended reality (VR, AR)
  • Understanding strengths and weaknesses from an educational perspective

Philosophical Course on AI and Knowledge (Ed Psych):

  • How AI tools are causing us to rethink the nature of knowledge itself
  • Examining human learning and what it means to know

AI in Education (Historical Perspective):

  • Looking at all waves of AI: neural networks, intelligent tutoring systems, expert systems, symbolic AI
  • Understanding that AI isn't just ChatGPT - there's a rich history of tools supporting education

"I will only do it if I can take a historical look at AI... In fact, it's hot, and then it's not hot, and then it's hot, and then it's not hot. We're in another hot right now, we'll see."

Graduate Certificate in Learning Experience Design (LXD)

Co-directed with Dr. Rebecca Quintana, this program prepares designers for technology-based learning environments:

  • Residential program: Students work with learning designers across campus units
  • Online MOOC series on Coursera: Learning Experience Design certificate available globally
  • AI integration: Exploring how AI can support the design process itself

This led to a second Coursera series: "Exploring Generative AI for Learning Design" - brainstorming ideas to support designers using AI.

LLM as Partner, Not Answer Machine

The central research question: Can large language models serve as intellectual partners rather than just answer generators?

Theoretical Foundation

This work builds on Salomon, Perkins, and Globerson's seminal 1991 paper "Partners in Cognition" - the idea of technology as an intellectual partner, not something that simply gives you answers.

Three Roles for LLMs

1. Design Partner:

Can LLMs work collaboratively with students during design activities? Not giving answers, but helping brainstorm and throwing questions back to help students think through problems. (A minimal prompt sketch follows below.)

2. Scaffolding Agent:

Using LLMs as tools to determine what kind of scaffolding to generate and show learners - and potentially to fade that scaffolding more effectively as learners progress. (A toy fading sketch follows below.)

3. Collaborator:

Training models to act as collaborators - understanding what it means to collaborate rather than just generate.

"We've been putting in NSF grants to explore these ideas about large language models being this sort of intermediary, helping, being a tool to think with rather than a tool to just generate answers for you."

Student Behaviors We're Observing

Some students are already using LLMs as thought partners rather than answer machines:

  • Getting ideas but still judging which direction to go
  • Asking "what direction sounds good and why?" then deciding themselves
  • Using generated multiple choice questions selectively: "This one's pretty good, I'll use that. This was just wrong. This was pretty good, but let me change it."

The key: students remain in charge, using AI as a brainstorming partner while maintaining their own judgment.

The Evolution: Information Literacy to AI Literacy

This reflection was sparked by rewatching The Post on a flight - the Steven Spielberg film about freedom of the press in the early 1970s Nixon era.

The Information Curation Timeline

Era 1: Newspapers as Curators

News organizations synthesized, curated, and presented information to the public. Different newspapers offered different lenses, but there was a managed flow of information.

Era 2: Search Engines (~1995)

AltaVista and other search engines gave students direct access to vast information. The famous "storm front" story: students researching weather typed "storm front" into a search engine and landed on Stormfront, a white supremacist website, instead of weather information.

"We thought, oh, maybe search engines, we can't just let kids loose on these things."

This led to the concept of information literacy - teaching students to judge, trust, and synthesize information.

Era 3: Large Language Models (Now)

AI can rapidly generate tons of information - some of it helpful for brainstorming and learning, but some of it false, misleading, or harmful.

The Central Question

If search engines led us to think about information literacy, what do we now need to think about for AI literacy? What are the proficiencies that STEM learners, educators, and professionals need?

The Dangers We Face

AI Accentuates Existing Problems

AI isn't creating new problems - it's amplifying societal issues we already have around truth, trust, and confirmation bias. The tools simply accelerate these problems.

The Struggle Problem

A conversation with a literacy colleague revealed a growing concern:

"What I fear is that these tools are creating this illusion that I can get the right answer without having to struggle. But it's in the struggle that the learning happens."

Students increasingly want extremely detailed rubrics - not to learn, but to know exactly what points to hit for their grade. They focus on the end product rather than the process.

The problem: In education, especially graduate school, it's the process that matters. These tools may be shortchanging the cognitive struggles we want students to engage in.

Confirmative Behavior

From the discussion, a German participant observed:

"Humans tend to believe in something first, and then they try to find a rational for it. AI can help with this: 'Hey ChatGPT, if I want to do this, what is a good reason to do this?'"

Graduate students may already know what they want to say, then ask ChatGPT for supporting references - the reverse of sound research practice, where you read the literature first and then draw conclusions.

The old norm: "You're not supposed to cite anything you haven't read."

The new danger: People won't care if information is right or wrong, as long as it helps them convince others.

Transparency and Accountability

Different institutions are setting different boundaries:

  • Journals: Most moving toward self-disclosure requirements for AI use
  • NSF: "You can use it in your proposal, but if you screw up, that's on you"
  • Reviewers: Not allowed to use AI in peer review
  • Syllabi: Range from "no LLM use" to "tell me exactly what you did with it"

An interesting finding: Many students feel it's acceptable to use generative AI for their own work, but get angry if they find out faculty are using it for something that affects them.

Equity concern: Different students have access to different tools - some can afford paid versions with better capabilities, while K-12 schools vary widely in what they can provide.

Discussion Highlights

Key Exchanges from Q&A

On Different LLM Capabilities:

"In math education, we're starting to pay attention to which LLMs can actually do the math. OpenAI does a lot better job dealing with math computation than Google. So we're starting to see the pros and cons of different large language models."

On the Usability Paradox:

"When I started in human-computer interaction, the whole point of graphical user interfaces was I don't need to know the inner workings of the computer. But now I feel like I have to know how all of these models are working... It feels like we've gone sort of backwards."

On Two Layers of AI Literacy:

"One layer is: can I use AI in a way that I get out of it what I want? The other is developing a sense of being critical towards what comes out. I'm not sure we're trying to fix the problem in the wrong spot."

On Perception and Fear:

"I do research with AI, but I'm scared of using AI when searching literature... I'm worried that others will label me as a lazy, unintellectual scholar. How do we deal with that?"

Toward AI Proficiency: Next Steps

Defining AI proficiency is urgent because the technology is outpacing the guardrails we have in place. Groups like this forum must start defining what it means to be AI literate.

Key Proficiencies to Define

"This is one area where the technology is really outpacing the guardrails that we have in place to deal with it... This is the kind of thing we have to define. And we may have to do it sooner than later."