Friction, Uncertainty and the Future of Learning with Justin Reich S11E3 (142)

What if the biggest risk of AI in education is not cheating, but the quiet erosion of friction, struggle and uncertainty — the very conditions that make learning possible?

In a moment where there is no clear best practice for AI in schools, are educators being called to stop searching for certainty and start building “local science” instead?

🎙️ Episode Summary

In this episode, Louka Parry is joined by Justin Reich, Associate Professor of Digital Media at MIT and Director of the MIT Teaching Systems Lab, for a thoughtful conversation about AI, schooling and the future of learning. Together, they explore how generative AI is disrupting familiar classroom practices, why friction and productive struggle still matter deeply for learning, and how schools can respond when there is no clear “best practice” to follow.

Justin argues that educators should be cautious of easy answers and instead listen closely to students and teachers, run local experiments, and assess what is actually improving learning rather than just performance. The conversation also dives into assessment, agency, cognition, technology’s mixed record in education, and the importance of humility in a time of rapid change. It’s a rich and grounded discussion about uncertainty, experimentation, and what schools should hold onto as they navigate an AI-shaped future.

👤 About Justin Reich

Justin Reich is an Associate Professor of Digital Media at MIT, Director of the MIT Teaching Systems Lab, and one of the leading voices exploring the intersection of technology, teaching and school improvement. His work examines how schools use technology, how teachers learn, and why innovation in education so often falls short of its promise. Justin is the author of Failure to Disrupt: Why Technology Alone Can’t Transform Education and Iterate: The Secret to Innovation in Schools, and host of the TeachLab podcast. With a career spanning wilderness education, high school teaching, and academic research, he brings a rare blend of practical insight, intellectual rigor and humility to some of the biggest questions shaping the future of learning.

🔗 Connect with Justin Reich

🔗 Stay Connected with Louka Parry

Tune in to be inspired, challenged, and reminded why love truly is at the heart of learning.

[Transcript Automated]

Louka Parry (00:00)

Well, hello everybody. Welcome back to The Learning Future Podcast. I'm, of course, Louka Parry, and today we are speaking with a phenomenal educator who's really sitting at the intersection of so many different things happening in education, technology and learning. We're speaking with Justin Reich. He's an Associate Professor of Digital Media at MIT. He started his career leading outdoor expeditions and teaching wilderness medicine, taught high school history, and then worked with schools on technology integration.

At MIT, he studies how schools use technology and how teachers learn to improve their work. He's authored several books, including Failure to Disrupt: Why Technology Alone Can't Transform Education and Iterate: The Secret to Innovation in Schools. He's a fellow podcaster, the host of the TeachLab podcast, and at the moment they have this fascinating series called The Homework Machine, which is really trying to delve into the implications of AI. He's also the Director of the MIT Teaching Systems Lab. Justin, thanks so much for joining us.

Justin Reich (00:54)

It's a pleasure to be here, Louka. Thank you.

Louka Parry (00:57)

I'm really excited for the conversation, of course. My first question: what is something that you are learning currently that is kind of at the forefront of your mind?

Justin Reich (01:06)

The thing I learned today: I gave a workshop at Harvard for educators across the university who were just trying to help their various schools work on AI-related things. And whenever I do these events, I'm always trying to collect stories. And the one that caught my attention today was a law school professor describing how some of their Socratic seminars aren't working anymore.

Louka Parry (01:10)

Nice.

Justin Reich (01:32)

I think the way it works, and consider this not fully reported, not fully vetted yet, just, you know, an anecdote: the students are graded on their participation, so they prepare. They're increasingly preparing with GPTs, and they're getting outlines of conversations from previous years. So they essentially can over-anticipate the kinds of questions that are gonna be asked, and they prepare responses to those questions.

Louka Parry (01:32)

Interesting.

So sure.

Justin Reich (02:01)

And so the conversations are not having enough variation, and they're not having enough interesting missteps or moments of uncertainty. It's basically a series of prepared answers. Apparently, in some of these classes, it's really flattened out the utility of the Socratic discussion. And that just struck me as such an interesting puzzle.

Justin Reich (02:28)

Because one of the things that we tell educators is, well, if you're having problems with AI and assessment, then you just need to be doing more in-person stuff, and you just need to be doing more authentic stuff. This is pretty in-person, and this is pretty authentic. But it's not working, for really subtle reasons. I mean, it's probably a story about the importance of friction.

Louka Parry (02:38)

Do a Socratic seminar.

Yeah. Yeah.

Yes.

Justin Reich (02:53)

is that we need friction and resistance and heterogeneity in our learning environments for interesting things to evolve, and GPTs have flattened that out. There's also this, I mean, to me, totally dystopian aspect: here are all of these people paying all this money to come together in the same place, and the machines are kind of feeding them scripts, and the scripts are being generated off this text corpus, which means they're homogenized, and so forth. I mean, in some ways it's just absolutely terrible.

Louka Parry (03:13)

Yep.

Justin Reich (03:23)

But it's kind of fascinating too to watch this particular moment of breakdown.

Louka Parry (03:28)

There are so many topics I want to go through. Let's start with the big one, right? And it's something we're reflecting on a lot at the Learning Future too: this idea that friction is fuel, and the idea of frictionless digital experiences. The distinction between learning and performance, I think, is what you're calling out here. You know, if you can prep so much for performance, you're bypassing the learning. Of course, there was that viral MIT study, which was a small sample, not peer-reviewed, that talked about cognitive debt last year. Take us a little bit into your world, Justin. You know, like, what are you seeing?

And how are schools grappling with this?

Justin Reich (04:03)

Yeah, so I think in moments of change and complexity in education, people should not listen too much to elites. People should be careful about listening to professors and pundits and people at think tanks, and should really try to listen to students and teachers who are in classrooms. So staying really close to practicing educators, because elites are just saying things and making things up and are pretty far away from the action, and folks who teach every day are right in there.

So as GPT has arrived, one of the first things we did, we started this project called the Homework Machine, where we interviewed 120 students and teachers across the country. The first question in our interview protocol is just, how's your year going? Like, what's going on? And for some people, actually especially in the United States and more affluent places, AI is really the thing they want to talk about. This is the thing that's causing challenges, messing up our schools, or whatever they would say about it. And of course in lots of places in the United States, the problems are chronic absenteeism, the problems are poverty, the problems are violence. AI is the 17th or 97th thing on the list. And I think what students and teachers in K-12 schools told us was

Louka Parry (05:04)

Hmm.

Hmm.

Justin Reich (05:24)

you know, just a story of uncertainty, that the old practices are not quite working and the new practices have not come into being. And schools are not well set up to manage extended periods of uncertainty. It's just not a thing that historically we've asked the US education system to do, and I doubt it's what people have asked the Australian education system to do.

And I mean, your listeners will know, but not everybody realizes, just the extremely small margins by which these schools operate. If six middle school teachers call in sick, the building doesn't work anymore; children are being supervised by a janitor in the cafeteria. There's not this huge overhead staff with which to manage change. I mean, it was that same person at Harvard Law School who was saying that what's happening in the

Louka Parry (06:04)

100%.

Justin Reich (06:21)

legal profession is changing and is uncertain. And if you're at a law school, your job is to prepare people for the legal profession. So you look to the legal profession and say, all right, what's going on? What do you need? And we're going to make people who can do that. But if the field itself is in a state of uncertainty, then the school is kind of stuck in this uncertainty. So that's one of the places that I've ended up most: thinking about, like,

What kinds of supports could we offer to schools that would help them deal with unfortunately pretty long periods of uncertainty and to strengthen some muscles in that regard, which is gonna be hard.

Louka Parry (07:02)

You're clearly someone who gets to speak through story a lot, through the podcast. And you know, there are those 120 interviews. I want you to just double-click on that practice comment, which I think is so important to remember. You know, we always used to say in education, everyone has a view on schooling because everyone's been to school, by and large. Now everyone's played with AI to some level, so everyone has a view, and you're someone who's kind of listening to these stories of practice.

How are you seeing people manage that uncertainty well and where are you seeing it absolutely fall in a heap?

Justin Reich (07:39)

Yeah, I would say educators have a natural, and in many circumstances positive, healthy, to-be-celebrated, reaction to seek best practices. If there's a group of elementary school teachers and your kids are not learning to read very well, you should not invent reading instruction from first principles. You should go find experts and say: experts, tell us everything that we've learned over the last hundred years about teaching reading well, and do what those experts say. I mean, you're gonna have to modify it for your local circumstance, not slavishly, but we know how to do this.

I think, for the foreseeable future, people who automatically seek best practice are just gonna find folks who are saying things without evidence. There is no best practice for how to manage the arrival of GPTs in schools. And so you don't have to look for it. You know, I was talking to an educator the other day who was like, man, there are all these AI literacy frameworks out there, which do you think is the right one? And I was like, they're all made up. They're all too early. None of them have good evidence behind them. It's very likely that substantial portions of them are wrong, and substantial portions of the ideas behind them and the suggestions for executing them are wrong.

So I think seeking best practice, which works in lots of circumstances, is probably not the best strategy here. I think the best way to manage uncertainty is through what I've been calling local science. You look at your community, you identify some of your strengths, and you say, okay, we think that maybe the best way for us to navigate this pathway forward is through this kind of approach. And that approach could be,

Louka Parry (09:21)

Nice.

Justin Reich (09:35)

man, we really got to find a way to preserve the very best parts of our instruction and kind of keep AI at bay. Or it could be, man, there are some places here that we really want to invest. I was in this room of Harvard instructors today, and there were some folks who were like, we've got to figure out how to preserve really important things that we do. And there were other folks who said, our fields are being transformed, and we have to transform our instruction. Those are both reasonable bets.

Louka Parry (09:58)

Mmm.

Justin Reich (10:02)

So then you choose some of these things as a community and you put some experiments out there to try them and then you assess whether or not the things that you're trying are working. And the main thing that we can do to conduct the simplest form of assessment is we look at changes in student work. Most of us are not completely transforming the what of what we're teaching.

If you had a pig dissection in 2019, you probably still have a pig dissection in 2026. And there's going to be a lab that goes along with your fetal pig dissection. But you're probably either putting in a bunch of new rules to keep AI out of it, or you're probably introducing a bunch of new policies to allow AI into it. So you grab a bunch of pig dissection lab reports from 2019, and you grab another set from 2026. In the best of all possible worlds, you mask all the names and you shuffle them up,

and you read them so you don't know which is which, and you say: which of these works is evidence of the kinds of biology thinking that I want to see? Which of them is showing the mastery of content understanding, the mastery of the procedure we're trying to get, your anatomical knowledge, your sense of how systems work? And are the new things that we're implementing amplifying the things that we most care about, or are you getting assignments back where you're like,

Justin Reich (11:26)

we tried to keep the AI out, but it's clearly in there. Or, we let the AI in there and people are saying crazy stuff now. Or, the work is so much better: we gave them a new set of tools and we're pretty confident that the thinking is improving. But I think that's local science. For the foreseeable future, I think that's the way through uncertainty: you make some bets and you try some experiments, and then you do some assessments to see whether or not the things you're doing are working. And then you really gotta be

Louka Parry (11:36)

Mmm.

Justin Reich (11:55)

courageous about looking at things that are flopping and going, OK, we're not doing that anymore. We're going to try something else.

Louka Parry (12:00)

That's fantastic. I love that framing of local science. You know, it's kind of science, art, craft, the whole idea. I'm really curious about this idea around assessment. It's one of the biggest questions for us in higher education and in K-12. And so, like you said, you know, how do we judge the thinking? It used to be the case that we had an assessment task that would be completed, and of course it was a proxy in some ways for a learning process

or capability, which was about clarifying thinking or learning structure, or whatever the case would be. Take us into this world of learning versus performance, the potential positive use cases that you're seeing, and the cost of the negative use cases, because there's so much going on in this kind of, you know, "everyone now cheats" versus "everyone now is supported to think properly." It's just too binary, of course. So give us some nuance.

Justin Reich (12:56)

Yeah, I mean, I definitely don't think I have solved it, or that anybody else has. I think there are just interesting things to explore in there. I'm not ready to jettison all of the assessments that we used to do. I don't believe in things like, if GPT can do it, people don't need to do it anymore. It's like, no.

You know, the thing that makes us creative, the things that make us proficient, is the knowledge that's in our heads and the structures of knowledge that are in our heads. You have to assess what kind of knowledge is being shoved into people's heads as you're assessing how they do more complex things. You know, I mean, I teach a class called Learning Media and Technology, and at the end of the class they do a project about whatever they want.

But the first assessment in the class asks them to do some writing about learning theory. I basically say that there are sort of two big camps of learning theory, one that's more like direct instruction and one that's more like apprenticeship. And I have them do some exercises where they write about these two ideas. And I think some critics of what I do would say something like,

man, you know, the writing you have them do doesn't sound like it's super authentic. It doesn't sound like it's a real project. Why don't you just get them doing real work? And I'm like, dude, if I don't know that they understand these basic ideas about learning theory, they are very likely to do stupid projects that go off in the weeds and make no sense, because my job is to first make sure that they have the grounding that they need to do more complex things. So, you know, the answer is not gonna be

Louka Parry (14:34)

Hmm.

Justin Reich (14:42)

to turn everything into authentic projects. Like, we don't have time to turn everything into authentic projects, and authentic projects are dumb when people don't have the foundations they need to do authentic work. On the other hand, obviously, there's gotta be a class of boring rote work that we don't need people to do anymore. But, you know, I mean, there's a group of scientists

Louka Parry (14:50)

Actually, no.

Justin Reich (15:12)

that organize around this idea called cognitive load theory. And there's a bunch of interesting things about it. But I think one of the hearts of their argument is that it is rarely the case that the best way to train people for elite performance is to do the same thing that elite performers do. Every athlete knows this: you don't become an elite soccer player by just playing games of soccer against other elites, although that's important. What you do

Louka Parry (15:33)

Mm.

Justin Reich (15:40)

is you decompose complex practice into specific tasks, and then you do a variety of kinds of drills and other exercises that develop automaticity at those specific tasks. And there are all kinds of intellectual things that we do that have the same dimension to them. So, you know, what people need to learn is changing, and the kinds of scaffolds they need are changing. I mean, if I were to break it down to the simplest things, though, I would still say: the more you're in the core of the curriculum,

the more you're in the heart of the tested things we do, the more that people down the line depend on you. If you're teaching Statistics 101, there are people teaching Statistics 201 and 301 who are really depending upon you to make sure your students achieve certain kinds of understanding. And for those things, we probably ought to come up with a bunch of assessment systems where people really can't use AI,

and where we know that they have some capacity in their heads, at least for the foreseeable future, before we shake out which things we just don't need humans to know at all. The more you're in the periphery of the curriculum, the safer those places are for explorations. And the periphery can either be things that are untested, or stuff like that, or also, you know, there are courses in which we sample from a canon

Louka Parry (16:54)

Hmm.

Justin Reich (17:03)

rather than move through a sequence. So in our literature classes, we sample from a canon. There are way too many books you could possibly read, so you just pick a few of them. And if you do four Shakespeare plays instead of five Shakespeare plays, it doesn't mean that students' learning is permanently crippled. If on the fourth Shakespeare play you're just doing wacky AI stuff, that's great. Don't do wacky AI stuff with, you know, eighth grade algebra.

Louka Parry (17:20)

Ha

Absolutely.

Justin Reich (17:31)

Don't do it in the places where all the rest of the maths are depending upon students developing certain kinds of understanding.

Louka Parry (17:32)

Yeah. Yeah.

I love that distinction, Justin, between pulling from a canon versus pulling from a sequence. You know, the idea of a developmental continuum as being part of this new way of understanding growth, new metrics, even the idea of transcripts that are a bit broader but still have this kind of developmental progression inherent within them. I'm really interested in your view on, even just super basically, the, like,

just solid use of GPT. One thing that I would say is that many students are still using GPT as a learning assistant, and that's problematic because of the cognitive offloading, cognitive debt issue, versus as a tutor. And this idea of the study modes, or Khanmigo, and a whole bunch of others that you'll be familiar with. What are kind of the basic principles for using these tools to help your learning, or to help your students learn? 'Cause they're going to largely use them anyway.

So how can we get a bit of guidance to them? And I know you've done some fantastic work and released a great report last year that kind of spoke to this somewhat.

Justin Reich (18:46)

Yeah, although I would say that the report we released, well, maybe I'll tell the whole story of it. My staff came to me and said, we want to write an AI guide to schools. And I was like, absolutely not. There are 800 AI guides to schools, and we are not writing the 801st AI guide to schools. It's not helpful to people. And they went and talked with the educators we've been working with, and they're like, no, we're gonna write a report. And I was like, okay, fine. But there are two rules. One is,

Louka Parry (18:52)

Yeah, please.

Hahaha

Already, yeah.

Justin Reich (19:15)

Everything that's in it has to be from a practicing educator. So no pundits, no professional learning people, no think tanks, no thought leaders, just practicing educators. And second, we have to tell everyone very clearly from the beginning that we have no idea whether or not anything in it is right. So, I mean, I really don't think there is a great answer to your question yet. You know, the Harvard Physics Department did a study

Louka Parry (19:39)

Hmm.

Justin Reich (19:41)

where they replaced a bunch of in-class active learning experiences with a GPT physics tutoring experience, and students did better. So if you're a Harvard student studying physics, that's pretty applicable to you. It might not be applicable to the vast majority of other people. I mean, one of the things we know is that digital learning tools often particularly benefit already educated, already affluent people.

Now, some of your listeners may work with already educated, already affluent people, and then it's like, all right, let's bust out the GPT tutors. There was another study that Edie did with DeepMind where they had GPTs giving advice to human tutors who were sort of screening the advice for people giving math help. And the benefits were measurable, but they were extremely small.

Louka Parry (20:14)

Mm-hmm.

Justin Reich (20:38)

They didn't have a huge breakthrough. And then, I mean, there are definitely cases of AI psychosis and AI mania, people who develop these really unhealthy relationships with these tools. And so, you know, if we induce an Australia-sized nation of children to spend more time talking to GPTs, what fraction of them are going to descend into psychosis or mania?

Louka Parry (20:46)

Mmm.

Justin Reich (21:06)

And is there a tolerable fraction, or should schools just not be in the business of encouraging people to talk to GPTs? All these things are not known. And so I think some of the principles that I would use to guide my experiments would be talking to students and teachers

Louka Parry (21:17)

Yeah.

Justin Reich (21:28)

about co-creating these experiences with them. That's something that I do with my students at MIT. We're constantly talking about, is this thing helping you? Is it not? Really thinking about the age of people. So I think as folks are older, there's probably more room for experimentation. And I think there ought to be a lot more caution, a lot more parental discussion, a lot more discussion with school leaders and things like that about experimenting with younger kids.

Louka Parry (21:36)

Mmm.

Justin Reich (21:57)

And then some form of assessment of work product. You know, if there are people out there who are generating experiments where they're trying some kind of tutoring routine, and the students like it and the outcomes are better and people don't seem to be going crazy, then it seems like, all right, well, let's keep doing some of those kinds of things. But I think a lot of the folks who are trying these kinds of experiments are finding out, like, you know,

we're not getting positive outcomes out of this. We're getting experiences where there's a veneer of quality in the work that people produce in the short term, and there's not proficiency in the long run.

Louka Parry (22:37)

Mm.

Mmm. Yeah.

What I'm really struck by, Justin, is kind of your humility, actually, in this conversation. You know, on the one hand you could be like, "this is what you need to do," and you hear a lot of pundits saying that. And there's a lot of the sparkly, shiny-object syndrome that we seem to have: have you tried the new model? Versus, what's the impact on learning? What's the impact on cognition? And I want us to talk just a little bit about that impact on cognition, 'cause it's certainly, like, a signal,

soon to be a trend, that I see in some ways as a pushback against ed tech and big tech more generally, with, you know, what's going on in California. And I'm just really curious about your view as someone that's been across this for decades, working in this digital media, technology, learning, schooling intersection. Where should we just not use technology at all? I mean, I think some schools, for example, are just pulling laptops out, and so are whole countries, by the way, as you know,

and removing all the technology in the classroom, realizing that these things also remove friction, and that friction, that productive struggle within the cognitive process, is literally the biological process of learning. So what's your kind of musing on that at this point in history?

Justin Reich (23:59)

Yeah, I mean, for me, one very personal thing I feel is that when I started teaching in 2003, my students didn't have lots of daily, interesting intellectual engagements with computers and the internet. And that was something special that I could provide as a teacher. And now I feel much more like my undergraduate students are glued to their phones in unhealthy ways, are isolated from their classmates.

Louka Parry (24:07)

Cool.

Justin Reich (24:29)

in social situations in unhealthy ways, and that actually a special thing I can create for them sometimes is spaces where they get to be with one another and be in connection to other people. And that's not because the actual capacity of technology to support learning has changed; it's because the context of their lives has changed. So that's one thing that I think. A second thing that I think is a challenge in educational systems,

or just an unusual feature, is that it's usually more important to get one system right than it is to pick the one right system. So there are gonna be school systems in the future that get rid of all of their technology and do outstanding in-person instruction with materials and with students in social relations with each other. And those students are gonna be happy, productive people who go on to contribute in lots of different ways, including technology-rich ways.

Louka Parry (25:06)

Nice.

Justin Reich (25:25)

There's also gonna be schools that try that and do a lousy job and their students are not gonna be well prepared. By contrast, there's gonna be schools that are like, we are gonna AI all the things. And there might be some of those places that AI all the things who do it really well, who create really rich learning experiences, who get like a lot of these delicate balances right, who attract students who are sort of into those things. And there's definitely gonna be schools that AI all of the things and their students are not gonna be well prepared.

Louka Parry (25:28)

Mmm.

Justin Reich (25:51)

So, I mean, another way of saying it: there's more variation within models than there is across models. But I think, you know, another thing which is just the case is that education technology companies have not delivered on their promises. They have not, particularly in core tested subjects; it is really hard to find evidence that these technologies do a lot

Louka Parry (25:57)

Nice.

Justin Reich (26:20)

to improve literacy, improve mathematical fluency, and other things like that. And then the other problem over the 25 years that I've been teaching with them is that they've gotten worse. I mean, the thing that we could have thought 25 years ago is, man, these phones are awesome. They connect us to the world's information, but, you know, they're kind of distracting sometimes. And they're only a little bit more connected to the world's information than they were 25 years ago.

Louka Parry (26:39)

Mmm.


Justin Reich (26:49)

And these companies are just spending billions of dollars a year optimizing their capacity to hijack our attention and put it on advertisements. And so, you know, I mean, I don't know, there's a line of sort of AI thought leader pundit people that are like, oh, the AI you use today is the worst AI you'll ever see. And I think some of that is true in the sense of the underlying technical capacity of the technology. But these companies have gotten

so extremely good at hijacking user communities and exploiting their eyeballs for value in other places that it's quite possible the consumer experience of AI tools could get much, much worse, as they include more advertisements, as they include more sponsored content, as we're seeing them include more sexualized material and other things that we don't want anywhere near schools.

Louka Parry (27:47)

Mmm.

Justin Reich (27:48)

And all of those kinds of changes are gonna shape the kinds of hard choices that schools will need to make. Maybe the last thing that I will say is: for people's lives, they're gonna learn doing stuff with the internet. If you live in the networked world, as soon as you leave school, there's all kinds of stuff that you're gonna learn from the internet, about beating video games, about doing your hair,

about, you know, I was fixing my snow thrower, which is probably not a problem you all have a lot in Australia, but I had to rebuild the carburetor with our last blizzard. And I think if we don't have learning experiences in schools, with adults, where we help people get good at learning online, that seems to me to be a thing I would be very cautious about jettisoning from school. If you told me that we're not using

Louka Parry (28:20)

Not particularly. I hope you got it done.

Justin Reich (28:44)

know, online reading apps anymore and that we're doing most of our writing by hand, I'd be like, I'm interested in people trying that. When people are more into, like, just no technology at all, then I'm like, there's a lot of cool learning that you can do if you can be connected to the networked world, and you'll probably be better at it if adults help you. So, I mean, those are a lot of different things. That was not one answer in there. That was like four of them, but.

Louka Parry (29:08)

No, I love that.

I'm loving it, Justin, honestly. I think maybe I just want to talk about agency and your view on agency, because implicit in that view, in terms of an AI-first school versus a low-tech school, is this idea of becoming an agentic human being, which means being able to act instead of being acted upon. And your reflections on what I would call surveillance capitalism, or the attention economy,

and all the other books that have been written on them. And I think, you know, just to flag, I am very curious about the attachment economy, which I think is already underway. And there's some other people talking about that, like Zach Stein and others. It was dopamine, now it's dopamine and oxytocin, and that's problematic. But there's something about, you know, how do you create the conditions for which a young person steps into the world, still with a life of learning, by the way.

But they've stepped out of at least one of those rites of passage, out of high school or into college, or maybe out of college into the workforce, whatever the case might be. And they have expertise, but they've also got agency. And I just wonder about that as a really key theme for the future of learning, knowing that if we create all these achiever types, you know, and this is Jenny Anderson and Rebecca Winthrop's work here, you might've seen The Disengaged Teen, you know, if you're all in achiever mode, you're highly engaged, but it's low agency.

And then you step out into a landscape like the field of law you were talking about, where it's about what you do with what you know, how agentic you are, the questions you ask, how you prompt your suite of agents to work alongside you, your paralegals, et cetera, et cetera. What's your reflection on how to cultivate agency, including with these tools or without these tools?

Justin Reich (30:51)

Yeah, I mean, the way we cultivate almost anything in educational systems is by giving people part-task practice that leads to whole-task practice. That could be for agency or swimming or writing or lots of other things. So you ask, what are circumstances where people really step out? I went to this wonderful talk by a guy named Carlo Rotella,

who's at Boston College, teaching their freshman lit class, which he's been trying to do in a fully AI-resistant way. It's a class of 35, so kind of big for a literature class. And he has this rule, a grading policy, that all of his students have to make a contribution every class. And in particular, he makes sure that every student talks within the first two weeks. Now what happens if a student doesn't talk within the first two weeks? He basically brings them into office hours a couple of times.

And they talk about why the person is not talking, and then he says, okay, this is the question that I'm gonna start the next class with. Here is the reading. Go work on the reading a little bit, and then come back to me and tell me what your answer is to this question. And then, if he needs to, they rehearse the question in office hours. He asks the question, the person says the thing, and they're like, great, that's what we're gonna do at the beginning of class. And then everybody feels like they're a full contributing member of the class, and they have that

wonderful breakthrough experience of being a talker and a contributor. And he didn't use anything that's different than any coach or anything else. It's just like, well, let's practice it and then you'll do it in front of other people. But I also think for some of the biggest challenges we have in society, we should not assume that education is gonna solve the problem. Like, there's the sort of famous, I don't even need to explain it, you probably know the marshmallow test,

Louka Parry (32:29)

Nice.

Justin Reich (32:42)

which was the idea of, like, you put a marshmallow in front of a kid and say, if you don't eat it in 10 minutes, you get two marshmallows. And people thought for a while that it was a measurement of people's individual capacity to, you know, suffer in order to have future gains. But there was subsequent evidence that suggested that, basically, if you grew up in an affluent household,

Louka Parry (33:02)

Delay gratification, yeah.

Justin Reich (33:08)

you did better on the marshmallow test than if you grew up in a resource-constrained household, where future gifts were less likely to come around than things in front of you. So actually, maybe what makes agentic people is people who are well-fed, people who have stable homes, people who live in safe communities. And schools can be part of that, but I don't know, there's this saying that every time the French have a problem, they hold a general strike, and every time the Americans have a problem, they make a new class,

which is to say a new course. Which is to say that schools are pretty amazing, but they are not gonna solve all of the problems that we have in our society. For a bunch of these, it's gonna be social movements that solve them, not the agency class.

Louka Parry (33:48)

Yes.

Yeah, it's such an interesting thread. You know, here in Australia we have quite a lot of conversations at the moment about theories of learning. It's really this, you know, contest between your inquiry-led approach and your explicit instruction. It's an interesting dichotomy, and I think both are useful, both are tools, but there's something beyond that. But the other question we always talk about is excellence and equity, Justin. And equity, to me, is this idea that you can have agency in some way. So it can't just be a class that you take; it has to be experiences you have, including ones that meet your needs. And so for me, when you think about the biopsychosocial model of health, you know, in education we sometimes just assume that the instruction is the most important part. But as you know, when I was a school leader in an outback school, it was so clearly the case that it was actually all the other preconditions that needed to be met. And so if we're to have high agency, you know...

Talent is universal; opportunity is not. There's definitely something else about, in some ways, restructuring the way that society works for children to thrive. And where societies fail to do that, that is a malfunctioning society, where the incentives have been completely co-opted by certain interests. And I think we're seeing that take place in many societies around the world at this point in time, you know, not least of all the increasing complexity and the hot war that we're currently experiencing as this goes to recording. It's just such a devastating and yet fascinating thing to consider.

There's not much to say after that. That was my face. You're just letting it, yeah, expand. So I guess then, just to come to a close, if we were to have this conversation in 10 years' time, I'd love for you to talk about a scenario that you see. And again, nothing's guaranteed, but what's a scenario that, in some ways, you're fighting for

Justin Reich (35:29)

No, no, that was your piece. You got to express that vital idea and I was just letting it linger there. It was good. I agree.

Louka Parry (35:56)

through your work, through the way that you amplify voices of practitioners and students, the work you do across the Teaching Systems Lab. What's the scenario in 10 years that's a bit like your North Star for schools?

Justin Reich (36:08)

So I don't put a lot of credence in theories that something about new technologies will completely transform education. That hasn't happened for the last century, and I don't think it will happen in the decades ahead.

And if it does, it's not totally clear to me that we'd be really good at preparing for it. So it seems like preparing for the things we know makes a lot of sense. I'm really hopeful that 10 years from now, instead of saying, ah, we're living in this world of constant uncertainty and you have to do all these local experiments, we'll be like, hey, first of all, there were a whole bunch of big science experiments we did, giant randomized controlled trials, where we can tell you: do you need to know how to teach a kid to code with a copilot?

We totally know how to do that. Like this is what Google does and this is what Amazon does and this is what other software engineers do and here's what you can teach a fourth grader and here's what you can teach a seventh grader and here's what you teach a tenth grader. Today you could not do that. You could not give people that kind of accurate guidance but I think in eight to 10 years we will be able to. I would hope we would be able to do the same with things like research and knowledge finding. I would hope we would do the same with writing.

If big science doesn't get us there, I hope that there's more accumulated little science. I hope there are more networks of schools that are like, you know, we're a group of 30 outback schools, and across the 30 of us we tried a bunch of different policies for addressing GPTs, and the ones that seem to be kind of working are these ones. And we're continually revisiting them in one way or another, but, you know, we're revisiting them less and less because they feel more stable.

I don't think, you know, I mean, historically we've asked schools for the most part to teach kids how to read and write and be nice to one another and do some math, learn some other things about the world. It's not clear to me that those things need to be totally transformed. I think they could still be a pretty good foundation for using all kinds of digital tools. I guess in part I feel that way because

I think we're going to find that it's actually not that hard to use AI in the sense of getting AI to spit stuff out. What's really hard is discerning the quality of the output. And what makes people good at discerning the quality of output is domain knowledge. People who are really good plumbers who use AI are not going to have specialized AI knowledge; they're going to have specialized plumbing knowledge

that's gonna let them know, like, oh, the AI's off its rocker here, I need to find another one, or, oh, that was a really creative plumbing solution, I'm gonna try that in fixing this thing. So it's a very conservative view, but I think the history, you know, we are in our second century of technologists coming to schools and saying that this new technology is about to transform what we're doing. You know, in 1913, Thomas Edison said that in 10 years

all text would be gone from schools and be completely replaced by film strips, which did not happen. So we're sort of a hundred years into this journey where betting on a more or less stable system with some uncertain components has been a reasonably good bet.

Louka Parry (39:30)

I love that, Justin. That's a good history teacher right there. You know, history doesn't repeat, but it rhymes. Beautifully said. All right, final question for you. It's been wonderful to have you talk about some of your views and elevate the perspectives of people working in schools. What is your take-home message to the listeners of this podcast about the future, about learning, about schooling, whatever you want to leave us with?

Justin Reich (39:54)

Yeah, I think if you are a practicing educator and you're feeling like, man, things are uncertain, I feel like we should know what to do here: we do not know what to do. We do not know what to do. And actually, a lot of times our early guesses are not that good. And so that's terrible and feels bad, but maybe it feels a little bit better knowing that my colleagues at MIT, the people I talked to today at Harvard,

Louka Parry (40:06)


Justin Reich (40:20)

the folks at lots of other schools that I've talked with around the United States don't have any better answers. But, you know, many, many educators work in communities of smart, caring teachers who have the energy and the courage to come together as communities and try new things. And so I think as people pursue a sort of agenda of local experimentation, local science, they're going to iterate towards some things that feel like they work better, they'll do some evaluation along the way, and they'll share what they're learning with their colleagues. And hopefully some of that local science will coalesce into something that looks more like best practice. There'll be some really cool big studies that are done, funded by the EU or the US or Australia, that from the top down give us some big science answers. So some of this uncertainty is going to be resolved. But we don't have to be in a rush to get there. We can just accept, like,

these are the challenges we have. We're gonna have to bring a lot of humility to what we're doing. And also, like, the GPTs are so weird. They do such weird stuff, and they're doing such weird things to people in society, and kids love that stuff. I think a safe place to lean into and explore is just how bonkers everything is. Playing with that is fun. That's kind of the vibe that I'm sharing.

Louka Parry (41:30)

Yeah.


Justin Reich (41:45)

You know, people can decide. We'll see whether or not history validates that as a reasonable point of view.

Louka Parry (41:51)

I love that. I love that, Justin. Super playful. There's something beautiful about all being in it together. So thank you so much for joining us on The Learning Future podcast.

Justin Reich (42:01)

It's a pleasure. I hope you all are having great success in your experiments, and I hope people check out The Homework Machine, on the TeachLab podcast. We've got seven episodes of great stories of the lives of American educators, and I'm sure they'll resonate with what Australian and other Oceania and Pacific educators are wrestling with.

Louka Parry (42:22)

It's good down here, man. It's all going well. Yeah, check out teachlabpodcast.com, and also the guide to AI in schools, which was a fantastic report, Perspectives for the Perplexed, which Justin tried not to release, but his team overwhelmed him with suggestions from schools and students. Thank you so much, mate. All the best.

Justin Reich (42:41)

You bet, Louka.
