‘They’ll lose their humanity’: Dartmouth professor says he’s surprised just how scared his Gen Z students are of AI

When Scott Anthony (Dartmouth College, class of 1996) left a 20-year career in high-stakes consulting to join the faculty at his alma mater in July 2022, he thought he was leaving the “intense day-to-day combat” of the corporate world for a quieter life of teaching. Instead (as Anthony previously described in a commentary for Fortune), he arrived on campus just months before the release of ChatGPT, landing him squarely in the center of the artificial intelligence (AI) revolution that has left many of his students paralyzed by anxiety.

In a recent interview, the former consultant at McKinsey and Innosight, a boutique firm cofounded by Clayton Christensen and Mark Johnson in 2000 and acquired by Huron in 2017, revealed the prevailing mood among the next generation of business leaders isn’t just excitement—it is fear.

“One of the things that really surprises me consistently is how scared our students are of using it,” Anthony said. He clarified this anxiety isn’t merely about academic integrity or cheating: plenty of his students are excited to use AI and push into the frontier of the new technology, but a meaningful portion approach it with “hesitation and fear.” They are “scared full stop.”

“There’s something about AI where people, I think, worry that they’ll lose their humanity if they lean too much into it,” Anthony explained. That sets them apart from many of his long-tenured academic colleagues, who he said are usually eager to dig into the new tools at their disposal. The freshly minted author of Epic Disruptions: 11 Innovations That Shaped Our Modern World, Anthony talked to Fortune about teaching a course on disruption while education and work are themselves in the middle of being disrupted. “History teaches me very clearly that in the middle of a change like this, it’s very messy.”

The fear of losing yourself

Anthony said what he has learned from studying disruption, and from managing through it as a consultant, is that the pattern becomes clear only when you look back later; at this particular stage, “there’s just a lot of noise.” He said he understands his students’ concerns about AI and shares them to some extent: offloading too much cognitive work to AI will atrophy the critical thinking skills required to lead.

An eye-catching MIT study published in June would seem to make Anthony’s point. Titled “Your Brain on ChatGPT,” with a subtitle referring to the “accumulation of cognitive debt,” the study was widely covered in the media as supporting the fear voiced by Anthony’s students that AI tools can somehow harm humanity. It reported that “cognitive activity scaled down in relation to external tool use.” In other words, it suggested that using AI makes you stupider.

Vitomir Kovanovic and Rebecca Marrone, from the University of South Australia, argued in The Conversation at the time that the “brain-only group” repeated the task in question three times, benefiting from what is known as the familiarisation effect. The AI control group only got to “use their brains” to perform the task once, they noted, and so achieved only slightly better engagement than the brain-only group’s first try. They argued AI is functioning like a calculator, and tasks haven’t become advanced enough to put students through the wringer, even with AI tools. Anthony, who didn’t comment on that specific MIT study, told Fortune he has rolled up his sleeves on AI assessments.

“I’ve been teaching a class about how you lead disruptive change,” Anthony said, adding he wants students to find someone who needs to learn a particular topic and use AI to tackle that. This doesn’t mean he wants something like, say, an AI-generated song that required one prompt to make. “I want you to actually go and expose the guts of the work that you did so I can then go and see whether you learned anything or not.” Sometimes, he said, elegant outputs come from students who didn’t learn anything, but he also gets “rough outputs” where he can see what students are actually doing.

When asked about the example of someone like Jure Leskovec, the Stanford computer science professor who switched fully to blue-book exams several years ago, as Fortune reported in September, Anthony said he respected the move, but it wasn’t for him. “I’ve never given a blue-book exam,” he said, noting he’s only a few years removed from his consulting career; he may try it, but he’s not there yet. Some of his colleagues remain very strict: one colleague not only gives blue-book exams exclusively, “he does not allow people to go to the bathroom during the exam. You just, you can’t leave the room.”

He agreed with Leskovec that some changes are already irreversible: “The writing is all good now. The bad writing has been taken out.” This can be “dangerous,” he added, saying he really pushes his students to resist temptation.

“The thing I’ve just really been pushing, whether it’s students or whether it’s the executives that I’ve been working with, it’s so seductive and easy to say, ‘Let me offload,’” he said. The reason why, he explained, has to do with what he learned about Jerry Seinfeld and Julia Child while researching his book.

What Jerry Seinfeld believes about hard work

To paraphrase Seinfeld, Anthony said he tells his students “the right way is the hard way.” He recalled an interview Seinfeld gave to the Harvard Business Review in 2017 in which the famous comedian, who has a reputation as a bit of a micromanager, was asked if he ever wanted McKinsey to help with his process. “Who’s McKinsey?” he asked. When told it was a consulting firm, he countered, “Are they funny?”

Seinfeld was making the point, Anthony told Fortune, that the hard way to be funny is the right way, at least for him. Anthony said he wants students to do the “hard work” to develop the wisdom necessary to manage AI effectively.

“We just have to separate people from technology when we’re assessing learning or else we’re going to get AI regurgitation,” he warned. That can be useful for some things, “but if you’re trying to figure out whether people learn something or not, it’s useless.”

Anthony also drew on a fitness analogy: “You go to the gym, you want to lift any amount of weight, bring a forklift with you. You can lift the weight, but that’s not the point.”

Julia Child’s long record of failure before success

Anthony said his research, his teaching at the Tuck School of Business, and his writing show that people are getting bogged down by AI when they should be focused on the hard work Seinfeld was referencing. Take the example of the famous cooking author Julia Child, whose chapter Anthony said was his favorite in the book because it was the most surprising. The lesson he drew from it is that you may not be able to be the next Steve Jobs, but you could be the next Julia Child. “If life bounces the right way, I could imagine that happening to me, you know?”

The professor explained that Child’s example shows disruption “isn’t about being a superhero,” but more about ordinary people following certain behaviors and showing curiosity.

“It’s a reminder that there is no straight line to success,” he said. She started working on her masterpiece, Mastering the Art of French Cooking, roughly 10 years—and two publisher changes—before succeeding with it. She also failed her first exam at Paris’ Cordon Bleu, persevering to become the woman who brought French cuisine to mainstream America. “It’s classic hero journey sort of stuff,” he said.

Consider the first French meal that Child cooked for her husband, Anthony said: brains, simmered in red wine. “Everybody agreed it was a disaster.” But again, he said, the hard work was the point.
