Nick Clegg is no AI doomer. But don’t call him a booster, either. The former president of global affairs at Meta says that while he’s hopeful that AI will automate away certain frictions, he’s unwilling to abide all the talk of superintelligence.
Since Clegg left Meta in January 2025, days before Donald Trump’s return to the White House, the former deputy prime minister of the UK has been relatively quiet about what he plans to do next. That is, until this week, when he announced his appointment to the board of two AI companies: British data center firm Nscale and education startup Efekta.
Efekta, a spinout of Swiss company EF Education First, sells an AI-based teaching assistant that’s meant to adapt to a student’s abilities and send progress reports to their teachers. The aim is to replicate the type of one-to-one instruction that isn’t feasible in a traditional classroom setting. The platform is currently used by around 4 million students, predominantly in Latin America and Southeast Asia, the company says. The hope is that Clegg will draw from his experience in politics and tech to counsel Efekta as it expands into new territories.
When we met at EF’s office in West London last week, Clegg said he believes the classroom will be among the first settings to be radically improved by AI. But he was less cheerful about the politics of the AI race, which he says will further concentrate power in Silicon Valley. He voiced frustration both with the “pesky Brussels bureaucrats” he claims have kneecapped European AI founders and with the Big Tech elites who have prostrated themselves at Trump’s feet.
The following conversation has been edited for length and clarity.
WIRED: Nick, on the spectrum from AI doomer to booster, where do you fall?
Nick Clegg: I somewhat disregard both kinds of hype. Saying that AI is going to destroy life as we know it by next Tuesday is as much hype as saying it’s the most powerful thing to have happened to the human being since the invention of fire. I have a real aversion to hype on both sides. It’s usually propagated by people who have something to sell or want to overstate the power of their own invention.
The reason there are these wild gyrations in the way people talk about the technology is that it’s both very versatile and very stupid. It is exceptionally powerful for certain things—like coding—and exceptionally useless for many others. I think that’s why we struggle to talk about it.
I think it has to do with the uncanny quality of some interactions with AI.
We always do this, as human beings. We call it artificial, then spend a lot of time anthropomorphizing it. That’s the way we refract experiences to make them comprehensible. But it’s a fundamental mistake.
What attracted you to the education sector? How do you expect AI to reshape the practice of teaching?
I’m completely convinced that immersive, online teaching can have very considerable benefits to pupils.
We all know that every child has different abilities, learns at different paces in different subjects, in response to different teachers. The dream of personalizing education has always eluded educators—and for very good reason. It’s very difficult to provide attention as a teacher to every pupil. I think the secret sauce that AI provides is that it really allows for adaptive, interactive personalization.
Why Efekta, specifically?
Its focus is on very big, underserved markets in Latin America and Southeast Asia, and so on. There are chronic teacher shortages across those parts of the world.
I think its product has a profound democratizing effect. In theory, a kid sitting in a provincial town in rural Brazil should be able to receive the same responsive interaction with the Efekta AI teacher as someone living in Mayfair.
Is anything lost by the introduction of AI to the classroom? Will we end up with a generation of students who use chatbots as a crutch—to draft essays, solve problems, and so on?
They’ll do that, anyway. Trying to shut out AI from schools is senseless. It’s about how you incorporate AI into education. Bad teachers will use it badly, and good teachers will use it very well—as they did whiteboards and calculators.
But we’re talking about a more fundamental change. I’m asking what it might mean for students not to develop foundational skills.
If you go back to the time when calculators were invented, [people thought that] kids are never going to be able to do mental arithmetic. But that didn’t turn out to be the case. It will have an effect, of course. But I think the net effect should be positive in terms of educational performance.
Children are probably uniquely vulnerable to the kinds of dangers associated with chatbots. How do you think about those risks?
Of course there are perils—particularly, vulnerable adults and children becoming emotionally dependent on, and invested in, a relationship with something that has an avatar, a humanoid presence, in their lives.
At a societal level, we should take a very precautionary approach. I think you should have clear age-gating on how agentic AIs are made available to young people.
Like Australia’s social media ban for under-16s?
There’s no point in having a ban if you can’t measure people’s age. That’s where policymakers chase headlines about bans and don’t quite think through the genuinely difficult stuff. Unless you want all these platforms to, what, hold everyone’s passport details? My view for a long time has been that the only way to do that is through the choke points of iOS and Android, at an [app store] level.
But in principle, I think you should take a similarly precautionary approach. The risk of becoming highly emotionally invested in, and perhaps unduly influenced by, a relationship with a kind, patient, 24-hour voice that is listening to you all the time is a very real one.
I don’t think it’s a risk at all with the kind of products that Efekta produces, though.
Even though the AI is literally assuming the role of the teacher?
Well, no—because it is not. These agentic AIs produced by companies like Efekta are not going to have some sort of surreptitious midnight relationship where they say all sorts of ghastly things to a pupil. It’s a teacher-controlled experience.
You spent almost seven years at Meta. In that time, AI became the frontier technology. I’m curious how your experience at Meta colored your perspective on the opportunities, risks, and limits of AI—and the quest for superintelligence.
If you ask three people at the same organization what superintelligence is, you’ll get three different answers. I get the impression that everyone in Silicon Valley has to say they’re within touching distance of artificial general intelligence or superintelligence, because that’s the way to attract the best data scientists. I find it difficult to grapple with a concept as hand-wavy as that.
The main thing that occurs to me is the power paradox. You have these technologies that empower us as individuals but also dramatically concentrate power in the hands of a very small number of people on the West Coast of the US and in the tech sector in China.
It was ever thus with Big Tech, because of the network effects of social media. But because of the physics of large language models [LLMs]—how unbelievably expensive it is to build the infrastructure—this bifurcation of power is just going to become more and more extreme. And if this LLM paradigm carries on, it’ll be an increasingly small number of players. There’s going to be a shakeout at some point, because you can’t keep spending 130 billion quid a year just on AI infrastructure.
The swim lane we’re in at the moment feels like such an imbalance of individual empowerment on one hand and extraordinary globs of agglomerated power on the other. It poses really big dilemmas for us all.
You tried to address the concentration of power at Meta with the Facebook Oversight Board. Do you think it has been effective at governing the company—reining in its worst impulses?
I think they’ve done a great job.
What’s the clearest example?
They’ve made a number of binding content decisions which the company has had to implement. I know very well, because the teams that used to work for me would complain about it bitterly. I think it’s very cool that a company voluntarily tied its hands like that.
Is it the Supreme Court that some commentators want, that could clip Mark Zuckerberg’s wings completely? Well, probably not. But it was never designed to be that. It was designed to be the final recourse for edge decisions about content moderation versus free expression.
Where I am disappointed is that, when I helped set it up, I had hoped you’d have other platforms buying into it by this stage.
You hoped that other platforms would replicate the model?
Yep—it hasn’t become a blueprint.
That’s partly because there’s been this massive sea change in attitude toward content moderation in the US post-Musk takeover at Twitter. Then, there’s this rather infantile tendency for the MAGA crowd to call any content moderation an act of censorship, which is a ludicrous distortion of the truth. They fetishize the word “censorship” for their own purposes.
That’s probably discouraged a lot of the other players.
Zuckerberg’s position on content moderation appears to have changed quite drastically in the period since you left. Meta has swapped independent fact-checkers for crowdsourced moderation.
It has in some respects. But in theory, there’s nothing wrong with crowdsourcing the approach to misinformation if you can make it work at scale.
I don’t think anyone should romanticize the idea of independent fact-checkers. They can only skim a tiny amount of content off the top. In America, whether you like it or not, close to half the population thought that fact-checkers were somehow ideologically biased against them. If one party or another thinks the edifice you created is diametrically opposed to their worldview, you’ve got a problem.
Do you think the change is a reflection of the climate under the Trump administration?
The climate has changed utterly in the United States. Clearly, Silicon Valley and the folk in DC have found content moderation a very convenient stick to beat pesky Brussels bureaucrats. There may be plenty of other reasons [to do that]—the AI Act, in particular, is a ludicrous act of self-harm. But every democratic jurisdiction has its right to decide on the boundary between content moderation and free expression.
The amount of self-serving political rhetoric around this is astonishing. If you speak to people in parts of America, they think the US is the only country that has ever understood the virtue of free expression. They attach a hallowed status to the First Amendment, as if ancient democracies in Europe have no idea what it is to draw the right balance.
It’s become a highly politicized thing. You saw that with the lineup of all the tech bros at the inauguration, all the endless ring-kissing at Mar-a-Lago. Clearly, they’ve decided—I guess for the protection of their businesses—to align with the current US administration. The fact Silicon Valley has done a total volte-face and is now immersed in politics is a huge change, and only time will tell whether it makes sense for them.
I’d be extremely skeptical about free-expression advocates in the US that say “only the Europeans do heavy-handed regulation.” What do you call what they’ve done to Anthropic, other than about the most heavy-handed regulatory assault on a company you could possibly imagine? Not even the most dirigiste, interventionist Brussels bureaucrat would go that far.
You really think the EU’s approach to AI amounts to self-harm?
It’s an almost classic, textbook example of how not to regulate.
The initial drafts were published two or three years before ChatGPT burst onto the scene. They had no idea what technology they were seeking to apply this legislation to. How is someone who has had any hand in developing an underlying foundation model supposed to be held responsible for any subsequent downstream and customized use? It obviously doesn’t work.
It’s a total betrayal of a whole class of really, really smart European entrepreneurs who want to build world-beating companies. It infuriates me, because the same people will pontificate about asserting European sovereignty and making sure that we’re not all dependent on American and Chinese technology. It’s about the worst way to guarantee our sovereignty.
If not through tight regulation, how would you suggest we deal with the risks of unfettered AI development?
I’ve become such a keen advocate of open source, because it’s about the best way to ensure that these technologies are properly democratized and you don’t have this oligopolistic power of a very small number of proprietary models running the show.
In the irony of ironies, China—the world’s largest autocracy—is doing the most to facilitate democratized access to these tools through open sourcing. Whether that’s by accident or design depends on who you speak to.