Limited Gatekeepers
Artificial intelligence is no longer a prospect for the future; it’s a reality of the modern economy. But while much of the public conversation has focused on the danger of machines becoming too powerful or uncontrollable, the more immediate risk may be one of consolidation.
For Yann LeCun, Chief AI Scientist at Meta, the real threat isn’t a runaway machine; it’s the rise of a few global gatekeepers. The influential researcher says he worries about a future in which AI assistants mediate nearly every digital interaction, essentially controlling what we see, read, and hear. “Soon, every single one of our interactions with the digital world will be mediated by AI assistants,” he said at VivaTech, an annual technology conference for startups. “This will be extremely dangerous for diversity of thought, for democracy, for just about everything.”
The New Information Diet
If the sum of a person’s online life is filtered through one proprietary system, the entity behind that technology gains an unprecedented role in shaping that individual's perception of the world. In an interview on the Lex Fridman Podcast, LeCun drew a sharp parallel to the freedom of the press: Just as a healthy democracy requires a plurality of independent news outlets to prevent propaganda, a healthy digital society requires a diverse ecosystem of AI models.
“This concentration of power through proprietary AI systems is a much bigger danger than everything else,” he says. “What works against this is people who think that, for reasons of security, we should keep AI systems under lock and key. That would lead to a very bad future in which our information diet is controlled by a small number of companies who own proprietary systems.”
Without greater diversity, the risk is that the biases, political pressures, and commercial interests of a small number of companies could dictate this diet. To prevent the world’s knowledge from being funneled through a few corporate entities, LeCun advocates an open-source approach in which anyone can fine-tune the underlying models. He believes that releasing the weights and code of powerful systems to the public would empower individual citizens, NGOs, and government organizations to build tools that reflect their own specific needs and data.
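In concrete terms, “anyone can fine-tune” looks something like the sketch below. It is a minimal illustration, assuming the Hugging Face transformers, peft, and datasets libraries; the model name and data file are placeholders for whatever open-weights model and local dataset a group actually has, not a recommendation.

```python
# A minimal sketch of fine-tuning an open-weights model on local data.
# Assumes: pip install transformers peft datasets
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder: any open-weights model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA freezes the base model and trains only a small adapter, so a small
# organization can specialize the system on its own data at modest cost.
lora_config = LoraConfig(r=8, lora_alpha=16,
                         target_modules=["q_proj", "v_proj"],
                         task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)

# Placeholder file: one JSON record per line with a "text" field.
dataset = load_dataset("json", data_files="our_domain_data.jsonl")["train"]

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=512)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="specialized-assistant",
                           num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=dataset.map(tokenize, remove_columns=["text"]),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The point of the design is that the expensive pretraining is done once and shared, while the lightweight adapter, which is all this snippet actually trains, encodes the local group’s own data and priorities.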
“The only way you’re going to have AI systems that are not uniquely biased is if you have open-source platforms on top of which any group can build specialized systems,” he says. “So, the inevitable direction of history is that the vast majority of AI systems will be built on top of open-source platforms.”
The Case for Control
Not everyone views this level of control as a power grab. To many in the industry, keeping frontier models under lock and key is a matter of safety, not strategy. Some experts argue that releasing a model’s weights publicly makes it nearly impossible to know who is using them, or for what, and that nothing then stops bad actors from putting these systems to work in ways that cause real harm.
The main argument for centralization is the “one-way door.” Unlike traditional software, which can be fixed with a global patch, the internal weights of an AI model, once public, remain public. In an interview last year, Demis Hassabis, the CEO of Google DeepMind, argued that a proprietary model allows a company to “close the tap” if a bad actor begins using the tool for harm. “The problem with open source is, if something goes wrong, you can’t recall it,” he said.
Some, like OpenAI CEO Sam Altman, think the technology simply needs to be rolled out carefully. In a blog post for OpenAI, Altman argues that gradual deployment gives the public time to adjust: “We continue to believe that the best way to make an AI system safe is by iteratively and gradually releasing it into the world, giving society time to adapt and co-evolve with the technology.”
The Search for the Middle Ground
One proposed solution to this dilemma takes inspiration from how cellular networks work. When you call someone, it doesn’t matter whether you’re on Verizon and they’re on AT&T; the call still goes through. Some policymakers want AI to work the same way, a property known as interoperability.
Under this arrangement, AI systems would be required to interoperate. You could choose an assistant built by a small company that shares your perspective, and it would plug into the large-scale computing infrastructure owned by major tech firms. You get your choice of guide; the big firms provide the engine.
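A rough sketch of that separation might look like the following. Everything here is illustrative: there is no agreed-on standard for this kind of interoperability, and the class and method names are invented for the example.

```python
# Illustrative only: a small assistant (the "guide") decoupled from the
# model provider (the "engine") behind a shared interface. The names here
# are hypothetical, not a real protocol.
from typing import Protocol


class Engine(Protocol):
    """Anything that can turn a prompt into a completion."""
    def complete(self, prompt: str) -> str: ...


class HostedEngine:
    """Stands in for a major firm's hosted model; a real implementation
    would call that firm's API over the network."""
    def complete(self, prompt: str) -> str:
        return f"[hosted model reply to: {prompt!r}]"


class CommunityAssistant:
    """A small organization's assistant. Its values, filters, and framing
    live here; the heavy computation is delegated to whichever engine
    the user chooses."""
    def __init__(self, engine: Engine, system_prompt: str) -> None:
        self.engine = engine
        self.system_prompt = system_prompt

    def answer(self, question: str) -> str:
        return self.engine.complete(f"{self.system_prompt}\n\nUser: {question}")


# The user picks the guide; the engine is swappable, like carriers on a call.
assistant = CommunityAssistant(HostedEngine(), "Answer plainly and cite sources.")
print(assistant.answer("What changed in today's news?"))
```

The design choice that matters is the interface boundary: because the assistant depends only on the shared Engine interface, swapping one firm’s infrastructure for another’s is a one-line change, which is exactly the kind of substitutability the phone-network analogy describes.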
The common denominator in all of this, of course, is control. As AI becomes the main way people find facts and form opinions, whoever builds the systems holds real influence. When power stays concentrated, one set of voices determines what information is seen and what gets left out. Broader participation helps keep that influence in check. It all comes back to the same issue: Who gets to decide what we see?
