When OpenAI or Meta announces a major new multilingual model, something quietly damaging happens far from the press release: investors in smaller, community-focused AI organisations pick up the phone and tell the founders to shut down. Not because the startups failed — but because the giants arrived.
That dynamic, described by AI ethics researcher Timnit Gebru in a recent publication by the AI Now Institute, is one of the clearest illustrations of how Big Tech's centralised model of AI development functions less like a rising tide and more like a flood. "When OpenAI or Meta or something comes with an announcement of a big model, a number of potential investors in these smaller organizations literally told them to close up shop," Gebru said.
Gebru, who co-founded the Distributed AI Research Institute (DAIR) after her high-profile dismissal from Google, argues that the dominant paradigm — building ever-larger, general-purpose models intended to serve every use case across every language and culture — is not merely inefficient. It is, she contends, fundamentally unsafe.
"We've now been pushed to a paradigm that is ridiculous, that is never going to be safe because you don't have a well-defined task," she told the AI Now Institute. The consequences are already visible in critical infrastructure. OpenAI's Whisper transcription model, deployed by healthcare providers to generate patient notes, has been documented fabricating text in ways that could cause serious harm. In one case cited by Gebru, audio describing someone wearing a necklace was transcribed as a narrative about a terror attack and multiple fatalities.
These are not edge cases. They are symptoms of what critics call the "one giant model for everything" approach: a design philosophy that introduces failure modes that more constrained, purpose-built AI systems simply do not have.
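To see what that means in practice, consider the kind of after-the-fact guardrail that downstream deployers of a general-purpose transcriber end up writing. The sketch below is illustrative only, not drawn from Gebru's work or any documented healthcare deployment: it assumes the open-source openai-whisper Python package and a hypothetical recording, patient_visit.wav, and flags transcript segments whose decoding statistics cross the same thresholds the library itself uses as fallback triggers (low average log-probability, the high compression ratio typical of repetitive hallucinated text, or a high no-speech probability).

    import whisper

    # A minimal sketch, assuming the open-source openai-whisper package.
    # Load one of the published general-purpose model sizes.
    model = whisper.load_model("base")

    # "patient_visit.wav" is a hypothetical file name for illustration.
    result = model.transcribe("patient_visit.wav")

    for seg in result["segments"]:
        suspect = (
            seg["avg_logprob"] < -1.0          # low decoder confidence
            or seg["compression_ratio"] > 2.4  # repetitive text, a hallucination tell
            or seg["no_speech_prob"] > 0.6     # model doubts anyone was speaking
        )
        label = "REVIEW" if suspect else "ok"
        print(f'[{label:6}] {seg["start"]:7.1f}s  {seg["text"].strip()}')

The design point is the one Gebru makes: none of these heuristics would be necessary if the system had been built for the narrow, well-defined task of clinical dictation in the first place.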
The European Regulatory Moment
For European policymakers, the timing of this critique matters. The EU AI Act — the world's first comprehensive binding AI regulation — is now in its implementation phase, with high-risk system classifications, conformity assessments, and enforcement mechanisms being operationalised across member states. Yet the structural question Gebru and her allies are raising has not been squarely addressed: does European regulation merely manage the risks of centralised Big Tech AI, or does it actively create space for alternatives?
The contrast with the United States is instructive. California Governor Gavin Newsom's veto of SB 1047 — a state-level bill that would have imposed safety obligations on large AI model developers — was widely read as a victory for the industry's self-regulatory lobby. Europe has formally rejected that approach, but the risk of regulatory capture remains, particularly as American and Chinese firms invest heavily in Brussels engagement.
Meanwhile, signals from the Global South suggest the alternative governance conversation is accelerating. The India AI Impact Summit and emerging frameworks around Africa's Fourth Industrial Revolution are coalescing around concepts like community-led data sovereignty and linguistic diversity — priorities that align closely with the EU's own stated digital autonomy goals, but which receive little formal acknowledgement in current AI Act guidance.
The Resource Efficiency Blind Spot
Gebru's critique extends to the economics of AI development itself. "Industry has absolutely no incentive to look at less resource-intensive things because they view their stealing of data as a competitive advantage, and the fact that they can outdo anyone with GPU spend or how big their data centers are as a competitive advantage," she said.
This matters for Europe on two fronts. First, the environmental cost of large model training runs directly against EU climate commitments — a tension the AI Act currently handles only obliquely. Second, if regulatory frameworks are designed around the assumption that frontier AI means hyperscale AI, they will systematically disadvantage the smaller, more specialised European AI ecosystem the Commission says it wants to cultivate.
The European AI Office, which began operations in 2024, has an opportunity to address this structural bias directly: by designing conformity pathways, regulatory sandboxes, and public procurement criteria that do not implicitly reward scale above all else. Whether it will seize that opportunity remains an open question. What is no longer open to debate is that the monolithic model paradigm has critics serious enough, and a failure record substantial enough, to demand a regulatory answer.