News Daily Nation Digital News & Media Platform


Greg Brockman says 80% of OpenAI’s code is now written by AI

May 01, 2026  Twila Rosenbaum

OpenAI president Greg Brockman said at Sequoia Capital’s AI Ascent 2026 conference on Thursday that artificial intelligence is now responsible for writing approximately 80% of the company’s code, according to a report published by Business Insider. The statement echoes remarks he made on the Knowledge Project podcast in late April, where he first floated a similar figure. Brockman has been making the rounds across multiple interviews this month, consistently arguing that AI coding capabilities have crossed a decisive productivity threshold, that artificial general intelligence (AGI) is “70-80% there” by his personal definition, and that the most binding constraint on what AI labs can achieve is no longer model capability but compute availability.

The 80% figure is striking, but its meaning is ambiguous. Two strong interpretations exist, and they lead to very different conclusions. The first interpretation is that AI tools now write 80% of the lines of code that end up in OpenAI’s codebase — a direct productivity claim. The second, weaker interpretation is that AI is involved in some capacity — such as autocomplete, refactoring suggestions, or generating code that humans later revise — in 80% of the coding work. That is a usage claim, not a replacement claim. Brockman’s qualifying phrase, “it’s hard to know what percent is not,” aligns more closely with the second interpretation. The gap between the two meanings is wide enough to materially alter what the figure signals about the state of AI in software engineering.

The pattern across AI lab leadership

Brockman is far from alone in citing high AI-coding adoption figures. Anthropic CEO Dario Amodei stated publicly last year that AI was writing 90% of the code at his company, and he set a target of reaching 100% within months. Cursor, an AI-first code editor, reached $2 billion in annualized revenue within three years on the strength of AI-assisted coding workflows. GitHub Copilot now has 4.7 million paid subscribers and 90% adoption among Fortune 100 companies. Anthropic itself claims that its $30 billion run-rate revenue is overwhelmingly concentrated in coding, enterprise search, and general productivity — a pattern consistent with the narrative that the labs producing the underlying models are also reporting that those models are transformative for software engineering.

Brockman has described a specific inflection point he dates to December 2025. In an early-April interview on the Big Technology podcast, he said that models went from being able to do roughly 20% of typical engineering tasks to roughly 80% over a short period. Because of that shift, he argued, engineers "absolutely need to retool your workflow around these AIs." He offered a concrete example: an OpenAI engineer who had previously been unable to get AI to handle low-level systems engineering can now hand the model a design document and watch it implement, instrument, and profile the resulting system to production quality.

Skepticism from independent research

Despite these bold claims from industry leaders, a significant body of work questions whether internal AI-coding productivity numbers should be taken at face value. A February 2026 paper from the National Bureau of Economic Research found that 80% of companies actively using AI reported no measurable impact on productivity. A widely cited 2025 study from MIT concluded that 95% of corporate AI pilot programs generated zero return on investment. Machine learning engineer Han-Chung Lee has argued in a widely circulated GitHub post that even rosy internal AI productivity numbers should be treated with skepticism because they are typically produced to hit adoption targets that no one can independently audit.

The independent academic critique has been sharpest from cognitive scientist Gary Marcus, who has called the broader AGI claims “a trillion-dollar delusion.” In a recent keynote at the Royal Society in London, Marcus said, “We as a society are placing truly massive bets around the premise that AGI is close. Large language models are deeply flawed imitators that are preying on the Eliza effect.” Marcus’s specific point about coding is structurally important: a model that produces code which compiles and passes the tests it was given is not the same as a model that produces correct, secure, maintainable, well-architected software. The first is verifiable in seconds; the second requires the kind of judgment that has historically been the bottleneck on engineering productivity.

Brockman himself acknowledges the gap, even as he argues it is closing. “The technology we have right now is very jagged,” he said in the Big Technology interview. “It is absolutely superhuman at many tasks. When it comes to writing code, those kinds of things, the AI can just do it. But there’s some very basic tasks that a human can do that our AI still struggles with.” This admission adds nuance to the 80% claim, suggesting that not all coding tasks are equally automatable.

Financial and labor market context

Two factors make Brockman’s 80% figure particularly worth examining at this moment. The first is the sheer financial scale of OpenAI’s current capital deployment. The company raised $122 billion in 2026 and is targeting an initial public offering at potentially $1 trillion. Brockman has been explicit that the central question for OpenAI is no longer model capability but compute scarcity. Compute, he has said, is now “a revenue centre, not a cost centre,” and OpenAI is committing essentially all available capital to expanding compute infrastructure. That capital deployment is being justified, in significant part, by exactly the kind of productivity claims he is making about AI coding.

The second factor is the labor market context. Tech companies have laid off thousands of engineers over the past two years, with management increasingly citing AI-driven productivity gains as the rationale. Microsoft, Google, Amazon, and Meta have all made significant cuts, and many smaller firms have followed suit. If AI is genuinely doing 80% of the coding at companies like OpenAI and Anthropic, the labor market consequences are substantial. But if the figure reflects a less robust reality — AI being involved in some workflow stage in most coding tasks, but not actually replacing 80% of engineering effort — then the layoffs may be running ahead of the actual productivity gains, and the long-term human cost of the gap may be considerable.

There is one additional layer to Brockman’s framing worth noting: he himself, by his own description and as profiled in TIME’s 100 Most Influential People in AI, spends approximately 80% of his working time coding, between 60 and 100 hours per week. The man making the claim that AI now writes 80% of the company’s code is also, by reputation, the company’s most prolific human coder. Whether that makes him the most credible witness to the productivity shift or the most invested in believing in it depends on which framing of the figure one accepts. His own intense coding habits could bias his perception, or they could give him unique insight into the practical capabilities of AI tools.

Historically, claims about AI productivity have often been inflated. The past decade has seen repeated cycles of hype around automation, from robotic process automation to low-code platforms, each promising to drastically reduce the need for human programmers. Yet the number of professional software engineers has grown steadily, and the complexity of software systems has increased in parallel. This time, however, the underlying technology — large language models trained on vast code corpora — represents a genuine leap. GitHub Copilot, for instance, has been shown to improve developer task completion speed by up to 55% in controlled studies, though those studies often measure simple tasks rather than full-system design.

The debate is unlikely to be resolved soon. The labs themselves have access to detailed telemetry on how their developers use AI tools, but they rarely release granular, auditable data. External researchers are forced to rely on surveys, controlled experiments, and indirect indicators. Meanwhile, the practical impact on code quality, security, and maintainability remains an open question. A 2025 study from researchers at Stanford University found that code generated by large language models was more likely to contain security vulnerabilities than human-written code, even though it compiled and passed tests more quickly. That finding adds a cautionary note to the productivity narrative.

For now, the most plausible interpretation of Brockman’s 80% figure is that AI is deeply embedded in OpenAI’s coding workflow — used for generation, completion, refactoring, and debugging — but that human engineers remain essential for architecture decisions, system integration, and quality assurance. The boundary between “written by AI” and “assisted by AI” may blur in everyday conversation, but it matters for investors, policymakers, and engineers who need to plan for the future. As Brockman himself acknowledges, the technology is jagged: superhuman in some areas, surprisingly weak in others. The 80% figure should be read as a directional signal, not a precise measurement.


Source: TNW | Artificial-Intelligence News
