AI-generated pull requests, often dismissed as ‘slop,’ are overwhelming project maintainers, even as large language models can spin up quick utility functions in seconds. The open source landscape is grappling with the profound influence of artificial intelligence.
The popular image of open source as a vast collaborative effort has never quite aligned with reality. In truth, much of the software underpinning our digital lives is upheld by a small group of individuals, often uncompensated, whose diligent work forms critical infrastructure for numerous businesses, a point recently underscored by Brookings research.
This dynamic, though often strained, was sustainable when contributing to projects involved significant hurdles. Participation required enough dedication to diagnose a bug, comprehend the existing codebase, and brave potential public scrutiny. However, AI agents are dissolving these barriers (and harbor no fear of appearing inept). Even Mitchell Hashimoto, a revered figure in open source and founder of HashiCorp, is now considering closing his open source projects to external pull requests entirely. His reasoning isn’t a loss of faith in open source principles, but rather an overwhelming influx of subpar “slop PRs” generated by large language models and their automated assistants.
This phenomenon aligns with the “agent psychosis” described by Flask creator Armin Ronacher. Ronacher depicts a scenario where developers become addicted to the quick gratification of agent-driven coding, unleashing these agents to make widespread changes within their own projects and, subsequently, across others. The outcome is a severe decline in quality. These pull requests are frequently superficial, featuring code that superficially appears correct due to its statistical generation but fundamentally lacks the deep context, nuanced trade-offs, and historical understanding that a human maintainer provides.
This situation is set to worsen.
As SemiAnalysis recently highlighted, we’ve moved beyond simple conversational AI interfaces into a new era of autonomous tools embedded directly in the terminal. Tools like Claude Code can now independently analyze codebases, execute commands, and even submit pull requests. While this represents a significant productivity boost for developers working on their own projects, it creates a formidable challenge for maintainers of widely used repositories. The cost of generating a seemingly valid patch has plummeted, but the responsibility and effort required to integrate it safely remain unchanged.
This development leads me to question whether we are heading towards a future where the most reputable open source projects are precisely those that are most difficult to contribute to.
The Burden of Contribution
Let’s examine the economic forces driving this transformation. The core issue lies in the stark imbalance of review economics. A developer might spend a mere minute prompting an AI agent to correct typos or optimize loops across numerous files. Conversely, a maintainer could spend an hour meticulously reviewing those same alterations, ensuring they don’t introduce subtle bugs or deviate from the project’s long-term vision. When this scenario is multiplied by hundreds of contributors all leveraging their personal LLM assistants, the outcome isn’t an improved project, but rather a maintainer driven to abandon it.
Historically, a developer might identify and rectify a bug, then submit a pull request as an expression of gratitude—a genuine human exchange. Today, this interaction has become automated, with the sentiment of thanks supplanted by a deluge of digital clutter. A recent example emerged within the OCaml community, where maintainers rejected an AI-generated pull request comprising over 13,000 lines of code. Their objections included copyright concerns, insufficient resources for review, and the potential long-term maintenance overhead. One maintainer cautioned that such facile submissions pose a significant risk of gridlocking the entire pull request system.
Even GitHub is experiencing these effects across its vast platform. As my InfoWorld colleague Anirban Ghoshal reported, GitHub is considering stricter pull request governance and even UI-level options for deleting pull requests, as maintainers are overwhelmed by AI-generated submissions. When the world’s leading code hosting service explores implementing a “kill switch” for pull requests, it signals more than a minor annoyance; it points to a fundamental change in the methodology of open source development.
This transformation disproportionately impacts smaller open source initiatives. Nolan Lawson delved into this topic in his article, “The Fate of ‘Small’ Open Source.” Lawson, the creator of blob-util—a JavaScript library with millions of downloads for managing Blobs—notes that for a decade, it was indispensable because installing it was simpler than coding the utility functions from scratch. Yet, in the era of Claude and GPT-5, the rationale for adding such a dependency diminishes. One can simply instruct an AI to generate the desired utility function, receiving a perfectly functional snippet in seconds. Lawson argues that the era of small, low-value utility libraries is over, rendered obsolete by AI. When an LLM can produce the code on demand, the motivation to maintain a dedicated library for it evaporates.
Develop In-House Rather Than Import
Beyond utility, something more profound is being lost. These libraries served as valuable educational resources, enabling developers to learn problem-solving by examining the work of others. When we substitute these established libraries with transient, AI-generated snippets, we sacrifice the pedagogical approach that Lawson considers central to open source. We are opting for immediate answers over genuine comprehension.
This brings to mind a challenge Ronacher posed a year ago: the notion that we should simply build solutions ourselves. He posits that if incorporating a dependency entails continuous churn, the logical response is to withdraw from it. He advocates for fewer dependencies and greater self-sufficiency. In essence, use AI for assistance, but keep the resulting code within your own project boundaries. This presents a peculiar paradox: AI may reduce the demand for small libraries while simultaneously increasing the volume of low-quality contributions to the libraries that do endure.
All these developments raise a critical question: if open source is no longer primarily driven by broad community contributions, what are the implications when the very channel for contribution becomes detrimental to its maintainers?
This scenario likely propels us toward a state of divergence. On one hand, we will observe large, corporate-backed initiatives such as Linux or Kubernetes. These are the “cathedrals” or the “bourgeoisie,” increasingly safeguarded by advanced filtering mechanisms. They possess the resources to develop bespoke AI-filtering tools and the institutional leverage to disregard the influx of noise. On the other hand, we will have more “localized” open source projects—the “proletariat,” so to speak. These are efforts managed by individuals or small core teams who have simply opted to cease accepting external contributions altogether.
The irony is profound: AI was intended to enhance the accessibility of open source, and in a way, it has. However, by lowering the entry barrier, it has also diminished the inherent value. When everyone can contribute, no single contribution stands out. As code becomes a machine-generated commodity, the only truly scarce resource left is the human discernment required to reject submissions.
The Future Path of Open Source
Open source is not facing its demise, but its definition of “open” is undergoing a significant reinterpretation. We are transitioning from an era of broad openness, where “anyone can contribute,” toward one characterized by rigorous curation. In essence, the future of open source might belong to an exclusive few, rather than the masses. While the notion of a broad open source “community” was always somewhat idealized, AI has finally rendered this illusion untenable. We are reverting to a paradigm where the crucial participants are those who actively write the code, not those who merely instruct a machine to do so. The age of casual, drive-by contributions is giving way to an era of authenticated human involvement.
In this evolving landscape, the most thriving open source projects will be those that present the highest hurdles to external contributions. They will necessitate considerable human effort, deep contextual understanding, and strong interpersonal connections. They will eschew the superficial “slop loops” and the phenomenon of “agent psychosis,” opting instead for deliberate, thoughtful, and profoundly personalized development. The “bazaar” model, while enjoyable in its time, could not withstand the advent of automated systems. The future of open source will be more contained, more discerning, and significantly more selective. This might prove to be its only viable path to endurance.
In conclusion, our need isn’t for more lines of code; it’s for greater diligence. Diligence towards the individuals who nurture these communities and craft code designed to persist beyond a simple AI prompt.