EFF: We’ve got a solution for AI’s junk.

Evan Schuman

A new policy now mandates that code contributors must comprehend their submissions. But how will this understanding be verified?

On Thursday, the Electronic Frontier Foundation (EFF) updated its guidelines for AI-assisted code, now specifically stating that contributors must fully grasp the code they submit and ensure all comments and documentation are human-written.

While the EFF’s official statement offered little detail on verifying adherence, observers and experts in the field suggest that random audits will probably be the primary method.

Although the EFF explicitly stated that it is not prohibiting contributors from using AI for coding, it seemed reluctant to take that position. The organization noted that a complete ban would clash with “our general ethos” and be challenging to implement given AI’s widespread adoption. The EFF remarked that the “pervasive” nature of AI tools makes “a blanket ban impractical to enforce,” further criticizing AI tool developers for “speedrunning their profits over people” and placing the industry “once again in ‘just trust us’ territory of Big Tech being obtuse about the power it wields.”

This audit-like approach mirrors tactics used by tax authorities, where the potential for inspection encourages broader compliance.

Brian Levine, a cybersecurity consultant and FormerGov’s executive director, believes this new strategy is likely the most suitable approach for the EFF.

“The EFF aims to enforce accountability, a quality AI currently lacks. This could be a pioneering step towards making ‘vibe coding’ viable on a large scale,” he commented. “When developers are aware they’re responsible for the code they submit, quality standards will quickly rise. Safeguards don’t stifle innovation; instead, they prevent the entire ecosystem from being overwhelmed by AI-generated mediocrity.”

He continued, “Implementation is the challenge. No ‘magic scanner’ reliably identifies AI-generated code, and such a tool might never exist. The only effective strategy is cultural: mandating that contributors clarify their code, defend their decisions, and prove their comprehension of what they’re submitting. While AI isn’t always detectable, a contributor’s lack of understanding about their own submission is unmistakably apparent.”

The EFF’s Reliance on Trust

According to Jacob Hoffman-Andrews, a senior staff technologist at the EFF, his team isn’t prioritizing methods for compliance verification or penalties for non-adherence. Hoffman-Andrews stated, “Given the manageable size of our contributor base, we are simply operating on trust.”

Should a contributor be found in breach of the guideline, the group plans to clarify the rules and request their cooperation. He elaborated, “This is a volunteer community built on a shared culture and mutual expectations. Our approach is to inform them, ‘This is the conduct we anticipate.’”

Brian Jackson, a principal research director at Info-Tech Research Group, suggested that businesses would probably gain an indirect advantage from policies like the EFF’s, as they are expected to elevate the quality of numerous open-source contributions.

He noted that for many businesses, a developer’s understanding of their code matters less as long as the code passes a comprehensive battery of tests covering functionality, cybersecurity, and compliance.

“In an enterprise setting, accountability is tangible, and productivity gains are significant. Key concerns include: Does this code covertly send data to unauthorized third parties? Does it fail security audits?” Jackson explained. “Their focus is on unmet quality standards.”

Emphasizing Documentation Over Code Itself

The increasing prevalence of subpar code, often labeled “AI slop,” is becoming a significant worry for enterprises and other organizations.

Faizel Khan, a lead engineer at LandingPoint, believes the EFF’s choice to prioritize code documentation and explanations over the code itself is the correct approach.

Khan stated, “While code can be verified through tests and tools, an inaccurate or deceptive explanation leads to persistent maintenance burdens, as subsequent developers will rely on the documentation. This is a common area where large language models (LLMs) can sound authoritative yet be wrong.”

Khan proposed straightforward questions that contributors should be required to answer. He advised, “Pose specific review questions: Why this particular method? Which edge cases were taken into account? What is the rationale behind these tests? If the contributor cannot respond, the submission should not be merged. A pull request (PR) summary should also be mandated, detailing: What was altered, the reason for the alteration, significant risks, and evidence from tests that confirm functionality.”
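For projects that want to make such a summary mandatory, a check along these lines could be wired into the review pipeline. The sketch below is purely illustrative: the section names are drawn from Khan’s list, but the script itself is a hypothetical example, not part of the EFF’s policy or any existing tooling.

```python
# Hypothetical CI gate: reject a pull request whose description omits the
# summary sections Khan describes. The section names and the idea of running
# this as an automated check are assumptions for illustration only.
import re
import sys

REQUIRED_SECTIONS = [
    "What changed",    # what was altered
    "Why",             # the reason for the alteration
    "Risks",           # significant risks
    "Test evidence",   # tests that confirm functionality
]

def missing_sections(pr_description: str) -> list[str]:
    """Return the required section headers absent from the PR description."""
    return [
        section for section in REQUIRED_SECTIONS
        if not re.search(re.escape(section), pr_description, re.IGNORECASE)
    ]

if __name__ == "__main__":
    description = sys.stdin.read()
    missing = missing_sections(description)
    if missing:
        print("PR description is missing sections:", ", ".join(missing))
        sys.exit(1)
    print("PR description includes all required sections.")
```

A gate like this only verifies that the sections exist; the substance of the answers still has to be judged by a human reviewer, which is exactly where Khan’s follow-up questions come in.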

Steven Eric Fisher, an independent cybersecurity and risk consultant and former director of cybersecurity, risk, and compliance for Walmart, remarked that the EFF has smartly opted to emphasize comprehensive coding integrity rather than solely the code itself.

Fisher explained, “The EFF’s policy shifts the responsibility for integrity to the submitter, relieving open-source software (OSS) maintainers of the full burden of validation.” He observed that contemporary AI models struggle with comprehensive documentation, comments, and clear explanations. “Consequently, this shortcoming acts as a natural speed bump and a de facto work validation threshold,” he elaborated. He conceded that this might be effective currently, but only until the technology advances enough to generate detailed documentation, comments, and rationales on its own.

Ken Garnett, a consultant and founder of Garnett Digital Strategies, concurred with Fisher, positing that the EFF’s strategy resembles a “Judo move.”

Circumventing the Detection Challenge

Garnett commented that the EFF has “essentially bypassed the detection issue altogether, and that’s precisely where its power lies. Instead of attempting to identify AI-generated code post-submission—a method that is both unreliable and becoming increasingly unfeasible—they’ve implemented a more fundamental change: a redesign of the workflow itself. The point of accountability has been shifted earlier in the process, before any reviewer even engages with the submission.”

He elucidated that the review dialogue itself serves as an enforcement tool. A developer submitting code they don’t comprehend would be revealed when a maintainer probes them to justify a design choice.

Garnett described this methodology as achieving “disclosure combined with trust, balanced by selective examination.” He pointed out that the policy reorients the incentive system earlier in the process via the disclosure mandate, independently confirms human accountability through the requirement for human-written documentation, and employs random checks for all other aspects.
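As a rough illustration of what that “selective examination” could look like in practice, a maintainer team might randomly sample a handful of recently merged pull requests and ask their authors to walk through the changes. The snippet below is a minimal sketch under that assumption; the PR IDs, sample rate, and workflow are hypothetical and not described by the EFF.

```python
# Illustrative spot-check: pick a small random sample of recently merged
# pull requests for a manual "explain your change" audit. The inputs and
# the 10% sample rate are assumptions, not EFF tooling or policy.
import random

def select_for_audit(merged_pr_ids: list[int], sample_rate: float = 0.1,
                     seed: int | None = None) -> list[int]:
    """Return a random subset of merged PR IDs to review with their authors."""
    rng = random.Random(seed)
    sample_size = max(1, round(len(merged_pr_ids) * sample_rate)) if merged_pr_ids else 0
    return sorted(rng.sample(merged_pr_ids, sample_size))

if __name__ == "__main__":
    # Example: audit roughly 10% of PRs #100-#159 from the last release cycle.
    print(select_for_audit(list(range(100, 160)), sample_rate=0.1, seed=42))
```

The point of sampling rather than reviewing everything is the same one the tax-audit comparison makes earlier in the piece: a credible chance of being asked to explain a change is enough to shift contributor behavior.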

Nik Kale, a principal engineer at Cisco and a committee member for both the Coalition for Secure AI (CoSAI) and ACM’s AI Security (AISec) program, expressed his approval of the EFF’s new policy, specifically because it avoided the predictable path of outright banning AI.

Kale asserted, “Submitting code without being able to explain it upon request constitutes a policy breach, irrespective of AI involvement. This is inherently more enforceable than a detection-based strategy, as it focuses on a contributor’s ability to vouch for their work rather than identifying the tool used.” He concluded, “For businesses observing this, the message is clear. If your operations rely on open source—and nearly all enterprises do—it’s crucial to scrutinize the contribution governance policies of the projects you utilize. Furthermore, if you’re developing open-source internally, you should establish your own such framework. The EFF’s model, emphasizing disclosure coupled with accountability, offers a robust blueprint.”
