Making AI Code Safer

Crystal Morin

Vibe coding offers rapid, valuable development and is set to endure. This newfound flexibility, however, demands an equal understanding that robust security is essential and should never be taken for granted.


Vibe coding, the newest catalyst in technology, is genuinely impressive. These AI-powered coding methods let developers deploy applications faster and enable business users to quickly build prototypes for workflows and tools, bypassing traditional, lengthy engineering queues.

By leveraging chatbots and customized prompts, vibe coders can swiftly create applications, moving them into production within just a few days. Gartner even projects that by 2028, 40% of all new enterprise software will be developed using vibe coding tools and methodologies, diverging from conventional human-driven waterfall or agile approaches. This rapid pace is incredibly appealing, making this forecast entirely understandable.

The primary hurdle arises when individuals, particularly non-coders and even some seasoned developers, receive an application that perfectly meets their immediate needs, leading them to believe the project is complete. However, in reality, the crucial work has just commenced.

Once an application is built, the ongoing tasks begin: maintenance, including updates, patches, scaling, and defense. Crucially, before exposing actual users and their data to potential vulnerabilities, it’s imperative to comprehend the underlying process by which AI constructed your new application.

Understanding Vibe Coding Mechanics

Vibe coding platforms and software leverage large language models (LLMs) that have been trained on vast datasets of existing code and development patterns. Users provide the model with application concepts, and in response, it produces components such as code, configurations, and user interface elements. Upon attempting to execute the code or view the application’s interface, one of two scenarios will unfold: either the application will perform and appear as anticipated, or an error message will surface. This initiates an iterative process of refining or altering the code until the intended functionality is achieved.

The optimal outcome is a functional application that adheres to established software development best practices, informed by AI’s prior learning and output. Nevertheless, AI could also generate an application that appears and performs perfectly on the surface, yet possesses underlying weaknesses in terms of fragility, inefficiency, or security.

A significant concern is that this methodology frequently overlooks the accumulated security and engineering expertise vital for maintaining secure operations in a production environment, where threats, regulatory demands, client confidence, and operational scalability all simultaneously come into play. Therefore, for those without a cybersecurity background or extensive app development experience, the question is: what steps should be taken?

Functionality Doesn’t Equate to Production Readiness

Identifying a problem is the crucial first step toward resolving it. While a vibe-coded prototype can be a successful proof of concept, the risk lies in assuming it’s fit for production deployment.

Begin by cultivating awareness. Leverage established security frameworks to verify the robustness of your application. Microsoft’s STRIDE threat model offers a pragmatic method to conduct a preliminary security review of a vibe-coded application prior to its launch. STRIDE represents:

  • Spoofing
  • Tampering
  • Repudiation
  • Information disclosure
  • Denial of service
  • Elevation of privilege

Employ STRIDE as a framework to pose critical, potentially uncomfortable questions yourself, rather than waiting for others to do so. For instance:

  • Is it possible for an individual to masquerade as another user?
  • Does the application inadvertently expose sensitive data via error messages, system logs, or APIs?
  • Are appropriate rate limits and timeouts in place, or could the application be overwhelmed by a flood of requests? (See the sketch after this list.)
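
If the rate-limiting question feels abstract, the following minimal sketch shows one way to throttle requests: an in-memory sliding-window limiter written with only the Python standard library. The class and parameter names are illustrative assumptions, and a production system would more likely rely on a vetted gateway or middleware, but it makes the idea concrete.

```python
import time
from collections import defaultdict, deque

class SlidingWindowRateLimiter:
    """Illustrative in-memory limiter: at most `max_requests` per client
    within the trailing `window_seconds`. Hypothetical names for illustration."""

    def __init__(self, max_requests: int = 100, window_seconds: float = 60.0):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self._hits: dict[str, deque] = defaultdict(deque)

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        hits = self._hits[client_id]
        # Drop timestamps that have fallen outside the window.
        while hits and now - hits[0] > self.window_seconds:
            hits.popleft()
        if len(hits) >= self.max_requests:
            return False  # Reject: this client has exhausted its budget.
        hits.append(now)
        return True

# Usage sketch: check the limiter before doing any real work.
limiter = SlidingWindowRateLimiter(max_requests=5, window_seconds=1.0)
for i in range(7):
    print(i, limiter.allow("user-123"))  # The last two calls print False.
```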

To mitigate these potential vulnerabilities, ensure your newly vibe-coded application manages identities properly and maintains security as its default state. Furthermore, it’s crucial to confirm that the application’s code does not contain any hardcoded credentials accessible to unauthorized parties.
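
To make the hardcoded-credentials point concrete, here is a small, hedged sketch of the safer alternative: reading a secret from the environment at startup and refusing to run without it. The DB_PASSWORD variable name is an assumption for illustration; your application's configuration keys and secret store will differ.

```python
import os
import sys

# Avoid: DB_PASSWORD = "hunter2"  # a hardcoded secret, visible to anyone with the repo

def get_db_password() -> str:
    """Read the database password from the environment at startup."""
    password = os.environ.get("DB_PASSWORD")
    if not password:
        # Fail fast and loudly rather than running with a missing or default secret.
        sys.exit("DB_PASSWORD is not set; refusing to start.")
    return password

if __name__ == "__main__":
    db_password = get_db_password()
    print("Database credential loaded from environment (value not printed).")
```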

These practical concerns are inherent to all applications, regardless of whether they are developed by AI or human programmers. Proactive awareness of potential problems empowers you to implement tangible measures for a stronger defense. This shifts your perspective from merely “it functions” to “we comprehend its potential failure points.”

Human Oversight Remains Essential for Optimal Vibe Coding Outcomes

Whatever your personal view, vibe coding is here to stay. It helps both developers and business units build the applications they want, and that is undeniably useful. This new autonomy in app creation, however, must be paired with the understanding that security is a non-negotiable requirement, not an inherent feature.

The objective of secure vibe coding is not to stifle progress but rather to maintain rapid innovation while simultaneously minimizing the potential impact of security breaches.

Regardless of your proficiency in AI-assisted coding or programming overall, various tools and methodologies are available to guarantee the security of your vibe-coded applications. Given the accelerated development pace of these applications, security measures must similarly be rapid and straightforward to deploy. This process commences with assuming ownership of your code from its inception and committing to its ongoing maintenance. Integrate security considerations from the earliest stages—preferably during application planning and initial reviews. Proactive security integration is always superior to attempting to retrofit it later.

Once your vibe-coded application is finalized and initial security assessments are complete, you can then devise a long-term strategy. Although vibe coding excels for prototyping or preliminary development, it’s frequently not the ideal method for robust, full-scale applications designed to accommodate an expanding user base. At this juncture, implementing more sophisticated threat modeling and automated security safeguards will enhance overall protection. Consider enlisting a dedicated developer or engineer at this stage as well.

Numerous other security best practices should be adopted at this phase of development. For instance, employing software scanning tools allows you to identify all software packages and auxiliary tools your application depends on, subsequently checking this inventory for potential vulnerabilities. Beyond assessing third-party risks, integrate security checks into your CI/CD pipeline, such as preventing hardcoded secrets using pre-commit hooks. Furthermore, leveraging metadata concerning any AI-generated components within the application can reveal what parts were AI-written, which models were utilized for code generation, and which LLM tools contributed to its construction.
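
As one hedged illustration of such a check, the short script below could serve as a pre-commit hook that blocks a commit when staged files contain obvious secret-like strings. It uses only the Python standard library and git's own commands; the regex patterns are illustrative assumptions, and dedicated scanners such as gitleaks or detect-secrets are far more thorough in practice.

```python
#!/usr/bin/env python3
"""Minimal pre-commit sketch: block commits that add obvious secret-like strings."""
import re
import subprocess
import sys
from pathlib import Path

# Illustrative patterns only; real secret scanners ship far more comprehensive rules.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"(?i)(api[_-]?key|password|secret)\s*=\s*['\"][^'\"]{8,}['\"]"),
    re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
]

def staged_files() -> list:
    """List files added or modified in the index (what this commit would include)."""
    result = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in result.stdout.splitlines() if line]

def main() -> int:
    findings = []
    for path in staged_files():
        try:
            text = Path(path).read_text(encoding="utf-8", errors="ignore")
        except OSError:
            continue  # Skip unreadable files rather than failing the hook.
        for pattern in SECRET_PATTERNS:
            if pattern.search(text):
                findings.append(f"{path}: matches {pattern.pattern}")
    if findings:
        print("Possible hardcoded secrets found; commit blocked:")
        print("\n".join(findings))
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Saved as .git/hooks/pre-commit and made executable, a script like this runs automatically before each commit, and a non-zero exit status aborts the commit.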

In essence, vibe coding enables rapid development and deployment of your envisioned creations. While speed is certainly advantageous, security must remain an absolute prerequisite. Failing to implement appropriate security measures will expose you to a host of avoidable issues, excessive risks, or even more severe consequences.

New Tech Forum serves as a platform for technology executives—including external partners and other contributors—to thoroughly investigate and debate innovative enterprise technologies with unparalleled scope. Our selection process is subjective, guided by our assessment of technologies deemed significant and most relevant to InfoWorld’s readership. InfoWorld strictly prohibits the submission of marketing materials for publication and retains full editorial control over all contributed content. Please direct all correspondence to doug_dineley@foundryco.com.
