Vibe Coding Is the New Open Source—in the Worst Way Possible


Just like you probably don’t grow and grind wheat to make flour for your bread, most software developers don’t write every line of code in a new project from scratch. Doing so would be extremely slow and could create more security issues than it solves. So developers draw on existing libraries—often open source projects—to get various basic software components in place.

While this approach is efficient, it can create exposure and limit visibility into software. Increasingly, vibe coding is being used in a similar way, letting developers quickly spin up code that they can adapt rather than write from scratch. Security researchers warn, though, that this new genre of plug-and-play code is making software-supply-chain security even more complicated, and more dangerous.

“We’re hitting the point right now where AI is about to lose its grace period on security,” says Alex Zenla, chief technology officer of the cloud security firm Edera. “And AI is its own worst enemy in terms of generating code that’s insecure. If AI is being trained in part on old, vulnerable, or low-quality software that’s available out there, then all the vulnerabilities that have existed can reoccur and be introduced again, not to mention new issues.”

In addition to sucking up potentially insecure training data, the reality of vibe coding is that it produces a rough draft of code that may not fully account for all of the specific context and considerations around a given product or service. In other words, even if a company trains a local model on a project's source code and a natural-language description of its goals, the production process still relies on human reviewers' ability to spot any and every possible flaw or incongruity in code originally generated by AI.

“Engineering groups need to think about the development lifecycle in the era of vibe coding,” says Eran Kinsbruner, a researcher at the application security firm Checkmarx. “If you ask the exact same LLM model to write for your specific source code, every single time it will have a slightly different output. One developer within the team will generate one output and the other developer is going to get a different output. So that introduces an additional complication beyond open source.”

In a Checkmarx survey of chief information security officers, application security managers, and heads of development, a third of respondents said that more than 60 percent of their organization’s code was generated by AI in 2024. But only 18 percent of respondents said that their organization has a list of approved tools for vibe coding. Checkmarx polled thousands of professionals and published the findings in August—emphasizing, too, that AI development is making it harder to trace “ownership” of code.


Ariel Shapiro
