
Secure software in the vibe coding era: A DevOps-led guide

In this SJA exclusive, Kevin Boyle, Co-Founder and CEO of Gearset, discusses how to keep software secure in the era of ‘vibe coding.’

The use of AI-generated code is skyrocketing – so much so that Collins Dictionary named ‘vibe coding’ the Word of the Year for 2025.

At the same time, AI-generated code has become the most significant new source of anxiety in cybersecurity. 

One in five CISOs reported a major incident caused by a vulnerability in AI-generated code, according to a report published this year.

Leaders in technology and security want to harness the efficiency and productivity gains of AI, but they also know they need to empower their teams to do so safely.

Company reputation and revenue are at stake, and they will have to answer for anything that goes wrong. What helps CISOs, CIOs and software development teams using AI-generated code is setting up guardrails that stop inevitable mistakes and hallucinations from snowballing into bigger problems further down the line.

De-risking AI code

AI-generated code can, without a doubt, accelerate software development.

Our research from earlier this year found room for improvement among UK businesses: deployments are running four months behind schedule on average, and these delays carry significant costs.

An injection of speed to the cycle of building, testing and releasing will bring productivity gains for thousands of businesses feeling this strain. 

AI-generated code is like a prolific but junior software engineer: speedy, yet still in need of plenty of direction and correction.

The sheer volume and speed of AI-generated code changes can overwhelm manual review processes and make existing checks unreliable.

An emerging pattern of ‘rubber stamping’ appears when developers suddenly increase their output and human reviewers become overwhelmed by the volume of changes they have to approve.

Improperly constructed code can slip through the cracks, creating a house of cards that will eventually fall without deterministic quality control. 

Guardrails govern rogue code 

Establishing guardrails like automated code reviews is essential to safely unlocking the potential of AI-generated code.

Good code hygiene is essential as the volume of change increases.

Businesses need a clear, repeatable set of actions for assessing the quality of code and fixing it before it reaches production. 

Businesses hoping to build, test and release AI code on a regular basis must ensure a degree of determinism in their guardrails.
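
As a minimal sketch of what a deterministic guardrail can look like in practice – the specific tools, paths and checks here are illustrative assumptions, not a prescribed stack – a pre-merge gate might run the same fixed set of checks on every change and block anything that fails:

# guardrail_gate.py -- illustrative pre-merge gate; tool choices are assumptions.
# Runs the same fixed, deterministic checks on every change and fails the
# pipeline if any check does, so AI-generated code meets the same bar as
# human-written code before it can reach production.
import subprocess
import sys

CHECKS = [
    ("lint", ["flake8", "src/"]),                 # style and obvious defects
    ("security scan", ["bandit", "-r", "src/"]),  # known insecure patterns
    ("unit tests", ["pytest", "--quiet"]),        # behaviour regressions
]

def main() -> int:
    failed = []
    for name, command in CHECKS:
        print(f"Running {name}...")
        if subprocess.run(command).returncode != 0:
            failed.append(name)
    if failed:
        print(f"Blocked: the following checks failed: {', '.join(failed)}")
        return 1
    print("All deterministic checks passed.")
    return 0

if __name__ == "__main__":
    sys.exit(main())

Because the same checks run in the same way on every change, the outcome doesn’t depend on who – or what – wrote the code.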

AI-driven checks can help with volume, but relying on them alone risks AI systems effectively marking their own homework with both sides hallucinating or missing issues.

AI models are also often trained on older data and may not reflect the latest security guidance. 

Automated code reviews can handle the majority of simple fixes, giving human developers the space to focus on reviewing the complex elements and overall security implications of AI output.
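
One way to make that division of labour concrete – purely a sketch, with the risk heuristics below invented for illustration – is to route each change either to automated handling or to a human reviewer based on simple, deterministic criteria:

# review_router.py -- hypothetical routing between automated and human review;
# the thresholds and risk flags are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Change:
    files_touched: int
    lines_changed: int
    touches_sensitive_area: bool  # e.g. auth or payments always needs a human

def route(change: Change) -> str:
    """Return 'automated' for small, low-risk changes, 'human' otherwise."""
    if change.touches_sensitive_area:
        return "human"
    if change.files_touched <= 2 and change.lines_changed <= 50:
        return "automated"
    return "human"

print(route(Change(1, 12, False)))   # automated
print(route(Change(6, 400, True)))   # human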

Relieving that pressure allows teams of any size to scale change volume without escalating risk.

However, security leaders can’t become solely reliant on automated checks; human accountability is essential to ensure these reviews are robust and layered correctly.

If bad AI-generated changes slip through, the result can be mission-critical downtime that puts leaders under the microscope for governance and audit failures.

Observability’s safety net

Even the most robust guardrails cannot catch every edge case in a complex AI-augmented ecosystem. This is why observability is critical, and it matters even more with AI-generated code.

The old approach of relying on end-user reports to surface problems is no longer feasible when AI has such a dramatic effect on change volume and the potential for errors.

Observability isn’t just for traditional software development; it’s necessary for business applications too. For instance, 49% of teams building on Salesforce lack observability tools according to our 2025 State of Salesforce DevOps Report.

Teams with observability catch bugs 50% faster and fix them 48% faster. Particularly in the Salesforce environment, compliance, security and data requirements are getting stricter, adding to the pressure to use tooling that improves observability. 

Monitoring errors, logs and performance data across all contributors gives businesses more confidence at each stage of the DevOps lifecycle.

In early development, you can detect any failing automations or unexpected behaviours in sandboxes before they reach the release stage.

During deployment, error rates and regressions can be picked up. And in production, any unusual behaviours or misconfigurations that signal a security risk are laid bare.
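
As a rough illustration of the underlying idea – the event names, fields and alert threshold here are assumptions for the sketch, not a specific product’s schema – each stage emits structured events, and anomalies are flagged automatically rather than waiting for an end user to report them:

# observe_pipeline.py -- illustrative structured event logging with a simple
# error-rate check; field names and thresholds are assumptions for the sketch.
import json
import logging
import time
from collections import deque

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("pipeline")

def emit(stage: str, event: str, **fields) -> None:
    """Write one structured event (stage, event name, extra fields) as JSON."""
    log.info(json.dumps({"ts": time.time(), "stage": stage, "event": event, **fields}))

class ErrorRateMonitor:
    """Flags a stage when the error rate over recent events crosses a threshold."""
    def __init__(self, window: int = 100, threshold: float = 0.05):
        self.events = deque(maxlen=window)
        self.threshold = threshold

    def record(self, ok: bool) -> bool:
        self.events.append(ok)
        rate = self.events.count(False) / len(self.events)
        return rate > self.threshold  # True means: raise an alert

# Example: a deployment failure is logged, then checked against the threshold.
monitor = ErrorRateMonitor()
emit("deployment", "test_failure", component="InvoiceTrigger", ok=False)
if monitor.record(ok=False):
    emit("deployment", "alert_error_rate_exceeded", threshold=monitor.threshold)

The same pattern applies in sandboxes and in production: the events differ, but the structured trail is what lets teams spot failing automations, regressions or suspicious behaviour early.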

For security leaders, observability provides complete visibility across the entire pipeline and ensures AI innovation doesn’t become a liability, protecting the business’ reputation and the team’s ability to scale with confidence. 

Scaling for AI code

The growth of AI has coincided with the most profitable era ever for cyber-criminals.

Businesses of any size that want to implement AI code generation and testing cannot afford to leave themselves vulnerable with blind spots and one-size-fits-all review processes.

Scaling up requires the tooling, culture and time to implement thorough guardrails – otherwise increased output can be the downfall of a business’ ambitions to accelerate software delivery.

Integrating deterministic capabilities is currently the only way to achieve real confidence in these systems and ensure the predictable, secure outcomes that enterprise-grade software requires.
