Lead Story | Advisory | AppSec · DevSecOps · Team Leadership |
|
AI-Assisted Development Is the New Normal. Here Is How to Secure It.
Most developers now use AI to write or augment code — and for good reason. But the tooling that makes teams faster also introduces security blind spots that traditional workflows were not built for. Here is what the data says, and how to build guardrails without slowing your team down.
AI-assisted development — sometimes called “vibe coding” — is how most modern teams build software. And the tools have evolved well beyond autocomplete. Today’s agentic coding environments — Cursor, Windsurf, GitHub Copilot agent mode, Claude Code, and others — can scaffold entire features, make multi-file changes, execute terminal commands, and iterate on their own output with minimal human intervention. For experienced engineers and junior developers alike, these tools are a genuine productivity multiplier. The security challenge is that the scope of what AI touches in a codebase has grown faster than the controls around it.
Veracode tested over 100 large language models across Java, Python, C#, and JavaScript and found that 45% of code samples introduced OWASP Top 10 vulnerabilities. Java was the highest at 72%. Cross-site scripting defences failed in 86% of relevant samples. A separate study of 576,000 code samples from 16 LLMs found that 19.7% of package dependencies were hallucinated — the model suggested libraries that do not exist. These are not edge cases; they are patterns that show up consistently across models and languages.
None of this means teams should stop using AI tools. It means the security processes around them need to catch up. Most organisations have mature workflows for reviewing human-written code and vetting open-source libraries, but AI-generated output often bypasses both — not out of negligence, but because the tooling and policies have not yet adapted to the speed and autonomy of modern agentic coding environments. When an AI agent can create files, install packages, and run commands in a single workflow, the blast radius of a single unchecked session is larger than a single unchecked pull request ever was.
Six Guardrails for Securing AI-Assisted Development
1. Treat AI-generated code exactly like third-party code. Every function, every dependency, every snippet gets the same review rigour as a pull request from an external contributor. This is especially important with agentic tools that make multi-file changes autonomously — review the full diff, not just the file you asked it to edit.
2. Audit every dependency before it runs. Before npm install or pip install, verify the package exists, has a credible maintainer, and is not a hallucinated name. Several vendors now offer IDE-level dependency auditing that can catch phantom packages before first run.
3. Embed SAST and SCA in CI/CD — not as an afterthought. Static application security testing and software composition analysis must run on every commit, not quarterly. Pre-commit hooks can block hardcoded secrets and dangerous patterns before they reach the repository.
4. Write a security-first AI acceptable use policy. Define which AI coding tools are approved, what data can be sent to them, and what types of code (authentication, cryptography, input handling) require mandatory human review regardless of origin. If your team does not have one yet, it is worth prioritising.
5. Prompt for security, not just functionality. Instruct the model to include input validation, use parameterised queries, avoid hardcoded credentials, and explain its security reasoning. Research shows that security-focused prompting significantly reduces insecure output.
6. Measure your AI security debt. Track the percentage of vulnerabilities originating from AI-generated code separately. This data helps you calibrate how much review and testing your AI workflows actually need. Veracode’s 2026 report found high-risk vulnerabilities up 36% year-over-year, making visibility into the source of your findings more important than ever.
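Guardrail 2's pre-install audit can be approximated with a small script. This is a minimal sketch, assuming a team-maintained allowlist of already-vetted package names; the `flag_unvetted` helper and the allowlist file are our own illustration, not any vendor's tooling:

```python
import re

def normalize(name: str) -> str:
    # PEP 503 normalisation: runs of -, _, . collapse to a single hyphen
    return re.sub(r"[-_.]+", "-", name).lower()

def flag_unvetted(requirements: list[str], vetted: set[str]) -> list[str]:
    """Return requirement names that are not on the team's vetted allowlist.

    `vetted` is a hypothetical team-maintained set of packages already
    checked for existence and maintainer credibility. Anything flagged
    here gets a manual registry lookup before it is ever installed.
    """
    vetted_norm = {normalize(p) for p in vetted}
    unknown = []
    for line in requirements:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        # strip version specifiers and extras, e.g. "requests[socks]>=2.31"
        name = re.split(r"[\[<>=!~; ]", line, maxsplit=1)[0]
        if name and normalize(name) not in vetted_norm:
            unknown.append(name)
    return unknown
```

An allowlist is deliberately strict: a hallucinated name like `react-codeshift` fails closed rather than slipping through because it happens to exist on the registry under an attacker's control.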
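Guardrail 3's pre-commit secret blocking can likewise be sketched in a few lines. The patterns below are a deliberately small, illustrative subset; production scanners such as gitleaks and TruffleHog ship far larger and better-tested rule sets:

```python
import re

# Illustrative subset of hardcoded-secret patterns to check before commit.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID format
    re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{8,}"),
    re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
]

def find_secrets(text: str) -> list[str]:
    """Return every substring in `text` matching a suspicious pattern."""
    hits = []
    for pat in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pat.finditer(text))
    return hits
```

Wired into a pre-commit hook that scans staged files and exits non-zero on any hit, this blocks the placeholder credentials LLMs routinely generate before they ever reach the repository.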
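Guardrail 5 amounts to wrapping every code-generation request in an explicit security preamble rather than asking for functionality alone. A minimal sketch — the preamble wording and the `secure_prompt` helper are our own illustration, not any tool's API:

```python
# Security requirements prepended to every code-generation prompt.
SECURITY_PREAMBLE = """\
When writing this code:
- Validate and sanitise all external input.
- Use parameterised queries for any database access.
- Never hardcode credentials; read secrets from the environment.
- Briefly explain the security reasoning behind each choice.
"""

def secure_prompt(task: str) -> str:
    """Combine the standing security preamble with the functional task."""
    return f"{SECURITY_PREAMBLE}\nTask: {task}"
```

Keeping the preamble in one shared constant, rather than relying on each developer to remember it, makes the prompting discipline enforceable and auditable.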
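Guardrail 6's metric is simple to compute once findings are tagged by origin. A sketch assuming a hypothetical `origin` field on each finding, e.g. derived from commit metadata or PR labels:

```python
from collections import Counter

def ai_vuln_share(findings: list[dict]) -> float:
    """Percentage of vulnerability findings whose code origin is AI.

    Each finding is assumed to carry an `origin` key ("ai" or "human");
    the tagging scheme is hypothetical and would come from your own
    scanner metadata or pull-request labels.
    """
    if not findings:
        return 0.0
    counts = Counter(f["origin"] for f in findings)
    return 100.0 * counts["ai"] / len(findings)
```

Tracked per sprint, this single number tells you whether your AI workflows need more review rigour than your human-written code, or less.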
Tooling Landscape
HARDENED does not endorse or recommend specific vendors. The following is a non-exhaustive overview of tools that currently operate in this space, listed for awareness only. Evaluate each against your own environment, stack, and threat model.
|
Dependency & Supply Chain Auditing
Tools that scan dependencies at the IDE or CI level for known vulnerabilities, phantom packages, and malicious code include Socket (real-time supply chain detection), Snyk Open Source (SCA with licence compliance), Endor Labs (reachability analysis), and Mend (formerly WhiteSource, open-source risk management).
Static Analysis & Code Scanning (SAST)
Static application security testing tools that can be embedded in CI/CD pipelines to catch vulnerabilities before merge include Semgrep (open-source, rule-based), SonarQube (code quality and security), Checkmarx (enterprise SAST/SCA), and Veracode (cloud-based scanning). GitHub’s built-in CodeQL also provides free SAST for public repositories.
Secret Detection & Pre-Commit Guards
Pre-commit hooks and scanners that detect hardcoded credentials, API keys, and tokens before they reach the repository include GitGuardian (real-time secret detection), TruffleHog (open-source credential scanner), and gitleaks (lightweight, open-source). These are particularly critical in vibe coding workflows where LLMs frequently generate placeholder credentials that developers forget to remove.
AI-Specific Application Security
A newer category of tools is emerging specifically to address AI-generated code risk. Legit Security and Apiiro offer SDLC governance that can flag AI-generated pull requests for additional review. Snyk Code has added AI-aware rules to its engine. Invicti (formerly Netsparker) published a vibe coding security checklist for runtime validation of AI-generated web applications. This category is evolving rapidly — expect consolidation and new entrants throughout 2026.
|
→ Key Takeaway The goal is not to slow teams down — it is to make AI-assisted development as secure as it is fast. The organisations getting this right are not avoiding AI coding tools; they are wrapping them in the same review, testing, and governance processes they already apply to human-written code and third-party libraries. Start with a policy, enforce it with tooling, and measure the results. |
Quick Hits
| 01 |
Slopsquatting: Attackers Are Weaponising AI’s Phantom Packages
When an LLM hallucinates a package name — say, react-codeshift instead of the real jscodeshift — attackers can register that phantom name on npm or PyPI and plant malicious code inside it. Researchers analysed 576,000 AI-generated code samples and found nearly 20% recommended non-existent packages, with 43% of those hallucinations appearing consistently across repeated prompts. The term “slopsquatting” was coined by Python Software Foundation developer-in-residence Seth Larson. Socket, Snyk, and Mend have all published tooling guidance for detecting these phantom dependencies before they enter your supply chain. Socket →
| High | AppSec · Supply Chain |
|
| 02 |
Veracode 2026 Report: High-Risk Vulnerabilities Surge 36% as Security Debt Deepens
Veracode’s 2026 State of Software Security report paints a bleak picture: high-risk vulnerabilities — flaws that are both severe and likely to be exploited — rose from 8.3% to 11.3% of all findings, a 36% year-over-year increase. 82% of organisations now carry security debt, and 60% carry critical debt. The report ties the acceleration directly to AI-driven development speed outpacing security processes. Organisations using AI-generated code for automation reported 15–20% more vulnerabilities introduced into production. The takeaway: speed without guardrails is not velocity — it is accumulating debt that compounds. Veracode →
| High | Enterprise · DevSecOps |
|
CVE Watch
|
Patch of the Day
Chrome Gemini AI Panel — Privilege Escalation via Malicious Extensions
A vulnerability in Chrome’s built-in Gemini AI side panel allowed malicious browser extensions with only basic permissions to escalate privileges and access the victim’s camera and microphone, take screenshots of any website, and read local files. Palo Alto Networks Unit 42 researcher Gal Weizman discovered that when gemini.google.com loaded inside the AI side panel rather than a standard tab, the panel context inherited browser-level privileges — and extensions could inject JavaScript into it. Google patched the flaw in Chrome 143.0.7499.192. This is a concrete example of why embedding AI assistants into browsers creates new high-privilege attack surfaces. If your organisation has not updated Chrome since January, do so now.
| Vendor: Google · Patched: Chrome 143.0.7499.192 · Discovered: Unit 42 · Exploited: PoC Confirmed |
|
Compliance Tip of the Day
|
(ISC)² CISSP Domain 8 — Software Development Security
AI Does Not Exempt You from Secure SDLC
Today’s lead is a Domain 8 wake-up call. CISSP Domain 8 covers security in the software development lifecycle — including code review, testing methodologies, and secure coding standards. The core principle has not changed just because the code’s author is a model instead of a human: all code must be validated against security requirements before deployment. Domain 8 also emphasises software supply chain risk, which now explicitly includes AI-hallucinated dependencies. Study point: For CISSP candidates — understand that secure SDLC controls apply regardless of whether code is human-written, AI-generated, or sourced from open-source libraries. The exam may test your ability to apply the same secure development principles to emerging development practices.
|
|
HARDENED | This newsletter does not constitute professional security advice. Security configurations and threat landscapes vary by organisation. Consult a qualified security professional for implementation guidance specific to your environment. How we work: HARDENED uses AI agents for research, drafting, and automation. Every issue is reviewed by humans before publication. If you spot an error, reply directly — we correct the record promptly. Editor’s Source Note: Vibe coding vulnerability statistics sourced from Veracode’s GenAI Code Security Report (100+ LLMs tested, 45% failure rate) and confirmed by The Register, Augment Code, and Resilient Cyber. Package hallucination data (19.7% of 576,000 samples) from a peer-reviewed study covered by BleepingComputer, Infosecurity Magazine, CSO Online, and Bank Info Security. Slopsquatting term attributed to PSF developer-in-residence Seth Larson. The “react-codeshift” example sourced from Socket.dev research (January 2026). Veracode 2026 State of Software Security (36% YoY increase in high-risk vulns, 82% security debt) sourced from Veracode, AI-Tech Park, and Petri. CVE-2026-0628 (Chrome Gemini, CVSS 8.8) discovery credited to Palo Alto Networks Unit 42 researcher Gal Weizman; details confirmed via The Hacker News, Dark Reading, GBHackers, Digital Watch Observatory, and Chrome Unboxed. Patched in Chrome 143.0.7499.192 (January 2026). Tooling landscape compiled from vendor documentation (Socket, Snyk, Endor Labs, Mend, Semgrep, SonarQube, Checkmarx, Veracode, GitGuardian, TruffleHog, gitleaks, Legit Security, Apiiro, Invicti); HARDENED has no commercial relationship with any vendor listed. |
|