Your AI Security Audit Has a Blind Spot — It Can't Check Dependencies
When you ask an AI to audit your code for security issues, it reviews your code patterns. But it has no idea if the 47 packages it installed have known CVEs.
There’s a piece of advice that keeps circulating in developer communities: “When you’re done building, ask your AI to do a full security audit.” It sounds reasonable. And it partly works.
The AI will find injection vulnerabilities, authentication bypasses, XSS, insecure deserialization, hardcoded secrets. It’s good at pattern matching against your source code. That’s Layer 1 of application security, and modern AI assistants handle it well.
But there’s a blind spot nobody talks about.
The AI Doesn’t Know What It Installed
When your AI assistant writes code, it pulls in packages. Express, lodash, jsonwebtoken, bcrypt, axios. Dozens of direct dependencies, hundreds of transitive ones. A typical Node.js project has 300 to 900 packages in its dependency tree.
The AI chose these packages based on training data. Training data that’s months old. It doesn’t check whether express@4.17.1 has a path traversal CVE published last week. It doesn’t know if that jsonwebtoken version has a bypass. And it has absolutely no access to vulnerability databases while it’s coding.
When you ask it to “do a security audit,” it reviews the code you wrote (or rather, the code it wrote for you). It doesn’t review the code in your node_modules/ folder. That’s thousands of files from packages maintained by strangers.
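To make the gap concrete, here is a minimal sketch of what a dependency check actually involves: reading the lock file and looking each pinned version up in a vulnerability database. This example parses an npm `package-lock.json` (lockfileVersion 2/3 format) and builds query payloads in the shape OSV.dev's batch-query API expects; a real scanner does far more (SBOM generation, version-range matching, multiple ecosystems), and the sample lock file here is invented for illustration.

```python
import json

def lockfile_to_osv_queries(lock_text: str) -> list[dict]:
    """Turn an npm package-lock.json (v2/v3) into OSV.dev batch-query payloads."""
    lock = json.loads(lock_text)
    queries = []
    for path, info in lock.get("packages", {}).items():
        if not path:  # the "" key is the root project itself, not a dependency
            continue
        name = path.split("node_modules/")[-1]  # handles nested transitive deps
        version = info.get("version")
        if version:
            queries.append({
                "package": {"name": name, "ecosystem": "npm"},
                "version": version,
            })
    return queries

# A tiny, made-up lock file: one direct and one transitive dependency
sample = json.dumps({
    "lockfileVersion": 3,
    "packages": {
        "": {"name": "my-app"},
        "node_modules/lodash": {"version": "4.17.20"},
        "node_modules/express/node_modules/cookie": {"version": "0.4.0"},
    },
})
print(lockfile_to_osv_queries(sample))
```

The resulting payloads would be POSTed to `https://api.osv.dev/v1/querybatch` to get back CVE matches. The point: none of this is pattern matching on source code, which is why a code-review pass never surfaces it.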
Two Different Problems, Two Different Tools
Code review and dependency scanning are fundamentally different:
| | Code Review | Dependency Scanning |
|---|---|---|
| What it checks | Your source code patterns | Third-party packages against CVE databases |
| Data source | The code itself | NVD, CISA KEV, EUVD, vendor advisories |
| AI good at it? | Yes | No (no database access during coding) |
| Example finding | SQL injection in your query builder | CVE-2021-23337 in lodash 4.17.20 |
Your AI assistant handles the left column. Nobody handles the right column unless you explicitly add a tool for it.
What Actually Works
You need a scanner that matches your lock file against real vulnerability databases. Not AI interpretation, not pattern matching — actual CVE lookups against package-lock.json, Cargo.lock, go.sum, or whatever your ecosystem uses.
```shell
npx @ottersight/cli scan .
```
This runs Syft (SBOM generation) and Grype (CVE matching) locally, then enriches the results with:
- CISA KEV — is this vulnerability actively exploited in the wild?
- EUVD — what does the EU Vulnerability Database say? (relevant for CRA/NIS2)
- EPSS — what’s the probability this gets exploited?
The output is a severity-sorted table with actual CVE IDs, not AI opinions.
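The value of that enrichment is prioritization: a medium-severity CVE that is in CISA's KEV catalog usually matters more than a critical one nobody exploits. Here is a hedged sketch of the idea, with fictitious findings and made-up scores (the `CVE-2099-12345` ID is invented; real data would come from the CISA KEV JSON feed and FIRST's EPSS API):

```python
def prioritize(findings, kev_ids, epss_scores):
    """Sort findings: actively exploited (KEV) first, then by EPSS probability."""
    enriched = [
        {**f, "in_kev": f["cve"] in kev_ids, "epss": epss_scores.get(f["cve"], 0.0)}
        for f in findings
    ]
    # False sorts before True, so negate in_kev; higher EPSS first within each group
    return sorted(enriched, key=lambda f: (not f["in_kev"], -f["epss"]))

# Illustrative data only -- not real scan output
findings = [
    {"cve": "CVE-2021-23337", "package": "lodash@4.17.20", "severity": "high"},
    {"cve": "CVE-2099-12345", "package": "examplepkg@1.0.0", "severity": "medium"},
]
kev_ids = {"CVE-2099-12345"}                                   # from the KEV catalog
epss_scores = {"CVE-2021-23337": 0.02, "CVE-2099-12345": 0.87} # from the EPSS API

for f in prioritize(findings, kev_ids, epss_scores):
    print(f["cve"], "KEV" if f["in_kev"] else "-", f["epss"])
```

Note how the medium-severity KEV entry sorts above the high-severity one: exploitation evidence, not severity labels, drives the ordering.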
The Workflow
Here’s what a complete security check looks like for an AI-assisted project:
1. Build your feature with your AI assistant
2. Ask the AI to review the code for security patterns (injection, auth, XSS)
3. Run a dependency scan to check what the AI installed
```shell
# Step 3: the part most people skip
npx @ottersight/cli scan .
```
Step 2 and Step 3 are complementary. Skipping either one leaves a gap. Most people skip Step 3 because they don’t know it’s a separate problem.
If You Use Claude Code
There’s an MCP server that gives Claude direct access to vulnerability databases:
```shell
npx @ottersight/mcp
```
Instead of guessing based on training data, Claude can now query real CVE databases and return structured vulnerability data it can actually reason about.
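Wiring an MCP server into Claude Code typically means adding an entry to a `.mcp.json` file in the project root. The shape below follows the standard Claude Code MCP config format; the server name `"ottersight"` is an arbitrary label, and the project's README is the authority on the exact setup.

```json
{
  "mcpServers": {
    "ottersight": {
      "command": "npx",
      "args": ["@ottersight/mcp"]
    }
  }
}
```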
Both tools are MIT licensed and run entirely locally: github.com/Ottersight/ottersight-cli
The AI writes the code. Something else needs to check what the AI installed. That’s not a criticism of AI coding — it’s just how the security model works. Code review and dependency scanning are different layers. Use both.