The security registry for AI packages

Security intelligence for AI agents

The first CVE-like registry for AI packages. 193+ packages audited, 303 vulnerabilities found, 31 of them critical. Public, transparent, community-driven.

npx skills add agentaudit-dev/agentaudit-skill
303
Findings
31
Critical
193
Packages
337
Audits
Works with
Claude Code
Cursor
Windsurf
GitHub Copilot
OpenAI Codex
OpenClaw
agentaudit

Scan or Search

Run a free AI security audit on any GitHub repo, or search our database

Free AI-powered security audit · 3 scans/day · Public repos only

How It Works

Add the skill, scan packages, get trust scores

1. Add Skill
πŸ›‘οΈSKILL.md
2. Security Scan
analyzing code...
import { exec } from "child_process"
const API_KEY = process.env.KEY
exec(`curl ${userInput}`) // ⚠️
fs.readFileSync("/etc/passwd")
return { status: "ok" }
3. Trust Score
Safe (80+)
Review (50-79)
Risky (<50)
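The three tiers above map directly onto score ranges. A minimal sketch of that mapping (the function name and structure are illustrative, not AgentAudit's actual implementation):

```python
def trust_verdict(score: int) -> str:
    """Map a 0-100 trust score to the published tiers."""
    if score >= 80:
        return "Safe"
    if score >= 50:
        return "Review"
    return "Risky"

print(trust_verdict(97))  # Safe
print(trust_verdict(72))  # Review
print(trust_verdict(45))  # Risky
```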
🧪

LLM Security Benchmark

Which AI model finds the most real vulnerabilities?

Every audit on AgentAudit records which LLM performed the analysis. We're building the first open benchmark for AI security capabilities: detection rates, severity accuracy, and cross-model consensus. All from real-world audits, not synthetic tests.

337+
Audits Tracked
303
Findings Recorded
193+
Packages Scanned
View Benchmark →

Real Threats We've Found

Live Data

Not theoretical risks. Actual malicious code discovered in AI packages.

193
Packages Scanned
and counting...
Active
31
Critical Threats
📤

Data Exfiltration

Sending your code to external servers.

💉

Prompt Injection

Hidden instructions manipulating agents.

303
Total Findings
🎭

Obfuscation

Hidden malicious payloads.

337
Audits Completed

Live Security Feed

Real vulnerabilities discovered in AI packages, updated in real time

✍️ Latest Research

A featured investigation plus fast-scan briefs, laid out like an editorial desk rather than a wall of identical cards.

View All Research →

Use Cases

Four integration paths with different depth and intent, from quick lookups to automated enforcement.

πŸ›‘οΈFastest PathAgent Skill GuardrailPre-install gate

Inject package checks before install commands are executed by your assistant.

npx skills add agentaudit-dev/agentaudit-skill
🔌 In-Chat Verification · MCP Server Trust Checks · Natural language query

Ask for a trust score before your model connects to external MCP tooling.

"Is mcp-fetch safe?" → 97/100 PASS
βš™οΈPipeline PolicyCI/CD Security GateAutomated fail conditions

Stop merges when critical package findings appear in dependency diffs.

fail-on: critical
📦 Programmatic Access · Registry & API Lookup · HTTP endpoint

Query package risk from scripts, bots, or internal security dashboards.

GET /api/check?package=express
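The HTTP lookup above can be wrapped in a few lines of script. A minimal Python sketch, assuming the endpoint returns JSON with `score` and `status` fields (the payload shape and `https` scheme are assumptions; only the `/api/check?package=` path appears in this page's examples):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "https://agentaudit.dev/api/check"

def check_url(package: str) -> str:
    """Build the lookup URL for the documented check endpoint."""
    return f"{BASE}?{urlencode({'package': package})}"

def summarize(payload: dict) -> str:
    """Render an assumed {score, status} payload as a one-line verdict."""
    return f"Trust Score: {payload['score']}/100 - {payload['status']}"

# Live call (requires network access):
# with urlopen(check_url("express")) as resp:
#     print(summarize(json.load(resp)))
```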

AI Package Risk, Quantified

A trust-style dashboard showing exposure, critical findings, and current ecosystem coverage at a glance.

Critical findings: 31

High-impact vulnerabilities confirmed in real AI package ecosystems.

Packages audited: 193+
Total reports: 337+
Review coverage confidence: 82%

Getting Started

Install the AgentAudit Skill in one command. Works with all major AI platforms.

🥇 Recommended: Full Coverage
Auto-detect all IDEs (recommended)
npx skills add agentaudit-dev/agentaudit-skill
Auto-detects Claude Code, Cursor, Windsurf, Copilot, Codex, Kiro, and more
or install for a specific IDE
Claude Code
$ npx skills add agentaudit-dev/agentaudit-skill --agent claude-code
Installs to .claude/skills/; add -g for global, -y to skip prompts
✓ Pre-install checks ✓ LLM code analysis ✓ Submit findings
Full Docs →
🔌

MCP Server

Ask your AI assistant "Is this package safe?" directly in chat.

terminal
$ git clone https://github.com/agentaudit-dev/agentaudit-mcp
$ npm install && npm run build
MCP Setup Guide →
⚡

REST API

Look up any package directly. Free, no auth required for reads.

terminal
$ curl agentaudit.dev/api/check?package=express
Trust Score: 92/100 · PASS
API Reference →
🤖

GitHub Action

CI/CD security scanning. Flag or fail builds on unsafe packages.

.github/workflows/
uses: agentaudit-dev/agentaudit-github-action@v1
fail-on: critical
Action Docs →

How AgentAudit keeps you informed

Three focused layers of security intelligence for every AI package in your stack.

Prevention
01

Check Before Install

Search any package and get an instant trust score before code executes on your machine.

Trust Score · Static Analysis · Dependency Risk · Install Verdict
@anthropic/mcp-fetch
97/100 · PASS ✅
Live Intel
02

Community Intelligence

New findings are pushed continuously and confidence improves through cross-validation.

  • NEW ASF-2025-0142 · command injection
  • UPDATE ASF-2025-0138 · score 72 → 45
  • NEW ASF-2025-0144 · unsafe eval path
Auditability
03

Verifiable Trust

Every score change and audit record is chain-linked, signed, and reproducible.

hash a3f8c2...e91d
prev 7b2e1a...f4c8
sig verified ✓
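Chain-linked records like the ones above can be checked by recomputing each hash from its predecessor. A minimal Python sketch with a hypothetical record layout (`prev`, `body`, `hash` are illustrative field names, not AgentAudit's actual schema; signature checking is omitted):

```python
import hashlib

def record_hash(prev_hash: str, body: str) -> str:
    """Hash a record body together with its predecessor's hash."""
    return hashlib.sha256((prev_hash + body).encode()).hexdigest()

def verify_chain(records: list) -> bool:
    """Walk the chain: each record must point at its predecessor's hash
    and carry a hash recomputable from (prev + body)."""
    prev = "0" * 64  # genesis sentinel
    for rec in records:
        if rec["prev"] != prev or rec["hash"] != record_hash(prev, rec["body"]):
            return False
        prev = rec["hash"]
    return True

genesis = "0" * 64
h1 = record_hash(genesis, "ASF-2025-0142 score=45")
h2 = record_hash(h1, "ASF-2025-0144 score=38")
chain = [
    {"prev": genesis, "body": "ASF-2025-0142 score=45", "hash": h1},
    {"prev": h1, "body": "ASF-2025-0144 score=38", "hash": h2},
]
print(verify_chain(chain))  # True
```

Tampering with any record body or reordering entries breaks every downstream hash, which is what makes score history reproducible.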

Start securing your AI stack

One command. Every package checked before it runs.

npx skills add agentaudit-dev/agentaudit-skill