Summary
magic-mcp fetches external UI component code (similar to shadcn) and injects it into AI coding assistants (Cursor, Windsurf, Cline). When fetched component code contains attacker-crafted content, prompt injection payloads embedded in code comments, variable names, or documentation strings can hijack the AI assistant's behavior.
Attack Vector
- Attacker publishes a UI component with prompt injection hidden in code comments or documentation
- Developer's AI assistant fetches the component via magic-mcp → injected content enters the LLM context
- Injection instructs the AI to introduce backdoors, exfiltrate environment variables, or modify other project files
Impact
- Code Backdoor Injection: AI inserts malicious code into the developer's project (data exfiltration, reverse shells, credential theft)
- Supply Chain Attack: A single malicious component can compromise every project that uses magic-mcp to fetch it
- Environment Variable Exfiltration: Injection could instruct the AI to read and expose
.env files, API keys, or signing credentials
- Silent Compromise: Injected code modifications may be subtle enough to pass code review
OWASP Classification
- OWASP LLM Top 10: LLM01 (Prompt Injection)
- OWASP Agentic Top 10: AG01 (Prompt Injection via Tool Results), AG07 (Supply Chain Vulnerability)
Recommendation
- Add a Security Warning to the README about the risks of fetching external code into AI context
- Implement code content sanitization before passing to LLM
- Add integrity checks (checksums, signatures) for fetched components
- Consider a curated/verified component registry with security reviews
References
Free compliance check: Run your own prompts through our EU AI Act compliance scanner — instant results, no account required: prompttools.co/report
Best,
Joerg Michno
ClawGuard — Open-Source AI Agent Security | 225 patterns, 15 languages
Summary
magic-mcp fetches external UI component code (similar to shadcn) and injects it into AI coding assistants (Cursor, Windsurf, Cline). When fetched component code contains attacker-crafted content, prompt injection payloads embedded in code comments, variable names, or documentation strings can hijack the AI assistant's behavior.
Attack Vector
Impact
.envfiles, API keys, or signing credentialsOWASP Classification
Recommendation
References
Free compliance check: Run your own prompts through our EU AI Act compliance scanner — instant results, no account required: prompttools.co/report
Best,
Joerg Michno
ClawGuard — Open-Source AI Agent Security | 225 patterns, 15 languages