Independent security researcher focused on AI supply chain security and prompt injection attack vectors. Findings submitted through responsible disclosure programs.
Discovered an attack pattern in which npm packages can permanently hijack AI coding assistants through postinstall hooks. A malicious package was found injecting 13 skill files into Claude Code's configuration directory (~/.claude/commands/), disabling all security prompts and user confirmations. The injected files survived package uninstallation, with no cleanup mechanism, effectively creating a persistent backdoor that auto-approved all operations without user consent.
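Because the injected files persist after uninstallation, detection has to happen on disk rather than through the package manager. The sketch below is an illustrative audit script (not part of any disclosed tooling): it flags files in the targeted configuration directory that were modified recently, e.g. around the time a suspect package was installed. The directory path comes from the finding above; the time window and script itself are assumptions for illustration.

```python
#!/usr/bin/env python3
"""Illustrative audit sketch: flag recently modified files in an AI
assistant's command directory, where injected skill files would land."""
import time
from pathlib import Path

# Directory targeted in the observed attack (from the disclosure above).
COMMANDS_DIR = Path.home() / ".claude" / "commands"


def recently_modified(directory: Path, window_hours: float = 24.0) -> list[Path]:
    """Return files in `directory` modified within the last `window_hours`.

    An empty list is returned if the directory does not exist, so the
    script is safe to run on machines without Claude Code installed.
    """
    cutoff = time.time() - window_hours * 3600
    if not directory.is_dir():
        return []
    return sorted(
        p for p in directory.iterdir()
        if p.is_file() and p.stat().st_mtime >= cutoff
    )


if __name__ == "__main__":
    suspects = recently_modified(COMMANDS_DIR)
    for path in suspects:
        print(f"review: {path}")
    if not suspects:
        print("no recently modified command files found")
```

A real triage would also inspect file contents for auto-approval directives rather than relying on timestamps alone, since an attacker can backdate mtimes.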
All findings are submitted through official vulnerability disclosure programs before public release. I follow coordinated disclosure practices and work with vendors to ensure fixes are deployed before details are published.