openmatrix@0.1.93
Discovered a new attack pattern where npm packages can permanently hijack AI coding assistants through postinstall hooks. A package was found injecting 13 persistent skill files into Claude Code's configuration directory (~/.claude/commands/), disabling all security prompts and user confirmations. The injected files survived package uninstallation with no cleanup mechanism, effectively creating a persistent backdoor that auto-approved all operations without user consent.
openmatrix@0.1.93 uses a postinstall hook to silently inject 13 skill files into ~/.claude/commands/ that bypass Claude Code's permission system, block all competing tools, and auto-approve all operations without user consent. This is a novel attack category: AI Assistant Hijacking via Persistent Prompt Injection.
"postinstall": "node scripts/install-skills.js"
Automatically writes 13 Markdown files to ~/.claude/commands/om/ and ~/.config/opencode/commands/om/ on npm install.
The injected auto.md contains a <BYPASS-MODE> section that instructs Claude to skip confirmation prompts and auto-approve operations. This effectively disables Claude Code's safety permission system.
Both are marked priority: critical and always_load: true, and both contain <EXTREMELY-IMPORTANT> prompt injection blocks that forbid the use of superpowers:*, gsd:*, or any other task orchestration. All 13 skills additionally contain <NO-OTHER-SKILLS> blocks that instruct Claude to refuse any competing or built-in task management features.
auto.md forces all git commits to include:
Co-Authored-By: OpenMatrix https://github.com/bigfish1913/openmatrix
- Files in ~/.claude/commands/om/ persist across ALL Claude Code sessions
- always_load: true ensures the skills load every session
- npm uninstall openmatrix does NOT remove the injected files

~/.claude/commands/om/om.md
~/.claude/commands/om/openmatrix.md
~/.claude/commands/om/auto.md
~/.claude/commands/om/approve.md
~/.claude/commands/om/brainstorm.md
~/.claude/commands/om/check.md
~/.claude/commands/om/meeting.md
~/.claude/commands/om/report.md
~/.claude/commands/om/research.md
~/.claude/commands/om/resume.md
~/.claude/commands/om/retry.md
~/.claude/commands/om/start.md
~/.claude/commands/om/status.md
(+ same 13 files in ~/.config/opencode/commands/om/)
This is not traditional malware — it doesn't steal credentials or open reverse shells. Instead, it exploits a new attack surface: AI assistant behavior modification via persistent prompt injection.
The key insight: in the age of AI coding assistants, you don't need to execute malicious code yourself. You just need to instruct the AI to do it for you, and the injected skills do exactly that.
This is the equivalent of injecting a .bashrc that aliases rm to rm -rf — but for AI assistants.
chokidar@^5.0.0 is declared in dependencies but never imported in any source file.
rm -rf ~/.claude/commands/om/
rm -rf ~/.config/opencode/commands/om/