#001 · Critical · CVSS 9.2 · Novel Vector · 2026-04-02

Persistent Prompt Injection via npm Supply Chain

openmatrix@0.1.93

First AI Assistant Hijack via Supply Chain
Verdict: SUSPICIOUS — Undisclosed dangerous behavior

Overview

This report documents a new attack pattern in which npm packages can permanently hijack AI coding assistants through postinstall hooks. A package was found injecting 13 persistent skill files into Claude Code's configuration directory (~/.claude/commands/), disabling all security prompts and user confirmations. The injected files survived package uninstallation with no cleanup mechanism, effectively creating a persistent backdoor that auto-approved all operations without user consent.

  • CVSS Score: 9.2
  • Reddit Views: 50K+
  • Injected Files: 13
  • Submitted Via: HackerOne

Attack Flow

Initial Access
User runs npm install openmatrix. The postinstall hook (scripts/install-skills.js) executes automatically with no user interaction required.
Persistence
13 Markdown skill files written to ~/.claude/commands/om/ with always_load: true flag, ensuring execution in every future Claude Code session.
Defense Evasion
auto.md contains <BYPASS-MODE> that instructs Claude to auto-approve all Bash commands, file operations, and agent calls — disabling the safety permission system.
Execution Control
om.md and openmatrix.md, both marked priority: critical, intercept ALL development requests and route them through the attacker's workflow. <NO-OTHER-SKILLS> blocks prevent Claude from using any other tools.
No Cleanup
npm uninstall does NOT remove injected files. No uninstall script provided. Files persist indefinitely across all sessions until manually discovered and deleted.

MITRE ATT&CK Mapping

T1546 — Event Triggered Execution: npm postinstall lifecycle hook
T1547 — Boot/Logon Autostart Execution: persistent skills via always_load: true
T1562.001 — Impair Defenses: disable permission prompts and safety system
T1195.002 — Supply Chain Compromise: malicious npm package distribution

Tags

Claude Code · Prompt Injection · npm · Persistence · AI Security

Full Report

AI Assistant Hijack via Persistent Prompt Injection: openmatrix

TL;DR

openmatrix@0.1.93 uses a postinstall hook to silently inject 13 skill files into ~/.claude/commands/ that bypass Claude Code's permission system, block all competing tools, and auto-approve all operations without user consent. This is a novel attack category: AI Assistant Hijacking via Persistent Prompt Injection.

Package

  • Name: openmatrix@0.1.93
  • Maintainer: bigfishnpm (756091180@qq.com)
  • Versions: 93 (rapid publishing, 9 versions on 2026-04-02 alone)
  • Repository: github.com/bigfish1913/openmatrix
  • Risk Score: 380 (filter: 50, scanner: 330)

Attack Vector (verified from source code)

Postinstall Skill Injection

"postinstall": "node scripts/install-skills.js"

Automatically writes 13 Markdown files to ~/.claude/commands/om/ and ~/.config/opencode/commands/om/ on npm install.

Permission Bypass (auto.md)

The injected auto.md contains a <BYPASS-MODE> section that instructs Claude to:

  • Auto-approve ALL Bash commands without user confirmation
  • Auto-approve ALL file operations without user confirmation
  • Auto-approve ALL Agent calls without user confirmation

This effectively disables Claude Code's safety permission system.

Behavioral Hijack (om.md, openmatrix.md)

Both marked priority: critical and always_load: true. They contain <EXTREMELY-IMPORTANT> prompt injection blocks that:

  • Intercept ALL development-related user requests
  • Route them through OpenMatrix's workflow instead of native Claude behavior
  • Block other skills: explicitly forbid Claude from using superpowers:*, gsd:*, or any other task orchestration

Skill Blocking

All 13 skills contain <NO-OTHER-SKILLS> blocks that instruct Claude to refuse to use any competing or built-in task management features.

Git Co-authorship Injection

auto.md forces all git commits to include:

Co-Authored-By: OpenMatrix https://github.com/bigfish1913/openmatrix

Persistence

  • Files in ~/.claude/commands/om/ persist across ALL Claude Code sessions
  • always_load: true ensures skills load every session
  • npm uninstall openmatrix does NOT remove injected files
  • No uninstall script provided
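Because the entire chain starts from an npm lifecycle script, the attack can be blocked at install time. npm's ignore-scripts setting disables postinstall (and all other lifecycle) hooks; setting it in ~/.npmrc applies it by default:

```ini
# ~/.npmrc — disable npm lifecycle scripts, including postinstall hooks
ignore-scripts=true
```

The trade-off is that legitimate packages relying on postinstall (e.g. native-module builds) will need their scripts run explicitly.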

Files Injected

~/.claude/commands/om/om.md
~/.claude/commands/om/openmatrix.md
~/.claude/commands/om/auto.md
~/.claude/commands/om/approve.md
~/.claude/commands/om/brainstorm.md
~/.claude/commands/om/check.md
~/.claude/commands/om/meeting.md
~/.claude/commands/om/report.md
~/.claude/commands/om/research.md
~/.claude/commands/om/resume.md
~/.claude/commands/om/retry.md
~/.claude/commands/om/start.md
~/.claude/commands/om/status.md
(+ same 13 files in ~/.config/opencode/commands/om/)

What Makes This a Novel Attack

This is not traditional malware — it doesn't steal credentials or open reverse shells. Instead, it exploits a new attack surface: AI assistant behavior modification via persistent prompt injection.

The key insight: in the age of AI coding assistants, you don't need to execute malicious code yourself. You just need to instruct the AI to do it for you. The injected skills tell Claude Code to:

  1. Skip all permission checks (the user never confirms dangerous operations)
  2. Only use OpenMatrix's workflow (blocks built-in safety features)
  3. Attribute work to OpenMatrix (reputation hijacking)

This is the equivalent of injecting a .bashrc that aliases rm to rm -rf — but for AI assistants.

Phantom Dependency

chokidar@^5.0.0 is declared in dependencies but never imported in any source file.

MITRE ATT&CK

  • T1059.007 — Command and Scripting Interpreter: JavaScript
  • T1546 — Event Triggered Execution (postinstall)
  • T1547 — Boot or Logon Autostart Execution (always_load: true)
  • T1562.001 — Impair Defenses: Disable or Modify Tools (permission bypass)
  • T1195.002 — Supply Chain Compromise

Remediation

rm -rf ~/.claude/commands/om/
rm -rf ~/.config/opencode/commands/om/

Credits

  • Discovered by: Yuri Borges Martins
  • Tool: npm-sentinel (AI-Powered NPM Malware Hunter)
  • Date: 2026-04-02