Brad Zhang / public archive

Brad Zhang

AI product notes, agent workflow writing, and open-source dossiers for founders and early technical teams.

Article / April 5, 2026

English indexed dispatch

Fireworks-skill-memory major update!!

Fireworks-skill-memory major update!! I got a lot of feedback after my last post. Today I shipped v4, fixing several real problems I found through daily use. The whole pipeline still has no resident background process and no cron job; it hangs entirely off the existing Stop/SessionStart hooks. That is the core idea of harness engineering: do the work with the lifecycle hooks you already have, without introducing new infrastructure. If you already have it installed, just ask Claude to update it: "Help me update fireworks-skill-memory and tell me what changed this time."
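For orientation, a minimal sketch of what "hanging off the Stop hook" means: the hook is just a script that receives a JSON payload from Claude Code and runs the pipeline, so no daemon or cron entry is needed. The payload field names follow Claude Code's hook input; the pipeline body is a placeholder.

```python
import json

def handle_stop(payload: dict) -> dict:
    # Claude Code delivers hook input as JSON (on stdin in the real
    # harness); session_id is one of its fields.
    session_id = payload.get("session_id", "unknown")
    # The real Stop hook would run distillation / the update check here.
    return {"session_id": session_id, "ran_pipeline": True}

# Example payload shaped like what a Stop hook receives:
print(json.dumps(handle_stop({"session_id": "abc123", "hook_event_name": "Stop"})))
```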

1. Automatic updates: no more manual updating after this release! Earlier user feedback was that people didn't even know a new version existed, and nobody is going to check GitHub by hand every time. The mechanism now: once a day, after a session ends, a silent git fetch runs in the background; when a new remote commit is found, a notification file is written; the next time you open Claude Code, a prompt appears automatically at the start of the session.
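A sketch of that check, assuming the skill lives in a git clone and using made-up paths for the clone and the notification marker (the post doesn't name the actual locations):

```python
import subprocess
from pathlib import Path

# Assumed locations; the post does not name the actual paths.
REPO = Path.home() / ".claude" / "skills" / "fireworks-skill-memory"
NOTICE = Path.home() / ".claude" / "skill-memory-update-available"

def update_available(local_head: str, remote_head: str) -> bool:
    """New commits exist upstream iff the two heads differ."""
    return local_head != remote_head

def check_for_update(repo: Path = REPO, notice: Path = NOTICE) -> bool:
    """Silent background check: fetch, compare HEAD with its upstream,
    and drop a marker file for the SessionStart hook to find."""
    def rev(ref: str) -> str:
        return subprocess.run(["git", "rev-parse", ref], cwd=repo,
                              capture_output=True, text=True,
                              check=True).stdout.strip()

    subprocess.run(["git", "fetch", "--quiet"], cwd=repo, check=True)
    if update_available(rev("HEAD"), rev("@{u}")):
        notice.write_text("new version available\n")
        return True
    return False
```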

2. Finally, a log! The Stop hook used to be a complete black box: success, failure, skip, all silent, with nowhere to look when something went wrong. Now one line is appended to ~/.claude/skill-memory.log after each session, in the format timestamp + session ID + which skills were processed + the result.
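A minimal sketch of that append-only logging; the exact field layout is a guess, since the post only names the four components:

```python
import datetime
from pathlib import Path

def log_session(session_id: str, skills, result: str, log_path: Path) -> str:
    """Append one line per session: timestamp, session id, skills
    processed, and outcome. Field layout here is illustrative."""
    ts = datetime.datetime.now().isoformat(timespec="seconds")
    line = (f"{ts} session={session_id} "
            f"skills={','.join(skills) or '-'} result={result}")
    log_path.parent.mkdir(parents=True, exist_ok=True)
    with log_path.open("a") as f:
        f.write(line + "\n")
    return line
```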

3. Error capture now covers all tools. It turned out error seeds were only captured when SKILL.md was read, yet a large share of errors happen after calls to Bash, Edit, Read and other tools, and all of those signals were being lost. A new, independent PostToolUse hook now writes error signals to session-level temp files, which are distilled in one pass when the session ends.
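A sketch of that PostToolUse capture; the payload field names (`tool_name`, `tool_response`, `error`) and the seed-file name are assumptions for illustration:

```python
import json
from pathlib import Path

def capture_tool_error(payload: dict, session_dir: Path):
    """PostToolUse sketch: when a tool call (Bash, Edit, Read, ...)
    errored, append the signal to a session-scoped file that gets
    distilled at session end. Returns the file path, or None if the
    call succeeded."""
    response = payload.get("tool_response")
    error = response.get("error") if isinstance(response, dict) else None
    if not error:
        return None
    session_dir.mkdir(parents=True, exist_ok=True)
    seed_file = session_dir / "error-seeds.jsonl"
    with seed_file.open("a") as f:
        f.write(json.dumps({"tool": payload.get("tool_name"),
                            "error": error}) + "\n")
    return seed_file
```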

4. Knowledge injection moved earlier, before execution. Injection used to happen when Claude read SKILL.md, which is already after the skill has started running. It is now hooked into PreToolUse, so historical lessons are injected into the context before the skill is invoked: Claude sees "I hit this pitfall here last time" at the planning stage, instead of remembering it only after stepping in it again.
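The core of that PreToolUse step can be sketched as a lookup that formats past lessons into injectable context; the entry shape `{"skill": ..., "lesson": ...}` is an assumption for illustration:

```python
def lessons_for_skill(skill_name: str, knowledge: list) -> str:
    """Gather stored lessons for the skill about to run and format
    them as context to inject before execution, so they are visible
    at the planning stage. Returns "" when there is nothing to say."""
    hits = [e["lesson"] for e in knowledge if e.get("skill") == skill_name]
    if not hits:
        return ""
    return ("Lessons from past runs of this skill:\n"
            + "\n".join(f"- {h}" for h in hits))
```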

5. Model deprecation no longer fails silently. claude-haiku-4-5 was hard-coded in the script; if Anthropic deprecates it, distillation stops silently, the knowledge base stops accumulating, and users never notice. A fallback chain has been added: if the primary model fails, the next one is tried automatically, and the log records which fallback was used.
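A sketch of such a fallback chain; `call_model` stands in for the real API call and is expected to raise on failure, and the function and parameter names are illustrative:

```python
def distill_with_fallback(prompt: str, models: list, call_model, log: list) -> str:
    """Try each model in order; log which one actually ran so a
    deprecated primary model is visible instead of silently fatal."""
    last_err = None
    for model in models:
        try:
            out = call_model(model, prompt)
            log.append(f"distilled with {model}")
            return out
        except Exception as e:
            last_err = e
            log.append(f"{model} failed: {e}")
    raise RuntimeError(f"all models in the fallback chain failed: {last_err}")
```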

6. Knowledge base capacity of 100 items + smart injection. The limit grew from 30/20 to 100 items, but injection doesn't stuff the whole thing into the context: entries are sorted by HIT count and only the top-20 most relevant are injected. The knowledge base can accumulate deep history without blowing up the context window. #ClaudeCode #AITools #DeveloperTools
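The selection step above is a simple sort-and-slice; the entry shape `{"lesson": ..., "hits": ...}` is illustrative:

```python
def select_for_injection(entries: list, limit: int = 20) -> list:
    """The knowledge base may hold up to 100 entries, but only the
    top `limit` by hit count are injected, keeping context use flat
    no matter how deep the accumulated history gets."""
    return sorted(entries, key=lambda e: e["hits"], reverse=True)[:limit]
```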
