Summary
On April 1, 2026 at about 9:42 PM PDT, a WSL instance running `freshell` OOM-killed `codex`, `claude`, `node`, and the `freshell` serve/start processes. The proximate trigger was `freshell`'s production serve pipeline (`npm run serve` => `npm run build && npm run start`), but the dominant memory pressure appears to have come from a very large accumulated Claude session inventory, especially under `freshell`, with evidence of duplicate session resumes and stale/orphaned pane state.
This does not look like a Codex internal crash.
Impact
- WSL killed multiple `codex` processes, so Codex appeared to "crash".
- `freshell`'s production server was killed mid-run.
- The whole distro then shut down as part of the OOM fallout.
What Happened
The kernel / systemd evidence from the crash window showed:
- `init.scope: Failed with result 'oom-kill'`
- 61.4G memory peak
- 15.9G memory swap peak
- SIGKILLs for multiple `codex`, `claude`, `node`, `npm run serve`, and `npm run start` processes
The Node workload in the crash window was specifically `freshell`'s production serve pipeline in `/home/user/code/freshell`:
- `package.json` defines:
  - `start`: `cross-env NODE_ENV=production node dist/server/index.js`
  - `serve`: `npm run build && npm run start`
- npm logs from the crash minute showed the exact chain, all in `/home/user/code/freshell`:
  - `npm run serve`
  - `npm run build`
  - `npm run typecheck:client`
  - `npm run build:client`
  - `npm run build:server`
  - `npm run start`
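For reference, the two scripts quoted above correspond to a `package.json` fragment like the following (the `build` and `typecheck` script bodies are not quoted in the logs, so they are omitted here):

```json
{
  "scripts": {
    "start": "cross-env NODE_ENV=production node dist/server/index.js",
    "serve": "npm run build && npm run start"
  }
}
```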
Strongest Root-Cause Finding
The dominant memory consumer was not `node` and not `codex`. It was the Claude session swarm.
The OOM task accounting I extracted from the kernel dump was approximately:
- `claude`: 369 tasks, about 52.9 GB RSS, about 10.9 GB swap
- `node`: 412 tasks, about 9.3 GB RSS, about 4.8 GB swap
- `codex`: about 50 MB RSS, about 198 MB swap
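Totals like these can be re-derived mechanically from the kernel's oom-kill task dump. A minimal sketch, assuming the standard task-dump column layout (`[ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name`, with `rss` and `swapents` counted in 4 KiB pages); the exact dmesg line prefix can vary, so the format matched here is an assumption:

```javascript
// Aggregate RSS and swap per command name from a kernel OOM task dump.
// Assumed column order: [ pid ] uid tgid total_vm rss pgtables_bytes
//                       swapents oom_score_adj name
const PAGE = 4096; // rss and swapents are reported in 4 KiB pages
const TASK_LINE =
  /\[\s*\d+\]\s+\d+\s+\d+\s+\d+\s+(\d+)\s+\d+\s+(\d+)\s+-?\d+\s+(\S+)/;

function aggregateOomDump(dumpText) {
  const totals = new Map(); // name -> { tasks, rssBytes, swapBytes }
  for (const line of dumpText.split("\n")) {
    const m = TASK_LINE.exec(line);
    if (!m) continue; // skip non-task lines (headers, other kernel output)
    const [, rssPages, swapPages, name] = m;
    const t = totals.get(name) ?? { tasks: 0, rssBytes: 0, swapBytes: 0 };
    t.tasks += 1;
    t.rssBytes += Number(rssPages) * PAGE;
    t.swapBytes += Number(swapPages) * PAGE;
    totals.set(name, t);
  }
  return totals;
}
```

Running this over the saved dmesg output and sorting by `rssBytes` reproduces a per-command breakdown like the one above.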
So the full failure mode looks like:
- `freshell`'s `npm run serve` / `npm run start` was active.
- A very large existing Claude process/session inventory was already resident in the distro.
- Combined memory pressure pushed WSL over the limit.
- `init.scope` OOM-killed `npm`, `node`, `claude`, and `codex` processes.
Why This Looks Like A Freshell Session-State Problem
The current `freshell` tab/session state is highly suspicious even after restart:
- The tabs registry currently contains 182 open Claude-family panes:
  - 153 `claude`
  - 29 `freshclaude`
- Of those, the highest concentration is in:
  - 80 under `/home/user/code/freshell`
  - 25 under `/home/user/code/DirectorDeck`
There is also concrete duplication of the same Claude session IDs across multiple simultaneously open panes.
Examples:
- Session `1e993f75-53a1-48e1-a6af-690a1b6233f6` in `/home/user/code/freshell` appears in 5 separate open tabs.
- Session `f097b34d-31f4-4683-b676-819a5dc8f2cd` in `/home/user/code/nanoclaw` appears in 4 separate open tabs.
- Session `05953df5-9626-4814-b929-6468f5d696d1` in `/home/user/code/temp` appears in 3 separate open panes.
- Session `cd532d90-67f5-4041-88d8-c4407e1aab62` in `/home/user/code/familiar` appears in 3 separate open tabs.
For the worst `freshell` example above, shell history also shows the same session being resumed repeatedly:
- `claude --resume 1e993f75-53a1-48e1-a6af-690a1b6233f6`, repeated multiple times in bash history
There are also 40 open `claude` / `freshclaude` panes with no `resumeSessionId` at all, including 21 under `/home/user/code/freshell`, which looks consistent with stale/orphaned pane state.
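An audit like the one above can be scripted against the tabs registry. A sketch, assuming a hypothetical pane record shape `{ paneId, provider, cwd, resumeSessionId }` (the real freshell registry schema may differ):

```javascript
// Hypothetical registry audit: find session IDs attached to multiple live
// panes, plus panes carrying no session ID at all. The record shape here
// is illustrative, not freshell's actual schema.
function auditPanes(panes) {
  const bySession = new Map(); // resumeSessionId -> [paneId, ...]
  const missing = [];          // paneIds with no resumeSessionId
  for (const pane of panes) {
    if (!pane.resumeSessionId) {
      missing.push(pane.paneId);
      continue;
    }
    const list = bySession.get(pane.resumeSessionId) ?? [];
    list.push(pane.paneId);
    bySession.set(pane.resumeSessionId, list);
  }
  const duplicates = [...bySession.entries()].filter(
    ([, paneIds]) => paneIds.length > 1,
  );
  return { duplicates, missing };
}
```

Against the current registry this should flag the four duplicated session IDs listed above and the 40 session-less panes.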
Important Negative Finding
I did not find evidence of a sudden restore/spawn burst in the exact OOM minute from the production Freshell log that owned the crash window.
That means this does not currently look like a one-minute infinite respawn loop. It looks more like long-lived accumulation / duplication of session state and/or duplicated attaches to the same session IDs, which then becomes catastrophic once a heavy Node workload is also running.
Hypothesis
The likely product bug is one or more of the following:
- Freshell allows the same Claude session to be attached to multiple live panes without strong deduping / single-owner enforcement.
- Pane/session lifecycle cleanup is incomplete, leaving stale pane records that still correspond to live backend/session resources.
- Resume semantics may create additional resident Claude process trees for a session that should have been reattached instead of duplicated.
- Over time, this builds up a large Claude resident set that is mostly invisible at the UI level until a heavy Node workload pushes the machine into OOM.
Suggested Investigation Areas
- Enforce a single live pane binding per `resumeSessionId` unless duplication is explicitly intended.
- Audit session restore / tab restore / resume codepaths for duplicate attach behavior.
- Audit cleanup of closed panes and panes missing `resumeSessionId`.
- Add guardrails / telemetry around:
  - open pane count by provider
  - duplicate `resumeSessionId` count
  - count of panes without session IDs
  - spawned process count by provider/session
  - per-session memory / descendant-process accounting if available
- Consider refusing or warning on extreme duplicate-session states before spawning more providers.
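The single-owner enforcement suggested above could look roughly like this; `SessionOwnership`, `claim`, and `release` are illustrative names, not freshell's real API:

```javascript
// Hypothetical guardrail: one live pane owns a session at a time. A second
// attach fails unless duplication is explicitly requested, instead of
// silently spawning another resident provider process tree.
class SessionOwnership {
  constructor() {
    this.owners = new Map(); // sessionId -> owning paneId
  }

  claim(sessionId, paneId, { allowDuplicate = false } = {}) {
    const owner = this.owners.get(sessionId);
    if (owner && owner !== paneId && !allowDuplicate) {
      throw new Error(
        `session ${sessionId} is already attached to pane ${owner}`,
      );
    }
    if (!owner) this.owners.set(sessionId, paneId);
    return true;
  }

  release(sessionId, paneId) {
    // Only the recorded owner may release the binding.
    if (this.owners.get(sessionId) === paneId) this.owners.delete(sessionId);
  }
}
```

Wiring a check like this into the spawn/resume path would turn the duplicate-attach states observed in the registry into loud failures (or explicit opt-ins) rather than silent memory growth.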
Why I'm Filing This Against Freshell
The immediate WSL crash is an environment-level OOM, but the strongest product-level root cause I could identify is `freshell` retaining and/or duplicating a very large Claude session inventory, with concrete duplicate `resumeSessionId` attachments in the tab registry. That appears to be the part most likely actionable inside this repo.