fix(macOS): reopen main window after closing it with red X #575
psimaker wants to merge 1 commit into jamiepine:main
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings. Use the following commands to manage reviews:
📝 Walkthrough
Adds macOS-specific main-window lifecycle coordination: intercepts the main window's close request for a frontend cleanup handshake and restores the window on Dock reopen.
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Frontend
    participant Runtime as Tauri Runtime
    participant Main as main.rs
    rect rgba(100,150,255,0.5)
    Note over User,Main: Window Close Sequence (macOS)
    User->>Frontend: Click close
    Frontend->>Runtime: CloseRequested
    Runtime->>Main: Intercept CloseRequested (label == "main")
    Main->>Frontend: Emit "window-close-requested"
    Main->>Main: Start 5s timeout
    Frontend->>Main: Send "window-close-allowed"
    Main->>Main: Cancel timeout
    Main->>Frontend: Hide window
    Main->>Main: Set closing = true
    end
    rect rgba(150,200,100,0.5)
    Note over User,Main: Reopen Sequence (macOS)
    User->>Runtime: Click app icon (Reopen)
    Runtime->>Main: RunEvent::Reopen
    Main->>Main: Check visible windows
    alt No visible windows
        Main->>Frontend: Show main window
        Main->>Frontend: Unminimize window
        Main->>Frontend: Focus window
    end
    end
```
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
Actionable comments posted: 1
🧹 Nitpick comments (1)
tauri/src-tauri/src/main.rs (1)
71-80: Remove redundant `show`/`unminimize`/`set_focus` calls on a freshly-built window to avoid error log spam. A window built via `WebviewWindowBuilder::from_config()` automatically honors the config's `visible` and `minimized` settings. The main window config here specifies neither (defaulting to visible and not minimized), so the window is already in the desired state after build. The calls on lines 72–80 are no-ops on success, but each error is logged via `eprintln!`, creating noise if macOS quirks cause them to fail (e.g., unminimize on an already-unminimized window or focus-stealing bugs in the tao crate).
The `show`/`unminimize`/`set_focus` triplet on lines 32–40 (existing-window branch) is correct and should stay, since the window's state is unknown. But for the recreated window, drop or demote these calls to debug logging.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tauri/src-tauri/src/main.rs` around lines 71 - 80, The recreated main window built from WebviewWindowBuilder::from_config() already respects the config visibility/minimized state, so remove the redundant calls to window.show(), window.unminimize(), and window.set_focus() in the "recreated window" branch (the block that prints "macOS Reopen: main window recreated"); alternatively, if you want to preserve diagnostics, replace those eprintln! error logs with debug-level logs and only call those methods in the existing-window branch where state is unknown (keep the window.show()/window.unminimize()/window.set_focus() calls there).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tauri/src-tauri/src/main.rs`:
- Around line 1438-1449: The current swap-based boolean guard "closing" used in
the WindowEvent::CloseRequested handler is racy and lets a rapid second user
close bypass api.prevent_close(), so replace it with an atomic state machine
(e.g., AtomicU8) with three states: 0=Idle, 1=AwaitingFrontend (prevent user
re-close), 2=ProgrammaticClose (allow through). Update the handler logic around
WindowEvent::CloseRequested to: read the state atomically, if Idle transition to
AwaitingFrontend and call api.prevent_close() to start cleanup; if
AwaitingFrontend ignore subsequent user closes; if ProgrammaticClose allow the
event through and reset to Idle. Ensure the async cleanup task sets the state to
ProgrammaticClose just before calling window.close(), and resets to Idle if
cleanup aborts, so window.close() from the task is not treated as a user click;
keep references to "closing", WindowEvent::CloseRequested, api.prevent_close(),
and window.close() to locate replacements.
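The three-state guard described above can be sketched with std atomics alone. The state names follow the prompt; the struct and method names are illustrative, and no Tauri types are involved:

```rust
use std::sync::atomic::{AtomicU8, Ordering};

// State values from the review prompt; the numbers themselves are arbitrary.
const IDLE: u8 = 0;
const AWAITING_FRONTEND: u8 = 1;
const PROGRAMMATIC_CLOSE: u8 = 2;

struct CloseState(AtomicU8);

impl CloseState {
    fn new() -> Self {
        CloseState(AtomicU8::new(IDLE))
    }

    // Called from the CloseRequested handler. Returns true when the caller
    // should hold the window open (i.e. call api.prevent_close()).
    fn on_close_requested(&self) -> bool {
        match self.0.compare_exchange(
            IDLE,
            AWAITING_FRONTEND,
            Ordering::SeqCst,
            Ordering::SeqCst,
        ) {
            // First user close: prevent it and kick off frontend cleanup.
            Ok(_) => true,
            // Cleanup already in flight: swallow the repeated user close.
            Err(AWAITING_FRONTEND) => true,
            // Our own window.close(): let it through and reset to Idle.
            Err(_) => {
                self.0.store(IDLE, Ordering::SeqCst);
                false
            }
        }
    }

    // The async cleanup task calls this just before window.close(), so that
    // close is not mistaken for a user click.
    fn begin_programmatic_close(&self) {
        self.0.store(PROGRAMMATIC_CLOSE, Ordering::SeqCst);
    }

    // If cleanup aborts, reset so the next user close starts over.
    fn abort_cleanup(&self) {
        self.0.store(IDLE, Ordering::SeqCst);
    }
}
```

Unlike a swap-based boolean, the `compare_exchange` makes the "first close wins" transition atomic, so a rapid second click cannot slip past the guard.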
---
Nitpick comments:
In `@tauri/src-tauri/src/main.rs`:
- Around line 71-80: The recreated main window built from
WebviewWindowBuilder::from_config() already respects the config
visibility/minimized state, so remove the redundant calls to window.show(),
window.unminimize(), and window.set_focus() in the "recreated window" branch
(the block that prints "macOS Reopen: main window recreated"); alternatively, if
you want to preserve diagnostics, replace those eprintln! error logs with
debug-level logs and only call those methods in the existing-window branch where
state is unknown (keep the window.show()/window.unminimize()/window.set_focus()
calls there).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f2696ac6-70b3-46f5-8f76-83358f7000eb
📒 Files selected for processing (1)
tauri/src-tauri/src/main.rs
Force-pushed from 7bd3209 to e38fae8
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tauri/src-tauri/src/main.rs`:
- Around line 1477-1481: When the emit fallback fails in the close handler (the
block with app_handle.emit), the code sets close_state to
CLOSE_STATE_PROGRAMMATIC then calls window.close(). If window.close() fails and
the function returns, close_state remains PROGRAMMATIC causing subsequent user
closes to skip frontend cleanup; change the fallback path to restore close_state
to CLOSE_STATE_IDLE on failure: after attempting window.close().ok() (or if it
returns Err), set close_state.store(CLOSE_STATE_IDLE, Ordering::SeqCst) before
returning so the state mirrors the async close path; locate the logic around
app_handle.emit, close_state, CLOSE_STATE_PROGRAMMATIC and CLOSE_STATE_IDLE to
apply this fix.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1e81227c-bb86-400b-8d8e-26b3d4275ab7
📒 Files selected for processing (1)
tauri/src-tauri/src/main.rs
Force-pushed from e38fae8 to 19335ca
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tauri/src-tauri/src/main.rs`:
- Around line 60-82: The recreated-window race happens because
show_or_recreate_main_window emits the "window-close-requested" handshake before
the frontend's setupWindowCloseHandler is registered; fix by adding a
frontend-ready ack: have the frontend send a "main-window-ready" event from
setupWindowCloseHandler when it registers its listener, then in
show_or_recreate_main_window (and the on_window_event close path) wait for that
ack (with a short timeout, e.g. a few seconds) before emitting
"window-close-requested"; if the ack times out treat the window as untrusted and
make the stop/keep-running decision in Rust (or default to safe behavior),
ensuring all references to window-close-requested, setupWindowCloseHandler,
show_or_recreate_main_window, and on_window_event are updated to use the new
handshake and timeout.
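The ready-ack-with-timeout the prompt describes can be sketched with std primitives; the names `ReadyGate`, `mark_ready`, and `wait_ready` are invented for this sketch, and the real code would hook Tauri's event listener and async runtime instead of a condvar:

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::time::Duration;

// Hypothetical readiness gate for the "main-window-ready" ack.
struct ReadyGate {
    ready: Mutex<bool>,
    cv: Condvar,
}

impl ReadyGate {
    fn new() -> Self {
        ReadyGate { ready: Mutex::new(false), cv: Condvar::new() }
    }

    // Invoked when the frontend's "main-window-ready" event arrives.
    fn mark_ready(&self) {
        *self.ready.lock().unwrap() = true;
        self.cv.notify_all();
    }

    // Block up to `timeout` waiting for the ack; false means it timed out
    // and the caller should fall back to the Rust-side decision.
    fn wait_ready(&self, timeout: Duration) -> bool {
        let guard = self.ready.lock().unwrap();
        let (_guard, result) = self
            .cv
            .wait_timeout_while(guard, timeout, |ready| !*ready)
            .unwrap();
        !result.timed_out()
    }
}
```

Only after `wait_ready` returns true is it safe to emit "window-close-requested"; on timeout the backend makes the stop/keep-running decision itself.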
- Around line 1592-1595: The macOS Reopen handler currently ignores the
has_visible_windows flag and always calls show_or_recreate_main_window(app);
update the RunEvent::Reopen arm to only call show_or_recreate_main_window when
has_visible_windows is false (e.g. if !has_visible_windows {
show_or_recreate_main_window(app); }) so the handler honors the
has_visible_windows contract; adjust the match arm containing RunEvent::Reopen {
has_visible_windows, .. } accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: b094861c-a718-4f5f-8182-b7ab0e09cca4
📒 Files selected for processing (1)
tauri/src-tauri/src/main.rs
Force-pushed from 19335ca to beeee0b
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tauri/src-tauri/src/main.rs`:
- Around line 1542-1614: The close-state machine is global but the handler runs
for every window, so change the on_window_event CloseRequested handling to only
operate for the main window: at the start of the WindowEvent::CloseRequested
branch check if window.label() == MAIN_WINDOW_LABEL and return early for
non-main windows (i.e., do not read or mutate main_window_lifecycle.close_state
or frontend_ready for other windows). Ensure all places that compare_exchange,
store, call wait_for_main_window_ready, and invoke request_frontend_window_close
(references: main_window_lifecycle, CLOSE_STATE_IDLE,
CLOSE_STATE_AWAITING_FRONTEND, CLOSE_STATE_PROGRAMMATIC, MAIN_WINDOW_LABEL,
wait_for_main_window_ready, request_frontend_window_close) are only reached when
the label matches MAIN_WINDOW_LABEL.
In `@tauri/src/platform/lifecycle.ts`:
- Around line 67-77: Currently registerWindowCloseHandler swallows listen(...)
failures by setting this.windowCloseHandlerSetup = null and returning, causing
the caller to think setup succeeded; instead propagate the failure so setup
rejects. Change registerWindowCloseHandler (the async function that calls listen
and awaits WINDOW_CLOSE_REQUESTED_EVENT) to rethrow the caught error (or throw a
new Error wrapping it) rather than returning, and avoid marking
windowCloseHandlerSetup as success when listen fails; this ensures callers that
await registerWindowCloseHandler observe the failure and Rust receives a correct
readiness signal.
- Around line 91-104: The check using the renderer global
window.__voiceboxServerStartedByApp is unreliable after the "main" window is
recreated; replace that client-side ownership check with an authoritative
backend query: call the existing backend command that reads ServerState (e.g.,
an IPC/Tauri command like a Rust function that returns whether the server was
started/owned by this app) and use its boolean result instead of
window.__voiceboxServerStartedByApp before deciding to call this.stopServer();
keepServer variable use stays the same, but locate the logic around the
window-close handling and update it to await the backend ownership check
(referring to stopServer() and the ServerState-backed command name) so cleanup
runs even after renderer recreation.
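Backend-side, the suggested ownership command could look roughly like this. `ServerState`'s shape and the field name `managed_child` are assumptions, and the `#[tauri::command]` attribute is omitted so the logic stands alone:

```rust
use std::sync::Mutex;

// Hypothetical backend state: Some(pid) iff this app spawned the server.
struct ServerState {
    managed_child: Mutex<Option<u32>>,
}

// Body of the suggested is_server_managed_by_app command: authoritative
// ownership lives here, not in a renderer global that a recreated
// webview would lose.
fn is_server_managed_by_app(state: &ServerState) -> bool {
    state
        .managed_child
        .lock()
        .map(|child| child.is_some())
        .unwrap_or(false)
}
```

Because the answer is derived from backend state, it survives the renderer being destroyed and recreated, which is exactly the failure mode of the `window.__voiceboxServerStartedByApp` global.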
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ee9a0ef9-0e22-4154-a8d0-761a9cc800bc
📒 Files selected for processing (2)
tauri/src-tauri/src/main.rs, tauri/src/platform/lifecycle.ts
| .on_window_event(|window, event| { | ||
| if let WindowEvent::CloseRequested { api, .. } = event { | ||
| // If we're already in the close flow, let it proceed | ||
| if closing.load(std::sync::atomic::Ordering::SeqCst) { | ||
| return; | ||
| let main_window_lifecycle = window | ||
| .app_handle() | ||
| .state::<Arc<MainWindowLifecycleState>>() | ||
| .inner() | ||
| .clone(); | ||
|
|
||
| match main_window_lifecycle.close_state.compare_exchange( | ||
| CLOSE_STATE_IDLE, | ||
| CLOSE_STATE_AWAITING_FRONTEND, | ||
| Ordering::SeqCst, | ||
| Ordering::SeqCst, | ||
| ) { | ||
| Ok(_) => {} | ||
| Err(CLOSE_STATE_AWAITING_FRONTEND) => { | ||
| // Another user close was requested while frontend cleanup is still running. | ||
| // Keep the window alive and let the existing cleanup task finish. | ||
| api.prevent_close(); | ||
| return; | ||
| } | ||
| Err(CLOSE_STATE_PROGRAMMATIC) => { | ||
| // This close was triggered by our own window.close() call. | ||
| if window.label() == MAIN_WINDOW_LABEL { | ||
| main_window_lifecycle | ||
| .frontend_ready | ||
| .store(false, Ordering::SeqCst); | ||
| } | ||
| main_window_lifecycle | ||
| .close_state | ||
| .store(CLOSE_STATE_IDLE, Ordering::SeqCst); | ||
| return; | ||
| } | ||
| Err(_) => { | ||
| api.prevent_close(); | ||
| return; | ||
| } | ||
| } | ||
|
|
||
| #[cfg(target_os = "macos")] | ||
| if window.label() == MAIN_WINDOW_LABEL { | ||
| println!("macOS close: main window close request intercepted for frontend cleanup"); | ||
| } | ||
| closing.store(true, std::sync::atomic::Ordering::SeqCst); | ||
|
|
||
| // Prevent automatic close so frontend can clean up | ||
| api.prevent_close(); | ||
|
|
||
| // Emit event to frontend to check setting and stop server if needed | ||
| let app_handle = window.app_handle(); | ||
|
|
||
| if let Err(e) = app_handle.emit("window-close-requested", ()) { | ||
| eprintln!("Failed to emit window-close-requested event: {}", e); | ||
| window.close().ok(); | ||
| if window.label() == MAIN_WINDOW_LABEL | ||
| && !main_window_lifecycle.frontend_ready.load(Ordering::SeqCst) | ||
| { | ||
| eprintln!( | ||
| "Main window close requested before frontend close handler was ready; waiting" | ||
| ); | ||
| let window_for_close = window.clone(); | ||
| let lifecycle_for_async = main_window_lifecycle.clone(); | ||
|
|
||
| tauri::async_runtime::spawn(async move { | ||
| if wait_for_main_window_ready(lifecycle_for_async.clone()).await { | ||
| request_frontend_window_close(window_for_close, lifecycle_for_async); | ||
| } else { | ||
| eprintln!( | ||
| "Main window frontend readiness timed out; keeping window open" | ||
| ); | ||
| lifecycle_for_async | ||
| .close_state | ||
| .store(CLOSE_STATE_IDLE, Ordering::SeqCst); | ||
| } | ||
| }); | ||
| return; | ||
| } | ||
|
|
||
| // Set up listener for frontend response | ||
| let window_for_close = window.clone(); | ||
| let closing_for_timeout = closing.clone(); | ||
| let (tx, mut rx) = mpsc::unbounded_channel::<()>(); | ||
|
|
||
| let listener_id = window.listen("window-close-allowed", move |_| { | ||
| let _ = tx.send(()); | ||
| }); | ||
|
|
||
| tauri::async_runtime::spawn(async move { | ||
| tokio::select! { | ||
| _ = rx.recv() => { | ||
| window_for_close.close().ok(); | ||
| } | ||
| _ = tokio::time::sleep(tokio::time::Duration::from_secs(5)) => { | ||
| eprintln!("Window close timeout, closing anyway"); | ||
| window_for_close.close().ok(); | ||
| } | ||
| } | ||
| window_for_close.unlisten(listener_id); | ||
| closing_for_timeout.store(false, std::sync::atomic::Ordering::SeqCst); | ||
| }); | ||
| request_frontend_window_close(window.clone(), main_window_lifecycle.clone()); | ||
| } |
Restrict this close-state machine to the main window.
This handler currently runs for every window, but close_state is global. A close flow on a different window can set CLOSE_STATE_PROGRAMMATIC, and a concurrent user close on "main" can then bypass frontend cleanup.
Proposed fix
.on_window_event(|window, event| {
if let WindowEvent::CloseRequested { api, .. } = event {
+ if window.label() != MAIN_WINDOW_LABEL {
+ return;
+ }
+
let main_window_lifecycle = window
.app_handle()
.state::<Arc<MainWindowLifecycleState>>()
.inner()
.clone();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tauri/src-tauri/src/main.rs` around lines 1542 - 1614, The close-state
machine is global but the handler runs for every window, so change the
on_window_event CloseRequested handling to only operate for the main window: at
the start of the WindowEvent::CloseRequested branch check if window.label() ==
MAIN_WINDOW_LABEL and return early for non-main windows (i.e., do not read or
mutate main_window_lifecycle.close_state or frontend_ready for other windows).
Ensure all places that compare_exchange, store, call wait_for_main_window_ready,
and invoke request_frontend_window_close (references: main_window_lifecycle,
CLOSE_STATE_IDLE, CLOSE_STATE_AWAITING_FRONTEND, CLOSE_STATE_PROGRAMMATIC,
MAIN_WINDOW_LABEL, wait_for_main_window_ready, request_frontend_window_close)
are only reached when the label matches MAIN_WINDOW_LABEL.
| private async registerWindowCloseHandler(): Promise<void> { | ||
| try { | ||
| // Listen for window close request from Rust | ||
| await listen<null>('window-close-requested', async () => { | ||
| // Import store here to avoid circular dependency | ||
| const { useServerStore } = await import('@/stores/serverStore'); | ||
| const keepRunning = useServerStore.getState().keepServerRunningOnClose; | ||
|
|
||
| // Check if server was started by this app instance | ||
| // @ts-expect-error - accessing module-level variable from another module | ||
| const serverStartedByApp = window.__voiceboxServerStartedByApp ?? false; | ||
|
|
||
| console.log( | ||
| '[lifecycle] window-close-requested: keepRunning=%s, serverStartedByApp=%s', | ||
| keepRunning, | ||
| serverStartedByApp, | ||
| ); | ||
|
|
||
| if (!keepRunning && serverStartedByApp) { | ||
| // Stop server before closing (only if we started it) | ||
| try { | ||
| await this.stopServer(); | ||
| } catch (error) { | ||
| console.error('Failed to stop server on close:', error); | ||
| } | ||
| } | ||
|
|
||
| // Emit event back to Rust to allow close | ||
| await emit('window-close-allowed'); | ||
| await listen<null>(WINDOW_CLOSE_REQUESTED_EVENT, async () => { | ||
| await this.handleWindowCloseRequested(); | ||
| }); | ||
| } catch (error) { | ||
| this.windowCloseHandlerSetup = null; | ||
| console.error('Failed to setup window close handler:', error); | ||
| return; | ||
| } |
Propagate close-listener setup failure instead of resolving successfully.
If listen(...) fails here, setup still resolves and Rust never gets a reliable readiness signal. That can leave the close flow in a permanently degraded state (repeated wait/keep-open behavior).
Proposed fix
private async registerWindowCloseHandler(): Promise<void> {
try {
// Listen for window close request from Rust
await listen<null>(WINDOW_CLOSE_REQUESTED_EVENT, async () => {
await this.handleWindowCloseRequested();
});
} catch (error) {
this.windowCloseHandlerSetup = null;
console.error('Failed to setup window close handler:', error);
- return;
+ throw error;
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| private async registerWindowCloseHandler(): Promise<void> { | |
| try { | |
| // Listen for window close request from Rust | |
| await listen<null>('window-close-requested', async () => { | |
| // Import store here to avoid circular dependency | |
| const { useServerStore } = await import('@/stores/serverStore'); | |
| const keepRunning = useServerStore.getState().keepServerRunningOnClose; | |
| // Check if server was started by this app instance | |
| // @ts-expect-error - accessing module-level variable from another module | |
| const serverStartedByApp = window.__voiceboxServerStartedByApp ?? false; | |
| console.log( | |
| '[lifecycle] window-close-requested: keepRunning=%s, serverStartedByApp=%s', | |
| keepRunning, | |
| serverStartedByApp, | |
| ); | |
| if (!keepRunning && serverStartedByApp) { | |
| // Stop server before closing (only if we started it) | |
| try { | |
| await this.stopServer(); | |
| } catch (error) { | |
| console.error('Failed to stop server on close:', error); | |
| } | |
| } | |
| // Emit event back to Rust to allow close | |
| await emit('window-close-allowed'); | |
| await listen<null>(WINDOW_CLOSE_REQUESTED_EVENT, async () => { | |
| await this.handleWindowCloseRequested(); | |
| }); | |
| } catch (error) { | |
| this.windowCloseHandlerSetup = null; | |
| console.error('Failed to setup window close handler:', error); | |
| return; | |
| } | |
| private async registerWindowCloseHandler(): Promise<void> { | |
| try { | |
| // Listen for window close request from Rust | |
| await listen<null>(WINDOW_CLOSE_REQUESTED_EVENT, async () => { | |
| await this.handleWindowCloseRequested(); | |
| }); | |
| } catch (error) { | |
| this.windowCloseHandlerSetup = null; | |
| console.error('Failed to setup window close handler:', error); | |
| throw error; | |
| } | |
| } |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tauri/src/platform/lifecycle.ts` around lines 67 - 77, Currently
registerWindowCloseHandler swallows listen(...) failures by setting
this.windowCloseHandlerSetup = null and returning, causing the caller to think
setup succeeded; instead propagate the failure so setup rejects. Change
registerWindowCloseHandler (the async function that calls listen and awaits
WINDOW_CLOSE_REQUESTED_EVENT) to rethrow the caught error (or throw a new Error
wrapping it) rather than returning, and avoid marking windowCloseHandlerSetup as
success when listen fails; this ensures callers that await
registerWindowCloseHandler observe the failure and Rust receives a correct
readiness signal.
| // Check if server was started by this app instance | ||
| // @ts-expect-error - accessing module-level variable from another module | ||
| const serverStartedByApp = window.__voiceboxServerStartedByApp ?? false; | ||
|
|
||
| console.log( | ||
| '[lifecycle] window-close-requested: keepRunning=%s, serverStartedByApp=%s', | ||
| keepRunning, | ||
| serverStartedByApp, | ||
| ); | ||
|
|
||
| if (!keepRunning && serverStartedByApp) { | ||
| // Stop server before closing (only if we started it) | ||
| try { | ||
| await this.stopServer(); |
Use backend-owned state for “server started by app” after window recreation.
window.__voiceboxServerStartedByApp is tied to the renderer context. After recreating "main", this can reset to false, causing close cleanup to skip stopServer() even when this app launched the server.
Proposed direction
- // `@ts-expect-error` - accessing module-level variable from another module
- const serverStartedByApp = window.__voiceboxServerStartedByApp ?? false;
+ const serverStartedByApp = await invoke<boolean>('is_server_managed_by_app');

Then back this with a Rust command that reads authoritative server ownership from ServerState (e.g., managed child process presence), not renderer globals.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tauri/src/platform/lifecycle.ts` around lines 91 - 104, The check using the
renderer global window.__voiceboxServerStartedByApp is unreliable after the
"main" window is recreated; replace that client-side ownership check with an
authoritative backend query: call the existing backend command that reads
ServerState (e.g., an IPC/Tauri command like a Rust function that returns
whether the server was started/owned by this app) and use its boolean result
instead of window.__voiceboxServerStartedByApp before deciding to call
this.stopServer(); keepServer variable use stays the same, but locate the logic
around the window-close handling and update it to await the backend ownership
check (referring to stopServer() and the ServerState-backed command name) so
cleanup runs even after renderer recreation.
Force-pushed from beeee0b to 0ee3982
♻️ Duplicate comments (1)
tauri/src-tauri/src/main.rs (1)
1475-1479: ⚠️ Potential issue | 🟡 Minor
Reset the `closing` flag if the emit-fallback close fails.
If the emit fails and `window.close()` also fails (or triggers a re-entrant CloseRequested that returns early), the `closing` flag remains `true`. Subsequent user-initiated closes will bypass frontend cleanup entirely because line 1464 returns early without calling `api.prevent_close()`.
The macOS main window path (line 68) correctly resets the flag on failure; this path should do the same for consistency.
Proposed fix

```diff
 if let Err(e) = app_handle.emit(WINDOW_CLOSE_REQUESTED_EVENT, ()) {
     eprintln!("Failed to emit {WINDOW_CLOSE_REQUESTED_EVENT} event: {}", e);
-    window.close().ok();
+    if let Err(close_err) = window.close() {
+        eprintln!("Failed to close window after emit failure: {}", close_err);
+        closing.store(false, Ordering::SeqCst);
+    }
     return;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tauri/src-tauri/src/main.rs` around lines 1475 - 1479, The current close-path sets the shared closing flag true and then emits WINDOW_CLOSE_REQUESTED_EVENT; if emit or the fallback window.close() fails the closing flag stays true and blocks future close flows. Update the closure handling around app_handle.emit(WINDOW_CLOSE_REQUESTED_EVENT) and the fallback window.close() to reset the shared closing flag back to false when either emit fails or window.close() returns Err or otherwise does not complete the frontend handshake, mirroring the macOS main-window path behavior; ensure you reference and clear the same flag used by the early-return at the api.prevent_close() check so subsequent user-initiated closes run the normal cleanup.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@tauri/src-tauri/src/main.rs`:
- Around line 1475-1479: The current close-path sets the shared closing flag
true and then emits WINDOW_CLOSE_REQUESTED_EVENT; if emit or the fallback
window.close() fails the closing flag stays true and blocks future close flows.
Update the closure handling around app_handle.emit(WINDOW_CLOSE_REQUESTED_EVENT)
and the fallback window.close() to reset the shared closing flag back to false
when either emit fails or window.close() returns Err or otherwise does not
complete the frontend handshake, mirroring the macOS main-window path behavior;
ensure you reference and clear the same flag used by the early-return at the
api.prevent_close() check so subsequent user-initiated closes run the normal
cleanup.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 8cea5dcc-d798-47da-8ec2-af0892d518c1
📒 Files selected for processing (1)
tauri/src-tauri/src/main.rs
@coderabbitai review
✅ Actions performed: Review triggered.
♻️ Duplicate comments (1)
tauri/src-tauri/src/main.rs (1)
1475-1478: ⚠️ Potential issue | 🟠 Major
Reset `closing` if the fallback `window.close()` fails.
If this fallback close errors, the window stays open with `closing = true`, and the next user close bypasses the frontend cleanup path. Mirror the macOS helper here and restore the guard on failure.
🩹 Proposed fix

```diff
 if let Err(e) = app_handle.emit(WINDOW_CLOSE_REQUESTED_EVENT, ()) {
     eprintln!("Failed to emit {WINDOW_CLOSE_REQUESTED_EVENT} event: {}", e);
-    window.close().ok();
+    if let Err(close_err) = window.close() {
+        eprintln!("Failed to close window after emit failure: {}", close_err);
+        closing.store(false, Ordering::SeqCst);
+    }
     return;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tauri/src-tauri/src/main.rs` around lines 1475 - 1478, When the fallback window.close() call fails the code must reset the closing guard so the window isn't left stuck with closing=true; after attempting the fallback close in the block that calls app_handle.emit(WINDOW_CLOSE_REQUESTED_EVENT, ()), check the Result from window.close() and on Err(...) restore the closing flag (the same guard used around the close flow, e.g. the closing variable/Mutex/Atomic) back to false and log the error, mirroring the macOS helper behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@tauri/src-tauri/src/main.rs`:
- Around line 1475-1478: When the fallback window.close() call fails the code
must reset the closing guard so the window isn't left stuck with closing=true;
after attempting the fallback close in the block that calls
app_handle.emit(WINDOW_CLOSE_REQUESTED_EVENT, ()), check the Result from
window.close() and on Err(...) restore the closing flag (the same guard used
around the close flow, e.g. the closing variable/Mutex/Atomic) back to false and
log the error, mirroring the macOS helper behavior.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d214a2f5-f82f-4abc-a278-810d07046877
📒 Files selected for processing (1)
tauri/src-tauri/src/main.rs
Force-pushed from 0ee3982 to b80bebe
Summary
Fixes a macOS/Tauri window lifecycle issue where closing the main Voicebox window with the red macOS close button leaves the app process running, but clicking the Dock icon does not restore the window.
Problem
On macOS, closing the main window with the red close button could leave Voicebox running without a visible main window.
Local diagnostic after reproducing:
Fix
On macOS, the red close button for the `main` window now:
- runs the `window-close-requested`/`window-close-allowed` frontend cleanup handshake
- hides the window instead of destroying it

Dock reopen now honors `has_visible_windows` and only shows, unminimizes, and focuses the existing `main` window when no visible windows are present.

This avoids destroying and recreating the renderer, so no frontend readiness handshake or renderer-global ownership migration is needed.
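The reopen guard can be illustrated with a stand-in window type. The method names mirror `show`/`unminimize`/`set_focus` on Tauri's `WebviewWindow`, but the struct itself is a mock so the control flow is runnable without Tauri:

```rust
// Stand-in window so the reopen control flow runs without Tauri; the
// struct and its fields are mocks for illustration only.
struct MockWindow {
    visible: bool,
    minimized: bool,
    focused: bool,
}

impl MockWindow {
    fn show(&mut self) { self.visible = true; }
    fn unminimize(&mut self) { self.minimized = false; }
    fn set_focus(&mut self) { self.focused = true; }
}

// The Reopen arm honors has_visible_windows: restore the main window only
// when nothing is currently visible.
fn on_reopen(has_visible_windows: bool, main: &mut MockWindow) {
    if !has_visible_windows {
        main.show();
        main.unminimize();
        main.set_focus();
    }
}
```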
Verification
Automated:
- `git diff --check`
- `cargo check`
- `bun run typecheck`
- `cd tauri && bun run build`

Manual macOS QA:
- `just dev`
- `Cmd+Q` exits the app cleanly.

Observed result:
Summary by CodeRabbit
macOS Enhancements