Conversation
@gkpln3 gkpln3 commented Sep 14, 2025

Description

Fixed a bug in watch mode that caused unrelated interruptible tasks to be killed when a dependency changed.
The fix makes watch mode more granular by identifying the specific tasks affected by a file change and only restarting them.

Fixes #9421.
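The granular approach described above can be sketched as a minimal, std-only model (names like `tasks_to_restart` are illustrative, not Turborepo APIs; the real fix also consults the task graph so transitive dependents restart too):

```rust
use std::collections::HashSet;

// Hypothetical sketch: restart only the running tasks whose package saw a
// file change, leaving unrelated interruptible tasks alone.
fn tasks_to_restart(
    running: &[(String, String)], // (package, task) pairs currently running
    changed_packages: &HashSet<String>,
) -> Vec<(String, String)> {
    running
        .iter()
        .filter(|(pkg, _)| changed_packages.contains(pkg))
        .cloned()
        .collect()
}

fn main() {
    let running = vec![
        ("web".to_string(), "dev".to_string()),
        ("docs".to_string(), "dev".to_string()),
    ];
    let changed: HashSet<String> = ["web".to_string()].into_iter().collect();
    // Only web#dev is restarted; docs#dev keeps running uninterrupted.
    assert_eq!(
        tasks_to_restart(&running, &changed),
        vec![("web".to_string(), "dev".to_string())]
    );
    println!("ok");
}
```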

@gkpln3 gkpln3 requested a review from a team as a code owner September 14, 2025 19:37
@gkpln3 gkpln3 requested a review from tknickman September 14, 2025 19:37

vercel bot commented Sep 14, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| examples-basic-web | Ready | Preview | Comment | Sep 14, 2025 7:37pm |
| examples-designsystem-docs | Ready | Preview | Comment | Sep 14, 2025 7:37pm |
| examples-tailwind-web | Ready | Preview | Comment | Sep 14, 2025 7:37pm |
| examples-vite-web | Ready | Preview | Comment | Sep 14, 2025 7:37pm |


vercel bot commented Sep 14, 2025

@gkpln3 is attempting to deploy a commit to the Vercel Team on Vercel.

A member of the Team first needs to authorize it.

@gkpln3 gkpln3 changed the title fix(watch): only stop relevant tasks while watching for changes. DRAFT: fix(watch): only stop relevant tasks while watching for changes. Sep 14, 2025
@gkpln3 gkpln3 marked this pull request as draft September 14, 2025 19:39
Comment on lines +110 to +122
```rust
pub fn task_name(&self) -> Option<(String, String)> {
    let package = self
        .env
        .get(&OsString::from("TURBO_PACKAGE_NAME"))?
        .to_str()?
        .to_string();
    let task = self
        .env
        .get(&OsString::from("TURBO_TASK_NAME"))?
        .to_str()?
        .to_string();
    Some((package, task))
}
```

The task_name() method tries to read TURBO_PACKAGE_NAME and TURBO_TASK_NAME environment variables that are never set anywhere in the codebase, causing it to always return None and breaking the selective task stopping functionality.

📝 Patch Details
```diff
diff --git a/crates/turborepo-lib/src/task_graph/visitor/command.rs b/crates/turborepo-lib/src/task_graph/visitor/command.rs
index 0b3cb4d74..c38cbb956 100644
--- a/crates/turborepo-lib/src/task_graph/visitor/command.rs
+++ b/crates/turborepo-lib/src/task_graph/visitor/command.rs
@@ -135,6 +135,10 @@ impl<'a> CommandProvider for PackageGraphCommandProvider<'a> {
         cmd.env_clear();
         cmd.envs(environment.iter());
 
+        // Set package and task name for selective task stopping
+        cmd.env("TURBO_PACKAGE_NAME", task_id.package());
+        cmd.env("TURBO_TASK_NAME", task_id.task());
+
         // If the task has an associated proxy, then we indicate this to the underlying
         // task via an env var
         if self
```
Analysis

Missing environment variables break selective task stopping

What fails: ProcessManager::stop_tasks() cannot identify tasks to stop because Command::task_name() always returns None due to missing TURBO_PACKAGE_NAME and TURBO_TASK_NAME environment variables

How to reproduce:

  1. Start turbo in watch mode with multiple tasks running
  2. Trigger a file change that should only stop specific tasks
  3. Observe that no tasks are selectively stopped due to task_name() returning None

Result: All tasks continue running instead of selective stopping, breaking the granular watch mode functionality introduced in commit 688456031

Expected: Tasks should be selectively stopped based on package and task names when files change during watch mode

Root cause: PackageGraphCommandProvider::command() in crates/turborepo-lib/src/task_graph/visitor/command.rs never sets the TURBO_PACKAGE_NAME and TURBO_TASK_NAME environment variables that Command::task_name() tries to read
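The failure mode can be reproduced with a minimal, std-only sketch (a `HashMap` stands in for the command's env map; this mirrors the `task_name()` logic shown in the snippet, not the actual `Command` type):

```rust
use std::collections::HashMap;
use std::ffi::OsString;

// Mirrors the task_name() logic from the snippet above: both variables must
// be present and valid UTF-8, otherwise the function returns None and the
// task cannot be identified for selective stopping.
fn task_name(env: &HashMap<OsString, OsString>) -> Option<(String, String)> {
    let package = env
        .get(&OsString::from("TURBO_PACKAGE_NAME"))?
        .to_str()?
        .to_string();
    let task = env
        .get(&OsString::from("TURBO_TASK_NAME"))?
        .to_str()?
        .to_string();
    Some((package, task))
}

fn main() {
    // Bug: the provider never sets the variables, so lookup always fails.
    let empty = HashMap::new();
    assert_eq!(task_name(&empty), None);

    // Fix: once the provider sets both variables, the task is identifiable.
    let mut env = HashMap::new();
    env.insert(OsString::from("TURBO_PACKAGE_NAME"), OsString::from("web"));
    env.insert(OsString::from("TURBO_TASK_NAME"), OsString::from("dev"));
    assert_eq!(task_name(&env), Some(("web".into(), "dev".into())));
    println!("ok");
}
```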

Comment on lines +162 to +173
```rust
pub fn filter_tasks(&self, tasks_to_run: &HashSet<(PackageName, String)>) -> Engine<Built> {
    let new_graph = self.task_graph.filter_map(
        |node_idx, node| {
            if let TaskNode::Task(task) = &self.task_graph[node_idx] {
                if tasks_to_run.contains(&(task.package().into(), task.task().to_string())) {
                    return Some(node.clone());
                }
            }
            None
        },
        |_, _| Some(()),
    );
```

The filter_tasks() method excludes the root node from the filtered graph, which could break task graph integrity and cause execution failures.

📝 Patch Details
```diff
diff --git a/crates/turborepo-lib/src/engine/mod.rs b/crates/turborepo-lib/src/engine/mod.rs
index 76221fd62..d34ca45fe 100644
--- a/crates/turborepo-lib/src/engine/mod.rs
+++ b/crates/turborepo-lib/src/engine/mod.rs
@@ -162,16 +162,25 @@ impl Engine<Built> {
     pub fn filter_tasks(&self, tasks_to_run: &HashSet<(PackageName, String)>) -> Engine<Built> {
         let new_graph = self.task_graph.filter_map(
             |node_idx, node| {
-                if let TaskNode::Task(task) = &self.task_graph[node_idx] {
-                    if tasks_to_run.contains(&(task.package().into(), task.task().to_string())) {
-                        return Some(node.clone());
+                match &self.task_graph[node_idx] {
+                    TaskNode::Task(task) => {
+                        if tasks_to_run.contains(&(task.package().into(), task.task().to_string())) {
+                            Some(node.clone())
+                        } else {
+                            None
+                        }
                     }
+                    TaskNode::Root => Some(node.clone()),
                 }
-                None
             },
             |_, _| Some(()),
         );
 
+        let root_index = new_graph
+            .node_indices()
+            .find(|index| new_graph[*index] == TaskNode::Root)
+            .expect("root node should be present");
+
         let task_lookup: HashMap<_, _> = new_graph
             .node_indices()
             .filter_map(|index| {
@@ -187,7 +196,7 @@ impl Engine<Built> {
 
         Engine {
             marker: std::marker::PhantomData,
-            root_index: self.root_index,
+            root_index,
             task_graph: new_graph,
             task_lookup,
             task_definitions: self.task_definitions.clone(),
```
Analysis

Invalid root_index in Engine.filter_tasks() causes panic when accessing filtered task graph

What fails: Engine::filter_tasks() in crates/turborepo-lib/src/engine/mod.rs excludes root node from filtered graph but reuses original root_index, causing panic when accessing engine.task_graph[engine.root_index]

How to reproduce:

```rust
let engine = engine.filter_tasks(&tasks_to_run);
// Any access to engine.task_graph[engine.root_index] will panic with "index out of bounds"
```

Result: Panic with "index out of bounds" when trying to access root node via root_index in filtered engine

Expected: Root node should be preserved in filtered graph and root_index should point to valid node, consistent with other filtering methods like create_engine_for_interruptible_tasks() which correctly handle root node preservation
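The invariant the patch restores can be illustrated with a simplified, std-only model (a flat `Vec` of nodes in place of a petgraph graph; names are illustrative): filtering must keep the `Root` node and recompute its index in the filtered collection rather than reusing the pre-filter index.

```rust
use std::collections::HashSet;

#[derive(Clone, Debug, PartialEq)]
enum TaskNode {
    Root,
    Task(String), // "package#task" id, simplified to a single string
}

// Keep Root plus every task in `keep`, and return Root's index in the
// *filtered* collection; reusing the pre-filter index is the bug.
fn filter_tasks(nodes: &[TaskNode], keep: &HashSet<String>) -> (Vec<TaskNode>, usize) {
    let filtered: Vec<TaskNode> = nodes
        .iter()
        .filter(|n| match n {
            TaskNode::Root => true,
            TaskNode::Task(id) => keep.contains(id),
        })
        .cloned()
        .collect();
    let root_index = filtered
        .iter()
        .position(|n| *n == TaskNode::Root)
        .expect("root node should be present");
    (filtered, root_index)
}

fn main() {
    let nodes = vec![
        TaskNode::Task("web#dev".to_string()),
        TaskNode::Task("docs#dev".to_string()),
        TaskNode::Root, // pre-filter root index: 2
    ];
    let keep: HashSet<String> = ["web#dev".to_string()].into_iter().collect();
    let (filtered, root_index) = filter_tasks(&nodes, &keep);
    // Filtering dropped a node, so the root moved from index 2 to index 1;
    // indexing the filtered collection with the stale index 2 would panic.
    assert_eq!(filtered.len(), 2);
    assert_eq!(root_index, 1);
    println!("ok");
}
```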

@gkpln3 gkpln3 changed the title DRAFT: fix(watch): only stop relevant tasks while watching for changes. fix(watch): only stop relevant tasks while watching for changes. Sep 14, 2025