Reload defunct runners #68
base: main
Conversation
return l.slots[existing], nil
select {
case <-l.slots[existing].done:
	l.log.Warnf("Will reload defunct %s runner for %s. Runner error: %s.", backendName, model,
Check failure: Code scanning / CodeQL
Log entries created from user input (High). A user-provided value flows into this log entry.
Copilot Autofix (AI, 1 day ago):
To fix the issue, the model variable should be sanitized before being used in the log entry on line 383 of loader.go. Specifically, we should remove any newline characters (\n, \r) from the model string to prevent log injection attacks. This can be achieved using strings.ReplaceAll or similar methods.
The sanitization should be applied directly before the log statement to ensure that the logged value is safe. This fix will not alter the functionality of the code but will enhance its security.
@@ -12,2 +12,3 @@
 	"github.com/docker/model-runner/pkg/logging"
+	"strings"
 )
@@ -382,3 +383,5 @@
 		case <-l.slots[existing].done:
-			l.log.Warnf("Will reload defunct %s runner for %s. Runner error: %s.", backendName, model,
+			safeModel := strings.ReplaceAll(model, "\n", "")
+			safeModel = strings.ReplaceAll(safeModel, "\r", "")
+			l.log.Warnf("Will reload defunct %s runner for %s. Runner error: %s.", backendName, safeModel,
 				l.slots[existing].err)
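As an aside, the two ReplaceAll calls could also be collapsed into a single pass with strings.NewReplacer. A minimal sketch of that variant, not part of the suggested fix:

package main

import (
	"fmt"
	"strings"
)

// logSanitizer strips CR and LF in one pass so a user-supplied value cannot
// forge additional log lines. Hypothetical helper, not code from this PR.
var logSanitizer = strings.NewReplacer("\n", "", "\r", "")

func main() {
	model := "some-model\nINJECTED LINE"
	fmt.Println(logSanitizer.Replace(model)) // prints: some-modelINJECTED LINE
}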
l.timestamps[existing] = time.Time{}
return l.slots[existing], nil
select {
case <-l.slots[existing].done:
I think it'd make sense to also run l.evictRunner(backendName, model) so we don't have to evict all runners in order to find a free slot. WDYT?
Yep, that makes sense.
Force-pushed from c4243a2 to 8d5a74a.
case <-l.slots[existing].done:
	l.log.Warnf("Will reload defunct %s runner for %s. Runner error: %s.", backendName, model,
		l.slots[existing].err)
	l.evictRunner(backendName, model)
Suggested change:
-l.evictRunner(backendName, model)
+// Reset the reference count to zero so that we can evict the runner and then start a new one.
+l.references[existing] = 0
+l.evictRunner(backendName, model)
Makes sense. Though I wonder if it would not be safer to let the reference counting work normally, issue an idle check here, and expand the idle check logic to look for defunct or stale runners. WDYT?
expand the idle check logic to look for defunct or stale runners
I like this!
Although, in this specific case, the code which comes right after the code you're changing will evict all (1, currently, but still) runners if all the slots are full and the runner currently being loaded is defunct and not cleaned up, right?
// If there's not sufficient memory or all slots are full, then try
// evicting unused runners.
if memory > l.availableMemory || len(l.runners) == len(l.slots) {
	l.evict(false)
}
I'm pretty sure forcing the refcount to 0 does put us at a risk of panicking in loader.release. I've opted not to force the refcount to 0, and added logic in evict to remove defunct runners.
I agree that we can't force the refcount to 0 here.
The bigger issue I see with the new logic is that evictRunner in this case might not actually evict if there's a non-zero reference count for the defunct runner (e.g. a client that hasn't realized its backend is defunct yet). The problem is that this code would then continue and override the l.runners entry for runnerKey{backend, model, mode} with a newly created runner, so when that hypothetical outstanding defunct runner is finally released, it will decrement the reference count for the new runner in release (since it uses the same key to look up the slot).
I think what I would do is put a label (say WaitForChange:) just above the last block of code in this loop (grep for "Wait for something to change") and then in the case <-l.slots[existing].done: path, I would goto WaitForChange. Then, in release, add a check for <-runner.done and immediately evict if l.references[slot] == 0. Because realistically any client using a defunct runner will find out quite quickly once the socket connection closes, which means the runner will be release'd quickly, which will call broadcast and break the waiting load call out of its waiting loop.
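A rough, self-contained sketch of that proposal, reusing the names from this thread (WaitForChange, done, references); the loader's actual fields and loop structure are assumptions here, not the PR's code:

package main

import (
	"fmt"
	"sync"
)

// runner stands in for the PR's runner type; done is closed when the
// backend exits, marking the runner defunct.
type runner struct {
	done chan struct{}
	err  error
}

// loader is a simplified stand-in for the PR's loader.
type loader struct {
	mu         sync.Mutex
	cond       *sync.Cond
	runners    map[string]int // key -> slot index
	slots      []*runner
	references []int
}

func newLoader(n int) *loader {
	l := &loader{
		runners:    make(map[string]int),
		slots:      make([]*runner, n),
		references: make([]int, n),
	}
	l.cond = sync.NewCond(&l.mu)
	return l
}

// load returns a live runner for key. If the existing runner is defunct, it
// does not overwrite the slot; it waits for outstanding references to drain.
func (l *loader) load(key string) *runner {
	l.mu.Lock()
	defer l.mu.Unlock()
	for {
		if slot, ok := l.runners[key]; ok {
			select {
			case <-l.slots[slot].done:
				// Defunct: overwriting l.runners[key] now would make a
				// late release from an old client hit the new runner.
				fmt.Printf("waiting to reload defunct runner for %s\n", key)
				goto WaitForChange
			default:
				l.references[slot]++
				return l.slots[slot]
			}
		}
		// No live entry: start a fresh runner (real code would pick a free
		// slot and possibly evict; slot 0 keeps the sketch short).
		l.slots[0] = &runner{done: make(chan struct{})}
		l.runners[key] = 0
		l.references[0] = 1
		return l.slots[0]

	WaitForChange:
		// "Wait for something to change": release broadcasts on the cond.
		l.cond.Wait()
	}
}

// release drops a reference; if the runner is defunct and now unreferenced,
// it is evicted immediately so waiting load calls can proceed.
func (l *loader) release(key string) {
	l.mu.Lock()
	defer l.mu.Unlock()
	slot := l.runners[key]
	l.references[slot]--
	select {
	case <-l.slots[slot].done:
		if l.references[slot] == 0 {
			delete(l.runners, key) // evict the defunct runner
		}
	default:
	}
	l.cond.Broadcast()
}

func main() {
	l := newLoader(1)
	r := l.load("model-a")
	close(r.done) // simulate a backend crash
	l.release("model-a")
	// The defunct runner was evicted, so a fresh one can be loaded.
	fmt.Println(l.load("model-a") != r) // true
}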
Force-pushed from 8d5a74a to 869b389.
In case a runner becomes defunct, e.g. as a result of a backend crash, it would be neat to be able to reload it. So, if the loader finds a runner, have it check whether the runner is still alive, and create a new one if the runner is defunct. Signed-off-by: Piotr Stankiewicz <[email protected]>
Force-pushed from 869b389 to e69a618.
I like the idea, but I think we'll need a slightly different approach.
defunct := false
select {
case <-l.slots[slot].done:
	defunct = true
default:
}
if unused && (!idleOnly || idle || defunct) {
This chunk looks good, I would just update the doc comment for evict to reflect that it also evicts defunct runners if possible.
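For instance, the updated comment might read something like this; the signature is inferred from the l.evict(false) call above, and the existing wording isn't shown in this thread:

// evict evicts unused runners. If idleOnly is true, it only evicts runners
// that have exceeded their idle timeout or whose backends have become
// defunct; otherwise it evicts any runner with no outstanding references.
func (l *loader) evict(idleOnly bool) {
	// ...
}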