libkrun: add krun_add_vsock_unix_tunnel API #345


Open
wants to merge 1 commit into main
Conversation

ggoodman (Contributor) commented Jun 5, 2025

Introduce a new API to create Unix socket tunnels between host and
guest over vsock connections. This enables transparent communication
through Unix domain sockets across the VM boundary.

The API supports both listen and connect modes:

  • Listen mode: guest listens on Unix socket, connects to host via vsock
  • Connect mode: guest connects to Unix socket, accepts from host via vsock

Implementation includes proper thread synchronization to prevent race
conditions during tunnel setup, ensuring proxy threads are ready
before the target process starts.


Signed-off-by: Geoffrey Goodman <[email protected]>
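
For context, a rough sketch of how a caller might use the new API. The exact signature of krun_add_vsock_unix_tunnel (vsock port, Unix socket path, and a listen/connect flag) is an assumption based on the description above, not the final header; the surrounding calls are the usual libkrun context setup.

```c
#include <libkrun.h>

int main(void)
{
    int ctx = krun_create_ctx();
    if (ctx < 0)
        return 1;

    krun_set_vm_config(ctx, 1 /* vCPUs */, 512 /* MiB of RAM */);
    krun_set_root(ctx, "/path/to/rootfs");

    /* Listen mode (signature and flag value are assumed, not final):
     * the guest listens on /run/app.sock and each accepted connection
     * is tunneled to the host over vsock port 1234. */
    krun_add_vsock_unix_tunnel(ctx, 1234, "/run/app.sock", 1 /* listen */);

    /* argv/envp omitted for brevity. */
    krun_set_exec(ctx, "/usr/bin/app", NULL, NULL);

    /* Per the description, proxy threads are synchronized to be ready
     * before the target process starts. */
    return krun_start_enter(ctx);
}
```
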
tylerfanelli (Member) commented Jun 5, 2025

@ggoodman Thanks for the contribution.

We have something similar to this with the krun_add_vsock_port API. It allows a guest to communicate with an IPC socket (on the host) through an AF_VSOCK socket. It seems these two are trying to solve the same issue.
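
For reference, a minimal sketch of how krun_add_vsock_port is used (the port number and socket path here are illustrative only):

```c
#include <libkrun.h>

/* Illustrative sketch: connections the guest makes to AF_VSOCK port 1025
 * are forwarded to the host Unix socket at /tmp/host-ipc.sock, where a
 * host process is expected to be listening. */
int add_ipc_port(int ctx)
{
    return krun_add_vsock_port(ctx, 1025, "/tmp/host-ipc.sock");
}
```
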
Furthermore, I've been thinking about this issue lately and wonder whether a more general solution is possible.

libkrun implements Transparent Socket Impersonation (AF_TSI) for guest networking. From the host’s perspective, all connections come from/go to the libkrun-enabled runtime, and are visible in the network namespace of the runtime’s context.

I wonder if we can extend this to IPC as well, that is, whether we could add a krun API like krun_forward_unix_ipc. This would extend the concept of AF_TSI to AF_UNIX sockets. With this configured, every time a guest application attempts to open an AF_UNIX socket for IPC, the guest kernel would "hijack" it and transparently create an AF_VSOCK socket that forwards data to the host (and back). This could allow "transparent" IPC between libkrun guests and processes on the host within the krun VM's namespace.

@slp WDYT?
