Summary
Replace StubEmitter's C++ byte-array stub generation with hand-written, per-architecture ASM entry/exit stubs. These stubs are what the loader patches into the original binary to redirect protected regions to the VM.
Current implementation
StubEmitter (loader/src/X86_64StubEmitter.cpp, ARM64StubEmitter.cpp, X86_32StubEmitter.cpp) generates entry/exit stubs as raw byte vectors in C++:
```cpp
// Simplified; the actual code emits raw bytes
std::vector<uint8_t> X86_64StubEmitter::emit_entry_stub(...) {
    // push rbx; push rbp; push r12-r15; mov rdi, vmctx_ptr; call vm_execute; ...
    return {0x53, 0x55, 0x41, 0x54, ...};
}
```
This works but is hard to maintain, debug, and extend.
Proposed design
Hand-written ASM stubs per architecture, assembled into object files and linked into the loader:
```
loader/src/stubs/
  entry_x86_64.S   # SysV AMD64 ABI: save RBX/RBP/R12-R15, set up args, call vm_execute
  exit_x86_64.S    # Restore callee-saved, place return in RAX/XMM0, RET
  entry_arm64.S    # AAPCS64: save X19-X28/FP/LR, set up X0-X7, BL vm_execute
  exit_arm64.S     # Restore, place return in X0/D0, RET via X30
  entry_x86_32.S   # cdecl: save EBX/ESI/EDI/EBP, push args, call vm_execute
  exit_x86_32.S    # Restore, EAX return, RET
  entry_win64.asm  # Win64 ABI: save RBX/RBP/RDI/RSI/R12-R15, allocate shadow space
  exit_win64.asm   # Restore, RAX/XMM0 return
```
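As a sketch, the SysV AMD64 entry stub might look like the following (the symbol names `vmp_entry_x86_64`, `vmp_vmctx`, and `vm_execute_with_args` are illustrative assumptions, and argument transfer beyond the context pointer is elided):

```asm
/* entry_x86_64.S: SysV AMD64 entry stub (sketch; symbol names are assumed) */
    .text
    .globl  vmp_entry_x86_64
    .type   vmp_entry_x86_64, @function
vmp_entry_x86_64:
    push    %rbx                    /* save callee-saved registers */
    push    %rbp
    push    %r12
    push    %r13
    push    %r14
    push    %r15
    sub     $8, %rsp                /* re-align stack to 16 bytes for the call */
    mov     vmp_vmctx(%rip), %rdi   /* arg 0: VMContext* from a known slot */
    call    vm_execute_with_args    /* result returned in RAX */
    add     $8, %rsp
    pop     %r15                    /* restore in reverse order */
    pop     %r14
    pop     %r13
    pop     %r12
    pop     %rbp
    pop     %rbx
    ret                             /* back to the original call site */
    .size   vmp_entry_x86_64, . - vmp_entry_x86_64
```

Note the explicit 16-byte re-alignment before the call, which the byte-array emitter had to get right by hand; in assembly source it is a visible, commented instruction.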
Stub responsibilities
Entry stub (patched at protected region start):
- Save all native callee-saved registers
- Save native RSP/SP
- Load VMContext pointer (from known location)
- Transfer native arguments to VM registers (per ABI)
- Call vm_execute_with_args()
- Place return value in native return register (RAX/X0)
- Restore callee-saved registers
- RET to original call site
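The steps above imply that the loader must copy the assembled stub bytes and relocate the VMContext pointer before patching the protected region. A minimal sketch of that patch step, assuming the stub template reserves an 8-byte immediate at a known offset (the template layout, offset, and names here are illustrative, not the actual design):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical stub template: the assembled entry stub bytes, with a
// placeholder 8-byte immediate for the VMContext pointer at a known offset.
struct StubTemplate {
    std::vector<uint8_t> bytes;   // assembled machine code
    size_t vmctx_offset;          // offset of the 64-bit vmctx placeholder
};

// Produce a patched copy of the stub with the real VMContext address
// written over the placeholder immediate (little-endian host assumed).
std::vector<uint8_t> patch_stub(const StubTemplate& tmpl, uint64_t vmctx_addr) {
    std::vector<uint8_t> out = tmpl.bytes;
    std::memcpy(out.data() + tmpl.vmctx_offset, &vmctx_addr, sizeof(vmctx_addr));
    return out;
}
```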
Exit stub (counterpart):
- Receive VmExecResult from vm_execute
- Extract plaintext return value
- Handle multi-value returns (RAX+RDX / X0+X1 for structs)
- Handle floating-point returns (XMM0 / D0)
- Restore native stack
- RET
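To make the return-value handling concrete, here is a hedged sketch of a VmExecResult and how the exit path might select native registers by ABI return class (the enum, struct layout, and names are assumptions for illustration; SysV AMD64 shown):

```cpp
#include <cstdint>

// Hypothetical result of vm_execute: which ABI class the return value
// belongs to, plus the raw payload to place in the native registers.
enum class RetClass { Integer, IntegerPair, Float };

struct VmExecResult {
    RetClass cls;
    uint64_t lo;   // RAX / X0, or the XMM0/D0 bit pattern for Float
    uint64_t hi;   // RDX / X1 for two-register struct returns
};

// Registers the exit stub must populate for a given result.
struct NativeReturn {
    uint64_t rax = 0, rdx = 0, xmm0 = 0;
    bool uses_rdx = false, uses_xmm0 = false;
};

NativeReturn place_return(const VmExecResult& r) {
    NativeReturn n;
    switch (r.cls) {
    case RetClass::Integer:     n.rax = r.lo; break;
    case RetClass::IntegerPair: n.rax = r.lo; n.rdx = r.hi; n.uses_rdx = true; break;
    case RetClass::Float:       n.xmm0 = r.lo; n.uses_xmm0 = true; break;
    }
    return n;
}
```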
Benefits over byte-array generation
- Debuggable: source-level stepping in GDB/LLDB
- Maintainable: comments, labels, structured code
- Correct by construction: assembler validates encoding
- Extensible: easy to add FP register save/restore, CET ENDBR, BTI
Dependencies
Deleted placeholders
The following v1 placeholder files (pure comments, zero code) were deleted:
runtime/src/entry_exit/vm_entry_x86_64.S
runtime/src/entry_exit/vm_entry_arm64.S
runtime/src/entry_exit/vm_exit_x86_64.S
runtime/src/entry_exit/vm_exit_arm64.S
Note: these were in runtime/ but entry/exit stubs belong in loader/ (the loader generates them). The new stubs should be in loader/src/stubs/.
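Build integration could look like the following CMake fragment (the `loader` target name and file selection logic are assumptions, not the project's actual build config):

```cmake
# Assemble the per-architecture stubs and link them into the loader.
enable_language(ASM)

set(STUB_SOURCES "")
if(CMAKE_SYSTEM_PROCESSOR MATCHES "x86_64|AMD64")
    list(APPEND STUB_SOURCES
        loader/src/stubs/entry_x86_64.S
        loader/src/stubs/exit_x86_64.S)
elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64|arm64")
    list(APPEND STUB_SOURCES
        loader/src/stubs/entry_arm64.S
        loader/src/stubs/exit_arm64.S)
endif()

target_sources(loader PRIVATE ${STUB_SOURCES})
```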