Description
We have MigTD working with a single connection at a time, but when adding support for concurrent connections over the tdvmcall interface, I noticed the following sequence (the message layouts involved, as we understand them, are sketched after this list):
- 1st WFR tdvmcall from MigTD: the VMM does not have a vMotion request yet, so we complete the WFR tdvmcall with WFR_OP_NO_OP in the response data->operation.
- vMotion request for the 1st connection arrives: the VMM posts the required data at the WFR physical address (0xa5f000) and then wakes up MigTD by posting an interrupt. This connection comes from the destination VM (the destination happened to connect first).
- The 2nd WFR arrives from MigTD: the VMM does not have a 2nd connection request yet, so we complete the WFR tdvmcall with WFR_OP_NO_OP in the response data->operation.
- 1st connection Send(STREAM_OP_REQUEST) tdvmcall is then made by MigTD: the VMM gets a Send call with op STREAM_OP_REQUEST.
- 1st connection Recv(STREAM_OP_RESPONSE) tdvmcall: the VMM injects a response to Send(STREAM_OP_REQUEST) with STREAM_OP_RESPONSE. The connection with the VMM is now established, and connection 1 moves into STREAM_OP_RW.
- 1st connection GetQuote call: the GetQuote call from connection 1 succeeds.
- 1st connection Recv(STREAM_OP_RW): a Recv tdvmcall is made on connection 1, but the VMM does not have data yet, so it returns packetHdr->len = 0.
- vMotion request for the 2nd connection arrives: the VMM posts the required data at the WFR physical address (0xa5f000) and then wakes up MigTD by posting an interrupt. This connection comes from the source VM.
- MigTD now keeps calling Recv(STREAM_OP_RW) on the 1st connection even though the VMM does not have any data.
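
For reference, here is a minimal sketch of the message layouts as we currently understand them from the trace above. The struct and field names (wfr_response, stream_packet_hdr) and the constant values are our assumptions, not authoritative MigTD definitions:

```c
/* Hypothetical layouts inferred from this trace -- names and values
 * are illustrative assumptions, not the real MigTD definitions. */
#include <stdint.h>

/* Operation reported in the WFR (wait-for-request) response buffer. */
#define WFR_OP_NO_OP        0  /* no migration request pending (value assumed) */

/* Per-connection stream operations seen in the trace. */
#define STREAM_OP_REQUEST   1  /* connection setup request   (value assumed) */
#define STREAM_OP_RESPONSE  2  /* connection setup response  (value assumed) */
#define STREAM_OP_RW        3  /* established, data transfer (value assumed) */

/* Response data the VMM writes at the shared WFR physical address
 * (0xa5f000 in this trace) before injecting the wake-up interrupt. */
struct wfr_response {
    uint32_t operation;   /* WFR_OP_NO_OP when no vMotion request is pending */
    /* remaining request parameters elided */
};

/* Header of a packet returned by Recv(); len == 0 means "no data yet". */
struct stream_packet_hdr {
    uint32_t len;
};
```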
As you can see, MigTD does not switch to the 2nd successful WFR; instead it keeps calling Recv() on the 1st connection. The source connection would have sent the initial handshake data, but that never happens, because the 2nd WFR request is not accepted and Recv() on the 1st connection is called indefinitely.
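
To make the expectation concrete, below is a minimal sketch of the multiplexing behaviour we expected on the MigTD side, assuming MigTD can interleave WFR polls with per-connection Recv calls. All function names here (wfr_tdvmcall, stream_recv, handle_new_connection, service_connection) are hypothetical placeholders, not real MigTD APIs:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal stand-ins for the layouts sketched above. */
struct wfr_response { uint32_t operation; };
struct connection;

/* Hypothetical wrappers around the tdvmcalls described in this report. */
extern bool wfr_tdvmcall(struct wfr_response *resp);   /* true if operation != WFR_OP_NO_OP */
extern size_t stream_recv(struct connection *c);       /* returns packetHdr->len */
extern struct connection *handle_new_connection(const struct wfr_response *resp);
extern void service_connection(struct connection *c);

void migtd_event_loop(struct connection *conns[], size_t *nconns)
{
    struct wfr_response resp;

    for (;;) {
        /* Expected: poll WFR between Recv attempts so a 2nd vMotion
         * request is picked up even while connection 1 is idle (Recv
         * returning packetHdr->len == 0). The trace above suggests
         * MigTD instead loops on Recv for connection 1 and never
         * returns to WFR. */
        if (wfr_tdvmcall(&resp))
            conns[(*nconns)++] = handle_new_connection(&resp);

        for (size_t i = 0; i < *nconns; i++)
            if (stream_recv(conns[i]) > 0)
                service_connection(conns[i]);
    }
}
```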
So either the VMM is missing something or MigTD's tdvmcall behaviour is off. Whether the source or the destination sends its connect request to the VMM first should be immaterial, since the kernel/VMM has no control over scheduling. We understand that it is the source that sends the initial TLS handshake first, but which side connected first (source or destination) should be irrelevant.
If the VMM is missing something, we would like detailed documentation of the concurrent-connection support; this will help us implement it.