Mctp bridge support #71
Conversation
Thanks for the contribution! I'll get to a proper review shortly. I have some pending changes that rework a lot of the peer, link and network allocation mechanisms. That shouldn't affect your code too much, but I'll request a rebase once that is merged.
Sure, no problem.
So the main design point here is how we're handling the pool allocations. It looks like your particular use-case is around static allocations, which I'll focus on here.
As I mentioned in the dbus changes, we cannot add arguments without further version-compatibility changes. After a bit of chatting with the team, I think a better approach would be to add a new dbus call to explicitly allocate a bridge and a predefined pool (which would include the pool size). Perhaps something like:
AllocateBridgeStatic(addr: ay, pool_start: y, pool_size: y)
- where the Set Endpoint ID response must match the expected pool size.
(we would also want purely-dynamic pools to be allocated from SetupEndpoint and friends, but that would result in a dynamic pool allocation. This dynamic pool would be defined either by a toml config option, or via a new TMBO dbus interface. However, we can handle those later, I think)
Would that work?
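For illustration, a minimal sd-bus client sketch for invoking the proposed call. Only the method name and signature come from the proposal above; the bus name, object path, interface and example values are assumptions (check mctpd.md for the real values):

```c
/* Hedged sketch: invoking the *proposed* AllocateBridgeStatic call via sd-bus.
 * The bus name, object path and interface below are assumptions based on
 * mctpd's au.com.codeconstruct namespace; check mctpd.md for the real values. */
#include <stdint.h>
#include <stdio.h>
#include <systemd/sd-bus.h>

int main(void)
{
	sd_bus *bus = NULL;
	sd_bus_message *req = NULL, *rsp = NULL;
	sd_bus_error err = SD_BUS_ERROR_NULL;
	const uint8_t hwaddr[] = { 0x1e };	/* bridge's physical address */
	int rc;

	rc = sd_bus_default_system(&bus);
	if (rc < 0)
		return 1;

	rc = sd_bus_message_new_method_call(bus, &req,
		"au.com.codeconstruct.MCTP1",			/* assumed bus name */
		"/au/com/codeconstruct/mctp1/interfaces/mctp0",	/* assumed link object */
		"au.com.codeconstruct.MCTP.BusOwner1",		/* assumed interface */
		"AllocateBridgeStatic");
	if (rc < 0)
		goto out;

	/* addr: ay, pool_start: y, pool_size: y */
	rc = sd_bus_message_append_array(req, 'y', hwaddr, sizeof(hwaddr));
	if (rc < 0)
		goto out;
	rc = sd_bus_message_append(req, "yy", 0x20, 4);
	if (rc < 0)
		goto out;

	rc = sd_bus_call(bus, req, 0, &err, &rsp);
	if (rc < 0)
		fprintf(stderr, "AllocateBridgeStatic failed: %s\n", err.message);

out:
	sd_bus_error_free(&err);
	sd_bus_message_unref(req);
	sd_bus_message_unref(rsp);
	sd_bus_unref(bus);
	return rc < 0;
}
```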
tests/conftest.py (outdated):
Old: Contains one interface (lladdr 0x10, local EID 8), and one endpoint (lladdr 0x1d), that reports support for MCTP control and PLDM.
New: Contains two interfaces (lladdr 0x10, local EID 8) and (lladdr 0x11, local EID 10), with one endpoint (lladdr 0x1d) and an MCTP bridge (lladdr 0x1e, pool size 2), that report support for MCTP control and PLDM.
Please don't alter the default sysnet unless it's needed by a significant number of tests (which in this case, it is not). Just set up the test fixture defaults as needed for your new test.
I see, so it's better to update the current interface (lladdr 0x10, local EID 8) with a pool size, and add that many EIDs to the network to simulate a bridge, rather than creating a new one?
Yep! Just update the interface/endpoints/etc in the individual test case.
In general, can you add a bit more of an explanation / rationale as part of your commit messages, instead of just log output? There is some good guidance for commit messages up in the "Patch formatting and changelogs" section of https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/process/5.Posting.rst
We'll also need to consider the routing setup for bridged endpoints. Ideally we would: […]
The issue is that there is no kernel support for (2) at present: we need some kernel changes to implement gateway routes. It is possible to create "somewhat-fake" routes for those endpoints, using a neighbour table entry for each (bridged) peer that uses the bridge's phys address, but that's a bit suboptimal. I'd prefer not to encode that hack into mctpd if possible. I do have a todo for the kernel changes necessary for that; sounds like I should get onto it!
IIUC, (1) is what we can achieve with the tools we have today, right? For example: add a route to the bridge itself, and then […]
When you say sub-optimal, are you referring to the neighbour lookup that happens in […]?
When is the gateway support in the kernel for MCTP nets planned? We can help if you have a design in mind.
Hi Santosh,
Yes, but it requires a lot of workaround to set up.
That isn't adding a neighbour table entry though; just a route. USB is a little different in that there are no neighbour table entries required, because there is no physical addressing. For a bridge, using this scheme would require: […]
(For USB, we don't need (2) or (4), but that's purely a property of the transport type. We would need those to be supported in mctpd to allow other transport types like i2c.) This would work, but it's messy.
No, the neighbour lookups happen in […].
Not so much faster as tidier. With a gateway route, we would require: […]
No fake neighbour table entries are required, since the kernel just looks up the gateway's physical address from the gateway's own neighbour table entry.
I have it done - will push a development branch shortly.
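To make the two schemes concrete, here is a small, purely illustrative model of the gateway-route case; these are not kernel or mctpd data structures, and the EID/address values are invented. It shows one neighbour entry for the bridge, one range route for its pool, and a two-step lookup at transmit time:

```c
/* Hypothetical model of the gateway-route scheme described above. The structs
 * and tables here are illustrative only; they are not the kernel's (or
 * mctpd's) actual data structures. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef uint8_t mctp_eid_t;

struct route {
	mctp_eid_t min, max;	/* EID range this route covers */
	mctp_eid_t gateway;	/* 0 = direct route, else forward via this EID */
};

struct neigh {
	mctp_eid_t eid;
	uint8_t haddr;		/* physical address on the link (e.g. i2c) */
};

/* One neighbour entry for the bridge only; no entries for bridged EIDs. */
static const struct neigh neighbours[] = { { .eid = 9, .haddr = 0x1e } };

/* Direct route for the bridge (EID 9), one gateway route for its pool 10-13. */
static const struct route routes[] = {
	{ .min = 9, .max = 9, .gateway = 0 },
	{ .min = 10, .max = 13, .gateway = 9 },
};

static const struct neigh *neigh_lookup(mctp_eid_t eid)
{
	for (size_t i = 0; i < sizeof(neighbours) / sizeof(neighbours[0]); i++)
		if (neighbours[i].eid == eid)
			return &neighbours[i];
	return NULL;
}

static const struct route *route_lookup(mctp_eid_t eid)
{
	for (size_t i = 0; i < sizeof(routes) / sizeof(routes[0]); i++)
		if (eid >= routes[i].min && eid <= routes[i].max)
			return &routes[i];
	return NULL;
}

/* Resolve the physical address for a destination EID: for a bridged EID the
 * route points at the gateway (the bridge), and we then look up the gateway's
 * neighbour entry, so no fake per-downstream neighbour entries are needed. */
static const struct neigh *resolve_tx_neigh(mctp_eid_t dest)
{
	const struct route *rt = route_lookup(dest);

	if (!rt)
		return NULL;
	return neigh_lookup(rt->gateway ? rt->gateway : dest);
}

int main(void)
{
	const struct neigh *n = resolve_tx_neigh(12);

	if (n)
		printf("EID 12 -> phys 0x%02x (via the bridge)\n", n->haddr);
	return 0;
}
```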
Hi Jeremy, Thank you for the detailed response.
Ack, I see something like I2C would need a PHY address.
Ack. I should have said the neigh_lookup call that happens in route.c!
Thank you, that does seem cleaner.
And for the userspace changes, my […]
@jk-ozlabs: I think we agree that mctpd has to poll all allocated endpoints with a Get Endpoint ID periodically. I think the first thing we'd need to enable in order to do that is to make MCTP requests and responses asynchronous. Do you have a design in mind to make MCTP requests async (like via a request queue per allocated endpoint)?
Just as a clarification: not all endpoints, but EIDs within allocated endpoint ranges which have not yet been enumerated. And this is assuming we expect mctpd to automatically enumerate those bridged devices; I think the latter is reasonable, but we don't have a specific design point around that yet. With that in mind, yes, we probably want to make that async, as those requests are likely to not have a response, and so we would otherwise be hitting the worst-case waiting time. In terms of design: we probably don't want a […]. We don't necessarily need to keep much state for that polling mechanism (ie, between request and response); receiving a Get Endpoint ID response for anything in that range would trigger the enumeration process.
Wouldn't we also want to poll enumerated endpoints under the bridge to determine when they "went away"?
Ack. How often do you think we should check? Same as the logic for determining when to set the endpoint state as degraded (Treclaim/2)?
No, and we don't do that with directly-attached endpoints either. The current philosophy is that we don't care if an endpoint disappears, until some application calls […]. [I'm okay with revisiting this, or handling bridged endpoints differently, if there's a compelling argument for doing so.]
Treclaim/2 seems a bit too often to me, but might be fine as a starting point. I suspect that an ideal approach would be to poll more regularly when a bridge pool is initially allocated, then reduce the frequency. However, let's not complicate the initial implementation too much here, and just use a configurable constant.
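As a rough sketch of how that polling could hang together (assuming sd-event, which mctpd already uses; the helper names find_peer_by_eid() and send_get_endpoint_id(), the struct layout, and the interval constant are all illustrative, not the final design):

```c
/* Hedged sketch of the bridge-pool polling idea discussed above: periodically
 * send Get Endpoint ID to the not-yet-enumerated EIDs in a bridge's allocated
 * pool, fire-and-forget, and rearm a timer with a configurable constant.
 * find_peer_by_eid() and send_get_endpoint_id() are hypothetical helpers. */
#include <stdbool.h>
#include <stdint.h>
#include <time.h>
#include <systemd/sd-event.h>

typedef uint8_t mctp_eid_t;

struct bridge {
	mctp_eid_t pool_start;
	uint8_t pool_size;
	sd_event_source *poll_timer;
};

/* Assumed config knob; would come from mctpd's config file in practice. */
static const uint64_t bridge_poll_interval_usec = 5ULL * 1000 * 1000;

extern bool find_peer_by_eid(mctp_eid_t eid);		/* already enumerated? */
extern int send_get_endpoint_id(mctp_eid_t eid);	/* async, no wait */

static int bridge_poll_cb(sd_event_source *s, uint64_t now, void *userdata)
{
	struct bridge *br = userdata;

	/* Poll only EIDs in the allocated range that have no peer yet; a
	 * Get Endpoint ID response would then trigger normal enumeration. */
	for (mctp_eid_t eid = br->pool_start;
	     eid < br->pool_start + br->pool_size; eid++) {
		if (find_peer_by_eid(eid))
			continue;
		send_get_endpoint_id(eid);
	}

	/* Rearm the one-shot timer for the next poll. */
	sd_event_source_set_time(s, now + bridge_poll_interval_usec);
	sd_event_source_set_enabled(s, SD_EVENT_ONESHOT);
	return 0;
}

static int bridge_start_polling(sd_event *ev, struct bridge *br)
{
	return sd_event_add_time(ev, &br->poll_timer, CLOCK_MONOTONIC,
				 0 /* fire immediately */, 0,
				 bridge_poll_cb, br);
}
```

Stopping the polling for an EID once it has been enumerated then falls out naturally from the find_peer_by_eid() check.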
.. and speaking of […]
So we have a case where we will have to call allocate endpoint ID on the bridge device when not all of its downstream devices are available. In such a case, how do you think we can determine when those downstream EIDs become available, unless we poll?
I am suggesting we poll. Just that we then stop polling once we enumerate the endpoint.
Ah, ack, then.
It will need some rework, as currently it assumes the peer is a neighbour and uses physical addressing for […].
Thanks for that, Andrew. There might be some commonality between the peers undergoing (non-local) recovery and those EIDs that are behind a bridge but not yet enumerated. If a […]. The same should occur for a […].
Yep, that sounds sensible.
(force-pushed 686395a to f6d0b8a)
Thank you all for taking the time to look into the PR. I've addressed the review comments on the previous commit and added a new commit for the MCTP bridge design doc; I still need to push the polling mechanism.
Thanks for the updates! A couple of comments: […]
(force-pushed bf8f331 to fa59ed7)
Hello Jeremy, thank you for looking over the commits. Based on your comment #1, I have removed the new .md file which captured the MCTP bridge support details in the PR, and updated the existing mctpd.md file with new information about the AssignBridgeStatic dbus API. Regarding the user-consumable document, I'm not sure what this would be; if you can let me know what it should cover, I can create one and update the PR. I recently got permission for the PMCI WG, and have glanced over what was stated in issue #1540. Basically, the idea is to split the bus owner EID pool and segregate a chunk of EIDs for bridges' downstream pools at the higher end of the bus owner pool, while keeping the lower end for non-bridge devices. This would be helpful for dynamic EID assignment of downstream pool devices in case multiple bridges are present under the same network. My current implementation finds a contiguous EID chunk of min(requested pool size, bridge's pool size capability) from the available bus owner pool, but we begin looking from the requested pool_start (static) or from the EID after the bridge's EID (dynamic), and we keep looking until we find a right-sized chunk and mark that EID as pool_start. I based this on the same line of the spec for which you raised the issue.
I can create a new PR for endpoint polling, if that's what you mean, and skip it for this PR. Also, for adding the route for the downstream endpoints to the bridge, we would need your implementation to be merged for both the Linux kernel and mctpd. Internally I've tested my polling logic with your pulled changes, but for this PR I haven't picked them up, so discovery of downstream EIDs via LearnEndpoint would probably not be possible with only this PR.
I've updated the patch set now for easier review; hope it helps. Let me know if I can do anything else to further ease the review. Thanks for your .clang-format; once that is pushed I will reapply it onto my changes.
(force-pushed fa59ed7 to 2766a52)
BTW, the kernel changes required have hit net-next, and so I have merged the gateway and range support into the netlink and mctp components.
Ack.
We could certainly start with dynamic EIDs only and add the deterministic EID support as a follow-up when we reach a design that can allow both facilities to co-exist. What do you think is needed to allow both?
Agree, but we are looking to integrate a solution that works end-to-end on OpenBMC EM-driven configs. Currently, doing dynamic seems like something that can be pushed through more easily.
Mainly a facility to ensure that the static and dynamic allocations can coexist. My current plan for this is a way to configure an explicit range of EIDs that mctpd can dynamically allocate, and then the rest are reserved for static allocation.
OK. On the mctpd side, static is certainly easier, as we don't need to worry about calculating any allocations :) But for dynamic, I would think something like: […]
We would likely want a new dbus interface on endpoints which report as bridges, with properties indicating the pool range. Also, this assumes that the AssignEndpoint caller does not need to handle bridge pools specially; if that's not the case, we'll likely need another approach. Then, as a follow-up: […]
Based on what you proposed, and also considering that we don't have the flexibility of choosing a non-contiguous pool start yet, can we go with this for the initial draft: […]
Following the above, let me formalize how this would look now.
graph TD
A[AssignEndpoint Called] --> B[Find Initial Available Range for <br/>BridgeEID + <br/>Assumed Pool Size]
B --> C[Mark EID Range as Reserved]
C --> D[Send SET_ENDPOINT_ID<br/>Command]
D --> E{Pool Support<br/>Response?}
E -->|No Pool Support| F[Release Assumed<br/>Pool Range]
F --> G[Keep Only<br/>Bridge EID]
E -->|Has Pool Support| H[Check Actual<br/>Pool Size Needed]
H --> I[Release Extra<br/>Reserved EIDs]
I --> J[Send ALLOCATE_ENDPOINT_ID<br/>pool_start = bridge_eid + 1<br/>pool_size = <br/>response from <br/>SET_ENDPOINT_ID]
J --> K[Setup Bridge<br/>Polling Logic]
%% Add some styling
style A fill:#f9f,stroke:#333,stroke-width:2px
style E fill:#bbf,stroke:#333,stroke-width:2px
style K fill:#bfb,stroke:#333,stroke-width:2px
Hi Faizan, Mostly looks good, just a couple of small corrections / notes:
This is mixing up […]
Yep. And as part of this we'll likely need a config file entry for that max pool size.
You seem to be separating the "reserve a range" and "setting pool_start and pool_size" operations - they're the same thing. When we "allocate" the range internally at the beginning of this process, we would just set pool_start and pool_size.
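Pulling the corrections above together, a hedged sketch of the dynamic bridge flow as discussed; the helper names, the error handling and the struct layout are hypothetical, and only the overall sequence comes from the conversation:

```c
/* Hedged sketch of the corrected flow: "reserving" the range and setting
 * pool_start/pool_size are the same step. Helper names (set_endpoint_id(),
 * allocate_endpoint_ids(), release_eid_range()) are hypothetical. */
#include <stdint.h>

typedef uint8_t mctp_eid_t;

struct peer {
	mctp_eid_t eid;
	mctp_eid_t pool_start;
	uint8_t pool_size;
};

extern uint8_t max_pool_size;	/* assumed bus-owner config entry */

/* Send Set Endpoint ID; returns the pool size the endpoint asks for via
 * *wanted_pool_size (0 if it reports no pool support). */
extern int set_endpoint_id(struct peer *peer, uint8_t *wanted_pool_size);
/* Send Allocate Endpoint IDs with the given start/size. */
extern int allocate_endpoint_ids(struct peer *peer, mctp_eid_t start, uint8_t size);
/* Return EIDs [start + keep, start + reserved) to the free set. */
extern void release_eid_range(mctp_eid_t start, uint8_t reserved, uint8_t keep);

static int assign_bridge(struct peer *peer)
{
	uint8_t wanted = 0;
	int rc;

	/* Reserve up-front: the bridge EID plus an assumed max-sized pool,
	 * contiguous with it. Setting pool_start/pool_size *is* the reservation. */
	peer->pool_start = peer->eid + 1;
	peer->pool_size = max_pool_size;

	rc = set_endpoint_id(peer, &wanted);
	if (rc)
		return rc;

	if (!wanted) {
		/* No pool support: keep only the bridge EID itself. */
		release_eid_range(peer->pool_start, peer->pool_size, 0);
		peer->pool_size = 0;
		return 0;
	}

	/* Constrain to what we reserved, then release any surplus. */
	if (wanted < peer->pool_size) {
		release_eid_range(peer->pool_start, peer->pool_size, wanted);
		peer->pool_size = wanted;
	}

	return allocate_endpoint_ids(peer, peer->pool_start, peer->pool_size);
}
```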
Add MCTP control message structures for the ALLOCATE_ENDPOINT_ID command to support bridge endpoint EID pool allocation.
- Add mctp_ctrl_cmd_alloc_eid request/response structure
Signed-off-by: Faizan Ali <faizana@nvidia.com>
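As a rough illustration of what such structures might contain: the field names below are my own, and the layout is paraphrased from memory of the Allocate Endpoint IDs command in DSP0236, so the spec and the actual patch are authoritative:

```c
/* Hedged sketch of the Allocate Endpoint IDs request/response structures;
 * field names are illustrative, and mctp_ctrl_msg_hdr stands in for the
 * usual control-message header. Consult DSP0236 for the exact layout. */
#include <stdint.h>

struct mctp_ctrl_msg_hdr {
	uint8_t ic_msg_type;
	uint8_t rq_dgram_inst;
	uint8_t command_code;	/* 0x08 = Allocate Endpoint IDs */
} __attribute__((packed));

struct mctp_ctrl_cmd_alloc_eid_req {
	struct mctp_ctrl_msg_hdr hdr;
	uint8_t alloc_eid_op;	/* bits [1:0]: allocate / force / get info */
	uint8_t pool_size;	/* number of EIDs offered for the pool */
	uint8_t start_eid;	/* first EID of the offered range */
} __attribute__((packed));

struct mctp_ctrl_cmd_alloc_eid_resp {
	struct mctp_ctrl_msg_hdr hdr;
	uint8_t completion_code;
	uint8_t alloc_status;	/* bits [1:0]: accepted / rejected */
	uint8_t pool_size;	/* size the bridge actually accepted */
	uint8_t start_eid;	/* first EID of the accepted range */
} __attribute__((packed));
```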
(force-pushed 1b9ed20 to 259cde1)
Hello Jeremy, thank you for the feedback. I've updated the implementation based on the above discussions, and took a couple of decisions here: […]
Please let me know your thoughts on the implementation; I'm open to any suggestions :)
In general, it looks like we're converging on a good approach here.
A few comments inline, and one overall: I'm not sure what the addition of the reserved_eid_set is for, when we already have the pool information in the peers list. Can we implement the allocation without duplicating this data?
We'll definitely need more tests though, to check cases of allocation failures, and cases where we need to constrain the pool requests from the peer.
It could be implemented; we'd just have to check EID presence in other bridge peers quite a lot of times (which felt less efficient at the moment). Checking via the peers would be the cleaner and less maintainable way, while the bitmap would be more efficient, owing to the frequent checks we need to do while looking for contiguous available EIDs per bridge. I'm okay with removing it; let me know.
Not sure I fully understand your response there:
- what do you mean by "fetch eids presence"? where is this fetching from?
- why less maintainable? it would be one fewer data structure to keep in sync.
I would suggest you add the EID allocator as a new function, that: […]
There's no real need for the bitmap to be persistent; we're only dealing with a fixed number of peers here, and only need to do this when performing the initial allocation (so once per discovered peer). This way, all of the allocation logic is in a single function, and we cannot have a state where the map is out of sync with the actual peers list.
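A hedged sketch of what that single allocator function could look like, deriving availability from the peers list alone; the peers[] representation is simplified and hypothetical, and the function names are illustrative:

```c
/* Hedged sketch of the single-function allocator suggested above: derive EID
 * availability from the peers list, with no persistent bitmap to keep in
 * sync. The peers[] representation here is simplified and hypothetical. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef uint8_t mctp_eid_t;

struct peer {
	bool in_use;
	mctp_eid_t eid;
	mctp_eid_t pool_start;	/* 0 if this peer has no downstream pool */
	uint8_t pool_size;
};

extern struct peer peers[];
extern size_t n_peers;

/* Is this EID taken, either as a peer's own EID or within a bridge's pool? */
static bool eid_allocated(mctp_eid_t eid)
{
	for (size_t i = 0; i < n_peers; i++) {
		const struct peer *p = &peers[i];

		if (!p->in_use)
			continue;
		if (p->eid == eid)
			return true;
		if (p->pool_size && eid >= p->pool_start &&
		    eid - p->pool_start < p->pool_size)
			return true;
	}
	return false;
}

/* Find a contiguous run of 'count' free EIDs in [range_min, range_max],
 * returning the first EID, or 0 on failure. Called once per discovered
 * bridge, so a linear scan over the peers is fine. */
static mctp_eid_t find_contiguous_eids(mctp_eid_t range_min,
				       mctp_eid_t range_max, uint8_t count)
{
	for (mctp_eid_t start = range_min;
	     start + count - 1 <= range_max; start++) {
		bool ok = true;

		for (uint8_t i = 0; i < count && ok; i++)
			ok = !eid_allocated(start + i);
		if (ok)
			return start;
	}
	return 0;
}
```

The [range_min, range_max] arguments would let the same helper serve both the static case (start at a requested pool_start) and the dynamic case (start just after the bridge's own EID).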
By fetching I mean iterating over the peers array to find whether the EID is used or not; we also now need to check if the EID falls within the range of any peer's allocated pool (peer->pool_start to peer->pool_start + peer->pool_size). But I'll see if we can avoid the bitmap; if not, I'll implement it via the single-function way.
This is meant for the non-bitmap approach.
Yes, I agree.
A few style comments
* For an MCTP bridge to have a bridge EID contiguous with its downstream endpoints, we introduce a max_pool_size bus-owner configuration, to be the assumed pool size before we get the actual size via the SET_ENDPOINT_ID command response.
Signed-off-by: Faizan Ali <faizana@nvidia.com>
(force-pushed 259cde1 to 2495361)
Removed the bitmap logic, utilizing the contiguous nature of pool allocation for simpler checks.
(force-pushed 2495361 to b0951d4)
Add support for MCTP bridge endpoints that can allocate pools of EIDs for downstream endpoints. We assume each AssignEndpoint d-bus call will be for a bridge; with this we allocate/reserve a max_pool_size EID range contiguous with the bridge's own EID. Later this pool size is updated based on the SET_ENDPOINT_ID command response.
- Add pool_size and pool_start fields to struct peer
- Update AssignEndpoint d-bus call to support MCTP bridge dynamic EID assignment
- Fetch contiguous EIDs for the bridge and its downstream endpoints
- is_eid_in_bridge_pool(): for static EID assignment via the AssignEndpointStatic d-bus call, check whether an EID is part of any other bridge's pool range
Signed-off-by: Faizan Ali <faizana@nvidia.com>
(force-pushed b0951d4 to 03e3618)
Add implementation for the MCTP ALLOCATE_ENDPOINT_ID control command to enable bridges to allocate EID pools for downstream endpoints.
- Add endpoint_send_allocate_endpoint_id() for sending allocation requests
- Update gateway route for downstream EIDs
- Integrate allocation of dynamic EIDs for downstream endpoints of a bridge with the AssignEndpoint d-bus method
Signed-off-by: Faizan Ali <faizana@nvidia.com>
* Updated mctpd.md with the new MCTP bridge support for dynamic EID assignment from the AssignEndpoint d-bus call.
Signed-off-by: Faizan Ali <faizana@nvidia.com>
Add a new test validating the AssignEndpoint D-Bus method, verifying that a bridge endpoint's EID allocation is contiguous with its downstream EIDs.
- Add test_assign_dynamic_bridge_eid() for bridge simulation testing
- Simulate a bridge with some downstream endpoints in the test framework
- Test that the EID allocation of the bridge and its downstream endpoints is contiguous
Signed-off-by: Faizan Ali <faizana@nvidia.com>
* The au.com.codeconstruct.MCTP.Bridge1 interface will capture details of a bridge-type endpoint, such as pool start, pool size, and pool end.
Signed-off-by: Faizan Ali <faizana@nvidia.com>
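For illustration, exposing such pool details via sd-bus might look roughly like this; only the interface name comes from the commit above, while the property names (PoolStart/PoolSize/PoolEnd), the getter, and the peer layout are assumptions:

```c
/* Hedged sketch of exposing pool details on a bridge's dbus object with
 * sd-bus. Property names and the peer layout are assumptions; only the
 * interface name comes from the commit above. */
#include <stdint.h>
#include <string.h>
#include <systemd/sd-bus.h>

struct peer {
	uint8_t eid;
	uint8_t pool_start;
	uint8_t pool_size;
};

static int bridge_prop_get(sd_bus *bus, const char *path,
			   const char *interface, const char *property,
			   sd_bus_message *reply, void *userdata,
			   sd_bus_error *error)
{
	struct peer *peer = userdata;
	uint8_t val;

	if (!strcmp(property, "PoolStart"))
		val = peer->pool_start;
	else if (!strcmp(property, "PoolSize"))
		val = peer->pool_size;
	else	/* "PoolEnd" */
		val = peer->pool_start + peer->pool_size - 1;

	return sd_bus_message_append(reply, "y", val);
}

static const sd_bus_vtable bridge_vtable[] = {
	SD_BUS_VTABLE_START(0),
	SD_BUS_PROPERTY("PoolStart", "y", bridge_prop_get, 0, 0),
	SD_BUS_PROPERTY("PoolSize", "y", bridge_prop_get, 0, 0),
	SD_BUS_PROPERTY("PoolEnd", "y", bridge_prop_get, 0, 0),
	SD_BUS_VTABLE_END
};

/* Registered on the bridge peer's object path, e.g.:
 *   sd_bus_add_object_vtable(bus, NULL, peer_path,
 *                            "au.com.codeconstruct.MCTP.Bridge1",
 *                            bridge_vtable, peer);
 */
```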
(force-pushed 03e3618 to cd91cc8)
Looking pretty good with the changes. Let me know when you're at a stable point and want me to re-review.
Hello Jeremy/Matt, […]
Neat! I shall do. I've got a couple of other things pending at the moment, so will get on to a proper review shortly, but in the meantime there are still a couple of things from the earlier reviews: […]
Ack, let me get onto it.
Thanks, I can incorporate the test case.
This PR aims to introduce ALLOCATE_ENDPOINT_ID message support, along with MCTP bridge endpoint support in the existing peer structure.