Generic skeleton implementation - Only with Event and Black Box Test with IPC_bridge App #140
Conversation
…l partial restart handling implementation.
Force-pushed from 7e7c369 to 809cecd.
@mina-hamdi it looks like you have conflicts that require manual resolution. Could you please fix them?
Signed-off-by: Abhishek GOYAL <abhishek.goyal@valeo.com>
Hi @LittleHuba: conflict resolved.
Hi @LittleHuba, @hoe-jo, @castler: a follow-up pull request is planned for the beginning of February to add Field support, tests, and Event documentation, incorporating feedback from this review.
```cpp
size_t sample_size,
size_t sample_alignment) noexcept
{
    auto* data_storage = storage_resource_->construct<EventDataStorage<std::uint8_t>>(
```
Does this EventDataStorage (which is an alias for DynamicArray) really make sense here? I mean, what is the added value of the DynamicArray if the slots/elements within it do not even represent the "real" events, and you have to do pointer arithmetic anyhow?
Apart from this, when creating a DynamicArray<uint8_t> you get potentially WRONG alignment! It will allocate starting at a one-byte-aligned address, which might be wrong when your sample_alignment is larger!
I.e. why not simply:

```cpp
void* data_storage = storage_resource_->allocate(sample_size * element_properties.number_of_slots, sample_alignment);
```
Totally agree with you.
Hi @crimson11: We started implementing this, but while analysing it I noticed one issue; please correct me if I'm wrong.
On the proxy side the event storage is retrieved via PolymorphicOffsetPtrAllocator<uint8_t>, which means the shared memory must contain an actual EventDataStorage (DynamicArray) object.
If the Skeleton were to store only a raw buffer returned by allocate(...), the proxy would interpret raw bytes as a DynamicArray object, which would be undefined behaviour. So we still need to construct an EventDataStorage object in shared memory.
Regarding the alignment concern: the misalignment risk does not come from DynamicArray itself, but from the allocator used in the generic path, PolymorphicOffsetPtrAllocator<uint8_t>. The allocator currently requests alignment based on the element type:

```cpp
proxy_->allocate(bytes, alignof(T));
```

For T = uint8_t this becomes alignof(uint8_t) == 1, i.e. proxy_->allocate(bytes, 1). So the backing buffer of EventDataStorage<uint8_t> may start at a 1-byte-aligned address, which is insufficient when the memory is later used to store samples with a runtime alignment (sample_alignment, e.g. 8 or 16). This seems to be the actual alignment bug.
Given this, it looks like the correct fix is to adjust the allocator to request the required runtime alignment for the generic case, rather than switching to raw allocate().
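A minimal sketch of that fix, assuming the allocate(bytes, alignment) resource shape quoted above; MemoryResource and RuntimeAlignedAllocator are hypothetical names used only for illustration, not the actual score/mw types:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical memory resource mirroring the allocate(bytes, alignment)
// call shape from this thread.
struct MemoryResource
{
    virtual void* allocate(std::size_t bytes, std::size_t alignment) = 0;
    virtual void deallocate(void* ptr, std::size_t bytes) = 0;
    virtual ~MemoryResource() = default;
};

// Byte allocator that carries the required sample alignment at runtime
// instead of deriving it from alignof(T).
template <typename T>
class RuntimeAlignedAllocator
{
  public:
    using value_type = T;

    RuntimeAlignedAllocator(MemoryResource* resource, std::size_t alignment) noexcept
        : resource_{resource}, alignment_{alignment}
    {
    }

    T* allocate(std::size_t n)
    {
        // For T = std::uint8_t, alignof(T) would be 1 and under-align the
        // buffer; requesting the runtime alignment avoids that.
        return static_cast<T*>(resource_->allocate(n * sizeof(T), alignment_));
    }

    void deallocate(T* ptr, std::size_t n) noexcept
    {
        resource_->deallocate(ptr, n * sizeof(T));
    }

  private:
    MemoryResource* resource_;
    std::size_t alignment_;
};
```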
I guess you are basically right with your analysis! I.e. the GenericSkeleton needs to create an EventDataStorage / DynamicArray<T> in shared memory!
But obviously DynamicArray, like every other container in the world, can just use standardized allocator APIs! And an allocator by definition is templated by type T.
What this means: with the current architecture/interfaces, it is "hard" to create a "correct" DynamicArray without knowing/having T!
What solutions do I see:

- Long term: the lola binding layer doesn't really need type information! It could be entirely implemented without event/field type template args, because the binding layer (as opposed to the binding-independent layer) is just moving around bytes! The upcoming method implementation for mw::com is completely "type-erased" at the binding level, so sooner or later the existing implementation of events/fields on the lola binding layer will be refactored to be "type-erased" as well. This would automatically solve your problem, because then EventDataStorage<T> will not exist anymore! It will be a type-erased container instead!
- Short term: you should use a data type which is "worst-case aligned", e.g.:

```cpp
auto* data_storage = storage_resource_->construct<EventDataStorage<std::max_align_t>>(...);
```

Of course you have to adapt the number of elements accordingly (see the sketch below)... this should do the trick for now?
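A small sketch of adapting the element count, assuming sample_size, sample_alignment and number_of_slots as used earlier in this thread; AlignUp and MaxAlignElementCount are hypothetical helpers:

```cpp
#include <cstddef>

// Round value up to the next multiple of alignment.
constexpr std::size_t AlignUp(std::size_t value, std::size_t alignment)
{
    return ((value + alignment - 1U) / alignment) * alignment;
}

// Number of std::max_align_t elements needed so that
// EventDataStorage<std::max_align_t> reserves enough bytes for all slots.
constexpr std::size_t MaxAlignElementCount(std::size_t sample_size,
                                           std::size_t sample_alignment,
                                           std::size_t number_of_slots)
{
    // Pad each sample so that every slot start keeps the sample alignment.
    const std::size_t aligned_slot_size = AlignUp(sample_size, sample_alignment);
    const std::size_t total_bytes = aligned_slot_size * number_of_slots;
    return AlignUp(total_bytes, sizeof(std::max_align_t)) / sizeof(std::max_align_t);
}
```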
Thanks for detailing this.
Since the long-term direction of the binding layer is to become fully type-erased, I agree that in the meantime we should go with the short-term solution using a worst-case aligned storage type.
With this approach, we will also adjust the proxy side accordingly.
Currently, the proxy retrieves the storage via:

```cpp
event_entry->second.template get<EventDataStorage<EventSampleType>>()
```

This effectively assumes that the object stored in shared memory is an EventDataStorage<EventSampleType>. In the generic skeleton case, this assumption is no longer guaranteed to hold. If the producer constructs EventDataStorage<std::max_align_t> instead, retrieving it as EventDataStorage<EventSampleType> would be conceptually equivalent to a reinterpret_cast to a different container type, which may result in undefined behaviour.
Therefore, on the typed proxy side, instead of retrieving a typed EventDataStorage, we will (see the sketch below):

- Retrieve the raw storage via the meta-information
- Use the runtime size_of_ and align_of_ from data_type_info_
- Recompute the aligned slot size
- Access the slot memory via byte pointer arithmetic
- Cast the computed slot start address to SampleType*
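A minimal sketch of that slot access, where size_of/align_of stand for the runtime size_of_/align_of_ from data_type_info_ named above; SlotAddress itself is a hypothetical helper:

```cpp
#include <cstddef>
#include <cstdint>

// Compute the start address of one slot from the raw storage base; the
// caller casts the result to SampleType*.
void* SlotAddress(void* base_ptr, std::uint64_t index,
                  std::size_t size_of, std::size_t align_of)
{
    // Recompute the aligned slot size: pad each sample so the next slot
    // start keeps the required alignment.
    const std::size_t aligned_slot_size =
        ((size_of + align_of - 1U) / align_of) * align_of;
    return static_cast<std::uint8_t*>(base_ptr) + index * aligned_slot_size;
}
```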
What's your view?
Hm. I would not do this extensive change for the typed Proxy.
Sidenote: when working with shared memory we are always somewhat in the UB dust! Two different processes (possibly compiled with different compilers/compiler settings), with completely different lifetime views on objects, working on the same memory and "interpreting" the memory in a specific way... is afaik already not completely backed by the C++ standard. But, you know, compilers do not do totally crazy stuff, and because of this shared memory works ;)
So on the creator (GenericSkeleton/GenericSkeletonEvent) side, I would simply do:

```cpp
auto* data_storage = storage_resource_->construct<EventDataStorage<std::max_align_t>>(num_slots, allocator);
```

This gives you a correctly laid out DynamicArray at data_storage, which points to a memory region of bytes that is maximally aligned and has size sizeof(std::max_align_t) * num_slots.
When the user of GenericSkeletonEvent later "fills" this memory by mem-copying to the returned AllocateePtr, then we already make some formal assumptions, which we might/should write down anyhow: the whole idea/interface of GenericSkeletonEvent implicitly expects that the "samples" you are exchanging are trivially copyable!
If you take this into account, then the whole UB stuff is even more irrelevant, as we already have very strong constraints on the "data"/sample types.
Long story short: I don't see any issue with the casting done on the (typed) ProxyEvent side.
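One way to write that assumption down, as a hedged sketch; CheckGenericSampleConstraints is a hypothetical name, only the trivially-copyable constraint itself comes from the discussion above:

```cpp
#include <type_traits>

// Samples exchanged through GenericSkeletonEvent are mem-copied into shared
// memory, so they must be trivially copyable.
template <typename SampleType>
constexpr void CheckGenericSampleConstraints() noexcept
{
    static_assert(std::is_trivially_copyable<SampleType>::value,
                  "GenericSkeletonEvent samples must be trivially copyable");
}
```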
```cpp
// The at() method on EventDataStorage (which is a DynamicArray) correctly handles
// the pointer arithmetic based on its template type (std::uint8_t).
// We multiply by size_info_.size here because the underlying storage is a byte array.
void* data_ptr = &data_storage->at(static_cast<std::uint64_t>(slot.GetIndex()) * size_info_.size);
```
Why is this cast to std::uint64_t necessary? However, if you follow my comment for Skeleton::CreateEventDataFromOpenedSharedMemory(), then this code here will change...
Nice point, and for sure after the change in Skeleton::CreateEventDataFromOpenedSharedMemory() as per your feedback this code will change to something like this:

```cpp
void* base_ptr = data_storage_.get();
void* data_ptr = static_cast<std::uint8_t*>(base_ptr) +
                 (static_cast<std::uint64_t>(slot.GetIndex()) * size_info_.size);
```

But I still think we need the cast to uint64_t, because slot.GetIndex() returns SlotIndexType, which is uint16_t. In C++, when we do arithmetic with small integer types like uint16_t, the compiler first promotes them to a larger "normal" integer type (typically int or unsigned int) before doing the multiplication. Then, because size_info_.size is size_t, the final multiplication ends up being done after a sequence of implicit conversions. Which exact intermediate type is used can depend on the platform (32-bit vs 64-bit), and it's not obvious from reading the code.
By explicitly casting the index first, we force the multiplication to happen in 64-bit arithmetic from the start. This makes the intent clearer and avoids any risk of unexpected overflow due to intermediate type promotions.
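A small self-contained example of the promotion pitfall described above (the values and function name are hypothetical):

```cpp
#include <cstdint>

std::uint64_t ByteOffset(std::uint16_t slot_index, std::uint16_t sample_size)
{
    // return slot_index * sample_size;
    // Both operands would first be promoted to int, so the product can
    // overflow 32-bit int even though both inputs fit in uint16_t.
    return static_cast<std::uint64_t>(slot_index) * sample_size;  // 64-bit math
}
```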
What's your view?
OK. But then I guess DynamicArray::size_type would be the right type to cast to?
bemerybmw left a comment:
I haven't done a full review, just left a few comments after glancing over the PR.
Resolved review comments on score/mw/com/impl/plumbing/skeleton_service_element_binding_factory_impl.h.
```cpp
TEST_F(GenericSkeletonTest, CreateWithInstanceSpecifier)
{
    auto instance_specifier = InstanceSpecifier::Create("a/b/c").value();
```
Unit tests should follow the Given/When/Then structure (e.g. https://martinfowler.com/bliki/GivenWhenThen.html).
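A hedged sketch of the test above restructured along those lines; the creation call and the expectation are placeholders for whatever the test actually verifies:

```cpp
TEST_F(GenericSkeletonTest, CreateWithInstanceSpecifier)
{
    // Given a valid instance specifier
    auto instance_specifier = InstanceSpecifier::Create("a/b/c").value();

    // When creating the unit under test from it (placeholder call)
    auto unit = CreateGenericSkeleton(instance_specifier);

    // Then creation succeeds (placeholder expectation)
    EXPECT_TRUE(unit.has_value());
}
```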
Hi @crimson11,
As discussed earlier, raising a pull request containing the Event with black-box tests through the example application.
The rest will be done in a second pull request (first week of Feb), which includes the complete Event functionality documentation along with tests for both Event and Field.