Codice continuation packets #2569
Conversation
> All packets have (SHCOARSE, EVENT_DATA, CHKSUM) fields. To combine
> the segmented packets, we only concatenate along the EVENT_DATA field
> into the first packet of the group.
This is for my future self and maybe others. I don't want to lose context of these changes if someone looks at this later, since those details are in our email. Can you break lines where appropriate and make it clearer as needed?
> Direct event data are segmented into packets. CoDICE assembles these
> segmented packets slightly differently than the standard. Onboard,
> CoDICE direct event data are segmented as follows:
>
> Standalone / unsegmented packets:
>   Data is packed in the order defined in the telemetry definition in Excel,
>   e.g. (SHCOARSE, many metadata fields, EVENT_DATA, CHECKSUM).
>
> First segment:
>   Data is packed in the order defined in the telemetry definition in Excel,
>   e.g. (SHCOARSE, many metadata fields, EVENT_DATA, CHECKSUM).
>   As seen here, the first segment contains the metadata defined in the
>   telemetry definition. That metadata is unpacked in later functions/steps
>   because it always has fixed bit lengths and should always be present.
>
> Middle segments:
>   Data is packed in the following order: (SHCOARSE, EVENT_DATA, CHECKSUM).
>   There can be multiple middle packets in a given packet group.
>
> Last segment:
>   Data is packed in the following order: (SHCOARSE, EVENT_DATA, CHECKSUM).
>   This last segment of event data can contain padded values.
>
> Because of this behavior, we defined the XTCE to unpack the data using the
> field order (SHCOARSE, EVENT_DATA, CHKSUM). This simplifies XTCE-based
> unpacking and allows the remaining packet-specific details to be handled in
> code. In this function, the segmented event data are combined by
> concatenating the EVENT_DATA field across the packet group.
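The concatenation rule the docstring describes can be sketched roughly as follows. This is a minimal illustration, not the real implementation: the dict-based packet representation and the `combine_event_data` name are stand-ins for the project's actual xarray-based code.

```python
def combine_event_data(packet_group):
    """Concatenate EVENT_DATA across a segmented packet group.

    ``packet_group`` is a list of dicts, each with SHCOARSE, EVENT_DATA
    (bytes), and CHKSUM keys, mirroring the (SHCOARSE, EVENT_DATA,
    CHKSUM) XTCE field order described above.
    """
    # Metadata lives only in the first segment, so start from it and
    # append the EVENT_DATA payload of every later segment onto it.
    combined = dict(packet_group[0])
    combined["EVENT_DATA"] = b"".join(p["EVENT_DATA"] for p in packet_group)
    return combined

group = [
    {"SHCOARSE": 100, "EVENT_DATA": b"\x01\x02", "CHKSUM": 0},
    {"SHCOARSE": 101, "EVENT_DATA": b"\x03\x04", "CHKSUM": 0},
]
print(combine_event_data(group)["EVENT_DATA"])  # b'\x01\x02\x03\x04'
```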
I think this is too verbose. I made this a generic function in the ultra continuation packet PR so that multiple instruments can use it. Over there, it isn't the same packet definition as here, so this wouldn't make sense to add to the generic function.
Can we add it somewhere in codice_l1a_de.py? I don't mind where we put it in that file, but it would be nice to capture this for future reference.
I added some of this to the codice_l1a_de file now, explaining what fields we have in XTCE and why we are doing it this way now.
Current code looks good to me. I think there are a few changes coming from yesterday's discussion. Let me know when you want me to review this again.
We have had some issues with missing source sequence counters, meaning not all necessary packets were present. This just adds some simple logs that can warn about this case and give some awareness about what is going on.
The acquisition start times dictate what priorities are in a group. i.e. all priorities have the same start time. Since we know how many priorities there are per Hi/Lo configuration, we know that we should have that many exact items with a given acquisition start time. This allows us to filter out items that weren't complete.
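The filtering described above (keep only groups where every priority is present) can be sketched as follows. This is a hedged sketch under the stated assumption that each complete group has exactly one packet per priority; the function name and flat list-of-times representation are hypothetical.

```python
from collections import Counter

def filter_complete_groups(start_times, n_priorities):
    """Return indices of packets whose acquisition start time appears
    exactly ``n_priorities`` times (i.e. one packet per priority)."""
    counts = Counter(start_times)
    return [i for i, t in enumerate(start_times) if counts[t] == n_priorities]

# Two complete groups of 3 priorities (times 10 and 30) and one
# incomplete group (time 20, only 2 of 3 priorities present)
times = [10, 10, 10, 20, 20, 30, 30, 30]
print(filter_complete_groups(times, 3))  # [0, 1, 2, 5, 6, 7]
```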
Force-pushed from d065c55 to d9ca856
OK, I think the last few commits contain the logic that we need for handling missing priority groups and dropping those. I still need to look into whether that is OK, or if we want to fill in some priority values and use FILL for the rest, or drop the entire sequence when that happens. But this should be ready for review now; I did a cursory check and it looks like it is producing valid events for the December test cases now.
lacoak21 left a comment:
OK, I've only reviewed part of this! I'm going to do the rest first thing tomorrow.
> ----------
> packets : xarray.Dataset
>     Dataset containing the packets to group.
> packets : xr.Dataset
Won't this cause documentation build errors?
xarray.Dataset -> xr.Dataset
🤔 This passed the documentation build... I think you're right though, so I went through and updated this here and elsewhere throughout this file. It looks like there was previously a mix of usage.
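For reference, the fully qualified form in a numpydoc-style parameter section looks like this. The function name and body here are hypothetical; the point is only the `xarray.Dataset` type string, which Sphinx/intersphinx can resolve to a cross-reference, whereas the `xr.` alias may not resolve.

```python
def group_packets(packets):
    """Group segmented packets.

    Parameters
    ----------
    packets : xarray.Dataset
        Dataset containing the packets to group. The fully qualified
        ``xarray.Dataset`` path is used so documentation tooling can
        link the type, rather than the ``xr.Dataset`` import alias.
    """
    ...

print("xarray.Dataset" in group_packets.__doc__)  # True
```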
tech3371 left a comment:
This looks great to me. Thank you for taking this on! Sorry for the delayed review. I didn't see the comment in my filtered email.
> """
> # Check for sequential source sequence counters
> # CCSDS source sequence counter is a 14-bit field (0-16383)
> counter_max = 16384
should we make this into a constant above? Otherwise, it looks good
I don't think this is a critical constant and isn't used in other places, so I just kept it outside for now. We can re-evaluate later and move it out if needed/desired.
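A wraparound-aware gap check along these lines is what the snippet above is building toward. This is a sketch, not the project's code: the function name is made up, and only the 16384 counter modulus comes from the discussion.

```python
import numpy as np

COUNTER_MAX = 16384  # CCSDS source sequence counter is a 14-bit field

def missing_counters(counters):
    """Return a boolean array marking gaps between consecutive source
    sequence counters, accounting for wraparound at COUNTER_MAX."""
    counters = np.asarray(counters)
    # A consecutive pair differs by exactly 1 modulo the counter size;
    # anything else means at least one packet is missing in between.
    diffs = np.diff(counters) % COUNTER_MAX
    return diffs != 1

# Wraparound 16383 -> 0 is fine; the jump 1 -> 3 indicates a gap
print(missing_counters([16382, 16383, 0, 1, 3]))  # [False False False  True]
```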
> pkt_bytes = pkt_bytes.reshape(n_events, 8)[:, ::-1]
> all_event_bytes[offset : offset + n_events] = pkt_bytes
>
> # Record destination indices for scattering
what do you mean by scattering here and on line 331?
Great question! My AI friend wrote that comment. But, what it is getting at is that we are "scattering" the data from one layout into another array-based layout in a sense. So an item in array 1 at location (1, 2, 3) will get "scattered" into array2 at location (2, 5, 7) and there is a relationship we are storing for how that mapping is working.
I don't think that was clear, so I tried to reword both locations. It is basically doing array-based index lookups and assignments.
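In NumPy terms, the "scattering" being described is plain fancy-index assignment: values from a packed source array are written into precomputed destination positions of another array. A minimal illustration (the values and indices here are made up):

```python
import numpy as np

# Packed source values and the precomputed destination index for each
src = np.array([10, 20, 30])
dest_idx = np.array([2, 5, 7])  # where each source item belongs

# "Scatter" the source into the output via array-based index assignment
out = np.zeros(9, dtype=int)
out[dest_idx] = src
print(out)  # [ 0  0 10  0  0 20  0 30  0]
```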
Interesting! Got it. I've only ever "scattered" data in reference to uncertainty, not something that we can "fix".
Force-pushed from d9ca856 to deada8e
Merged 0de5674 into IMAP-Science-Operations-Center:dev
Change Summary
Overview
This changes the CoDICE packet definition for direct events. Previously, we were reading in many fields for each packet, but this is incorrect: those fields are only in the first packet. This meant we were extracting fields with XTCE, recombining them back into binary, and then extracting again. The issue came about when the second packet's binary payload was not long enough to unpack all the individual fields.
This is a significant refactor as well. Rather than working with binary strings, we now work with bytes directly and do bitshifts on numpy arrays. Thanks to good unit tests, this could be refactored with the help of AI.
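To illustrate the bytes-plus-bitshifts approach (as opposed to slicing binary strings), here is a toy example that unpacks a 24-bit big-endian field from a raw byte stream with vectorized shifts. The 3-byte field layout is hypothetical, not CoDICE's actual event format.

```python
import numpy as np

# Two hypothetical 24-bit big-endian values packed back to back
raw = np.frombuffer(b"\x00\x01\x02\x00\x01\x03", dtype=np.uint8).reshape(-1, 3)

# Widen to uint32, then combine the three bytes of each row with shifts
values = (
    raw[:, 0].astype(np.uint32) << 16
    | raw[:, 1].astype(np.uint32) << 8
    | raw[:, 2].astype(np.uint32)
)
print(values)  # [258 259]  (0x000102 and 0x000103)
```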
closes #2568