Convert NDPluginPva to use PVXS #532
Conversation
The first commit breaks the build, because it uses the copied (pvAccess) versions of the files, but links everything against PVXS. Maybe it would be best to do "copy files" -> "convert into PVXS" -> "hook up build system"?
Updating headers before implementation also breaks the build... I think it's easier to understand to have a single commit for implementation and header that can be verified at once, instead of splitting it.
What do you think?
Thanks for the comments, I'll try to go through them soon. I think this is a reasonable change, and I'm happy to squash things down more if needed.
ADApp/ntndArrayConverterSrc/Makefile
LIBRARY_IOC += ntndArrayConverter
INC += ntndArrayConverterAPI.h
ifeq ($(WITH_PVXS), YES)
  INC += ntndArrayConverter.h
  LIB_SRCS += ntndArrayConverter_pvxs.cpp
  USR_CPPFLAGS += -DHAVE_PVXS
else
  INC += ntndArrayConverter.h
  LIB_SRCS += ntndArrayConverter.cpp
endif
Currently this builds only one implementation. Should we be building both versions, to allow users to switch between pvAccess and PVXS without rebuilding ADCore?
I feel like that's too much flexibility without clear differences between versions. IMO choosing between one or the other should be left to the EPICS "distributor", to simplify things.
But I am biased, because I didn't even think it was necessary to keep supporting the old PVA libraries, since PVXS can always be installed as a separate module.
Yes, I'd vote for PVXS only unless there is a real need for the old libraries. The network protocol is backwards compatible (AFAIK).
I would prefer compatibility with the old libraries to be retained, at least until PVXS is made part of base. We have local areaDetector applications that make use of PVAccess/PVData libraries as well as the pvaDriver and pvaPlugin modules packaged with areaDetector. So we'd need to port our local applications to PVXS in order to update areaDetector, although I suppose using both at the same time is possible. Ideally we would port our local applications, and it doesn't seem difficult to do this. It's just we have at least 60+ IOC applications spread across 30 beamlines, and it would be a major upgrade.
I think this may also be the first required use of C++11 in ADCore (if a PVA plugin is needed). Before moving my test build to a containerised environment, I was seeing linker errors trying to compile this against Diamond's released support modules (asyn and busy, IIRC), which were compiled using an older version of GCC. I don't think pvAccessCPP et al. require C++11.
I agree. Since the two PVA libraries have different features, it makes sense to support both. It also means that switching between them does not require rebuilding ADCore.
Yes. For example, the APS DAQ system still uses the old libraries, and if support for those were dropped, a lot of work would be required on our end to upgrade to the new ADCore version.
The old plugin relies on PvDatabase server, which has the ability to distribute different data to different clients (e.g., image1 to client1, image2 to client2, etc.) if clients request that.
Is this a PVXS limitation that's being worked on, or just something nice to have that people will eventually be able to live without?
I could imagine emulating it by having a fanout to multiple NDPluginPVXS instances, but it definitely doesn't have the same ergonomics...
The PVDatabase server has a pretty convenient plugin framework that allows one to develop server-side plugins, which in turn allow one to control what updates get returned to clients monitoring a given channel. In this particular case, the plugin I mentioned is this one: https://github.com/epics-base/pvDatabaseCPP/blob/master/documentation/dataDistributorPlugin.md
This feature has nothing to do with the PVA vs PVXS protocol itself. Just like the plugin framework in PVDatabase server, something like this could be added to PVXS-based servers. I do not know whether there are plans for doing so or not. I find it very useful for real-time processing of fast detector data, which could not be handled otherwise by a single client. Some examples of possible workflows are here: https://github.com/epics-base/pvaPy/blob/master/documentation/streamingFramework.md.
Okay, so it sounds like the best course for the time being is to provide NDPluginPVA, NDPluginPVXS, and the required NTNDArrayConverter and NTNDArrayConverterPVXS (or some similar name), and maybe completely separate header files for each, so that we can choose to provide one, both, or neither based on CONFIG flags.
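For illustration, a rough sketch of what such a Makefile could look like if both converters were built side by side. The PVXS library and header names below are hypothetical, not the ones in this PR, and whether the sources live in one directory or two would still need deciding:

# Sketch only: build the pvAccess- and PVXS-based converters as separate
# libraries so an IOC can link against either without rebuilding ADCore.
ifeq ($(WITH_PVA), YES)
  LIBRARY_IOC += ntndArrayConverter
  INC += ntndArrayConverter.h
  ntndArrayConverter_SRCS += ntndArrayConverter.cpp
endif

ifeq ($(WITH_PVXS), YES)
  LIBRARY_IOC += ntndArrayConverterPvxs
  INC += ntndArrayConverterPvxs.h
  ntndArrayConverterPvxs_SRCS += ntndArrayConverter_pvxs.cpp
  # -DHAVE_PVXS is directory-wide here; in practice the PVXS sources would
  # probably sit in their own directory so the flag only touches them.
  USR_CPPFLAGS += -DHAVE_PVXS
endif

With separate headers and libraries, NDPluginPva and a PVXS counterpart could each depend only on their own converter, and the CONFIG flags would control which of the two (or both) get built.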
The old plugin relies on PvDatabase server, which has the ability to distribute different data to different clients ...
The PVXS SharedPV object specifically does not do this, hence the "shared" part of its name. PVXS also provides a lower-level API on which SharedPV is built (as is p4p.gw). The spam.cpp test program is one example usage, providing a unique deluge to each subscriber to test per-subscription flow control.
NDPluginBasePixel in ADCore requires C++11. These are all of the modules in my areaDetector tree that have C++ flags:
remove unused freeNDArray struct
replace some snake_case with camelCase
Docs will need updating.
Closes #528
Building with WITH_PVXS = YES (this also requires WITH_PVA = YES for the time being) allows us to use PVXS, removing the dependency on normativeTypesCPP, pvDataCPP, pvDatabaseCPP, and pvAccessCPP.
WIP; currently testing against modified versions of ADSimDetector and pvaDriver. Note that packages depending on this that use PVXS will also need to be built with WITH_PVXS=YES in their config files (a sketch of what that might look like follows below). It may ultimately be better to move all the PVXS logic out of the separate *_pvxs.cpp files, but this was the simplest and most readable way to do it during development.
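For reference, a hypothetical sketch of the site configuration a dependent module might need; the exact file and variable names should be checked against ADCore's configure/ directory, and the paths are placeholders:

# In the module's configure/CONFIG_SITE (or a *.local override):
WITH_PVA  = YES          # still required for the time being
WITH_PVXS = YES

# In configure/RELEASE, point at a PVXS build:
PVXS = /path/to/pvxs     # placeholder path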
There are a few TODOs I will address, so making this a draft PR for now.