Convert NDPluginPva to use PVXS #532

Open: wants to merge 23 commits into master from the pvxs branch

Conversation

@jsouter jsouter commented Apr 28, 2025

Closes #528

Building with WITH_PVXS = YES (WITH_PVA = YES is also required for the time being) allows us to use pvxs, removing the dependencies on normativeTypesCPP, pvDataCPP, pvDatabaseCPP, and pvAccessCPP.

WIP; currently testing against modified versions of ADSimDetector and pvaDriver. Note that packages which depend on this and use PVXS will also need to be built with WITH_PVXS=YES in their config files. It may ultimately be better to move all the PVXS logic out of the separate *_pvxs.cpp files, but this was the simplest and most readable way to do it during development.

There are a few TODOs I still need to address, so I'm making this a draft PR for now.
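
For context, a rough sketch (not taken from this PR's diff) of the kind of pvxs calls the converted converter ends up making, assuming pvxs's nt::NTNDArray helper, the standard normative-type field names, and pvxs's "->" syntax for selecting a union member:

#include <cstdint>

#include <pvxs/data.h>
#include <pvxs/nt.h>

using namespace pvxs;

// Sketch only: build an NTNDArray-typed Value and fill a few fields, roughly
// what an ntndArrayConverter_pvxs implementation has to produce per frame.
Value makeExampleFrame() {
    // NTNDArray prototype from the pvxs normative-type helpers (assumption:
    // nt::NTNDArray is available in the pvxs version being built against)
    Value frame = nt::NTNDArray{}.create();

    // Plain scalar fields are direct assignments
    frame["uniqueId"] = 0;

    // Pixel data goes into the "value" union; the member is selected with
    // "->" and should match the NDArray data type (8-bit unsigned here)
    shared_array<uint8_t> pixels(1024u * 1024u);
    frame["value->ubyteValue"] = pixels.freeze();

    return frame;
}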

@jsouter jsouter changed the title Allow PVXS to be used in place of pvAccess libraries Convert NDPluginPva to use PVXS Apr 28, 2025
Member

@ericonr ericonr left a comment

The first commit breaks the build because it uses the copied (pvAccess) versions of the files but links everything against PVXS. Maybe it would be best to do "copy files" -> "convert into PVXS" -> "hook up build system"?

Updating headers before the implementation also breaks the build... I think it's easier to understand if the implementation and header land in a single commit that can be verified at once, instead of splitting them.

What do you think?

@jsouter
Author

jsouter commented Apr 28, 2025


Thanks for the comments, I'll try to go through them soon. I think this is a reasonable change; I'm also happy to squash things down more if needed.

@jsouter jsouter force-pushed the pvxs branch 3 times, most recently from f5a9c57 to 834fad7 Compare April 30, 2025 13:51
Comment on lines 7 to 20
LIBRARY_IOC += ntndArrayConverter
INC += ntndArrayConverterAPI.h
ifeq ($(WITH_PVXS), YES)
INC += ntndArrayConverter.h
LIB_SRCS += ntndArrayConverter_pvxs.cpp
USR_CPPFLAGS += -DHAVE_PVXS
else
INC += ntndArrayConverter.h
LIB_SRCS += ntndArrayConverter.cpp
endif
Author

Currently this builds only one implementation. Should we build both versions, to allow users to switch between pvAccess and PVXS without rebuilding ADCore?

Member

I feel like that's too much flexibility without clear differences between the versions. IMO, choosing one or the other should be left to the EPICS "distributor", to simplify things.

But I am biased, because I didn't even think it was necessary to keep supporting the old PVA libraries, since PVXS can always be installed as a separate module.

Member

Yes, I'd vote for PVXS only unless there is a real need for the old libraries. The network protocol is backwards compatible (AFAIK).

Contributor

I would prefer compatibility with the old libraries to be retained, at least until PVXS is made part of Base. We have local areaDetector applications that make use of the pvAccess/pvData libraries, as well as the pvaDriver and pvaPlugin modules packaged with areaDetector, so we would need to port our local applications to PVXS in order to update areaDetector, although I suppose using both at the same time is possible. Ideally we would port our local applications, and it doesn't seem difficult to do; it's just that we have 60+ IOC applications spread across 30 beamlines, so it would be a major upgrade.

Author

@jsouter jsouter May 6, 2025

I think this may also be the first required use of C++11 in ADCore (if a PVA plugin is needed). Before moving my test build to a containerised environment, I was seeing linker errors trying to compile this against Diamond's released support modules (asyn and busy, IIRC), which were compiled with an older version of GCC. I don't think pvAccessCPP et al. require C++11.

Contributor

I agree. Since the two PVA libraries have different features, it makes sense to support both. It also means ADCore does not need to be rebuilt when switching between them.

Yes. For example, the APS DAQ system still uses the old libraries, and if support for those is dropped, a lot of work would be required on our end to upgrade to the new ADCore version.

Member

The old plugin relies on the pvDatabase server, which has the ability to distribute different data to different clients (e.g., image1 to client1, image2 to client2, etc.) if clients request that.

Is this a PVXS limitation that's being worked on, or just something nice to have that people will eventually be able to live without?

I could imagine emulating it by having a fanout to multiple NDPluginPVXS instances, but it definitely doesn't have the same ergonomics...

Contributor

The pvDatabase server has a pretty convenient plugin framework that allows one to develop different server-side plugins, which in turn allow one to control which updates get returned to clients monitoring a given channel. In this particular case, the plugin I mentioned is this one: https://github.com/epics-base/pvDatabaseCPP/blob/master/documentation/dataDistributorPlugin.md

This feature has nothing to do with the PVA vs. PVXS protocol itself. Just like the plugin framework in the pvDatabase server, something like this could be added to PVXS-based servers; I do not know whether there are plans to do so. I find it very useful for real-time processing of fast detector data that could not otherwise be handled by a single client. Some examples of possible workflows are here: https://github.com/epics-base/pvaPy/blob/master/documentation/streamingFramework.md

Author

Okay, so it sounds like the best course for the time being is to provide NDPluginPVA, NDPluginPVXS, and the required NTNDArrayConverter and NTNDArrayConverterPVXS (or some similar name), and perhaps completely separate header files for each, so that we can choose to provide one, both, or neither based on CONFIG flags.

Contributor

The old plugin relies on PvDatabase server, which has the ability to distribute different data to different clients ...

The PVXS SharedPV object specifically does not do this; thus the "shared" part of its name. PVXS also provides a lower-level API on which SharedPV is built (as is p4p.gw). The spam.cpp test program is one example usage, providing a unique deluge to each subscriber to test per-subscription flow control.
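
For illustration (not code from this PR): a minimal SharedPV server along the lines of the example in the pvxs documentation, with a placeholder PV name and a scalar type standing in for NTNDArray. Every client monitoring the PV receives the same post()ed update; there is no per-client partitioning of the stream as with the data distributor:

#include <utility>

#include <pvxs/server.h>
#include <pvxs/sharedpv.h>
#include <pvxs/nt.h>

int main() {
    using namespace pvxs;

    // Prototype value: a scalar double here, purely as a stand-in
    Value initial = nt::NTScalar{TypeCode::Float64}.create();
    initial["value"] = 0.0;

    // A SharedPV holds a single current value shared by all subscribers
    server::SharedPV pv(server::SharedPV::buildReadonly());
    pv.open(initial);

    // Serve it under a placeholder name
    server::Server serv = server::Config::fromEnv()
                              .build()
                              .addPV("DEMO:PV", pv);

    // Each post() replaces the current value and is delivered identically
    // to every client monitoring the PV
    Value update = initial.cloneEmpty();
    update["value"] = 1.0;
    pv.post(std::move(update));

    serv.run();  // serve until interrupted (e.g. SIGINT)
    return 0;
}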

@MarkRivers
Member

I think this may also be the first required use of C++11 in ADCore (if a PVA plugin is needed),

NDPluginBadPixel in ADCore requires C++11.

These are all of the modules in my areaDetector tree that have c++ flags:

(base) [epics@corvette areaDetector]$ find . -name Makefile -exec grep -H 'c++' {} \;
./ADPICam/PICamApp/src/Makefile:USR_CXXFLAGS_Linux += -std=c++1y
./ADCore/ADApp/pluginSrc/Makefile:NDPluginBadPixel_CXXFLAGS_Linux += -std=c++11
./ADCore/ADApp/pluginSrc/Makefile:NDPluginBadPixel_CXXFLAGS_Darwin += -std=c++11
./ADCore/ADApp/pluginTests/Makefile:  USR_CXXFLAGS_Linux += -std=c++11
./ADUVC/uvcApp/src/Makefile:USR_CPPFLAGS += -std=c++11
./ADGenICam/GenICamApp/src/Makefile:USR_CXXFLAGS_Linux += -std=c++11
./ADLambda/LambdaApp/src/Makefile:USR_CXXFLAGS += -std=c++17
./ADAravis/aravisApp/src/Makefile:USR_CXXFLAGS_Linux += -std=c++11
./ADPluginBar/barApp/barSrc/Makefile:CODE_CXXFLAGS=-std=c++11
./ADTimePix3/tpx3App/src/Makefile:USR_CPPFLAGS += -std=c++11
./ADSpinnaker/spinnakerApp/src/Makefile:USR_CXXFLAGS_Linux += -std=c++11 -Wno-unknown-pragmas
./ADSpinnaker/spinnakerApp/exampleSrc/Makefile:USR_CXXFLAGS_Linux += -std=c++11 -Wno-unknown-pragmas

@jsouter jsouter marked this pull request as ready for review June 11, 2025 10:28
@jsouter
Author

jsouter commented Jun 12, 2025

Docs will need updating.
Also: we need to make a dev build that contains both the old PVA packages and PVXS, and check that builds/deployments work as expected, especially with regard to the PVXS IOC server.
