
Conversation


@ptsOSL ptsOSL commented May 1, 2025

This PR will cover the full switch from cothread to asyncio/aioca for both atip and virtac. It requires a broad range of changes, so will be done in stages:

  • Majority of asyncio changes to get atip/virtac functional with asyncio/aioca (see the sketch after this list)
  • Adding asyncio-compliant multiprocessing for running the CPU-intensive recalculations
  • Improvements to the development setup for testing asyncio changes
  • Updated create_csv.py to use asyncio
  • TODO: Fix tests and other CI failures
  • TODO: Finally update documentation
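
As a rough illustration of the first stage, here is a minimal sketch of what the cothread-to-aioca call changes look like in general. The PV name, callback and main() below are placeholders for illustration, not code from this PR.

import asyncio
from aioca import caget, caput, camonitor

async def on_update(value) -> None:
    # aioca monitor callbacks may be plain functions or coroutines.
    print(f"monitor fired: {value}")

async def main() -> None:
    # cothread's catools.caget/caput block the calling thread;
    # the aioca equivalents are coroutines and must be awaited.
    value = await caget("EXAMPLE:PV")
    await caput("EXAMPLE:PV", value + 1)
    # camonitor is called without await and returns a subscription object.
    subscription = camonitor("EXAMPLE:PV", on_update)
    await asyncio.sleep(10)  # let monitor callbacks run for a while
    subscription.close()

# Previously the process was driven by cothread; with asyncio the entry
# point becomes asyncio.run().
asyncio.run(main())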

ptsOSL added 4 commits May 1, 2025 09:02
Added: Awaits and asyncio, switch from cothread queue to asyncio queue, caput/caget/camonitor from aioca instead of cothread, changed how the code is run (now uses asyncio.run()), made some necessary changes to events and when they are set/cleared, removed most references to cothread besides a few in the documentation which will be updated separately and those in create_csv.py which will be updated later
CPU-intensive recalculations are now done on a separate CPU core. This means we can now handle CA requests at the same time as doing the calculation. Fresh values will only be output to records after recalculation is done, while the up_to_date event is set. So the output pvs will still be up to date with the last processed pv setpoint. There may still be setpoints in the queue waiting to be processed, so the output data will not be up to date with these.
Add test functionality to atip as well as configuration to run it from vscode. This means that something actually happens when you run the 'atip' command if you pass '-t'. This makes it easier to test atip without virtac. Also changed vscode default CA ports to 806*.
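
The second commit above describes offloading the CPU-bound recalculation. Below is a condensed sketch of the general pattern (an asyncio queue feeding a process pool via run_in_executor); recalculate(), the queue contents and the flag handling are illustrative stand-ins, not the PR's actual code — in the PR the up_to_date flag is cleared by the incoming caput handling, not inside this loop.

import asyncio
from concurrent.futures import ProcessPoolExecutor

def recalculate(setpoints: dict) -> dict:
    # CPU-bound physics recalculation; running it in a separate process
    # leaves the event loop free to keep servicing CA requests.
    return {name: value * 2 for name, value in setpoints.items()}

async def recalculation_loop(queue: asyncio.Queue, up_to_date: asyncio.Event) -> None:
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor(max_workers=1) as pool:
        while True:
            setpoints = await queue.get()  # next batch of accepted setpoints
            results = await loop.run_in_executor(pool, recalculate, setpoints)
            # ... write `results` out to the softioc records here ...
            up_to_date.set()  # outputs now match the last processed setpoint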
@ptsOSL ptsOSL requested review from MJGaughran and T-Nicholls May 1, 2025 09:44
@MJGaughran MJGaughran changed the title Switch atip and virtac to aioca Draft: Switch atip and virtac to aioca May 1, 2025
logging.debug("Executing callback function.")
callback()
logging.debug(f"Executing callback function: {callback.__name__}")
await callback()
ptsOSL (Collaborator Author) commented:

                # For Virtac this function calls update_pvs() which gets data from
                # the pytac datasource to update the softioc pvs with. The data
                # source is sim_data_sources.py and its get_value() function waits
                # on the wait_for_calculation() function which waits for the
                # up_to_date flag which currently will always be set, so this
                # process is pointless.
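A tiny sketch of the callback change discussed in this thread: under cothread the callback was a plain call, whereas here it is awaited, so anything passed in must be a coroutine function. The names below are illustrative only.

import asyncio
import logging

async def publish_new_values() -> None:
    logging.debug("publishing recalculated values to the softioc records")

async def on_recalculation_done(callback) -> None:
    if callback is not None:
        logging.debug(f"Executing callback function: {callback.__name__}")
        await callback()  # was a plain callback() under cothread

asyncio.run(on_recalculation_done(publish_new_values))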

async with self._new_data_lock:
    if callback is not None:
        logging.debug(
            f"Executing callback function: {callback.__name__}"
@ptsOSL ptsOSL (Collaborator Author) commented May 1, 2025:

This lock stops incoming caputs from clearing the _up_to_date flag while we are trying to output the new data to our pvs.

When we set the pytac simulated lattice, we always wait for the up_to_date flag to be set, but if we recalculate data and then a caput comes in, we still want to update our pvs with the calculations we just did. This way our pvs are in sync with our pytac lattice, but they may not be in sync with caputs coming in. This is because we accept these caputs immediately but only process the data later.

If we did process and recalculate the data immediately and then output it to the pvs, each caput would take 500ms, which isn't feasible.

Checking for new data immediately after recalculating and before updating the pvs and throwing away the new data if there is a new caput is also not feasible due to the frequency of incoming caputs.

Outputting data to the pvs should be very quick; the only reason it needs to be async is that we need to await a flag being set, but when we are setting pytac lattices from this function, that flag is guaranteed to be set due to the lock, so this lock shouldn't cause any noticeable performance issues as it is used here.
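
A condensed sketch of the locking described above, with invented names (DataHandler, handle_caput, update_output_records); it shows the pattern, not the PR's actual classes.

import asyncio

class DataHandler:
    def __init__(self) -> None:
        self._new_data_lock = asyncio.Lock()
        self._up_to_date = asyncio.Event()
        self._up_to_date.set()
        self._queue: asyncio.Queue = asyncio.Queue()

    async def handle_caput(self, name: str, value: float) -> None:
        # Accept the caput immediately; it is only processed later.
        async with self._new_data_lock:
            self._up_to_date.clear()
            await self._queue.put((name, value))

    async def update_output_records(self, callback=None) -> None:
        # Holding the lock means no incoming caput can clear _up_to_date
        # while freshly recalculated values are being written to the pvs.
        async with self._new_data_lock:
            self._up_to_date.set()
            if callback is not None:
                await callback()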


ptsOSL commented May 2, 2025

I believe that this implementation of asyncio now works in a fairly similar way to how it used to work with cothread.

