Releases: NVIDIA/NVFlare
2.7.0rc2: Feature enhancements and bug fixes
What's Changed
- Update edge with 40k device experiment, add docstring to recipe by @ZiyueXu77 in #3599
- Dashboard extra prop by @yanchengnv in #3600
- Update edge example README and fix XOR ET issue by @YuanTingHsieh in #3602
- Add POCEnv impl by @YuanTingHsieh in #3596
- [Website] Add NVIDIA FLARE Day drop down menu for different years. by @holgerroth in #3601
- Fix typos by @cyyever in #3606
- Update device selection logic to align with new simulated device pattern by @ZiyueXu77 in #3603
- Add support of msg traffic control for swarm learning by @yanchengnv in #3604
- Reduce Swarm Learning log msgs by @yanchengnv in #3608
- [Tutorials] Updates to reduce compute requirements by @holgerroth in #3610
- Provisioning for Edge by @nvidianz in #3609
- Update the token expiration time by @YuanTingHsieh in #3605
- Use ScatterAndGather by default in FedAvgRecipe and allow custom ModelAggregators by @holgerroth in #3607
- Convert ET model once for each task by @yanchengnv in #3616
- fix typo in distributed optimization README by @francescofarina in #3614
- [Tutorials] Fix custom fedavg by @holgerroth in #3618
- Remove overseer preflight check by @YuanTingHsieh in #3615
- Fix CC GPU Authorizer by @YuanTingHsieh in #3622
- Made --client mandatory as suggested by QA by @nvidianz in #3620
- Remove overseer from general provision by @YuanTingHsieh in #3621
- Clear converted model cache when model version changed by @YuanTingHsieh in #3625
- Add iOS NVFlareSDK by @YuanTingHsieh in #3623
- Make global eval widget async by @ZiyueXu77 in #3619
- Add Android SDK for NVFlare using Executorch by @nvkevlu in #3624
- Bug fix: pre-install print_help by @chesterxgchen in #3633
- Fix edge et example and readme by @YuanTingHsieh in #3631
- Fix edge project default by @YuanTingHsieh in #3630
- PT FedAvg Recipe: Add missing arguments 2) LR FedAvg Recipe by @chesterxgchen in #3634
- Add an iOS ExampleApp and iOS NVFlareSDK by @YuanTingHsieh in #3561
- Fix poc shutdown by @YuanTingHsieh in #3626
- update readme and getting started by @chesterxgchen in #3637
- Add android demo app using Android Edge SDK by @nvkevlu in #3559
- Fix auth test by @YuanTingHsieh in #3635
- Consolidate license files by @ZiyueXu77 in #3640
- [Tutorial] Update tutorial README.md [skip ci] by @chesterxgchen in #3647
- Add hello cyclic recipe by @YuanTingHsieh in #3645
- Enable HTTPS for iOS edge by @YuanTingHsieh in #3644
- Update Recipe ProdEnv and POCEnv by @YuanTingHsieh in #3641
- Add FedStatsRecipe by @holgerroth in #3646
- Change Stream Runner shutdown print to debug by @YuanTingHsieh in #3636
- Add recipe enhancements by @YuanTingHsieh in #3652
Full Changelog: 2.7.0rc1...2.7.0rc2
2.7.0rc1: Release candidate of 2.7.0
What's Changed
- Self-Paced-Training - Chapter 2 proof-reading by @zhijinl in #3292
- BioNeMo 2: Add TB Streamer by @holgerroth in #3289
- New HTTP Driver by @nvidianz in #3293
- Self-Paced-Training - Chapter 4 Proof Reading by @zhijinl in #3288
- Logging fix and clarifications by @SYangster in #3291
- Add pytorch edge: controller, emulator, and job by @ZiyueXu77 in #3290
- Add demo ios app by @YuanTingHsieh in #3266
- Re-organize edge simulator and add full simulation pipeline by @ZiyueXu77 in #3298
- update for flwr=1.16.0 by @holgerroth in #3300
- Clean up log messages in ReliableMessage by @yanchengnv in #3304
- Refactor edge training framework by @yanchengnv in #3301
- Cherry-pick PR #3302 by @IsaacYangSLA in #3305
- [Tutorials] Fix broken links by @chesterxgchen in #3310
- Tutorials: Fix Typos by @chesterxgchen in #3314
- Fix job templates by @YuanTingHsieh in #3311
- Remove start_app command by @yanchengnv in #3313
- Update github workflow by @YuanTingHsieh in #3329
- Fix log exception dict, example and doc updates by @SYangster in #3307
- NVFLARE pre-installer by @chesterxgchen in #3295
- Update XOR and CIFAR examples with new SAGE controller by @ZiyueXu77 in #3309
- Enhance CCWF Cross Site Evaluation by @yanchengnv in #3322
- Rewrite of F3 Streaming Testing Tools by @nvidianz in #3323
- Fix logic in AnalyticsReceiver to correctly check for server-side process by @YuanTingHsieh in #3318
- Fix SubprocessLauncher backward compatibility by @YuanTingHsieh in #3312
- Add missing message prop key by @YuanTingHsieh in #3339
- Fix cma_decomposer by @holgerroth in #3315
- TF scaffold rm numerics check by @holgerroth in #3344
- Add details for the documentation about auditing by @nvkevlu in #3332
- Add dependency for tensorboard to fix example by @nvkevlu in #3330
- Fix azure start script generation in azure provision builder by @yanchengnv in #3357
- Fix var substitution in config parsing by @yanchengnv in #3356
- Fixed a race-condition in testing tool by @nvidianz in #3371
- Cherry-pick Separate STDERR from STDOUT in tie (#3346) by @YuanTingHsieh in #3363
- update tutorials by @chesterxgchen in #3351
- Add docs about dashboard prefix by @nvkevlu in #3367
- Use externalizer in FOBS by @nvidianz in #3375
- Update PyTorch Persistor by @holgerroth in #3381
- Sync 2.6 changes back to main : Web, README by @chesterxgchen in #3382
- Cherry-pick custom log changes by @SYangster in #3383
- Check job folder is valid before running rmtree() by @nvidianz in #3387
- Cherry pick 2.6 docs updates to main by @nvkevlu in #3390
- Added error handling in file streaming tool by @nvidianz in #3386
- aiohttp license, ignoring driver errors by @nvidianz in #3397
- Fix CCWF cross-site-evaluation prep model task by @yanchengnv in #3399
- Update secure xgb readme with more detailed instructions by @ZiyueXu77 in #3391
- Update-based mechanism for edge workflow by @yanchengnv in #3389
- Add FLARE DAY 2025 call for submission banner by @SYangster in #3405
- Cherry pick #3373 by @IsaacYangSLA in #3412
- Cherry pick 2.6 fixes 3378 and 3392 by @YuanTingHsieh in #3410
- Cherry-pick of [2.6] Reduce flower status query frequency (#3343) by @YuanTingHsieh in #3411
- Update README, DOCS and Web for examples by @chesterxgchen in #3407
- Fix missing items in startup kit by @yanchengnv in #3422
- Bump @babel/helpers from 7.26.0 to 7.27.0 in /web by @dependabot[bot] in #3333
- Bump esbuild, @astrojs/mdx, @astrojs/tailwind and astro in /web by @dependabot[bot] in #3353
- Bump prismjs from 1.29.0 to 1.30.0 in /web by @dependabot[bot] in #3419
- Update main with changes from 2.6 by @nvkevlu in #3437
- Added delay to shutdown controller by @nvidianz in #3436
- Cherry-pick [2.6] Enhance lightning DDP example (#3421) by @YuanTingHsieh in #3424
- Cherry-pick [2.6] Update tf text to include docker command (#3404) by @YuanTingHsieh in #3425
- example sync from 2.6 by @chesterxgchen in #3438
- Ignore -1 in slot numbers by @nvidianz in #3443
- MONAI examples: update nvflare version by @holgerroth in #3441
- AMPLIFY multi-task example by @holgerroth in #3423
- Async edge training by @yanchengnv in #3447
- Add minor updates for Python version to docs by @nvkevlu in #3445
- BioNeMo examples: apply 2.6 changes by @holgerroth in #3442
- Cherry-pick Update CIFAR10 to use TBWriter (#3449) by @YuanTingHsieh in #3453
- Cherry-pick #3402 by @YuanTingHsieh in #3418
- Apply same VDR updates as 2.6 to main by @ZiyueXu77 in #3459
- Cherry-pick [2.6] Fix typos (#3450) by @YuanTingHsieh in #3460
- Edge - add hello_async and hello_sync schemes by @ZiyueXu77 in #3451
- Cherry-pick [2.6] Use lychee to do link check (#3454) by @YuanTingHsieh in #3461
- Bump transformers from 4.48.0 to 4.50.0 in /examples/advanced/llm_hf by @dependabot[bot] in #3457
- Fix pt code example on website by @holgerroth in #3466
- Fix logging deprecation warnings by @emmanuel-ferdman in #3469
- Cherry-pick [2.6] Update WandB code and example (#3429) by @YuanTingHsieh in #3458
- Enhance provision to prepare for CC by @yanchengnv in #3475
- Tutorials: Initial chapter on advanced algorithms by @holgerroth in #3476
- Enhance edge simulation by @yanchengnv in #3473
- TF example: Add diff algos result by @holgerroth in #3483
- Cherry pick doc updates from 2.6 by @YuanTingHsieh in #3487
- Remove torch ping version by @YuanTingHsieh in #3485
- Cherry-pick [2.6] Update openmined-psi version (#3467) by @YuanTingHsieh in #3486
- Fix ci main by @YuanTingHsieh in #3489
- Update amplify lib to use main branch by @YuanTingHsieh in #3490
- Revert prismjs upgrade, remove dark mode styling by @SYangster in #3493
- Add CC OnPrem CVM builder by @YuanTingHsieh in #3482
- Update cc mgr and authorizers by @YuanTingHsieh in #3478
- AMPLIFY: add Pearson score and "all tasks" examples by @holgerroth in #3452
- add PyTorch Lightning Logger by @chesterxgchen in #3494
- [BUGFIX] fixed missing client_api_config.json for flower job by @gslama12 in #3495
- Fix prismjs loading by @SYangster in #3499
- Add GRPC support for edge API by @yanchengnv in #3498
- Additional Chapter 2 Proof-Reading for self-paced course by @zhijinl in #3496
- Add job api for edge jobs by @yanchengnv in #3501
- Revert Builder spec change by @YuanTingHsieh in #3503
- Remove template...
2.6.2: Updating Flower Integration and examples
2.6.1: Bug fixes and feature enhancements
What's Changed
- [2.6] Update CIFAR10 to use TBWriter by @YuanTingHsieh in #3449
- [2.6] Use lychee to do link check by @YuanTingHsieh in #3454
- [2.6] Fix typos by @YuanTingHsieh in #3450
- [2.6] add more info to address VDR questions by @ZiyueXu77 in #3456
- [2.6] Fix integration tests following code and example structure changes by @YuanTingHsieh in #3448
- [2.6] Fix xgboost ci job configs by @YuanTingHsieh in #3465
- [2.6] Update openmined-psi version by @YuanTingHsieh in #3467
- [2.6] Increase stream act timeout for unit test by @YuanTingHsieh in #3468
- [2.6] Use script path instead of relative path by @YuanTingHsieh in #3472
- [2.6] Update documentation by @YuanTingHsieh in #3471
- [2.6] Fix installation doc by @YuanTingHsieh in #3480
- [2.6] TF example: Add diff algos result by @holgerroth in #3484
- [2.6] Reduce test round to avoid timeout by @YuanTingHsieh in #3488
- [2.6] Add rank type check for client API by @holgerroth in #3524
- [2.6] Fix lightning api for existing NeMo examples. by @holgerroth in #3518
- [2.6] Support server name longer than 64 chars by @yanchengnv in #3537
Full Changelog: 2.6.0...2.6.1
2.6.0: Major release
Special thanks to all the contributors for this release (in git shortlog order):
67 @YuanTingHsieh(謝沅廷)
38 @yanchengnv
37 @ZiyueXu77
36 @SYangster
27 @chesterxgchen
25 @nvkevlu
24 @holgerroth
23 @nvidianz
22 @yhwen
14 @IsaacYangSLA
9 @zhijinl
4 @francescofarina
1 @agiusa
1 @NAEV95
1 @pxLi
1 @taleinat
1 @falibabaei
What is New?
Message Quantization: Reducing Communication Overhead
One of the major bottlenecks in FL is the exchange of model updates among remote participants and servers. The size of these messages can be prohibitively large, leading to increased latency and bandwidth consumption. Furthermore, given that recent LLMs are trained with reduced precision, the default fp32 message precision can even artificially inflate the message size. Message quantization offers a solution by reducing the precision of transmitted updates, thereby compressing the message size.
We implemented quantization and dequantization with our filter mechanism: quantization is performed on the outgoing model weights before transmission, and dequantization restores the original precision when the message is received at the other end. This implementation has two benefits. First, no code change is needed on the user side: the same training script can be used with and without message quantization via a simple config setting. Second, both training and aggregation are performed at the original precision rather than on quantized data, minimizing any impact message quantization could have on the training process.
We use direct cropping and casting to convert fp32 to fp16, and use bitsandbytes to perform 8- and 4-bit quantizations. With this new functionality, we support both numpy arrays (the previous default) and torch Tensors directly for training LLMs.
Table 1 illustrates the message size in MB for a 1B parameter LLM under different precisions. You can find more details regarding training loss curve alignments in our LLM example.
By applying message quantization, FL can achieve significant bandwidth savings, as demonstrated in our experiments training an LLM with Supervised Fine-Tuning (SFT). As shown in Figure 2, message quantization does not sacrifice model convergence quality with regard to the training loss.

Figure 2. Federated SFT comparison: FL under message quantization.
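The filter pair described above can be sketched in a few lines. This is an illustrative numpy sketch of the fp32-to-fp16 path only, not the actual NVFlare filter classes; the real filters also support 8- and 4-bit quantization via bitsandbytes, and the function names here are hypothetical:

```python
import numpy as np

def quantize_fp16(params: dict) -> dict:
    """Cast fp32 weights to fp16 before transmission (outgoing filter)."""
    return {name: w.astype(np.float16) for name, w in params.items()}

def dequantize_fp32(params: dict) -> dict:
    """Restore fp32 precision on the receiving end (incoming filter)."""
    return {name: w.astype(np.float32) for name, w in params.items()}

# A 1M-parameter layer: casting to fp16 halves the payload size,
# while training and aggregation still run on the restored fp32 copy.
weights = {"layer": np.ones((1000, 1000), dtype=np.float32)}
sent = quantize_fp16(weights)
assert sent["layer"].nbytes * 2 == weights["layer"].nbytes
restored = dequantize_fp32(sent)
assert restored["layer"].dtype == np.float32
```

Because both filters run transparently around transmission, the training script itself never sees the quantized representation.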
Native Tensor Transfer
In 2.6, we further introduce support for native tensor transfer. This feature allows sending PyTorch tensors directly, reducing serialization and communication overhead. Previously, a tensor would be converted to NumPy for serialization and communication. In this release, native tensors can be transferred directly: no tensor-to-numpy conversion is needed, and the original data format is preserved. Only PyTorch is supported for now.
Model Streaming Enhancements
Reduce Local Memory Usage
One critical challenge in FL is the memory overhead of sending and receiving messages. Under the default setting, a large amount of memory must be allocated to hold the entire message. This requirement may be affordable with decent system capabilities and a moderate model size, but for a 70B or larger parameter model it can quickly drain the available system memory.
To send the model, additional memory is needed to prepare and receive the message, effectively doubling the memory requirement: a 70 GB model requires 140 GB of memory. To address this issue, we are introducing:
- Object container streaming: processes and transmits the model incrementally, rather than requiring the entire dictionary of gradients to be held in memory at once. ContainerStreamer serializes one item of the parameter dictionary at a time. For the above example of a 70 GB model with a 1 GB maximum item size, sending the message as a whole needs an additional 70 GB (70 + 70 = 140 GB), whereas ContainerStreamer needs only 1 GB of additional memory (70 + 1 = 71 GB).
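The idea behind container streaming can be sketched with a generator that serializes one dictionary entry at a time. This is an illustrative sketch using pickle, not the actual ContainerStreamer implementation; the point is only that peak extra memory is bounded by the largest single item:

```python
import io
import pickle

def stream_state_dict(state_dict):
    """Serialize and yield one (name, value) entry at a time, so peak
    extra memory is bounded by the largest single item, not the model."""
    for name, value in state_dict.items():
        buf = io.BytesIO()
        pickle.dump((name, value), buf)
        yield buf.getvalue()

def receive_state_dict(chunks):
    """Rebuild the dictionary incrementally on the receiving side."""
    out = {}
    for chunk in chunks:
        name, value = pickle.loads(chunk)
        out[name] = value
    return out

model = {"w1": [0.0] * 4, "w2": [1.0] * 4}
assert receive_state_dict(stream_state_dict(model)) == model
```

Each yielded chunk can be transmitted and discarded before the next one is produced, which is where the 70 + 1 = 71 GB figure above comes from.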
Support Unlimited Memory Streaming
Large model streaming is currently bound by the CPU/GPU memory size, i.e. the model must fit into memory before it can be streamed to the remote server. What if the model is bigger than the available memory? In this release we also introduce file-based streaming to address this concern.
- File streaming: reads the file chunk-by-chunk and therefore only consumes the memory required to hold one chunk of data. The additional memory needed by FileStreamer is independent of the model size and maximum item size, and depends only on the file I/O settings, which can be a very small memory overhead.
The table below illustrates the memory comparisons with a local simulation of a one-time send of a 1B-parameter model. We record the system memory footprint and compare the peak memory usage of three settings: regular, container streaming, and file streaming. Memory usage is significantly reduced by streaming, especially file streaming. However, file streaming can take longer to finish the job due to file I/O overhead.
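The chunked-read pattern behind file streaming is simple to illustrate. This is a generic sketch, not the actual FileStreamer code; the chunk size here stands in for the file I/O setting mentioned above:

```python
import os
import tempfile

def stream_file(path, chunk_size=4 * 1024 * 1024):
    """Yield file contents chunk-by-chunk; peak memory is one chunk,
    independent of the total file (model checkpoint) size."""
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            yield chunk

# Round-trip a small stand-in "checkpoint" file in 8-byte chunks.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"0123456789" * 4)
data = b"".join(stream_file(tmp.name, chunk_size=8))
assert data == b"0123456789" * 4
os.unlink(tmp.name)
```

Since only one chunk is resident at a time, the model on disk can be arbitrarily larger than available memory, at the cost of file I/O latency.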
Note: Streaming enhancements are not yet integrated into the high-level APIs or existing FL algorithm controllers/executors. However, users can build custom controllers or executors to leverage this feature. Full support will be included in a future release.
Structured Logging
Structured logging has been a long-standing customer request. Together with other logging-related requests, we are addressing the following concerns:
- Logging observability: can the log be formatted as JSON so it can be consumed by data observability tools?
- Can we make it easier for data scientists to focus on the training logs rather than the communication logs?
- Can we change the log level dynamically for easy debugging in production?
- Can we change the log level for a module and have all classes in that module and its sub-modules follow, instead of having to change each class individually?
This feature addresses these concerns.
- We changed the Python logging configuration from fileConfig to dictConfig.
- The new FLARE loggers follow the package-level hierarchy using dot-separated logger names, facilitating granular control at different levels.
- We provide a default logging configuration file, log_config.json.default, for all NVFLARE sub-systems, with pre-configured handlers for console colors, logs, error logs, structured JSON logs, and FL training logs.
- We support Dynamic Logging Configuration Commands, which allow changing the logging configuration without restarting the FL system.
- To support various needs and backward compatibility, we now have the following default log files:
  - log.txt – the default log file, as in previous versions
  - log.json – JSON-format log
  - log_error.txt – "ERROR"-level logs, for quick error lookup
  - log_fl.txt – removes the system- and communication-related logs, clearly showing logs related to FL tasks (such as training)
- Considering that many researchers will mostly use the Simulator for quick experiments, we also defined a few predefined logging modes for the Simulator, selected via the log config mode ('concise', 'full', 'verbose'), defaulting to 'concise':
  - concise – only shows the FL task logs
  - full – the same logging configuration as the previous release
  - verbose – debug-level logging
For details, please refer to the logging tutorials and logging documentation.
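The switch to dictConfig and dot-separated logger names is what makes module-level control work: setting a level on a parent logger cascades to every child. This is a minimal illustrative sketch, not the shipped log_config.json.default:

```python
import logging
import logging.config

# Hypothetical minimal config: one JSON-formatted console handler and a
# single "nvflare" logger. Because logger names are dot-separated, every
# nvflare.* sub-module logger inherits this level automatically.
LOGGING = {
    "version": 1,
    "formatters": {
        "json": {
            "format": '{"time": "%(asctime)s", "name": "%(name)s", '
                      '"level": "%(levelname)s", "msg": "%(message)s"}'
        },
    },
    "handlers": {
        "console": {"class": "logging.StreamHandler", "formatter": "json"},
    },
    "loggers": {
        "nvflare": {"level": "INFO", "handlers": ["console"], "propagate": False},
    },
}
logging.config.dictConfig(LOGGING)

# A child logger with no explicit level inherits INFO from "nvflare".
child = logging.getLogger("nvflare.app_common")
assert child.getEffectiveLevel() == logging.INFO
```

Changing the `"nvflare"` entry to `"DEBUG"` (statically or via the dynamic configuration commands) would cascade to all sub-module loggers without touching each class individually.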
Federated Statistics Extension
Quantiles Support: Expands statistical capabilities by introducing quantile computation for federated statistics.
Quantile statistics refers to statistical measures that divide a probability distribution or dataset into intervals with equal probabilities or proportions. Quantiles help summarize the distribution of data by providing key points that indicate how values are spread.
Please refer to the Federated Statistics for tabular data example.
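As a reminder of what quantiles capture, the sketch below computes a few quantiles of a single local dataset with numpy. This is illustrative only; the FLARE implementation computes these measures in a federated fashion across clients rather than on pooled raw data:

```python
import numpy as np

# Ten observations; quantiles mark the points that split the
# distribution into equal-probability intervals.
data = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)
q25, q50, q75 = np.quantile(data, [0.25, 0.5, 0.75])

# The median (q50) splits the data into two equal halves.
assert q50 == 5.5
# With numpy's default linear interpolation, q25 = 3.25 and q75 = 7.75.
assert (q25, q75) == (3.25, 7.75)
```

In the federated setting, each client contributes summaries of its local distribution so that global quantiles can be estimated without sharing raw records.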
System Monitoring
FLARE Monitoring provides an initial solution for tracking system metrics of your federated learning jobs. Unlike machine learning experiment tracking, which focuses on training metrics, the moni...
2.6.0rc5: Minor bug fixes
What's Changed
- [2.6] Enhance lightning DDP example by @YuanTingHsieh in #3421
- [2.6] Cherry pick updates to docs for auditing and fix docs by @nvkevlu in #3428
- [2.6] Handle None client in ClientRunManager by @yanchengnv in #3431
- [2.6] Fix docs for 2.6 by @nvkevlu in #3430
- [2.6] Explain and suggest dashboard launch in CSP by @IsaacYangSLA in #3432
- [2.6] Update Experimental tracking example (MLflow & Tensorboard) by @chesterxgchen in #3427
- [2.6] Added delay to shutdown controller by @nvidianz in #3435
- [2.6] MONAI examples: update nvflare version by @holgerroth in #3434
- [2.6] Update WandB code and example by @YuanTingHsieh in #3429
Full Changelog: 2.6.0rc4...2.6.0rc5
2.6.0rc4: Minor bug fixes and documentation updates
What's Changed
- [2.6] Fix server start.sh arg replacement and add system_info.ipynb by @yanchengnv in #3416
- [2.6] Added check for job folder before running rmtree() by @nvidianz in #3413
- [2.6] Fix broken links in README by @ZiyueXu77 in #3417
- [2.6] Update README, docs, web etc. by @chesterxgchen in #3409
Full Changelog: 2.6.0rc3...2.6.0rc4
2.6.0rc3
What's Changed
- [2.6] Enhance CCWF Cross Site Evaluation by @yanchengnv in #3341
- [2.6] Reduce flower status query frequency by @yanchengnv in #3343
- [2.6] TF scaffold rm numerics check by @holgerroth in #3345
- [2.6] Fix var substitution in config parsing by @yanchengnv in #3350
- [2.6] Separate STDERR from STDOUT in tie by @YuanTingHsieh in #3346
- [2.6] Fix azure start scripts generation in azure builder by @yanchengnv in #3349
- [2.6] Fixed a race condition in testing tool by @nvidianz in #3358
- Add clarification to brats by @ZiyueXu77 in #3359
- [2.6] Cherry pick addition of tensorboard dependency for 2.6 branch by @nvkevlu in #3354
- [2.6] Remove old log config from MANIFEST.in by @SYangster in #3366
- [2.6] Remove edge code and examples by @YuanTingHsieh in #3348
- [2.6] Tutorial Fix: grammar correction sync with main by @chesterxgchen in #3355
- [2.6] Add exit code when nvflare provision encounter exception by @IsaacYangSLA in #3364
- [2.6] Update non-nvflare info for packages and releases by @IsaacYangSLA in #3337
- [2.6] Fix dashboard user message by @IsaacYangSLA in #3373
- [2.6] Fix notebook render issue for 08.0_introduction/introduction.ipynb by @ZiyueXu77 in #3370
- [2.6] Add custom logger utility by @SYangster in #3365
- [2.6] Use externalizer in FOBS by @nvidianz in #3372
- [2.6] Bionemo2 updates in 2.6 by @holgerroth in #3374
- [2.6] Web tutorials updates by @chesterxgchen in #3369
- [2.6] Remove HA fields from Dashboard UI and add check for server name by @nvkevlu in #3376
- [2.6] Update README by @chesterxgchen in #3377
- [2.6] Added error handling in file streaming tool by @nvidianz in #3384
- [2.6] Add release notes and additional documentation by @nvkevlu in #3379
- [2.6] Enhance MlflowReceiver and WandBReceiver to support running on client side by @YuanTingHsieh in #3378
- [2.6] Enhance Params Exchange Mechanism by @YuanTingHsieh in #3385
- [2.6] Support BioNeMo 2.5 by @holgerroth in #3388
- [2.6] Fix CSE Prep Model Task by @yanchengnv in #3394
- [2.6] Fix ClientAPI + multi GPU by @YuanTingHsieh in #3392
- [2.6] aiohttp license, ignoring driver errors by @nvidianz in #3396
- Revert "[2.6] Enhance Params Exchange Mechanism (#3385)" by @YuanTingHsieh in #3401
- [2.6] Apply same xgb updates to 2.6 as main by @ZiyueXu77 in #3403
- [2.6] Use server_expected_format to represent the server <-> client params format by @YuanTingHsieh in #3402
- [2.6] Update tf text to include docker command by @YuanTingHsieh in #3404
Full Changelog: 2.6.0rc2...2.6.0rc3
2.6.0rc2: Enhancements and bug fixes
What's Changed
- [2.6] New HTTP Driver by @nvidianz in #3294
- [2.6] Logging fix and clarifications by @SYangster in #3297
- [2.6] update for flwr=1.16.0 by @holgerroth in #3299
- Due to credential env var format change, the user prompt requires one by @IsaacYangSLA in #3302
- [2.6] Fix cma_decomposer by @holgerroth in #3316
- [2.6] Update PyTorch Persistor by @holgerroth in #3317
- [2.6] Fix logic in AnalyticsReceiver to correctly check for server-side process by @YuanTingHsieh in #3320
- [2.6] Cherry-pick Fix job templates (#3311) by @YuanTingHsieh in #3319
- [2.6] Remove start_app command by @yanchengnv in #3327
- [2.6] Clean up msg log of ReliableMessage by @yanchengnv in #3326
- [2.6] Update github workflow by @YuanTingHsieh in #3334
- [2.6] Fix SubprocessLauncher backward compatibility [skip ci] by @YuanTingHsieh in #3321
- [2.6] Fix log exception dict, example and doc updates [skip ci] by @SYangster in #3328
- [2.6] Rewrite of F3 streaming testing tools by @nvidianz in #3324
Full Changelog: 2.6.0rc1...2.6.0rc2
2.6.0rc1: Release candidate 1 of nvflare 2.6.0
What's Changed
- Update higgs data link by @ZiyueXu77 in #2941
- Update video links by @SYangster in #2937
- [main] doc fix typo by @chesterxgchen in #2939
- Add research examples to tutorial page by @SYangster in #2942
- Fix doc and docstring issues by @YuanTingHsieh in #2931
- Add check for receive before send in client api by @YuanTingHsieh in #2930
- Add flare series section, enhancements by @SYangster in #2948
- Bump tqdm from 4.66.1 to 4.66.3 in /research/condist-fl by @dependabot in #2557
- Bump micromatch from 4.0.5 to 4.0.8 in /web by @dependabot in #2838
- Bump dset from 3.1.3 to 3.1.4 in /web by @dependabot in #2936
- Add support to newer python version by @YuanTingHsieh in #2951
- Upgrade formatter version for support higher version of Python by @YuanTingHsieh in #2957
- Fix research links redirects by @SYangster in #2953
- Cherry-pick PR#2959 by @IsaacYangSLA in #2964
- improved fobs register_folder to catch ValueError by @yhwen in #2958
- Add fedrag example with embedding training by @ZiyueXu77 in #2915
- Bump rollup from 3.29.4 to 3.29.5 in /web by @dependabot in #2963
- Bump path-to-regexp from 6.2.2 to 6.3.0 in /web by @dependabot in #2938
- Bump vite from 4.5.3 to 4.5.5 in /web by @dependabot in #2950
- Add the hello-pt-resnet example by @yhwen in #2954
- Update CONTRIBUTING.md by @YuanTingHsieh in #2969
- update PSI to support python 3.11 by @chesterxgchen in #2972
- Add web branch versioning by @SYangster in #2974
- [Main] Support object reuse by @yanchengnv in #2975
- update openmind-psi to 2.0.5 for python 3.12 support by @chesterxgchen in #2981
- Replace the distutils with shutil by @yhwen in #2978
- Allow multiple workflows in CCWF by @yanchengnv in #2980
- F3 Streaming Code Rewrite by @nvidianz in #2960
- Pass components into script runner by @YuanTingHsieh in #2983
- Fix tf model persistor and tf model by @YuanTingHsieh in #2984
- Allow customization of BaseFedJob by @YuanTingHsieh in #2985
- Add umami analytics by @SYangster in #2987
- Update pt params converter by @holgerroth in #2989
- Bionemo demos by @NAEV95 in #2968
- Add FLARE DAY page by @SYangster in #2992
- Add web speaker by @SYangster in #2999
- Fix doc typo and VDR reported issues by @YuanTingHsieh in #2994
- BioNeMo: use multi threading but reduce num workers by @holgerroth in #2996
- Update test script by @YuanTingHsieh in #2995
- Update documentation for Dockerfile, add location of tbevents, fix link by @nvkevlu in #2993
- Fix the entry for getting started in the TOC by @nvkevlu in #3007
- Expose init in client lightning api by @YuanTingHsieh in #3004
- Enhance web responsive design for mobile by @SYangster in #3010
- Refactoring F3 Streaming sender by @nvidianz in #2986
- Update flwr job object, client, server by @YuanTingHsieh in #3008
- Bump cookie, @astrojs/mdx and astro in /web by @dependabot in #3002
- Add GNN encoder and xgb outputs for finance end-to-end example by @ZiyueXu77 in #2970
- Fix fobs issue by @YuanTingHsieh in #3011
- Fix fobs doc by @YuanTingHsieh in #3012
- Remove the need to create additional ports when running a job by @yhwen in #3017
- [#3021] Fixed broken doc ref to 'helm_chart' by @agiusa in #3022
- Fix PTModel optional arguments by @SYangster in #3025
- FedBPT: Fix fedbpt cma version by @holgerroth in #3029
- split learning: upgrade openmined-psi to 2.0.5 by @chesterxgchen in #3020
- Enhance POC notebook and docs by @SYangster in #3031
- Support multiple host names for FLARE server by @yanchengnv in #3018
- Use multi-line table by @SYangster in #3034
- Fix mock executors by @yanchengnv in #3040
- Update the GNN encoding for XGB financial example by @ZiyueXu77 in #3039
- Add XGB explainability output by @ZiyueXu77 in #3044
- Support file source in Job API by @yanchengnv in #3043
- Update finance end-to-end readme for figure position by @ZiyueXu77 in #3046
- Enhance comm scalability Part 1 by @yanchengnv in #3047
- Bump astro from 4.15.12 to 4.16.6 in /web by @dependabot in #3048
- Modify log config to write errors to a separate log file by @nvkevlu in #3050
- Support aborting messages by @yanchengnv in #3053
- Job launcher by @yhwen in #3049
- Support extra provision builder generated component files by @yanchengnv in #3056
- Update LLM_HF example by @ZiyueXu77 in #3054
- Job launcher server side by @yhwen in #3055
- Update dashboard cloud base image version to meet Python 3.9 by @IsaacYangSLA in #3064
- Add precision conversion and quantization filters by @ZiyueXu77 in #3059
- Support large object streaming by @yanchengnv in #3061
- Add ability for clients to send error log to server by @nvkevlu in #3057
- Support Aux Message and Object Streaming in SP and CP by @yanchengnv in #3068
- Add no_grad to validation steps by @holgerroth in #3071
- Adjust tf/fedopt_ctl to include updates for the model's non-trainable by @falibabaei in #3058
- Fix controller sequential relay test by @YuanTingHsieh in #3073
- Fix cert issue by @YuanTingHsieh in #3074
- Fix custom pythonpath missing by @yhwen in #3069
- Add Simulator SP Aux message support by @yhwen in #3076
- Fix xgb explainability issue by @YuanTingHsieh in #3078
- Fix auth ls command response by @YuanTingHsieh in #3075
- Fix the streaming test conflict by @nvidianz in #3082
- Cherry-pick Fix TF examples (#3038) by @YuanTingHsieh in #3083
- Bump cross-spawn from 7.0.3 to 7.0.6 in /web by @dependabot in #3084
- Fix poc command by @yanchengnv in #3087
- Fix the simulator without custom folder job run error by @yhwen in #3091
- Docker job launcher by @yhwen in #3072
- Logger hierarchy by @SYangster in #3081
- Refactor provision for general use - Part 1 by @yanchengnv in #3092
- Implement a new algorithm for the CUDA plugin by @YuanTingHsieh in #3085
- Remove the extra client app custom folder by @yhwen in #3101
- Add storage capability for client logs and allow for use with LogSender and LogReceiver by @nvkevlu in #3077
- Keep project_name shorter than the limit by @IsaacYangSLA in #3106
- Directly send tensor via jit serialization by @ZiyueXu77 in #3088
- Update root readme by @holgerroth in #3108
- Updated XGBoost User Guide by @nvidianz in #3111
- Handle param converter according to exchange...