
Commit 81dd280

Remove last number of version in doc content (#687)
* Update workspace related documentation (#684)
* Update workspace related documentation
* Add more details to server/client workspace and add reference
* Update documentation format (#685)
* Remove last number of version in doc content
1 parent 65848ff commit 81dd280

17 files changed: 222 additions, 185 deletions

docs/conf.py

Lines changed: 2 additions & 2 deletions

@@ -48,8 +48,8 @@ def resolve_xref(self, env, fromdocname, builder, typ, target, node, contnode):
 author = "NVIDIA"
 
 # The full version, including alpha/beta/rc tags
-release = "2.1.0"
-version = "2.1.0"
+release = "2.1.2"
+version = "2.1.2"
 
 
 # -- General configuration ---------------------------------------------------
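For context, the Sphinx convention is that ``release`` is the full version string while ``version`` is the short ``X.Y`` form, which is what this commit's prose changes (2.1.0 to 2.1) reflect. A minimal sketch (illustrative only, not from the commit) of deriving the short form from the full release:

```python
# Sketch: derive the short "X.Y" version (as used in doc prose)
# from the full release string defined in a Sphinx conf.py.
release = "2.1.2"
version = ".".join(release.split(".")[:2])
print(version)  # → 2.1
```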

docs/example_applications.rst

Lines changed: 8 additions & 8 deletions

@@ -46,15 +46,15 @@ For the complete collection of example applications, see https://github.com/NVID
 
 Custom Code in Example Apps
 ===========================
-There are several ways to make :ref:`custom code <custom_code>` available to clients when using NVIDIA FLARE. Most
-hello-* examples use a custom folder within the FL application. Note that using a custom folder in the app needs to be
-:ref:`allowed <troubleshooting_byoc>` when using secure provisioning. By default, this option is disabled in the secure
-mode. POC mode, however, will work with custom code by default.
+There are several ways to make :ref:`custom code <custom_code>` available to clients when using NVIDIA FLARE.
+Most hello-* examples use a custom folder within the FL application.
+Note that using a custom folder in the app needs to be :ref:`allowed <troubleshooting_byoc>` when using secure provisioning.
+By default, this option is disabled in the secure mode. POC mode, however, will work with custom code by default.
 
 In contrast, the `CIFAR-10 <https://github.com/NVIDIA/NVFlare/tree/main/examples/cifar10>`_,
 `prostate segmentation <https://github.com/NVIDIA/NVFlare/tree/main/examples/prostate>`_,
 and `BraTS18 segmentation <https://github.com/NVIDIA/NVFlare/tree/main/examples/brats18>`_ examples assume that the
-learner code is already installed on the client's system and
-available in the PYTHONPATH. Hence, the app folders do not include the custom code there. The PYTHONPATH is
-set in the ``run_poc.sh`` or ``run_secure.sh`` scripts of the example. Running these scripts as described in the README
-will make the learner code available to the clients.
+learner code is already installed on the client's system and available in the PYTHONPATH.
+Hence, the app folders do not include the custom code there.
+The PYTHONPATH is set in the ``run_poc.sh`` or ``run_secure.sh`` scripts of the example.
+Running these scripts as described in the README will make the learner code available to the clients.
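The PYTHONPATH mechanism this passage describes can be sketched in shell. The ``pt`` directory name below is a hypothetical stand-in for wherever a given example keeps its learner package; the real ``run_poc.sh``/``run_secure.sh`` scripts may differ:

```shell
# Hypothetical sketch of what a run_poc.sh-style script does before launch:
# prepend the example's learner package to PYTHONPATH so FL clients can import it.
export PYTHONPATH="${PWD}/pt:${PYTHONPATH}"
echo "${PYTHONPATH}"
```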

docs/examples/access_result.rst

Lines changed: 3 additions & 8 deletions

@@ -1,12 +1,7 @@
 Accessing the results
 ^^^^^^^^^^^^^^^^^^^^^
 
-Once the job is finished, you can issue the ``download_job [JOB_ID]``
-in the admin client to download the results.
+The results of each job will usually be stored inside the server side workspace.
 
-`[JOB_ID]` is the ID assigned by the system when submitting the job.
-
-The result will be downloaded to your admin workspace
-(the exact download path will be displayed when running the command).
-
-The download workspace will be in ``[DOWNLOAD_DIR]/[JOB_ID]/workspace/``.
+Please refer to :ref:`access server-side workspace <access_server_workspace>`
+for accessing the server side workspace.

docs/examples/hello_tf2.rst

Lines changed: 1 addition & 1 deletion

@@ -81,7 +81,7 @@ let's put this preparation stage into one method ``setup``:
 
 .. literalinclude:: ../../examples/hello-tf2/custom/trainer.py
    :language: python
-   :lines: 41-73
+   :lines: 41-71
    :lineno-start: 41
    :linenos:
 
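For context on the ``:lines:`` tweak above: Sphinx's ``literalinclude`` option takes a 1-based, inclusive line range. A toy Python equivalent of that selection (illustrative only, not Sphinx internals):

```python
def include_lines(text: str, start: int, end: int) -> str:
    """Mimic Sphinx literalinclude's :lines: option (1-based, inclusive)."""
    return "\n".join(text.splitlines()[start - 1:end])

sample = "\n".join(f"line {i}" for i in range(1, 6))
print(include_lines(sample, 2, 3))  # → "line 2\nline 3"
```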
docs/faq.rst

Lines changed: 1 addition & 1 deletion

@@ -291,7 +291,7 @@ Server related questions
 
 #. What happens if the FL server crashes?
 
-   See :ref:`high_availability` for the features implemented in NVIDIA FLARE 2.1.0 around FL server failover.
+   See :ref:`high_availability` for the features implemented in NVIDIA FLARE 2.1 around FL server fail-over.
 
 #. Why does my FL server keep crashing after a certain round?
 
docs/flare_overview.rst

Lines changed: 33 additions & 16 deletions

@@ -4,36 +4,53 @@
 NVIDIA FLARE Overview
 #####################
 
-**NVIDIA FLARE** (NVIDIA Federated Learning Application Runtime Environment) is a domain-agnostic, open-source, extensible SDK that allows researchers and data scientists to adapt existing ML/DL workflow to a federated paradigm.
+**NVIDIA FLARE** (NVIDIA Federated Learning Application Runtime Environment) is a domain-agnostic, open-source,
+extensible SDK that allows researchers and data scientists to adapt existing ML/DL workflow to a federated paradigm.
 
-With Nvidia FLARE platform developers can build a secure, privacy preserving offering for a distributed multi-party collaboration.
+With Nvidia FLARE platform developers can build a secure, privacy preserving offering
+for a distributed multi-party collaboration.
 
-NVIDIA FLARE SDK is built for robust, production scale for real-world federated learning deployments. It includes:
+NVIDIA FLARE SDK is built for robust, production scale for real-world federated learning deployments.
 
-* A runtime environment enabling data scientists and researchers to easily carry out FL experiments in a real-world scenario. Nvidia FLARE supports multiple task execution, maximizing data scientist's productivity.
+It includes:
+
+* A runtime environment enabling data scientists and researchers to easily carry out FL experiments in a
+  real-world scenario. Nvidia FLARE supports multiple task execution, maximizing data scientist's productivity.
 
-* System capabilities to stand up Federated learning with high availability infrastructure, eliminating FL server being a single point of failue.
+* System capabilities to start up federated learning with high availability infrastructure.
 
 * Built-in implementations of:
 
-  * Federated Training workflows (scatter-gather, Cyclic);
-  * Federated Evaluation workflows (global model evaluation, cross site model evalidation);
-  * Learning algorithms (FedAvg, FedOpt, FedProx) and
-  * Privacy preserving algorithms (homomorphic encryption, differential privacy)
+  * Federated training workflows (scatter-and-gather, Cyclic)
+  * Federated evaluation workflows (global model evaluation, cross site model validation)
+  * Learning algorithms (FedAvg, FedOpt, FedProx)
+  * Privacy preserving algorithms (homomorphic encryption, differential privacy)
+
 * Extensible management tools for:
 
-  * Secure provisioning (SSL certificates),
+  * Secure provisioning (SSL certificates)
   * Orchestration (Admin Console) | (Admin APIs)
-  * Monitoring of Federated learning experiments. (Aux APIs; Tensorboard visualization)
+  * Monitoring of federated learning experiments (Aux APIs; Tensorboard visualization)
 
-* A rich set of programmable APIs allowing researchers to create new federated workflows, learning & privacy preserving algorithms.
+* A rich set of programmable APIs allowing researchers to create new federated workflows,
+  learning & privacy preserving algorithms.
 
 
 High-level System Architecture
 ==============================
-As outlined above, NVIDIA FLARE includes components that allow researchers and developers to build and deploy end-to-end federated learning applications. The high-level architecture is shown in the diagram below. This includes the foundational components of the NVIDIA FLARE API and tools for Privacy Preservation and Secure Management of the platform. On top of this foundation are the building blocks for federated learning applications, with a set of Federation Workflows and Learning Algorithms.
+As outlined above, NVIDIA FLARE includes components that allow researchers and developers to build and deploy
+end-to-end federated learning applications.
+
+The high-level architecture is shown in the diagram below.
+
+This includes the foundational components of the NVIDIA FLARE API and tools for privacy preservation and
+secure management of the platform.
+
+On top of this foundation are the building blocks for federated learning applications,
+with a set of federation workflows and learning algorithms.
 
-Alongside this central stack are tools that allow experimentation and proof-of-concept development with the FL Simulator (POC mode), along with a set of tools used to deploy and manage production workflows.
+Alongside this central stack are tools that allow experimentation and proof-of-concept development
+with the FL Simulator (POC mode), along with a set of tools used to deploy and manage production workflows.
 
 .. image:: resources/FL_stack.png
    :height: 300px
@@ -65,7 +82,7 @@ in a way that allows others to easily customize and extend.
 Every component and API is specification-based, so that alternative implementations can be
 constructed by following the spec. This allows pretty much every component to be customized.
 
-We strive to be unopinionated in reference implementations, encouraging developers and end-users
+We strive to be open-minded in reference implementations, encouraging developers and end-users
 to extend and customize to meet the needs of their specific workflows.
 
 
@@ -81,7 +98,7 @@ problems in a straightforward way.
 
 We design ths system to be general purpose, to enable different "federated" computing use cases.
 We carefully package the components into different layers with minimal dependencies between layers.
-In this way, implementations for specific use cases should not demand modificastions to the
+In this way, implementations for specific use cases should not demand modifications to the
 underlying system core.
 
 
docs/highlights.rst

Lines changed: 26 additions & 14 deletions

@@ -4,8 +4,8 @@
 Highlights
 ##########
 
-New in NVIDIA FLARE 2.1.0
-=========================
+New in NVIDIA FLARE 2.1
+=======================
 - :ref:`High Availability (HA) <high_availability>` supports multiple FL Servers and automatically cuts
   over to another server when the currently active server becomes unavailable.
 - :ref:`Multi-Job Execution <multi_job>` supports resource-based multi-job execution by allowing for concurrent runs
@@ -31,22 +31,34 @@ Training workflows
 Evaluation workflows
 --------------------
 - :ref:`Cross site model validation <cross_site_model_evaluation>` is a workflow that allows validation of each
-  client model and the server global model against each client dataset. Data is not shared, rather the collection
-  of models is distributed to each client site to run local validation. The results of local validation are
-  collected by the server to construct an all-to-all matrix of model performance vs. client dataset.
+  client model and the server global model against each client dataset.
+
+  Data is not shared, rather the collection of models is distributed to each client site to run local validation.
+
+  The results of local validation are collected by the server to construct an all-to-all matrix of
+  model performance vs. client dataset.
+
 - :ref:`Global model evaluation <cross_site_model_evaluation>` is a subset of cross-site model validation in which
   the server’s global model is distributed to each client for evaluation on the client’s local dataset.
 
 Privacy preservation algorithms
 -------------------------------
-Privacy preserving algorithms in NVIDIA FLARE are implemented as filters that can be applied as data is sent or received between peers.
+Privacy preserving algorithms in NVIDIA FLARE are implemented as :ref:`filters <filters_for_privacy>`
+that can be applied as data is sent or received between peers.
+
+- Differential privacy:
+
+  - Exclude specific variables (:class:`ExcludeVars<nvflare.app_common.filters.exclude_vars.ExcludeVars>`)
+  - truncate weights by percentile (:class:`PercentilePrivacy<nvflare.app_common.filters.percentile_privacy.PercentilePrivacy>`)
+  - apply sparse vector techniques (:class:`SVTPrivacy<nvflare.app_common.filters.svt_privacy.SVTPrivacy>`)
+
+- Homomorphic encryption: NVIDIA FLARE provides homomorphic encryption and decryption
+  filters that can be used by clients to encrypt Shareable data before sending it to a peer.
+
+  The server does not have a decryption key but using HE can operate on the encrypted data to aggregate
+  and return the encrypted aggregated data to clients.
 
-- :ref:`Differential privacy <filters_for_privacy>` - Three reference filters are included to exclude specific
-  variables (exclude_vars), truncate weights by percentile (percentile_privacy), or apply sparse vector techniques (SVT, svt_privacy).
-- :ref:`Homomorphic encryption <filters_for_privacy>` - NVIDIA FLARE provides homomorphic encryption and decryption
-  filters that can be used by clients to encrypt Shareable data before sending it to a peer. The server does not
-  have a decryption key but using HE can operate on the encrypted data to aggregate and return the encrypted
-  aggregated data to clients. Clients can then decrypt the data with their local key and continue local training.
+  Clients can then decrypt the data with their local key and continue local training.
 
 Learning algorithms
 -------------------
@@ -65,5 +77,5 @@ Learning algorithms
 Examples
 ---------
 
-Available at https://github.com/NVIDIA/NVFlare/tree/main/examples, including cifar10 (end-to-end workflow), hello-pt,
-hello-monai, hello-numpy, hello-tf2.
+Nvidia FLARE provide a rich set of :ref:`example applications <example_applications>` to walk your through the whole
+process.
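The ``ExcludeVars`` filter named in the diff above drops selected variables from an outgoing model update before it leaves the client. A self-contained toy sketch of that idea over a plain dict of weights (not the ``nvflare.app_common`` implementation; function and data names are invented for illustration):

```python
import re

def exclude_vars(weights: dict, patterns: list) -> dict:
    """Toy exclude-vars filter: drop any weight whose name matches a pattern."""
    compiled = [re.compile(p) for p in patterns]
    return {
        name: value
        for name, value in weights.items()
        if not any(p.search(name) for p in compiled)
    }

# Example update with bias terms filtered out before sending to the server.
update = {"conv1.weight": [0.1], "conv1.bias": [0.0], "fc.weight": [0.2]}
print(exclude_vars(update, [r"bias"]))  # → {'conv1.weight': [0.1], 'fc.weight': [0.2]}
```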

docs/index.rst

Lines changed: 1 addition & 1 deletion

@@ -9,7 +9,7 @@ Federated learning allows multiple clients, each with their own data, to collabo
 
 NVIDIA FLARE is built on a componentized architecture that allows researchers to customize workflows to their liking and experiment with different ideas quickly.
 
-With NVIDIA FLARE 2.1.0, :ref:`High Availability (HA) <high_availability>` and :ref:`Multi-Job Execution <multi_job>` introduce new concepts and change the way the system needs to be configured and operated. See `conversion from 2.0 <appendix/converting_from_previous.html>`_ for details.
+With NVIDIA FLARE 2.1, :ref:`High Availability (HA) <high_availability>` and :ref:`Multi-Job Execution <multi_job>` introduce new concepts and change the way the system needs to be configured and operated. See `conversion from 2.0 <appendix/converting_from_previous.html>`_ for details.
 
 .. toctree::
    :maxdepth: 1

docs/programming_guide/fl_context.rst

Lines changed: 3 additions & 3 deletions

@@ -80,8 +80,8 @@ ClientEngineSpec for services they provide.
 
 Job ID (fl_ctx.get_job_id())
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-FL application is always running within a RUN, which has a unique ID number. From NVIDIA FLARE version 2.1.0, job ID is
-used as the run number, and it no longer has to be an integer.
+FL application is always running within a RUN, which has a unique ID number.
+From NVIDIA FLARE version 2.1, job ID is used as the run number, and it no longer has to be an integer.
 
 Identity Name (fl_ctx.get_identity_name())
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -203,7 +203,7 @@ The following diagram shows the lifecycle of the FL context for each iteration.
 .. image:: ../resources/FL_Context.png
    :height: 600px
 
-In the Peer Context, following props from the Server are available (job ID is used as the run number in version 2.1.0+):
+In the Peer Context, following props from the Server are available (job ID is used as the run number in version 2.1+):
 - Run Number: peer_ctx.get_job_id())
 
 Server Side FL Context
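The ``fl_ctx.get_job_id()`` accessor discussed in this diff is part of FLContext's property bag. A minimal stand-in (hypothetical class, not the real ``FLContext`` API) showing the 2.1 behavior where the job ID serves as the run number and need not be an integer:

```python
class ToyFLContext:
    """Illustrative stand-in for an FLContext-style property bag."""

    def __init__(self):
        self._props = {}

    def set_prop(self, key, value):
        self._props[key] = value

    def get_prop(self, key, default=None):
        return self._props.get(key, default)

    def get_job_id(self):
        # From 2.1, the job ID is used as the run number and may be any string.
        return self.get_prop("job_id")

ctx = ToyFLContext()
ctx.set_prop("job_id", "simulate_job")  # no longer required to be an integer
print(ctx.get_job_id())  # → simulate_job
```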

docs/programming_guide/high_availability.rst

Lines changed: 5 additions & 5 deletions

@@ -3,9 +3,9 @@
 #####################################
 High Availability and Server Failover
 #####################################
-Previously in NVIDIA FLARE 2.0 and before, the FL server was the single point of failure for the system. Starting with
-NVIDIA FLARE 2.1.0, a high availability (HA) solution has been implemented to support multiple FL servers with
-automatic cutover when the currently active server becomes unavailable.
+Previously in NVIDIA FLARE 2.0 and before, the FL server was the single point of failure for the system.
+Starting with NVIDIA FLARE 2.1, a high availability (HA) solution has been implemented to support
+multiple FL servers with automatic cut-over when the currently active server becomes unavailable.
 
 The following areas were enhanced for supporting HA:
 
@@ -40,8 +40,8 @@ moment, there is at most one hot server.
 
 The endpoint of the Overseer is provisioned and its configuration information is included in the startup kit of each entity.
 
-For security reasons, the Overseer must only accept authenticated communications. In NVIDIA FLARE 2.1.0, the Overseer is
-implemented with mTLS authentication.
+For security reasons, the Overseer must only accept authenticated communications.
+In NVIDIA FLARE 2.1, the Overseer is implemented with mTLS authentication.
 
 Overseers maintain a service session id (SSID), which changes whenever any hot SP switch-over occurs, either by admin
 commands or automatically. The following are cases associated with SP switch-over and SSID:
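The SSID rule quoted above can be sketched with a toy overseer: every switch-over to a new hot SP mints a fresh session ID. This is illustrative pseudologic only, with invented class and method names, not the NVFlare Overseer API:

```python
import uuid

class ToyOverseer:
    """Toy model of the SSID rule: each hot-SP switch-over changes the SSID."""

    def __init__(self, sps):
        self.sps = list(sps)
        self.hot_sp = self.sps[0]         # first SP starts as the hot server
        self.ssid = uuid.uuid4().hex      # initial service session id

    def switch_over(self, new_hot_sp):
        assert new_hot_sp in self.sps
        self.hot_sp = new_hot_sp
        self.ssid = uuid.uuid4().hex      # new SSID on every switch-over

overseer = ToyOverseer(["server1", "server2"])
old_ssid = overseer.ssid
overseer.switch_over("server2")
print(overseer.ssid != old_ssid)  # → True
```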
