Validating indicators

Maura Pintor edited this page Oct 22, 2021 · 3 revisions

We provide here a zip file that contains the following items:

  1. the instructions for loading the models;
  2. the information that we tracked during the attacks, which enables computing the indicators; and
  3. the values of the indicators obtained from the tracked information.

These are provided at least for the models implemented in pure PyTorch and Keras. We will later decide whether to include the results for the model implemented in SecML, as that would require re-implementing the defense with other tools.

The instructions for running the attacks, as well as the hyperparameters we used for the experiments, can be found in our paper.

Loading the models

The instructions are already part of the repository, and can be found at the following URLs:

They also include the links for downloading the corresponding models.

Attack tracking info

The information stored during the optimization of the attacks is provided as a pickle file containing a list of 10 samples. For each item of the list, the following information is stored:

  • the input sample (key x)
  • the original label of the input sample (key y)
  • the target label of the attack (key y_target)
  • the predicted label after the attack (key y_adv)
  • the output scores along the optimization path (key scores_path)
  • the norms of the gradients along the optimization path (key grad_norms)
  • the value of the attacker's loss along the optimization path (key attacker_loss)
  • the predicted labels after K different restarts (one label per restart), if restarts were used (key restart_labels)
  • the predicted labels obtained by feeding the inputs directly to the target model, if a different model was used to create the attacks (key transfer_labels)
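The schema above can be sketched in Python. This is a minimal, self-contained example that builds a dummy record with the documented keys (the values are placeholders, and the round-trip through `pickle` stands in for reading the actual file shipped in the zip):

```python
import pickle

# Dummy record with the documented keys; all values are illustrative placeholders.
record = {
    "x": [0.0] * 4,               # input sample (e.g. a flattened image)
    "y": 3,                       # original label of the input sample
    "y_target": 7,                # target label of the attack
    "y_adv": 7,                   # predicted label after the attack
    "scores_path": [[0.1] * 10],  # output scores along the optimization path
    "grad_norms": [1.5, 0.8],     # gradient norms along the optimization path
    "attacker_loss": [2.3, 0.4],  # attacker's loss along the optimization path
    "restart_labels": [7, 3],     # predicted label after each restart, if any
    "transfer_labels": [3],       # labels on the target model (transfer setting)
}

# The distributed file is a pickle of a list of 10 such records;
# here we serialize and deserialize in memory instead of opening the real file.
blob = pickle.dumps([record] * 10)
samples = pickle.loads(blob)

first = samples[0]
print(len(samples), sorted(first.keys()))
```

With the real data, `pickle.loads` would be replaced by `pickle.load` on the file extracted from the zip.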

Values of the indicators

Additionally, we provide CSV files containing the indicators for each of the input samples, in the same order as the list in the pickle file.

This allows validating the method, as developers can:

  1. reproduce the results, to ensure the information tracked from the attacks is consistent with that used to compute the indicators; and
  2. validate the implemented indicators by passing the tracked information provided, and ensuring the outputs of the indicators are the same as the ones presented here.
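The second check can be sketched as follows. The CSV layout and the indicator column names below are illustrative, not the actual names used in the files; the sketch just shows how to compare recomputed indicator values against the provided CSV, row by row, in the same order as the pickle list:

```python
import csv
import io
import math

# Hypothetical CSV content: one row per sample, same order as the pickle list.
# Column names are placeholders, not the real indicator names.
csv_text = """sample_id,indicator_a,indicator_b
0,0.10,0.90
1,0.25,0.75
"""

# Indicator values recomputed from the tracked information (dummy values here).
recomputed = [
    {"indicator_a": 0.10, "indicator_b": 0.90},
    {"indicator_a": 0.25, "indicator_b": 0.75},
]

# Compare each recomputed value with the corresponding CSV entry,
# using a tolerance to absorb floating-point rounding in the CSV.
mismatches = []
for row, ours in zip(csv.DictReader(io.StringIO(csv_text)), recomputed):
    for name, value in ours.items():
        if not math.isclose(float(row[name]), value, abs_tol=1e-6):
            mismatches.append((row["sample_id"], name))

print("mismatches:", mismatches)
```

An empty `mismatches` list means the implemented indicators agree with the values shipped in the CSV files.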

Sample data

The following zip contains the sample data described above for the three models, including results for the PGD, PGD*, and APGD attacks.

iof_data.zip
