Version: 1.0
The Danish Anonymization Benchmark (DAB) is a GDPR-oriented, open-source benchmark for evaluating automated anonymization of Danish text data. The current version (1.0) consists of 54 manually annotated (anonymized) Danish documents and pipelines for benchmarking anonymization models and expanding the dataset by adding and annotating new data.
The project features:
- Generate masking predictions with instruction-tuned 🤗 Hugging Face models
- Benchmark anonymization models to obtain evaluation metrics
- DAB annotation guidelines
- Annotate new data in Label Studio with a setup guide and config
- Support for multiple annotators
- Bootstrap annotations with a pre-annotation framework
System requirements:
- This project is developed on Python 3.12.3 and only supports Python 3.12.
- Bash-compatible shell
Guide for setup:

1. Clone the repository:

   ```bash
   git clone https://github.com/alekswael/DAB
   cd DAB
   ```

2. Run `setup.sh`:

   ```bash
   bash setup.sh
   ```

   This script checks the Python version, creates a virtual environment at `venv/`, installs dependencies from `requirements.txt`, and downloads a spaCy model.

3. Activate the virtual environment:

   ```bash
   # For macOS and Linux
   source venv/bin/activate

   # For Windows (cmd.exe)
   venv\Scripts\activate.bat
   ```
NOTE: To use this project, you must be authenticated with the Hugging Face Hub. Please ensure you have a Hugging Face account and an access token. You can log in by running `huggingface-cli login` and following the instructions (see the documentation for help).
## 📈 Model prediction
Generate masking predictions with an anonymization model. Currently, the project contains code for generating predictions with three model configurations:
- DaAnonymization with DaCy large (simple, adapted version for this project)
- DaAnonymization with DaCy large fine-grained (simple, adapted version for this project)
- google/gemma-3-12b-it, implemented through 🤗 Hugging Face (locally) or Google's API (cloud-based)
To generate predictions for these models, you can run the `predict_masks.sh` script:

```bash
bash predict_masks.sh
```
You can add the `--cloud` flag when running `gemma_predict.py` in `predict_masks.sh` if you want to run Gemma through Google's API. Make sure to set the `GOOGLE_API_KEY` environment variable in a `.env` file (i.e. a line `GOOGLE_API_KEY=<your-api-key>`).

### Instruction-tuned models from 🤗 Hugging Face
There is also support for generating masks by prompting an instruction-tuned model hosted on 🤗 Hugging Face. To do this, you can run the `hf_pipeline_predict.py` script and specify the `--model_name` flag:

```bash
python src/predict/hf_pipeline_predict.py \
    --data_path "./data/DAB_annotated_dataset.json" \
    --save_path "./output/predictions/gemma_3_1b_it_predictions.json" \
    --model_name "google/gemma-3-1b-it"
```
You can view/change the instruction prompt in the `hf_pipeline_instruction_prompt.txt` file (this prompt is also used when prompting google/gemma-3-12b-it).

### 🕵️ Other anonymization models
If you want to generate predictions with a different model, make sure to save the output with the correct formatting. See the model prediction JSON reference in the formatting reference for more information.
## 📋️ Model evaluation
Evaluate an anonymization model on a series of metrics. If you want to benchmark the provided models, you can run the `benchmark_models.sh` script:

```bash
bash benchmark_models.sh
```
To benchmark a single model, make sure the predictions are available in `output/predictions/`. Specify the arguments and run the `benchmark_model.py` script:

```bash
python src/benchmark/benchmark_model.py \
    --gold_standard_file "./data/DAB_annotated_dataset.json" \
    --model_predictions_file "./output/predictions/mymodel_predictions.json" \
    --benchmark_output_file "./output/benchmarks/mymodel_benchmark_result.txt" \
    --bert_weighting
```
## 📄 Add new documents
Create a new subfolder in `data/raw/` and add documents for annotation. Each subfolder should be named according to the dataset source, e.g.

```
data/
├── raw/
│   ├── private_docs/
│   │   ├── document1.txt
│   │   ├── document2.pdf
│   │   └── ...
│   ├── legal_cases/
│   │   ├── case1.txt
│   │   ├── case2.pdf
│   │   └── ...
│   ├── news_articles/
│   │   ├── article1.txt
│   │   ├── article2.pdf
│   │   └── ...
```

NOTE: Currently supports `.txt` and `.pdf` (native and non-native) files.
## 📁 Compile dataset
Run the `compile_dataset.py` script:

```bash
python src/data_processing/compile_dataset.py \
    --data_dir "./data/raw/" \
    --save_path "./data/dataset.json"
```
This compiles the raw documents from the subfolders in `data/raw/` and formats them into a single basic Label Studio JSON.
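For reference, a basic Label Studio import file is a JSON list of tasks, each carrying its content under a `data` key. The sketch below builds such a list; the `source` and `filename` fields are illustrative assumptions, not necessarily the project's exact schema:

```python
import json

# Raw documents: (source subfolder, filename, extracted text).
# The "source" and "filename" field names are illustrative assumptions.
documents = [
    ("legal_cases", "case1.txt", "Sagsøger bor i Aarhus ..."),
    ("news_articles", "article1.txt", "Artiklens fulde tekst ..."),
]

# A basic Label Studio import file is a JSON list of tasks,
# each wrapping its content under a "data" key.
tasks = [
    {"data": {"text": text, "source": source, "filename": name}}
    for source, name, text in documents
]

with open("dataset.json", "w", encoding="utf-8") as f:
    json.dump(tasks, f, ensure_ascii=False, indent=2)
```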
## 🤖 Pre-annotate dataset
Pre-annotate the dataset to bootstrap the annotation process. The pre-annotations consist of fine-grained named entities generated with DaCy fine-grained medium (Enevoldsen et al., 2021; Enevoldsen et al., 2024) and a series of RegExes. To pre-annotate the dataset, run the `pre_annotate.py` script:

```bash
python src/data_processing/pre_annotate.py \
    --data_path "./data/dataset.json" \
    --save_path "./data/dataset_pre_annotated.json"
```
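As a hypothetical illustration of the RegEx side of pre-annotation (the project's actual patterns live in `pre_annotate.py`), a pattern for Danish CPR-number-like strings can emit character-offset spans:

```python
import re

# Hypothetical RegEx pre-annotation: Danish CPR numbers are written
# DDMMYY-XXXX; the exact patterns used by the project may differ.
CPR_PATTERN = re.compile(r"\b\d{6}-\d{4}\b")

def regex_pre_annotate(text, pattern=CPR_PATTERN, label="CPR"):
    """Return character-offset spans for every pattern match."""
    return [
        {"start": m.start(), "end": m.end(), "label": label}
        for m in pattern.finditer(text)
    ]

spans = regex_pre_annotate("Borgeren med CPR-nr. 010190-1234 klagede.")
# one span covering "010190-1234"
```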
## ✍️ Annotate the documents in Label Studio
Annotate your own data in Label Studio. Read and follow the DAB Annotation Guidelines and the Label Studio setup guide in the `annotation/` folder.
## 🛠️ Post-process annotated dataset
After saving your annotated JSON file, post-process it to make it compatible with the prediction/evaluation pipeline. Run the `add_entity_ids.py` script:

```bash
python src/data_processing/add_entity_ids.py \
    --data_path "./data/dataset_annotated.json"
```

If you want to print the masked text from your annotations, you can run `check_annotated_offsets.py`:

```bash
python src/data_processing/check_annotated_offsets.py \
    --data_path "./data/dataset_annotated.json"
```
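As an illustration of what offset-based masking does (a sketch, not `check_annotated_offsets.py` itself), annotated spans can be replaced with their labels from right to left so that earlier character offsets stay valid:

```python
# Replace each annotated (start, end, label) span with [LABEL].
# Working right to left keeps earlier character offsets valid.
def mask_text(text, spans):
    for start, end, label in sorted(spans, reverse=True):
        text = text[:start] + f"[{label}]" + text[end:]
    return text

masked = mask_text(
    "Anna Hansen bor i Odense.",
    [(0, 11, "PER"), (18, 24, "LOC")],
)
# "[PER] bor i [LOC]."
```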
Known issues:
- `spacy-experimental 0.6.4` must be installed from source; see the `setup.sh` file
- Running with Python 3.13 will raise an error when installing `spacy-experimental 0.6.4`
- Gemma 3 models require at least `transformers==4.50.0`; forcing this raises dependency issues with `spacy-transformers`, but has no impact on performance
Planned improvements:
- Increase the number of documents & annotators
- Convert the project to a package
The annotation guidelines and evaluation methodology are adapted from the Text Anonymization Benchmark by Pilán et al. (2022) (GitHub | Paper).
References:

Enevoldsen, K., Hansen, L., & Nielbo, K. L. (2021). DaCy: A unified framework for Danish NLP. CEUR Workshop Proceedings, 2989, 206–216.
Enevoldsen, K., Jessen, E. T., & Baglini, R. (2024). DANSK: Domain Generalization of Danish Named Entity Recognition. Northern European Journal of Language Technology, 10(1), Article 1. https://doi.org/10.3384/nejlt.2000-1533.2024.5249
Pilán, I., Lison, P., Øvrelid, L., Papadopoulou, A., Sánchez, D., & Batet, M. (2022). The Text Anonymization Benchmark (TAB): A Dedicated Corpus and Evaluation Framework for Text Anonymization. Computational Linguistics, 48(4), 1053–1101. https://doi.org/10.1162/coli_a_00458