|
1 | | -# SCALAR Part-of-speech tagger |
2 | | -This the official release of the SCALAR Part-of-speech tagger |
| 1 | +# SCALAR Part-of-Speech Tagger for Identifiers |
3 | 2 |
|
4 | | -There are two ways to run the tagger. This document describes both ways. |
| 3 | +**SCALAR** is a part-of-speech tagger for source code identifiers. It supports two model types: |
5 | 4 |
|
6 | | -1. Using Docker compose (which runs the tagger's built-in server for you) |
7 | | -2. Running the tagger's built-in server without Docker |
| 5 | +- **DistilBERT-based model with CRF layer** (Recommended: faster, more accurate) |
| 6 | +- Legacy Gradient Boosting model (for compatibility) |
8 | 7 |
|
9 | | -## Getting Started with Docker |
| 8 | +--- |
10 | 9 |
|
11 | | -To run SCNL tagger in a Docker container you can clone the repository and pull the latest docker impage from `srcml/scanl_tagger:latest` |
| 10 | +## Installation |
12 | 11 |
|
13 | | -Make sure you have Docker and Docker Compose installed: |
14 | | -https://docs.docker.com/engine/install/ |
15 | | -https://docs.docker.com/compose/install/ |
| 12 | +Make sure you have `python3.12` installed. Then: |
16 | 13 |
|
17 | | -``` |
18 | | -git clone git@github.com:SCANL/scanl_tagger.git |
| 14 | +```bash |
| 15 | +git clone https://github.com/SCANL/scanl_tagger.git |
19 | 16 | cd scanl_tagger |
20 | | -docker compose pull |
21 | | -docker compose up |
| 17 | +python -m venv venv |
| 18 | +source venv/bin/activate |
| 19 | +pip install -r requirements.txt |
22 | 20 | ``` |
23 | 21 |
|
24 | | -## Getting Started without Docker |
25 | | -You will need `python3.12` installed. |
| 22 | +--- |
26 | 23 |
|
27 | | -You'll need to install `pip` -- https://pip.pypa.io/en/stable/installation/ |
28 | | - |
29 | | -Set up a virtual environtment: `python -m venv /tmp/tagger` -- feel free to put it somewhere else (change /tmp/tagger) if you prefer |
| 24 | +## Usage |
30 | 25 |
|
31 | | -Activate the virtual environment: `source /tmp/tagger/bin/activate` (you can find how to activate it here if `source` does not work for you -- https://docs.python.org/3/library/venv.html#how-venvs-work) |
| 26 | +You can run SCALAR in multiple ways: |
32 | 27 |
|
33 | | -After it's installed and your virtual environment is activated, in the root of the repo, run `pip install -r requirements.txt` |
| 28 | +### CLI (with DistilBERT or GradientBoosting model) |
34 | 29 |
|
35 | | -Finally, we require the `token` and `target` vectors from [code2vec](https://github.com/tech-srl/code2vec). The tagger will attempt to automatically download them if it doesn't find them, but you could download them yourself if you like. It will place them in your local directory under `./code2vec/*` |
| 30 | +```bash |
| 31 | +python main --mode run --model_type lm_based # DistilBERT (recommended) |
| 32 | +python main --mode run --model_type tree_based # Legacy model |
| 33 | +``` |
36 | 34 |
|
37 | | -## Usage |
| 35 | +Then query the running server over HTTP, for example: |
38 | 36 |
|
39 | 37 | ``` |
40 | | -usage: main [-h] [-v] [-r] [-t] [-a ADDRESS] [--port PORT] [--protocol PROTOCOL] |
41 | | - [--words WORDS] |
42 | | -
|
43 | | -options: |
44 | | - -h, --help show this help message and exit |
45 | | - -v, --version print tagger application version |
46 | | - -r, --run run server for part of speech tagging requests |
47 | | - -t, --train run training set to retrain the model |
48 | | - -a ADDRESS, --address ADDRESS |
49 | | - configure server address |
50 | | - --port PORT configure server port |
51 | | - --protocol PROTOCOL configure whether the server uses http or https |
52 | | - --words WORDS provide path to a list of acceptable abbreviations |
| 38 | +http://127.0.0.1:8080/GetValue/FUNCTION |
53 | 39 | ``` |
54 | 40 |
|
55 | | -`./main -r` will start the server, which will listen for identifier names sent via HTTP over the route: |
56 | | - |
57 | | -http://127.0.0.1:5000/{cache_selection}/{identifier_name}/{code_context} |
58 | | - |
59 | | -**NOTE: ** On docker, the port is 8080 instead of 5000. |
60 | | - |
61 | | -"cache selection" will save results to a separate cache if it is set to "student" |
62 | | - |
63 | | -"code context" is one of: |
| 41 | +Supported context types: |
64 | 42 | - FUNCTION |
65 | | -- ATTRIBUTE |
66 | 43 | - CLASS |
| 44 | +- ATTRIBUTE |
67 | 45 | - DECLARATION |
68 | 46 | - PARAMETER |
69 | 47 |
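As a sketch, requests can be issued from Python's standard library. The `build_url` and `tag_identifier` helpers below are illustrative assumptions (not part of SCALAR), and the server's response format is not specified here:

```python
from urllib.parse import quote
from urllib.request import urlopen

BASE_URL = "http://127.0.0.1:8080"  # default shown above; adjust if reconfigured

def build_url(identifier, context):
    # Compose the GET route shown above: /<identifier>/<context>, where
    # context is FUNCTION, CLASS, ATTRIBUTE, DECLARATION, or PARAMETER.
    return f"{BASE_URL}/{quote(identifier)}/{context}"

def tag_identifier(identifier, context):
    # Hypothetical client helper: requires a running SCALAR server.
    with urlopen(build_url(identifier, context)) as resp:
        return resp.read().decode("utf-8")
```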
|
70 | | -For example: |
| 48 | +--- |
| 49 | + |
| 50 | +## Training |
| 51 | + |
| 52 | +You can retrain either model (default parameters are currently hardcoded): |
| 53 | + |
| 54 | +```bash |
| 55 | +python main --mode train --model_type lm_based |
| 56 | +python main --mode train --model_type tree_based |
| 57 | +``` |
| 58 | + |
| 59 | +--- |
71 | 60 |
|
72 | | -Tag a declaration: ``http://127.0.0.1:5000/cache/numberArray/DECLARATION`` |
| 61 | +## Evaluation Results |
73 | 62 |
|
74 | | -Tag a function: ``http://127.0.0.1:5000/cache/GetNumberArray/FUNCTION`` |
| 63 | +### DistilBERT (LM-Based Model) — Recommended |
75 | 64 |
|
76 | | -Tag an class: ``http://127.0.0.1:5000/cache/PersonRecord/CLASS`` |
| 65 | +| Metric | Score | |
| 66 | +|--------------------------|---------| |
| 67 | +| **Macro F1** | 0.9032 | |
| 68 | +| **Token Accuracy** | 0.9223 | |
| 69 | +| **Identifier Accuracy** | 0.8291 | |
77 | 70 |
|
78 | | -#### Note |
79 | | -Kebab case is not currently supported due to the limitations of Spiral. Attempting to send the tagger identifiers which are in kebab case will result in the entry of a single noun. |
| 71 | +| Label | Precision | Recall | F1 | Support | |
| 72 | +|-------|-----------|--------|-------|---------| |
| 73 | +| CJ | 0.88 | 0.88 | 0.88 | 8 | |
| 74 | +| D | 0.98 | 0.96 | 0.97 | 52 | |
| 75 | +| DT | 0.95 | 0.93 | 0.94 | 45 | |
| 76 | +| N | 0.94 | 0.94 | 0.94 | 418 | |
| 77 | +| NM | 0.91 | 0.93 | 0.92 | 440 | |
| 78 | +| NPL | 0.97 | 0.97 | 0.97 | 79 | |
| 79 | +| P | 0.94 | 0.92 | 0.93 | 79 | |
| 80 | +| PRE | 0.79 | 0.79 | 0.79 | 68 | |
| 81 | +| V | 0.89 | 0.84 | 0.86 | 110 | |
| 82 | +| VM | 0.79 | 0.85 | 0.81 | 13 | |
80 | 83 |
|
81 | | -You will need to have a way to parse code and filter out identifier names if you want to do some on-the-fly analysis of source code. We recommend [srcML](https://www.srcml.org/). Since the actual tagger is a web server, you don't have to use srcML. You could always use other AST-based code representations, or any other method of obtaining identifier information. |
| 84 | +**Inference Performance:** |
| 85 | +- Identifiers/sec: 225.8 |
82 | 86 |
|
| 87 | +--- |
83 | 88 |
|
84 | | -## Tagset |
| 89 | +### Gradient Boost Model (Legacy) |
85 | 90 |
|
86 | | -**Supported Tagset** |
87 | | -| Abbreviation | Expanded Form | Examples | |
88 | | -|:------------:|:--------------------------------------------:|:--------------------------------------------:| |
89 | | -| N | noun | Disneyland, shoe, faucet, mother | |
90 | | -| DT | determiner | the, this, that, these, those, which | |
91 | | -| CJ | conjunction | and, for, nor, but, or, yet, so | |
92 | | -| P | preposition | behind, in front of, at, under, above | |
93 | | -| NPL | noun plural | Streets, cities, cars, people, lists | |
94 | | -| NM | noun modifier (**noun-adjunct**, adjective) | red, cold, hot, **bit**Set, **employee**Name | |
95 | | -| V | verb | Run, jump, spin, | |
96 | | -| VM | verb modifier (adverb) | Very, loudly, seriously, impatiently | |
97 | | -| D | digit | 1, 2, 10, 4.12, 0xAF | |
98 | | -| PRE | preamble | Gimp, GLEW, GL, G, p, m, b | |
| 91 | +| Metric | Score | |
| 92 | +|----------------------|-----------| |
| 93 | +| Accuracy | 0.8216 | |
| 94 | +| Balanced Accuracy | 0.9160 | |
| 95 | +| Weighted Recall | 0.8216 | |
| 96 | +| Weighted Precision | 0.8245 | |
| 97 | +| Weighted F1 | 0.8220 | |
| 98 | +| Inference Time | 249.05s | |
99 | 99 |
|
100 | | -**Penn Treebank to SCALAR tagset** |
| 100 | +**Inference Performance:** |
| 101 | +- Identifiers/sec: 8.6 |
101 | 102 |
|
102 | | -| Penn Treebank Annotation | SCALAR Tagset | |
103 | | -|:---------------------------:|:------------------------:| |
104 | | -| Conjunction (CC) | Conjunction (CJ) | |
105 | | -| Digit (CD) | Digit (D) | |
106 | | -| Determiner (DT) | Determiner (DT) | |
107 | | -| Foreign Word (FW) | Noun (N) | |
108 | | -| Preposition (IN) | Preposition (P) | |
109 | | -| Adjective (JJ) | Noun Modifier (NM) | |
110 | | -| Comparative Adjective (JJR) | Noun Modifier (NM) | |
111 | | -| Superlative Adjective (JJS) | Noun Modifier (NM) | |
112 | | -| List Item (LS) | Noun (N) | |
113 | | -| Modal (MD) | Verb (V) | |
114 | | -| Noun Singular (NN) | Noun (N) | |
115 | | -| Proper Noun (NNP) | Noun (N) | |
116 | | -| Proper Noun Plural (NNPS) | Noun Plural (NPL) | |
117 | | -| Noun Plural (NNS) | Noun Plural (NPL) | |
118 | | -| Adverb (RB) | Verb Modifier (VM) | |
119 | | -| Comparative Adverb (RBR) | Verb Modifier (VM) | |
120 | | -| Particle (RP) | Verb Modifier (VM) | |
121 | | -| Symbol (SYM) | Noun (N) | |
122 | | -| To Preposition (TO) | Preposition (P) | |
123 | | -| Verb (VB) | Verb (V) | |
124 | | -| Verb (VBD) | Verb (V) | |
125 | | -| Verb (VBG) | Verb (V) | |
126 | | -| Verb (VBN) | Verb (V) | |
127 | | -| Verb (VBP) | Verb (V) | |
128 | | -| Verb (VBZ) | Verb (V) | |
| 103 | +--- |
129 | 104 |
|
130 | | -## Training the tagger |
131 | | -You can train this tagger using the `-t` option (which will re-run the training routine). For the moment, most of this is hard-coded in, so if you want to use a different data set/different seeds, you'll need to modify the code. This will potentially change in the future. |
| 105 | +## Supported Tagset |
| 106 | + |
| 107 | +| Tag | Meaning | Examples | |
| 108 | +|-------|------------------------------------|--------------------------------| |
| 109 | +| N | Noun | `user`, `Data`, `Array` | |
| 110 | +| DT | Determiner | `this`, `that`, `those` | |
| 111 | +| CJ | Conjunction | `and`, `or`, `but` | |
| 112 | +| P | Preposition | `with`, `for`, `in` | |
| 113 | +| NPL | Plural Noun | `elements`, `indices` | |
| 114 | +| NM | Noun Modifier (adjective-like) | `max`, `total`, `employee` | |
| 115 | +| V | Verb | `get`, `set`, `delete` | |
| 116 | +| VM | Verb Modifier (adverb-like) | `quickly`, `deeply` | |
| 117 | +| D | Digit | `1`, `2`, `10`, `0xAF` | |
| 118 | +| PRE | Preamble / Prefix | `m`, `b`, `GL`, `p` | |
| 119 | + |
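To illustrate how these tags apply, identifiers are first split into their constituent words before tagging. The splitter below is a simplified stand-in written for illustration, not SCALAR's actual splitter:

```python
import re

def split_identifier(name):
    # Simplified stand-in for a real identifier splitter: break on
    # underscores and on camelCase / PascalCase boundaries.
    parts = re.split(r"_|(?<=[a-z0-9])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])", name)
    return [p for p in parts if p]

# e.g. "GetNumberArray" -> ["Get", "Number", "Array"],
# which might be tagged V, NM, N under the tagset above.
```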
| 120 | +--- |
| 121 | + |
| 122 | +## Docker Support (Legacy only) |
| 123 | + |
| 124 | +For the legacy server, you can also use Docker: |
| 125 | + |
| 126 | +```bash |
| 127 | +docker compose pull |
| 128 | +docker compose up |
| 129 | +``` |
| 130 | + |
| 131 | +--- |
| 132 | + |
| 133 | +## Notes |
| 134 | + |
| 135 | +- **Kebab case** is not supported (e.g., `do-something-cool`). |
| 136 | +- Feature and position tokens (e.g., `@pos_0`) are inserted automatically. |
| 137 | +- Internally uses [WordNet](https://wordnet.princeton.edu/) for lexical features. |
| 138 | +- Input must be parsed into identifier tokens. We recommend [srcML](https://www.srcml.org/) but any AST-based parser works. |
| 139 | + |
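Since kebab case is unsupported, one possible workaround (a hypothetical pre-processing step, not part of SCALAR) is to rewrite such names into snake case before tagging:

```python
def normalize_kebab(name):
    # Hypothetical pre-processing (not part of SCALAR): rewrite a
    # kebab-case name to snake case so the splitter can handle it.
    return name.replace("-", "_")
```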
| 140 | +--- |
| 141 | + |
| 142 | +## Citations |
| 143 | + |
| 144 | +Please cite: |
| 145 | + |
| 146 | +``` |
| 147 | +@inproceedings{newman2025scalar, |
| 148 | + author = {Christian Newman and Brandon Scholten and Sophia Testa and others}, |
| 149 | + title = {SCALAR: A Part-of-speech Tagger for Identifiers}, |
| 150 | + booktitle = {ICPC Tool Demonstrations Track}, |
| 151 | + year = {2025} |
| 152 | +} |
| 153 | +
|
| 154 | +@article{newman2021ensemble, |
| 155 | + title={An Ensemble Approach for Annotating Source Code Identifiers with Part-of-speech Tags}, |
| 156 | + author={Newman, Christian and Decker, Michael and AlSuhaibani, Reem and others}, |
| 157 | + journal={IEEE Transactions on Software Engineering}, |
| 158 | + year={2021}, |
| 159 | + doi={10.1109/TSE.2021.3098242} |
| 160 | +} |
| 161 | +``` |
132 | 162 |
|
133 | | -## Errors? |
134 | | -Please make an issue if you run into errors |
| 163 | +--- |
135 | 164 |
|
136 | | -# Please Cite the Paper(s)! |
| 165 | +## Training Data |
137 | 166 |
|
138 | | -Newman, Christian, Scholten , Brandon, Testa, Sophia, Behler, Joshua, Banabilah, Syreen, Collard, Michael L., Decker, Michael, Mkaouer, Mohamed Wiem, Zampieri, Marcos, Alomar, Eman Abdullah, Alsuhaibani, Reem, Peruma, Anthony, Maletic, Jonathan I., (2025), “SCALAR: A Part-of-speech Tagger for Identifiers”, in the Proceedings of the 33rd IEEE/ACM International Conference on Program Comprehension - Tool Demonstrations Track (ICPC), Ottawa, ON, Canada, April 27 -28, 5 pages TO APPEAR. |
| 167 | +You can find the most recent SCALAR training dataset [here](https://github.com/SCANL/scanl_tagger/blob/master/input/tagger_data.tsv). |
139 | 168 |
|
140 | | -Christian D. Newman, Michael J. Decker, Reem S. AlSuhaibani, Anthony Peruma, Satyajit Mohapatra, Tejal Vishnoi, Marcos Zampieri, Mohamed W. Mkaouer, Timothy J. Sheldon, and Emily Hill, "An Ensemble Approach for Annotating Source Code Identifiers with Part-of-speech Tags," in IEEE Transactions on Software Engineering, doi: 10.1109/TSE.2021.3098242. |
| 169 | +--- |
141 | 170 |
|
142 | | -# Training set |
143 | | -The data used to train this tagger can be found in the most recent database update in the repo -- https://github.com/SCANL/scanl_tagger/blob/master/input/scanl_tagger_training_db_11_29_2024.db |
| 171 | +## More from SCANL |
144 | 172 |
|
145 | | -# Interested in our other work? |
146 | | -Find our other research [at our webpage](https://www.scanl.org/) and check out the [Identifier Name Structure Catalogue](https://github.com/SCANL/identifier_name_structure_catalogue) |
| 173 | +- [SCANL Website](https://www.scanl.org/) |
| 174 | +- [Identifier Name Structure Catalogue](https://github.com/SCANL/identifier_name_structure_catalogue) |
147 | 175 |
|
148 | | -# WordNet |
149 | | -This project uses WordNet to perform a dictionary lookup on the individual words in each identifier: |
| 176 | +--- |
150 | 177 |
|
151 | | -Princeton University "About WordNet." [WordNet](https://wordnet.princeton.edu/). Princeton University. 2010 |
| 178 | +## Trouble? |
152 | 179 |
|
| 180 | +Please [open an issue](https://github.com/SCANL/scanl_tagger/issues) if you encounter problems! |