Commit fb4f2af

Update benchmarks.html
1 parent 644d9f5 commit fb4f2af

File tree

1 file changed (+12 additions, -4 deletions)

benchmarks.html

Lines changed: 12 additions & 4 deletions
@@ -82,6 +82,9 @@ <h2> 📊 Available Benchmarks</h2>
 <br/><br/>
 This package facilitates the integration and evaluation of new algorithms (e.g., a novel deep learning architecture or a novel data augmentation strategy) in standardized EEG decoding pipelines based on MOABB-supported tasks, i.e., motor imagery (MI), P300, and steady-state visual evoked potential (SSVEP).
 <br/><br/>
+<b>Reference Papers:</b>
+<br/><br/>
+
 Davide Borra, Francesco Paissan, and Mirco Ravanelli. <i>SpeechBrain-MOABB: An open-source Python library for benchmarking deep neural networks applied to EEG signals.</i> Computers in Biology and Medicine, Volume 182, 2024. <a href="https://www.sciencedirect.com/science/article/pii/S001048252401182X" target="_blank">[Paper]</a>
 <br/><br/>
 Davide Borra, Elisa Magosso, and Mirco Ravanelli. <i>Neural Networks, Page 106847, 2024. <a href="https://www.sciencedirect.com/science/article/pii/S001048252401182X" target="_blank">[Paper]</a>
@@ -96,7 +99,9 @@ <h2> 📊 Available Benchmarks</h2>
 The package helps integrate and evaluate new audio tokenizers in speech tasks of great interest such as <i>speech recognition</i>,  <i>speaker identification</i><i>emotion recognition</i><i>keyword spotting</i><i>intent classification</i><i>speech enhancement</i><i>separation</i>, <i>text-to-speech</i>, and many more.
 <br><br>
 It offers an interface for easy model integration and testing and a protocol for comparing different audio tokenizers.
-<br><br>
+<br><br>
+<b>Reference Paper:</b>
+<br/><br/>
 Pooneh Mousavi, Luca Della Libera, Jarod Duret, Arten Ploujnikov, Cem Subakan, Mirco Ravanelli,
 <em>DASB - Discrete Audio and Speech Benchmark</em>, 2024
 arXiv preprint arXiv:2406.14294.
@@ -114,11 +119,13 @@ <h2> 📊 Available Benchmarks</h2>
 
 <br><br>
 An ideal method should achieve both positive forward transfer (i.e. improve performance on new tasks leveraging shared knowledge from previous tasks) and positive backward transfer (i.e. improve performance on previous tasks leveraging shared knowledge from new tasks).
-<br><br>
+<br><br>
+<b>Reference Paper:</b>
+<br/><br/>
 
 Luca Della Libera, Pooneh Mousavi, Salah Zaiem, Cem Subakan, Mirco Ravanelli, (2024). CL-MASR: A continual learning benchmark for multilingual ASR. <i>IEEE/ACM Transactions on Audio, Speech, and Language Processing, 32</i>, 4931–4944.
 <a href="https://arxiv.org/abs/2310.16931" target="_blank">[Paper]
-<<br><br>
+<br><br>
 <hr class="separation-line">
 
 
@@ -128,7 +135,8 @@ <h2> 📊 Available Benchmarks</h2>
 <br><br>
 This is why we called it the Multi-probe Speech Self Supervision Benchmark (MP3S). It has been demonstrated that the performance of the model is greatly influenced by this selection
 <br><br>
-
+<b>Reference Papers:</b>
+<br/><br/>
 
 Salah Zaiem, Youcef Kemiche, Titouan Parcollet, Slim Essid, Mirco Ravanelli, (2023). Speech Self-Supervised Representation Benchmarking: Are We Doing it Right? <i>Proceedings of Interspeech 2023</i>
 <a href="https://arxiv.org/abs/2306.00452" target="_blank">[Paper]</a>
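The "positive forward transfer" and "positive backward transfer" notions in the CL-MASR description above have a standard formalization (the BWT/FWT metrics of Lopez-Paz & Ranzato, 2017). The sketch below, using a purely hypothetical score matrix, is illustrative only and not taken from the benchmark's code:

```python
# A minimal sketch (not code from the benchmark): the backward/forward
# transfer metrics of Lopez-Paz & Ranzato (2017), computed from a matrix
# R[i][j] = score on task j after sequentially training on tasks 0..i.
# Scores here are accuracies (higher is better); for ASR, where WER is
# reported and lower is better, the signs flip.

def backward_transfer(R):
    """Mean change on each old task after all T tasks are learned.
    Positive values mean learning new tasks improved old ones."""
    T = len(R)
    return sum(R[T - 1][i] - R[i][i] for i in range(T - 1)) / (T - 1)

def forward_transfer(R, b):
    """Mean zero-shot gain on each not-yet-seen task over a baseline
    b[i] (e.g. the score of a randomly initialized model on task i)."""
    T = len(R)
    return sum(R[i - 1][i] - b[i] for i in range(1, T)) / (T - 1)

# Hypothetical 3-task run: row i holds scores after training on task i.
R = [
    [0.80, 0.10, 0.05],
    [0.78, 0.82, 0.12],
    [0.79, 0.83, 0.85],
]
b = [0.05, 0.05, 0.05]  # hypothetical random-init baselines
print(backward_transfer(R))   # ~0: forgetting on task 0 offset by gain on task 1
print(forward_transfer(R, b)) # positive: some zero-shot transfer to new tasks
```

A method with positive values for both quantities meets the "ideal method" criterion described in the hunk above.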
