benchmarks.html: 12 additions & 4 deletions
@@ -82,6 +82,9 @@ <h2> 📊 Available Benchmarks</h2>
 <br/><br/>
 This package facilitates the integration and evaluation of new algorithms (e.g., a novel deep learning architecture or a novel data augmentation strategy) in standardized EEG decoding pipelines based on MOABB-supported tasks, i.e., motor imagery (MI), P300, and steady-state visual evoked potential (SSVEP).
 <br/><br/>
+<b>Reference Papers:</b>
+<br/><br/>
+
 Davide Borra, Francesco Paissan, and Mirco Ravanelli. <i>SpeechBrain-MOABB: An open-source Python library for benchmarking deep neural networks applied to EEG signals.</i> Computers in Biology and Medicine, Volume 182, 2024. <a href="https://www.sciencedirect.com/science/article/pii/S001048252401182X" target="_blank">[Paper]</a>
@@ -96,7 +99,9 @@ <h2> 📊 Available Benchmarks</h2>
 The package helps integrate and evaluate new audio tokenizers in speech tasks of great interest such as <i>speech recognition</i>, <i>speaker identification</i>, <i>emotion recognition</i>, <i>keyword spotting</i>, <i>intent classification</i>, <i>speech enhancement</i>, <i>separation</i>, <i>text-to-speech</i>, and many more.
 <br><br>
 It offers an interface for easy model integration and testing, and a protocol for comparing different audio tokenizers.
-<br><br>
+<br><br>
+<b>Reference Paper:</b>
+<br/><br/>
 Pooneh Mousavi, Luca Della Libera, Jarod Duret, Artem Ploujnikov, Cem Subakan, Mirco Ravanelli,
 <em>DASB - Discrete Audio and Speech Benchmark</em>, 2024
 arXiv preprint arXiv:2406.14294.
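To make the "common interface plus comparison protocol" idea in the hunk above concrete, here is a minimal, hypothetical sketch; the class and function names are assumptions for illustration and are not the DASB API. Every tokenizer implements the same encode/decode contract, so a single evaluation loop can score all of them on identical data.

# Hypothetical sketch of a tokenizer-comparison protocol in the spirit of
# DASB; names and signatures are assumptions, not the DASB API.
from abc import ABC, abstractmethod
import torch

class AudioTokenizer(ABC):
    @abstractmethod
    def encode(self, wav: torch.Tensor) -> torch.Tensor:
        """Waveform (batch, samples) -> discrete tokens (batch, frames)."""

    @abstractmethod
    def decode(self, tokens: torch.Tensor) -> torch.Tensor:
        """Discrete tokens -> reconstructed waveform."""

class IdentityTokenizer(AudioTokenizer):
    """Trivial stand-in tokenizer: 8-bit quantization as 'tokens'."""
    def encode(self, wav):
        return torch.clamp(((wav + 1) * 127.5).round(), 0, 255).long()

    def decode(self, tokens):
        return tokens.float() / 127.5 - 1

def evaluate(tokenizer: AudioTokenizer, wavs: list) -> float:
    """Toy protocol metric: mean reconstruction error through the tokenizer."""
    errs = []
    for wav in wavs:
        recon = tokenizer.decode(tokenizer.encode(wav))
        n = min(wav.shape[-1], recon.shape[-1])
        errs.append(torch.mean((wav[..., :n] - recon[..., :n]) ** 2).item())
    return sum(errs) / len(errs)

# The same loop would score any tokenizer implementing the interface.
wavs = [torch.rand(1, 16000) * 2 - 1 for _ in range(3)]
print(evaluate(IdentityTokenizer(), wavs))  # small quantization error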
@@ -114,11 +119,13 @@ <h2> 📊 Available Benchmarks</h2>
 
 <br><br>
 An ideal method should achieve both positive forward transfer (i.e., improve performance on new tasks by leveraging shared knowledge from previous tasks) and positive backward transfer (i.e., improve performance on previous tasks by leveraging shared knowledge from new tasks).
-<br><br>
+<br><br>
+<b>Reference Paper:</b>
+<br/><br/>
 
 Luca Della Libera, Pooneh Mousavi, Salah Zaiem, Cem Subakan, Mirco Ravanelli (2024). CL-MASR: A continual learning benchmark for multilingual ASR. <i>IEEE/ACM Transactions on Audio, Speech, and Language Processing, 32</i>, 4931–4944.
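For readers unfamiliar with these transfer notions, the sketch below computes the standard forward/backward transfer metrics of Lopez-Paz and Ranzato (2017) from a task-performance matrix. This is a generic illustration, not the CL-MASR code; CL-MASR evaluates ASR with word error rate, where lower is better, so the signs would flip.

# Minimal sketch of standard continual-learning transfer metrics.
# Assumes R[i][j] = test performance on task j after training on task i
# (higher is better), and b[j] = performance of a randomly initialized
# model on task j.
import numpy as np

def backward_transfer(R: np.ndarray) -> float:
    """Positive => learning new tasks improved performance on older ones."""
    T = R.shape[0]
    return float(np.mean([R[T - 1, i] - R[i, i] for i in range(T - 1)]))

def forward_transfer(R: np.ndarray, b: np.ndarray) -> float:
    """Positive => knowledge from earlier tasks helps not-yet-seen ones."""
    T = R.shape[0]
    return float(np.mean([R[i - 1, i] - b[i] for i in range(1, T)]))

# Toy example with 3 tasks learned in sequence (accuracies).
R = np.array([[0.80, 0.10, 0.05],
              [0.78, 0.85, 0.12],
              [0.75, 0.83, 0.90]])
b = np.array([0.05, 0.05, 0.05])
print(backward_transfer(R))   # -0.035 -> slight forgetting
print(forward_transfer(R, b)) #  0.060 -> positive forward transfer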
@@ -128,7 +135,8 @@ <h2> 📊 Available Benchmarks</h2>
 <br><br>
 This is why we called it the Multi-probe Speech Self Supervision Benchmark (MP3S). It has been demonstrated that the performance of the model is greatly influenced by this probe selection.
 <br><br>
-
+<b>Reference Papers:</b>
+<br/><br/>
 
 Salah Zaiem, Youcef Kemiche, Titouan Parcollet, Slim Essid, Mirco Ravanelli (2023). Speech Self-Supervised Representation Benchmarking: Are We Doing it Right? <i>Proceedings of Interspeech 2023</i>.
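As a concrete illustration of why probe selection matters, here is a minimal, hypothetical PyTorch sketch (not the MP3S code; the probe architectures and sizes are assumptions): the same frozen self-supervised features are fed to two different downstream probes, and the resulting benchmark scores, and hence model rankings, can differ between them.

# Hypothetical sketch: two downstream probes over the same frozen SSL
# features. Architectures and dimensions are assumptions for illustration.
import torch
import torch.nn as nn

FEAT_DIM, NUM_CLASSES = 768, 10  # assumed sizes, for illustration only

class LinearProbe(nn.Module):
    """Weak probe: mean-pool frozen features, then one linear layer."""
    def __init__(self):
        super().__init__()
        self.head = nn.Linear(FEAT_DIM, NUM_CLASSES)

    def forward(self, feats):  # feats: (batch, time, FEAT_DIM)
        return self.head(feats.mean(dim=1))

class BiLSTMProbe(nn.Module):
    """Stronger probe: BiLSTM over time, then one linear layer."""
    def __init__(self, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(FEAT_DIM, hidden, batch_first=True,
                           bidirectional=True)
        self.head = nn.Linear(2 * hidden, NUM_CLASSES)

    def forward(self, feats):
        out, _ = self.rnn(feats)
        return self.head(out.mean(dim=1))

# Random tensors stand in for frozen features from e.g. a wav2vec 2.0 model.
feats = torch.randn(4, 100, FEAT_DIM)
for probe in (LinearProbe(), BiLSTMProbe()):
    print(probe.__class__.__name__, probe(feats).shape)  # (4, NUM_CLASSES)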