
Commit acc144a

Include badges in README.md
1 parent ceef28c commit acc144a

File tree

1 file changed (+30 additions, -11 deletions)

README.md

Lines changed: 30 additions & 11 deletions
@@ -1,10 +1,23 @@
-![image](Utility/toucan.png)
+<p align="right">
+<img alt="GitHub Repo stars" src="https://img.shields.io/github/stars/DigitalPhonetics/IMS-Toucan">
+<img alt="GitHub Repo Downloads" src="https://img.shields.io/github/downloads/DigitalPhonetics/IMS-Toucan/total">
+<img alt="GitHub Release" src="https://img.shields.io/github/v/release/DigitalPhonetics/IMS-Toucan">
+<a href=https://huggingface.co/spaces/Flux9665/MassivelyMultilingualTTS><img alt="Demo Link" src="https://img.shields.io/badge/DEMO-<COLOR>.svg"></a>
+</p>
+
+---
+
 
-IMS Toucan is a toolkit for teaching, training and using state-of-the-art Speech Synthesis models, developed at the
+IMS Toucan is a toolkit for training, using, and teaching state-of-the-art Text-to-Speech Synthesis models, developed at the
 **Institute for Natural Language Processing (IMS), University of Stuttgart, Germany**. Everything is pure Python and
-PyTorch based to keep it as simple and beginner-friendly, yet powerful as possible.
+PyTorch based to keep it as simple and beginner-friendly, yet powerful as possible.
 
----
+<br>
+
+![image](Utility/toucan.png)
+
+---
+<br>
 
 ## Links 🦚
 
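The badge URLs added above follow fixed shields.io patterns. As a minimal sketch of how they are composed — note that the committed `<COLOR>` token looks like an unfilled shields.io placeholder, so the `blue` below is an assumed color, and the unquoted `href=` in the added markup would be better quoted (`href="https://..."`) for well-formed HTML:

```shell
# Compose the shields.io badge URLs used in this commit.
repo="DigitalPhonetics/IMS-Toucan"

stars_badge="https://img.shields.io/github/stars/${repo}"
downloads_badge="https://img.shields.io/github/downloads/${repo}/total"
release_badge="https://img.shields.io/github/v/release/${repo}"

# Static badges follow badge/<label>-<color>.svg; "blue" here is an
# assumed color, since the committed "<COLOR>" is an unfilled placeholder.
demo_badge="https://img.shields.io/badge/DEMO-blue.svg"

printf '%s\n' "$stars_badge" "$downloads_badge" "$release_badge" "$demo_badge"
```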
@@ -32,7 +45,8 @@ PyTorch based to keep it as simple and beginner-friendly, yet powerful as possib
 
 [We have also published a massively multilingual TTS dataset on Huggingface🤗](https://huggingface.co/datasets/Flux9665/BibleMMS)
 
----
+---
+<br>
 
 ## Installation 🦉
 
@@ -128,7 +142,8 @@ However, the espeak-ng installation file you need to set this variable to is a .
 Mac. In order to locate the espeak-ng library file, you can run `port contents espeak-ng`. The specific file you are
 looking for is named `libespeak-ng.dylib`.
 
----
+---
+<br>
 
 ## Inference 🦢
 
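The context of this hunk describes locating `libespeak-ng.dylib` via MacPorts. A hedged sketch of that lookup — `port contents espeak-ng` is the command the README cites; the fallback directories below are assumed common install prefixes, not taken from the README:

```shell
# Locate the espeak-ng shared library that the README says this
# variable must point to. The fallback paths are assumptions for
# common install prefixes (MacPorts, Homebrew, /usr/local).
find_espeak_lib() {
  # MacPorts lists installed files via `port contents`, as the README notes.
  if command -v port >/dev/null 2>&1; then
    port contents espeak-ng 2>/dev/null | grep 'libespeak-ng.dylib' && return
  fi
  for dir in /opt/local/lib /usr/local/lib /opt/homebrew/lib; do
    if [ -e "${dir}/libespeak-ng.dylib" ]; then
      echo "${dir}/libespeak-ng.dylib"
      return
    fi
  done
  echo "libespeak-ng.dylib not found"
}

find_espeak_lib
```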
@@ -161,7 +176,8 @@ pass them to the interface when you use it in your own code.
 To change the language of the model and see which languages are available in our pretrained model,
 [have a look at the list linked here](https://github.com/DigitalPhonetics/IMS-Toucan/blob/feb573ca630823974e6ced22591ab41cdfb93674/Utility/language_list.md)
 
----
+---
+<br>
 
 ## Creating a new Recipe (Training Pipeline) 🐣
 
@@ -189,7 +205,8 @@ Once this is complete, we are almost done, now we just need to make it available
 *run* function from the pipeline you just created and give it a meaningful name. Now in the
 *pipeline_dict*, add your imported function as value and use as key a shorthand that makes sense.
 
----
+---
+<br>
 
 ## Training a Model 🦜
 
@@ -242,7 +259,8 @@ fuser -v /dev/nvidia*
 
 Whenever a checkpoint is saved, a compressed version that can be used for inference is also created, which is named _best.py_
 
----
+---
+<br>
 
 ## FAQ 🐓
 
@@ -268,9 +286,10 @@ Here are a few points that were brought up by users:
 but nothing that hints at them in the text. That's why ASR corpora, which leave out punctuation, are usually difficult
 to use for TTS.
 
----
+---
+<br>
 
-## Disclaimer 🦆
+## Acknowledgements 🦆
 
 The basic PyTorch modules of FastSpeech 2 and GST are taken from
 [ESPnet](https://github.com/espnet/espnet), the PyTorch modules of
