Commit e76ac94

Update docs & increase version count prior to release
1 parent 9e20117 commit e76ac94

4 files changed (+106 −19 lines)

docs/CHANGELOG.md

Lines changed: 4 additions & 1 deletion
```diff
@@ -79,6 +79,9 @@
 - Invalid response - **fixed**
 
 ## v0.1.4
+**What's new?**
+- Incomplete response - **fixed**
 
+## v0.2.0
 **What's new?**
-- Incomplete response - **fixed**
+- Multiple LLM providers
```
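
The changelog records the jump from v0.1.4 to v0.2.0. A side note for anyone scripting against such version strings: compare them as integer tuples rather than raw strings, since lexicographic comparison breaks once a component reaches two digits. A minimal sketch (this helper is illustrative, not part of python-tgpt):

```python
def parse_version(v: str) -> tuple:
    """Split a dotted version string like '0.2.0' into a tuple of ints."""
    return tuple(int(part) for part in v.split("."))

# Tuple comparison orders releases correctly:
assert parse_version("0.2.0") > parse_version("0.1.4")
# String comparison would get this one wrong ("0.10.0" < "0.2.0" lexicographically):
assert parse_version("0.10.0") > parse_version("0.2.0")
```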

docs/README.md

Lines changed: 100 additions & 16 deletions
````diff
@@ -8,7 +8,7 @@
 <a href="https://github.com/Simatwa/python-tgpt/actions/workflows/python-test.yml"><img src="https://github.com/Simatwa/python-tgpt/actions/workflows/python-test.yml/badge.svg" alt="Python Test"/></a>
 -->
 <a href="https://github.com/Simatwa/python-tgpt/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/static/v1?logo=GPL&color=Blue&message=MIT&label=License"/></a>
-<a href="https://pypi.org/project/python-tgpt"><img alt="PyPi" src="https://img.shields.io/static/v1?logo=pypi&label=Pypi&message=v0.1.4&color=green"/></a>
+<a href="https://pypi.org/project/python-tgpt"><img alt="PyPi" src="https://img.shields.io/static/v1?logo=pypi&label=Pypi&message=0.2.0&color=green"/></a>
 <a href="https://github.com/psf/black"><img alt="Black" src="https://img.shields.io/static/v1?logo=Black&label=Code-style&message=Black"/></a>
 <a href="#"><img alt="Passing" src="https://img.shields.io/static/v1?logo=Docs&label=Docs&message=Passing&color=green"/></a>
 <a href="https://github.com/Simatwa/python-tgpt/actions/workflows/python-package.yml"><img src="https://github.com/Simatwa/python-tgpt/actions/workflows/python-package.yml/badge.svg"/></a>
@@ -33,16 +33,16 @@ python-tgpt
 
 
 ```python
->>> import tgpt
->>> bot = tgpt.TGPT()
+>>> from tgpt.leo import LEO
+>>> bot = LEO()
 >>> bot.chat('Hello there')
 " Hello! It's nice to meet you. Is there something I can help you with or would you like to chat?"
 >>>
 ```
 
-This project enables seamless interaction with [LLaMA](https://ai.meta.com/llama/) AI without requiring an API Key.
+This project enables seamless interaction with free LLMs without requiring an API Key.
 
-The name *python-tgpt* draws inspiration from its parent project [tgpt](https://github.com/aandrew-me/tgpt), which operates on [Golang](https://go.dev/). Through this Python adaptation, users can effortlessly engage with LLaMA's capabilities *(alias **LEO** by Brave)*, fostering a smoother AI interaction experience.
+The name *python-tgpt* draws inspiration from its parent project [tgpt](https://github.com/aandrew-me/tgpt), which operates on [Golang](https://go.dev/). Through this Python adaptation, users can effortlessly engage with a number of free LLMs available as well as OpenAI's ChatGPT models, fostering a smoother AI interaction experience.
 
 ### Features
 
@@ -55,6 +55,17 @@ The name *python-tgpt* draws inspiration from its parent project [tgpt](https://
 - 🚀 Ready to use (No API key required)
 - ⛓️ Chained requests via proxy
 - 🤖 Pass [awesome-chatgpt prompts](https://github.com/f/awesome-chatgpt-prompts) easily
+- 🧠 Multiple LLM providers
+
+## Providers
+
+These are simply the hosts of the LLMs, which include:
+
+1. Leo *(By Brave)*
+2. Fakeopen
+3. Koboldai
+4. Opengpt
+5. OpenAI *(API key required)*
 
 ## Prerequisites
 
@@ -94,6 +105,8 @@ Choose one of the following methods to get started.
 
 This package offers a convenient command-line interface.
 
+> **Note** : `leo` is the default *provider*.
+
 - For a quick response:
 ```bash
 python -m tgpt generate "<Your prompt>"
@@ -104,6 +117,8 @@ This package offers a convenient command-line interface.
 python -m tgpt interactive "<Kickoff prompt (though not mandatory)>"
 ```
 
+Make use of flag `--provider` postfixed with the provider name of your choice. e.g `--provider koboldai`
+
 You can also simply use `tgpt` instead of `python -m tgpt`.
 
 Starting from version 0.1.2, `generate` is the default command if you issue a prompt, and `interactive` takes action if you don't. Therefore, something like this will generate a response and exit `$ tgpt "<Your prompt>"` while `$ tgpt` will fire up an interactive prompt.
@@ -119,8 +134,8 @@ Starting from version 0.1.2, `generate` is the default command if you issue a pr
 1. Generate a quick response
 
 ```python
-from tgpt import TGPT
-bot = TGPT()
+from tgpt.leo import LEO
+bot = LEO()
 resp = bot.chat('<Your prompt>')
 print(resp)
 # Output : How may I help you.
@@ -129,8 +144,8 @@ print(resp)
 2. Get back whole response
 
 ```python
-from tgpt import TGPT
-bot = TGPT()
+from tgpt.leo import LEO
+bot = LEO()
 resp = bot.ask('<Your Prompt')
 print(resp)
 # Output
@@ -146,8 +161,8 @@ Just add parameter `stream` with value `true`.
 1. Text Generated only
 
 ```python
-from tgpt import TGPT
-bot = TGPT()
+from tgpt.leo import LEO
+bot = LEO()
 resp = bot.chat('<Your prompt>', stream=True)
 for value in resp:
     print(value)
@@ -163,8 +178,8 @@ How may I help you today?
 2. Whole Response
 
 ```python
-from tgpt import TGPT
-bot = TGPT()
+from tgpt.leo import LEO
+bot = LEO()
 resp = bot.ask('<Your Prompt>', stream=True)
 for value in resp:
     print(value)
@@ -180,6 +195,75 @@ for value in resp:
 """
 ```
 
+> **Note** : All providers have got a common class methods.
+
+<details>
+
+<summary>
+
+Openai
+
+</summary>
+
+```python
+import tgpt.openai as openai
+bot = openai.OPENAI("<OPENAI-API-KEY>")
+print(bot.chat("<Your-prompt>"))
+```
+
+</details>
+
+
+<details>
+
+<summary>
+
+Koboldai
+
+</summary>
+
+```python
+import tgpt.koboldai as koboldai
+bot = koboldai.KOBOLDAI()
+print(bot.chat("<Your-prompt>"))
+```
+
+</details>
+
+
+<details>
+
+<summary>
+
+Fakeopen
+
+</summary>
+
+```python
+import tgpt.fakeopen as fakeopen
+bot = fakeopen.FAKEOPEN()
+print(bot.chat("<Your-prompt>"))
+```
+
+</details>
+
+<details>
+
+<summary>
+
+Opengpt
+
+</summary>
+
+```python
+import tgpt.opengpt as opengpt
+bot = opengpt.OPENGPT()
+print(bot.chat("<Your-prompt>"))
+```
+
+</details>
+
+
 
 </details>
 
@@ -192,8 +276,8 @@ To obtain more tailored responses, consider utilizing [optimizers](tgpt/utils.py
 </summary>
 
 ```python
-from tgpt import TGPT
-bot = TGPT()
+from tgpt.leo import LEO
+bot = LEO()
 resp = bot.ask('<Your Prompt>', optimizer='code')
 print(resp)
 ```
@@ -205,7 +289,7 @@ print(resp)
 You can still disable the mode:
 
 ```python
-bot = tgpt.TGPT(is_conversation=False)
+bot = koboldai.KOBOLDAI(is_conversation=False)
 ```
 
 Utilize the `--disable-conversation` flag in the console to achieve the same functionality.
````
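
The new `--provider` flag and the per-provider classes added in this README (`LEO`, `KOBOLDAI`, `FAKEOPEN`, `OPENGPT`, `OPENAI`) suggest a simple name-to-class dispatch. The stub classes and `get_provider` helper below are a hypothetical sketch of that pattern, not the package's actual internals:

```python
# Hypothetical provider registry: maps a --provider name to a class exposing
# the common chat() interface. Stub classes stand in for the real providers.
class LEO:
    def chat(self, prompt: str) -> str:
        return f"[leo] response to: {prompt}"

class KOBOLDAI:
    def chat(self, prompt: str) -> str:
        return f"[koboldai] response to: {prompt}"

PROVIDERS = {"leo": LEO, "koboldai": KOBOLDAI}

def get_provider(name: str = "leo"):
    """Instantiate a provider class by name; 'leo' is the default."""
    try:
        return PROVIDERS[name.lower()]()
    except KeyError:
        raise ValueError(f"Unknown provider {name!r}; choose from {sorted(PROVIDERS)}")

bot = get_provider("koboldai")
print(bot.chat("Hello there"))  # dispatches to the KOBOLDAI stub
```

A registry like this keeps the CLI flag handling to a single dictionary lookup instead of a chain of if/elif branches.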

setup.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -14,7 +14,7 @@
 
 setup(
     name="python-tgpt",
-    version="0.1.4",
+    version="0.2.0",
     license="MIT",
     author="Smartwa",
     maintainer="Smartwa",
```

src/tgpt/__init__.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,6 +1,6 @@
 from .utils import appdir
 
-__version__ = "0.1.4"
+__version__ = "0.2.0"
 __author__ = "Smartwa"
 __repo__ = "https://github.com/Simatwa/python-tgpt"
 
```
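
This commit bumps the version string in two places, `setup.py` and `src/tgpt/__init__.py`, which can drift apart in future releases. A small illustrative consistency check follows; the file contents are inlined as strings so the sketch is self-contained, whereas a real check would read the files from disk:

```python
import re

# Inlined stand-ins for the two files that carry the version string.
setup_py = 'setup(\n    name="python-tgpt",\n    version="0.2.0",\n'
init_py = '__version__ = "0.2.0"\n__author__ = "Smartwa"\n'

def extract(pattern: str, text: str) -> str:
    """Return the first captured group matching pattern, or raise if absent."""
    match = re.search(pattern, text)
    if match is None:
        raise ValueError("version string not found")
    return match.group(1)

setup_version = extract(r'version="([^"]+)"', setup_py)
init_version = extract(r'__version__ = "([^"]+)"', init_py)
assert setup_version == init_version  # fails loudly if the two files disagree
```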
