New Changes 🌟
- Offline Model Support: Integrated vLLM, providing access to offline models through the IntelliServer API. This allows vLLM to be combined with RAG and semantic search capabilities.
- Enhanced Chatbot UI: Added support for the o3-mini model and configuration options for a custom vLLM URL.
Install & Run
docker pull intellinode/intelliserver:latest
docker run -p 80:80 intellinode/intelliserver:latest
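Once the container is running, a chat request can be sent to the server. The sketch below builds a request payload for a vLLM-backed chat call; the endpoint path (`/chatbot/chat`), the field names, the `"vllm"` provider key, and the custom-URL option name are all illustrative assumptions, not the documented IntelliServer API, so check the project docs for the exact contract.

```python
import json

# Base URL comes from the docker run port mapping (-p 80:80).
SERVER_URL = "http://localhost:80"
ENDPOINT = SERVER_URL + "/chatbot/chat"  # hypothetical path, not confirmed

# Hypothetical payload for a vLLM-backed chat call.
# "provider", "input", and "custom_url" are assumed field names.
payload = {
    "provider": "vllm",                     # assumed provider key
    "input": {
        "messages": [{"role": "user", "content": "Hello"}],
    },
    "custom_url": "http://localhost:8000",  # assumed custom vLLM URL option
}

body = json.dumps(payload)
print(body)

# To actually send it (requires the container to be running):
# import urllib.request
# req = urllib.request.Request(
#     ENDPOINT, data=body.encode(),
#     headers={"Content-Type": "application/json"})
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```

The network call is left commented out so the snippet can be inspected without a running server; only the payload construction is shown as executable code.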