Have the basics of ollama #4
Open
DanielMarchand wants to merge 39 commits into drudilorenzo:fix-and-improve from
Conversation
… for different backends
… is becoming relatively stable
…p schedule prompts
…rying to figure out Django view exceptions
…o a json for later model checking and evaluation
…raining the use of triples
…the only one I know that 'should' work
chowington referenced this pull request in crcresearch/agentic_collab on Sep 30, 2024: Support vLLM on EC2 instances
The basics work. The problem is that the code base is not well designed to handle custom prompting per model. For example, wake-up dates require longer token limits with the llama3 models than with the OpenAI ones. I also had to switch from the system role to the assistant role in the chat completion to get better answers, and there are other subtle differences in how the prompts need to be set up; it would be nice to discuss an overall architecture for this. Otherwise I think this is a really cool direction, letting people with decent GPUs (tested on a 3080; I'm sure a 4090 would be even more special) get nice results at no cost.
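One way the per-model differences could be factored out is a small profile table keyed by model name. This is only a hypothetical sketch, not code from this PR: the model names, roles, and token limits below are illustrative assumptions based on the behavior described above (llama3 needing longer limits and the assistant role, OpenAI models working with the system role).

```python
# Hypothetical per-backend prompt profiles; values are illustrative,
# not taken from the actual PR code.
MODEL_PROFILES = {
    # llama3 models needed longer token limits and the assistant role
    "llama3": {"role": "assistant", "max_tokens": 512},
    # OpenAI models worked with the system role and shorter limits
    "gpt-3.5-turbo": {"role": "system", "max_tokens": 128},
}

# Fallback used for any model without an explicit profile
DEFAULT_PROFILE = {"role": "system", "max_tokens": 128}


def build_request(model: str, prompt: str) -> dict:
    """Assemble a chat-completion request tuned to the backend model."""
    profile = MODEL_PROFILES.get(model, DEFAULT_PROFILE)
    return {
        "model": model,
        "messages": [{"role": profile["role"], "content": prompt}],
        "max_tokens": profile["max_tokens"],
    }
```

Centralizing the differences like this would keep the prompting code itself backend-agnostic, which might be a starting point for the architecture discussion.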
This is heavily based on joonspk-research#155 by ketsapiwiq. I had to do some aspects differently, but much of the logic is the same.