Bug Report 🐛
Users frequently receive empty and nondeterministic responses from the LLM. Responses to similar inputs are inconsistent and unpredictable, which undermines the reliability of the output. The temperature and other sampling hyperparameters should be tuned to make the output more deterministic.
Expected Behavior
The LLM should provide consistent and predictable output. Responses should not be empty, and the results should be more deterministic.
Current Behavior
Responses are sometimes empty, and output varies noticeably between runs with similar inputs.
Possible Solution
Optimize the temperature and other hyperparameters to enhance determinism in the LLM output.
Explore the providers' API documentation for a fix.
Steps to Reproduce
- Query the LLM through the prompt provider.
- Observe that some responses are empty.
- Note the variability in responses for similar inputs.
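The steps above can be sketched as a small harness. This is a sketch only: `send` is a placeholder standing in for the real provider client call (its name and signature are assumptions, not an actual API):

```python
from collections import Counter

def measure_variability(send, prompt, n=10, **params):
    """Send the same prompt n times and summarize the responses.

    `send` is a stand-in for the real provider call; it should
    return the response text (possibly empty).
    """
    responses = [send(prompt, **params) for _ in range(n)]
    # Count responses that are empty or whitespace-only
    empties = sum(1 for r in responses if not r or not r.strip())
    return Counter(responses), empties

# Example with a stubbed, intentionally flaky "provider":
replies = iter(["Paris", "", "Paris", "paris", ""])
counts, empties = measure_variability(
    lambda prompt, **p: next(replies), "Capital of France?", n=5)
print(empties)      # 2 empty responses out of 5
print(len(counts))  # 3 distinct responses for the same prompt
```

Run against the real provider, nonzero `empties` and multiple distinct responses for one prompt reproduce the reported behavior.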
Context (Environment)
Application
- VSCode (All versions)
Detailed Description
Possible Implementation
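A minimal sketch of one direction, assuming the provider accepts OpenAI-style sampling parameters (`temperature`, `top_p`, `seed`; exact names differ between providers, so check each provider's API documentation) and using a placeholder `send` callable in place of the actual client:

```python
# Hypothetical defaults aimed at more deterministic output; parameter
# names are assumptions and must be verified per provider.
DETERMINISTIC_PARAMS = {
    "temperature": 0.0,  # strongly prefer the most likely token
    "top_p": 1.0,        # disable nucleus-sampling truncation
    "seed": 42,          # fixed seed, where the provider supports one
}

def query_with_retry(send, prompt, max_attempts=3, **overrides):
    """Query through `send` with deterministic defaults, retrying
    on empty responses. `send` is a placeholder for the real
    provider client call.
    """
    params = {**DETERMINISTIC_PARAMS, **overrides}
    for _ in range(max_attempts):
        text = send(prompt, **params)
        if text and text.strip():
            return text
    raise RuntimeError(f"empty response after {max_attempts} attempts")
```

The retry wrapper addresses the empty responses, and the pinned sampling parameters address the nondeterminism; the two fixes are independent and can be adopted separately.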