
Add separate model selection for code completion #1032


Open · wants to merge 6 commits into master

Conversation

@muravvv commented May 22, 2025

Add separate settings for code completion model:

  • Separate selection of the model provider for code completion, so that different providers can be used for chat and code completion (for example, a ProxyAI model for chat and other actions plus a local model for code completion)
  • Separate Ollama model selection for code completion

This pull request closes issues #733, #804 and #1025.

Also, while adding the test for the new Ollama model selection, I found a bug in the code completion tests: the code completion cache (CodeCompletionCacheService) was not reset between tests. Because I used a different model response for the same input (`rivate void main` instead of the `ublic void main` used in other tests), the subsequently executed tests (code completion with the OpenAI provider and code completion with the ProxyAI provider) started failing. To fix this, I added clearing of CodeCompletionCacheService to each test.

This bug most likely meant that some of the other tests were not working as intended: those tests were actually exercising the code completion cache instead of the real model query code. After my fix, these tests should start working as expected.
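Roughly, the fix looks like this in each affected test (a simplified sketch; `clear()` stands in for whatever cache-clearing call CodeCompletionCacheService actually exposes):

```kotlin
import com.intellij.openapi.components.service
import com.intellij.testFramework.fixtures.BasePlatformTestCase

// CodeCompletionCacheService comes from the plugin's own sources.
class CodeCompletionProviderTest : BasePlatformTestCase() {

    override fun setUp() {
        super.setUp()
        // Reset the completion cache so this test hits the real model query
        // path instead of a response cached by a previously executed test.
        // `clear()` is a simplified stand-in for the actual cache API.
        service<CodeCompletionCacheService>().clear()
    }
}
```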

muravvv added 3 commits May 18, 2025 17:48
also added the missing clearing of the code completion cache to other tests. This fixes the ProxyAI test: before the fix it always passed, regardless of the actual ProxyAI completion operation
@lenis0012

+1 for this change

@carlrobertoh (Owner)

Thank you! This is something that has been missing for ages.

The current PR seems to be working and doing what it is supposed to do. I'm mostly fine with merging this and publishing it in the next release. However, from a UI/UX perspective, I believe there should be another settings page with options to configure separate models for each feature, because autocompletion isn't the only feature that deserves a separate model. A similar option should also apply to:

  • Auto Apply
  • Edit Code (inline edits)
  • Commit Message Generation

...and perhaps a few more.

Having a dedicated new settings page for all these configurations would greatly improve the experience.
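To sketch the idea (all names below are hypothetical, not a concrete API proposal): the state behind such a page could be one provider/model pair per feature, while credentials stay on the existing per-provider pages.

```kotlin
// Hypothetical sketch of per-feature model selection; all names invented.
enum class Feature { CHAT, CODE_COMPLETION, AUTO_APPLY, EDIT_CODE, COMMIT_MESSAGE }

data class ModelSelection(
    var provider: String? = null, // references a provider configured once (API key, host, etc.)
    var model: String? = null,    // provider-specific model id
)

class FeatureModelSettings {
    // One selection per feature; an unset entry falls back to the global provider.
    val selections: MutableMap<Feature, ModelSelection> =
        Feature.values().associateWith { ModelSelection() }.toMutableMap()
}
```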

Unfortunately, I'll be off for a few days, so I will re-check this PR a bit later.

@muravvv (Author) commented Jun 7, 2025

Yes, I also don't like the resulting UI, but I have no idea how to make it better.

My first idea was to duplicate the whole Providers category: create a Code completion providers category with its own Selected provider setting and pages for each provider:
[screenshot: separate Code completion providers category]
But a significant number of parameters (API keys, hosts, and so on) are actually common to all the actions that get separate models, and it is not good to require a user to change the same API key in five places when their API key for one of the providers changes.

A good example of such a settings page is the Continue plugin:
[screenshot: Continue plugin settings page]
In that variant, the model for each task is selected from a unified list of all models from all providers (in their plugin this list is defined manually in YAML config files). But this approach cannot be used in ProxyAI, because for some providers the model name is entered as free text, so a complete list of available models cannot be built.
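To illustrate the problem (hypothetical types, just for the discussion): a Continue-style unified dropdown only works if every provider can enumerate its models, and some cannot.

```kotlin
// Hypothetical illustration: why a unified model list breaks down.
sealed interface AvailableModels {
    data class Known(val models: List<String>) : AvailableModels // enumerable (e.g. fetched from the provider)
    object FreeText : AvailableModels                            // the model name is typed by the user
}

fun buildUnifiedList(providers: Map<String, AvailableModels>): List<String> =
    providers.flatMap { (name, available) ->
        when (available) {
            is AvailableModels.Known -> available.models.map { "$name / $it" }
            AvailableModels.FreeText -> emptyList() // cannot be enumerated, so the list is incomplete
        }
    }
```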

So if there is an idea for how to make a good UI, I can implement it (if I have time for this, of course). Especially since the new Auto Apply is a great feature and it also requires a separate model (as many good models cannot generate valid diff blocks).
