Hello, I tried to use Tabby with my AMD Ryzen 3900 CPU (12 cores / 24 threads), with the goal of enabling context awareness for my projects; as can be seen in section 5, this was not successful. To be honest, I'm a bit fed up with it right now :)
But I created this document anyway; perhaps it is useful for someone, and perhaps someone can point out what I'm doing wrong or where my thinking is off!
Anyway, have a great day!
Nico
🚀 Tabby Setup Guide
for Local C++ Codebase Search & AI Assistance
This guide will help you set up Tabby for local, privacy-friendly, code-aware AI assistance on your C++ project.
You’ll be able to ask questions like “Where do I use function xxx?” and get instant, context-aware answers (I hope).
Tabby supports CodeLlama-13B, CodeLlama-7B, and other models; see here.
1. Prerequisites
This document focuses on Windows.
2. Download & Install Tabby (Windows)
Download the latest Windows release (tabby-windows-amd64.exe)
Unzip it into a folder, e.g. C:\apps\tabby\
Rename the executable tabby-windows-amd64.exe to tabby.exe.
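If you prefer to do this from PowerShell, here is a minimal sketch (assuming the release was downloaded as a zip named tabby-windows-amd64.zip into your Downloads folder; adjust the names to whatever you actually downloaded):

# Create the target folder and extract the downloaded release into it
New-Item -ItemType Directory -Force -Path "C:\apps\tabby" | Out-Null
Expand-Archive -Path "$env:USERPROFILE\Downloads\tabby-windows-amd64.zip" -DestinationPath "C:\apps\tabby"
# Rename the executable so the later commands are shorter
Rename-Item -Path "C:\apps\tabby\tabby-windows-amd64.exe" -NewName "tabby.exe"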
3. Configure and start Tabby
3.1 Config.toml
Create a config.toml (it is not created by default) in the Windows folder %USERPROFILE%\.tabby\. For example, I tried this:
# My system
# CPU: Ryzen 3900, 12 cores / 24 logical, ~4 GHz (light OC)
# 32 GB RAM
# 4 TB SSD (x2)
# AMD Radeon RX 6600, 8 GB
#
# Using Tabby version 0.31.0 (also tried version 0.30.0)
# ===============================
# Tabby Server Config
# ===============================
# - Running `C:\Apps\tabby\tabby.exe serve` will use this config
# - for models see: https://github.com/tabbyml/registry-tabby?tab=readme-ov-file
# Without this the VS Code Tabby extension might not be able to connect (even with the extension setting set correctly)
[server]
host = "0.0.0.0"
port = 8085
completion_timeout = 300 # If using this block, this must be present!
#
#
[model.chat.local]
# model_id = "TabbyML/CodeGemma-7B-Instruct" # DOES NOT WORK FOR MY PC (CPU), empty responses
# model_id = "TabbyML/Qwen2.5-Coder-1.5B-Instruct" # OKAY
# model_id = "Qwen2.5-Coder-7B-Instruct" # Okay speed GUI-Control
model_id = "Qwen2.5-Coder-14B-Instruct" # Oaky slow, just acceptable
device = "cpu"
num_threads = 22 # using 14-16 of 24 avoids thread overhead; tested increasing from 14 to 18 (still not sure whether using most of them helps)
context_size = 1024 # context tokens (tested going back to 512 from 2048)
batch_size = 1 # a batch size of 2 did not work for me
temperature = 0.0 # sampling randomness
top_p = 0.9 # narrower sampling pool to reduce loops
max_tokens = 512 # maximum tokens per completion
repetition_penalty = 1.2
[model.embedding.local]
model_id = "TabbyML/Nomic-Embed-Text"
# Local model setup for completion
[model.completion.local]
# model_id = "TabbyML/CodeLlama-7B" # 2 large models is to much for CPU based system(at least mine)
# model_id = "TabbyML/CodeLlama-13B" # overkill, or perhaps acceptable when using light one for chat ?
model_id = "TabbyML/Qwen2.5-Coder-1.5B" # Small, fast, good balance
device = "cpu"
num_threads = 12
context_size = 512
batch_size = 1
temperature = 0.1
top_p = 0.9
max_tokens = 256 # completions don’t need very long outputs (128 also okay?)
# Repository context provider (your project)
# This will create a local copy of the repository, based on the git_url, dedicated to Tabby
# - Make sure you have at least a local Git repo that is committed!
# - Windows location of the copied repository: C:\Users\[name]\.tabby\repositories
# - After starting the Tabby server, see also this location: http://localhost:8085/settings/providers/git
# - Here you also need to specify the path to the repo (I use the original, not the copy! not sure that is right)
# This seems to duplicate the setting below, so I have no idea what the difference is!?
[[repositories]]
name = "Skia"
# Change the path to your Git project root (don't include /.git!)
git_url = "file:///D:/CPP/Projects/OPEN_SOURCE/GUI-Control"
# 3. Completion behavior
[completion]
max_input_length = 2048
max_decoding_tokens = 128
# 4. AI personality / prompt tuning
[answer]
system_prompt = """
You are Tabby, a coding assistant specialized in C and C++.
Always give concise, accurate, and practical answers.
If the user asks 'where do I initialize Skia', include file/line numbers if present in the repo context.
"""
Note 1: When starting Tabby with extra parameters, this configuration will still be applied.
Note 2: By default the models are stored at %USERPROFILE%\.tabby\models\TabbyML; setting the variable TABBY_DATA_DIR did not work.
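If you still want to experiment with moving that directory, one way is to set the variable for the current PowerShell session before launching the server (as said above, this did not work for me, so treat it purely as something to try):

# Hypothetical data directory; whether Tabby honors TABBY_DATA_DIR is exactly what is in question here
$env:TABBY_DATA_DIR = "D:\tabby-data"
C:\Apps\tabby\tabby.exe serve --port 8085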
3.2 Start and configure Tabby
From your PowerShell CLI, start Tabby with:
C:\Apps\tabby\tabby.exe serve --port 8085 (access the web interface at http://localhost:8085).
This will by default use the config.toml created above. Alternatively, you can start a specific model with: tabby serve --model TabbyML/CodeLlama-7B
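To quickly verify that the server is actually up (and see which models were loaded), you can query it from PowerShell; this assumes the /v1/health endpoint, which recent Tabby versions expose:

# Should return JSON containing the Tabby version and the loaded model names
Invoke-RestMethod -Uri "http://localhost:8085/v1/health" -Method Get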
3.3 Index Your Project
Now we will try to index our C++ project, with the aim of indexing the project files so that they can be included in searches, letting you ask questions like: where did I initialize Skia in my project (file and line please)?
In your project folder, create a file .tabbyignore; here you can use the Git-style ignore syntax to exclude files/folders from indexing. Sample:
# Exclude build artifacts and dependencies
build/
build-l/
build-win/
build-win-vs/
*.exe
*.dll
depw/
# Don't ignore these folders (including subfolders)
!depw/skia/include/
!depw/glfw-3.4/include/
Account button -> Settings -> Context provider -> Git -> Create (or use this direct link)
Define a Git URL pointing to the project root (where the .git folder lives, see note below) and set a repository name.
Press the Play button to create or update the folder so it gets indexed
Check if your local repository has been created
localhost:8085
Click on your avatar and select "Code browser"
You should see your project; click on it to see what is included.
*Note/question: I'm still not sure whether it should be the 'shadow' location defined in config.toml or the real project location (the latter is what I tried).
*Warning: The 'Repository context provider' requires that this folder contains at least a local Git repository that is committed! Tabby only indexes files that are committed.
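If your project folder is not a Git repository yet, this minimal sketch (run from the project root, e.g. D:/CPP/Projects/OPEN_SOURCE/GUI-Control) creates one and commits the current state so Tabby has something to index:

# Initialize a local repository and commit everything not excluded by .gitignore
git init
git add .
git commit -m "Initial commit for Tabby indexing"
# Only tracked (committed) files can be indexed, so check that your sources are listed
git ls-files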
4. Use Tabby in VSC
4.1 Add the VS Code Extension
In VS Code, go to Extensions (Ctrl+Shift+X)
Search for Tabby and install the official extension
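Alternatively, the extension can be installed from the command line; I believe the extension ID is TabbyML.vscode-tabby, but verify it in the Marketplace if this fails:

# Install the Tabby extension via the VS Code CLI (the extension ID here is my assumption)
code --install-extension TabbyML.vscode-tabby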
4.2 Configure the Extension
Open VS Code settings (Ctrl+,)
Search for tabby
Set Tabby: Server Endpoint to http://localhost:8085
5. Ask Code-Aware Questions
Open your project in VS Code.
Use the Tabby sidebar or command palette (Ctrl+Shift+P → “Tabby: Ask”)
Use the @ sign to indicate that the search should happen in your project. Use the repository name defined in 3.3 (or select the project location from the combobox??).
Then ask questions like:
@Repository-name do I have a main.cpp file in my project (result: general answer)
do I have a main.cpp file in my project - @Repository-name (result: general answer)
@Repository-name Where do I use function glfwCreateWindow? (result: general answer)
Show all usages of BufferTarget
Find all places Skia is initialized
do I have a main.cpp file in my project
Tabby will search your indexed codebase and provide file/line references.
WARNING / OBSERVATION
Probably I did something wrong, but this did not work. Most of the time it states that the AI has no access to local files; once it said 'Yes, you appear to have a main.cpp file in your project', but when asked to show its contents it could not do so.
@Repository-name Where do I use function glfwCreateWindow? (result: general answer)
Using the same question as above and selecting my project from the combobox (I have 2 items, there should be one), the system indicates that it searched 6 items (5 *.md items and 1 *.txt item, but I definitely have more files, including *.cpp and *.h; I also checked this in the web interface at http://localhost:8085/files) and then continues with a general answer.
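One thing worth checking in this situation is whether the .cpp/.h files are actually committed, because only tracked files end up in the index; for example, from the project root:

# List the tracked C++ sources and headers; if this prints nothing, they were never committed and cannot be indexed
git ls-files -- "*.cpp" "*.h" "*.hpp"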
6. (Optional) Enable Inline Suggestions
Tabby can also provide code completions and inline suggestions as you type.
7. Troubleshooting
If Tabby can’t answer code questions, make sure:
The server is running (http://localhost:8085)
The project is indexed
The VS Code extension is configured to the correct endpoint
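To rule out the VS Code extension entirely, you can also call the completion API directly from PowerShell. This is a sketch assuming the /v1/completions endpoint and that no auth token is required (on a server with authentication enabled you would also need an Authorization: Bearer <token> header, with the token taken from the web UI):

# Minimal direct completion request; a non-empty 'choices' list means the completion model itself works
$body = @{
    language = "cpp"
    segments = @{ prefix = "int main() {"; suffix = "}" }
} | ConvertTo-Json -Depth 3
Invoke-RestMethod -Uri "http://localhost:8085/v1/completions" -Method Post -ContentType "application/json" -Body $body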
8. Privacy & Local-Only
All code and queries stay on your machine.
No cloud required unless you opt-in to cloud models.
You’re ready!
Tabby will now let you search, navigate, and ask questions about your C++ project.