Releases: janhq/jan

0.7.3

13 Nov 12:31
0fd9418

What's Changed

0.7.2

16 Oct 05:27
6d6a1fd

What's Changed

Full Changelog: v0.7.1...v0.7.2

0.7.1

03 Oct 16:18
dc4de43

What's Changed

Full Changelog: v0.7.0...v0.7.1

0.7.0

02 Oct 09:30
f537429

What's Changed

0.6.10

18 Sep 08:06
b9f658f

What's Changed

New Contributors

Full Changelog: v0.6.9...v0.6.10

0.6.9

28 Aug 10:22
5fae954

What's Changed

New Contributors

Full Changelog: v0.6.8...v0.6.9

0.6.8

14 Aug 09:29
6ac3d6d

What's Changed

  • Sync Release/v0.6.6 into dev by @louis-menlo in #5973
  • fix: thread sorting issue by @cmppoon in #5976
  • ✨enhancement: blurry logo model provider by @urmauur in #5986
  • fix: missing text color responsive left panel by @urmauur in #5989
  • fix assistant dropdown onClick not triggered consistently by @cmppoon in #5991
  • Sync Release/v0.6.6 into dev by @louis-menlo in #5997
  • Add comprehensive Products section and reorganize documentation structure by @ramonpzg in #5958
  • Sync release/v0.6.6 into dev by @louis-menlo in #6004
  • refactor: clean up cortex by @louis-menlo in #6003
  • ci: enable PR trigger for dev branch in tauri nightly workflow by @Minh141120 in #6014
  • fix: react state loop from hooks useMediaQuery by @urmauur in #6031
  • fix: wrong description for cont_batching setting by @urmauur in #6034
  • Fix: Llama.cpp server hangs on model load by @qnixsynapse in #6030
  • feat(docs): Docs v2 Astro migration by @ramonpzg in #5950
  • feat: Improve llama.cpp argument handling and add device parsing tests by @qnixsynapse in #6041
  • fix: show error toast message on download error by @cmppoon in #6044
  • fix: Generate A Response button does not show context size error dialog by @urmauur in #6029
  • fix connected servers status not in sync when edit mcp json by @cmppoon in #6020
  • feat: Add check for AVX2 instruction support for MCP on MacOS with Intel CPUs by @shmutalov in #5530
  • chore: skip nightly build workflow for external contributor by @Minh141120 in #6050
  • fix: support missing llamacpp cuda backends by @louis-menlo in #6046
  • ci: disable autoqa on nightly build by @Minh141120 in #6051
  • ✨feat: recommended label llamacpp setting by @urmauur in #6052
  • Fix: Improve Llama.cpp model path handling and error handling by @qnixsynapse in #6045
  • ✨feat: customize Jinja template per model instead of at provider level by @urmauur in #6053
  • fix: Jan hub model detail and deep link by @louis-menlo in #6024
  • fix: Add conditional Vulkan support check for better GPU compatibility by @qnixsynapse in #6066
  • Readme Update and Additional Model Providers by @ramonpzg in #6064
  • fix: gpt-oss thinking block by @urmauur in #6071
  • fix: should not include reasoning text in the chat completion request by @louis-menlo in #6072
  • chore: update workflow name by @Minh141120 in #6073
  • ci: deprecate jan docs new release workflow in favor of jan-docs by @Minh141120 in #6078
  • Add gpt-oss local installation blog post by @eckartal in #6075
  • feat: Add support for overriding tensor buffer type by @qnixsynapse in #6062
  • refactor: move session management & port allocation to backend by @qnixsynapse in #6083
  • 🐛fix: onboarding loop by @urmauur in #6054
  • refactor: Use more precise terminology in API server logs by @qnixsynapse in #6085
  • fix: update UX recommended backend label in description setting by @urmauur in #6088
  • feat: Introduce structured error handling for llamacpp extension by @qnixsynapse in #6087
  • refactor: clean up unused hardware apis by @louis-menlo in #6089
  • added v0.6.7 changelog and jupyter mcp tutorial by @ramonpzg in #6116
  • fix: Prevent accidental message submitting on ChatInput for IME users by @B0sh in #6109
  • adding handbook, blog, and changelog to jan docs v2 by @ramonpzg in #6118
  • fix: bring back gpu detection - app data relocate issue by @louis-menlo in #6119
  • ci: update generate release note and jan docs release by @Minh141120 in #6121
  • docs: Update 3-epic.md by @freelerobot in #6124
  • docs: Update 4-goal.md by @freelerobot in #6125
  • ci: add path for tauri nightly build by @Minh141120 in #6130
  • fix: HF token is not used while searching repositories by @louis-menlo in #6137
  • fix: factory reset process got blocked by @louis-menlo in #6140
  • fix: visualize readme content for private repo with HF token by @louis-menlo in #6141
  • fix: Improve error message for invalid version/backend format by @qnixsynapse in #6149
  • fix: migrations model setting by @urmauur in #6165
  • fix: handle modelId special characters by @urmauur and @louis-menlo in #6172
  • fix: feature toggle auto updater by @Minh141120 and @louis-menlo in #6175

New Contributors

Full Changelog: v0.6.6...v0.6.8

0.6.7

06 Aug 11:04
v0.6.7
804b0f0

Changes

  • fix: should not include reasoning text in the chat completion request @louis-menlo (#6072)
  • fix: gpt-oss thinking block @urmauur (#6071)
  • fix: react state loop from hooks useMediaQuery @urmauur (#6031)

Contributors

@louis-menlo and @urmauur

0.6.6

31 Jul 11:17
v0.6.6
4bcfa84

Changes

  • hotfix: regression issue with colon in model name @louis-menlo (#6008)
  • Add RunEvent::Exit event to tauri to handle macos context menu exit @qnixsynapse (#6005)
  • fix: remove auto refresh model custom provider @urmauur (#6002)
  • fix: generate response button disappear on tool call @louis-menlo (#5988)
  • fix: title tooltip MCP edit json @urmauur (#5987)
  • fix: download progress missing when left panel scrollable @urmauur (#5984)
  • fix: failed provider models list due to broken cortex import @louis-menlo (#5983)
  • Sync Release/v0.6.6 into dev @louis-menlo (#5973)
  • fix: use direct process termination instead of console events on Windows @qnixsynapse (#5972)
  • fix: rename thread dialog shows previous thread @urmauur (#5963)
  • chore: allow all HTTPS image sources in img-src directive @Minh141120 (#5970)
  • feat: Enhance port selection with availability check @qnixsynapse (#5966)
  • fix: csp including img.shields.io and cdn-uploads.huggingface.co in img-src directive @Minh141120 (#5967)
  • ci: tolerate artifact upload @Minh141120 (#5969)
  • fix: assistant with last used and fix metadata @urmauur (#5955)
  • fix: search models result in hub should be sorted by weight @louis-menlo (#5954)
  • fix: factory reset fail with access denied error @louis-menlo (#5952)
  • fix: set autoUnload in onLoad() @qnixsynapse (#5956)
  • fix: update edge case experimental feature MCP @urmauur (#5951)
  • fix: correctly apply auto_unload setting from config @qnixsynapse (#5953)
  • fix: Prevent race condition with auto-unload during rapid model loading @qnixsynapse (#5947)
  • chore: uninstall when upgrading windows installer @Minh141120 (#5945)
  • fix: openrouter unselect itself @louis-menlo (#5943)
  • fix: tool approval params scrollable @urmauur (#5941)
  • fix: migrate app settings to the new version @louis-menlo (#5936)
  • fix: Remove sInfo from activeSessions before unloading @qnixsynapse (#5938)
  • fix: update default GPU toggle, and simplify state @urmauur (#5937)
  • chore: revert back to passive mode on windows installer @Minh141120 (#5934)
  • fix: update ui version_backend, mem usage hardware @urmauur (#5932)
  • fix: Frontend updates when llama.cpp backend auto-downloads @qnixsynapse (#5926)
  • fix: calculation memory on hardware and system monitor @urmauur (#5922)
  • fix: persist model capabilities refresh app @urmauur (#5918)
  • fix: validate name assistant and improve area clickable @urmauur (#5920)
  • fix: Allow N-GPU Layers (NGL) to be set to 0 in llama.cpp @qnixsynapse (#5907)
  • fix: models hub should show latest data only @louis-menlo (#5925)
  • fix: Persist 'Auto-Unload Old Models' setting in llama.cpp @qnixsynapse (#5906)
  • feat: Enhance Llama.cpp backend management with persistence @qnixsynapse (#5886)
  • Chore cua mac runner @hiento09 (#5888)
  • fix: provider settings should be refreshed on page load @louis-menlo (#5887)
  • 🐛fix: get system info and system usage @urmauur (#5884)
  • fix: gpu detected from backend version @urmauur (#5882)
  • fix: bring back HF repo ID search in Hub @louis-menlo (#5880)
  • chore: revert app artifact name for macos linux and windows builds @Minh141120 (#5878)
  • feat: add support for querying available backend devices @qnixsynapse (#5877)
  • fix: llama.cpp backend shows blank list sometime @louis-menlo (#5876)
  • ci: rename app github artifact on windows and linux build @Minh141120 (#5875)
  • ci: autoqa github artifact @Minh141120 (#5873)
  • fix: jan should have a general assistant instruction @louis-menlo (#5872)
  • fix: tmp download file should be removed on cancel @louis-menlo (#5849)
  • 🐛fix: remove sampling parameters from llamacpp extension @urmauur (#5871)
  • 🐛fix: update vulkan active syntax @urmauur (#5869)
  • fix: app should not show manually deleted models @louis-menlo (#5868)
  • feat: migrate cortex models to llamacpp extension @louis-menlo (#5838)
  • fix: charmap encoding @Minh141120 (#5865)
  • fix: HuggingFace provider should be non-deletable @louis-menlo (#5856)
  • fix: gemini tool call support - version bump @louis-menlo (#5848)
  • Fix: engine unable to find DLLs when running on Windows @qnixsynapse (#5863)
  • chore: update build appimage script @Minh141120 (#5866)
  • ✨enhancement: dialog model error trigger from provider screen @urmauur (#5858)
  • fix: support load model configurations @urmauur (#5843)
  • fix: delete all thread should not include fav @urmauur (#5864)
  • Chore: enrich autoqa log @hiento09 (#5862)
  • refactor: Improve Llama.cpp backend management and auto-update @qnixsynapse (#5845)
  • fix: autoqa prompt template @Minh141120 (#5854)
  • feat: add vcruntime for windows installer @Minh141120 (#5852)
  • ✨enhancement: auto focus always allow action from tool approval dialog and add req parameters @urmauur (#5836)
  • enhancement: better error page component @urmauur (#5834)
  • chore: sync make build with dev @Minh141120 (#5847)
  • refactor: standardize build process and remove build-tauri target @Minh141120 (#5846)
  • fix: custom tauri nsis template CheckIfAppIsRunning macro @Minh141120 (#5840)
  • fix: update @tauri-apps/cli to newest version to fix AppImage download @Minh141120 (#5839)
  • fix: prevent terminal window from opening on model load on Windows @qnixsynapse (#5837)
  • feat: add claude-4 @louis-menlo (#5829)
  • feat: support per-model overrides in llama.cpp load() @qnixsynapse (#5820)
  • fix: llama.cpp integration model load and chat experience @louis-menlo (#5823)
  • test: deprecate webdriver test in favor of auto qa using CUA @louis-menlo (#5825)
  • Revert "chore(deps): update rand requirement from 0.8 to 0.9 in /src-tauri" @louis-menlo (#5824)
  • fix: Legacy threads show on top of new threads (#5696) @louis-menlo (#5810)
  • fix: llama.cpp backend download on windows @louis-menlo (#5813)
  • fix: dependabot should just update security patch @louis-menlo (#5814)
  • chore(deps): update rand requirement from 0.8 to 0.9 in /src-tauri @dependabot (#5399)
  • docs: Add Instruction for Toggling Experimental Features Before Toggling MCP Servers @bytrangle (#5771)
  • chore(deps): bump @radix-ui/react-hover-card from 1.1.11 to 1.1.14 @dependabot (#5603)
  • Fix autoqa lib dependencies @hiento09 (#5812)
  • refactor: simplify proxy settings by removing unused SSL verification options @louis-menlo (#5809)
  • feat: Add Hugging Face as a provider @gary149 (#5808)
  • fix: Improve stream error handling and parsing @qnixsynapse (#5807)
  • feat: add autoqa @hiento09 (#5779)
  • set line number userSelect to none so that code can be copied without line numbers @ethanova (#5782)
  • feat: add model load error handling to improve UX @louis-menlo (#5802)
  • fix: Add --reasoning-format none to support rendering of reasoning content @qnixsynapse (#5803)
  • feat: proxy support for the new downloader @louis-menlo (#5795)
  • Sync release/0.6.5 into dev to start new development cycle @louis-menlo (#5801)
  • refactor: move thinking toggle to runtime settings for dynamic control @qnixsynapse (#5800)
  • test: deprecate webdriver test in favor of auto qa using CUA @louis-menlo (#5797)
  • Documentation Updates for v0.6.5 @ramonpzg (#5799)

Contributors

@Minh141120, @bytrangle, @dependabot, @dependabot[bot], @ethanova, @gary149, @hiento09, @louis-menlo, @qnixsynapse, @ramonpzg and @urmauur

0.6.5

17 Jul 09:02
v0.6.5
c283979

Changes

  • ✨enhancement: support base layout responsive UI @urmauur (#5472)
  • ✨enhancement: setting responsive @urmauur (#5615)
  • ✨feat: bump version of llama.cpp - b5857 @louis-menlo (#5742)
  • 🐛fix: revert back stat hover for three dots @urmauur (#5777)
  • 🐛fix: download icon when left panel close @urmauur (#5776)
  • 🐛fix: revert installation mode in NSIS template @Minh141120 (#5778)
  • 🐛fix: make three dots default show 3 dots and can trigger with right click @urmauur (#5712)
  • 🐛fix: custom based url and header by upgrade token.js version @samhvw8 (#5596)
  • 🐛fix: update base URL for Anthropic provider @samhvw8 (#5600)
  • 🐛fix: Tauri AppImage failing to render on Wayland + Mesa @DistractionRectangle (#5463)
  • 🐛fix: fetch models from custom provider causes app to crash @louis-menlo (#5791)
  • 🔧 config: all yml to md for issue template @LazyYuuki (#5661)
  • 🔧 config: fix bug template @LazyYuuki (#5658)
  • 🔧 config: from yml to md for template @LazyYuuki (#5657)

Contributors

@LazyYuuki, @Minh141120, @louis-menlo, @samhvw8, @DistractionRectangle and @urmauur