@dinhlongviolin1 (Contributor)

Describe Your Changes

Fixes Issues

  • Closes #
  • Closes #

Self Checklist

  • Added relevant comments, esp in complex areas
  • Updated docs (for bug fixes / features)
  • Created issues for follow-up changes or refactoring needed

dinhlongviolin1 and others added 30 commits September 12, 2025 13:07
* fix: Polish translation (#6421)

* ci: remove paths triggered for jan server

* ci: fix typo in branch name for jan web

---------

Co-authored-by: Piotr Orzechowski <[email protected]>
Co-authored-by: Minh141120 <[email protected]>
Co-authored-by: Nguyen Ngoc Minh <[email protected]>
Merge dev-web branch into prod-web
Sync dev with dev web (google auth)
* fix: avoid error validate nested dom

* fix: correct context shift flag handling in LlamaCPP extension (#6404) (#6431)

* fix: correct context shift flag handling in LlamaCPP extension

The previous implementation added the `--no-context-shift` flag when `cfg.ctx_shift` was disabled, which conflicted with the llama.cpp CLI where the presence of `--context-shift` enables the feature.
The logic is updated to push `--context-shift` only when `cfg.ctx_shift` is true, ensuring the extension passes the correct argument and behaves as expected.
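A minimal sketch of the corrected flag handling (the function and config shape are illustrative, not the extension's actual API):

```typescript
// Hypothetical sketch: llama.cpp enables context shift by flag *presence*,
// so the flag must only be pushed when the setting is on.
interface LlamacppConfig {
  ctx_shift: boolean
}

function buildContextShiftArgs(cfg: LlamacppConfig): string[] {
  const args: string[] = []
  // Before the fix, `--no-context-shift` was pushed when ctx_shift was
  // disabled, which conflicted with the CLI's presence-based semantics.
  if (cfg.ctx_shift) {
    args.push('--context-shift')
  }
  return args
}
```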

* feat: detect model out of context during generation

---------

Co-authored-by: Dinh Long Nguyen <[email protected]>

* chore: add install-rust-targets step for macOS universal builds

* fix: make install-rust-targets a dependency

* enhancement: copy MCP permission

* chore: make action button capitalized

* Update web-app/src/locales/en/tool-approval.json

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* chore: simplify macos workflow

* fix: KVCache size calculation and refactor (#6438)

- Removed the unused `getKVCachePerToken` helper and replaced it with a unified `estimateKVCache` that returns both total size and per‑token size.
- Fixed the KV cache size calculation to account for all layers, correcting previous under‑estimation.
- Added proper clamping of user‑requested context lengths to the model’s maximum.
- Refactored VRAM budgeting: introduced explicit reserves, fixed engine overhead, and separate multipliers for VRAM and system RAM based on memory mode.
- Implemented a more robust planning flow with clear GPU, Hybrid, and CPU pathways, including fallback configurations when resources are insufficient.
- Updated default context length handling and safety buffers to prevent OOM situations.
- Adjusted the usable-memory percentage to 90% and refined logging for easier debugging.
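The unified estimate described above can be sketched as follows; the field names, parameters, and the fp16 (2-byte) assumption are ours for illustration, not the extension's actual implementation:

```typescript
// Illustrative KV-cache estimate returning both total and per-token size.
interface KVCacheEstimate {
  perTokenBytes: number
  totalBytes: number
}

function estimateKVCache(
  nLayers: number,    // all layers must be counted (the earlier bug under-estimated)
  nKvHeads: number,   // number of KV attention heads
  headDim: number,    // dimension per head
  requestedCtx: number,
  maxCtx: number
): KVCacheEstimate {
  // Clamp the user-requested context length to the model's maximum.
  const ctx = Math.min(requestedCtx, maxCtx)
  // K and V each store nKvHeads * headDim values per layer, 2 bytes each (fp16).
  const perTokenBytes = 2 /* K+V */ * nLayers * nKvHeads * headDim * 2 /* bytes */
  return { perTokenBytes, totalBytes: perTokenBytes * ctx }
}
```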

* fix: detect allocation failures as out-of-memory errors (#6459)

The Llama.cpp backend can emit the phrase “failed to allocate” when it runs out of memory.
Adding this check ensures such messages are correctly classified as out‑of‑memory errors,
providing more accurate error handling for CPU backends.
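The classifier change amounts to one extra phrase check; this sketch uses hypothetical names, with only the "failed to allocate" phrase taken from the commit:

```typescript
// Minimal OOM classifier sketch: CPU backends may report an allocation
// failure instead of a GPU-style "out of memory" string.
function isOutOfMemoryError(log: string): boolean {
  const msg = log.toLowerCase()
  return msg.includes('out of memory') || msg.includes('failed to allocate')
}
```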

* fix: pathname file install BE

* fix: set default memory mode and clean up unused import (#6463)

Use fallback value 'high' for memory_util config and remove unused GgufMetadata import.

* fix: auto update should not block popup

* fix: remove log

* fix: improve edit message with attachment image

* fix: improve edit message with attachment image

* fix: type imageurl

* fix: immediate dropdown value update

* fix: linter

* fix/validate-mmproj-from-general-basename

* fix/revalidate-model-gguf

* fix: loader when importing

* fix/mcp-json-validation

* chore: update locale mcp json

* fix: new extension settings aren't populated properly (#6476)

* chore: embed webview2 bootstrapper in tauri windows

* fix: validate type mcp json

* chore: prevent click outside for edit dialog

* feat: add qa checklist

* chore: remove old checklist

* chore: correct typo in checklist

* fix: correct memory suitability checks in llamacpp extension (#6504)

The previous implementation mixed model size and VRAM checks, leading to inaccurate status reporting (e.g., false RED results).
- Simplified import statement for `readGgufMetadata`.
- Fixed RAM/VRAM comparison by removing unnecessary parentheses.
- Replaced ambiguous `modelSize > usableTotalMemory` check with a clear `totalRequired > usableTotalMemory` hard‑limit condition.
- Refactored the status logic to explicitly handle the CPU‑GPU hybrid scenario, returning **YELLOW** when the total requirement fits combined memory but exceeds VRAM.
- Updated comments for better readability and maintenance.
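The corrected status logic can be summarized in a small sketch; the function and threshold names are assumptions based on the commit description:

```typescript
// Status sketch: RED when the hard limit is exceeded, YELLOW for the
// CPU-GPU hybrid case (fits combined memory but exceeds VRAM), GREEN otherwise.
type MemStatus = 'GREEN' | 'YELLOW' | 'RED'

function memorySuitability(
  totalRequired: number,
  usableVram: number,
  usableTotalMemory: number
): MemStatus {
  if (totalRequired > usableTotalMemory) return 'RED' // hard limit
  if (totalRequired > usableVram) return 'YELLOW'     // hybrid: spills to system RAM
  return 'GREEN'                                      // fits entirely in VRAM
}
```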

* fix: thread rerender issue

* chore: clean up console log

* chore: uncomment irrelevant fix

* fix: linter

* chore: remove duplicated block

* fix: tests

* Merge pull request #6469 from menloresearch/fix/deeplink-not-work-on-windows

fix: deeplink issue on Windows

* fix: reduce unnecessary rerenders due to current thread retrieval

* fix: reduce app layout rerender due to router state update

* fix: avoid re-rendering the entire app layout on route change

* clean: unused import

* Merge pull request #6514 from menloresearch/feat/web-gtag

feat: Add GA Measurement and change keyboard bindings on web

* chore: update build tauri commands

* chore: remove unused task

* fix: should not rerender thread message components when typing

* fix rerender issue

* direct tokenspeed access

* chore: sync latest

* feat: Add Jan API server Swagger UI (#6502)

* feat: Add Jan API server Swagger UI

- Serve OpenAPI spec (`static/openapi.json`) directly from the proxy server.
- Implement Swagger UI assets (`swagger-ui.css`, `swagger-ui-bundle.js`, `favicon.ico`) and a simple HTML wrapper under `/docs`.
- Extend the proxy whitelist to include Swagger UI routes.
- Add routing logic for `/openapi.json`, `/docs`, and Swagger UI static files.
- Update whitelisted paths and integrate CORS handling for the new endpoints.

* feat: serve Swagger UI at root path

The Swagger UI endpoint previously lived under `/docs`. The route handling and
exclusion list have been updated so the UI is now served directly at `/`.
This simplifies access, aligns with the expected root URL in the Tauri
frontend, and removes the now‑unused `/docs` path handling.
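The resulting route table might look like this; the mapping shape and asset paths are illustrative, with only the route names taken from the commits:

```typescript
// Hypothetical static-route resolution for the proxy's Swagger UI endpoints.
const swaggerRoutes: Record<string, string> = {
  '/': 'static/index.html', // Swagger UI now served at the root path
  '/openapi.json': 'static/openapi.json',
  '/swagger-ui.css': 'static/swagger-ui.css',
  '/swagger-ui-bundle.js': 'static/swagger-ui-bundle.js',
}

function resolveSwaggerAsset(path: string): string | null {
  // Unknown paths fall through to the normal proxy handling.
  return swaggerRoutes[path] ?? null
}
```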

* feat: add model loading state and translations for local API server

Implemented a loading indicator for model startup, updated the start/stop button to reflect model loading and server starting states, and disabled interactions while pending. Added new translation keys (`loadingModel`, `startingServer`) across all supported locales (en, de, id, pl, vn, zh-CN, zh-TW) and integrated them into the UI. Included a small delay after model start to ensure backend state consistency. This improves user feedback and prevents race conditions during server initialization.

* fix: tests

* fix: linter

* fix: build

* docs: update changelog for v0.6.10

* fix(number-input): preserve '0.0x' format when typing (#6520)

* docs: update url for gifs and videos

* chore: update url for jan-v1 docs

* fix: Typo in openapi JSON (#6528)

* enhancement: toaster delete mcp server

* Update 2025-09-18-auto-optimize-vision-imports.mdx

* Merge pull request #6475 from menloresearch/feat/bump-tokenjs

feat: fix remote provider vision capability

* fix: prevent consecutive messages with same role (#6544)

* fix: prevent consecutive messages with same role

* fix: tests

* fix: first message should not be assistant

* fix: tests

* feat: Prompt progress when streaming (#6503)

* feat: Prompt progress when streaming

- BE changes:
    - Add a `return_progress` flag to `chatCompletionRequest` and a corresponding `prompt_progress` payload in `chatCompletionChunk`. Introduce `chatCompletionPromptProgress` interface to capture cache, processed, time, and total token counts.
    - Update the Llamacpp extension to always request progress data when streaming, enabling UI components to display real‑time generation progress and leverage llama.cpp’s built‑in progress reporting.
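The payload shape, sketched from the field names listed above (anything beyond the cache/processed/time/total fields, including the percentage helper, is an assumption):

```typescript
// Prompt-progress payload carried in streaming chunks when
// `return_progress` is set on the request.
interface chatCompletionPromptProgress {
  cache: number     // prompt tokens reused from the cache
  processed: number // prompt tokens processed so far
  time: number      // elapsed processing time
  total: number     // total prompt tokens
}

// Illustrative helper for a UI progress bar.
function progressPercent(p: chatCompletionPromptProgress): number {
  if (p.total <= 0) return 100
  return Math.min(100, (100 * (p.processed + p.cache)) / p.total)
}
```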

* Make return_progress optional

* chore: update ui prompt progress before streaming content

* chore: remove log

* chore: remove progress when percentage >= 100

* chore: set timeout prompt progress

* chore: move prompt progress outside streaming content

* fix: tests

---------

Co-authored-by: Faisal Amir <[email protected]>
Co-authored-by: Louis <[email protected]>

* chore: add ci for web stag (#6550)

* feat: add getTokensCount method to compute token usage (#6467)

* feat: add getTokensCount method to compute token usage

Implemented a new async `getTokensCount` function in the LLaMA.cpp extension.
The method validates the model session, checks process health, applies the request template, and tokenizes the resulting prompt to return the token count. Includes detailed error handling for crashed models and API failures, enabling callers to assess token usage before sending completions.
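The flow described above, as an illustrative sketch only; the session type and the `applyTemplate`/`tokenize` helpers stand in for the extension's real APIs:

```typescript
// Validate the session, apply the chat template, tokenize, return the count.
async function getTokensCount(
  session: { alive: boolean },
  applyTemplate: (messages: string[]) => string,
  tokenize: (prompt: string) => Promise<number[]>,
  messages: string[]
): Promise<number> {
  // Check process health before hitting the tokenize endpoint.
  if (!session.alive) throw new Error('model process has crashed')
  const prompt = applyTemplate(messages)
  const tokens = await tokenize(prompt)
  return tokens.length
}
```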

* Fix: typos

* chore: update ui token usage

* chore: remove unused code

* feat: add image token handling for multimodal LlamaCPP models

Implemented support for counting image tokens when using vision-enabled models:
- Extended `SessionInfo` with optional `mmprojPath` to store the multimodal project file.
- Propagated `mmproj_path` from the Tauri plugin into the session info.
- Added import of `chatCompletionRequestMessage` and enhanced token calculation logic in the LlamaCPP extension:
- Detects image content in messages.
- Reads GGUF metadata from `mmprojPath` to compute accurate image token counts.
- Provides a fallback estimation if metadata reading fails.
- Returns the sum of text and image tokens.
- Introduced helper methods `calculateImageTokens` and `estimateImageTokensFallback`.
- Minor clean‑ups such as comment capitalization and debug logging.

* chore: update FE send params message include content type image_url

* fix mmproj path from session info and num tokens calculation

* fix: Correct image token estimation calculation in llamacpp extension

This commit addresses an inaccurate token count for images in the llama.cpp extension.

The previous logic incorrectly calculated the token count based on image patch size and dimensions. This has been replaced with a more precise method that uses the `clip.vision.projection_dim` value from the model metadata.

Additionally, unnecessary debug logging was removed, and a new log was added to show the mmproj metadata for improved visibility.
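A heavily hedged sketch of the estimation: per the commit, the per-image count comes from the mmproj metadata's `clip.vision.projection_dim`, with a fallback when metadata cannot be read. The function name and the fallback constant are illustrative, not the extension's actual values:

```typescript
// Estimate tokens per image from mmproj (GGUF) metadata.
function estimateImageTokens(metadata: Record<string, number> | null): number {
  const FALLBACK_TOKENS_PER_IMAGE = 256 // assumed fallback, not the real constant
  if (!metadata) return FALLBACK_TOKENS_PER_IMAGE
  return metadata['clip.vision.projection_dim'] ?? FALLBACK_TOKENS_PER_IMAGE
}
```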

* fix per image calc

* fix: crash due to force unwrap

---------

Co-authored-by: Faisal Amir <[email protected]>
Co-authored-by: Louis <[email protected]>

* fix: custom fetch for all providers (#6538)

* fix: custom fetch for all providers

* fix: run in development should use built-in fetch

* add full-width model names (#6350)

* fix: prevent relocation to root directories (#6547)

* fix: prevent relocation to root directories

* Update web-app/src/locales/zh-TW/settings.json

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* feat: web remote conversation (#6554)

* feat: implement conversation endpoint

* use conversation aware endpoint

* fetch message correctly

* preserve first message

* fix logout

* fix broadcast issue locally + auth not refreshing profile on other tabs+ clean up and sync messages

* add is dev tag

---------

Co-authored-by: Faisal Amir <[email protected]>
Co-authored-by: Akarshan Biswas <[email protected]>
Co-authored-by: Minh141120 <[email protected]>
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
Co-authored-by: Nguyen Ngoc Minh <[email protected]>
Co-authored-by: Louis <[email protected]>
Co-authored-by: Bui Quang Huy <[email protected]>
Co-authored-by: Roushan Singh <[email protected]>
Co-authored-by: hiento09 <[email protected]>
Co-authored-by: Alexey Haidamaka <[email protected]>
* ✨ feat: Re-arrange docs as needed

* 🔧 chore: re-arrange the folder structure

* Add server docs

Add server docs

* enhancement: migrate handbook and janv2

* Update docs/src/components/ui/dropdown-button.tsx

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* Update docs/src/pages/_meta.json

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* chore: update feedback #1

* fix: layout ability model

* feat: add azure as first class provider (#6555)

* feat: add azure as first class provider

* fix: deployment url

* Update handbook: restructure content and add new sections

- Add betting-on-open-source.mdx and open-superintelligence.mdx
- Update handbook index with new structure
- Remove outdated handbook sections (growth, happy, history, money, talent, teams, users, why)
- Update handbook _meta.json to reflect new structure

* chore: fix meta data json

* chore: update missing install

* fix: Catch local API server various errors (#6548)

* fix: Catch local API server various errors

* chore: Add tests to cover error catches

* fix: LocalAPI server trusted host should accept asterisk (#6551)

* feat: support .zip archives for manual backend install (#6534)

* feat(llamacpp): support .zip archives for manual backend install

* Update Lock Files

* Merge pull request #6563 from menloresearch/feat/web-minor-ui-tweak-login

feat: tweak login UI

---------

Co-authored-by: LazyYuuki <[email protected]>
Co-authored-by: nngostuds <[email protected]>
Co-authored-by: Faisal Amir <[email protected]>
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
Co-authored-by: Louis <[email protected]>
Co-authored-by: eckartal <[email protected]>
Co-authored-by: Nghia Doan <[email protected]>
Co-authored-by: Roushan Kumar Singh <[email protected]>
* fix: standardize log timestamps to UTC timezone

- Update formatTimestamp functions in both log viewers to use UTC
- Replace toLocaleTimeString() with explicit UTC formatting

* French Translation

* feat: Allow to save the last message upon interrupting llm response

* feat: Continue with AI response button if it got interrupted

* feat: Continue with AI response for llamacpp

* feat: Modify on-going response instead of creating new message to avoid message ID duplication

* feat: Add tests for the Continuing with AI response

* fix: Consolidate comments

* fix: Exposing PromptProgress to be passed as param

* fix: Fix tests on useChat

* fix: truncated tool name available on chat input

* fix: wording disable all tools

* fix: Incorrect proactive icon display

* feat: avoid switching model midway

Once the user switches models after interrupting a response midway, force generation to restart from the beginning to avoid mixing output across models.

* fix: migrate flash_attn settings (#6864)

* fix: migrate flash_attn settings

* Update web-app/src/hooks/useModelProvider.ts

Co-authored-by: Copilot <[email protected]>

* Update core/src/browser/extension.ts

Co-authored-by: Copilot <[email protected]>

---------

Co-authored-by: Copilot <[email protected]>

* fix: chatinput debounce tokenize (#6855)

* fix: chatinput debounce tokenize

* fix error

* fix: could not cancel the uninitialized download (#6867)

* fix: could not cancel the uninitialized download

* fix: could not open app folder

* fix: tests

* feat: loader screen before load FE

* chore: remove nested RAF

* chore: refactor filereader to tauri dialog

* chore: call function directly to handle image attachment

* chore: update PR comment

* Update web-app/src/locales/fr/common.json

Co-authored-by: Copilot <[email protected]>

* Update web-app/src/locales/fr/common.json

Co-authored-by: Copilot <[email protected]>

* Update web-app/src/locales/fr/common.json

Co-authored-by: Copilot <[email protected]>

* feat: add configurable timeout for llamacpp connections (#6872)

* feat: add configurable timeout for llamacpp connections

This change introduces a user-configurable read/write timeout (in seconds) for llamacpp connections, replacing the hard-coded 600s value. The timeout is now settable via the extension settings and used in both HTTP requests and server readiness checks. This provides flexibility for different deployment scenarios, allowing users to adjust connection duration based on their specific use cases while maintaining the default 10-minute timeout behavior.

* fix: correct timeout conversion factor and clarify settings description

The previous timeout conversion used `timeout * 100` instead of `timeout * 1000`, which incorrectly shortened the timeout to 1/10 of the intended value (e.g., 10 minutes became 1 minute). This change corrects the conversion factor to milliseconds. Additionally, the settings description was updated to explicitly state that this timeout applies to both connection and load operations, improving user understanding of its scope.
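The conversion bug in one line; the function name is illustrative:

```typescript
// Seconds must be scaled by 1000 to get milliseconds.
// The buggy version used `* 100`, yielding 1/10 of the intended timeout.
function timeoutMs(timeoutSeconds: number): number {
  return timeoutSeconds * 1000
}
```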

* style: replace loose equality with strict equality in key comparison

This change updates the comparison operator from loose equality (`==`) to strict equality (`===`) when checking for the 'timeout' key. While the key is always a string in this context (making the behavior identical), using strict equality prevents potential type conversion issues and adheres to JavaScript best practices for reliable comparisons.

* fix: hide thread dropdown on delete dialog confirmation popup

* fix: model download state update (#6882)

* Fix Discord Community link in CONTRIBUTING.md (#6883)

* feat: Russian localization (#6869)

* Add files via upload

Updating localization files

* Update LanguageSwitcher.tsx

Added Russian language option

* Add files via upload

Removing the trailing newline character

* Add files via upload

UI Testing, Translation & Contextual QA

* chore: address PR comments

* feat: replace Tauri dialog plugin with rfd integration (#6850)

* feat: replace Tauri dialog plugin with rfd integration

Remove the legacy `tauri-plugin-dialog` dependency and its capability entry, adding `rfd` as a cross‑platform native file dialog library.
Introduce `open_dialog` and `save_dialog` commands that expose file‑selection and save dialogs to the frontend, along with a `DialogOpenOptions` model for filter, directory, and multiple‑file support.
Update the `TauriDialogService` to invoke these new commands instead of the removed plugin, ensuring a cleaner build and consistent dialog behaviour across desktop targets.

* chore: remove unused serde_json import

Remove the unnecessary serde_json import from `src-tauri/src/core/filesystem/commands.rs` to keep the codebase clean and eliminate unused dependencies. This small refactor improves build clarity and reduces potential lint warnings.

* fix: command + N does not work (#6890)

* fix: add mcp tool call timeout config (#6891)

Update web-app/src/locales/vn/mcp-servers.json

Co-authored-by: Copilot <[email protected]>

Update web-app/src/locales/zh-CN/mcp-servers.json

Co-authored-by: Copilot <[email protected]>

Update web-app/src/locales/pl/mcp-servers.json

Co-authored-by: Copilot <[email protected]>

Update web-app/src/locales/pt-BR/mcp-servers.json

Co-authored-by: Copilot <[email protected]>

Update web-app/src/locales/de-DE/mcp-servers.json

Co-authored-by: Copilot <[email protected]>

fix: tests

Update web-app/src/locales/ja/mcp-servers.json

Co-authored-by: Copilot <[email protected]>

Update web-app/src/locales/zh-TW/mcp-servers.json

Co-authored-by: Copilot <[email protected]>

Update web-app/src/locales/id/mcp-servers.json

Co-authored-by: Copilot <[email protected]>

Update src-tauri/src/core/mcp/commands.rs

Co-authored-by: Copilot <[email protected]>

fix: utf translation

* Fix: add conditional RAG tool injection only on document attachment (#6887)

* feat: add conditional RAG tool injection for attachments

The chat logic now only requests RAG tools when document attachments are enabled and the
model supports tools. This improves performance by avoiding unnecessary API calls
and reduces payloads for models that do not need external knowledge.
The change also cleans up temporary chat messages on reload, sets a navigation flag,
and updates `sendCompletion` and `postMessageProcessing` to use the new conditional
tool loading logic. The refactor introduces clearer imports and formatting.
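The eligibility condition reduces to a simple conjunction; the names below are illustrative:

```typescript
// RAG tools are requested only when both conditions hold, avoiding
// unnecessary API calls and extra payload for tool-less models.
function shouldInjectRagTools(
  attachmentsEnabled: boolean,
  modelSupportsTools: boolean
): boolean {
  return attachmentsEnabled && modelSupportsTools
}
```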

* chore: restore formatting

* completion.ts: restore formatting

* feat: track document attachment in thread metadata and update RAG logic

Add a `hasDocuments` flag to the active thread’s metadata when a document is ingested.
Update the RAG eligibility check to use this flag rather than the raw `documents` array, ensuring that the thread’s state accurately reflects its attachment status.

This keeps the thread UI in sync with attachments and prevents unnecessary re‑processing when the same documents are added to a thread.

* refactor: consolidate thread update after attachment ingestion

Remove duplicate `useThreads.getState().updateThread` calls that were present
inside the attachment ingestion logic. The previous implementation updated the
thread metadata twice (once inside the `try` block and again later), which
could lead to unnecessary state changes and made debugging harder. The new
approach updates the thread only once, after all attachments have been
processed, ensuring consistent metadata and simplifying the flow.

* test: improve useChat test mocks and capability handling

Refactor the test environment for `useChat`:
- Updated the `useModelProvider` mock to expose a test model with full capabilities (`tools`, `vision`, `proactive`) and a matching provider, enabling the hook to perform model‑specific logic without runtime errors.
- Added a `setTokenSpeed` mock to `useAppState` to satisfy the hook’s usage of token‑speed settings.
- Refactored `useThreads` to use `Object.assign` for consistent selector behaviour and added a `getThreadById` implementation.
- Introduced an attachments mock and platform feature constants so that attachment handling tests can execute correctly.
- Normalised content arrays in `newUserThreadContent` and `newAssistantThreadContent` to match the actual content format.
- Cleared and reset builder mocks in `beforeEach` to avoid stale state across test cases.
- Made minor formatting and type corrections throughout the test file.

These changes resolve failing tests caused by missing provider models, incomplete capabilities, and broken mocks, and they enable coverage of proactive mode detection and attachment handling.

* fix: glibc linux

* feat: hide file attachments properly (#6895)

* Guard attachment setters when feature disabled

* fix lint issue

* fix: get mcp servers spam request issue (#6901)

* resolve rust clippy warnings (#6888)

* resolve rust clippy warnings

* fix: start_server expects a single config

* resolve eslint error

* fix(#6902): update Bun download link for darwin-86x -> darwin-64x (#6903)

* fix: regression on reasoning models (#6914)

* fix: regression on reasoning models

* fix: reset accumulated text when not continuing message generation

* fix: new chat shortcut stopped working (#6915)

* fix: glitch UI issues (#6916)

* fix: glitch UI issues

* fix: tests

* chore: bump rmcp to 0.8.5 (#6918)

* feat: add backend migration mapping and update backend handling (#6917)

Added `mapOldBackendToNew` to translate legacy backend strings (e.g., `win-avx2-x64`, `win-avx512-cuda-cu12.0-x64`) into the new unified names (`win-common_cpus-x64`, `win-cuda-12-common_cpus-x64`). Updated backend selection, installation, and download logic to use the mapper, ensuring consistent naming across the extension and tests. Updated tests to verify the mapping, new download items, and correct extraction paths. Minor formatting updates to the Tauri command file for clearer logging. This change enables smoother migration for stored user preferences and reduces duplicate asset handling.
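A sketch of the mapping; only the two pairs named above are taken from the commit, and the table shape plus passthrough behavior are assumptions:

```typescript
// Translate legacy backend names to the new unified names.
const legacyBackendMap: Record<string, string> = {
  'win-avx2-x64': 'win-common_cpus-x64',
  'win-avx512-cuda-cu12.0-x64': 'win-cuda-12-common_cpus-x64',
}

function mapOldBackendToNew(backend: string): string {
  // Names already in the new scheme pass through unchanged.
  return legacyBackendMap[backend] ?? backend
}
```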

* add temp auth fix

* add image upload

* presigned upload

* fix image upload and refresh tokens

* fix images extensions

* add project extensions

* dev for testing

---------

Co-authored-by: Roushan Singh <[email protected]>
Co-authored-by: Roushan Kumar Singh <[email protected]>
Co-authored-by: fred <[email protected]>
Co-authored-by: Vanalite <[email protected]>
Co-authored-by: Faisal Amir <[email protected]>
Co-authored-by: Louis <[email protected]>
Co-authored-by: Copilot <[email protected]>
Co-authored-by: Dinh Long Nguyen <[email protected]>
Co-authored-by: Akarshan Biswas <[email protected]>
Co-authored-by: @Kuzmich55 <[email protected]>
Co-authored-by: Minh141120 <[email protected]>
Co-authored-by: Nguyen Ngoc Minh <[email protected]>
Co-authored-by: Volodya Lombrozo <[email protected]>
* fix: standardize log timestamps to UTC timezone

- Update formatTimestamp functions in both log viewers to use UTC
- Replace toLocaleTimeString() with explicit UTC formatting

* French Translation

* feat: Allow to save the last message upon interrupting llm response

* feat: Continue with AI response button if it got interrupted

* feat: Continue with AI response for llamacpp

* feat: Modify on-going response instead of creating new message to avoid message ID duplication

* feat: Add tests for the Continuing with AI response

* fix: Consolidate comments

* fix: Exposing PromptProgress to be passed as param

* fix: Fix tests on useChat

* fix: truncated tool name available on chat input

* fix: wording disable all tools

* fix: Incorrect proactive icon display

* feat: avoid switching model midway

Once the user switches model after they interrupt the response midway, force the user to start generating the response from the beginning to avoid cross model lemma

* fix: migrate flash_attn settings (#6864)

* fix: migrate flash_attn settings

* Update web-app/src/hooks/useModelProvider.ts

Co-authored-by: Copilot <[email protected]>

* Update core/src/browser/extension.ts

Co-authored-by: Copilot <[email protected]>

---------

Co-authored-by: Copilot <[email protected]>

* fix: chatinput debounce tokenize (#6855)

* fix: chatinput debounce tokenize

* fix error

* fix: could not cancel the unintialized download (#6867)

* fix: could not cancel the unintialized download

* fix: could not open app folder

* fix: tests

* feat: loader screen before load FE

* chore: remove nested RAF

* chore: refactor filereader to tauri dialog

* chore: update call funtion direct to handle image attachment

* chore: update PR comment

* Update web-app/src/locales/fr/common.json

Co-authored-by: Copilot <[email protected]>

* Update web-app/src/locales/fr/common.json

Co-authored-by: Copilot <[email protected]>

* Update web-app/src/locales/fr/common.json

Co-authored-by: Copilot <[email protected]>

* feat: add configurable timeout for llamacpp connections (#6872)

* feat: add configurable timeout for llamacpp connections

This change introduces a user-configurable read/write timeout (in seconds) for llamacpp connections, replacing the hard-coded 600s value. The timeout is now settable via the extension settings and used in both HTTP requests and server readiness checks. This provides flexibility for different deployment scenarios, allowing users to adjust connection duration based on their specific use cases while maintaining the default 10-minute timeout behavior.

* fix: correct timeout conversion factor and clarify settings description

The previous timeout conversion used `timeout * 100` instead of `timeout * 1000`, which incorrectly shortened the timeout to 1/10 of the intended value (e.g., 10 minutes became 1 minute). This change corrects the conversion factor to milliseconds. Additionally, the settings description was updated to explicitly state that this timeout applies to both connection and load operations, improving user understanding of its scope.

* style: replace loose equality with strict equality in key comparison

This change updates the comparison operator from loose equality (`==`) to strict equality (`===`) when checking for the 'timeout' key. While the key is always a string in this context (making the behavior identical), using strict equality prevents potential type conversion issues and adheres to JavaScript best practices for reliable comparisons.

* fix: hide thread dropdown on delete dialog confirmation popup

* fix: model download state update (#6882)

* Fix Discord Community link in CONTRIBUTING.md (#6883)

* feat: Russian localization (#6869)

* Add files via upload

Updating localization files

* Update LanguageSwitcher.tsx

Added Russian language option

* Add files via upload

Removing the trailing newline character

* Add files via upload

UI Testing, Translation & Contextual QA

* chore: address PR comments

* feat: replace Tauri dialog plugin with rfd integration (#6850)

* feat: replace Tauri dialog plugin with rfd integration

Remove the legacy `tauri-plugin-dialog` dependency and its capability entry, adding `rfd` as a cross‑platform native file dialog library.
Introduce `open_dialog` and `save_dialog` commands that expose file‑selection and save dialogs to the frontend, along with a `DialogOpenOptions` model for filter, directory, and multiple‑file support.
Update the `TauriDialogService` to invoke these new commands instead of the removed plugin, ensuring a cleaner build and consistent dialog behaviour across desktop targets.

* chore: remove unused serde_json import

Remove the unnecessary serde_json import from `src-tauri/src/core/filesystem/commands.rs` to keep the codebase clean and eliminate unused dependencies. This small refactor improves build clarity and reduces potential lint warnings.

* fix: command + N does not work (#6890)

* fix: add mcp tool call timeout config (#6891)

Update web-app/src/locales/vn/mcp-servers.json

Co-authored-by: Copilot <[email protected]>

Update web-app/src/locales/zh-CN/mcp-servers.json

Co-authored-by: Copilot <[email protected]>

Update web-app/src/locales/pl/mcp-servers.json

Co-authored-by: Copilot <[email protected]>

Update web-app/src/locales/pt-BR/mcp-servers.json

Co-authored-by: Copilot <[email protected]>

Update web-app/src/locales/de-DE/mcp-servers.json

Co-authored-by: Copilot <[email protected]>

fix: tests

Update web-app/src/locales/ja/mcp-servers.json

Co-authored-by: Copilot <[email protected]>

Update web-app/src/locales/zh-TW/mcp-servers.json

Co-authored-by: Copilot <[email protected]>

Update web-app/src/locales/id/mcp-servers.json

Co-authored-by: Copilot <[email protected]>

Update src-tauri/src/core/mcp/commands.rs

Co-authored-by: Copilot <[email protected]>

fix: utf translation

* Fix: add conditional RAG tool injection only on document attachment (#6887)

* feat: add conditional RAG tool injection for attachments

The chat logic now only requests RAG tools when document attachments are enabled and the
model supports tools. This improves performance by avoiding unnecessary API calls
and reduces payloads for models that do not need external knowledge.
The change also cleans up temporary chat messages on reload, sets a navigation flag,
and updates `sendCompletion` and `postMessageProcessing` to use the new conditional
tool loading logic.  The refactor introduces clearer imports and formatting.

* chore: restore formatting

* completion.ts: restore formatting

* feat: track document attachment in thread metadata and update RAG logic

Add a `hasDocuments` flag to the active thread’s metadata when a document is ingested.
Update the RAG eligibility check to use this flag rather than the raw `documents` array, ensuring that the thread’s state accurately reflects its attachment status.

This keeps the thread UI in sync with attachments and prevents unnecessary re‑processing when the same documents are added to a thread.

* refactor: consolidate thread update after attachment ingestion

Remove duplicate `useThreads.getState().updateThread` calls that were present
inside the attachment ingestion logic. The previous implementation updated the
thread metadata twice (once inside the `try` block and again later), which
could lead to unnecessary state changes and made debugging harder. The new
approach updates the thread only once, after all attachments have been
processed, ensuring consistent metadata and simplifying the flow.
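A minimal sketch of the consolidated flow, assuming a hypothetical `ingestAttachments` helper that sets the `hasDocuments` flag once after all documents are processed (not the actual implementation):

```typescript
// Illustrative sketch: update thread metadata once, after the ingestion
// loop, instead of once per document. Types and names are assumptions.
interface Thread {
  id: string
  metadata: Record<string, unknown>
}

function ingestAttachments(thread: Thread, docs: string[]): Thread {
  // ...per-document ingestion would happen here...
  // Single metadata update after the loop; threads without documents
  // are returned unchanged.
  return docs.length > 0
    ? { ...thread, metadata: { ...thread.metadata, hasDocuments: true } }
    : thread
}
```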

* test: improve useChat test mocks and capability handling

Refactor the test environment for `useChat`:
- Updated the `useModelProvider` mock to expose a test model with full capabilities (`tools`, `vision`, `proactive`) and a matching provider, enabling the hook to perform model‑specific logic without runtime errors.
- Added a `setTokenSpeed` mock to `useAppState` to satisfy the hook’s usage of token‑speed settings.
- Refactored `useThreads` to use `Object.assign` for consistent selector behaviour and added a `getThreadById` implementation.
- Introduced an attachments mock and platform feature constants so that attachment handling tests can execute correctly.
- Normalised content arrays in `newUserThreadContent` and `newAssistantThreadContent` to match the actual content format.
- Cleared and reset builder mocks in `beforeEach` to avoid stale state across test cases.
- Made minor formatting and type corrections throughout the test file.

These changes resolve failing tests caused by missing provider models, incomplete capabilities, and broken mocks, and they enable coverage of proactive mode detection and attachment handling.

* fix: glibc linux

* feat: hide file attachments properly (#6895)

* Guard attachment setters when feature disabled

* fix lint issue

* fix: get mcp servers spam request issue (#6901)

* resolve rust clippy warnings (#6888)

* resolve rust clippy warnings

* fix: start_server expects a single config

* resolve eslint error

* fix(#6902): update Bun download link for darwin-86x -> darwin-64x (#6903)

* fix: regression on reasoning models (#6914)

* fix: regression on reasoning models

* fix: reset accumulated text when not continuing message generation

* fix: new chat shortcut stopped working (#6915)

* fix: glitch UI issues (#6916)

* fix: glitch UI issues

* fix: tests

* chore: bump rmcp to 0.8.5 (#6918)

* feat: add backend migration mapping and update backend handling (#6917)

Added `mapOldBackendToNew` to translate legacy backend strings (e.g., `win-avx2-x64`, `win-avx512-cuda-cu12.0-x64`) into the new unified names (`win-common_cpus-x64`, `win-cuda-12-common_cpus-x64`). Updated backend selection, installation, and download logic to use the mapper, ensuring consistent naming across the extension and tests. Updated tests to verify the mapping, new download items, and correct extraction paths. Minor formatting updates to the Tauri command file for clearer logging. This change enables smoother migration for stored user preferences and reduces duplicate asset handling.
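A hedged sketch of what a mapper like `mapOldBackendToNew` might look like. The table covers only the two example pairs quoted in the commit message, and the pass-through behaviour for unknown names is an assumption:

```typescript
// Sketch of a legacy-to-new backend-name mapper; only the two pairs
// quoted above are included, the rest of the real table is unknown.
const LEGACY_BACKEND_MAP: Record<string, string> = {
  'win-avx2-x64': 'win-common_cpus-x64',
  'win-avx512-cuda-cu12.0-x64': 'win-cuda-12-common_cpus-x64',
}

function mapOldBackendToNew(name: string): string {
  // Names already in the new scheme (or unknown) pass through unchanged,
  // so stored user preferences migrate transparently.
  return LEGACY_BACKEND_MAP[name] ?? name
}
```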

* add temp auth fix

* add image upload

* presigned upload

* fix image upload and refresh tokens

* fix images extensions

* add project extensions

* dev for testing

* fix project

---------

Co-authored-by: Roushan Singh <[email protected]>
Co-authored-by: Roushan Kumar Singh <[email protected]>
Co-authored-by: fred <[email protected]>
Co-authored-by: Vanalite <[email protected]>
Co-authored-by: Faisal Amir <[email protected]>
Co-authored-by: Louis <[email protected]>
Co-authored-by: Copilot <[email protected]>
Co-authored-by: Dinh Long Nguyen <[email protected]>
Co-authored-by: Akarshan Biswas <[email protected]>
Co-authored-by: @Kuzmich55 <[email protected]>
Co-authored-by: Minh141120 <[email protected]>
Co-authored-by: Nguyen Ngoc Minh <[email protected]>
Co-authored-by: Volodya Lombrozo <[email protected]>
* fix: standardize log timestamps to UTC timezone

- Update formatTimestamp functions in both log viewers to use UTC
- Replace toLocaleTimeString() with explicit UTC formatting
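The UTC fix can be sketched as follows; the exact format the log viewers emit may differ from this illustration:

```typescript
// Explicit UTC formatting in place of toLocaleTimeString(), which uses
// the viewer's local timezone. Output format here is an assumption.
function formatTimestampUtc(ms: number): string {
  const d = new Date(ms)
  const pad = (n: number) => String(n).padStart(2, '0')
  return `${pad(d.getUTCHours())}:${pad(d.getUTCMinutes())}:${pad(d.getUTCSeconds())}`
}
```

Using the `getUTC*` accessors keeps log timestamps comparable across machines regardless of local timezone settings.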

* French Translation

* feat: Allow to save the last message upon interrupting llm response

* feat: Continue with AI response button if it got interrupted

* feat: Continue with AI response for llamacpp

* feat: Modify on-going response instead of creating new message to avoid message ID duplication

* feat: Add tests for the Continuing with AI response

* fix: Consolidate comments

* fix: Exposing PromptProgress to be passed as param

* fix: Fix tests on useChat

* fix: truncated tool name available on chat input

* fix: wording disable all tools

* fix: Incorrect proactive icon display

* feat: avoid switching model midway

If the user switches models after interrupting a response midway, generation is forced to restart from the beginning, to avoid mixing output from different models in one message.

* fix: migrate flash_attn settings (#6864)

* fix: migrate flash_attn settings

* Update web-app/src/hooks/useModelProvider.ts

Co-authored-by: Copilot <[email protected]>

* Update core/src/browser/extension.ts

Co-authored-by: Copilot <[email protected]>

---------

Co-authored-by: Copilot <[email protected]>

* fix: chatinput debounce tokenize (#6855)

* fix: chatinput debounce tokenize

* fix error

* fix: could not cancel the uninitialized download (#6867)

* fix: could not cancel the uninitialized download

* fix: could not open app folder

* fix: tests

* feat: loader screen before load FE

* chore: remove nested RAF

* chore: refactor filereader to tauri dialog

* chore: update call function directly to handle image attachment

* chore: update PR comment

* Update web-app/src/locales/fr/common.json

Co-authored-by: Copilot <[email protected]>

* Update web-app/src/locales/fr/common.json

Co-authored-by: Copilot <[email protected]>

* Update web-app/src/locales/fr/common.json

Co-authored-by: Copilot <[email protected]>

* feat: add configurable timeout for llamacpp connections (#6872)

* feat: add configurable timeout for llamacpp connections

This change introduces a user-configurable read/write timeout (in seconds) for llamacpp connections, replacing the hard-coded 600s value. The timeout is now settable via the extension settings and used in both HTTP requests and server readiness checks. This provides flexibility for different deployment scenarios, allowing users to adjust connection duration based on their specific use cases while maintaining the default 10-minute timeout behavior.

* fix: correct timeout conversion factor and clarify settings description

The previous timeout conversion used `timeout * 100` instead of `timeout * 1000`, which incorrectly shortened the timeout to 1/10 of the intended value (e.g., 10 minutes became 1 minute). This change corrects the conversion factor to milliseconds. Additionally, the settings description was updated to explicitly state that this timeout applies to both connection and load operations, improving user understanding of its scope.
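The corrected conversion is simple to state in code; the function and constant names below are illustrative, not the extension's actual identifiers:

```typescript
// Seconds-to-milliseconds conversion for the llamacpp timeout setting.
// The bug was multiplying by 100 (600s became ~1 minute); the fix
// multiplies by 1000.
const DEFAULT_TIMEOUT_SECONDS = 600 // the previous hard-coded 10-minute value

function timeoutMs(timeoutSeconds: number = DEFAULT_TIMEOUT_SECONDS): number {
  return timeoutSeconds * 1000
}
```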

* style: replace loose equality with strict equality in key comparison

This change updates the comparison operator from loose equality (`==`) to strict equality (`===`) when checking for the 'timeout' key. While the key is always a string in this context (making the behavior identical), using strict equality prevents potential type conversion issues and adheres to JavaScript best practices for reliable comparisons.

* fix: hide thread dropdown on delete dialog confirmation popup

* fix: model download state update (#6882)

* Fix Discord Community link in CONTRIBUTING.md (#6883)

* feat: Russian localization (#6869)

* Add files via upload

Updating localization files

* Update LanguageSwitcher.tsx

Added Russian language option

* Add files via upload

Removing the trailing newline character

* Add files via upload

UI Testing, Translation & Contextual QA

* chore: address PR comments

* feat: replace Tauri dialog plugin with rfd integration (#6850)

* feat: replace Tauri dialog plugin with rfd integration

Remove the legacy `tauri-plugin-dialog` dependency and its capability entry, adding `rfd` as a cross‑platform native file dialog library.
Introduce `open_dialog` and `save_dialog` commands that expose file‑selection and save dialogs to the frontend, along with a `DialogOpenOptions` model for filter, directory, and multiple‑file support.
Update the `TauriDialogService` to invoke these new commands instead of the removed plugin, ensuring a cleaner build and consistent dialog behaviour across desktop targets.
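For illustration, a possible TypeScript shape for the `DialogOpenOptions` payload the frontend would send to `open_dialog`; the field names here are assumptions based on the commit message (filter, directory, and multiple-file support), not the actual Rust model:

```typescript
// Hypothetical frontend-side shape of DialogOpenOptions; the real
// struct lives in src-tauri and may use different field names.
interface DialogOpenOptions {
  title?: string
  filters?: { name: string; extensions: string[] }[]
  directory?: boolean
  multiple?: boolean
}

// Example builder for an image-picker dialog payload.
function imageOpenOptions(multiple = false): DialogOpenOptions {
  return {
    title: 'Select image',
    filters: [{ name: 'Images', extensions: ['png', 'jpg', 'jpeg'] }],
    directory: false,
    multiple,
  }
}
```

The frontend would pass such an object to the new command (e.g. via Tauri's `invoke`), which `rfd` then renders as a native dialog on each desktop platform.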

* chore: remove unused serde_json import

Remove the unnecessary serde_json import from `src-tauri/src/core/filesystem/commands.rs` to keep the codebase clean and eliminate unused dependencies. This small refactor improves build clarity and reduces potential lint warnings.

* fix: command + N does not work (#6890)

* fix: add mcp tool call timeout config (#6891)

Update web-app/src/locales/vn/mcp-servers.json

Co-authored-by: Copilot <[email protected]>

Update web-app/src/locales/zh-CN/mcp-servers.json

Co-authored-by: Copilot <[email protected]>

Update web-app/src/locales/pl/mcp-servers.json

Co-authored-by: Copilot <[email protected]>

Update web-app/src/locales/pt-BR/mcp-servers.json

Co-authored-by: Copilot <[email protected]>


* increase images size with presign for testing

---------

Co-authored-by: Roushan Singh <[email protected]>
Co-authored-by: Roushan Kumar Singh <[email protected]>
Co-authored-by: fred <[email protected]>
Co-authored-by: Vanalite <[email protected]>
Co-authored-by: Faisal Amir <[email protected]>
Co-authored-by: Louis <[email protected]>
Co-authored-by: Copilot <[email protected]>
Co-authored-by: Dinh Long Nguyen <[email protected]>
Co-authored-by: Akarshan Biswas <[email protected]>
Co-authored-by: @Kuzmich55 <[email protected]>
Co-authored-by: Minh141120 <[email protected]>
Co-authored-by: Nguyen Ngoc Minh <[email protected]>
Co-authored-by: Volodya Lombrozo <[email protected]>
Copilot AI review requested due to automatic review settings November 18, 2025 18:02

Copilot AI left a comment


Pull Request Overview

This PR implements projects/folders management for the web platform with server-side persistence, media server integration for image handling via jan_id references, and switches authentication from Google to Keycloak OAuth with PKCE flow.

Key Changes:

  • Adds server-based project management with API integration
  • Implements media server integration for efficient image uploads with presigned URLs
  • Migrates authentication provider from Google to Keycloak

Reviewed Changes

Copilot reviewed 57 out of 58 changed files in this pull request and generated 12 comments.

File Description
web-app/src/types/enhanced-attachment.ts New comprehensive attachment types with media server support
web-app/src/services/projects/server.ts Server-based projects service implementation
web-app/src/services/projects/web.ts Removed (replaced by server implementation)
extensions-web/src/shared/media/ Media service for upload/download with jan_id system
extensions-web/src/services/uploads/web.ts Web uploads service with presigned upload support
extensions-web/src/project-web/ Project extension for web platform
extensions-web/src/shared/auth/ Updated auth service with Keycloak and refresh token management
web-app/src/routes/auth.keycloak.callback.tsx New Keycloak OAuth callback handler
web-app/src/lib/completion.ts Smart image handling with jan_id vs base64 priority
core/src/types/project/ Core project type definitions
Comments suppressed due to low confidence (1)

extensions-web/src/shared/auth/service.ts:1

  • Potential crash if name parts contain empty strings. If parts[0] or parts[parts.length - 1] is an empty string, accessing [0] will be undefined. Add checks: if (parts[0]?.length && parts[parts.length - 1]?.length) before accessing character indices.
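The suggested guard could look like the following hypothetical initials helper (the real `service.ts` code is not shown on this page, so names and behaviour here are assumptions):

```typescript
// Guarded initials extraction: check that the first and last name parts
// are non-empty before indexing into them, as the review suggests.
function initials(name: string): string {
  const parts = name.trim().split(/\s+/)
  const first = parts[0]
  const last = parts[parts.length - 1]
  // Empty parts (e.g. from an all-whitespace name) would make [0] undefined.
  if (!first?.length || !last?.length) return ''
  return first === last
    ? first[0].toUpperCase()
    : (first[0] + last[0]).toUpperCase()
}
```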


@github-actions

github-actions bot commented Nov 18, 2025

Barecheck - Code coverage report

Total: 28.79%

Your code coverage diff: -1.23% ▾

Uncovered files and lines
File / Lines
core/src/browser/extension.ts97-98, 105-108, 117-119, 134, 138, 144-149, 154-155, 189, 191-200
core/src/browser/extensions/project.ts25-26
extensions-web/vite.config.ts1, 3-17, 19-20
extensions-web/src/index.ts8-11, 14, 23, 26, 74-79
extensions-web/src/conversational-web/api.ts5-6, 30, 33-35, 37-40, 42-49, 51-55, 57-64, 66-69, 71-79, 81-82, 84-93, 98-105, 107-112, 114-117, 119-120, 122-129, 131-132, 134-140, 142-146, 148-156, 158-159, 161-167, 169-174, 176-178, 184-185, 187-194, 197-198, 200-207, 209-210, 212-218, 220-224, 226-233, 235-236, 238-243, 245-246, 248-254, 256-257, 259-265, 268-271, 273-280, 282-285, 287-298, 300-301, 303-310
extensions-web/src/conversational-web/const.ts6-12, 14-19
extensions-web/src/conversational-web/extension.ts6, 12-14, 16, 18, 21-24, 26, 29-41, 43-50, 52-63, 65-79, 81-92, 95, 99-101, 103-108, 111, 114, 116-119, 121, 124-127, 129-131, 133-136, 138-140, 142-145, 147-153, 155-161, 163-169
extensions-web/src/conversational-web/index.ts1, 3, 6, 25
extensions-web/src/conversational-web/utils.ts1, 3, 5-24, 26-28, 30-46, 48-49, 51-65, 67-70, 72-75, 77-124, 127-136, 139-147, 150-157, 160-163, 165, 168, 170, 172-185, 187-190, 192-196, 202-210, 213-244, 247, 249-251, 253, 256-261, 263-266, 268-277, 279-287, 289-290, 292-293, 295-296, 298-305, 307-309, 315-316, 318-347, 349-369
extensions-web/src/jan-provider-web/api.ts7-10, 15, 22-24, 26-29, 31-32, 34-39, 41-42, 130, 133-134, 136-138, 140-145, 147-149, 151-155, 157-159, 161-162, 164-167, 169, 171-174, 176-185, 187-188, 190-191, 193-196, 198-206, 208-212, 214, 216-229, 231-238, 240-241, 243-250, 252-255, 257-259, 261-262, 264-265, 267-268, 270-272, 274-275, 278, 280-283, 285-288, 290-298, 300-311, 313-315, 317-327, 329-338, 340-345, 347-350, 352-355, 357-360, 362-363, 365-366, 368-370, 372-374, 376-378, 380
extensions-web/src/jan-provider-web/const.ts1-5, 7
extensions-web/src/jan-provider-web/provider.ts6, 16-19, 21-23, 25-26, 28, 30, 32, 34-38, 40-41, 44-46, 48-49, 51-60, 62-63, 66-68, 70-72, 74-95, 97-99, 101-118, 120-121, 124, 126-133, 135, 137-145, 147-149, 151-156, 158-159, 161-174, 176-180, 182-184, 188-191, 194-195, 198-199, 201-214, 216, 218-219, 221, 224-226, 228-242, 248-256, 258-266, 268-271, 274-277, 279-284, 286, 288-321, 324-328, 330-333, 336-337, 340-343, 345-347, 349-350, 352-356, 358-362, 364-368, 370-374, 376-380, 382-386, 388, 390-393
extensions-web/src/project-web/extension.ts6, 14, 16, 19-22, 24-26, 31-47, 49-57, 59-67, 69-77, 79-90, 92-99, 101-117, 119-127, 129-138
extensions-web/src/project-web/index.ts5, 11
extensions-web/src/services/uploads/web.ts6-7, 16-18, 21, 24, 27-32, 77, 80, 82-87, 93-102, 104-106, 109, 111-114, 118-119, 121-124, 127-128, 130-133, 136-139, 141-144, 146-147, 150-154, 156-161, 164-167, 169-180, 183-185, 190-194, 197, 199-206, 208-216, 218-221, 223, 226-228, 230-231, 236-245, 250-253, 257-258, 263-266, 268-280, 285-293, 298-301, 304-305, 308-312, 314-318, 324-328, 330-332, 335-337, 340-343, 345-347, 349, 351, 353, 355, 357-361, 363, 365-371, 373-375, 377, 379, 381, 383-387, 389-398, 404-408, 410-412, 414-415, 417, 419-424, 426, 428-434, 436-438, 440, 442, 444, 446-450, 452-462, 467, 469
extensions-web/src/shared/index.ts1-3
extensions-web/src/shared/auth/api.ts7, 47-54, 56-59, 64-71, 73-77, 79-80, 86-96, 98-102, 104-105, 110-122, 124-128, 130-131, 136-148, 150-154, 156-157, 162-172, 174-178, 180-181, 186-197, 199-204
extensions-web/src/shared/auth/const.ts7-12, 15-23, 26, 30, 34, 37-40
extensions-web/src/shared/auth/index.ts2-3, 18
extensions-web/src/shared/auth/service.ts8-9, 15, 29-30, 32, 34, 36-40, 42-43, 45-49, 54-59, 65-66, 68-70, 73-76, 78-79, 82-83, 85-87, 89-102, 104-107, 109-112, 114-117, 119-122, 126, 129-133, 135-137, 139-140, 142, 145-146, 148-151, 153-159, 164-165, 167-170, 172-178, 183-188, 190-193, 195, 197, 200-201, 203-208, 214-215, 217-227, 229, 231-233, 235-236, 238-243, 245-249, 253-257, 262-263, 265-268, 271-273, 275-285, 287-288, 293-294, 296-297, 299-301, 303, 306, 308, 310-315, 317-319, 324-330, 335-337, 342-346, 348-349, 354-359, 364-365, 367-371, 376-380, 382-383, 385-393, 395-398, 400-405, 410-412, 417-421, 426-427, 429-431, 436-437, 439-441, 446-447, 449-451, 456-457, 459-461, 466-469, 474-477, 480-485, 490-493, 495-499, 501-508, 510-512, 514-516, 521-529, 531-533, 538-541, 546-549, 552, 554-555, 558, 560, 562-566, 571-573, 578-580, 585-592, 596-601, 613-618
extensions-web/src/shared/auth/types.ts9-12
extensions-web/src/shared/auth/providers/api.ts14-17, 19-20, 22, 24-35, 37-41, 43-49, 51-55, 57-58, 60-64, 67, 69-70, 73, 75-76, 80, 82-89, 91-95, 97-98
extensions-web/src/shared/auth/providers/base.ts7, 9, 17-18, 20-21, 24-25, 29-32, 34-39, 41-42, 46-52
extensions-web/src/shared/auth/providers/index.ts6-7, 10, 13
extensions-web/src/shared/auth/providers/keycloak.ts16, 18-21, 27-29, 36-39
extensions-web/src/shared/auth/providers/types.ts8
extensions-web/src/shared/media/service.ts6, 15, 19, 22-32, 34-35, 37-38, 40-47, 49-58, 60-67, 69, 72-74, 76-83, 85-86, 88-90, 92-93, 95-96, 98-104, 106-111, 115-121, 123-125, 127, 129-138, 140-142, 144-152, 154-155, 157-158, 160-166, 168-173, 175, 177, 179-184, 186-189, 191-192, 194-195, 197-203, 205-210, 212, 214-216, 218-219, 221-223, 225-228, 230-239, 241-248, 250, 252-254, 256-263, 266-269, 271-275, 277-281, 283-288, 290-293, 295-299, 301-315, 325-328, 332-333, 335-336, 341-343, 352-353, 355, 357-358, 360-364, 366-368, 377-381, 389, 391-392, 394-396, 398, 401-404, 406-412, 414-416, 418-419, 425, 435-441, 443, 446-447, 449-450, 455-462, 465, 468
extensions-web/src/shared/media/types.ts192-197, 199-204
web-app/src/routeTree.gen.ts13-36, 40-44, 46-50, 52-56, 58-62, 64-68, 70-74, 76-80, 82-86, 88-92, 94-98, 100-104, 106-110, 112-116, 118-122, 124-128, 130-134, 136-140, 142-146, 148-152, 154-158, 160-164, 166-171, 173-177, 533-557, 559-561
web-app/src/components/UploadedAttachmentImage.tsx6-8, 16-23, 25-26, 28-31, 33-42, 44-46, 48-49, 51-54, 56-57, 59, 61-65, 67, 69-78, 80
web-app/src/containers/ThreadContent.tsx3-12, 17, 22-23, 25, 27-31, 33-35, 37-41, 43-46, 48-52, 54-61, 63, 65, 68-70, 84-86, 89-92, 94-97, 99-101, 103-106, 109-114, 116, 118-121, 123-124, 127-134, 137-146, 148-149, 151-153, 155, 157-165, 167-188, 190-192, 194-206, 208-211, 213-218, 220-224, 226, 230-233, 235-246, 250-256, 258-262, 264-275, 277-280, 284-294, 296-307, 309-313, 316-323, 325-335, 337-348, 351-358, 360-361, 364-369, 371-372, 375-378, 380-389, 391-394, 396-401, 403-410, 412-415, 417-420, 422-427, 429-435, 437, 439-446, 448, 451-459, 461-462, 464-465
web-app/src/containers/auth/UserProfileMenu.tsx6-7, 15-20, 22-28, 30-31, 33-39, 41-42, 45-49, 51-55, 57-58, 60-69, 71-73, 75-87, 89-95, 97-108, 110-127, 129-135, 137
web-app/src/lib/completion.ts72-76, 94-97, 169-177, 179, 181-182, 184-186, 188, 190, 193-198, 200-207, 209-219, 222-230, 232-245, 247-263, 265-286, 302-308, 359, 362-367, 369-372, 446-447, 450-451, 466, 468, 470-476, 478-480, 482-484, 526-527, 536-537, 562-567, 577, 581-585, 589-605, 614-615, 628-637, 639-647, 650-659, 707, 709, 713-721, 724-732, 735-743, 745-747, 759-764, 766-767, 769-772, 775-778, 781-782, 784-787, 789, 791-793, 795-803, 805-806, 813, 815-817, 820-825, 827-828, 838-843, 845-847, 850-851, 853-855, 858-863, 866, 869-875, 877-879, 881-883, 885-887, 889-891, 893-894
web-app/src/lib/platform/const.ts6-7, 13, 15-16, 19-20, 23-24, 27-28, 31-32, 35-36, 39, 42, 45-46, 49, 52, 55, 58-59, 62-63, 66, 69, 72, 75-76, 79, 82, 85, 88-90
web-app/src/routes/auth.keycloak.callback.tsx13-18, 20-24, 26, 28-30, 32-37, 40-42, 44-46, 50-52, 54-55, 59, 61, 64-66, 69-70, 72-74, 76, 79, 81, 84-86, 88-90, 93-95, 97-98, 100-108, 110
web-app/src/routes/project/$projectId.tsx1-2, 4-5, 8, 10-13, 15-21, 23-25, 27-31, 33, 35-38, 40-41, 44-46, 49-51, 54-55, 58, 60-61, 63-66, 68-74, 76, 78-89, 91, 93-98, 100-101, 103-109, 111-123, 125, 128-134, 136-151, 153-156, 158-163, 165-171, 173
web-app/src/services/index.ts166-190, 192-203, 248-251, 256-259, 354-356, 359-361, 364-366
web-app/src/services/projects/default.ts14-23, 26-35, 38-39, 42-46, 48-50, 52-53, 56-63, 66-69, 72-74, 77-78, 83-85
web-app/src/services/projects/server.ts18-25, 28-36, 43-45, 47-52, 55-59, 62-65, 67-73, 76-80, 82-87, 90-99, 102-111, 114-117, 119-122, 124-130, 135-136, 138-141, 144-156, 162-166, 168-170, 172-175, 177-182, 185-188, 190-204, 210-214, 217-226, 229-230, 233-239, 241-246, 252-261, 267-270, 272, 274-279, 281-286, 288-293, 299-302, 304, 306-313, 315-320
web-app/src/services/providers/tauri.ts5-13, 15-16, 18-19, 21-29, 31-33, 35-41, 43-47, 49-71, 74-76, 78-87, 89-90, 92-121, 123-128, 130-133, 135-138, 142-147, 150-153, 155-159, 162-165, 167, 169-186, 188, 191, 193-196, 198-203, 205-215, 218-223, 225-232, 235-239, 242-246, 248-273
web-app/src/services/uploads/default.ts9, 11-13, 17-18, 23-25, 28-43
web-app/src/types/enhanced-attachment.ts55-57, 59-61, 64, 69-75, 77, 82-87, 90, 100-110, 113-115, 117-119, 121-123, 125, 127-128, 130-135, 138, 144-151, 154-156, 158-160, 162-164, 166-168, 170-172, 175-176, 178-180, 182-184, 186-188, 190-192, 194-195


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.


Copilot AI review requested due to automatic review settings November 19, 2025 04:57
Copilot finished reviewing on behalf of dinhlongviolin1 November 19, 2025 04:59

Copilot AI left a comment


Pull Request Overview

Copilot reviewed 54 out of 55 changed files in this pull request and generated 6 comments.

Comments suppressed due to low confidence (1)

Dockerfile:1

  • Removing the COPY ./pre-install ./pre-install line from Dockerfile may break the build if the pre-install script is still required. Ensure this is intentional and that any pre-installation steps are handled elsewhere or are no longer needed.
# Stage 1: Build stage with Node.js and Yarn v4



@louis-jan louis-jan left a comment


LGTM


6 participants