Version 0.9.9.3 adds support for new OpenAI, Anthropic, Gemini and xAI models, better Markdown support, visual enhancements like LLM response highlighting and link indicators, powerful preset composition, and many UI tweaks and bug fixes. It also fully decouples the gptel-request API from gptel's UI, to better support general-purpose user scripting and the development of alternative UIs.
Breaking changes
- The models `gpt-4-copilot` and `o1` have been removed from the default list of GitHub Copilot models. These models are no longer available in the GitHub Copilot API.
- Link handling in gptel chat buffers has changed, hopefully for the better. When `gptel-track-media` is non-nil, gptel follows links in the prompt and includes their contents with queries. Previously, links to files had to be placed "standalone", surrounded by blank lines, for the files to be included in the prompt. This limitation has been removed: all supported links in the prompt are now followed.

  The "standalone" limitation was imposed to make included links stand out visually and to avoid accidental inclusions, but in practice users were often unsure whether a link would be sent. gptel now prominently annotates links that will be followed and sent (see below), so it should be visually obvious when a link will be included. You can revert to the old behavior by customizing gptel; see below.
- The model `claude-3-sonnet-20240229` has been removed from the default list of Anthropic models. This model is no longer available in the Anthropic API.
- The models `gemini-1.5-flash-8b`, `gemini-1.5-flash`, `gemini-1.5-pro-latest`, `gemini-2.0-flash-thinking-exp-01-21`, `gemini-2.0-flash-lite-preview-02-05`, `gemini-2.5-flash-lite-preview-06-17`, `gemini-2.5-pro-preview-06-05`, `gemini-2.5-pro-preview-05-06`, `gemini-2.5-flash-preview-05-20`, `gemini-2.5-pro-preview-03-25` and `gemini-2.5-pro-exp-03-25` have been removed from the default list of Gemini models. These models are either no longer available or have been superseded by their stable, non-preview versions. If required, you can add these models back to the Gemini backend in your personal configuration:

  ```elisp
  (push 'gemini-2.5-pro-preview-05-06
        (gptel-backend-models (gptel-get-backend "Gemini")))
  ```
New models and backends
- GitHub Copilot backend: Add support for `gpt-5-codex`, `claude-sonnet-4.5` and `claude-haiku-4.5`.
- Anthropic: Add support for `claude-sonnet-4-5-20250929` and `claude-haiku-4-5-20251001`.
- Gemini: Add support for `gemini-pro-latest`, `gemini-flash-latest` and `gemini-flash-lite-latest`. These models point to the latest Gemini model of the corresponding type.
- Gemini: Add support for `gemini-2.5-flash-preview-09-2025` and `gemini-2.5-flash-lite-preview-09-2025`.
New features and UI changes
- New minor mode `gptel-highlight-mode` to highlight LLM responses and more. This is an oft-requested feature: gptel can now highlight responses by decorating the (left) margin or fringe, and by applying a face to the response region. To use it, just turn on `gptel-highlight-mode` in any buffer (not just dedicated chat buffers). You can customize the type of decoration performed via `gptel-highlight-methods`, which see.
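  For example, a minimal sketch of enabling it automatically in dedicated chat buffers (assuming the standard minor-mode hook for `gptel-mode`):

  ```elisp
  ;; Highlight LLM responses in every gptel chat buffer.
  (add-hook 'gptel-mode-hook #'gptel-highlight-mode)
  ;; In other buffers, toggle it manually with M-x gptel-highlight-mode.
  ```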
- Link annotations: When `gptel-track-media` is enabled in gptel chat buffers, gptel follows (Markdown/Org) links to files in the prompt and includes those files with queries. Previously it was not clear whether a link type was supported and would be included, which made this feature unreliable and difficult to use.

  All links in the prompt are now explicitly annotated in real time in gptel buffers. Links that will not be sent are marked as such, and the link tooltip explains why. Links that will be sent are explicitly indicated as well.
- New user options `gptel-markdown-validate-link` and `gptel-org-validate-link`: These control whether links in Markdown/Org buffers are followed and their contents included in gptel's prompt. Their value should be a function that determines whether a link is valid for inclusion with the gptel query. By default all links are allowed, but the options can be customized to require "standalone" link placement, which was gptel's previous behavior.
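  As a hypothetical sketch (the predicate name and its arguments are assumptions; consult the options' docstrings for the actual calling convention), you could approximate the old behavior by only accepting links that sit alone on their line:

  ```elisp
  ;; Hypothetical: assume the validator receives the link's start and end
  ;; positions in the current buffer. This ignores the surrounding
  ;; blank-line requirement of the old "standalone" rule for brevity.
  (defun my/gptel-link-on-own-line-p (beg end)
    "Return non-nil if the link between BEG and END sits alone on its line."
    (save-excursion
      (goto-char beg)
      (skip-chars-backward " \t")
      (and (bolp)
           (progn (goto-char end)
                  (skip-chars-forward " \t")
                  (eolp)))))

  (setq gptel-org-validate-link #'my/gptel-link-on-own-line-p
        gptel-markdown-validate-link #'my/gptel-link-on-own-line-p)
  ```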
- gptel preset specifications can now modify the current values of gptel options instead of replacing them, allowing better composition of presets with your Emacs environment and with each other.

  For example, it is common to want a preset to add LLM tools to the existing set in `gptel-tools` rather than replace it. To this end, preset definitions now accept a small, declarative DSL. For example, you can now write

  ```elisp
  (gptel-make-preset 'websearch
    :tools '(:append ("search_web" "read_url")))
  ```

  to append to the current list in `gptel-tools` instead of replacing it. See the documentation of `gptel-make-preset` for more details.
- You can now apply a preset from gptel's menu using `completing-read` instead of the menu itself. This is bound to `@` in the presets menu, so pressing `@ @` in gptel's menu brings up the `completing-read` prompter.

  This is an interim solution to the problem of the gptel presets menu not scaling well beyond about 25 presets; the menu is intended to be redesigned eventually.
- Tool result and reasoning blocks are now folded by default in Markdown and text buffers. You can cycle their folded state by pressing `TAB` with the cursor on the opening or closing line containing the code fences.
- `gptel-request` is now a standalone library, independent of gptel and its UI. This is intended
  - to provide a clean separation between `gptel-request` (the LLM querying library) and `gptel` (the LLM interaction UI),
  - to make it simpler to create alternative UIs for gptel, where the package author can simply `(require 'gptel-request)` to access the gptel-request API, and
  - to make it so gptel does not need to be loaded to use `gptel-request`.

  The `gptel-request` feature does not provide any response handling, and expects the user to provide a response callback. If you want to reuse `gptel-send`'s response handler you can `(require 'gptel)`. For logistical reasons, the `gptel-request` library will continue to be shipped with gptel.
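  As a minimal sketch of programmatic use after loading only the library (see `gptel-request`'s docstring for the full set of keyword arguments and the callback's calling convention; error handling here is deliberately simplistic):

  ```elisp
  (require 'gptel-request)

  ;; Send a one-off query and display the response in the echo area.
  ;; The callback receives the response (a string on success) and an
  ;; info plist describing the request.
  (gptel-request "Summarize the Unix philosophy in one sentence."
    :system "You are a terse assistant."
    :callback (lambda (response info)
                (if (stringp response)
                    (message "LLM: %s" response)
                  (message "gptel-request failed: %s"
                           (plist-get info :status)))))
  ```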
- New user option `gptel-context`: This variable can be used to specify additional context sources for gptel queries, usually files or buffers. It addresses the longstanding requests for buffer-local context specification, as well as context specification in gptel presets and in programmatic gptel use. As always, in a preset definition this corresponds to the key named after the variable with the "gptel-" prefix stripped:

  ```elisp
  (gptel-make-preset 'with-docs
    :context '("./README.md" "./README" "./README.org"))
  ```

  Each entry in `gptel-context` is a file path or a buffer object, but other kinds of specification are possible. See its documentation for details.
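  A sketch of buffer-local use (the file paths below are placeholders), for instance from a mode hook or a `.dir-locals.el` entry:

  ```elisp
  ;; Always send these files along with queries made from this buffer.
  (setq-local gptel-context '("./README.org" "./docs/architecture.md"))
  ```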
- `gptel-mcp-connect` can now start MCP servers synchronously. This is useful for scripting purposes, when MCP tools need to be available before performing other actions. One common use is starting MCP servers when applying a gptel preset.
- "gitignored" files are omitted by default when adding directories to gptel's context. This behavior is controlled by the user option `gptel-context-restrict-to-project-files`. (This only applies to directories; individual files specified via `gptel-add-file` will always be added to the context.)
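  For example, assuming the option is a boolean that defaults to t (check its docstring), you can include gitignored files as well:

  ```elisp
  ;; Include files ignored by git when a directory is added to the context.
  (setq gptel-context-restrict-to-project-files nil)
  ```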
- `gptel-make-bedrock` now checks for the `AWS_BEARER_TOKEN_BEDROCK` environment variable and uses it for Bedrock API key based authentication if present. See https://docs.aws.amazon.com/bedrock/latest/userguide/api-keys.html.
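  For example, if Emacs was not started from a shell that exports this variable, you can set it yourself before creating the Bedrock backend (the token string below is a placeholder for your own key):

  ```elisp
  ;; Make the Bedrock API key visible to Emacs so gptel-make-bedrock can
  ;; pick it up; replace the placeholder with your long-lived API key.
  (setenv "AWS_BEARER_TOKEN_BEDROCK" "your-bedrock-api-key")
  ```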
What's Changed
- gptel-gemini: Update model information by @benthamite in #1063
- gptel-integrations: Add sync mcp server init by @FrauH0lle in #1055
- gptel-context: Allow exclusion of gitignored files by @benthamite in #665
- gptel: allow functions for tool confirmation by @tobjaw in #968
- gptel-gh: add grok-code-fast-1 model by @DiogoDoreto in #1070
- AWS bedrock-update by @akssri in #1053
- Fix typo in reasoning content description by @ponelat in #1079
- gptel--gh-models: add new models by @kiennq in #1083
- Bugfix: additional directive on nil system message by @marcolgl in #1103
- Fix context ordering for KV cache reuse by @aagit in #1108
- gptel-context.el: move the context before the prompt by @aagit in #1110
- Update README with GITHUB private endpoint configuration by @CsBigDataHub in #1116
- Support thinking responses from Ollama by @nottwo in #1120
- gptel--markdown-validate-link: dont increase 3 to file link extraction if file:// is not presented by @kiennq in #1127
- Add latest anthropic models to bedrock, remove eol model by @matthemsteger in #1131
- Support streaming tool calls with Ollama by @nottwo in #1124
- Refactor gptel-gh-login for SSH and terminal Emacs users by @jsntn in #1133
New Contributors
- @tobjaw made their first contribution in #968
- @DiogoDoreto made their first contribution in #1070
- @ponelat made their first contribution in #1079
- @marcolgl made their first contribution in #1103
- @aagit made their first contribution in #1108
- @CsBigDataHub made their first contribution in #1116
- @nottwo made their first contribution in #1120
- @jsntn made their first contribution in #1133
Full Changelog: v0.9.9...v0.9.9.3