Starting 2.0 of LobeHub(LobeChat): A System Reconstruction and Reflection #10007
Replies: 16 comments 14 replies
-
Nice, asynchronous execution really is a very sensible direction. Who knows, by 3.0 it might be a full OS 😂
-
Brilliant. Looking forward to it.
-
As an individual user I have tried many similar apps, and none of them except lobechat meets my needs. But lobechat's performance issues have always bothered me, so I'm looking forward to the SPA-based rewrite.
-
Support! LobeChat's UI has always been outstanding, and I keep recommending LobeChat to friends, but they all say it's too laggy. I also feel it's far less smooth to use than alternatives (gemini.google.com, for example), even though I'm on an M4 chip, whose web performance should be excellent and which gets very high Speedometer scores. The performance problem is the main reason many people around me don't use it.
-
Of the AI services I self-host and use, lobechat is my favorite in terms of product design: it supports online sync and iterates quickly enough. Also, what is the release plan for 2.0, and when will it ship? I see quite a few next builds in the releases already. Is trying 2.0 recommended at this point? And if so, how do I tell which versions are usable?
-
Looking forward to a fresh start~
-
My single-core E5-26xx v4 VPS can't wait. It feels laggy right now: around 400 MB of memory in use, and the CPU jumps to 70% at the slightest action.
-
I just want to know when 1.x will add gpt-5.1. Right now I can't use 5.1's built-in web search.
-
If I upgrade to 2.x, does a self-hosted deployment need to be redeployed from scratch? Or is it like 1.x, where I just update the Docker image and keep all the configuration unchanged?
-
If I want to upgrade from the last 1.x release to a 2.0-next test build now, is there anything I should watch out for? Can I upgrade seamlessly just by switching the image?
-
@yziqi @0xff26b9a8 In your docker compose file, just swap the lobechat image for the new lobehub image.
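For anyone following this reply, the change would look roughly like the snippet below in `docker-compose.yml`. Note that the image names and tags here are placeholders for illustration; check the official LobeHub release notes for the actual 2.x image reference:

```yaml
services:
  lobe:
    # 1.x image line, replaced:
    # image: lobehub/lobe-chat:latest
    image: lobehub/lobehub:latest  # hypothetical 2.x image name; verify against official docs
    ports:
      - "3210:3210"
    env_file:
      - .env  # existing configuration is kept as-is
```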
-
Hasn't the Docker image been updated to v2.0.0 yet?
-
Hey 👋 Quick question - Thanks!
-
Judging from the logs, it seems 2.0 also puts the language rendering on the Node side?


Uh oh!
There was an error while loading. Please reload this page.
Uh oh!
There was an error while loading. Please reload this page.
-
For some time now, LobeChat has received widespread attention and support from the community, which has been hugely encouraging. At the same time, however, we are fully aware that as the user base grows and application scenarios become more complex, the architectural bottlenecks and product-positioning limitations of LobeChat 1.0 are becoming increasingly apparent. The performance issues repeatedly raised in community feedback, along with our own thinking about the future shape of AI interaction, all point to the same conclusion: a simple version iteration is no longer enough to meet these challenges; what we need is a thorough system reconstruction and a change of direction.
Therefore, the development of LobeChat 2.0 has officially begun, and the product core will also undergo a redefinition.
Evolution of the Core Concept: From "Chat" to "Hub"
The first step is a brand-level upgrade: our product name will be upgraded from "LobeChat" to "LobeHub".
We believe that the next generation of AI applications will no longer be limited to "conversations" between a user and a single AI, but will evolve into a "hub" connecting users, diverse agents, vast knowledge, and external services. The term "LobeHub" more accurately captures our vision for the future: an open, modular, and scalable AI ecosystem platform, and a collaborative hub for people and agents.
Fundamental Architectural Transformation: Server-Centric
The architectural evolution of LobeChat was not achieved overnight but went through a clear iterative process that progressively approached the core of the problem:
This hybrid model’s inherent contradictions became evident when supporting more complex AI application scenarios. This raises a fundamental question: how can a program that only runs when the user opens the browser execute a DeepResearch task that may take tens of minutes? And how can it perform scheduled tasks that run automatically in the background?
The answer, clearly, is that it cannot. This is precisely why version 2.0 is determined to rebuild with the server at its core. Many AI capabilities are inherently asynchronous and long-running. What users need is a reliable backend that can execute tasks independently and continuously after receiving instructions, while the frontend interface merely serves as a window to issue commands and receive results.
Therefore, we believe an asynchronous architecture is the true path to efficient human-AI collaboration. Unifying the state, memory, knowledge bases, and task execution of all agents on the server side will evolve LobeHub from an ephemeral chat tool into a truly reliable AI work platform.
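As a rough illustration of this server-centric, asynchronous model (all names below, such as `TaskStore` and `submit`, are hypothetical sketches, not LobeHub's actual API): the server accepts an instruction, returns a task handle immediately, and keeps executing even after the client goes away; any client can poll for the result later.

```typescript
type TaskStatus = "pending" | "running" | "done";

interface Task {
  id: string;
  status: TaskStatus;
  result?: string;
}

// Server-side store: tasks live here, independent of any browser session.
class TaskStore {
  private tasks = new Map<string, Task>();
  private nextId = 1;

  // The client only submits an instruction and gets a handle back.
  submit(instruction: string): string {
    const id = `task-${this.nextId++}`;
    this.tasks.set(id, { id, status: "pending" });
    // Execution continues on the server even if the client disconnects.
    void this.run(id, instruction);
    return id;
  }

  private async run(id: string, instruction: string): Promise<void> {
    const task = this.tasks.get(id)!;
    task.status = "running";
    // Stand-in for a long-running job such as DeepResearch,
    // which could take tens of minutes in reality.
    await new Promise((resolve) => setTimeout(resolve, 10));
    task.status = "done";
    task.result = `report for: ${instruction}`;
  }

  // The frontend polls (or subscribes) for results whenever it reconnects.
  poll(id: string): Task | undefined {
    return this.tasks.get(id);
  }
}
```

The essential property is that the frontend is only a window: it issues the command and checks back later, while the task's lifecycle belongs entirely to the server.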
Addressing User Pain Points: A Complete Overhaul of Performance and Experience
We have not only heard the community feedback about app lag and slow response times but have also conducted an in-depth analysis of its technical root causes. The crux lies in the RSC architecture introduced during the 0.x phase.
I must honestly say that we chose RSC initially because it was widely regarded by the Next.js and React communities as the future direction, and as technology explorers, we embraced this new feature at the earliest opportunity. This was driven by a proactive pursuit of cutting-edge technology and a hope to leverage the latest solutions to benefit our product.
However, after nearly two years of deep practical experience, we have reached a firm conclusion that may run counter to this trend but is grounded in the reality of our product: For an application like LobeChat, which involves high-frequency, real-time interactions, RSC may not be the right answer.
RSC can theoretically reduce the initial JavaScript bundle size and help with SEO. However, in high-frequency interactive application scenarios like LobeChat, its drawbacks become very prominent. The essence of the RSC architecture is to render components on the server and then stream the results to the client. This means that even lightweight, high-frequency user interactions such as switching conversations may trigger a round-trip network request to the server. For conversational applications that demand instant response and smooth experience, this network latency embedded in the architecture is an unavoidable performance bottleneck. Every click is accompanied by subtle but real "micro-lags," which accumulate and seriously affect the product's interactive experience.
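To make the contrast concrete, here is a minimal sketch (illustrative only, not LobeHub's actual store) of the SPA side of the trade-off: once sessions are hydrated into a client-side store, switching between them is a synchronous in-memory lookup, with none of the server round trip that an RSC re-render implies.

```typescript
interface Session {
  id: string;
  messages: string[];
}

// SPA approach: sessions are already hydrated into a client-side store,
// so switching is a synchronous in-memory lookup with zero network latency.
class ClientSessionStore {
  private sessions = new Map<string, Session>();
  private activeId: string | null = null;

  // One-time hydration, e.g. after initial load or a background sync.
  load(sessions: Session[]): void {
    for (const s of sessions) this.sessions.set(s.id, s);
  }

  // O(1), no round trip: the UI can re-render immediately on click.
  switchTo(id: string): Session {
    const session = this.sessions.get(id);
    if (!session) throw new Error(`unknown session ${id}`);
    this.activeId = id;
    return session;
  }

  get active(): Session | null {
    return this.activeId ? this.sessions.get(this.activeId)! : null;
  }
}
```

Under an RSC-style design, `switchTo` would instead be a request to the server to render the new conversation, which is exactly the per-click latency described above.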
Therefore, in version 2.0, we decided to return fully to an SPA architecture. This choice is the result of long-term practice and deep reflection on the relationship between technology and product.
Core Functional Pillars of LobeChat 2.0
On top of this new concept and architecture, we will build a series of genuinely future-oriented features that embody the value of collaboration:
Future Outlook: Thoughts on Workspace, API, and Commercialization Models
2.0 is a brand new starting point. Moving forward, our roadmap will focus on further expanding the ecosystem:
Conclusion: The Next Era Is More Than Just "Chat"
The core goal of LobeHub 2.0 is to build a unified platform connecting users, agents, knowledge, and services. We firmly believe that the next era of AI is not only about "chatting" but also about efficient and intelligent "collaboration."
We are advancing this work on the `next` branch and welcome you to join us: 👉 GitHub Development Branch. Whether you are trying out the new version, participating in design discussions, or directly submitting code, each of your contributions powers the construction of the next-generation AI platform. Looking forward to working with you.