Description
Hey 👋
I'm trying to figure out what slows our VSCode IntelliSense (TS-backed code auto-complete) down, and I may have found the main cause.
I'm not 100% sure it's RTK Query (`createApi`) or not, but I figured it's a decent-enough finding that I may as well share it.
Where I work we have a pretty hefty React/RTK app (not OSS 😞), and we've been dealing with slow (not unbearable, but annoying) response times from VSCode IntelliSense (it feels like >1s until the suggestions list shows up, but looking at the TS Server logs it's probably ~800ms).
I tried a few things, eventually landed on this:
If I any-fy the call to `createApi`, the TS Server logs report that `completionInfo` (which is in charge of computing the list of suggested items that show up in VSCode's autocomplete) drops from 840ms to 122ms.
Here's a video from before the change (note how long it takes from the time I hit `.` to when the suggestions show up):
CleanShot.2023-02-25.at.21.15.29.mp4
Here it is when I change:
export const api = createApi({
to:
export const api = (createApi as any)({
Activity
markerikson commented on Feb 25, 2023
To be honest, we've never tried to do any kind of perf measurements for how long our TS typings take to analyze.
I've only ever seen a couple people even try to do that.
It's something we can look at eventually, but I don't think there's anything we can immediately do.
markerikson commented on Feb 25, 2023
@Andarist if you've got any suggestions for analyzing this or improving our types perf, I'm interested :)
dutzi commented on Feb 25, 2023
I started examining this after reading this post. I think it's a good place to start (check out the comment section, it has some interesting discussion with useful links).
Or, tl;dr:
- TS Wiki on Performance Tracing
- A better tool to inspect TSC's trace.json
Anyhow, I'll try helping!
Andarist commented on Feb 25, 2023
Those performance tracings are quite good - I've used them at least once or twice to get more insight into things. I could take a look at this if you share your trace.
joshuajung commented on Apr 20, 2023
Hi there & thanks for opening this issue @dutzi. I'm happy to have come across it, as it confirms my own testing.
We're using RTK Query with the OpenAPI code generator, resulting in about 7,000 lines of generated endpoint definitions. I can fully reproduce your observations, with VSCode IntelliSense population being significantly slow (1-3s). Changing the API type to `any` as described above immediately 'solves' the issue.
Unfortunately, I'm lacking the knowledge to provide helpful input here, but I'll be monitoring the issue and am happy to help with triage.
bschuedzig commented on Jun 12, 2023
Maybe it is not directly connected, but I also experienced a performance degradation (type completion) in a medium-sized project (using @rtk-query/codegen-openapi).
In our case we found the culprit to be multiple calls to `.enhanceEndpoints()`.
After refactoring the code to use it only once in the whole application, performance was back to expected levels.
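To make the refactor concrete, here is a minimal sketch under made-up names (a hand-written `baseApi` standing in for the codegen output) - illustrative only, not the actual project code:

```ts
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react';

// Hypothetical base API standing in for the codegen output.
const baseApi = createApi({
  baseQuery: fetchBaseQuery({ baseUrl: '/api' }),
  endpoints: (build) => ({
    getUsers: build.query<unknown, void>({ query: () => 'users' }),
    getPosts: build.query<unknown, void>({ query: () => 'posts' }),
  }),
});

// Before: enhancements layered per feature, so the endpoint map's type
// gets re-derived at every step. (In a real app you'd have one variant
// or the other, not both in the same file.)
const withUserTags = baseApi.enhanceEndpoints({
  addTagTypes: ['User'],
  endpoints: { getUsers: { providesTags: ['User'] } },
});
export const layeredApi = withUserTags.enhanceEndpoints({
  addTagTypes: ['Post'],
  endpoints: { getPosts: { providesTags: ['Post'] } },
});

// After: a single enhanceEndpoints() call for the whole application.
export const api = baseApi.enhanceEndpoints({
  addTagTypes: ['User', 'Post'],
  endpoints: {
    getUsers: { providesTags: ['User'] },
    getPosts: { providesTags: ['Post'] },
  },
});
```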
ConcernedHobbit commented on Jul 17, 2023
`createApi` seems painfully slow! A cascade starting from `EndpointBuilder` takes ~652ms by itself. I'll do some experimentation to see if I can find a root cause in our usage of `createApi`, or if it's just because `createApi` is an inherently expensive operation.
Unfortunately we're not OSS, but I can disclose that the `createApi` call does not use `enhanceEndpoints` (directly), but consists of `builder.mutation` calls (with a single `query` defined in each) and nothing else. Otherwise the `createApi` call references just simple types (two interfaces defined without recursive steps, e.g. simply `export type Dummy = { object: string; variables?: string; }`).
This is on latest (1.9.5) with TypeScript 5.1.6.
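For context, a stripped-down sketch of roughly what that shape looks like (names, URL, and number of endpoints are illustrative, not our real code):

```ts
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react';

// Two plain, non-recursive types, as described above.
export type Dummy = { object: string; variables?: string };

export const api = createApi({
  baseQuery: fetchBaseQuery({ baseUrl: '/api' }),
  endpoints: (builder) => ({
    // In the real app this pattern just repeats for every endpoint:
    // a builder.mutation with a single query definition and nothing else.
    createDummy: builder.mutation<Dummy, Dummy>({
      query: (body) => ({ url: '/dummy', method: 'POST', body }),
    }),
    updateDummy: builder.mutation<Dummy, Dummy>({
      query: (body) => ({ url: '/dummy', method: 'PUT', body }),
    }),
  }),
});
```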
rrebase commented on Jul 29, 2023
Hey, I've also noticed a very significant TS IntelliSense slowdown originating from `createApi`. As a temporary workaround I've been any-fying it while working on anything non-RTK related.
Here's a minimal repro, https://github.com/rrebase/rtk-query-ts-perf, with a few endpoints generated from an OpenAPI schema, highlighting the types perf issue. Hopefully someone with experience with the complex types of this codebase can pinpoint the improvement areas from the traces.
Referencing microsoft/TypeScript#34801, as the investigations in MUI about similar TS perf issues might be useful. Personally, I'd always take the trade-off of faster perf over correctness if we're faced with a choice.
mpressmar commented on Aug 14, 2023
We'd like to use RTK Query with the generated react hooks, but with our roughly 400 endpoints (using a custom queryFn) the performance of TypeScript is so dramatically impacted that I'm afraid it's not usable. In IntelliJ, the autocompletion on the "api" object will run for minutes without returning a suggestion list.
joshuajung commented on Aug 16, 2023
I think I made some progress on this issue, at least for our specific setup.
In our project we are already using `.enhanceEndpoints()` only once, to add caching behavior (`providesTags`/`invalidatesTags`). Still, @bschuedzig's comment got me thinking about whether it would be possible to short-circuit the types from the unenhanced API to the enhanced version. This does not come with any side effects in our case, as our caching enhancements do not modify the API typings in any way.
So here is what I did (only the last line changed):
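A minimal sketch of the idea, with an illustrative `baseApi` and endpoint names (the enhancements only add tags, so the typings are identical either way):

```ts
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react';

// Illustrative stand-in for the generated, unenhanced API.
const baseApi = createApi({
  baseQuery: fetchBaseQuery({ baseUrl: '/api' }),
  tagTypes: ['Widget'],
  endpoints: (build) => ({
    getWidgets: build.query<unknown, void>({ query: () => 'widgets' }),
    updateWidget: build.mutation<unknown, unknown>({
      query: (body) => ({ url: 'widgets', method: 'PUT', body }),
    }),
  }),
});

// Tag-only enhancements: runtime caching behavior changes,
// endpoint typings do not.
const enhanced = baseApi.enhanceEndpoints({
  endpoints: {
    getWidgets: { providesTags: ['Widget'] },
    updateWidget: { invalidatesTags: ['Widget'] },
  },
});

// The "last line" change: publish the enhanced object under the unenhanced
// API's type, so consumers only pay for the simpler baseApi type instead of
// the freshly re-derived enhanced one. (`as unknown as` is the blunt form;
// a plain `as typeof baseApi` may also be accepted.)
export const api = enhanced as unknown as typeof baseApi;
```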
For our project, this improved IntelliSense performance by about 70%, which makes the difference between slow and unusable. Delay until IntelliSense for a regular Query Hook shows up is now down to around 2-3 seconds – still hurts, but no longer enough to grab a coffee. Hope this helps some of you folks as well.
markerikson commented on Aug 22, 2023
No idea when we'll ever have time to look into this, but slapping a link here for later reference. Tanner Linsley just pointed to some TS perf debugging resources:
And some more here:
markerikson commented on Aug 22, 2023
@ConcernedHobbit: how did you generate that step-percentage perf information?
ConcernedHobbit commented on Aug 23, 2023
I used `tsc` with the `--generateTrace` flag and manually took a look at the output in the Perfetto.dev web app.
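For reference, an invocation along the lines of `tsc -p tsconfig.json --noEmit --generateTrace ./ts-trace` (paths illustrative) writes `trace.json` and `types.json` into the target folder, and the `trace.json` can then be loaded at https://ui.perfetto.dev for inspection.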