refactor: separate tsconfck caches per config in a weakmap #17317
Conversation
Why can't multiple dev servers share the same cache? Shouldn't it be independent of the values of the config?
@patak-dev mentioned that some projects use multiple server instances simultaneously with different configs. They might watch entirely different subtrees of a monorepo, and having a cache for each one limits the refresh to that subtree.
I see. Though there is still the tradeoff that if the subtrees overlap, we would have duplicated the cache in memory. Personally I think since editing the
It's not only about configs outside of the root. Even if a new tsconfig is added inside, you have to restart that server. It's kind of niche because most setups only involve one dev server, so they won't see much of a difference, but it can help with multiple servers. Having overlap and reading + caching twice is the nature of that overlap. Given that we try to avoid global caches (even more so with the new environments splitting things more), I'm inclined to follow this one. The alternative PR still has that global cache.
Do you mean that the current
I still don't quite understand why we need to duplicate the cache per server though. If a tsconfig is added inside a root, the server already restarts today, right? If it's outside the root (e.g. another subpackage), in the current
But the tsconfck cache would still get flushed if it is just one global one...
I think it's ok since editing the

If we can solve granular invalidation (which is understandably tricky), we can also keep the current single cache today.
We currently have a separate

But I'm ok keeping a global one too. Restarting all servers is an ok tradeoff to pay for sharing a single one.

@dominikg I think #17304 won't do that though. It will currently only restart the server that is watching the json file and clear the global cache, so the other servers will end up with a possibly outdated state. For this to work, it should maybe be similar to the current code in Vite 5 but with a
Granular invalidation is too complex, and given how rare it is to edit a tsconfig while dev is running, I'd still avoid that. If we keep one global tsconfck cache, the question becomes how we deal with ensureWatchedFile when there are multiple servers, and also how to reload all environments of all servers if just one of them detected a change that flushed the global tsconfig cache. Having one tsconfck cache per server (and its watcher) solves this issue, as each server is in control of its own fate. Keeping one global cache could work if we declared a breaking change and did not call
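The "each server is in control of its own fate" point can be sketched as follows. This is a hypothetical illustration, not Vite's actual code: `FakeWatcher` stands in for the chokidar FSWatcher each server owns, and `DevServer`/`tsconfigCache` are invented names.

```typescript
// Hypothetical sketch: each server owns its cache and its watcher, so a
// tsconfig change flushes only that server's cache, leaving other
// servers' caches intact.
type ChangeListener = (file: string) => void

class FakeWatcher {
  private listeners: ChangeListener[] = []
  onChange(fn: ChangeListener): void { this.listeners.push(fn) }
  emitChange(file: string): void { for (const fn of this.listeners) fn(file) }
}

class DevServer {
  readonly tsconfigCache = new Map<string, unknown>()
  constructor(readonly watcher: FakeWatcher) {
    // invalidate only this server's cache when one of its watched
    // tsconfig files changes
    watcher.onChange((file) => {
      if (/tsconfig(\..+)?\.json$/.test(file)) this.tsconfigCache.clear()
    })
  }
}
```

With a single global cache instead, the `emitChange` handler of one server would have to flush state shared by all of them, which is exactly the cross-server reload problem described above.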
We have a free exported function

Actually, this is a problem in general. If we have a single tsconfck cache, every time a server is destroyed, all the cached

I think that properly sharing this cache is quite hard as it's tied to the assumption of an existing watcher lifecycle. What this PR is doing with a cache per

We still have the same per-server (per-watcher actually) caches in Vite 6. FsUtils, PackageData, etc. all work against the server

Probably the right thing to do here is to have a new Caches abstraction (that is associated to a server watcher) that we can pass around (kind of like the package data cache). I don't think we need to do a refactoring here in Vite 6 given how much has changed already, and I don't really know how deep it will go. Associating caches to

Given the difficulties of keeping a global cache, I think we have two clean options:

Or we keep trying to have a global cache that only works in some cases, and we document which things are supported. My vote goes to one of the two clean options above.
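The "Caches abstraction associated to a server watcher" idea could look roughly like this. All names here are illustrative assumptions, not Vite internals; the point is only that the per-watcher caches share one lifecycle.

```typescript
// Hypothetical sketch: bundle the per-watcher caches (fsUtils,
// packageData, tsconfck, ...) into one object whose lifetime follows
// the watcher, so destroying a server tears down exactly its own
// caches instead of touching a global one.
class Caches {
  readonly fsUtils = new Map<string, unknown>()
  readonly packageData = new Map<string, unknown>()
  readonly tsconfck = new Map<string, unknown>()

  // called when the owning watcher is closed
  dispose(): void {
    this.fsUtils.clear()
    this.packageData.clear()
    this.tsconfck.clear()
  }
}

class ServerWatcher {
  readonly caches = new Caches()
  close(): void {
    // the caches die with the watcher instead of lingering globally
    this.caches.dispose()
  }
}
```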
@dominikg I reverted the changes to esbuild tsconfck in the v6/environment-api branch 60f90a0. It will be easier to test the Environment API compat if we fix this in a separate PR once everything is green again. I still think we should do this for v6. We can rebase this PR to point to main once we merge the other PR.
This reverts commit 60f90a0.
Force-pushed from fe5b0ef to 9f56692
Force-pushed from 9f56692 to 22ff43f
/ecosystem-ci run
📝 Ran ecosystem CI on
✅ analogjs, astro, ladle, laravel, marko, nuxt, previewjs, quasar, qwik, rakkas, redwoodjs, storybook, unocss, vite-plugin-pwa, vite-plugin-react, vite-plugin-react-swc, vite-plugin-svelte, vite-plugin-vue, vite-setup-catalogue, vitepress, vuepress
vite-environment-example is failing because of #18255.
I think this is a bug fix. The parameter of `transformWithEsbuild` is a bit bloated, but I guess it can't be helped. I thought about replacing `config` and `watcher` with a pluginContext (because it's possible to obtain both from it), but that made it difficult to use from a non-plugin context.
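The trade-off mentioned here can be illustrated with a toy helper. None of these names are Vite's actual API (`TransformEnv` and `transformSource` are invented); the sketch only shows why explicit `config`/`watcher` options keep a helper callable outside a plugin hook.

```typescript
// Hypothetical sketch: optional, explicit `config`/`watcher` options
// widen the signature, but let a standalone caller (e.g. a unit test)
// simply omit them, which a required pluginContext would not allow.
interface TransformEnv {
  config?: { root: string }             // stand-in for a resolved config
  watcher?: { add(file: string): void } // stand-in for an FSWatcher
}

function transformSource(code: string, env: TransformEnv = {}): string {
  // a plugin could pass both from its context; a non-plugin caller
  // can call this with no environment at all
  env.watcher?.add('tsconfig.json')
  return code.trim()
}
```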
I think it is ok to merge this, to at least remove the global tsconfck cache. @bluwy I'll leave this one open in case you'd still like to check for other options here.
Description
Alternative to #17304.

The WeakMap keyed on the config ensures that multiple Vite dev server instances don't share the global tsconfck cache.
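A minimal sketch of that pattern, assuming simplified stand-in types (`ResolvedConfig` and `TSConfckCache` here are not the real Vite/tsconfck types):

```typescript
// Hypothetical sketch of a per-config cache keyed by a WeakMap: each
// resolved config object gets its own cache, and because the key is
// weak, the cache is garbage-collected together with its config when
// the server is gone.
type ResolvedConfig = { root: string }

class TSConfckCache {
  private entries = new Map<string, unknown>()
  get(file: string): unknown { return this.entries.get(file) }
  set(file: string, parsed: unknown): void { this.entries.set(file, parsed) }
}

const tsconfckCaches = new WeakMap<ResolvedConfig, TSConfckCache>()

function getTSConfckCache(config: ResolvedConfig): TSConfckCache {
  let cache = tsconfckCaches.get(config)
  if (!cache) {
    cache = new TSConfckCache()
    tsconfckCaches.set(config, cache)
  }
  return cache
}
```

Two dev servers with different config objects thus get independent caches, while repeated lookups for the same config reuse one cache, which is the behavior the PR description claims.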