perf: add per-directory module resolution cache to typescript-go#764
Conversation
Okay, I have de-slopified this PR. All of the code from the last commit was mostly handwritten, and I updated the design to make more sense.
@jakebailey I'm sorry to ping you directly, but this might be of interest to you. I did some profiling on Two questions:
@camchenry can we stage this so we do a more gradual release? First we can release this with a comparison of the cache vs. recomputing (panic if it's different). After 2-3 weeks, we can drop the comparison and just use the cache.
Good idea - any thoughts on how we would like to surface that? Given it's likely to be correct, we could maybe just return a diagnostic that indicates there was a caching issue. I'll be out on vacation this week, so feel free to implement that while I'm away; I probably won't pick this up for a week or so.
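The staged rollout discussed above could look something like the following sketch (all names here are hypothetical, not the PR's actual code): serve resolutions from the cache, but while a `verify` flag is on, also recompute each hit and panic if the cached value ever disagrees with a fresh resolution.

```go
package main

import (
	"fmt"
	"reflect"
)

// ResolvedModule is a stand-in for the real resolution result type.
type ResolvedModule struct {
	ResolvedFileName string
}

type cacheKey struct {
	dir, name string
}

// verifyingResolver serves resolutions from a cache, but while verify is
// enabled it also recomputes each hit and panics on any mismatch, so a
// caching bug surfaces loudly during the trial period.
type verifyingResolver struct {
	cache   map[cacheKey]ResolvedModule
	resolve func(dir, name string) ResolvedModule // the uncached resolver
	verify  bool                                  // drop after the 2-3 week trial
}

func (r *verifyingResolver) Resolve(dir, name string) ResolvedModule {
	k := cacheKey{dir, name}
	cached, ok := r.cache[k]
	if !ok {
		cached = r.resolve(dir, name)
		r.cache[k] = cached
		return cached
	}
	if r.verify {
		if fresh := r.resolve(dir, name); !reflect.DeepEqual(cached, fresh) {
			panic(fmt.Sprintf("resolution cache mismatch for %q in %q", name, dir))
		}
	}
	return cached
}

func main() {
	r := &verifyingResolver{
		cache:  map[cacheKey]ResolvedModule{},
		verify: true,
		resolve: func(dir, name string) ResolvedModule {
			return ResolvedModule{ResolvedFileName: dir + "/node_modules/" + name + "/index.js"}
		},
	}
	fmt.Println(r.Resolve("/proj/src", "lodash").ResolvedFileName) // computed and cached
	fmt.Println(r.Resolve("/proj/src", "lodash").ResolvedFileName) // cache hit, verified against a recompute
}
```

Dropping the comparison later is then a one-line change (set `verify` to false), rather than a structural rewrite. Surfacing the mismatch as a diagnostic instead of a panic would only change the body of the `if r.verify` branch.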
I think @andrewbranch might have a better idea than me as to whether or not this is a valid optimization. Of course, all of the patches you do are worth talking about upstreaming (since we would like to actually just have a Go API where you do not need to do any of this at some point in the future).
Yes, that will be great! I'd like to try and reduce the number of patches we have, and we'll definitely try out the Go API when it's available. In the meantime, I will see if I can upstream more of this code. For this PR specifically, I'll try to benchmark it in typescript-go and see if it has any impact.
Ironically, we used to have basically this, and @auvred showed that it was worse than not having any caching: microsoft/typescript-go#673
Oh wow, that is really interesting. I wonder if this patch is correct then, given I'm not using any mutexes or anything, just depending on the
I haven't looked too closely at the differences yet, and the original cache implementation is one of the first bits of Go code I ever wrote, so it's highly possible I did something dumb!
I'm not too familiar with Go either, so this has been a good learning opportunity for me! When I get back from being out-of-office, I can definitely try benchmarking this. It seems like if we had this before, then it might be worth trying this new implementation and seeing how it compares. I'm sure a lot has changed in the codebase since April that might change the calculus of whether this is worth it or not.

Spent some time profiling our linting runs. One hot function that kept coming up was `resolveImportsAndModuleAugmentations`: in many cases, around 44% of sampled stack frames were in this function or in a function called by it.

The idea of this optimization is to cache the results of `ResolveModuleName` and `ResolveTypeReferenceDirective`, which are called frequently by this function. In theory, I think that for the same directory and the same module name, they should always return the same results?

So far, this seems to be correct: the diagnostics we return when linting `kibana`, `vscode`, `posthog`, and other projects all appear to be identical. This has a big effect on performance, since this is seemingly a hot path for us.
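A minimal sketch of the per-directory caching idea, assuming a `sync.Map` for concurrent access (names here are hypothetical, not the actual patch code). The invariant it relies on is the one stated above: Node-style resolution walks up from the importing file's directory, so two lookups of the same specifier from the same directory must resolve identically, making `(directory, specifier)` a valid cache key. Whether this pays off in practice depends on contention, which is why benchmarking it (as in the linked upstream issue) still matters.

```go
package main

import (
	"fmt"
	"sync"
)

// ResolvedModule stands in for the real resolution result type.
type ResolvedModule struct {
	ResolvedFileName string
}

// key: same directory + same specifier implies the same resolution,
// because the node_modules walk-up starts from the directory, not the file.
type key struct {
	dir, name string
}

// dirCache memoizes resolutions without an explicit mutex. LoadOrStore
// keeps the insert race-free: two racing goroutines may both resolve,
// but only one result is ever stored and both callers see that one.
type dirCache struct {
	entries sync.Map // key -> ResolvedModule
	resolve func(dir, name string) ResolvedModule
}

func (c *dirCache) Resolve(dir, name string) ResolvedModule {
	k := key{dir, name}
	if v, ok := c.entries.Load(k); ok {
		return v.(ResolvedModule)
	}
	v, _ := c.entries.LoadOrStore(k, c.resolve(dir, name))
	return v.(ResolvedModule)
}

func main() {
	c := &dirCache{resolve: func(dir, name string) ResolvedModule {
		return ResolvedModule{ResolvedFileName: dir + "/node_modules/" + name}
	}}
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ { // concurrent lookups from many files in one directory
		wg.Add(1)
		go func() {
			defer wg.Done()
			_ = c.Resolve("/proj/src", "react")
		}()
	}
	wg.Wait()
	fmt.Println(c.Resolve("/proj/src", "react").ResolvedFileName)
}
```

A second `sync.Map` keyed the same way would cover `ResolveTypeReferenceDirective`, since its results must be kept separate from module resolutions.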
initial benchmark results
flamegraph before:
flamegraph after:
allocs before (kibana):
allocs after (kibana):