Temporarily reintroduce a global type inference lock #60689
KristofferC merged 1 commit into JuliaLang:master from
Conversation
It is likely that the performance regressions caused by unnecessary `tojlinvoke`/`specsig_to_fptr1` trampolines outweigh the benefit of multithreaded inference (see JuliaLang#60241). With this change, we have a global lock protecting `jl_type_infer`, serializing all type inference and LLVM code generation (similar to 1.11), but we leave LLVM compilation alone. All the infrastructure for multithreaded inference is left in place, so that it can be re-enabled once we are able to emit LLVM IR before all the outgoing edges are themselves emitted (for example, with JuliaLang#60031).
Why not just get #60031 merged instead?

Not backportable and a much more complicated change? If it was so easy to merge it probably would have been already 😆

Yeah, this unfortunately feels like the right direction to me. Making progress on big features like multi-threaded inference is great, but we can't turn them on until they avoid major downsides.
vchuravy left a comment
Can we add a define to options.h that turns it back on?
Is this planned to go into 1.12.5? I've run into #60241 in some of my code so it'd be great if this went in the next patch release.
(cherry picked from commit 23a98ff)
I understand that we can't backport the fundamental fix for parallel codegen all the way to 1.12, because it requires a lot of heavy infrastructure changes. In order to reduce the impact of having to put the lock back, can we backport a more targeted lock to 1.12.6? As far as I understand, the issue resulted from multiple threads trying to compile the same method instance in parallel. Can we reduce the scope of the lock that was added for 1.12.5 to a per-MethodInstance lock, rather than a global lock around all of inference, as a patch in 1.12.6?

^ The above was discussed w/ @JeffBezanson and @KristofferC.