
Temporarily reintroduce a global type inference lock #60689

Merged
KristofferC merged 1 commit into JuliaLang:master from xal-0:global-typeinf-lock on Jan 20, 2026

Conversation

@xal-0 (Member) commented Jan 14, 2026

It is likely that the performance regressions caused by unnecessary `tojlinvoke`/`specsig_to_fptr1` trampolines outweigh the benefit of multithreaded inference (see #60241). With this change, a global lock protects `jl_type_infer`, serializing all type inference and LLVM code generation (similar to 1.11) while leaving LLVM compilation alone.

All the infrastructure to do multithreaded inference is left in place, so that it can be re-enabled when we're able to emit LLVM IR before all the outgoing edges are themselves emitted (for example, with #60031).

@oscardssmith (Member)

Why not just get #60031 merged instead?

@KristofferC added the backport 1.12 (Change should be backported to release-1.12) and backport 1.13 (Change should be backported to release-1.13) labels on Jan 15, 2026
@KristofferC (Member)

Not backportable and a much more complicated change? If it was so easy to merge it probably would have been already 😆

@topolarity (Member)

Yeah, this unfortunately feels like the right direction to me.

Making progress on big features like multi-threaded inference is great, but we can't turn them on until they avoid major downsides.

@vchuravy (Member) left a comment


Can we add a define to options.h that turns it back on?
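Such a toggle might look like the following. This is purely a hypothetical sketch: neither the macro name nor its wiring exists in Julia's options.h; it only illustrates the kind of compile-time switch being requested.

```c
/* Hypothetical options.h fragment (not Julia's actual code): leave the
 * multithreaded-inference machinery in place, but select at build time
 * whether the global lock serializes it. */
#ifdef JL_ENABLE_PARALLEL_TYPEINF
#  define TYPEINF_LOCK_IS_GLOBAL 0   /* keep multithreaded inference on */
#else
#  define TYPEINF_LOCK_IS_GLOBAL 1   /* serialize via the global lock */
#endif
```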

@DilumAluthge mentioned this pull request Jan 15, 2026
@JamesWrigley (Member)

Is this planned to go into 1.12.5? I've run into #60241 in some of my code so it'd be great if this went in the next patch release.

@KristofferC merged commit 23a98ff into JuliaLang:master on Jan 20, 2026
11 checks passed
KristofferC pushed a commit that referenced this pull request Jan 26, 2026
@KristofferC mentioned this pull request Jan 26, 2026
KristofferC pushed a commit that referenced this pull request Jan 28, 2026
@KristofferC removed the backport 1.12 and backport 1.13 labels on Feb 3, 2026
xal-0 added a commit to xal-0/julia that referenced this pull request Mar 7, 2026
@NHDaly (Member) commented Mar 19, 2026

I understand that we can't backport the fundamental fix for the parallel codegen all the way to 1.12, because it requires a lot of heavy infrastructure changes.

In order to reduce the impact of having to put the lock back, can we backport a more targeted lock to 1.12.6?

As far as I understand, the issue resulted from multiple threads trying to compile the same method instance in parallel. Can we reduce the scope of the lock added in 1.12.5 to a per-MethodInstance lock, rather than a global lock around all of inference, as a patch in 1.12.6?

^ The above was discussed w/ @JeffBezanson and @KristofferC.
CC: @xal-0.
