Move the implementation of Monitor to managed in CoreCLR #118371
base: main
Conversation
src/coreclr/System.Private.CoreLib/src/System/Threading/SyncTable.CoreCLR.cs
…er once to avoid racing. Furthermore, handle an uninitialized lock ID in ObjectHeader.CoreCLR.cs. There's a possibility that the first time the managed Lock type's thread id tracking is used is on releasing a lock that was upgraded to a fat lock.
…inlock for thin->fat lock upgrading
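The commit messages above are about snapshotting the object header once and handling a thin-to-fat lock upgrade. As a rough illustration of why a single read matters (all names and bit layouts here are hypothetical, not the actual CoreCLR header format): two separate reads of the header could straddle an upgrade performed by another thread and see an inconsistent mix of thin-lock and fat-lock state.

```csharp
using System.Threading;

// Illustrative sketch only; the real CoreCLR object header layout and
// bit assignments differ. SyncBlockFlag is a made-up bit standing in
// for "this object has been upgraded to a fat lock".
internal static class HeaderReadSketch
{
    private const int SyncBlockFlag = unchecked((int)0x08000000);

    // Stands in for the per-object header word.
    private static int s_header;

    public static bool TryGetOwner(out int owner)
    {
        // Snapshot the header exactly once; every decision below uses
        // this snapshot. A second read could race with a concurrent
        // thin->fat upgrade and contradict the first.
        int header = Volatile.Read(ref s_header);

        if ((header & SyncBlockFlag) != 0)
        {
            // Fat lock: ownership lives in the sync block, not the header.
            owner = 0;
            return false;
        }

        // Thin lock: the owning thread id is encoded in the header
        // (zero meaning unowned in this sketch).
        owner = header;
        return owner != 0;
    }
}
```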
I've improved the uncontended thick-lock case to be only 1.5 ns slower, and I've exhausted all of my ideas.
At this point it's as fast as the uncontended thin-lock case (within measurement error). Can I get another review pass? |
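For context, the uncontended fast path being measured here boils down to a single interlocked operation on the lock word. A minimal sketch of that shape (illustrative names and layout, not the actual CoreCLR implementation):

```csharp
using System;
using System.Threading;

// Hypothetical sketch of an uncontended monitor fast path, assuming a
// lock word that holds the owning thread id (0 = unowned). The real
// CoreCLR lock word packs more state than this.
internal static class FastPathSketch
{
    private static int s_lockWord;

    public static bool TryEnterFast()
    {
        int tid = Environment.CurrentManagedThreadId;
        // One compare-exchange: this is roughly the cost measured by the
        // uncontended benchmarks. Failure means contention or a fat lock,
        // which would fall through to a slow path (omitted here).
        return Interlocked.CompareExchange(ref s_lockWord, tid, 0) == 0;
    }

    public static void ExitFast()
    {
        // Release with a volatile write so earlier writes in the critical
        // section are visible before the lock appears free.
        Volatile.Write(ref s_lockWord, 0);
    }
}
```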
@MihuBot benchmark System.Collections.Concurrent |
It does appear to be disabled for AOT already. Here's a core dump if that helps: https://1drv.ms/f/c/17a1c1fca6517cd3/EhG6thpcjJFAsd09HR6rxHwBEGOhuj-rKC_j_WKUg5P0rg |
Sounds like a bug that we are "porting" to coreclr now? |
I'll take a look at the failure. It's possible it's the same as the one in the PR checks here (which is new as of 2 days ago, so that's fun). Otherwise, I'd bet that we need to introduce a |
Looking at the linked issue, I think the NativeAOT failure was due to #67805 (linked from #66987). I also think that #73033 may have contributed to fixing it. I think the failures from this PR's run of the benchmark were due to bugs in the lock-free algorithms I wrote (same cause as the PR failures). |
@MihuBot benchmark System.Collections.Concurrent |
System.Collections.Concurrent.IsEmpty_String_
System.Collections.Concurrent.IsEmpty_Int32_
System.Collections.Concurrent.Count_String_
System.Collections.Concurrent.Count_Int32_
System.Collections.Concurrent.AddRemoveFromSameThreads_String_
System.Collections.Concurrent.AddRemoveFromSameThreads_Int32_
System.Collections.Concurrent.AddRemoveFromDifferentThreads_String_
System.Collections.Concurrent.AddRemoveFromDifferentThreads_Int32_
|
@MihuBot benchmark System.Threading |
System.Threading.Tests.Perf_Volatile
System.Threading.Tests.Perf_Timer
System.Threading.Tests.Perf_ThreadStatic
System.Threading.Tests.Perf_ThreadPool
System.Threading.Tests.Perf_Thread
System.Threading.Tests.Perf_SpinLock
System.Threading.Tests.Perf_SemaphoreSlim
System.Threading.Tests.Perf_Monitor
System.Threading.Tests.Perf_Lock
System.Threading.Tests.Perf_Interlocked
System.Threading.Tests.Perf_EventWaitHandle
System.Threading.Tests.Perf_CancellationToken
System.Threading.Tasks.Tests.Perf_AsyncMethods
System.Threading.Tasks.ValueTaskPerfTest
System.Threading.Channels.Tests.UnboundedChannelPerfTests
System.Threading.Channels.Tests.SpscUnboundedChannelPerfTests
System.Threading.Channels.Tests.BoundedChannelPerfTests
|
I ran these benchmarks locally and used the perf team's ResultsComparer tooling (with their recommended 2% threshold) and got the following results:
The results differ from MihuBot's runs; I'm not sure if this is due to different core counts, the fact that MihuBot runs on cloud VMs, or the natural instability of multithreading benchmarks. |
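For reference, a run of the ResultsComparer tool from the dotnet/performance repo typically looks like the following (the result paths are placeholders; check the tool's README for the current options):

```shell
# Compare two sets of BenchmarkDotNet full JSON results with the
# ResultsComparer tool from the dotnet/performance repository,
# flagging regressions/improvements beyond a 2% threshold.
# ./results/main and ./results/pr are placeholder directories.
dotnet run --project src/tools/ResultsComparer -- \
    --base ./results/main \
    --diff ./results/pr \
    --threshold 2%
```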
/azp run runtime-nativeaot-outerloop, runtime-coreclr outerloop, runtime-coreclr gcstress-extra, runtime-coreclr gcstress0x3-gcstress0xc |
Azure Pipelines successfully started running 4 pipeline(s). |
This lets us share more code with NativeAOT, removes a decent amount of complexity from the runtime, and fixes a blocking issue for #117788