[master] Bump msbuild to track xplat-master #20
Closed: radical wants to merge 52 commits into master from bump_msbuild_master_936d6d75-8ec0-4e49-9fae-e3a8293e9965
Conversation
…d stores. (mono#18296) Should fix mono#18221.
See msys2/MINGW-packages#5803. Without -lssp (or -fstack-protector), -D_FORTIFY_SOURCE=2 leads to linker errors.
…address families. (mono#18302) Fixes mono#16513.
Behavior with MONO_DEBUG=suspend-on-unhandled:
-> throw in an async task with a try/catch in the caller -> don't stop
-> throw in a new thread with a try/catch in the caller -> stop
-> throw in the main thread without any try/catch -> stop
Includes a new test for the fix of mono#17601. Fixes mono#16588 by adding a `while (1)` loop to stop program execution when an exception is thrown, no try/catch is found, and MONO_DEBUG=suspend-on-unhandled is set. Fixes mono#17601 Fixes mono#16588
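For orientation, here is a hedged C# sketch of the three scenarios described above; the method names are illustrative and the program is meant to be run under a debugger with MONO_DEBUG=suspend-on-unhandled set, one scenario at a time.

```c#
using System;
using System.Threading;
using System.Threading.Tasks;

class Scenarios
{
    // 1. Throw inside an async task, try/catch in the caller -> don't stop.
    static void AsyncTaskCaught()
    {
        try { Task.Run(() => throw new Exception("task")).Wait(); }
        catch (AggregateException) { Console.WriteLine("caught task exception"); }
    }

    // 2. Throw on a new thread; the caller's try/catch cannot see it -> stop.
    static void NewThreadUncaught()
    {
        try
        {
            var t = new Thread(() => throw new Exception("thread"));
            t.Start();
            t.Join();
        }
        catch (Exception) { /* never reached: the exception stays on the new thread */ }
    }

    // 3. Throw on the main thread without any try/catch -> stop.
    static void Main()
    {
        // In practice run one scenario at a time; an unhandled throw ends the process.
        AsyncTaskCaught();
        NewThreadUncaught();
        throw new Exception("main");
    }
}
```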
mono#18176)
* [reflection] Avoid creating object[] in GetCustomAttributesBase
* [reflection] Migrate Attribute type checking to CustomAttribute.cs
Improves Windows compatibility
…test (mono#18272)
* [loader] Rename assembly name check strictness flag
* [loader] Add predicate for assembly name matching on netcore
* [loader] Enable LoadInDefaultContext test
* Make mono_assembly_names_equal external, change flags in domain search
* Move ifdef inside mono_loader_get_strict_assembly_name_check
* Fix pedump
* No reason to make this external
* Feedback
…18313)
* [metadata] Size 0 Blob heap is ok when resolving assembly refs
  Sometimes ILasm can produce images with a Blob heap of size 0. In cases where we're loading assembly references from such an image, the hash (which is optional) can be at index = 0 and the Blob heap size is also 0. Also: add the hash value to the output of `monodis --assemblyref`
* [metadata] Add a separate assertion for the index == size == 0 case
  And a comment explaining how it is likely to be triggered. If the assembly is reasonable, the caller of mono_metadata_blob_heap should be updated to use mono_metadata_blob_heap_null_ok instead.
* [bcl] Allow Mono's ILASM to produce a size 0 Blob heap
  ECMA 335 II.24.2.4 says that the user string and blob heaps should have an entry at index 0 consisting of the single byte 0. However, .NET Framework (and .NET Core) ilasm will entirely omit the Blob heap (that is, create a blob heap entry of size 0) if it is not needed. This PR changes Mono's ILASM to emit the initial byte on demand, only if one of the MetaDataStream.Add() methods is called. Otherwise we will also emit a stream of size 0. This is needed to compile some test cases.
* [tests] Add regression test for loading assemblies with size 0 Blob heap
  Depends on ILASM that can emit a size 0 Blob heap
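This is not the runtime's own check, but a hedged sketch of how one could observe a size-0 Blob heap from managed code with System.Reflection.Metadata; the file path argument is a placeholder for an ilasm-produced assembly.

```c#
using System;
using System.IO;
using System.Reflection.Metadata;
using System.Reflection.PortableExecutable;

class BlobHeapCheck
{
    static void Main(string[] args)
    {
        // Path to an assembly produced by ilasm; placeholder argument.
        using var stream = File.OpenRead(args[0]);
        using var pe = new PEReader(stream);
        MetadataReader md = pe.GetMetadataReader();

        // ECMA-335 II.24.2.4 expects a single 0 byte at index 0, but some
        // assemblers omit the Blob heap entirely (size 0) when it is unused.
        int blobSize = md.GetHeapSize(HeapIndex.Blob);
        Console.WriteLine($"Blob heap size: {blobSize}");
    }
}
```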
Don't run `make install` in boehm submodule
…d when they are actually used.
…s out of gsharedvt methods. (mono#18334)
* [jit] Avoid passing a vtable argument to DIM methods when making calls out of gsharedvt methods. Fixes mono#18276.
* Fix compilation with mcs.
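A hedged sketch of the code shape involved: a default interface method (DIM) invoked from a generic method of the kind gsharedvt compiles once for many instantiations. The `IGreeter`/`Greeter` types are made up for illustration.

```c#
using System;

interface IGreeter
{
    // Default interface method (DIM): the body lives on the interface.
    string Greet() => "hello";
}

class Greeter : IGreeter { }

class Demo
{
    // Under gsharedvt this generic method is shared across instantiations;
    // the fix concerns the call it makes into the DIM above.
    static string CallGreet<T>(T value) where T : IGreeter => value.Greet();

    static void Main() => Console.WriteLine(CallGreet(new Greeter()));
}
```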
* [System.Private.CoreLib] Cleanup intrinsic tracking
1. Removed icalls where an intrinsic version of the method exists
2. Add the [Intrinsic] attribute to methods which have a runtime intrinsic,
to make them easier to track and to tell the linker not to analyze them
3. Use recursive syntax for any intrinsic which can be called via
reflection to correctly apply the intrinsic when invoked.
```c#
public static bool IsSupported { get => IsSupported; }
```
When one calls X86.IsSupported, the call gets replaced by an intrinsic. When one calls it via Invoke, the body of IsSupported
gets replaced by intrinsics. It's just a hack to get Invoke support for free.
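To see why the recursive body matters for reflection, here is a hedged sketch using Sse2 as one concrete hardware-intrinsics class (the commit's "X86.IsSupported" refers to this family of types); the two call paths correspond to the two replacement points described above.

```c#
using System;
using System.Reflection;
using System.Runtime.Intrinsics.X86;

class IntrinsicInvokeDemo
{
    static void Main()
    {
        // Direct call site: the call itself can be folded by the intrinsic.
        Console.WriteLine(Sse2.IsSupported);

        // Reflection path: the property body actually runs, which is why the
        // recursive-body pattern is needed so the intrinsic still applies.
        PropertyInfo prop = typeof(Sse2).GetProperty(nameof(Sse2.IsSupported));
        Console.WriteLine(prop.GetValue(null));
    }
}
```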
* [interp] implement System.Type::op_Equality
* Undo changes which are not yet supported by interpreter
* [interp] Implement System.Runtime.CompilerServices.RuntimeHelpers::OffsetToStringData
Co-authored-by: Bernhard Urban-Forster <[email protected]>
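For reference, a hedged sketch exercising the two operations the interpreter now handles, written as plain C# that any runtime can execute:

```c#
using System;
using System.Runtime.CompilerServices;

class InterpOps
{
    static void Main()
    {
        // System.Type::op_Equality: the == operator on Type instances.
        Type a = typeof(string);
        Type b = "x".GetType();
        Console.WriteLine(a == b);

        // RuntimeHelpers.OffsetToStringData: byte offset from a string
        // reference to its first character (used by fixed-string pinning).
        Console.WriteLine(RuntimeHelpers.OffsetToStringData);
    }
}
```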
….Pkcs facade (mono#18325)
* Add SignedCms to the typeforwards in the System.Security.Cryptography.Pkcs facade
* Added additional missing types from System.Security.Cryptography.Pkcs.
* It would be nice if the files I added had namespaces ;-)
* Bump API snapshot submodule
* [csproj] Update project files
Co-authored-by: monojenkins <[email protected]> Fixes mono#18323
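A hedged usage sketch of the forwarded SignedCms type; the `Sign` helper and its certificate parameter are illustrative, and the System.Security.Cryptography.Pkcs assembly (here via the facade) is assumed to be referenced.

```c#
using System.Security.Cryptography.Pkcs;
using System.Security.Cryptography.X509Certificates;
using System.Text;

class PkcsDemo
{
    static byte[] Sign(X509Certificate2 signerCert, string message)
    {
        // SignedCms now resolves through the System.Security.Cryptography.Pkcs facade.
        var content = new ContentInfo(Encoding.UTF8.GetBytes(message));
        var cms = new SignedCms(content);
        cms.ComputeSignature(new CmsSigner(signerCert));
        return cms.Encode();   // DER-encoded CMS/PKCS#7 blob
    }
}
```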
…#16637)
* Set thread pool thread name just based on string pointer check. When the generation-based solution went in, there were not yet constant thread names, so a pointer-based approach could fail due to reuse. We can do slightly simpler now.
* Remove the generation field, which is no longer needed. Remove the duplicated knowledge of how to set a constant and put it behind a funny-sounding but perhaps reasonable flag. Revise corlib version.
* [sgen] Include also derived classes as bridges when using the `MONO_GC_DEBUG=bridge=` debug option
* [sgen] Fix xref computation with tarjan bridge

  Between C# and java (on android) there are objects that live in both worlds: a C# object with a corresponding java object. The relationship between them is strong, meaning if the C# object is alive then the java object must stay alive, and vice-versa. We keep java bridge objects always alive through a GCHandle (on the java gc). When doing a C# collection we select all bridge objects that appear to be dead on the C# side. These objects are candidates for collection, assuming the java side has nothing against it. Before triggering a collection on the java side (following the mono gc) we switch all the strong gchandles to the java objects to be weak (for those objects that are candidates to be collected) and we recreate the reference graph from the C# side to the java side (if C# Bridge1 can reference C# Bridge2 then, on the java side, we add a reference from Java Bridge1 to Java Bridge2; this is done by adding to an array of references inside Java Bridge1).

  In order to minimize the amount of work that needs to be done on the java side, we compute the minimal amount of references that need to be added, by computing the strongly connected components of the object graph. An optimized way to construct the SCCs and the xrefs is by using the tarjan algorithm (https://en.wikipedia.org/wiki/Tarjan%27s_strongly_connected_components_algorithm). Our algorithm is non-recursive (scan_stack emulates the recursive order of traversal in a dfs algorithm, while loop_stack is the stack used by the algorithm). A color represents an SCC. We have some optimizations in place where we might merge colors if they don't contain bridges, since the client only cares about SCCs containing bridge objects and the links between them. color_merge_array is used to keep track of all neighbors of a node until we are creating the scc for that node. It is populated when scanning all the refs inside an object (compute_low). All the colors in the color_merge_array will be cross references with that scc.

  Before this commit we were only clearing the color_merge_array when creating an SCC. This is problematic because we could end up with xrefs inside of an SCC that belong to another SCC. Consider the simple graph of nodes 0,1,2,3 where 0 <=> 2, 0->1, 2->3. Assume we start scanning with node 0. When creating the SCC for node 3 the followed path is 0 -> 2 -> 3, while the loop stack will contain (0,1,2,3). After creating the SCC for node 3, we will finish scanning node 2, which would detect the xref to Bridge3, which would have been added to the color_merge_array. Because node 2 is not the root of the SCC it belongs to (its lowlink points towards node 0, which has a lower index), we are not creating an SCC with it, and the link to node 3 remains in color_merge_array. Because the next node from the scan_stack is node 1, which is also the root of the SCC that it belongs to, we will create an SCC for it and wrongly add the node 3 reference from color_merge_array to it.

  In order to fix this issue, we will always clear the color_merge_array once we have finished scanning the xrefs for a node. If the node in question is not the root of an SCC, then we will remember them as xrefs pointing out from this object. When we finally reach node 0 (which will be the root of the SCC containing nodes 0 and 2), we will then know that all xrefs for this color are the union of the xrefs of all objects belonging to this color (which represents the objects that we are popping from the loop_stack until we encounter the root node). Even though this change adds required bookkeeping for xrefs, I didn't notice any change in performance on the bridge tests that we have in mono/tests.
* [sgen] Some logging improvements in tarjan bridge
* [sgen] Disable optimization when comparing bridge outputs

  When this optimization is enabled, the tarjan bridge will create more SCCs in order to reduce the amount of xrefs in the graph. This would render the `bridge-compare-to` debug flag unusable with tarjan bridge.
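For orientation, below is a textbook recursive Tarjan SCC sketch in C#. The runtime's bridge code is non-recursive (scan_stack/loop_stack) and layers color merging and xref tracking on top, so this only illustrates the index/lowlink/stack invariants the message refers to; the class and member names are mine, and the example graph is the 0 <=> 2, 0->1, 2->3 graph from the commit message.

```c#
using System;
using System.Collections.Generic;

class TarjanScc
{
    private readonly List<int>[] adj;      // adjacency list of the object graph
    private readonly int[] index;          // discovery order, -1 = unvisited
    private readonly int[] lowlink;        // smallest discovery index reachable
    private readonly bool[] onStack;
    private readonly Stack<int> stack = new Stack<int>();
    private int nextIndex;
    public readonly List<List<int>> Sccs = new List<List<int>>();

    public TarjanScc(List<int>[] adjacency)
    {
        adj = adjacency;
        index = new int[adj.Length];
        lowlink = new int[adj.Length];
        onStack = new bool[adj.Length];
        for (int i = 0; i < adj.Length; i++) index[i] = -1;
        for (int v = 0; v < adj.Length; v++)
            if (index[v] == -1) StrongConnect(v);
    }

    private void StrongConnect(int v)
    {
        index[v] = lowlink[v] = nextIndex++;
        stack.Push(v);
        onStack[v] = true;

        foreach (int w in adj[v])
        {
            if (index[w] == -1)
            {
                StrongConnect(w);
                lowlink[v] = Math.Min(lowlink[v], lowlink[w]);
            }
            else if (onStack[w])
            {
                lowlink[v] = Math.Min(lowlink[v], index[w]);
            }
        }

        // v is the root of an SCC: pop the stack down to v.
        if (lowlink[v] == index[v])
        {
            var scc = new List<int>();
            int w;
            do
            {
                w = stack.Pop();
                onStack[w] = false;
                scc.Add(w);
            } while (w != v);
            Sccs.Add(scc);
        }
    }

    static void Main()
    {
        // The example graph from the commit message: 0 <=> 2, 0 -> 1, 2 -> 3.
        var graph = new List<int>[]
        {
            new List<int> { 2, 1 },  // node 0
            new List<int>(),         // node 1
            new List<int> { 0, 3 },  // node 2
            new List<int>(),         // node 3
        };
        foreach (var scc in new TarjanScc(graph).Sccs)
            Console.WriteLine(string.Join(",", scc));  // {3}, {1}, {0,2}
    }
}
```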
This enables parallel build support with msbuild!
* Fix byte/string incompatibility in Python 3
The base image appears to have perl5.26, which conflicts with perl5-5.30; as a result the /usr/local/bin/perl binary is not placed by perl5-5.30 and builds fail. The `perl5.26` appears to be part of the base image and has no packages actually dependent on it.
…ons (mono#17644) [interp] Run regressions with all combinations of available optimizations, like we do for the JIT.
…em_RuntimeFieldHandle_SetValueDirect (). (mono#18378) Fixes mono#18364.
It didn't work otherwise.
* Update dependencies from https://github.com/dotnet/arcade build 20191215.1
  - Microsoft.DotNet.Arcade.Sdk - 5.0.0-beta.19615.1
  - Microsoft.DotNet.Helix.Sdk - 5.0.0-beta.19615.1
* Update dependencies from https://github.com/dotnet/arcade build 20191222.1
  - Microsoft.DotNet.Arcade.Sdk - 5.0.0-beta.19622.1
  - Microsoft.DotNet.Helix.Sdk - 5.0.0-beta.19622.1
* Update dependencies from https://github.com/dotnet/arcade build 20191229.1
  - Microsoft.DotNet.Arcade.Sdk - 5.0.0-beta.19629.1
  - Microsoft.DotNet.Helix.Sdk - 5.0.0-beta.19629.1
* Update dependencies from https://github.com/dotnet/arcade build 20200105.1
  - Microsoft.DotNet.Arcade.Sdk - 5.0.0-beta.20055.1
  - Microsoft.DotNet.Helix.Sdk - 5.0.0-beta.20055.1
This was prompted by mono#18339, which seems to be failing to locate a native library and then hitting an assertion. We should propagate the error instead, so that if others run into something similar they can see what's actually going wrong instead of just getting a runtime crash.
Updated test to follow in dotnet/runtime
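A hedged illustration of the user-visible goal: a failed native library lookup should surface as a catchable managed error rather than a runtime assertion. The library name and entry point below are deliberately bogus placeholders.

```c#
using System;
using System.Runtime.InteropServices;

class MissingNativeLib
{
    // Deliberately non-existent native library, to exercise the lookup-failure path.
    [DllImport("libdoesnotexist")]
    private static extern int bogus_entry_point();

    static void Main()
    {
        try
        {
            bogus_entry_point();
        }
        catch (DllNotFoundException e)
        {
            // The desired behavior: a propagated, inspectable error.
            Console.WriteLine($"native library missing: {e.Message}");
        }
    }
}
```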
radical pushed a commit that referenced this pull request on Jan 9, 2020
[2019-12] [debugger] skip suspend for unattached threads When the debugger tries to suspend all the VM threads, it can happen with cooperative-suspend that it tries to suspend a thread that has previously been attached, but then did a "light" detach (that only unsets the domain). With the domain set to `NULL`, looking up a JitInfo for a given `ip` will result into a crash, but it isn't even necessarily needed for the purpose of suspending a thread. More details: Consider the following: ``` (lldb) c error: Process is running. Use 'process interrupt' to pause execution. Process 12832 stopped * thread #9, name = 'tid_a31f', queue = 'NSOperationQueue 0x8069b670 (QOS: UTILITY)', stop reason = breakpoint 1.1 frame #0: 0x00222870 WatchWCSessionAppWatchOSExtension`debugger_interrupt_critical(info=0x81365400, user_data=0xb04dc718) at debugger-agent.c:2716:6 2713 MonoDomain *domain = (MonoDomain *) mono_thread_info_get_suspend_state (info)->unwind_data [MONO_UNWIND_DATA_DOMAIN]; 2714 if (!domain) { 2715 /* not attached */ -> 2716 ji = NULL; 2717 } else { 2718 ji = mono_jit_info_table_find_internal ( domain, MONO_CONTEXT_GET_IP (&mono_thread_info_get_suspend_state (info)->ctx), TRUE, TRUE); 2719 } Target 0: (WatchWCSessionAppWatchOSExtension) stopped. (lldb) bt * thread #9, name = 'tid_a31f', queue = 'NSOperationQueue 0x8069b670 (QOS: UTILITY)', stop reason = breakpoint 1.1 * frame #0: 0x00222870 WatchWCSessionAppWatchOSExtension`debugger_interrupt_critical(info=0x81365400, user_data=0xb04dc718) at debugger-agent.c:2716:6 frame #1: 0x004e177b WatchWCSessionAppWatchOSExtension`mono_thread_info_safe_suspend_and_run(id=0xb0767000, interrupt_kernel=0, callback=(WatchWCSessionAppWatchOSExtension`debugger_interrupt_critical at debugger-agent.c:2708), user_data=0xb04dc718) at mono-threads.c:1358:19 frame #2: 0x00222799 WatchWCSessionAppWatchOSExtension`notify_thread(key=0x03994508, value=0x81378c00, user_data=0x00000000) at debugger-agent.c:2747:2 frame #3: 0x00355bd8 WatchWCSessionAppWatchOSExtension`mono_g_hash_table_foreach(hash=0x8007b4e0, func=(WatchWCSessionAppWatchOSExtension`notify_thread at debugger-agent.c:2733), user_data=0x00000000) at mono-hash.c:310:4 frame #4: 0x0021e955 WatchWCSessionAppWatchOSExtension`suspend_vm at debugger-agent.c:2844:3 frame #5: 0x00225ff0 WatchWCSessionAppWatchOSExtension`process_event(event=EVENT_KIND_THREAD_START, arg=0x039945d0, il_offset=0, ctx=0x00000000, events=0x805b28e0, suspend_policy=2) at debugger-agent.c:4012:3 frame #6: 0x00227c7c WatchWCSessionAppWatchOSExtension`process_profiler_event(event=EVENT_KIND_THREAD_START, arg=0x039945d0) at debugger-agent.c:4072:2 frame #7: 0x0021b174 WatchWCSessionAppWatchOSExtension`thread_startup(prof=0x00000000, tid=2957889536) at debugger-agent.c:4149:2 frame #8: 0x0037912d WatchWCSessionAppWatchOSExtension`mono_profiler_raise_thread_started(tid=2957889536) at profiler-events.h:103:1 frame #9: 0x003d53da WatchWCSessionAppWatchOSExtension`fire_attach_profiler_events(tid=0xb04dd000) at threads.c:1120:2 frame #10: 0x003d4d83 WatchWCSessionAppWatchOSExtension`mono_thread_attach(domain=0x801750a0) at threads.c:1547:2 frame #11: 0x003df1a1 WatchWCSessionAppWatchOSExtension`mono_threads_attach_coop_internal(domain=0x801750a0, cookie=0xb04dcc0c, stackdata=0xb04dcba8) at threads.c:6008:3 frame #12: 0x003df287 WatchWCSessionAppWatchOSExtension`mono_threads_attach_coop(domain=0x00000000, dummy=0xb04dcc0c) at threads.c:6045:9 frame #13: 0x005034b8 WatchWCSessionAppWatchOSExtension`::xamarin_switch_gchandle(self=0x80762c20, 
to_weak=false) at runtime.m:1805:2 frame #14: 0x005065c1 WatchWCSessionAppWatchOSExtension`::xamarin_retain_trampoline(self=0x80762c20, sel="retain") at trampolines.m:693:2 frame #15: 0x657ea520 libobjc.A.dylib`objc_retain + 64 frame #16: 0x4b4d9caa WatchConnectivity`__66-[WCSession onqueue_handleDictionaryMessageRequest:withPairingID:]_block_invoke + 279 frame #17: 0x453c7df7 Foundation`__NSBLOCKOPERATION_IS_CALLING_OUT_TO_A_BLOCK__ + 12 frame #18: 0x453c7cf4 Foundation`-[NSBlockOperation main] + 88 frame #19: 0x453cacee Foundation`__NSOPERATION_IS_INVOKING_MAIN__ + 27 frame #20: 0x453c6ebd Foundation`-[NSOperation start] + 835 frame #21: 0x453cb606 Foundation`__NSOPERATIONQUEUE_IS_STARTING_AN_OPERATION__ + 27 frame #22: 0x453cb12e Foundation`__NSOQSchedule_f + 194 frame #23: 0x453cb26e Foundation`____addOperations_block_invoke_4 + 20 frame #24: 0x65de007b libdispatch.dylib`_dispatch_call_block_and_release + 15 frame #25: 0x65de126f libdispatch.dylib`_dispatch_client_callout + 14 frame #26: 0x65de3788 libdispatch.dylib`_dispatch_continuation_pop + 421 frame #27: 0x65de2ee3 libdispatch.dylib`_dispatch_async_redirect_invoke + 818 frame #28: 0x65df087d libdispatch.dylib`_dispatch_root_queue_drain + 354 frame #29: 0x65df0ff3 libdispatch.dylib`_dispatch_worker_thread2 + 109 frame #30: 0x66024fa0 libsystem_pthread.dylib`_pthread_wqthread + 208 frame #31: 0x66024e44 libsystem_pthread.dylib`start_wqthread + 36 ``` Going further, `info` is about this thread: ``` (lldb) p/x *(int *)(((char *) info->node.key) + 0xa0) (int) $2 = 0x01243f93 (lldb) thread list Process 12832 stopped thread #1: tid = 0x1243ee1, 0x65f7e396 libsystem_kernel.dylib`mach_msg_trap + 10, name = 'tid_303', queue = 'com.apple.main-thread' thread #2: tid = 0x1243ee6, 0x65f816e2 libsystem_kernel.dylib`__recvfrom + 10 thread #3: tid = 0x1243ee7, 0x65f81aea libsystem_kernel.dylib`__psynch_cvwait + 10, name = 'SGen worker' thread #4: tid = 0x1243ee9, 0x65f7e3d2 libsystem_kernel.dylib`semaphore_wait_trap + 10, name = 'Finalizer' thread #5: tid = 0x1243eea, 0x65f816e2 libsystem_kernel.dylib`__recvfrom + 10, name = 'Debugger agent' thread #6: tid = 0x1243f1d, 0x65f7e396 libsystem_kernel.dylib`mach_msg_trap + 10, name = 'com.apple.uikit.eventfetch-thread' thread #8: tid = 0x1243f93, 0x65f7e396 libsystem_kernel.dylib`mach_msg_trap + 10, name = 'tid_6d0f', queue = 'NSOperationQueue 0x8069b300 (QOS: UTILITY)' * thread #9: tid = 0x12443a9, 0x00222870 WatchWCSessionAppWatchOSExtension`debugger_interrupt_critical(info=0x81365400, user_data=0xb04dc718) at debugger-agent.c:2716:6, name = 'tid_a31f', queue = 'NSOperationQueue 0x8069b670 (QOS: UTILITY)', stop reason = breakpoint 1.1 thread #10: tid = 0x1244581, 0x65f7fd32 libsystem_kernel.dylib`__workq_kernreturn + 10 (lldb) thread select 8 * thread #8, name = 'tid_6d0f', queue = 'NSOperationQueue 0x8069b300 (QOS: UTILITY)' frame #0: 0x65f7e396 libsystem_kernel.dylib`mach_msg_trap + 10 libsystem_kernel.dylib`mach_msg_trap: -> 0x65f7e396 <+10>: retl 0x65f7e397 <+11>: nop libsystem_kernel.dylib`mach_msg_overwrite_trap: 0x65f7e398 <+0>: movl $0xffffffe0, %eax ; imm = 0xFFFFFFE0 0x65f7e39d <+5>: calll 0x65f85f44 ; _sysenter_trap (lldb) bt * thread #8, name = 'tid_6d0f', queue = 'NSOperationQueue 0x8069b300 (QOS: UTILITY)' * frame #0: 0x65f7e396 libsystem_kernel.dylib`mach_msg_trap + 10 frame #1: 0x65f7e8ff libsystem_kernel.dylib`mach_msg + 47 frame #2: 0x66079679 libxpc.dylib`_xpc_send_serializer + 104 frame #3: 0x660794da libxpc.dylib`_xpc_pipe_simpleroutine + 80 frame #4: 0x66079852 
libxpc.dylib`xpc_pipe_simpleroutine + 43 frame #5: 0x66043a8f libsystem_trace.dylib`___os_activity_stream_reflect_block_invoke_2 + 30 frame #6: 0x65de126f libdispatch.dylib`_dispatch_client_callout + 14 frame #7: 0x65de3d71 libdispatch.dylib`_dispatch_block_invoke_direct + 257 frame #8: 0x65de3c62 libdispatch.dylib`dispatch_block_perform + 112 frame #9: 0x6604349a libsystem_trace.dylib`_os_activity_stream_reflect + 725 frame #10: 0x6604ef19 libsystem_trace.dylib`_os_log_impl_stream + 468 frame #11: 0x6604e44d libsystem_trace.dylib`_os_log_impl_flatten_and_send + 6410 frame #12: 0x6604cb3b libsystem_trace.dylib`_os_log + 137 frame #13: 0x6604f4aa libsystem_trace.dylib`_os_log_impl + 31 frame #14: 0x4b4eb4e9 WatchConnectivity`WCSerializePayloadDictionary + 393 frame #15: 0x4b4d7c4d WatchConnectivity`-[WCSession onqueue_sendResponseDictionary:identifier:] + 195 frame #16: 0x4b4da435 WatchConnectivity`__66-[WCSession onqueue_handleDictionaryMessageRequest:withPairingID:]_block_invoke.411 + 35 frame #17: 0x453c7df7 Foundation`__NSBLOCKOPERATION_IS_CALLING_OUT_TO_A_BLOCK__ + 12 frame #18: 0x453c7cf4 Foundation`-[NSBlockOperation main] + 88 frame #19: 0x453cacee Foundation`__NSOPERATION_IS_INVOKING_MAIN__ + 27 frame #20: 0x453c6ebd Foundation`-[NSOperation start] + 835 frame #21: 0x453cb606 Foundation`__NSOPERATIONQUEUE_IS_STARTING_AN_OPERATION__ + 27 frame #22: 0x453cb12e Foundation`__NSOQSchedule_f + 194 frame #23: 0x453cb067 Foundation`____addOperations_block_invoke_2 + 20 frame #24: 0x65dedf49 libdispatch.dylib`_dispatch_block_async_invoke2 + 77 frame #25: 0x65de4461 libdispatch.dylib`_dispatch_block_async_invoke_and_release + 17 frame #26: 0x65de126f libdispatch.dylib`_dispatch_client_callout + 14 frame #27: 0x65de3788 libdispatch.dylib`_dispatch_continuation_pop + 421 frame #28: 0x65de2ee3 libdispatch.dylib`_dispatch_async_redirect_invoke + 818 frame #29: 0x65df087d libdispatch.dylib`_dispatch_root_queue_drain + 354 frame #30: 0x65df0ff3 libdispatch.dylib`_dispatch_worker_thread2 + 109 frame #31: 0x66024fa0 libsystem_pthread.dylib`_pthread_wqthread + 208 frame mono#32: 0x66024e44 libsystem_pthread.dylib`start_wqthread + 36 ``` which is a thread in a "light" detach state (aka. coop detach), where we only unset the domain: https://github.com/mono/mono/blob/4cefdcb7ce2d939ee78fb45d1b4913eb3bc064fd/mono/metadata/threads.c#L6084-L6111 Fixes mono#17926 <!-- Thank you for your Pull Request! If you are new to contributing to Mono, please try to do your best at conforming to our coding guidelines http://www.mono-project.com/community/contributing/coding-guidelines/ but don't worry if you get something wrong. One of the project members will help you to get things landed. Does your pull request fix any of the existing issues? Please use the following format: Fixes #issue-number --> Backport of mono#18105. /cc @lewurm
radical pushed a commit that referenced this pull request on Jan 10, 2020
[2019-06] [arm64] Correct exception filter stack frame size Fixes: mono#14170 Fixes: dotnet/android#3112 The `call_filter()` function generated by `mono_arch_get_call_filter()` was overwriting a part of the previous stack frame because it was not creating a large enough new stack frame for itself. This had been working by luck before the switch to building the runtime with Android NDK r19. I used a local build of [xamarin/xamarin-android/d16-1@87a80b][0] to verify that this change resolved both of the issues linked above. I also confirmed that I was able to reintroduce the issues in my local environment by removing the change and rebuilding. After this change, the generated `call_filter()` function now starts by reserving sufficient space on the stack to hold a 64-bit value at the `ctx_offset` location. The 64-bit `ctx` value is saved to the `ctx_offset` location (offset 344 in the new stack frame) shortly afterwards: stp x29, x30, [sp,#-352]! mov x29, sp str x0, [x29,mono#344] As expected, the invocation of `call_filter()` now no longer modifies the top of the previous stack frame. Top of the stack before `call_filter()`: (gdb) x/8x $sp 0x7fcd2774a0: 0x7144d250 0x00000000 0x2724b700 0x00000000 0x7fcd2774b0: 0x1ee19ff0 0x00000071 0x0cc00300 0x00000071 Top of the stack after `call_filter()` (unchanged): (gdb) x/8x $sp 0x7fcd2774a0: 0x7144d250 0x00000000 0x2724b700 0x00000000 0x7fcd2774b0: 0x1ee19ff0 0x00000071 0x0cc00300 0x00000071 <details> <summary>Additional background information</summary> ### The original lucky, "good" behavior for [mono/mono@725ba2a built with Android NDK r14][1] 1. The `resume` parameter for `mono_handle_exception_internal()` is held in register `w23`. 2. That register is saved into the current stack frame at offset 20 by a `str` instruction: 0x00000000000bc1bc <+3012>: str w23, [sp,#20] 3. `handle_exception_first_pass()` invokes `call_filter()` via a `blr` instruction: 2279 filtered = call_filter (ctx, ei->data.filter); 0x00000000000bc60c <+4116>: add x9, x22, x24, lsl #6 0x00000000000bc610 <+4120>: ldr x8, [x8,mono#3120] 0x00000000000bc614 <+4124>: ldr x1, [x9,mono#112] 0x00000000000bc618 <+4128>: add x0, sp, #0x110 0x00000000000bc61c <+4132>: blr x8 Before the `blr` instruction, the top of the stack looks like this: (gdb) x/8x $sp 0x7fcd2774a0: 0x7144d250 0x00000000 0x0a8b31d8 0x00000071 0x7fcd2774b0: 0x00000000 0x00000000 0x1ef51980 0x00000071 4. `call_filter()` runs. This function is generated by `mono_arch_get_call_filter()` and starts with: stp x29, x30, [sp,#-336]! mov x29, sp str x0, [x29,mono#344] Note in particular how the first line subtracts 336 from `sp` to start a new stack frame, but the third line writes to a position that is 344 bytes beyond that, back within the *previous* stack frame. 5. After the invocations of `call_filter()` and `handle_exception_first_pass()`, `w23` is restored from the stack: 0x00000000000bc820 <+4648>: ldr w23, [sp,#20] At this step, the top of the stack looks like this: (gdb) x/8x $sp 0x7fcd2774a0: 0x7144d250 0x00000000 0xcd2775b0 0x0000007f 0x7fcd2774b0: 0x00000000 0x00000000 0x1ef51980 0x00000071 Notice how `call_filter()` has overwritten bytes 8 through 15 of the stack frame. In this original lucky scenario, this does not affect the value restored into register `w23` because that value starts at byte 20. 6. 
`mono_handle_exception_internal()` tests the value of `w23` to decide how to set the `ji` local variable: 2574 if (resume) { 0x00000000000bb960 <+872>: cbz w23, 0xbb9c4 <mono_handle_exception_internal+972> Since `w23` is `0`, `mono_handle_exception_internal()` correctly continues into the `else` branch. ### The bad behavior for [mono/mono@725ba2a built with Android NDK r19][2] works slightly differently 1. As before, the local `resume` parameter starts in register `w23`. 2. This time, the register is saved into the stack frame at offset 12 (instead of 20): 0x00000000000bed7c <+3200>: str w23, [sp,#12] 3. As before, `handle_exception_first_pass()` invokes `call_filter()` via a `blr` instruction. At this step, the top of the stack looks like this: (gdb) x/8x $sp 0x7fcd2774a0: 0x7144d250 0x00000000 0x2724b700 0x00000000 0x7fcd2774b0: 0x1ee19ff0 0x00000071 0x0cc00300 0x00000071 4. `call_filter()` runs as before. And the first few instructions of that function are the same as before: stp x29, x30, [sp,#-336]! mov x29, sp str x0, [x29,mono#344] 5. `w23` is again restored from the stack, but this time from offset 12: 0x00000000000bf2c0 <+4548>: ldr w23, [sp,#12] The problem is that `call_filter()` has again overwritten bytes 8 through 15 of the stack frame: (gdb) x/8x $sp 0x7fcd2774a0: 0x7144d250 0x00000000 0xcd2775b0 0x0000007f 0x7fcd2774b0: 0x1ee19ff0 0x00000071 0x0cc00300 0x00000071 So after the `ldr` instruction, `w23` has an incorrect value of `0x7f` rather than the correct value `0`. 6. As before, `mono_handle_exception_internal()` tests the value of `w23` to decide how to set the `ji` local variable: 2574 if (resume) { 0x00000000000be430 <+820>: cbz w23, 0xbe834 <mono_handle_exception_internal+1848> But this time because `w23` is *not* zero, execution continues *incorrectly* into the `if` branch rather than the `else` branch. The result is that `ji` is set to `0` rather than a valid `(MonoJitInfo *)` value. This incorrect value for `ji` leads to a null pointer dereference, as seen in the bug reports. [0]: https://github.com/xamarin/xamarin-android/tree/xamarin-android-9.3.0.22 [1]: https://jenkins.xamarin.com/view/Xamarin.Android/job/xamarin-android-freestyle/1432/Azure/ [2]: https://jenkins.xamarin.com/view/Xamarin.Android/job/xamarin-android-freestyle/1436/Azure/ </details> Backport of mono#14757. /cc @lambdageek @brendanzagaeski
radical pushed a commit that referenced this pull request on Jan 10, 2020
The thread_stopped profiler event can be raised by the thread_info_key_dtor tls key destructor when the thread no longer has a domain set. In that case, don't call process_profiler_event, since it cannot handle a thread with null TLS values. Addresses dotnet/android#2920 with the following stack trace:
```
* thread #20, name = 'Filter', stop reason = signal SIGSEGV: invalid address (fault address: 0xbc)
* frame #0: libmonosgen-2.0.so`mono_class_vtable_checked(domain=0x0000000000000000, klass=0x0000007200230648, error=0x00000071e92f9178) at object.c:1890
frame #1: libmonosgen-2.0.so`get_current_thread_ptr_for_domain(domain=0x0000000000000000, thread=0x00000071ebfec508) at threads.c:595
frame #2: libmonosgen-2.0.so`mono_thread_current at threads.c:1939
frame #3: libmonosgen-2.0.so`process_event(event=<unavailable>, arg=<unavailable>, il_offset=<unavailable>, ctx=<unavailable>, events=<unavailable>, suspend_policy=<unavailable>) at debugger-agent.c:3715
frame #4: libmonosgen-2.0.so`thread_end [inlined] process_profiler_event(event=EVENT_KIND_THREAD_DEATH, arg=0x00000071ebfec508) at debugger-agent.c:3875
frame #5: libmonosgen-2.0.so`thread_end(prof=<unavailable>, tid=<unavailable>) at debugger-agent.c:3991
frame #6: libmonosgen-2.0.so`mono_profiler_raise_thread_stopped(tid=<unavailable>) at profiler-events.h:105
frame #7: libmonosgen-2.0.so`mono_thread_detach_internal(thread=<unavailable>) at threads.c:979
frame #8: libmonosgen-2.0.so`thread_detach(info=0x00000071e949a000) at threads.c:3215
frame #9: libmonosgen-2.0.so`unregister_thread(arg=<unavailable>) at mono-threads.c:544
frame #10: libmonosgen-2.0.so`thread_info_key_dtor(arg=0x00000071e949a000) at mono-threads.c:774
frame #11: 0x00000072899c58e8 libc.so`pthread_key_clean_all() + 124
frame #12: 0x00000072899c5374 libc.so`pthread_exit + 76
frame #13: 0x00000072899c5264 libc.so`__pthread_start(void*) + 44
frame #14: 0x000000728996617c libc.so`__start_thread + 72
```
radical pushed a commit that referenced this pull request on Jan 24, 2020
…ono#18533) `jit_tls->interp_context` gets initialized lazily, that is, upon the first interpreter execution on a specific thread (e.g. via interp_runtime_invoke). However, with mixed mode the execution can purely happen in AOT code upon the first interaction with the managed debugger. Stack trace: ``` thread #1, name = 'tid_407', queue = 'com.apple.main-thread' frame #0: 0x0000000190aedc94 libsystem_kernel.dylib`__psynch_cvwait + 8 frame #1: 0x0000000190a0f094 libsystem_pthread.dylib`_pthread_cond_wait$VARIANT$armv81 + 672 frame #2: 0x000000010431318c reloadcontext.iOS`mono_os_cond_wait(cond=0x0000000104b9ba78, mutex=0x0000000104b9ba30) at mono-os-mutex.h:219:8 frame #3: 0x0000000104312a68 reloadcontext.iOS`mono_coop_cond_wait(cond=0x0000000104b9ba78, mutex=0x0000000104b9ba30) at mono-coop-mutex.h:91:2 frame #4: 0x0000000104312858 reloadcontext.iOS`suspend_current at debugger-agent.c:3021:4 frame #5: 0x000000010431be18 reloadcontext.iOS`process_event(event=EVENT_KIND_BREAKPOINT, arg=0x0000000145d09ae8, il_offset=0, ctx=0x0000000149015c20, events=0x0000000000000000, suspend_policy=2) at debugger-agent.c:4058:3 frame #6: 0x0000000104310cf4 reloadcontext.iOS`process_breakpoint_events(_evts=0x000000028351a680, method=0x0000000145d09ae8, ctx=0x0000000149015c20, il_offset=0) at debugger-agent.c:4722:3 frame #7: 0x000000010432f1c8 reloadcontext.iOS`mono_de_process_breakpoint(void_tls=0x0000000149014e00, from_signal=0) at debugger-engine.c:1141:2 frame #8: 0x000000010430f238 reloadcontext.iOS`debugger_agent_breakpoint_from_context(ctx=0x000000016f656790) at debugger-agent.c:4938:2 frame #9: 0x00000001011b73a4 reloadcontext.iOS`sdb_breakpoint_trampoline + 148 frame #10: 0x00000001008511b4 reloadcontext.iOS`reloadcontext_iOS_Application_Main_string__(args=0x000000010703a030) at Main.cs:14 frame #11: 0x00000001010f9730 reloadcontext.iOS`wrapper_runtime_invoke_object_runtime_invoke_dynamic_intptr_intptr_intptr_intptr + 272 frame #12: 0x00000001042fd8b8 reloadcontext.iOS`mono_jit_runtime_invoke(method=0x0000000145d09ae8, obj=0x0000000000000000, params=0x000000016f656f20, exc=0x0000000000000000, error=0x000000016f656ff8) at mini-runtime.c:3162:3 frame #13: 0x0000000104411950 reloadcontext.iOS`do_runtime_invoke(method=0x0000000145d09ae8, obj=0x0000000000000000, params=0x000000016f656f20, exc=0x0000000000000000, error=0x000000016f656ff8) at object.c:3052:11 frame #14: 0x000000010440c4dc reloadcontext.iOS`mono_runtime_invoke_checked(method=0x0000000145d09ae8, obj=0x0000000000000000, params=0x000000016f656f20, error=0x000000016f656ff8) at object.c:3220:9 frame #15: 0x0000000104415ae0 reloadcontext.iOS`do_exec_main_checked(method=0x0000000145d09ae8, args=0x000000010703a030, error=0x000000016f656ff8) at object.c:5184:3 frame #16: 0x00000001044144ac reloadcontext.iOS`mono_runtime_exec_main_checked(method=0x0000000145d09ae8, args=0x000000010703a030, error=0x000000016f656ff8) at object.c:5281:9 frame #17: 0x0000000104414500 reloadcontext.iOS`mono_runtime_run_main_checked(method=0x0000000145d09ae8, argc=1, argv=0x000000016f6570d0, error=0x000000016f656ff8) at object.c:4734:9 frame #18: 0x00000001042d3b54 reloadcontext.iOS`mono_jit_exec_internal(domain=0x0000000145f00130, assembly=0x0000000281ba2900, argc=1, argv=0x000000016f6570d0) at driver.c:1320:13 frame #19: 0x00000001042d39a4 reloadcontext.iOS`mono_jit_exec(domain=0x0000000145f00130, assembly=0x0000000281ba2900, argc=1, argv=0x000000016f6570d0) at driver.c:1265:7 frame #20: 0x0000000104597994 reloadcontext.iOS`::xamarin_main(argc=5, argv=0x000000016f657a80, 
launch_mode=XamarinLaunchModeApp) at monotouch-main.m:483:8 frame #21: 0x00000001008510dc reloadcontext.iOS`main(argc=5, argv=0x000000016f657a80) at main.m:104:11 frame #22: 0x0000000190af8360 libdyld.dylib`start + 4 [...] * thread #5, name = 'Debugger agent', stop reason = signal SIGABRT * frame #0: 0x0000000190aedec4 libsystem_kernel.dylib`__pthread_kill + 8 frame #1: 0x0000000190a0d724 libsystem_pthread.dylib`pthread_kill$VARIANT$armv81 + 216 frame #2: 0x000000019095d844 libsystem_c.dylib`abort + 100 frame #3: 0x00000001045871b4 reloadcontext.iOS`log_callback(log_domain=0x0000000000000000, log_level="error", message="* Assertion at ../../../../../mono/mini/interp/interp.c:7176, condition `context' not met\n", fatal=4, user_data=0x0000000000000000) at runtime.m:1213:3 frame #4: 0x0000000104544fc8 reloadcontext.iOS`eglib_log_adapter(log_domain=0x0000000000000000, log_level=G_LOG_LEVEL_ERROR, message="* Assertion at ../../../../../mono/mini/interp/interp.c:7176, condition `context' not met\n", user_data=0x0000000000000000) at mono-logger.c:405:2 frame #5: 0x000000010456093c reloadcontext.iOS`monoeg_g_logstr(log_domain=0x0000000000000000, log_level=G_LOG_LEVEL_ERROR, msg="* Assertion at ../../../../../mono/mini/interp/interp.c:7176, condition `context' not met\n") at goutput.c:134:2 frame #6: 0x0000000104560598 reloadcontext.iOS`monoeg_g_logv_nofree(log_domain=0x0000000000000000, log_level=G_LOG_LEVEL_ERROR, format="* Assertion at %s:%d, condition `%s' not met\n", args="e\x12z\x04\x01") at goutput.c:149:2 frame #7: 0x000000010456061c reloadcontext.iOS`monoeg_assertion_message(format="* Assertion at %s:%d, condition `%s' not met\n") at goutput.c:184:22 frame #8: 0x0000000104560674 reloadcontext.iOS`mono_assertion_message(file="../../../../../mono/mini/interp/interp.c", line=7176, condition="context") at goutput.c:203:2 frame #9: 0x000000010459b570 reloadcontext.iOS`interp_get_resume_state(jit_tls=0x000000014900d000, has_resume_state=0x000000016fc7a9f4, interp_frame=0x000000016fc7a9e8, handler_ip=0x000000016fc7a9e0) at interp.c:7176:2 frame #10: 0x0000000104319420 reloadcontext.iOS`compute_frame_info(thread=0x0000000104fe4130, tls=0x0000000149014e00, force_update=1) at debugger-agent.c:3422:3 frame #11: 0x0000000104320d40 reloadcontext.iOS`thread_commands(command=1, p="", end="", buf=0x000000016fc7acf8) at debugger-agent.c:9048:3 frame #12: 0x000000010431cca0 reloadcontext.iOS`debugger_thread(arg=0x0000000000000000) at debugger-agent.c:10132:10 frame #13: 0x000000010447eb04 reloadcontext.iOS`start_wrapper_internal(start_info=0x0000000000000000, stack_ptr=0x000000016fc7b000) at threads.c:1232:3 frame #14: 0x000000010447e788 reloadcontext.iOS`start_wrapper(data=0x000000028203ef40) at threads.c:1305:8 frame #15: 0x0000000190a11d8c libsystem_pthread.dylib`_pthread_start + 15 [...] ``` Thanks to @drasticactions for helping me to reproduce. Fixes https://devdiv.visualstudio.com/DevDiv/_workitems/edit/1050615
radical pushed a commit that referenced this pull request on Jan 24, 2020
…ono#18536) `jit_tls->interp_context` gets initialized lazily, that is, upon the first interpreter execution on a specific thread (e.g. via interp_runtime_invoke). However, with mixed mode the execution can purely happen in AOT code upon the first interaction with the managed debugger. Stack trace: ``` thread #1, name = 'tid_407', queue = 'com.apple.main-thread' frame #0: 0x0000000190aedc94 libsystem_kernel.dylib`__psynch_cvwait + 8 frame #1: 0x0000000190a0f094 libsystem_pthread.dylib`_pthread_cond_wait$VARIANT$armv81 + 672 frame #2: 0x000000010431318c reloadcontext.iOS`mono_os_cond_wait(cond=0x0000000104b9ba78, mutex=0x0000000104b9ba30) at mono-os-mutex.h:219:8 frame #3: 0x0000000104312a68 reloadcontext.iOS`mono_coop_cond_wait(cond=0x0000000104b9ba78, mutex=0x0000000104b9ba30) at mono-coop-mutex.h:91:2 frame #4: 0x0000000104312858 reloadcontext.iOS`suspend_current at debugger-agent.c:3021:4 frame #5: 0x000000010431be18 reloadcontext.iOS`process_event(event=EVENT_KIND_BREAKPOINT, arg=0x0000000145d09ae8, il_offset=0, ctx=0x0000000149015c20, events=0x0000000000000000, suspend_policy=2) at debugger-agent.c:4058:3 frame #6: 0x0000000104310cf4 reloadcontext.iOS`process_breakpoint_events(_evts=0x000000028351a680, method=0x0000000145d09ae8, ctx=0x0000000149015c20, il_offset=0) at debugger-agent.c:4722:3 frame #7: 0x000000010432f1c8 reloadcontext.iOS`mono_de_process_breakpoint(void_tls=0x0000000149014e00, from_signal=0) at debugger-engine.c:1141:2 frame #8: 0x000000010430f238 reloadcontext.iOS`debugger_agent_breakpoint_from_context(ctx=0x000000016f656790) at debugger-agent.c:4938:2 frame #9: 0x00000001011b73a4 reloadcontext.iOS`sdb_breakpoint_trampoline + 148 frame #10: 0x00000001008511b4 reloadcontext.iOS`reloadcontext_iOS_Application_Main_string__(args=0x000000010703a030) at Main.cs:14 frame #11: 0x00000001010f9730 reloadcontext.iOS`wrapper_runtime_invoke_object_runtime_invoke_dynamic_intptr_intptr_intptr_intptr + 272 frame #12: 0x00000001042fd8b8 reloadcontext.iOS`mono_jit_runtime_invoke(method=0x0000000145d09ae8, obj=0x0000000000000000, params=0x000000016f656f20, exc=0x0000000000000000, error=0x000000016f656ff8) at mini-runtime.c:3162:3 frame #13: 0x0000000104411950 reloadcontext.iOS`do_runtime_invoke(method=0x0000000145d09ae8, obj=0x0000000000000000, params=0x000000016f656f20, exc=0x0000000000000000, error=0x000000016f656ff8) at object.c:3052:11 frame #14: 0x000000010440c4dc reloadcontext.iOS`mono_runtime_invoke_checked(method=0x0000000145d09ae8, obj=0x0000000000000000, params=0x000000016f656f20, error=0x000000016f656ff8) at object.c:3220:9 frame #15: 0x0000000104415ae0 reloadcontext.iOS`do_exec_main_checked(method=0x0000000145d09ae8, args=0x000000010703a030, error=0x000000016f656ff8) at object.c:5184:3 frame #16: 0x00000001044144ac reloadcontext.iOS`mono_runtime_exec_main_checked(method=0x0000000145d09ae8, args=0x000000010703a030, error=0x000000016f656ff8) at object.c:5281:9 frame #17: 0x0000000104414500 reloadcontext.iOS`mono_runtime_run_main_checked(method=0x0000000145d09ae8, argc=1, argv=0x000000016f6570d0, error=0x000000016f656ff8) at object.c:4734:9 frame #18: 0x00000001042d3b54 reloadcontext.iOS`mono_jit_exec_internal(domain=0x0000000145f00130, assembly=0x0000000281ba2900, argc=1, argv=0x000000016f6570d0) at driver.c:1320:13 frame #19: 0x00000001042d39a4 reloadcontext.iOS`mono_jit_exec(domain=0x0000000145f00130, assembly=0x0000000281ba2900, argc=1, argv=0x000000016f6570d0) at driver.c:1265:7 frame #20: 0x0000000104597994 reloadcontext.iOS`::xamarin_main(argc=5, argv=0x000000016f657a80, 
launch_mode=XamarinLaunchModeApp) at monotouch-main.m:483:8 frame #21: 0x00000001008510dc reloadcontext.iOS`main(argc=5, argv=0x000000016f657a80) at main.m:104:11 frame #22: 0x0000000190af8360 libdyld.dylib`start + 4 [...] * thread #5, name = 'Debugger agent', stop reason = signal SIGABRT * frame #0: 0x0000000190aedec4 libsystem_kernel.dylib`__pthread_kill + 8 frame #1: 0x0000000190a0d724 libsystem_pthread.dylib`pthread_kill$VARIANT$armv81 + 216 frame #2: 0x000000019095d844 libsystem_c.dylib`abort + 100 frame #3: 0x00000001045871b4 reloadcontext.iOS`log_callback(log_domain=0x0000000000000000, log_level="error", message="* Assertion at ../../../../../mono/mini/interp/interp.c:7176, condition `context' not met\n", fatal=4, user_data=0x0000000000000000) at runtime.m:1213:3 frame #4: 0x0000000104544fc8 reloadcontext.iOS`eglib_log_adapter(log_domain=0x0000000000000000, log_level=G_LOG_LEVEL_ERROR, message="* Assertion at ../../../../../mono/mini/interp/interp.c:7176, condition `context' not met\n", user_data=0x0000000000000000) at mono-logger.c:405:2 frame #5: 0x000000010456093c reloadcontext.iOS`monoeg_g_logstr(log_domain=0x0000000000000000, log_level=G_LOG_LEVEL_ERROR, msg="* Assertion at ../../../../../mono/mini/interp/interp.c:7176, condition `context' not met\n") at goutput.c:134:2 frame #6: 0x0000000104560598 reloadcontext.iOS`monoeg_g_logv_nofree(log_domain=0x0000000000000000, log_level=G_LOG_LEVEL_ERROR, format="* Assertion at %s:%d, condition `%s' not met\n", args="e\x12z\x04\x01") at goutput.c:149:2 frame #7: 0x000000010456061c reloadcontext.iOS`monoeg_assertion_message(format="* Assertion at %s:%d, condition `%s' not met\n") at goutput.c:184:22 frame #8: 0x0000000104560674 reloadcontext.iOS`mono_assertion_message(file="../../../../../mono/mini/interp/interp.c", line=7176, condition="context") at goutput.c:203:2 frame #9: 0x000000010459b570 reloadcontext.iOS`interp_get_resume_state(jit_tls=0x000000014900d000, has_resume_state=0x000000016fc7a9f4, interp_frame=0x000000016fc7a9e8, handler_ip=0x000000016fc7a9e0) at interp.c:7176:2 frame #10: 0x0000000104319420 reloadcontext.iOS`compute_frame_info(thread=0x0000000104fe4130, tls=0x0000000149014e00, force_update=1) at debugger-agent.c:3422:3 frame #11: 0x0000000104320d40 reloadcontext.iOS`thread_commands(command=1, p="", end="", buf=0x000000016fc7acf8) at debugger-agent.c:9048:3 frame #12: 0x000000010431cca0 reloadcontext.iOS`debugger_thread(arg=0x0000000000000000) at debugger-agent.c:10132:10 frame #13: 0x000000010447eb04 reloadcontext.iOS`start_wrapper_internal(start_info=0x0000000000000000, stack_ptr=0x000000016fc7b000) at threads.c:1232:3 frame #14: 0x000000010447e788 reloadcontext.iOS`start_wrapper(data=0x000000028203ef40) at threads.c:1305:8 frame #15: 0x0000000190a11d8c libsystem_pthread.dylib`_pthread_start + 15 [...] ``` Thanks to @drasticactions for helping me to reproduce. Fixes https://devdiv.visualstudio.com/DevDiv/_workitems/edit/1050615 Co-authored-by: Bernhard Urban-Forster <[email protected]>
radical pushed a commit that referenced this pull request on Jan 24, 2020
…ono#18535) `jit_tls->interp_context` gets initialized lazily, that is, upon the first interpreter execution on a specific thread (e.g. via interp_runtime_invoke). However, with mixed mode the execution can purely happen in AOT code upon the first interaction with the managed debugger. Stack trace: ``` thread #1, name = 'tid_407', queue = 'com.apple.main-thread' frame #0: 0x0000000190aedc94 libsystem_kernel.dylib`__psynch_cvwait + 8 frame #1: 0x0000000190a0f094 libsystem_pthread.dylib`_pthread_cond_wait$VARIANT$armv81 + 672 frame #2: 0x000000010431318c reloadcontext.iOS`mono_os_cond_wait(cond=0x0000000104b9ba78, mutex=0x0000000104b9ba30) at mono-os-mutex.h:219:8 frame #3: 0x0000000104312a68 reloadcontext.iOS`mono_coop_cond_wait(cond=0x0000000104b9ba78, mutex=0x0000000104b9ba30) at mono-coop-mutex.h:91:2 frame #4: 0x0000000104312858 reloadcontext.iOS`suspend_current at debugger-agent.c:3021:4 frame #5: 0x000000010431be18 reloadcontext.iOS`process_event(event=EVENT_KIND_BREAKPOINT, arg=0x0000000145d09ae8, il_offset=0, ctx=0x0000000149015c20, events=0x0000000000000000, suspend_policy=2) at debugger-agent.c:4058:3 frame #6: 0x0000000104310cf4 reloadcontext.iOS`process_breakpoint_events(_evts=0x000000028351a680, method=0x0000000145d09ae8, ctx=0x0000000149015c20, il_offset=0) at debugger-agent.c:4722:3 frame #7: 0x000000010432f1c8 reloadcontext.iOS`mono_de_process_breakpoint(void_tls=0x0000000149014e00, from_signal=0) at debugger-engine.c:1141:2 frame #8: 0x000000010430f238 reloadcontext.iOS`debugger_agent_breakpoint_from_context(ctx=0x000000016f656790) at debugger-agent.c:4938:2 frame #9: 0x00000001011b73a4 reloadcontext.iOS`sdb_breakpoint_trampoline + 148 frame #10: 0x00000001008511b4 reloadcontext.iOS`reloadcontext_iOS_Application_Main_string__(args=0x000000010703a030) at Main.cs:14 frame #11: 0x00000001010f9730 reloadcontext.iOS`wrapper_runtime_invoke_object_runtime_invoke_dynamic_intptr_intptr_intptr_intptr + 272 frame #12: 0x00000001042fd8b8 reloadcontext.iOS`mono_jit_runtime_invoke(method=0x0000000145d09ae8, obj=0x0000000000000000, params=0x000000016f656f20, exc=0x0000000000000000, error=0x000000016f656ff8) at mini-runtime.c:3162:3 frame #13: 0x0000000104411950 reloadcontext.iOS`do_runtime_invoke(method=0x0000000145d09ae8, obj=0x0000000000000000, params=0x000000016f656f20, exc=0x0000000000000000, error=0x000000016f656ff8) at object.c:3052:11 frame #14: 0x000000010440c4dc reloadcontext.iOS`mono_runtime_invoke_checked(method=0x0000000145d09ae8, obj=0x0000000000000000, params=0x000000016f656f20, error=0x000000016f656ff8) at object.c:3220:9 frame #15: 0x0000000104415ae0 reloadcontext.iOS`do_exec_main_checked(method=0x0000000145d09ae8, args=0x000000010703a030, error=0x000000016f656ff8) at object.c:5184:3 frame #16: 0x00000001044144ac reloadcontext.iOS`mono_runtime_exec_main_checked(method=0x0000000145d09ae8, args=0x000000010703a030, error=0x000000016f656ff8) at object.c:5281:9 frame #17: 0x0000000104414500 reloadcontext.iOS`mono_runtime_run_main_checked(method=0x0000000145d09ae8, argc=1, argv=0x000000016f6570d0, error=0x000000016f656ff8) at object.c:4734:9 frame #18: 0x00000001042d3b54 reloadcontext.iOS`mono_jit_exec_internal(domain=0x0000000145f00130, assembly=0x0000000281ba2900, argc=1, argv=0x000000016f6570d0) at driver.c:1320:13 frame #19: 0x00000001042d39a4 reloadcontext.iOS`mono_jit_exec(domain=0x0000000145f00130, assembly=0x0000000281ba2900, argc=1, argv=0x000000016f6570d0) at driver.c:1265:7 frame #20: 0x0000000104597994 reloadcontext.iOS`::xamarin_main(argc=5, argv=0x000000016f657a80, 
launch_mode=XamarinLaunchModeApp) at monotouch-main.m:483:8 frame #21: 0x00000001008510dc reloadcontext.iOS`main(argc=5, argv=0x000000016f657a80) at main.m:104:11 frame #22: 0x0000000190af8360 libdyld.dylib`start + 4 [...] * thread #5, name = 'Debugger agent', stop reason = signal SIGABRT * frame #0: 0x0000000190aedec4 libsystem_kernel.dylib`__pthread_kill + 8 frame #1: 0x0000000190a0d724 libsystem_pthread.dylib`pthread_kill$VARIANT$armv81 + 216 frame #2: 0x000000019095d844 libsystem_c.dylib`abort + 100 frame #3: 0x00000001045871b4 reloadcontext.iOS`log_callback(log_domain=0x0000000000000000, log_level="error", message="* Assertion at ../../../../../mono/mini/interp/interp.c:7176, condition `context' not met\n", fatal=4, user_data=0x0000000000000000) at runtime.m:1213:3 frame #4: 0x0000000104544fc8 reloadcontext.iOS`eglib_log_adapter(log_domain=0x0000000000000000, log_level=G_LOG_LEVEL_ERROR, message="* Assertion at ../../../../../mono/mini/interp/interp.c:7176, condition `context' not met\n", user_data=0x0000000000000000) at mono-logger.c:405:2 frame #5: 0x000000010456093c reloadcontext.iOS`monoeg_g_logstr(log_domain=0x0000000000000000, log_level=G_LOG_LEVEL_ERROR, msg="* Assertion at ../../../../../mono/mini/interp/interp.c:7176, condition `context' not met\n") at goutput.c:134:2 frame #6: 0x0000000104560598 reloadcontext.iOS`monoeg_g_logv_nofree(log_domain=0x0000000000000000, log_level=G_LOG_LEVEL_ERROR, format="* Assertion at %s:%d, condition `%s' not met\n", args="e\x12z\x04\x01") at goutput.c:149:2 frame #7: 0x000000010456061c reloadcontext.iOS`monoeg_assertion_message(format="* Assertion at %s:%d, condition `%s' not met\n") at goutput.c:184:22 frame #8: 0x0000000104560674 reloadcontext.iOS`mono_assertion_message(file="../../../../../mono/mini/interp/interp.c", line=7176, condition="context") at goutput.c:203:2 frame #9: 0x000000010459b570 reloadcontext.iOS`interp_get_resume_state(jit_tls=0x000000014900d000, has_resume_state=0x000000016fc7a9f4, interp_frame=0x000000016fc7a9e8, handler_ip=0x000000016fc7a9e0) at interp.c:7176:2 frame #10: 0x0000000104319420 reloadcontext.iOS`compute_frame_info(thread=0x0000000104fe4130, tls=0x0000000149014e00, force_update=1) at debugger-agent.c:3422:3 frame #11: 0x0000000104320d40 reloadcontext.iOS`thread_commands(command=1, p="", end="", buf=0x000000016fc7acf8) at debugger-agent.c:9048:3 frame #12: 0x000000010431cca0 reloadcontext.iOS`debugger_thread(arg=0x0000000000000000) at debugger-agent.c:10132:10 frame #13: 0x000000010447eb04 reloadcontext.iOS`start_wrapper_internal(start_info=0x0000000000000000, stack_ptr=0x000000016fc7b000) at threads.c:1232:3 frame #14: 0x000000010447e788 reloadcontext.iOS`start_wrapper(data=0x000000028203ef40) at threads.c:1305:8 frame #15: 0x0000000190a11d8c libsystem_pthread.dylib`_pthread_start + 15 [...] ``` Thanks to @drasticactions for helping me to reproduce. Fixes https://devdiv.visualstudio.com/DevDiv/_workitems/edit/1050615 Co-authored-by: Bernhard Urban-Forster <[email protected]>
No description provided.