Conversation

@monojenkins
Collaborator

Backport of PR #5 to 2018-03.

@monojenkins force-pushed the backport-pr-5-to-2018-03 branch 5 times, most recently from 4b7b0d9 to df242c5 on January 26, 2018 00:20
@monojenkins force-pushed the backport-pr-5-to-2018-03 branch from df242c5 to df4e525 on January 26, 2018 00:20
akoeplinger pushed a commit that referenced this pull request Feb 19, 2018
Commit list for mono/NUnitLite:

* mono/NUnitLite@0dc7716 [FinallyDelegate] make key for `lookupTable` actually unique (#6)

Diff: mono/NUnitLite@a8bef89...0dc7716
akoeplinger pushed a commit that referenced this pull request Mar 8, 2018
To pick up:

commit 345755cb49ee1979df280399f03178aae0b4029a
Author: Alex Rønne Petersen <[email protected]>
Date:   Tue Mar 6 13:40:16 2018 +0100

    Add build support for RISC-V. (#6)
akoeplinger pushed a commit that referenced this pull request Mar 19, 2018
Commit list for mono/NUnitLite:

* mono/NUnitLite@b777682 [FinallyDelegate] only call it if available, on some platforms (e.g. iOS) it can be set to null (#7)
* mono/NUnitLite@0dc7716 [FinallyDelegate] make key for `lookupTable` actually unique (#6)
* mono/NUnitLite@a8bef89 make FinallyDelegate more resilient regarding unhandled exceptions (#5)

Diff: mono/NUnitLite@764656c...b777682
akoeplinger pushed a commit that referenced this pull request Oct 31, 2018
We were previously only setting it from the icall. The icall was
therefore fine, while invocations associated with actual dumps
caused crashes like this:

```

thread #10, name = 'Thread Pool Worker'
    frame #0: 0x00007fff978dac22 libsystem_kernel.dylib`__psynch_mutexwait + 10
    frame #1: 0x00007fff979c5dfa libsystem_pthread.dylib`_pthread_mutex_lock_wait + 100
    frame #2: 0x00007fff979c3519 libsystem_pthread.dylib`_pthread_mutex_lock_slow + 285
    frame #3: 0x0000000108e0d4a9 mono`mono_loader_lock + 73
    frame #4: 0x0000000108dc9106 mono`mono_class_create_from_typedef + 182
    frame #5: 0x0000000108dc1f2c mono`mono_class_get_checked + 92
    frame #6: 0x0000000108e21da2 mono`mono_metadata_parse_type_internal + 1378
    frame #7: 0x0000000108e25e11 mono`mono_metadata_parse_mh_full + 1233
    frame #8: 0x0000000108ca4f79 mono`mono_debug_add_aot_method + 121
    frame #9: 0x0000000108cc6b1e mono`decode_exception_debug_info + 5918
    frame #10: 0x0000000108cc5047 mono`mono_aot_find_jit_info + 1367
    frame #11: 0x0000000108e082cc mono`mono_jit_info_table_find_internal + 204
    frame #12: 0x0000000108cdb035 mono`mini_jit_info_table_find_ext + 69
    frame #13: 0x0000000108cdad5c mono`mono_find_jit_info_ext + 124
    frame #14: 0x0000000108cdc3a5 mono`unwinder_unwind_frame + 229
    frame #15: 0x0000000108cdbade mono`mono_walk_stack_full + 798
    frame #16: 0x0000000108cda1a4 mono`mono_summarize_managed_stack + 100
    frame #17: 0x0000000108e6dc03 mono`mono_threads_summarize_execute + 1347
    frame #18: 0x0000000108e6df88 mono`mono_threads_summarize + 360

```
akoeplinger pushed a commit that referenced this pull request Nov 19, 2018
We were previously only setting it from the icall. The icall was
therefore fine, while invocations associated with actual dumps
caused crashes like this:

```

thread #10, name = 'Thread Pool Worker'
    frame #0: 0x00007fff978dac22 libsystem_kernel.dylib`__psynch_mutexwait + 10
    frame #1: 0x00007fff979c5dfa libsystem_pthread.dylib`_pthread_mutex_lock_wait + 100
    frame #2: 0x00007fff979c3519 libsystem_pthread.dylib`_pthread_mutex_lock_slow + 285
    frame #3: 0x0000000108e0d4a9 mono`mono_loader_lock + 73
    frame #4: 0x0000000108dc9106 mono`mono_class_create_from_typedef + 182
    frame #5: 0x0000000108dc1f2c mono`mono_class_get_checked + 92
    frame #6: 0x0000000108e21da2 mono`mono_metadata_parse_type_internal + 1378
    frame #7: 0x0000000108e25e11 mono`mono_metadata_parse_mh_full + 1233
    frame #8: 0x0000000108ca4f79 mono`mono_debug_add_aot_method + 121
    frame #9: 0x0000000108cc6b1e mono`decode_exception_debug_info + 5918
    frame #10: 0x0000000108cc5047 mono`mono_aot_find_jit_info + 1367
    frame #11: 0x0000000108e082cc mono`mono_jit_info_table_find_internal + 204
    frame #12: 0x0000000108cdb035 mono`mini_jit_info_table_find_ext + 69
    frame #13: 0x0000000108cdad5c mono`mono_find_jit_info_ext + 124
    frame #14: 0x0000000108cdc3a5 mono`unwinder_unwind_frame + 229
    frame #15: 0x0000000108cdbade mono`mono_walk_stack_full + 798
    frame #16: 0x0000000108cda1a4 mono`mono_summarize_managed_stack + 100
    frame #17: 0x0000000108e6dc03 mono`mono_threads_summarize_execute + 1347
    frame #18: 0x0000000108e6df88 mono`mono_threads_summarize + 360

```
akoeplinger pushed a commit that referenced this pull request Feb 1, 2019
* [runtime] Make infrastructure for merp tests

* [runtime] Fix dumping when crash happens without sigctx

* [runtime] Disable crashy (886/1000 runs) stacktrace walker

* [crash] Remove usage of allocating build/os info functions

* [crash] Remove often-crashing g_free on native crash path

* [crash] Add crash_reporter checked build

We add a checked build mode that asserts when Mono mallocs inside the
crash reporter, turning risky allocations into assertions. It's useful
for automated testing because the double abort often presents itself as
an indefinite hang: if it happens before the thread-dumping supervisor
process is started, or after it ends, the crash reporter hangs.
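
As a rough illustration of the idea (a minimal sketch, assuming a hypothetical thread-local flag and allocation wrapper rather than the actual mono checked-build hooks), the point is that a checked build turns an allocation on the crash path into a loud assertion failure instead of a silent hang:

```
#include <assert.h>
#include <stdlib.h>

/* Hypothetical flag, set on entry to the crash reporter and cleared on exit. */
static __thread int in_crash_reporter;

/* Hypothetical checked-build allocation wrapper: allocating on the crash
 * path is treated as a bug and aborts loudly, instead of risking the
 * double abort that shows up as an indefinite hang. */
static void *
checked_malloc (size_t size)
{
#ifdef CHECKED_BUILD_CRASH_REPORTER
	assert (!in_crash_reporter && "allocation inside crash reporter");
#endif
	return malloc (size);
}
```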

* [crash] Remove reliance on nested SIGABRT/double-fault (broken on OSX)

* [crash] Fix top-level handling of double faults/assertions

* [runtime] Make fatal unwinding errors return into handled error paths

* [crash] Change dumper logging for better info

* [runtime] Fix handling of segfault on sgen thread

Threads without domains that get segfaults will end up in
this handler. It's not safe to call this function with a NULL domain.

See crash below:

```
* thread #1, name = 'tid_307', queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=2, address=0x10eff40f8)
  * frame #0: 0x000000010e1510d9 mono-sgen`mono_threads_summarize_execute(ctx=0x0000000000000000, out=0x0000001000000000, hashes=0x0000100000100000, silent=4096, mem="", provided_size=2199023296512) at threads.c:6414
    frame #1: 0x000000010e152092 mono-sgen`mono_threads_summarize(ctx=0x000000010effda00, out=0x000000010effdba0, hashes=0x000000010effdb90, silent=0, signal_handler_controller=1, mem=0x0000000000000000, provided_size=0) at threads.c:6508
    frame #2: 0x000000010df7c69f mono-sgen`dump_native_stacktrace(signal="SIGSEGV", ctx=0x000000010effef48) at mini-posix.c:1026
    frame #3: 0x000000010df7c37f mono-sgen`mono_dump_native_crash_info(signal="SIGSEGV", ctx=0x000000010effef48, info=0x000000010effeee0) at mini-posix.c:1147
    frame #4: 0x000000010de720a9 mono-sgen`mono_handle_native_crash(signal="SIGSEGV", ctx=0x000000010effef48, info=0x000000010effeee0) at mini-exceptions.c:3227
    frame #5: 0x000000010dd6ac0d mono-sgen`mono_sigsegv_signal_handler_debug(_dummy=11, _info=0x000000010effeee0, context=0x000000010effef48, debug_fault_addr=0xffffffffffffffff) at mini-runtime.c:3574
    frame #6: 0x000000010dd6a8d3 mono-sgen`mono_sigsegv_signal_handler(_dummy=11, _info=0x000000010effeee0, context=0x000000010effef48) at mini-runtime.c:3612
    frame #7: 0x00007fff73dbdf5a libsystem_platform.dylib`_sigtramp + 26
    frame #8: 0x0000000110bb81c1
    frame #9: 0x000000011085ffe1
    frame #10: 0x000000010dd6d4f3 mono-sgen`mono_jit_runtime_invoke(method=0x00007faae4f01fe8, obj=0x0000000000000000, params=0x00007ffee1eaa180, exc=0x00007ffee1ea9f08, error=0x00007ffee1eaa250) at mini-runtime.c:3215
    frame #11: 0x000000010e11509d mono-sgen`do_runtime_invoke(method=0x00007faae4f01fe8, obj=0x0000000000000000, params=0x00007ffee1eaa180, exc=0x0000000000000000, error=0x00007ffee1eaa250) at object.c:2977
    frame #12: 0x000000010e10d961 mono-sgen`mono_runtime_invoke_checked(method=0x00007faae4f01fe8, obj=0x0000000000000000, params=0x00007ffee1eaa180, error=0x00007ffee1eaa250) at object.c:3145
    frame #13: 0x000000010e11aa58 mono-sgen`do_exec_main_checked(method=0x00007faae4f01fe8, args=0x000000010f0003e8, error=0x00007ffee1eaa250) at object.c:5042
    frame #14: 0x000000010e118803 mono-sgen`mono_runtime_exec_main_checked(method=0x00007faae4f01fe8, args=0x000000010f0003e8, error=0x00007ffee1eaa250) at object.c:5138
    frame #15: 0x000000010e118856 mono-sgen`mono_runtime_run_main_checked(method=0x00007faae4f01fe8, argc=2, argv=0x00007ffee1eaa760, error=0x00007ffee1eaa250) at object.c:4599
    frame #16: 0x000000010de1db2f mono-sgen`mono_jit_exec_internal(domain=0x00007faae4f00860, assembly=0x00007faae4c02ab0, argc=2, argv=0x00007ffee1eaa760) at driver.c:1298
    frame #17: 0x000000010de1d95d mono-sgen`mono_jit_exec(domain=0x00007faae4f00860, assembly=0x00007faae4c02ab0, argc=2, argv=0x00007ffee1eaa760) at driver.c:1257
    frame #18: 0x000000010de2257f mono-sgen`main_thread_handler(user_data=0x00007ffee1eaa6a0) at driver.c:1375
    frame #19: 0x000000010de20852 mono-sgen`mono_main(argc=3, argv=0x00007ffee1eaa758) at driver.c:2551
    frame #20: 0x000000010dd56d7e mono-sgen`mono_main_with_options(argc=3, argv=0x00007ffee1eaa758) at main.c:50
    frame #21: 0x000000010dd5638d mono-sgen`main(argc=3, argv=0x00007ffee1eaa758) at main.c:406
    frame #22: 0x00007fff73aaf015 libdyld.dylib`start + 1
    frame #23: 0x00007fff73aaf015 libdyld.dylib`start + 1
  thread #2, name = 'SGen worker'
    frame #0: 0x000000010e2afd77 mono-sgen`mono_get_hazardous_pointer(pp=0x0000000000000178, hp=0x000000010ef87618, hazard_index=0) at hazard-pointer.c:208
    frame #1: 0x000000010e0b28e1 mono-sgen`mono_jit_info_table_find_internal(domain=0x0000000000000000, addr=0x00007fff73bffa16, try_aot=1, allow_trampolines=1) at jit-info.c:304
    frame #2: 0x000000010dd6aa5f mono-sgen`mono_sigsegv_signal_handler_debug(_dummy=11, _info=0x000070000fb81c58, context=0x000070000fb81cc0, debug_fault_addr=0x000000010e28fb20) at mini-runtime.c:3540
    frame #3: 0x000000010dd6a8d3 mono-sgen`mono_sigsegv_signal_handler(_dummy=11, _info=0x000070000fb81c58, context=0x000070000fb81cc0) at mini-runtime.c:3612
    frame #4: 0x00007fff73dbdf5a libsystem_platform.dylib`_sigtramp + 26
    frame #5: 0x00007fff73bffa17 libsystem_kernel.dylib`__psynch_cvwait + 11
    frame #6: 0x00007fff73dc8589 libsystem_pthread.dylib`_pthread_cond_wait + 732
    frame #7: 0x000000010e28d76d mono-sgen`mono_os_cond_wait(cond=0x000000010e44c9d8, mutex=0x000000010e44c998) at mono-os-mutex.h:168
    frame #8: 0x000000010e28df4f mono-sgen`get_work(worker_index=0, work_context=0x000070000fb81ee0, do_idle=0x000070000fb81ed4, job=0x000070000fb81ec8) at sgen-thread-pool.c:165
    frame #9: 0x000000010e28d2cb mono-sgen`thread_func(data=0x0000000000000000) at sgen-thread-pool.c:196
    frame #10: 0x00007fff73dc7661 libsystem_pthread.dylib`_pthread_body + 340
    frame #11: 0x00007fff73dc750d libsystem_pthread.dylib`_pthread_start + 377
    frame #12: 0x00007fff73dc6bf9 libsystem_pthread.dylib`thread_start + 13
  thread #3, name = 'Finalizer'
    frame #0: 0x00007fff73bf6246 libsystem_kernel.dylib`semaphore_wait_trap + 10
    frame #1: 0x000000010e1d9c0a mono-sgen`mono_os_sem_wait(sem=0x000000010e43e400, flags=MONO_SEM_FLAGS_ALERTABLE) at mono-os-semaphore.h:84
    frame #2: 0x000000010e1d832d mono-sgen`mono_coop_sem_wait(sem=0x000000010e43e400, flags=MONO_SEM_FLAGS_ALERTABLE) at mono-coop-semaphore.h:41
    frame #3: 0x000000010e1da787 mono-sgen`finalizer_thread(unused=0x0000000000000000) at gc.c:920
    frame #4: 0x000000010e152919 mono-sgen`start_wrapper_internal(start_info=0x0000000000000000, stack_ptr=0x000070000fd85000) at threads.c:1178
    frame #5: 0x000000010e1525b6 mono-sgen`start_wrapper(data=0x00007faae4f31bd0) at threads.c:1238
    frame #6: 0x00007fff73dc7661 libsystem_pthread.dylib`_pthread_body + 340
    frame #7: 0x00007fff73dc750d libsystem_pthread.dylib`_pthread_start + 377
    frame #8: 0x00007fff73dc6bf9 libsystem_pthread.dylib`thread_start + 13
  thread #4
    frame #0: 0x00007fff73c0028a libsystem_kernel.dylib`__workq_kernreturn + 10
    frame #1: 0x00007fff73dc7009 libsystem_pthread.dylib`_pthread_wqthread + 1035
    frame #2: 0x00007fff73dc6be9 libsystem_pthread.dylib`start_wqthread + 13
(lldb)
```
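
A minimal sketch of the guard described above, with stand-in types and dump helpers rather than the real mono functions: a faulting thread that has no domain attached must never reach the managed summarizer.

```
#include <stdio.h>

/* Stand-in for the real MonoDomain; the two dump helpers below are
 * hypothetical placeholders for the actual mono crash-dump paths. */
typedef struct _MonoDomain MonoDomain;

static void
dump_native_only (void *ctx, void *info)
{
	(void) ctx; (void) info;
	fputs ("native-only crash dump\n", stderr);
}

static void
dump_with_managed_summary (MonoDomain *domain, void *ctx, void *info)
{
	(void) domain; (void) ctx; (void) info;
	fputs ("native + managed crash dump\n", stderr);
}

static void
dump_crash_info (MonoDomain *domain, void *ctx, void *info)
{
	if (!domain) {
		/* e.g. an sgen worker: the managed summarizer is not
		 * safe to call with a NULL domain. */
		dump_native_only (ctx, info);
		return;
	}
	dump_with_managed_summary (domain, ctx, info);
}
```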

* [crash] Add signal-safe mmap/file "allocator"

* [crash] Remove use of static memory from dumper

* [runtime] Reduce print buffer size for lockless printer.

Each frame that prints grows by the size of `buff`. In practice, clang
often fails to deduplicate some of these buffers, leading to stack
frames around 30 KB in size.

The problem surfaced as a series of hard-to-diagnose segfaults on stacks
that otherwise looked fine during the crash-reporting stress test.

This change shrinks the stacks to about a tenth of their previous size.
It doesn't seem to break the crash reporter messages anywhere (other
"max name length" fields may need to shrink), and it's not
mission-critical anywhere else.
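
A simplified illustration of the buffer-size effect (not the actual mono printer, which formats with signal-safe helpers; `snprintf` merely stands in here): a large on-stack scratch buffer in a helper that appears on every walked frame inflates each stack frame by roughly the size of that buffer.

```
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Before: each call reserves ~4 KB of stack for the scratch buffer,
 * so a deep frame walk burns tens of kilobytes of stack. */
static void
print_frame_big (const char *name)
{
	char buff [4096];
	snprintf (buff, sizeof (buff), "frame: %s\n", name);
	write (STDERR_FILENO, buff, strlen (buff));
}

/* After: a much smaller buffer keeps each frame at roughly a tenth
 * of the previous size. */
static void
print_frame_small (const char *name)
{
	char buff [400];
	snprintf (buff, sizeof (buff), "frame: %s\n", name);
	write (STDERR_FILENO, buff, strlen (buff));
}
```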

* [crash] Use async-safe file memory for dumper internals

* [crash] Add memory barriers around merp configuration

* [crash] Use signal-safe printers on all native crash paths

* [crash] Move gdb/lldb lookup to startup

* [runtime] Move MOSTLY_ASYNC_SAFE_FPRINTF to eglib

* [runtime] Fix all callsites of MOSTLY_ASYNC_SAFE_PRINTF

* [runtime] Fix all callsites of MOSTLY_ASYNC_SAFE_FPRINTF

* [crash] Switch to signal-safe exit function

* [crash] Make dumper enum in managed

* [runtime] Add more information to managed frame

* [crash] Make async_safe printers inlined

* [runtime] Move basic pe_file functionality into proclib

* [crash] Fix handling of thread attributes
akoeplinger pushed a commit that referenced this pull request Feb 5, 2019
We were previously only setting it from the icall. The icall was
therefore fine, while invocations associated with actual dumps
caused crashes like this:

```

thread #10, name = 'Thread Pool Worker'
    frame #0: 0x00007fff978dac22 libsystem_kernel.dylib`__psynch_mutexwait + 10
    frame #1: 0x00007fff979c5dfa libsystem_pthread.dylib`_pthread_mutex_lock_wait + 100
    frame #2: 0x00007fff979c3519 libsystem_pthread.dylib`_pthread_mutex_lock_slow + 285
    frame #3: 0x0000000108e0d4a9 mono`mono_loader_lock + 73
    frame #4: 0x0000000108dc9106 mono`mono_class_create_from_typedef + 182
    frame #5: 0x0000000108dc1f2c mono`mono_class_get_checked + 92
    frame #6: 0x0000000108e21da2 mono`mono_metadata_parse_type_internal + 1378
    frame #7: 0x0000000108e25e11 mono`mono_metadata_parse_mh_full + 1233
    frame #8: 0x0000000108ca4f79 mono`mono_debug_add_aot_method + 121
    frame #9: 0x0000000108cc6b1e mono`decode_exception_debug_info + 5918
    frame #10: 0x0000000108cc5047 mono`mono_aot_find_jit_info + 1367
    frame #11: 0x0000000108e082cc mono`mono_jit_info_table_find_internal + 204
    frame #12: 0x0000000108cdb035 mono`mini_jit_info_table_find_ext + 69
    frame #13: 0x0000000108cdad5c mono`mono_find_jit_info_ext + 124
    frame #14: 0x0000000108cdc3a5 mono`unwinder_unwind_frame + 229
    frame #15: 0x0000000108cdbade mono`mono_walk_stack_full + 798
    frame #16: 0x0000000108cda1a4 mono`mono_summarize_managed_stack + 100
    frame #17: 0x0000000108e6dc03 mono`mono_threads_summarize_execute + 1347
    frame #18: 0x0000000108e6df88 mono`mono_threads_summarize + 360

```
akoeplinger pushed a commit that referenced this pull request Feb 5, 2019
* [crash] Support extra merp params

* [runtime] Make infrastructure for merp tests

* [runtime] Fix dumping when crash happens without sigctx

* [runtime] Disable crashy (886/1000 runs) stacktrace walker

* [crash] Remove usage of allocating build/os info functions

* [crash] Remove often-crashing g_free on native crash path

* [crash] Add crash_reporter checked build

We add a checked build mode that asserts when Mono mallocs inside the
crash reporter, turning risky allocations into assertions. It's useful
for automated testing because the double abort often presents itself as
an indefinite hang: if it happens before the thread-dumping supervisor
process is started, or after it ends, the crash reporter hangs.

* [crash] Remove reliance on nested SIGABRT/double-fault (broken on OSX)

* [crash] Fix top-level handling of double faults/assertions

* [runtime] Make fatal unwinding errors return into handled error paths

* [crash] Change dumper logging for better info

* [runtime] Fix handling of segfault on sgen thread

Threads without domains that get segfaults will end up in
this handler. It's not safe to call this function with a NULL domain.

See crash below:

```
* thread #1, name = 'tid_307', queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=2, address=0x10eff40f8)
  * frame #0: 0x000000010e1510d9 mono-sgen`mono_threads_summarize_execute(ctx=0x0000000000000000, out=0x0000001000000000, hashes=0x0000100000100000, silent=4096, mem="", provided_size=2199023296512) at threads.c:6414
    frame #1: 0x000000010e152092 mono-sgen`mono_threads_summarize(ctx=0x000000010effda00, out=0x000000010effdba0, hashes=0x000000010effdb90, silent=0, signal_handler_controller=1, mem=0x0000000000000000, provided_size=0) at threads.c:6508
    frame #2: 0x000000010df7c69f mono-sgen`dump_native_stacktrace(signal="SIGSEGV", ctx=0x000000010effef48) at mini-posix.c:1026
    frame #3: 0x000000010df7c37f mono-sgen`mono_dump_native_crash_info(signal="SIGSEGV", ctx=0x000000010effef48, info=0x000000010effeee0) at mini-posix.c:1147
    frame #4: 0x000000010de720a9 mono-sgen`mono_handle_native_crash(signal="SIGSEGV", ctx=0x000000010effef48, info=0x000000010effeee0) at mini-exceptions.c:3227
    frame #5: 0x000000010dd6ac0d mono-sgen`mono_sigsegv_signal_handler_debug(_dummy=11, _info=0x000000010effeee0, context=0x000000010effef48, debug_fault_addr=0xffffffffffffffff) at mini-runtime.c:3574
    frame #6: 0x000000010dd6a8d3 mono-sgen`mono_sigsegv_signal_handler(_dummy=11, _info=0x000000010effeee0, context=0x000000010effef48) at mini-runtime.c:3612
    frame #7: 0x00007fff73dbdf5a libsystem_platform.dylib`_sigtramp + 26
    frame #8: 0x0000000110bb81c1
    frame #9: 0x000000011085ffe1
    frame #10: 0x000000010dd6d4f3 mono-sgen`mono_jit_runtime_invoke(method=0x00007faae4f01fe8, obj=0x0000000000000000, params=0x00007ffee1eaa180, exc=0x00007ffee1ea9f08, error=0x00007ffee1eaa250) at mini-runtime.c:3215
    frame #11: 0x000000010e11509d mono-sgen`do_runtime_invoke(method=0x00007faae4f01fe8, obj=0x0000000000000000, params=0x00007ffee1eaa180, exc=0x0000000000000000, error=0x00007ffee1eaa250) at object.c:2977
    frame #12: 0x000000010e10d961 mono-sgen`mono_runtime_invoke_checked(method=0x00007faae4f01fe8, obj=0x0000000000000000, params=0x00007ffee1eaa180, error=0x00007ffee1eaa250) at object.c:3145
    frame #13: 0x000000010e11aa58 mono-sgen`do_exec_main_checked(method=0x00007faae4f01fe8, args=0x000000010f0003e8, error=0x00007ffee1eaa250) at object.c:5042
    frame #14: 0x000000010e118803 mono-sgen`mono_runtime_exec_main_checked(method=0x00007faae4f01fe8, args=0x000000010f0003e8, error=0x00007ffee1eaa250) at object.c:5138
    frame #15: 0x000000010e118856 mono-sgen`mono_runtime_run_main_checked(method=0x00007faae4f01fe8, argc=2, argv=0x00007ffee1eaa760, error=0x00007ffee1eaa250) at object.c:4599
    frame #16: 0x000000010de1db2f mono-sgen`mono_jit_exec_internal(domain=0x00007faae4f00860, assembly=0x00007faae4c02ab0, argc=2, argv=0x00007ffee1eaa760) at driver.c:1298
    frame #17: 0x000000010de1d95d mono-sgen`mono_jit_exec(domain=0x00007faae4f00860, assembly=0x00007faae4c02ab0, argc=2, argv=0x00007ffee1eaa760) at driver.c:1257
    frame #18: 0x000000010de2257f mono-sgen`main_thread_handler(user_data=0x00007ffee1eaa6a0) at driver.c:1375
    frame #19: 0x000000010de20852 mono-sgen`mono_main(argc=3, argv=0x00007ffee1eaa758) at driver.c:2551
    frame #20: 0x000000010dd56d7e mono-sgen`mono_main_with_options(argc=3, argv=0x00007ffee1eaa758) at main.c:50
    frame #21: 0x000000010dd5638d mono-sgen`main(argc=3, argv=0x00007ffee1eaa758) at main.c:406
    frame #22: 0x00007fff73aaf015 libdyld.dylib`start + 1
    frame #23: 0x00007fff73aaf015 libdyld.dylib`start + 1
  thread #2, name = 'SGen worker'
    frame #0: 0x000000010e2afd77 mono-sgen`mono_get_hazardous_pointer(pp=0x0000000000000178, hp=0x000000010ef87618, hazard_index=0) at hazard-pointer.c:208
    frame #1: 0x000000010e0b28e1 mono-sgen`mono_jit_info_table_find_internal(domain=0x0000000000000000, addr=0x00007fff73bffa16, try_aot=1, allow_trampolines=1) at jit-info.c:304
    frame #2: 0x000000010dd6aa5f mono-sgen`mono_sigsegv_signal_handler_debug(_dummy=11, _info=0x000070000fb81c58, context=0x000070000fb81cc0, debug_fault_addr=0x000000010e28fb20) at mini-runtime.c:3540
    frame #3: 0x000000010dd6a8d3 mono-sgen`mono_sigsegv_signal_handler(_dummy=11, _info=0x000070000fb81c58, context=0x000070000fb81cc0) at mini-runtime.c:3612
    frame #4: 0x00007fff73dbdf5a libsystem_platform.dylib`_sigtramp + 26
    frame #5: 0x00007fff73bffa17 libsystem_kernel.dylib`__psynch_cvwait + 11
    frame #6: 0x00007fff73dc8589 libsystem_pthread.dylib`_pthread_cond_wait + 732
    frame #7: 0x000000010e28d76d mono-sgen`mono_os_cond_wait(cond=0x000000010e44c9d8, mutex=0x000000010e44c998) at mono-os-mutex.h:168
    frame #8: 0x000000010e28df4f mono-sgen`get_work(worker_index=0, work_context=0x000070000fb81ee0, do_idle=0x000070000fb81ed4, job=0x000070000fb81ec8) at sgen-thread-pool.c:165
    frame #9: 0x000000010e28d2cb mono-sgen`thread_func(data=0x0000000000000000) at sgen-thread-pool.c:196
    frame #10: 0x00007fff73dc7661 libsystem_pthread.dylib`_pthread_body + 340
    frame #11: 0x00007fff73dc750d libsystem_pthread.dylib`_pthread_start + 377
    frame #12: 0x00007fff73dc6bf9 libsystem_pthread.dylib`thread_start + 13
  thread #3, name = 'Finalizer'
    frame #0: 0x00007fff73bf6246 libsystem_kernel.dylib`semaphore_wait_trap + 10
    frame #1: 0x000000010e1d9c0a mono-sgen`mono_os_sem_wait(sem=0x000000010e43e400, flags=MONO_SEM_FLAGS_ALERTABLE) at mono-os-semaphore.h:84
    frame #2: 0x000000010e1d832d mono-sgen`mono_coop_sem_wait(sem=0x000000010e43e400, flags=MONO_SEM_FLAGS_ALERTABLE) at mono-coop-semaphore.h:41
    frame #3: 0x000000010e1da787 mono-sgen`finalizer_thread(unused=0x0000000000000000) at gc.c:920
    frame #4: 0x000000010e152919 mono-sgen`start_wrapper_internal(start_info=0x0000000000000000, stack_ptr=0x000070000fd85000) at threads.c:1178
    frame #5: 0x000000010e1525b6 mono-sgen`start_wrapper(data=0x00007faae4f31bd0) at threads.c:1238
    frame #6: 0x00007fff73dc7661 libsystem_pthread.dylib`_pthread_body + 340
    frame #7: 0x00007fff73dc750d libsystem_pthread.dylib`_pthread_start + 377
    frame #8: 0x00007fff73dc6bf9 libsystem_pthread.dylib`thread_start + 13
  thread #4
    frame #0: 0x00007fff73c0028a libsystem_kernel.dylib`__workq_kernreturn + 10
    frame #1: 0x00007fff73dc7009 libsystem_pthread.dylib`_pthread_wqthread + 1035
    frame #2: 0x00007fff73dc6be9 libsystem_pthread.dylib`start_wqthread + 13
(lldb)
```

* [crash] Add signal-safe mmap/file "allocator"

* [crash] Remove use of static memory from dumper

* [runtime] Reduce print buffer size for lockless printer.

Each frame that prints grows by the size of `buff`. In practice, clang
often fails to deduplicate some of these buffers, leading to stack
frames around 30 KB in size.

The problem surfaced as a series of hard-to-diagnose segfaults on stacks
that otherwise looked fine during the crash-reporting stress test.

This change shrinks the stacks to about a tenth of their previous size.
It doesn't seem to break the crash reporter messages anywhere (other
"max name length" fields may need to shrink), and it's not
mission-critical anywhere else.

* [crash] Use async-safe file memory for dumper internals

* [crash] Add memory barriers around merp configuration

* [crash] Use signal-safe printers on all native crash paths

* [crash] Move gdb/lldb lookup to startup

* [runtime] Move MOSTLY_ASYNC_SAFE_FPRINTF to eglib

* [runtime] Fix all callsites of MOSTLY_ASYNC_SAFE_PRINTF

* [runtime] Fix all callsites of MOSTLY_ASYNC_SAFE_FPRINTF

* [crash] Switch to signal-safe exit function

* [crash] Make dumper enum in managed

* [runtime] Add more information to managed frame

* [crash] Make async_safe printers inlined

* [runtime] Move basic pe_file functionality into proclib

* [crash] Fix handling of thread attributes

* [crash] Place hashes into the json file for all threads

* [crash] Improve reporting on merp test

* [crash] Log the locations of written-to files, for better post-mortems
akoeplinger pushed a commit that referenced this pull request Feb 13, 2019
* [crash] Support extra merp params

* [runtime] Make infrastructure for merp tests

* [runtime] Fix dumping when crash happens without sigctx

* [runtime] Disable crashy (886/1000 runs) stacktrace walker

* [crash] Remove usage of allocating build/os info functions

* [crash] Remove often-crashing g_free on native crash path

* [crash] Add crash_reporter checked build

We add a checked build mode that asserts when Mono mallocs inside the
crash reporter, turning risky allocations into assertions. It's useful
for automated testing because the double abort often presents itself as
an indefinite hang: if it happens before the thread-dumping supervisor
process is started, or after it ends, the crash reporter hangs.

* [crash] Remove reliance on nested SIGABRT/double-fault (broken on OSX)

* [crash] Fix top-level handling of double faults/assertions

* [runtime] Make fatal unwinding errors return into handled error paths

* [crash] Change dumper logging for better info

* [runtime] Fix handling of segfault on sgen thread

Threads without domains that get segfaults will end up in
this handler. It's not safe to call this function with a NULL domain.

See crash below:

```
* thread #1, name = 'tid_307', queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=2, address=0x10eff40f8)
  * frame #0: 0x000000010e1510d9 mono-sgen`mono_threads_summarize_execute(ctx=0x0000000000000000, out=0x0000001000000000, hashes=0x0000100000100000, silent=4096, mem="", provided_size=2199023296512) at threads.c:6414
    frame #1: 0x000000010e152092 mono-sgen`mono_threads_summarize(ctx=0x000000010effda00, out=0x000000010effdba0, hashes=0x000000010effdb90, silent=0, signal_handler_controller=1, mem=0x0000000000000000, provided_size=0) at threads.c:6508
    frame #2: 0x000000010df7c69f mono-sgen`dump_native_stacktrace(signal="SIGSEGV", ctx=0x000000010effef48) at mini-posix.c:1026
    frame #3: 0x000000010df7c37f mono-sgen`mono_dump_native_crash_info(signal="SIGSEGV", ctx=0x000000010effef48, info=0x000000010effeee0) at mini-posix.c:1147
    frame #4: 0x000000010de720a9 mono-sgen`mono_handle_native_crash(signal="SIGSEGV", ctx=0x000000010effef48, info=0x000000010effeee0) at mini-exceptions.c:3227
    frame #5: 0x000000010dd6ac0d mono-sgen`mono_sigsegv_signal_handler_debug(_dummy=11, _info=0x000000010effeee0, context=0x000000010effef48, debug_fault_addr=0xffffffffffffffff) at mini-runtime.c:3574
    frame #6: 0x000000010dd6a8d3 mono-sgen`mono_sigsegv_signal_handler(_dummy=11, _info=0x000000010effeee0, context=0x000000010effef48) at mini-runtime.c:3612
    frame #7: 0x00007fff73dbdf5a libsystem_platform.dylib`_sigtramp + 26
    frame #8: 0x0000000110bb81c1
    frame #9: 0x000000011085ffe1
    frame #10: 0x000000010dd6d4f3 mono-sgen`mono_jit_runtime_invoke(method=0x00007faae4f01fe8, obj=0x0000000000000000, params=0x00007ffee1eaa180, exc=0x00007ffee1ea9f08, error=0x00007ffee1eaa250) at mini-runtime.c:3215
    frame #11: 0x000000010e11509d mono-sgen`do_runtime_invoke(method=0x00007faae4f01fe8, obj=0x0000000000000000, params=0x00007ffee1eaa180, exc=0x0000000000000000, error=0x00007ffee1eaa250) at object.c:2977
    frame #12: 0x000000010e10d961 mono-sgen`mono_runtime_invoke_checked(method=0x00007faae4f01fe8, obj=0x0000000000000000, params=0x00007ffee1eaa180, error=0x00007ffee1eaa250) at object.c:3145
    frame #13: 0x000000010e11aa58 mono-sgen`do_exec_main_checked(method=0x00007faae4f01fe8, args=0x000000010f0003e8, error=0x00007ffee1eaa250) at object.c:5042
    frame #14: 0x000000010e118803 mono-sgen`mono_runtime_exec_main_checked(method=0x00007faae4f01fe8, args=0x000000010f0003e8, error=0x00007ffee1eaa250) at object.c:5138
    frame #15: 0x000000010e118856 mono-sgen`mono_runtime_run_main_checked(method=0x00007faae4f01fe8, argc=2, argv=0x00007ffee1eaa760, error=0x00007ffee1eaa250) at object.c:4599
    frame #16: 0x000000010de1db2f mono-sgen`mono_jit_exec_internal(domain=0x00007faae4f00860, assembly=0x00007faae4c02ab0, argc=2, argv=0x00007ffee1eaa760) at driver.c:1298
    frame #17: 0x000000010de1d95d mono-sgen`mono_jit_exec(domain=0x00007faae4f00860, assembly=0x00007faae4c02ab0, argc=2, argv=0x00007ffee1eaa760) at driver.c:1257
    frame #18: 0x000000010de2257f mono-sgen`main_thread_handler(user_data=0x00007ffee1eaa6a0) at driver.c:1375
    frame #19: 0x000000010de20852 mono-sgen`mono_main(argc=3, argv=0x00007ffee1eaa758) at driver.c:2551
    frame #20: 0x000000010dd56d7e mono-sgen`mono_main_with_options(argc=3, argv=0x00007ffee1eaa758) at main.c:50
    frame #21: 0x000000010dd5638d mono-sgen`main(argc=3, argv=0x00007ffee1eaa758) at main.c:406
    frame #22: 0x00007fff73aaf015 libdyld.dylib`start + 1
    frame #23: 0x00007fff73aaf015 libdyld.dylib`start + 1
  thread #2, name = 'SGen worker'
    frame #0: 0x000000010e2afd77 mono-sgen`mono_get_hazardous_pointer(pp=0x0000000000000178, hp=0x000000010ef87618, hazard_index=0) at hazard-pointer.c:208
    frame #1: 0x000000010e0b28e1 mono-sgen`mono_jit_info_table_find_internal(domain=0x0000000000000000, addr=0x00007fff73bffa16, try_aot=1, allow_trampolines=1) at jit-info.c:304
    frame #2: 0x000000010dd6aa5f mono-sgen`mono_sigsegv_signal_handler_debug(_dummy=11, _info=0x000070000fb81c58, context=0x000070000fb81cc0, debug_fault_addr=0x000000010e28fb20) at mini-runtime.c:3540
    frame #3: 0x000000010dd6a8d3 mono-sgen`mono_sigsegv_signal_handler(_dummy=11, _info=0x000070000fb81c58, context=0x000070000fb81cc0) at mini-runtime.c:3612
    frame #4: 0x00007fff73dbdf5a libsystem_platform.dylib`_sigtramp + 26
    frame #5: 0x00007fff73bffa17 libsystem_kernel.dylib`__psynch_cvwait + 11
    frame #6: 0x00007fff73dc8589 libsystem_pthread.dylib`_pthread_cond_wait + 732
    frame #7: 0x000000010e28d76d mono-sgen`mono_os_cond_wait(cond=0x000000010e44c9d8, mutex=0x000000010e44c998) at mono-os-mutex.h:168
    frame #8: 0x000000010e28df4f mono-sgen`get_work(worker_index=0, work_context=0x000070000fb81ee0, do_idle=0x000070000fb81ed4, job=0x000070000fb81ec8) at sgen-thread-pool.c:165
    frame #9: 0x000000010e28d2cb mono-sgen`thread_func(data=0x0000000000000000) at sgen-thread-pool.c:196
    frame #10: 0x00007fff73dc7661 libsystem_pthread.dylib`_pthread_body + 340
    frame #11: 0x00007fff73dc750d libsystem_pthread.dylib`_pthread_start + 377
    frame #12: 0x00007fff73dc6bf9 libsystem_pthread.dylib`thread_start + 13
  thread #3, name = 'Finalizer'
    frame #0: 0x00007fff73bf6246 libsystem_kernel.dylib`semaphore_wait_trap + 10
    frame #1: 0x000000010e1d9c0a mono-sgen`mono_os_sem_wait(sem=0x000000010e43e400, flags=MONO_SEM_FLAGS_ALERTABLE) at mono-os-semaphore.h:84
    frame #2: 0x000000010e1d832d mono-sgen`mono_coop_sem_wait(sem=0x000000010e43e400, flags=MONO_SEM_FLAGS_ALERTABLE) at mono-coop-semaphore.h:41
    frame #3: 0x000000010e1da787 mono-sgen`finalizer_thread(unused=0x0000000000000000) at gc.c:920
    frame #4: 0x000000010e152919 mono-sgen`start_wrapper_internal(start_info=0x0000000000000000, stack_ptr=0x000070000fd85000) at threads.c:1178
    frame #5: 0x000000010e1525b6 mono-sgen`start_wrapper(data=0x00007faae4f31bd0) at threads.c:1238
    frame #6: 0x00007fff73dc7661 libsystem_pthread.dylib`_pthread_body + 340
    frame #7: 0x00007fff73dc750d libsystem_pthread.dylib`_pthread_start + 377
    frame #8: 0x00007fff73dc6bf9 libsystem_pthread.dylib`thread_start + 13
  thread #4
    frame #0: 0x00007fff73c0028a libsystem_kernel.dylib`__workq_kernreturn + 10
    frame #1: 0x00007fff73dc7009 libsystem_pthread.dylib`_pthread_wqthread + 1035
    frame #2: 0x00007fff73dc6be9 libsystem_pthread.dylib`start_wqthread + 13
(lldb)
```

* [crash] Add signal-safe mmap/file "allocator"

* [crash] Remove use of static memory from dumper

* [runtime] Reduce print buffer size for lockless printer.

Each frame that prints grows by the size of `buff`. In practice, clang
often fails to deduplicate some of these buffers, leading to stack
frames around 30 KB in size.

The problem surfaced as a series of hard-to-diagnose segfaults on stacks
that otherwise looked fine during the crash-reporting stress test.

This change shrinks the stacks to about a tenth of their previous size.
It doesn't seem to break the crash reporter messages anywhere (other
"max name length" fields may need to shrink), and it's not
mission-critical anywhere else.

* [crash] Use async-safe file memory for dumper internals

* [crash] Add memory barriers around merp configuration

* [crash] Use signal-safe printers on all native crash paths

* [crash] Move gdb/lldb lookup to startup

* [runtime] Move MOSTLY_ASYNC_SAFE_FPRINTF to eglib

* [runtime] Fix all callsites of MOSTLY_ASYNC_SAFE_PRINTF

* [runtime] Fix all callsites of MOSTLY_ASYNC_SAFE_FPRINTF

* [crash] Switch to signal-safe exit function

* [crash] Make dumper enum in managed

* [runtime] Add more information to managed frame

* [crash] Make async_safe printers inlined

* [runtime] Move basic pe_file functionality into proclib

* [crash] Fix handling of thread attributes

* [crash] Place hashes into the json file for all threads

* [crash] Fix 2018-08 CI, disable test
akoeplinger pushed a commit that referenced this pull request Apr 16, 2019
…13939)

The thread_stopped profiler event can be raised by the thread_info_key_dtor
TLS key destructor when the thread no longer has a domain set. In that
case, don't call process_profiler_event, since it cannot handle a thread
with null TLS values.

Addresses dotnet/android#2920 with the following stack trace:

```
* thread #20, name = 'Filter', stop reason = signal SIGSEGV: invalid address (fault address: 0xbc)
  * frame #0: libmonosgen-2.0.so`mono_class_vtable_checked(domain=0x0000000000000000, klass=0x0000007200230648, error=0x00000071e92f9178) at object.c:1890
    frame #1: libmonosgen-2.0.so`get_current_thread_ptr_for_domain(domain=0x0000000000000000, thread=0x00000071ebfec508) at threads.c:595
    frame #2: libmonosgen-2.0.so`mono_thread_current at threads.c:1939
    frame #3: libmonosgen-2.0.so`process_event(event=<unavailable>, arg=<unavailable>, il_offset=<unavailable>, ctx=<unavailable>, events=<unavailable>, suspend_policy=<unavailable>) at debugger-agent.c:3715
    frame #4: libmonosgen-2.0.so`thread_end [inlined] process_profiler_event(event=EVENT_KIND_THREAD_DEATH, arg=0x00000071ebfec508) at debugger-agent.c:3875
    frame #5: libmonosgen-2.0.so`thread_end(prof=<unavailable>, tid=<unavailable>) at debugger-agent.c:3991
    frame #6: libmonosgen-2.0.so`mono_profiler_raise_thread_stopped(tid=<unavailable>) at profiler-events.h:105
    frame #7: libmonosgen-2.0.so`mono_thread_detach_internal(thread=<unavailable>) at threads.c:979
    frame #8: libmonosgen-2.0.so`thread_detach(info=0x00000071e949a000) at threads.c:3215
    frame #9: libmonosgen-2.0.so`unregister_thread(arg=<unavailable>) at mono-threads.c:544
    frame #10: libmonosgen-2.0.so`thread_info_key_dtor(arg=0x00000071e949a000) at mono-threads.c:774
    frame #11: 0x00000072899c58e8 libc.so`pthread_key_clean_all() + 124
    frame #12: 0x00000072899c5374 libc.so`pthread_exit + 76
    frame #13: 0x00000072899c5264 libc.so`__pthread_start(void*) + 44
    frame #14: 0x000000728996617c libc.so`__start_thread + 72
```
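
A rough sketch of the kind of guard this change describes, using names from the stack trace above with stand-in declarations (the actual fix in debugger-agent.c/threads.c differs in detail): the profiler callback bails out when the detaching thread no longer has a domain.

```
#include <stddef.h>

/* Stand-ins for the real mono declarations; process_profiler_event and
 * EVENT_KIND_THREAD_DEATH appear in the stack trace above. */
typedef struct _MonoInternalThread MonoInternalThread;
typedef struct _MonoDomain MonoDomain;
typedef enum { EVENT_KIND_THREAD_DEATH } EventKind;

static MonoDomain *
current_thread_domain (void)
{
	return NULL; /* placeholder */
}

static void
process_profiler_event (EventKind event, void *arg)
{
	(void) event; (void) arg; /* placeholder */
}

static void
thread_end_guarded (MonoInternalThread *thread)
{
	/* When raised from the TLS key destructor the thread's domain is
	 * already gone; process_profiler_event cannot cope with null TLS
	 * values, so skip the event entirely. */
	if (!current_thread_domain ())
		return;

	process_profiler_event (EVENT_KIND_THREAD_DEATH, thread);
}
```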
akoeplinger pushed a commit that referenced this pull request Apr 30, 2019
## Summary

This change allows LLVM to identify unnecessary null checks. We mark GOT accesses (including those made by LDSTR) and new object calls as nonnull.

Since LLVM won't propagate this, we propagate from definition to usage, across casts, and from usage to definition (conditionally).

This enables it to remove a significant portion of the null checks in some benchmarks.

In real-world code, we can expect to see this remove the unnecessary null checks made by private methods. 

## Example

### C#:

Note that the constant strings are trivially non-null. This PR spots and propagates that. 

```

   static void ThrowIfNull(string s)
    {
        if (s == null)
            ThrowArgumentNullException();
    }

    static void ThrowArgumentNullException()
    {
        throw new ArgumentNullException();
    }

    [MethodImpl(MethodImplOptions.NoInlining)]
    static int Bench(string a, string b, string c, string d)
    {
        ThrowIfNull(a);
        ThrowIfNull(b);
        ThrowIfNull(c);
        ThrowIfNull(d);

        return a.Length + b.Length + c.Length + d.Length;
    }

    [Benchmark(Description = nameof(NoThrowInline))]
    public int Test() => Bench("a", "bc", "def", "ghij");

```

### Before:

```

define hidden monocc i32 @NoThrowInline_MainClass_Bench_string_string_string_string(i64*  %arg_a, i64* %arg_b, i64* %arg_c, i64*  %arg_d) #6 gc "mono" {
BB0:
  br label %INIT_BB1

INIT_BB1:                                         ; preds = %BB0
  br label %INITED_BB2

INITED_BB2:                                       ; preds = %INIT_BB1
  br label %BB3

BB3:                                              ; preds = %INITED_BB2
  br label %BB2

BB2:                                              ; preds = %BB3
  notail call monocc void @NoThrowInline_MainClass_ThrowIfNull_string(i64* %arg_a)
  notail call monocc void @NoThrowInline_MainClass_ThrowIfNull_string(i64* %arg_b)
  notail call monocc void @NoThrowInline_MainClass_ThrowIfNull_string(i64* %arg_c)
  notail call monocc void @NoThrowInline_MainClass_ThrowIfNull_string(i64* %arg_d)
  %0 = bitcast i64* %arg_a to i32*
  %1 = getelementptr i32, i32* %0, i32 4
  %t50 = load volatile i32, i32* %1
  %2 = bitcast i64* %arg_b to i32*
  %3 = getelementptr i32, i32* %2, i32 4
  %t52 = load volatile i32, i32* %3
  %t53 = add i32 %t50, %t52
  %4 = bitcast i64* %arg_c to i32*
  %5 = getelementptr i32, i32* %4, i32 4
  %t55 = load volatile i32, i32* %5
  %t56 = add i32 %t53, %t55
  %6 = bitcast i64* %arg_d to i32*
  %7 = getelementptr i32, i32* %6, i32 4
  %t58 = load volatile i32, i32* %7
  %t60 = add i32 %t56, %t58
  br label %BB1

BB1:                                              ; preds = %BB2
  ret i32 %t60
}

```

### After:

Note: the safepoint in the code below is added by the backend; it is not part of this change.

```

define hidden monocc i32 @NoThrowInline_MainClass_Bench_string_string_string_string(i64* nonnull %arg_a, i64* nonnull %arg_b, i64* nonnull %arg_c, i64* nonnull %arg_d) #6 gc "mono" {
BB0:
  %0 = getelementptr i64, i64* %arg_a, i64 2
  %1 = bitcast i64* %0 to i32*
  %t50 = load volatile i32, i32* %1, align 4
  %2 = getelementptr i64, i64* %arg_b, i64 2
  %3 = bitcast i64* %2 to i32*
  %t52 = load volatile i32, i32* %3, align 4
  %t53 = add i32 %t52, %t50
  %4 = getelementptr i64, i64* %arg_c, i64 2
  %5 = bitcast i64* %4 to i32*
  %t55 = load volatile i32, i32* %5, align 4
  %t56 = add i32 %t53, %t55
  %6 = getelementptr i64, i64* %arg_d, i64 2
  %7 = bitcast i64* %6 to i32*
  %t58 = load volatile i32, i32* %7, align 4
  %t60 = add i32 %t56, %t58
  %8 = load i64*, i64** getelementptr inbounds ([37 x i64*], [37 x i64*]* @mono_aot_NoThrowInline_llvm_got, i64 0, i64 7), align 8
  %9 = load i64, i64* %8, align 4
  %10 = icmp eq i64 %9, 0
  br i1 %10, label %gc.safepoint_poll.exit, label %gc.safepoint_poll.poll.i

gc.safepoint_poll.poll.i:                         ; preds = %BB0
  %11 = load void ()*, void ()** bitcast (i64** getelementptr inbounds ([37 x i64*], [37 x i64*]* @mono_aot_NoThrowInline_llvm_got, i64 0, i64 25) to void ()**), align 8
  call void %11() #8
  br label %gc.safepoint_poll.exit

gc.safepoint_poll.exit:                           ; preds = %BB0, %gc.safepoint_poll.poll.i
  ret i32 %t60
}

```

## Dependencies 

This depends on dotnet/linker#528.
akoeplinger pushed a commit that referenced this pull request May 27, 2019
…13938)

The thread_stopped profiler event can be raised by the thread_info_key_dtor
TLS key destructor when the thread no longer has a domain set. In that
case, don't call process_profiler_event, since it cannot handle a thread
with null TLS values.

Addresses dotnet/android#2920 with the following stack trace:

```
* thread #20, name = 'Filter', stop reason = signal SIGSEGV: invalid address (fault address: 0xbc)
  * frame #0: libmonosgen-2.0.so`mono_class_vtable_checked(domain=0x0000000000000000, klass=0x0000007200230648, error=0x00000071e92f9178) at object.c:1890
    frame #1: libmonosgen-2.0.so`get_current_thread_ptr_for_domain(domain=0x0000000000000000, thread=0x00000071ebfec508) at threads.c:595
    frame #2: libmonosgen-2.0.so`mono_thread_current at threads.c:1939
    frame #3: libmonosgen-2.0.so`process_event(event=<unavailable>, arg=<unavailable>, il_offset=<unavailable>, ctx=<unavailable>, events=<unavailable>, suspend_policy=<unavailable>) at debugger-agent.c:3715
    frame #4: libmonosgen-2.0.so`thread_end [inlined] process_profiler_event(event=EVENT_KIND_THREAD_DEATH, arg=0x00000071ebfec508) at debugger-agent.c:3875
    frame #5: libmonosgen-2.0.so`thread_end(prof=<unavailable>, tid=<unavailable>) at debugger-agent.c:3991
    frame #6: libmonosgen-2.0.so`mono_profiler_raise_thread_stopped(tid=<unavailable>) at profiler-events.h:105
    frame #7: libmonosgen-2.0.so`mono_thread_detach_internal(thread=<unavailable>) at threads.c:979
    frame #8: libmonosgen-2.0.so`thread_detach(info=0x00000071e949a000) at threads.c:3215
    frame #9: libmonosgen-2.0.so`unregister_thread(arg=<unavailable>) at mono-threads.c:544
    frame #10: libmonosgen-2.0.so`thread_info_key_dtor(arg=0x00000071e949a000) at mono-threads.c:774
    frame #11: 0x00000072899c58e8 libc.so`pthread_key_clean_all() + 124
    frame #12: 0x00000072899c5374 libc.so`pthread_exit + 76
    frame #13: 0x00000072899c5264 libc.so`__pthread_start(void*) + 44
    frame #14: 0x000000728996617c libc.so`__start_thread + 72
```
akoeplinger pushed a commit that referenced this pull request Jun 5, 2019
[2019-02] [arm64] Correct exception filter stack frame size

Fixes: mono#14170
Fixes: dotnet/android#3112

The `call_filter()` function generated by `mono_arch_get_call_filter()`
was overwriting a part of the previous stack frame because it was not
creating a large enough new stack frame for itself.  This had been
working by luck before the switch to building the runtime with Android
NDK r19.

I used a local build of [xamarin/xamarin-android/d16-1@87a80b][0] to
verify that this change resolved both of the issues linked above.  I
also confirmed that I was able to reintroduce the issues in my local
environment by removing the change and rebuilding.

After this change, the generated `call_filter()` function now starts by
reserving sufficient space on the stack to hold a 64-bit value at the
`ctx_offset` location.  The 64-bit `ctx` value is saved to the
`ctx_offset` location (offset 344 in the new stack frame) shortly
afterwards:

    stp	x29, x30, [sp,#-352]!
    mov	x29, sp
    str	x0, [x29,#344]

As expected, the invocation of `call_filter()` now no longer modifies
the top of the previous stack frame.

Top of the stack before `call_filter()`:

    (gdb) x/8x $sp
    0x7fcd2774a0:	0x7144d250	0x00000000	0x2724b700	0x00000000
    0x7fcd2774b0:	0x1ee19ff0	0x00000071	0x0cc00300	0x00000071

Top of the stack after `call_filter()` (unchanged):

    (gdb) x/8x $sp
    0x7fcd2774a0:	0x7144d250	0x00000000	0x2724b700	0x00000000
    0x7fcd2774b0:	0x1ee19ff0	0x00000071	0x0cc00300	0x00000071

<details>
<summary>Additional background information</summary>

### The original lucky, "good" behavior for [mono/mono@725ba2a built with Android NDK r14][1]

 1. The `resume` parameter for `mono_handle_exception_internal()` is
    held in register `w23`.

 2. That register is saved into the current stack frame at offset 20 by
    a `str` instruction:

           0x00000000000bc1bc <+3012>:	str	w23, [sp,#20]

 3. `handle_exception_first_pass()` invokes `call_filter()` via a `blr`
    instruction:

        2279							filtered = call_filter (ctx, ei->data.filter);
           0x00000000000bc60c <+4116>:	add	x9, x22, x24, lsl #6
           0x00000000000bc610 <+4120>:	ldr	x8, [x8,#3120]
           0x00000000000bc614 <+4124>:	ldr	x1, [x9,#112]
           0x00000000000bc618 <+4128>:	add	x0, sp, #0x110
           0x00000000000bc61c <+4132>:	blr	x8

    Before the `blr` instruction, the top of the stack looks like this:

        (gdb) x/8x $sp
        0x7fcd2774a0:	0x7144d250	0x00000000	0x0a8b31d8	0x00000071
        0x7fcd2774b0:	0x00000000	0x00000000	0x1ef51980	0x00000071

 4. `call_filter()` runs.  This function is generated by
    `mono_arch_get_call_filter()` and starts with:

           stp	x29, x30, [sp,#-336]!
           mov	x29, sp
           str	x0, [x29,#344]

    Note in particular how the first line subtracts 336 from `sp` to
    start a new stack frame, but the third line writes to a position
    that is 344 bytes beyond that, back within the *previous* stack
    frame.

 5. After the invocations of `call_filter()` and
    `handle_exception_first_pass()`, `w23` is restored from the stack:

           0x00000000000bc820 <+4648>:	ldr	w23, [sp,#20]

    At this step, the top of the stack looks like this:

        (gdb) x/8x $sp
        0x7fcd2774a0:	0x7144d250	0x00000000	0xcd2775b0	0x0000007f
        0x7fcd2774b0:	0x00000000	0x00000000	0x1ef51980	0x00000071

    Notice how `call_filter()` has overwritten bytes 8 through 15 of the
    stack frame.  In this original lucky scenario, this does not affect
    the value restored into register `w23` because that value starts at
    byte 20.

 6. `mono_handle_exception_internal()` tests the value of `w23` to
    decide how to set the `ji` local variable:

        2574			if (resume) {
           0x00000000000bb960 <+872>:	cbz	w23, 0xbb9c4 <mono_handle_exception_internal+972>

    Since `w23` is `0`, `mono_handle_exception_internal()` correctly
    continues into the `else` branch.

### The bad behavior for [mono/mono@725ba2a built with Android NDK r19][2] works slightly differently

 1. As before, the local `resume` parameter starts in register `w23`.
 2. This time, the register is saved into the stack frame at offset 12
    (instead of 20):

           0x00000000000bed7c <+3200>:	str	w23, [sp,#12]

 3. As before, `handle_exception_first_pass()` invokes `call_filter()`
    via a `blr` instruction.

    At this step, the top of the stack looks like this:

        (gdb) x/8x $sp
        0x7fcd2774a0:	0x7144d250	0x00000000	0x2724b700	0x00000000
        0x7fcd2774b0:	0x1ee19ff0	0x00000071	0x0cc00300	0x00000071

 4. `call_filter()` runs as before.  And the first few instructions of
    that function are the same as before:

        stp	x29, x30, [sp,#-336]!
        mov	x29, sp
        str	x0, [x29,#344]

 5. `w23` is again restored from the stack, but this time from offset 12:

           0x00000000000bf2c0 <+4548>:	ldr	w23, [sp,#12]

    The problem is that `call_filter()` has again overwritten bytes 8
    through 15 of the stack frame:

        (gdb) x/8x $sp
        0x7fcd2774a0:	0x7144d250	0x00000000	0xcd2775b0	0x0000007f
        0x7fcd2774b0:	0x1ee19ff0	0x00000071	0x0cc00300	0x00000071

    So after the `ldr` instruction, `w23` has an incorrect value of
    `0x7f` rather than the correct value `0`.

 6. As before, `mono_handle_exception_internal()` tests the value of `w23` to
    decide how to set the `ji` local variable:

        2574			if (resume) {
           0x00000000000be430 <+820>:	cbz	w23, 0xbe834 <mono_handle_exception_internal+1848>

    But this time because `w23` is *not* zero, execution continues
    *incorrectly* into the `if` branch rather than the `else` branch.
    The result is that `ji` is set to `0` rather than a valid
    `(MonoJitInfo *)` value.  This incorrect value for `ji` leads to a
    null pointer dereference, as seen in the bug reports.

[0]: https://github.com/xamarin/xamarin-android/tree/xamarin-android-9.3.0.22
[1]: https://jenkins.xamarin.com/view/Xamarin.Android/job/xamarin-android-freestyle/1432/Azure/
[2]: https://jenkins.xamarin.com/view/Xamarin.Android/job/xamarin-android-freestyle/1436/Azure/

</details>

Backport of mono#14757.

/cc @lambdageek @brendanzagaeski
akoeplinger pushed a commit that referenced this pull request Jun 6, 2019
* [arm64] Correct exception filter stack frame size

Fixes: mono#14170
Fixes: dotnet/android#3112

The `call_filter()` function generated by `mono_arch_get_call_filter()`
was overwriting a part of the previous stack frame because it was not
creating a large enough new stack frame for itself.  This had been
working by luck before the switch to building the runtime with Android
NDK r19.

I used a local build of [xamarin/xamarin-android/d16-1@87a80b][0] to
verify that this change resolved both of the issues linked above.  I
also confirmed that I was able to reintroduce the issues in my local
environment by removing the change and rebuilding.

After this change, the generated `call_filter()` function now starts by
reserving sufficient space on the stack to hold a 64-bit value at the
`ctx_offset` location.  The 64-bit `ctx` value is saved to the
`ctx_offset` location (offset 344 in the new stack frame) shortly
afterwards:

    stp	x29, x30, [sp,#-352]!
    mov	x29, sp
    str	x0, [x29,#344]

As expected, the invocation of `call_filter()` now no longer modifies
the top of the previous stack frame.

Top of the stack before `call_filter()`:

    (gdb) x/8x $sp
    0x7fcd2774a0:	0x7144d250	0x00000000	0x2724b700	0x00000000
    0x7fcd2774b0:	0x1ee19ff0	0x00000071	0x0cc00300	0x00000071

Top of the stack after `call_filter()` (unchanged):

    (gdb) x/8x $sp
    0x7fcd2774a0:	0x7144d250	0x00000000	0x2724b700	0x00000000
    0x7fcd2774b0:	0x1ee19ff0	0x00000071	0x0cc00300	0x00000071

Additional background information
=================================

The original lucky, "good" behavior worked as follows for
[mono/mono@725ba2a built with Android NDK r14][1]:

 1. The `resume` parameter for `mono_handle_exception_internal()` is
    held in register `w23`.

 2. That register is saved into the current stack frame at offset 20 by
    a `str` instruction:

           0x00000000000bc1bc <+3012>:	str	w23, [sp,#20]

 3. `handle_exception_first_pass()` invokes `call_filter()` via a `blr`
    instruction:

        2279							filtered = call_filter (ctx, ei->data.filter);
           0x00000000000bc60c <+4116>:	add	x9, x22, x24, lsl #6
           0x00000000000bc610 <+4120>:	ldr	x8, [x8,#3120]
           0x00000000000bc614 <+4124>:	ldr	x1, [x9,#112]
           0x00000000000bc618 <+4128>:	add	x0, sp, #0x110
           0x00000000000bc61c <+4132>:	blr	x8

    Before the `blr` instruction, the top of the stack looks like this:

        (gdb) x/8x $sp
        0x7fcd2774a0:	0x7144d250	0x00000000	0x0a8b31d8	0x00000071
        0x7fcd2774b0:	0x00000000	0x00000000	0x1ef51980	0x00000071

 4. `call_filter()` runs.  This function is generated by
    `mono_arch_get_call_filter()` and starts with:

           stp	x29, x30, [sp,#-336]!
           mov	x29, sp
           str	x0, [x29,#344]

    Note in particular how the first line subtracts 336 from `sp` to
    start a new stack frame, but the third line writes to a position
    that is 344 bytes beyond that, back within the *previous* stack
    frame.

 5. After the invocations of `call_filter()` and
    `handle_exception_first_pass()`, `w23` is restored from the stack:

           0x00000000000bc820 <+4648>:	ldr	w23, [sp,#20]

    At this step, the top of the stack looks like this:

        (gdb) x/8x $sp
        0x7fcd2774a0:	0x7144d250	0x00000000	0xcd2775b0	0x0000007f
        0x7fcd2774b0:	0x00000000	0x00000000	0x1ef51980	0x00000071

    Notice how `call_filter()` has overwritten bytes 8 through 15 of the
    stack frame.  In this original lucky scenario, this does not affect
    the value restored into register `w23` because that value starts at
    byte 20.

 6. `mono_handle_exception_internal()` tests the value of `w23` to
    decide how to set the `ji` local variable:

        2574			if (resume) {
           0x00000000000bb960 <+872>:	cbz	w23, 0xbb9c4 <mono_handle_exception_internal+972>

    Since `w23` is `0`, `mono_handle_exception_internal()` correctly
    continues into the `else` branch.

The bad behavior for
[mono/mono@725ba2a built with Android NDK r19][2]
works just slightly differently:

 1. As before, the local `resume` parameter starts in register `w23`.
 2. This time, the register is saved into the stack frame at offset 12
    (instead of 20):

           0x00000000000bed7c <+3200>:	str	w23, [sp,#12]

 3. As before, `handle_exception_first_pass()` invokes `call_filter()`
    via a `blr` instruction.

    At this step, the top of the stack looks like this:

        (gdb) x/8x $sp
        0x7fcd2774a0:	0x7144d250	0x00000000	0x2724b700	0x00000000
        0x7fcd2774b0:	0x1ee19ff0	0x00000071	0x0cc00300	0x00000071

 4. `call_filter()` runs as before.  And the first few instructions of
    that function are the same as before:

        stp	x29, x30, [sp,#-336]!
        mov	x29, sp
        str	x0, [x29,#344]

 5. `w23` is again restored from the stack, but this time from offset 12:

           0x00000000000bf2c0 <+4548>:	ldr	w23, [sp,#12]

    The problem is that `call_filter()` has again overwritten bytes 8
    through 15 of the stack frame:

        (gdb) x/8x $sp
        0x7fcd2774a0:	0x7144d250	0x00000000	0xcd2775b0	0x0000007f
        0x7fcd2774b0:	0x1ee19ff0	0x00000071	0x0cc00300	0x00000071

    So after the `ldr` instruction, `w23` has an incorrect value of
    `0x7f` rather than the correct value `0`.

 6. As before, `mono_handle_exception_internal()` tests the value of `w23` to
    decide how to set the `ji` local variable:

        2574			if (resume) {
           0x00000000000be430 <+820>:	cbz	w23, 0xbe834 <mono_handle_exception_internal+1848>

    But this time because `w23` is *not* zero, execution continues
    *incorrectly* into the `if` branch rather than the `else` branch.
    The result is that `ji` is set to `0` rather than a valid
    `(MonoJitInfo *)` value.  This incorrect value for `ji` leads to a
    null pointer dereference, as seen in the bug reports.

[0]: https://github.com/xamarin/xamarin-android/tree/xamarin-android-9.3.0.22
[1]: https://jenkins.xamarin.com/view/Xamarin.Android/job/xamarin-android-freestyle/1432/Azure/
[2]: https://jenkins.xamarin.com/view/Xamarin.Android/job/xamarin-android-freestyle/1436/Azure/

* reduce frame size
akoeplinger pushed a commit that referenced this pull request Jun 25, 2019
The thread_stopped profiler event can be raised by the thread_info_key_dtor
TLS key destructor when the thread no longer has a domain set. In that
case, don't call process_profiler_event, since it cannot handle a thread
with null TLS values.

Addresses dotnet/android#2920 with the following stack trace:

```
* thread #20, name = 'Filter', stop reason = signal SIGSEGV: invalid address (fault address: 0xbc)
  * frame #0: libmonosgen-2.0.so`mono_class_vtable_checked(domain=0x0000000000000000, klass=0x0000007200230648, error=0x00000071e92f9178) at object.c:1890
    frame #1: libmonosgen-2.0.so`get_current_thread_ptr_for_domain(domain=0x0000000000000000, thread=0x00000071ebfec508) at threads.c:595
    frame #2: libmonosgen-2.0.so`mono_thread_current at threads.c:1939
    frame #3: libmonosgen-2.0.so`process_event(event=<unavailable>, arg=<unavailable>, il_offset=<unavailable>, ctx=<unavailable>, events=<unavailable>, suspend_policy=<unavailable>) at debugger-agent.c:3715
    frame #4: libmonosgen-2.0.so`thread_end [inlined] process_profiler_event(event=EVENT_KIND_THREAD_DEATH, arg=0x00000071ebfec508) at debugger-agent.c:3875
    frame #5: libmonosgen-2.0.so`thread_end(prof=<unavailable>, tid=<unavailable>) at debugger-agent.c:3991
    frame #6: libmonosgen-2.0.so`mono_profiler_raise_thread_stopped(tid=<unavailable>) at profiler-events.h:105
    frame #7: libmonosgen-2.0.so`mono_thread_detach_internal(thread=<unavailable>) at threads.c:979
    frame #8: libmonosgen-2.0.so`thread_detach(info=0x00000071e949a000) at threads.c:3215
    frame #9: libmonosgen-2.0.so`unregister_thread(arg=<unavailable>) at mono-threads.c:544
    frame #10: libmonosgen-2.0.so`thread_info_key_dtor(arg=0x00000071e949a000) at mono-threads.c:774
    frame #11: 0x00000072899c58e8 libc.so`pthread_key_clean_all() + 124
    frame #12: 0x00000072899c5374 libc.so`pthread_exit + 76
    frame #13: 0x00000072899c5264 libc.so`__pthread_start(void*) + 44
    frame #14: 0x000000728996617c libc.so`__start_thread + 72
```
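
For reference, here is a minimal sketch of the kind of guard described above. The names `thread_end`, `process_profiler_event` and `EVENT_KIND_THREAD_DEATH` come from the stack trace; the exact shape of the check and the argument plumbing are assumptions, not the committed patch.

```c
/* Sketch only (debugger-agent.c style, not the actual commit): skip raising
 * the profiler event when this thread has already lost its domain, i.e. its
 * TLS was torn down by thread_info_key_dtor, because process_profiler_event()
 * cannot handle that state. */
static void
thread_end (MonoProfiler *prof, uintptr_t tid)
{
	if (!mono_domain_get ())
		return;   /* already detached: nothing safe to report */

	/* argument plumbing elided; the real callback passes the thread object
	 * it looked up for `tid` */
	process_profiler_event (EVENT_KIND_THREAD_DEATH, mono_thread_current ());
}
```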
akoeplinger pushed a commit that referenced this pull request Jul 2, 2019
[2019-06] [arm64] Correct exception filter stack frame size

Fixes: mono#14170
Fixes: dotnet/android#3112

The `call_filter()` function generated by `mono_arch_get_call_filter()`
was overwriting a part of the previous stack frame because it was not
creating a large enough new stack frame for itself.  This had been
working by luck before the switch to building the runtime with Android
NDK r19.

I used a local build of [xamarin/xamarin-android/d16-1@87a80b][0] to
verify that this change resolved both of the issues linked above.  I
also confirmed that I was able to reintroduce the issues in my local
environment by removing the change and rebuilding.

After this change, the generated `call_filter()` function now starts by
reserving sufficient space on the stack to hold a 64-bit value at the
`ctx_offset` location.  The 64-bit `ctx` value is saved to the
`ctx_offset` location (offset 344 in the new stack frame) shortly
afterwards:

    stp	x29, x30, [sp,#-352]!
    mov	x29, sp
    str	x0, [x29,#344]

As expected, the invocation of `call_filter()` now no longer modifies
the top of the previous stack frame.

Top of the stack before `call_filter()`:

    (gdb) x/8x $sp
    0x7fcd2774a0:	0x7144d250	0x00000000	0x2724b700	0x00000000
    0x7fcd2774b0:	0x1ee19ff0	0x00000071	0x0cc00300	0x00000071

Top of the stack after `call_filter()` (unchanged):

    (gdb) x/8x $sp
    0x7fcd2774a0:	0x7144d250	0x00000000	0x2724b700	0x00000000
    0x7fcd2774b0:	0x1ee19ff0	0x00000071	0x0cc00300	0x00000071
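
To make the numbers concrete, here is a small stand-alone C sketch (illustrative arithmetic only, not runtime code): with the old 336-byte frame, the 8-byte `ctx` spill at offset 344 lands on bytes 8 through 15 of the caller's frame, while the new 352-byte frame covers the spill and keeps `sp` 16-byte aligned.

```c
#include <stdio.h>

int main (void)
{
	int old_frame  = 336;  /* bytes the old prologue subtracted from sp    */
	int ctx_offset = 344;  /* offset of the `str x0, [x29,#344]` ctx spill */
	int new_frame  = 352;  /* size reserved after the fix                  */

	/* The 8-byte store covers [ctx_offset, ctx_offset + 7] relative to the
	 * new sp, but the old frame only covered [0, old_frame - 1]. */
	printf ("old frame: store hits caller-frame bytes %d..%d\n",
	        ctx_offset - old_frame, ctx_offset + 7 - old_frame);   /* 8..15 */

	/* With the enlarged frame the same store stays inside the frame and the
	 * 16-byte sp alignment required by AArch64 is preserved. */
	printf ("new frame: %d >= %d, %d %% 16 == %d\n",
	        new_frame, ctx_offset + 8, new_frame, new_frame % 16);
	return 0;
}
```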

<details>
<summary>Additional background information</summary>

### The original lucky, "good" behavior for [mono/mono@725ba2a built with Android NDK r14][1]

 1. The `resume` parameter for `mono_handle_exception_internal()` is
    held in register `w23`.

 2. That register is saved into the current stack frame at offset 20 by
    a `str` instruction:

           0x00000000000bc1bc <+3012>:	str	w23, [sp,#20]

 3. `handle_exception_first_pass()` invokes `call_filter()` via a `blr`
    instruction:

        2279							filtered = call_filter (ctx, ei->data.filter);
           0x00000000000bc60c <+4116>:	add	x9, x22, x24, lsl #6
           0x00000000000bc610 <+4120>:	ldr	x8, [x8,#3120]
           0x00000000000bc614 <+4124>:	ldr	x1, [x9,#112]
           0x00000000000bc618 <+4128>:	add	x0, sp, #0x110
           0x00000000000bc61c <+4132>:	blr	x8

    Before the `blr` instruction, the top of the stack looks like this:

        (gdb) x/8x $sp
        0x7fcd2774a0:	0x7144d250	0x00000000	0x0a8b31d8	0x00000071
        0x7fcd2774b0:	0x00000000	0x00000000	0x1ef51980	0x00000071

 4. `call_filter()` runs.  This function is generated by
    `mono_arch_get_call_filter()` and starts with:

           stp	x29, x30, [sp,#-336]!
           mov	x29, sp
           str	x0, [x29,#344]

    Note in particular how the first line subtracts 336 from `sp` to
    start a new stack frame, but the third line writes to a position
    that is 344 bytes beyond that, back within the *previous* stack
    frame.

 5. After the invocations of `call_filter()` and
    `handle_exception_first_pass()`, `w23` is restored from the stack:

           0x00000000000bc820 <+4648>:	ldr	w23, [sp,#20]

    At this step, the top of the stack looks like this:

        (gdb) x/8x $sp
        0x7fcd2774a0:	0x7144d250	0x00000000	0xcd2775b0	0x0000007f
        0x7fcd2774b0:	0x00000000	0x00000000	0x1ef51980	0x00000071

    Notice how `call_filter()` has overwritten bytes 8 through 15 of the
    stack frame.  In this original lucky scenario, this does not affect
    the value restored into register `w23` because that value starts at
    byte 20.

 6. `mono_handle_exception_internal()` tests the value of `w23` to
    decide how to set the `ji` local variable:

        2574			if (resume) {
           0x00000000000bb960 <+872>:	cbz	w23, 0xbb9c4 <mono_handle_exception_internal+972>

    Since `w23` is `0`, `mono_handle_exception_internal()` correctly
    continues into the `else` branch.

### The bad behavior for [mono/mono@725ba2a built with Android NDK r19][2] works slightly differently

 1. As before, the `resume` parameter is held in register `w23`.
 2. This time, the register is saved into the stack frame at offset 12
    (instead of 20):

           0x00000000000bed7c <+3200>:	str	w23, [sp,#12]

 3. As before, `handle_exception_first_pass()` invokes `call_filter()`
    via a `blr` instruction.

    At this step, the top of the stack looks like this:

        (gdb) x/8x $sp
        0x7fcd2774a0:	0x7144d250	0x00000000	0x2724b700	0x00000000
        0x7fcd2774b0:	0x1ee19ff0	0x00000071	0x0cc00300	0x00000071

 4. `call_filter()` runs as before.  And the first few instructions of
    that function are the same as before:

        stp	x29, x30, [sp,#-336]!
        mov	x29, sp
        str	x0, [x29,#344]

 5. `w23` is again restored from the stack, but this time from offset 12:

           0x00000000000bf2c0 <+4548>:	ldr	w23, [sp,#12]

    The problem is that `call_filter()` has again overwritten bytes 8
    through 15 of the stack frame:

        (gdb) x/8x $sp
        0x7fcd2774a0:	0x7144d250	0x00000000	0xcd2775b0	0x0000007f
        0x7fcd2774b0:	0x1ee19ff0	0x00000071	0x0cc00300	0x00000071

    So after the `ldr` instruction, `w23` has an incorrect value of
    `0x7f` rather than the correct value `0`.

 6. As before, `mono_handle_exception_internal()` tests the value of `w23` to
    decide how to set the `ji` local variable:

        2574			if (resume) {
           0x00000000000be430 <+820>:	cbz	w23, 0xbe834 <mono_handle_exception_internal+1848>

    But this time because `w23` is *not* zero, execution continues
    *incorrectly* into the `if` branch rather than the `else` branch.
    The result is that `ji` is set to `0` rather than a valid
    `(MonoJitInfo *)` value.  This incorrect value for `ji` leads to a
    null pointer dereference, as seen in the bug reports.

[0]: https://github.com/xamarin/xamarin-android/tree/xamarin-android-9.3.0.22
[1]: https://jenkins.xamarin.com/view/Xamarin.Android/job/xamarin-android-freestyle/1432/Azure/
[2]: https://jenkins.xamarin.com/view/Xamarin.Android/job/xamarin-android-freestyle/1436/Azure/

</details>

Backport of mono#14757.

/cc @lambdageek @brendanzagaeski
akoeplinger pushed a commit that referenced this pull request Jul 2, 2019
The thread_stopped profiler event can be raised by the thread_info_key_dtor TLS
key destructor when the thread no longer has a domain set.  In that case, don't
call process_profiler_event, since it cannot handle a thread with null TLS
values.

Addresses dotnet/android#2920
with the following stack trace

```
* thread #20, name = 'Filter', stop reason = signal SIGSEGV: invalid address (fault address: 0xbc)
  * frame #0: libmonosgen-2.0.so`mono_class_vtable_checked(domain=0x0000000000000000, klass=0x0000007200230648, error=0x00000071e92f9178) at object.c:1890
    frame #1: libmonosgen-2.0.so`get_current_thread_ptr_for_domain(domain=0x0000000000000000, thread=0x00000071ebfec508) at threads.c:595
    frame #2: libmonosgen-2.0.so`mono_thread_current at threads.c:1939
    frame #3: libmonosgen-2.0.so`process_event(event=<unavailable>, arg=<unavailable>, il_offset=<unavailable>, ctx=<unavailable>, events=<unavailable>, suspend_policy=<unavailable>) at debugger-agent.c:3715
    frame #4: libmonosgen-2.0.so`thread_end [inlined] process_profiler_event(event=EVENT_KIND_THREAD_DEATH, arg=0x00000071ebfec508) at debugger-agent.c:3875
    frame #5: libmonosgen-2.0.so`thread_end(prof=<unavailable>, tid=<unavailable>) at debugger-agent.c:3991
    frame #6: libmonosgen-2.0.so`mono_profiler_raise_thread_stopped(tid=<unavailable>) at profiler-events.h:105
    frame #7: libmonosgen-2.0.so`mono_thread_detach_internal(thread=<unavailable>) at threads.c:979
    frame #8: libmonosgen-2.0.so`thread_detach(info=0x00000071e949a000) at threads.c:3215
    frame #9: libmonosgen-2.0.so`unregister_thread(arg=<unavailable>) at mono-threads.c:544
    frame #10: libmonosgen-2.0.so`thread_info_key_dtor(arg=0x00000071e949a000) at mono-threads.c:774
    frame #11: 0x00000072899c58e8 libc.so`pthread_key_clean_all() + 124
    frame #12: 0x00000072899c5374 libc.so`pthread_exit + 76
    frame #13: 0x00000072899c5264 libc.so`__pthread_start(void*) + 44
    frame #14: 0x000000728996617c libc.so`__start_thread + 72
```
akoeplinger pushed a commit that referenced this pull request Sep 25, 2019

* [threads] clear small_id_key TLS when unregistering a thread

Fixes
```
* thread #12, name = 'tid_a507', stop reason = EXC_BREAKPOINT (code=1, subcode=0x1be66144)
  * frame #0: 0x1be66144 libsystem_c.dylib`__abort + 184
    frame #1: 0x1be6608c libsystem_c.dylib`abort + 152
    frame #2: 0x003e1fa0 monotouchtest`log_callback(log_domain=0x00000000, log_level="error", message="* Assertion at ../../../../../mono/utils/hazard-pointer.c:158, condition `mono_bitset_test_fast (small_id_table, id)' not met\n", fatal=4, user_data=0x00000000) at runtime.m:1251:3
    frame #3: 0x003abf44 monotouchtest`monoeg_g_logv_nofree(log_domain=0x00000000, log_level=G_LOG_LEVEL_ERROR, format=<unavailable>, args=<unavailable>) at goutput.c:149:2 [opt]
    frame #4: 0x003abfb4 monotouchtest`monoeg_assertion_message(format=<unavailable>) at goutput.c:184:22 [opt]
    frame #5: 0x003904dc monotouchtest`mono_thread_small_id_free(id=<unavailable>) at hazard-pointer.c:0:2 [opt]
    frame #6: 0x003a0a74 monotouchtest`unregister_thread(arg=0x15c88400) at mono-threads.c:588:2 [opt]
    frame #7: 0x00336110 monotouchtest`mono_thread_detach_if_exiting at threads.c:1571:4 [opt]
    frame #8: 0x003e7a14 monotouchtest`::xamarin_release_trampoline(self=0x166452f0, sel="release") at trampolines.m:644:3
    frame #9: 0x001cdc40 monotouchtest`::-[__Xamarin_NSTimerActionDispatcher release](self=0x166452f0, _cmd="release") at registrar.m:83445:3
    frame #10: 0x1ce2ae68 Foundation`_timerRelease + 80
    frame #11: 0x1c31b56c CoreFoundation`CFRunLoopTimerInvalidate + 612
    frame #12: 0x1c31a554 CoreFoundation`__CFRunLoopTimerDeallocate + 32
    frame #13: 0x1c31dde4 CoreFoundation`_CFRelease + 220
    frame #14: 0x1c2be6e8 CoreFoundation`__CFArrayReleaseValues + 500
    frame #15: 0x1c2be4c4 CoreFoundation`CFArrayRemoveAllValues + 104
    frame #16: 0x1c31ff64 CoreFoundation`__CFSetApplyFunction_block_invoke + 24
    frame #17: 0x1c3b2e18 CoreFoundation`CFBasicHashApply + 116
    frame #18: 0x1c31ff10 CoreFoundation`CFSetApplyFunction + 160
    frame #19: 0x1c3152cc CoreFoundation`__CFRunLoopDeallocate + 204
    frame #20: 0x1c31dde4 CoreFoundation`_CFRelease + 220
    frame #21: 0x1c304a80 CoreFoundation`__CFTSDFinalize + 144
    frame #22: 0x1bfa324c libsystem_pthread.dylib`_pthread_tsd_cleanup + 644
    frame #23: 0x1bf9cc08 libsystem_pthread.dylib`_pthread_exit + 80
    frame #24: 0x1bf9af24 libsystem_pthread.dylib`pthread_exit + 36
    frame #25: 0x0039df84 monotouchtest`mono_threads_platform_exit(exit_code=<unavailable>) at mono-threads-posix.c:145:2 [opt]
    frame #26: 0x0033bb84 monotouchtest`start_wrapper(data=0x1609e1c0) at threads.c:1296:2 [opt]
    frame #27: 0x1bf9b914 libsystem_pthread.dylib`_pthread_body + 128
    frame #28: 0x1bf9b874 libsystem_pthread.dylib`_pthread_start + 44
    frame #29: 0x1bfa3b94 libsystem_pthread.dylib`thread_start + 4
```

* Update mono/utils/mono-threads.c

Co-Authored-By: Aleksey Kliger (λgeek) <[email protected]>
akoeplinger pushed a commit that referenced this pull request Sep 25, 2019
Avoids this warning printed in the VSMac application output when debugging:
```
2019-09-13 17:47:27.315540+0200 aWatchOSExtension[31768:24665480] ../../../../../mono/eglib/giconv.c:1029: assertion 'str != NULL' failed
```

With this trace:
```
(lldb) mbt
* thread #2
  * frame #0: 0x00583453 aWatchOSExtension`break_on_me at giconv.c:1023:2
    frame #1: 0x005834a6 aWatchOSExtension`monoeg_g_utf16_to_utf8(str=0x00000000, len=0, items_read=0x00000000, items_written=0x00000000, err=0x00000000) at giconv.c:1036:3
    frame #2: 0x0046d871 aWatchOSExtension`mono_thread_get_name_utf8(thread=0x24d08120) at threads.c:1812:16
    frame #3: 0x002cb7ef aWatchOSExtension`thread_commands(command=2, p="�>\x02\x18\x10", end="�>\x02\x18\x10", buf=0xb0ae0d20) at debugger-agent.c:8818:13
    frame #4: 0x002c7170 aWatchOSExtension`debugger_thread(arg=0x00000000) at debugger-agent.c:9932:10
    frame #5: 0x0047652b aWatchOSExtension`start_wrapper_internal(start_info=0x00000000, stack_ptr=0xb0ae1000) at threads.c:1221:3
    frame #6: 0x0047616c aWatchOSExtension`start_wrapper(data=0x7b63a050) at threads.c:1294:8
    frame #7: 0x077365f8 libsystem_pthread.dylib`_pthread_body + 137
    frame #8: 0x077397f7 libsystem_pthread.dylib`_pthread_start + 78
    frame #9: 0x077357ce libsystem_pthread.dylib`thread_start + 34
```

Introduced by b5c0c83 and seemingly fixed by accident in mono@7b64f1c#diff-f1a9287559d3aa620b058619f47f00c9R1832

Instead of backporting 7b64f1c to 2019-08, I'm proposing this PR for the release branch.
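
A minimal sketch of the kind of NULL check implied above (the eglib function name and signature come from the trace; the exact patch shape is an assumption):

```c
#include <glib.h>   /* in-tree Mono builds use eglib's g_utf16_to_utf8 */

/* Sketch: never hand a NULL UTF-16 thread name to g_utf16_to_utf8 (), which
 * asserts `str != NULL`; return NULL for unnamed threads instead. */
static char *
thread_name_utf8 (const gunichar2 *name16, glong len)
{
	if (!name16)
		return NULL;
	return g_utf16_to_utf8 (name16, len, NULL, NULL, NULL);
}
```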
akoeplinger pushed a commit that referenced this pull request Sep 25, 2019
mono#17015)

[2019-08] [threads] clear small_id_key TLS when unregistering a thread

Fixes
```
* thread #12, name = 'tid_a507', stop reason = EXC_BREAKPOINT (code=1, subcode=0x1be66144)
  * frame #0: 0x1be66144 libsystem_c.dylib`__abort + 184
    frame #1: 0x1be6608c libsystem_c.dylib`abort + 152
    frame #2: 0x003e1fa0 monotouchtest`log_callback(log_domain=0x00000000, log_level="error", message="* Assertion at ../../../../../mono/utils/hazard-pointer.c:158, condition `mono_bitset_test_fast (small_id_table, id)' not met\n", fatal=4, user_data=0x00000000) at runtime.m:1251:3
    frame #3: 0x003abf44 monotouchtest`monoeg_g_logv_nofree(log_domain=0x00000000, log_level=G_LOG_LEVEL_ERROR, format=<unavailable>, args=<unavailable>) at goutput.c:149:2 [opt]
    frame #4: 0x003abfb4 monotouchtest`monoeg_assertion_message(format=<unavailable>) at goutput.c:184:22 [opt]
    frame #5: 0x003904dc monotouchtest`mono_thread_small_id_free(id=<unavailable>) at hazard-pointer.c:0:2 [opt]
    frame #6: 0x003a0a74 monotouchtest`unregister_thread(arg=0x15c88400) at mono-threads.c:588:2 [opt]
    frame #7: 0x00336110 monotouchtest`mono_thread_detach_if_exiting at threads.c:1571:4 [opt]
    frame #8: 0x003e7a14 monotouchtest`::xamarin_release_trampoline(self=0x166452f0, sel="release") at trampolines.m:644:3
    frame #9: 0x001cdc40 monotouchtest`::-[__Xamarin_NSTimerActionDispatcher release](self=0x166452f0, _cmd="release") at registrar.m:83445:3
    frame #10: 0x1ce2ae68 Foundation`_timerRelease + 80
    frame #11: 0x1c31b56c CoreFoundation`CFRunLoopTimerInvalidate + 612
    frame #12: 0x1c31a554 CoreFoundation`__CFRunLoopTimerDeallocate + 32
    frame #13: 0x1c31dde4 CoreFoundation`_CFRelease + 220
    frame #14: 0x1c2be6e8 CoreFoundation`__CFArrayReleaseValues + 500
    frame #15: 0x1c2be4c4 CoreFoundation`CFArrayRemoveAllValues + 104
    frame #16: 0x1c31ff64 CoreFoundation`__CFSetApplyFunction_block_invoke + 24
    frame #17: 0x1c3b2e18 CoreFoundation`CFBasicHashApply + 116
    frame #18: 0x1c31ff10 CoreFoundation`CFSetApplyFunction + 160
    frame #19: 0x1c3152cc CoreFoundation`__CFRunLoopDeallocate + 204
    frame #20: 0x1c31dde4 CoreFoundation`_CFRelease + 220
    frame #21: 0x1c304a80 CoreFoundation`__CFTSDFinalize + 144
    frame #22: 0x1bfa324c libsystem_pthread.dylib`_pthread_tsd_cleanup + 644
    frame #23: 0x1bf9cc08 libsystem_pthread.dylib`_pthread_exit + 80
    frame #24: 0x1bf9af24 libsystem_pthread.dylib`pthread_exit + 36
    frame #25: 0x0039df84 monotouchtest`mono_threads_platform_exit(exit_code=<unavailable>) at mono-threads-posix.c:145:2 [opt]
    frame #26: 0x0033bb84 monotouchtest`start_wrapper(data=0x1609e1c0) at threads.c:1296:2 [opt]
    frame #27: 0x1bf9b914 libsystem_pthread.dylib`_pthread_body + 128
    frame #28: 0x1bf9b874 libsystem_pthread.dylib`_pthread_start + 44
    frame #29: 0x1bfa3b94 libsystem_pthread.dylib`thread_start + 4
```


Debugged by @lambdageek. Contributes to mono#10641

Edit: C/p into PR description so it's in the commit message:

We think what happens is:

1. `start_wrapper` runs `unregister_thread` when the thread is exiting.  At that point the `small_id` for the thread is cleared from the hazard pointer bitset by `mono_thread_small_id_free (small_id)`

2. Some other TLS destructor runs and attaches the thread again: this runs `mono_thread_info_register_small_id`, which first calls `mono_thread_info_get_small_id`, which still finds the stale small id in the `small_id_key` TLS key. The new `MonoThreadInfo` therefore gets the same `small_id` as the previously destroyed `MonoThreadInfo`, but the hazard pointer bitset is **not** updated.

3. This other TLS destructor then calls `mono_thread_detach_if_exiting`, which calls `unregister_thread` again.

4. `unregister_thread` calls `mono_thread_small_id_free (small_id)` a second time, which asserts because that id was already cleared from the hazard pointer bitset. A minimal sketch of the fix follows below.
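
A minimal sketch of the fix described in the title, using the `small_id_key` TLS key and the helpers visible in the trace (the exact committed change may differ):

```c
/* Sketch: when unregister_thread () gives the small id back to the hazard
 * pointer table, also clear the small_id_key TLS slot so a later re-attach
 * from another TLS destructor allocates a fresh id instead of reusing the
 * freed one and asserting on the second free. */
static void
free_small_id_and_clear_tls (MonoThreadInfo *info)
{
	mono_thread_small_id_free (info->small_id);
	mono_native_tls_set_value (small_id_key, NULL);
}
```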



Backport of mono#16973.

/cc @lewurm
akoeplinger pushed a commit that referenced this pull request Oct 8, 2019
[lldb] use mono_method_full_name helper

Instead of:

```
(lldb) mbt 20
* thread #11
  * frame #0: 0x1bf28f28 libsystem_kernel.dylib`__pthread_kill + 8
    frame #1: 0x1bf9caac libsystem_pthread.dylib`pthread_kill + 300
    frame #2: 0x1be66080 libsystem_c.dylib`abort + 140
    frame #3: 0x024bdfa0 monotouchtest`log_callback(log_domain=0x00000000, log_level="error", message="../../../../../mono/metadata/object.c:1905: Expected GC Unsafe mode but was in STATE_BLOCKING state", fatal=4, user_data=0x00000000) at runtime.m:1251:3
    frame #4: 0x02487f44 monotouchtest`monoeg_g_logv_nofree(log_domain=0x00000000, log_level=G_LOG_LEVEL_ERROR, format=<unavailable>, args=<unavailable>) at goutput.c:149:2 [opt]
    frame #5: 0x02487ee0 monotouchtest`monoeg_g_logv(log_domain=<unavailable>, log_level=<unavailable>, format=<unavailable>, args=<unavailable>) at goutput.c:156:10 [opt]
    frame #6: 0x02487f74 monotouchtest`monoeg_g_log(log_domain=<unavailable>, log_level=<unavailable>, format=<unavailable>) at goutput.c:165:2 [opt]
    frame #7: 0x024694c4 monotouchtest`assert_gc_unsafe_mode(file="../../../../../mono/metadata/object.c", lineno=1905) at checked-build.c:396:3 [opt]
    frame #8: 0x023d2f64 monotouchtest`mono_class_vtable_checked(domain=0x16d64800, klass=0x173c1b98, error=0x194b0ee8) at object.c:1905:2 [opt]
    frame #9: 0x024130a8 monotouchtest`get_current_thread_ptr_for_domain(domain=0x16d64800, thread=0x02db45d0) at threads.c:635:2 [opt]
    frame #10: 0x024119a8 monotouchtest`mono_thread_current at threads.c:2026:23 [opt]
    frame #11: 0x02416998 monotouchtest`mono_thread_interruption_checkpoint_request(bypass_abort_protection=0) at threads.c:5101:35 [opt]
    Messaging::void_objc_msgSendSuper @ 388435850 "calli.nat.fast" || frame #12: 0x024d5de8 monotouchtest`interp_exec_method_full(frame=0x194b11c0, context=<unavailable>, clause_args=<unavailable>, error=<unavailable>) at interp.c:3230:4 [opt]
    ObjCExceptionTest::InvokeManagedExceptionThrower @ 388435752 "vcall" || frame #13: 0x024d64dc monotouchtest`interp_exec_method_full(frame=0x194b1380, context=<unavailable>, clause_args=<unavailable>, error=<unavailable>) at interp.c:3400:4 [opt]
    ExceptionsTest::ManagedExceptionPassthrough @ 388502134 "vcallvirt.fast" || frame #14: 0x024d61c0 monotouchtest`interp_exec_method_full(frame=0x194b14e0, context=<unavailable>, clause_args=<unavailable>, error=<unavailable>) at interp.c:3325:4 [opt]
    Object::runtime_invoke_direct_void__this__ @ 388462550 "vcall" || frame #15: 0x024d64dc monotouchtest`interp_exec_method_full(frame=0x194b1598, context=<unavailable>, clause_args=<unavailable>, error=<unavailable>) at interp.c:3400:4 [opt]
    frame #16: 0x024d46ec monotouchtest`interp_runtime_invoke(method=<unavailable>, obj=0x0302c5f0, params=0x00000000, exc=0x194b166c, error=0x194b18c8) at interp.c:1766:2 [opt]
    frame #17: 0x0232b39c monotouchtest`mono_jit_runtime_invoke(method=0x174fda58, obj=<unavailable>, params=0x00000000, exc=<unavailable>, error=0x194b18c8) at mini-runtime.c:3170:12 [opt]
    frame #18: 0x023d5cc8 monotouchtest`do_runtime_invoke(method=0x174fda58, obj=0x0302c5f0, params=0x00000000, exc=0x00000000, error=0x194b18c8) at object.c:3017:11 [opt]
    frame #19: 0x023d26a0 monotouchtest`mono_runtime_invoke_checked(method=<unavailable>, obj=<unavailable>, params=<unavailable>, error=<unavailable>) at class-getters.h:24:1 [opt] [artificial]
```

It prints now:

```
(lldb) mbt 20
* thread #11
  * frame #0: 0x1bf28f28 libsystem_kernel.dylib`__pthread_kill + 8
    frame #1: 0x1bf9caac libsystem_pthread.dylib`pthread_kill + 300
    frame #2: 0x1be66080 libsystem_c.dylib`abort + 140
    frame #3: 0x024bdfa0 monotouchtest`log_callback(log_domain=0x00000000, log_level="error", message="../../../../../mono/metadata/object.c:1905: Expected GC Unsafe mode but was in STATE_BLOCKING state", fatal=4, user_data=0x00000000) at runtime.m:1251:3
    frame #4: 0x02487f44 monotouchtest`monoeg_g_logv_nofree(log_domain=0x00000000, log_level=G_LOG_LEVEL_ERROR, format=<unavailable>, args=<unavailable>) at goutput.c:149:2 [opt]
    frame #5: 0x02487ee0 monotouchtest`monoeg_g_logv(log_domain=<unavailable>, log_level=<unavailable>, format=<unavailable>, args=<unavailable>) at goutput.c:156:10 [opt]
    frame #6: 0x02487f74 monotouchtest`monoeg_g_log(log_domain=<unavailable>, log_level=<unavailable>, format=<unavailable>) at goutput.c:165:2 [opt]
    frame #7: 0x024694c4 monotouchtest`assert_gc_unsafe_mode(file="../../../../../mono/metadata/object.c", lineno=1905) at checked-build.c:396:3 [opt]
    frame #8: 0x023d2f64 monotouchtest`mono_class_vtable_checked(domain=0x16d64800, klass=0x173c1b98, error=0x194b0ee8) at object.c:1905:2 [opt]
    frame #9: 0x024130a8 monotouchtest`get_current_thread_ptr_for_domain(domain=0x16d64800, thread=0x02db45d0) at threads.c:635:2 [opt]
    frame #10: 0x024119a8 monotouchtest`mono_thread_current at threads.c:2026:23 [opt]
    frame #11: 0x02416998 monotouchtest`mono_thread_interruption_checkpoint_request(bypass_abort_protection=0) at threads.c:5101:35 [opt]
    (wrapper managed-to-native) ApiDefinition.Messaging:void_objc_msgSendSuper (intptr,intptr) @ 388435850 "calli.nat.fast" || frame #12: 0x024d5de8 monotouchtest`interp_exec_method_full(frame=0x194b11c0, context=<unavailable>, clause_args=<unavailable>, error=<unavailable>) at interp.c:3230:4 [opt]
    Bindings.Test.ObjCExceptionTest:InvokeManagedExceptionThrower () @ 388435752 "vcall" || frame #13: 0x024d64dc monotouchtest`interp_exec_method_full(frame=0x194b1380, context=<unavailable>, clause_args=<unavailable>, error=<unavailable>) at interp.c:3400:4 [opt]
    MonoTouchFixtures.ObjCRuntime.ExceptionsTest:ManagedExceptionPassthrough () @ 388502134 "vcallvirt.fast" || frame #14: 0x024d61c0 monotouchtest`interp_exec_method_full(frame=0x194b14e0, context=<unavailable>, clause_args=<unavailable>, error=<unavailable>) at interp.c:3325:4 [opt]
    (wrapper runtime-invoke) object:runtime_invoke_direct_void__this__ (object,intptr,intptr,intptr) @ 388462550 "vcall" || frame #15: 0x024d64dc monotouchtest`interp_exec_method_full(frame=0x194b1598, context=<unavailable>, clause_args=<unavailable>, error=<unavailable>) at interp.c:3400:4 [opt]
    frame #16: 0x024d46ec monotouchtest`interp_runtime_invoke(method=<unavailable>, obj=0x0302c5f0, params=0x00000000, exc=0x194b166c, error=0x194b18c8) at interp.c:1766:2 [opt]
    frame #17: 0x0232b39c monotouchtest`mono_jit_runtime_invoke(method=0x174fda58, obj=<unavailable>, params=0x00000000, exc=<unavailable>, error=0x194b18c8) at mini-runtime.c:3170:12 [opt]
    frame #18: 0x023d5cc8 monotouchtest`do_runtime_invoke(method=0x174fda58, obj=0x0302c5f0, params=0x00000000, exc=0x00000000, error=0x194b18c8) at object.c:3017:11 [opt]
    frame #19: 0x023d26a0 monotouchtest`mono_runtime_invoke_checked(method=<unavailable>, obj=<unavailable>, params=<unavailable>, error=<unavailable>) at class-getters.h:24:1 [opt] [artificial]
```
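
For context, `mono_method_full_name ()` is an existing helper declared in `mono/metadata/debug-helpers.h`; the sketch below shows the call shape the lldb script relies on to obtain the pretty-printed names in the output above (the wrapper function is illustrative, not part of the script).

```c
#include <stdio.h>
#include <glib.h>
#include <mono/metadata/debug-helpers.h>

/* Sketch: given a MonoMethod *, ask the runtime for the same
 * "(wrapper ...) Namespace.Type:Method (sig)" string shown above.
 * The returned string is g_malloc'd and owned by the caller. */
static void
print_method (MonoMethod *method)
{
	char *name = mono_method_full_name (method, /* signature */ TRUE);
	printf ("%s\n", name);
	g_free (name);
}
```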



akoeplinger pushed a commit that referenced this pull request Dec 10, 2019
When the debugger tries to suspend all the VM threads, it can happen with cooperative suspend that it tries to suspend a thread that was previously attached but then did a "light" detach (which only unsets the domain). With the domain set to `NULL`, looking up a JitInfo for a given `ip` will result in a crash, and it isn't even necessarily needed for the purpose of suspending the thread.

More details: Consider the following:
```
(lldb) c
error: Process is running.  Use 'process interrupt' to pause execution.
Process 12832 stopped
* thread #9, name = 'tid_a31f', queue = 'NSOperationQueue 0x8069b670 (QOS: UTILITY)', stop reason = breakpoint 1.1
    frame #0: 0x00222870 WatchWCSessionAppWatchOSExtension`debugger_interrupt_critical(info=0x81365400, user_data=0xb04dc718) at debugger-agent.c:2716:6
   2713         MonoDomain *domain = (MonoDomain *) mono_thread_info_get_suspend_state (info)->unwind_data [MONO_UNWIND_DATA_DOMAIN];
   2714         if (!domain) {
   2715                 /* not attached */
-> 2716                 ji = NULL;
   2717         } else {
   2718                 ji = mono_jit_info_table_find_internal ( domain, MONO_CONTEXT_GET_IP (&mono_thread_info_get_suspend_state (info)->ctx), TRUE, TRUE);
   2719         }
Target 0: (WatchWCSessionAppWatchOSExtension) stopped.
(lldb) bt
* thread #9, name = 'tid_a31f', queue = 'NSOperationQueue 0x8069b670 (QOS: UTILITY)', stop reason = breakpoint 1.1
  * frame #0: 0x00222870 WatchWCSessionAppWatchOSExtension`debugger_interrupt_critical(info=0x81365400, user_data=0xb04dc718) at debugger-agent.c:2716:6
    frame #1: 0x004e177b WatchWCSessionAppWatchOSExtension`mono_thread_info_safe_suspend_and_run(id=0xb0767000, interrupt_kernel=0, callback=(WatchWCSessionAppWatchOSExtension`debugger_interrupt_critical at debugger-agent.c:2708), user_data=0xb04dc718) at mono-threads.c:1358:19
    frame #2: 0x00222799 WatchWCSessionAppWatchOSExtension`notify_thread(key=0x03994508, value=0x81378c00, user_data=0x00000000) at debugger-agent.c:2747:2
    frame #3: 0x00355bd8 WatchWCSessionAppWatchOSExtension`mono_g_hash_table_foreach(hash=0x8007b4e0, func=(WatchWCSessionAppWatchOSExtension`notify_thread at debugger-agent.c:2733), user_data=0x00000000) at mono-hash.c:310:4
    frame #4: 0x0021e955 WatchWCSessionAppWatchOSExtension`suspend_vm at debugger-agent.c:2844:3
    frame #5: 0x00225ff0 WatchWCSessionAppWatchOSExtension`process_event(event=EVENT_KIND_THREAD_START, arg=0x039945d0, il_offset=0, ctx=0x00000000, events=0x805b28e0, suspend_policy=2) at debugger-agent.c:4012:3
    frame #6: 0x00227c7c WatchWCSessionAppWatchOSExtension`process_profiler_event(event=EVENT_KIND_THREAD_START, arg=0x039945d0) at debugger-agent.c:4072:2
    frame #7: 0x0021b174 WatchWCSessionAppWatchOSExtension`thread_startup(prof=0x00000000, tid=2957889536) at debugger-agent.c:4149:2
    frame #8: 0x0037912d WatchWCSessionAppWatchOSExtension`mono_profiler_raise_thread_started(tid=2957889536) at profiler-events.h:103:1
    frame #9: 0x003d53da WatchWCSessionAppWatchOSExtension`fire_attach_profiler_events(tid=0xb04dd000) at threads.c:1120:2
    frame #10: 0x003d4d83 WatchWCSessionAppWatchOSExtension`mono_thread_attach(domain=0x801750a0) at threads.c:1547:2
    frame #11: 0x003df1a1 WatchWCSessionAppWatchOSExtension`mono_threads_attach_coop_internal(domain=0x801750a0, cookie=0xb04dcc0c, stackdata=0xb04dcba8) at threads.c:6008:3
    frame #12: 0x003df287 WatchWCSessionAppWatchOSExtension`mono_threads_attach_coop(domain=0x00000000, dummy=0xb04dcc0c) at threads.c:6045:9
    frame #13: 0x005034b8 WatchWCSessionAppWatchOSExtension`::xamarin_switch_gchandle(self=0x80762c20, to_weak=false) at runtime.m:1805:2
    frame #14: 0x005065c1 WatchWCSessionAppWatchOSExtension`::xamarin_retain_trampoline(self=0x80762c20, sel="retain") at trampolines.m:693:2
    frame #15: 0x657ea520 libobjc.A.dylib`objc_retain + 64
    frame #16: 0x4b4d9caa WatchConnectivity`__66-[WCSession onqueue_handleDictionaryMessageRequest:withPairingID:]_block_invoke + 279
    frame #17: 0x453c7df7 Foundation`__NSBLOCKOPERATION_IS_CALLING_OUT_TO_A_BLOCK__ + 12
    frame #18: 0x453c7cf4 Foundation`-[NSBlockOperation main] + 88
    frame #19: 0x453cacee Foundation`__NSOPERATION_IS_INVOKING_MAIN__ + 27
    frame #20: 0x453c6ebd Foundation`-[NSOperation start] + 835
    frame #21: 0x453cb606 Foundation`__NSOPERATIONQUEUE_IS_STARTING_AN_OPERATION__ + 27
    frame #22: 0x453cb12e Foundation`__NSOQSchedule_f + 194
    frame #23: 0x453cb26e Foundation`____addOperations_block_invoke_4 + 20
    frame #24: 0x65de007b libdispatch.dylib`_dispatch_call_block_and_release + 15
    frame #25: 0x65de126f libdispatch.dylib`_dispatch_client_callout + 14
    frame #26: 0x65de3788 libdispatch.dylib`_dispatch_continuation_pop + 421
    frame #27: 0x65de2ee3 libdispatch.dylib`_dispatch_async_redirect_invoke + 818
    frame #28: 0x65df087d libdispatch.dylib`_dispatch_root_queue_drain + 354
    frame #29: 0x65df0ff3 libdispatch.dylib`_dispatch_worker_thread2 + 109
    frame #30: 0x66024fa0 libsystem_pthread.dylib`_pthread_wqthread + 208
    frame #31: 0x66024e44 libsystem_pthread.dylib`start_wqthread + 36
```

Going further, `info` is about this thread:
```
(lldb) p/x *(int *)(((char *) info->node.key) + 0xa0)
(int) $2 = 0x01243f93
(lldb) thread list
Process 12832 stopped
  thread #1: tid = 0x1243ee1, 0x65f7e396 libsystem_kernel.dylib`mach_msg_trap + 10, name = 'tid_303', queue = 'com.apple.main-thread'
  thread #2: tid = 0x1243ee6, 0x65f816e2 libsystem_kernel.dylib`__recvfrom + 10
  thread #3: tid = 0x1243ee7, 0x65f81aea libsystem_kernel.dylib`__psynch_cvwait + 10, name = 'SGen worker'
  thread #4: tid = 0x1243ee9, 0x65f7e3d2 libsystem_kernel.dylib`semaphore_wait_trap + 10, name = 'Finalizer'
  thread #5: tid = 0x1243eea, 0x65f816e2 libsystem_kernel.dylib`__recvfrom + 10, name = 'Debugger agent'
  thread #6: tid = 0x1243f1d, 0x65f7e396 libsystem_kernel.dylib`mach_msg_trap + 10, name = 'com.apple.uikit.eventfetch-thread'
  thread #8: tid = 0x1243f93, 0x65f7e396 libsystem_kernel.dylib`mach_msg_trap + 10, name = 'tid_6d0f', queue = 'NSOperationQueue 0x8069b300 (QOS: UTILITY)'
* thread #9: tid = 0x12443a9, 0x00222870 WatchWCSessionAppWatchOSExtension`debugger_interrupt_critical(info=0x81365400, user_data=0xb04dc718) at debugger-agent.c:2716:6, name = 'tid_a31f', queue = 'NSOperationQueue 0x8069b670 (QOS: UTILITY)', stop reason = breakpoint 1.1
  thread #10: tid = 0x1244581, 0x65f7fd32 libsystem_kernel.dylib`__workq_kernreturn + 10
(lldb) thread select 8
* thread #8, name = 'tid_6d0f', queue = 'NSOperationQueue 0x8069b300 (QOS: UTILITY)'
    frame #0: 0x65f7e396 libsystem_kernel.dylib`mach_msg_trap + 10
libsystem_kernel.dylib`mach_msg_trap:
->  0x65f7e396 <+10>: retl
    0x65f7e397 <+11>: nop

libsystem_kernel.dylib`mach_msg_overwrite_trap:
    0x65f7e398 <+0>:  movl   $0xffffffe0, %eax         ; imm = 0xFFFFFFE0
    0x65f7e39d <+5>:  calll  0x65f85f44                ; _sysenter_trap
(lldb) bt
* thread #8, name = 'tid_6d0f', queue = 'NSOperationQueue 0x8069b300 (QOS: UTILITY)'
  * frame #0: 0x65f7e396 libsystem_kernel.dylib`mach_msg_trap + 10
    frame #1: 0x65f7e8ff libsystem_kernel.dylib`mach_msg + 47
    frame #2: 0x66079679 libxpc.dylib`_xpc_send_serializer + 104
    frame #3: 0x660794da libxpc.dylib`_xpc_pipe_simpleroutine + 80
    frame #4: 0x66079852 libxpc.dylib`xpc_pipe_simpleroutine + 43
    frame #5: 0x66043a8f libsystem_trace.dylib`___os_activity_stream_reflect_block_invoke_2 + 30
    frame #6: 0x65de126f libdispatch.dylib`_dispatch_client_callout + 14
    frame #7: 0x65de3d71 libdispatch.dylib`_dispatch_block_invoke_direct + 257
    frame #8: 0x65de3c62 libdispatch.dylib`dispatch_block_perform + 112
    frame #9: 0x6604349a libsystem_trace.dylib`_os_activity_stream_reflect + 725
    frame #10: 0x6604ef19 libsystem_trace.dylib`_os_log_impl_stream + 468
    frame #11: 0x6604e44d libsystem_trace.dylib`_os_log_impl_flatten_and_send + 6410
    frame #12: 0x6604cb3b libsystem_trace.dylib`_os_log + 137
    frame #13: 0x6604f4aa libsystem_trace.dylib`_os_log_impl + 31
    frame #14: 0x4b4eb4e9 WatchConnectivity`WCSerializePayloadDictionary + 393
    frame #15: 0x4b4d7c4d WatchConnectivity`-[WCSession onqueue_sendResponseDictionary:identifier:] + 195
    frame #16: 0x4b4da435 WatchConnectivity`__66-[WCSession onqueue_handleDictionaryMessageRequest:withPairingID:]_block_invoke.411 + 35
    frame #17: 0x453c7df7 Foundation`__NSBLOCKOPERATION_IS_CALLING_OUT_TO_A_BLOCK__ + 12
    frame #18: 0x453c7cf4 Foundation`-[NSBlockOperation main] + 88
    frame #19: 0x453cacee Foundation`__NSOPERATION_IS_INVOKING_MAIN__ + 27
    frame #20: 0x453c6ebd Foundation`-[NSOperation start] + 835
    frame #21: 0x453cb606 Foundation`__NSOPERATIONQUEUE_IS_STARTING_AN_OPERATION__ + 27
    frame #22: 0x453cb12e Foundation`__NSOQSchedule_f + 194
    frame #23: 0x453cb067 Foundation`____addOperations_block_invoke_2 + 20
    frame #24: 0x65dedf49 libdispatch.dylib`_dispatch_block_async_invoke2 + 77
    frame #25: 0x65de4461 libdispatch.dylib`_dispatch_block_async_invoke_and_release + 17
    frame #26: 0x65de126f libdispatch.dylib`_dispatch_client_callout + 14
    frame #27: 0x65de3788 libdispatch.dylib`_dispatch_continuation_pop + 421
    frame #28: 0x65de2ee3 libdispatch.dylib`_dispatch_async_redirect_invoke + 818
    frame #29: 0x65df087d libdispatch.dylib`_dispatch_root_queue_drain + 354
    frame #30: 0x65df0ff3 libdispatch.dylib`_dispatch_worker_thread2 + 109
    frame #31: 0x66024fa0 libsystem_pthread.dylib`_pthread_wqthread + 208
    frame #32: 0x66024e44 libsystem_pthread.dylib`start_wqthread + 36
```
which is a thread in a "light" detach state (aka. coop detach), where we only unset the domain:
https://github.com/mono/mono/blob/4cefdcb7ce2d939ee78fb45d1b4913eb3bc064fd/mono/metadata/threads.c#L6084-L6111
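
For clarity, the "light" detach amounts to something like the sketch below; the linked `threads.c` code is the authoritative version, and the wrapper name here is illustrative (assumption: `mono_domain_unset ()` is the helper that clears the TLS domain).

```c
/* Sketch: a "light"/coop detach only clears the TLS domain; the
 * MonoThreadInfo stays registered, so the debugger's suspend loop can still
 * see this thread while mono_domain_get () == NULL. */
static void
light_detach (void)
{
	mono_domain_unset ();
}
```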

Fixes mono#17926
akoeplinger pushed a commit that referenced this pull request Jan 29, 2020
…ono#18533)

`jit_tls->interp_context` gets initialized lazily, that is, upon the first interpreter execution on a specific thread (e.g. via `interp_runtime_invoke`). However, in mixed mode, execution can have happened purely in AOT code by the time the managed debugger first interacts with the thread, so the context is still `NULL`.
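
A hedged sketch of the defensive shape this implies for the resume-state lookup (field and type names taken from the message and trace; the actual patch may instead initialize the context earlier):

```c
/* Sketch: if this thread has only ever executed AOT code,
 * jit_tls->interp_context is still NULL, so report "no interpreter resume
 * state" instead of hitting the `context' assertion from the trace below. */
static void
interp_get_resume_state_sketch (MonoJitTlsData *jit_tls, gboolean *has_resume_state,
                                gpointer *interp_frame, gpointer *handler_ip)
{
	ThreadContext *context = (ThreadContext *) jit_tls->interp_context;

	if (!context) {
		*has_resume_state = FALSE;
		*interp_frame = NULL;
		*handler_ip = NULL;
		return;
	}
	/* ... normal path, as in interp.c ... */
}
```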

Stack trace:

```
  thread #1, name = 'tid_407', queue = 'com.apple.main-thread'
    frame #0: 0x0000000190aedc94 libsystem_kernel.dylib`__psynch_cvwait + 8
    frame #1: 0x0000000190a0f094 libsystem_pthread.dylib`_pthread_cond_wait$VARIANT$armv81 + 672
    frame #2: 0x000000010431318c reloadcontext.iOS`mono_os_cond_wait(cond=0x0000000104b9ba78, mutex=0x0000000104b9ba30) at mono-os-mutex.h:219:8
    frame #3: 0x0000000104312a68 reloadcontext.iOS`mono_coop_cond_wait(cond=0x0000000104b9ba78, mutex=0x0000000104b9ba30) at mono-coop-mutex.h:91:2
    frame #4: 0x0000000104312858 reloadcontext.iOS`suspend_current at debugger-agent.c:3021:4
    frame #5: 0x000000010431be18 reloadcontext.iOS`process_event(event=EVENT_KIND_BREAKPOINT, arg=0x0000000145d09ae8, il_offset=0, ctx=0x0000000149015c20, events=0x0000000000000000, suspend_policy=2) at debugger-agent.c:4058:3
    frame #6: 0x0000000104310cf4 reloadcontext.iOS`process_breakpoint_events(_evts=0x000000028351a680, method=0x0000000145d09ae8, ctx=0x0000000149015c20, il_offset=0) at debugger-agent.c:4722:3
    frame #7: 0x000000010432f1c8 reloadcontext.iOS`mono_de_process_breakpoint(void_tls=0x0000000149014e00, from_signal=0) at debugger-engine.c:1141:2
    frame #8: 0x000000010430f238 reloadcontext.iOS`debugger_agent_breakpoint_from_context(ctx=0x000000016f656790) at debugger-agent.c:4938:2
    frame #9: 0x00000001011b73a4 reloadcontext.iOS`sdb_breakpoint_trampoline + 148
    frame #10: 0x00000001008511b4 reloadcontext.iOS`reloadcontext_iOS_Application_Main_string__(args=0x000000010703a030) at Main.cs:14
    frame #11: 0x00000001010f9730 reloadcontext.iOS`wrapper_runtime_invoke_object_runtime_invoke_dynamic_intptr_intptr_intptr_intptr + 272
    frame #12: 0x00000001042fd8b8 reloadcontext.iOS`mono_jit_runtime_invoke(method=0x0000000145d09ae8, obj=0x0000000000000000, params=0x000000016f656f20, exc=0x0000000000000000, error=0x000000016f656ff8) at mini-runtime.c:3162:3
    frame #13: 0x0000000104411950 reloadcontext.iOS`do_runtime_invoke(method=0x0000000145d09ae8, obj=0x0000000000000000, params=0x000000016f656f20, exc=0x0000000000000000, error=0x000000016f656ff8) at object.c:3052:11
    frame #14: 0x000000010440c4dc reloadcontext.iOS`mono_runtime_invoke_checked(method=0x0000000145d09ae8, obj=0x0000000000000000, params=0x000000016f656f20, error=0x000000016f656ff8) at object.c:3220:9
    frame #15: 0x0000000104415ae0 reloadcontext.iOS`do_exec_main_checked(method=0x0000000145d09ae8, args=0x000000010703a030, error=0x000000016f656ff8) at object.c:5184:3
    frame #16: 0x00000001044144ac reloadcontext.iOS`mono_runtime_exec_main_checked(method=0x0000000145d09ae8, args=0x000000010703a030, error=0x000000016f656ff8) at object.c:5281:9
    frame #17: 0x0000000104414500 reloadcontext.iOS`mono_runtime_run_main_checked(method=0x0000000145d09ae8, argc=1, argv=0x000000016f6570d0, error=0x000000016f656ff8) at object.c:4734:9
    frame #18: 0x00000001042d3b54 reloadcontext.iOS`mono_jit_exec_internal(domain=0x0000000145f00130, assembly=0x0000000281ba2900, argc=1, argv=0x000000016f6570d0) at driver.c:1320:13
    frame #19: 0x00000001042d39a4 reloadcontext.iOS`mono_jit_exec(domain=0x0000000145f00130, assembly=0x0000000281ba2900, argc=1, argv=0x000000016f6570d0) at driver.c:1265:7
    frame #20: 0x0000000104597994 reloadcontext.iOS`::xamarin_main(argc=5, argv=0x000000016f657a80, launch_mode=XamarinLaunchModeApp) at monotouch-main.m:483:8
    frame #21: 0x00000001008510dc reloadcontext.iOS`main(argc=5, argv=0x000000016f657a80) at main.m:104:11
    frame #22: 0x0000000190af8360 libdyld.dylib`start + 4
[...]
* thread #5, name = 'Debugger agent', stop reason = signal SIGABRT
  * frame #0: 0x0000000190aedec4 libsystem_kernel.dylib`__pthread_kill + 8
    frame #1: 0x0000000190a0d724 libsystem_pthread.dylib`pthread_kill$VARIANT$armv81 + 216
    frame #2: 0x000000019095d844 libsystem_c.dylib`abort + 100
    frame #3: 0x00000001045871b4 reloadcontext.iOS`log_callback(log_domain=0x0000000000000000, log_level="error", message="* Assertion at ../../../../../mono/mini/interp/interp.c:7176, condition `context' not met\n", fatal=4, user_data=0x0000000000000000) at runtime.m:1213:3
    frame #4: 0x0000000104544fc8 reloadcontext.iOS`eglib_log_adapter(log_domain=0x0000000000000000, log_level=G_LOG_LEVEL_ERROR, message="* Assertion at ../../../../../mono/mini/interp/interp.c:7176, condition `context' not met\n", user_data=0x0000000000000000) at mono-logger.c:405:2
    frame #5: 0x000000010456093c reloadcontext.iOS`monoeg_g_logstr(log_domain=0x0000000000000000, log_level=G_LOG_LEVEL_ERROR, msg="* Assertion at ../../../../../mono/mini/interp/interp.c:7176, condition `context' not met\n") at goutput.c:134:2
    frame #6: 0x0000000104560598 reloadcontext.iOS`monoeg_g_logv_nofree(log_domain=0x0000000000000000, log_level=G_LOG_LEVEL_ERROR, format="* Assertion at %s:%d, condition `%s' not met\n", args="e\x12z\x04\x01") at goutput.c:149:2
    frame #7: 0x000000010456061c reloadcontext.iOS`monoeg_assertion_message(format="* Assertion at %s:%d, condition `%s' not met\n") at goutput.c:184:22
    frame #8: 0x0000000104560674 reloadcontext.iOS`mono_assertion_message(file="../../../../../mono/mini/interp/interp.c", line=7176, condition="context") at goutput.c:203:2
    frame #9: 0x000000010459b570 reloadcontext.iOS`interp_get_resume_state(jit_tls=0x000000014900d000, has_resume_state=0x000000016fc7a9f4, interp_frame=0x000000016fc7a9e8, handler_ip=0x000000016fc7a9e0) at interp.c:7176:2
    frame #10: 0x0000000104319420 reloadcontext.iOS`compute_frame_info(thread=0x0000000104fe4130, tls=0x0000000149014e00, force_update=1) at debugger-agent.c:3422:3
    frame #11: 0x0000000104320d40 reloadcontext.iOS`thread_commands(command=1, p="", end="", buf=0x000000016fc7acf8) at debugger-agent.c:9048:3
    frame #12: 0x000000010431cca0 reloadcontext.iOS`debugger_thread(arg=0x0000000000000000) at debugger-agent.c:10132:10
    frame #13: 0x000000010447eb04 reloadcontext.iOS`start_wrapper_internal(start_info=0x0000000000000000, stack_ptr=0x000000016fc7b000) at threads.c:1232:3
    frame #14: 0x000000010447e788 reloadcontext.iOS`start_wrapper(data=0x000000028203ef40) at threads.c:1305:8
    frame #15: 0x0000000190a11d8c libsystem_pthread.dylib`_pthread_start + 15
[...]
```

Thanks to @drasticactions for helping me to reproduce.

Fixes https://devdiv.visualstudio.com/DevDiv/_workitems/edit/1050615
akoeplinger pushed a commit that referenced this pull request Feb 12, 2020
[2019-08] [debugger] skip suspend for unattached threads

When the debugger tries to suspend all the VM threads, it can happen with cooperative suspend that it tries to suspend a thread that was previously attached but then did a "light" detach (which only unsets the domain). With the domain set to `NULL`, looking up a JitInfo for a given `ip` will result in a crash, and it isn't even necessarily needed for the purpose of suspending the thread.

More details: Consider the following:
```
(lldb) c
error: Process is running.  Use 'process interrupt' to pause execution.
Process 12832 stopped
* thread #9, name = 'tid_a31f', queue = 'NSOperationQueue 0x8069b670 (QOS: UTILITY)', stop reason = breakpoint 1.1
    frame #0: 0x00222870 WatchWCSessionAppWatchOSExtension`debugger_interrupt_critical(info=0x81365400, user_data=0xb04dc718) at debugger-agent.c:2716:6
   2713         MonoDomain *domain = (MonoDomain *) mono_thread_info_get_suspend_state (info)->unwind_data [MONO_UNWIND_DATA_DOMAIN];
   2714         if (!domain) {
   2715                 /* not attached */
-> 2716                 ji = NULL;
   2717         } else {
   2718                 ji = mono_jit_info_table_find_internal ( domain, MONO_CONTEXT_GET_IP (&mono_thread_info_get_suspend_state (info)->ctx), TRUE, TRUE);
   2719         }
Target 0: (WatchWCSessionAppWatchOSExtension) stopped.
(lldb) bt
* thread #9, name = 'tid_a31f', queue = 'NSOperationQueue 0x8069b670 (QOS: UTILITY)', stop reason = breakpoint 1.1
  * frame #0: 0x00222870 WatchWCSessionAppWatchOSExtension`debugger_interrupt_critical(info=0x81365400, user_data=0xb04dc718) at debugger-agent.c:2716:6
    frame #1: 0x004e177b WatchWCSessionAppWatchOSExtension`mono_thread_info_safe_suspend_and_run(id=0xb0767000, interrupt_kernel=0, callback=(WatchWCSessionAppWatchOSExtension`debugger_interrupt_critical at debugger-agent.c:2708), user_data=0xb04dc718) at mono-threads.c:1358:19
    frame #2: 0x00222799 WatchWCSessionAppWatchOSExtension`notify_thread(key=0x03994508, value=0x81378c00, user_data=0x00000000) at debugger-agent.c:2747:2
    frame #3: 0x00355bd8 WatchWCSessionAppWatchOSExtension`mono_g_hash_table_foreach(hash=0x8007b4e0, func=(WatchWCSessionAppWatchOSExtension`notify_thread at debugger-agent.c:2733), user_data=0x00000000) at mono-hash.c:310:4
    frame #4: 0x0021e955 WatchWCSessionAppWatchOSExtension`suspend_vm at debugger-agent.c:2844:3
    frame #5: 0x00225ff0 WatchWCSessionAppWatchOSExtension`process_event(event=EVENT_KIND_THREAD_START, arg=0x039945d0, il_offset=0, ctx=0x00000000, events=0x805b28e0, suspend_policy=2) at debugger-agent.c:4012:3
    frame #6: 0x00227c7c WatchWCSessionAppWatchOSExtension`process_profiler_event(event=EVENT_KIND_THREAD_START, arg=0x039945d0) at debugger-agent.c:4072:2
    frame #7: 0x0021b174 WatchWCSessionAppWatchOSExtension`thread_startup(prof=0x00000000, tid=2957889536) at debugger-agent.c:4149:2
    frame #8: 0x0037912d WatchWCSessionAppWatchOSExtension`mono_profiler_raise_thread_started(tid=2957889536) at profiler-events.h:103:1
    frame #9: 0x003d53da WatchWCSessionAppWatchOSExtension`fire_attach_profiler_events(tid=0xb04dd000) at threads.c:1120:2
    frame #10: 0x003d4d83 WatchWCSessionAppWatchOSExtension`mono_thread_attach(domain=0x801750a0) at threads.c:1547:2
    frame #11: 0x003df1a1 WatchWCSessionAppWatchOSExtension`mono_threads_attach_coop_internal(domain=0x801750a0, cookie=0xb04dcc0c, stackdata=0xb04dcba8) at threads.c:6008:3
    frame #12: 0x003df287 WatchWCSessionAppWatchOSExtension`mono_threads_attach_coop(domain=0x00000000, dummy=0xb04dcc0c) at threads.c:6045:9
    frame #13: 0x005034b8 WatchWCSessionAppWatchOSExtension`::xamarin_switch_gchandle(self=0x80762c20, to_weak=false) at runtime.m:1805:2
    frame #14: 0x005065c1 WatchWCSessionAppWatchOSExtension`::xamarin_retain_trampoline(self=0x80762c20, sel="retain") at trampolines.m:693:2
    frame #15: 0x657ea520 libobjc.A.dylib`objc_retain + 64
    frame #16: 0x4b4d9caa WatchConnectivity`__66-[WCSession onqueue_handleDictionaryMessageRequest:withPairingID:]_block_invoke + 279
    frame #17: 0x453c7df7 Foundation`__NSBLOCKOPERATION_IS_CALLING_OUT_TO_A_BLOCK__ + 12
    frame #18: 0x453c7cf4 Foundation`-[NSBlockOperation main] + 88
    frame #19: 0x453cacee Foundation`__NSOPERATION_IS_INVOKING_MAIN__ + 27
    frame #20: 0x453c6ebd Foundation`-[NSOperation start] + 835
    frame #21: 0x453cb606 Foundation`__NSOPERATIONQUEUE_IS_STARTING_AN_OPERATION__ + 27
    frame #22: 0x453cb12e Foundation`__NSOQSchedule_f + 194
    frame #23: 0x453cb26e Foundation`____addOperations_block_invoke_4 + 20
    frame #24: 0x65de007b libdispatch.dylib`_dispatch_call_block_and_release + 15
    frame #25: 0x65de126f libdispatch.dylib`_dispatch_client_callout + 14
    frame #26: 0x65de3788 libdispatch.dylib`_dispatch_continuation_pop + 421
    frame #27: 0x65de2ee3 libdispatch.dylib`_dispatch_async_redirect_invoke + 818
    frame #28: 0x65df087d libdispatch.dylib`_dispatch_root_queue_drain + 354
    frame #29: 0x65df0ff3 libdispatch.dylib`_dispatch_worker_thread2 + 109
    frame #30: 0x66024fa0 libsystem_pthread.dylib`_pthread_wqthread + 208
    frame #31: 0x66024e44 libsystem_pthread.dylib`start_wqthread + 36
```

Going further, `info` is about this thread:
```
(lldb) p/x *(int *)(((char *) info->node.key) + 0xa0)
(int) $2 = 0x01243f93
(lldb) thread list
Process 12832 stopped
  thread #1: tid = 0x1243ee1, 0x65f7e396 libsystem_kernel.dylib`mach_msg_trap + 10, name = 'tid_303', queue = 'com.apple.main-thread'
  thread #2: tid = 0x1243ee6, 0x65f816e2 libsystem_kernel.dylib`__recvfrom + 10
  thread #3: tid = 0x1243ee7, 0x65f81aea libsystem_kernel.dylib`__psynch_cvwait + 10, name = 'SGen worker'
  thread #4: tid = 0x1243ee9, 0x65f7e3d2 libsystem_kernel.dylib`semaphore_wait_trap + 10, name = 'Finalizer'
  thread #5: tid = 0x1243eea, 0x65f816e2 libsystem_kernel.dylib`__recvfrom + 10, name = 'Debugger agent'
  thread #6: tid = 0x1243f1d, 0x65f7e396 libsystem_kernel.dylib`mach_msg_trap + 10, name = 'com.apple.uikit.eventfetch-thread'
  thread #8: tid = 0x1243f93, 0x65f7e396 libsystem_kernel.dylib`mach_msg_trap + 10, name = 'tid_6d0f', queue = 'NSOperationQueue 0x8069b300 (QOS: UTILITY)'
* thread #9: tid = 0x12443a9, 0x00222870 WatchWCSessionAppWatchOSExtension`debugger_interrupt_critical(info=0x81365400, user_data=0xb04dc718) at debugger-agent.c:2716:6, name = 'tid_a31f', queue = 'NSOperationQueue 0x8069b670 (QOS: UTILITY)', stop reason = breakpoint 1.1
  thread #10: tid = 0x1244581, 0x65f7fd32 libsystem_kernel.dylib`__workq_kernreturn + 10
(lldb) thread select 8
* thread #8, name = 'tid_6d0f', queue = 'NSOperationQueue 0x8069b300 (QOS: UTILITY)'
    frame #0: 0x65f7e396 libsystem_kernel.dylib`mach_msg_trap + 10
libsystem_kernel.dylib`mach_msg_trap:
->  0x65f7e396 <+10>: retl
    0x65f7e397 <+11>: nop

libsystem_kernel.dylib`mach_msg_overwrite_trap:
    0x65f7e398 <+0>:  movl   $0xffffffe0, %eax         ; imm = 0xFFFFFFE0
    0x65f7e39d <+5>:  calll  0x65f85f44                ; _sysenter_trap
(lldb) bt
* thread #8, name = 'tid_6d0f', queue = 'NSOperationQueue 0x8069b300 (QOS: UTILITY)'
  * frame #0: 0x65f7e396 libsystem_kernel.dylib`mach_msg_trap + 10
    frame #1: 0x65f7e8ff libsystem_kernel.dylib`mach_msg + 47
    frame #2: 0x66079679 libxpc.dylib`_xpc_send_serializer + 104
    frame #3: 0x660794da libxpc.dylib`_xpc_pipe_simpleroutine + 80
    frame #4: 0x66079852 libxpc.dylib`xpc_pipe_simpleroutine + 43
    frame #5: 0x66043a8f libsystem_trace.dylib`___os_activity_stream_reflect_block_invoke_2 + 30
    frame #6: 0x65de126f libdispatch.dylib`_dispatch_client_callout + 14
    frame #7: 0x65de3d71 libdispatch.dylib`_dispatch_block_invoke_direct + 257
    frame #8: 0x65de3c62 libdispatch.dylib`dispatch_block_perform + 112
    frame #9: 0x6604349a libsystem_trace.dylib`_os_activity_stream_reflect + 725
    frame #10: 0x6604ef19 libsystem_trace.dylib`_os_log_impl_stream + 468
    frame #11: 0x6604e44d libsystem_trace.dylib`_os_log_impl_flatten_and_send + 6410
    frame #12: 0x6604cb3b libsystem_trace.dylib`_os_log + 137
    frame #13: 0x6604f4aa libsystem_trace.dylib`_os_log_impl + 31
    frame #14: 0x4b4eb4e9 WatchConnectivity`WCSerializePayloadDictionary + 393
    frame #15: 0x4b4d7c4d WatchConnectivity`-[WCSession onqueue_sendResponseDictionary:identifier:] + 195
    frame #16: 0x4b4da435 WatchConnectivity`__66-[WCSession onqueue_handleDictionaryMessageRequest:withPairingID:]_block_invoke.411 + 35
    frame #17: 0x453c7df7 Foundation`__NSBLOCKOPERATION_IS_CALLING_OUT_TO_A_BLOCK__ + 12
    frame #18: 0x453c7cf4 Foundation`-[NSBlockOperation main] + 88
    frame #19: 0x453cacee Foundation`__NSOPERATION_IS_INVOKING_MAIN__ + 27
    frame #20: 0x453c6ebd Foundation`-[NSOperation start] + 835
    frame #21: 0x453cb606 Foundation`__NSOPERATIONQUEUE_IS_STARTING_AN_OPERATION__ + 27
    frame #22: 0x453cb12e Foundation`__NSOQSchedule_f + 194
    frame #23: 0x453cb067 Foundation`____addOperations_block_invoke_2 + 20
    frame #24: 0x65dedf49 libdispatch.dylib`_dispatch_block_async_invoke2 + 77
    frame #25: 0x65de4461 libdispatch.dylib`_dispatch_block_async_invoke_and_release + 17
    frame #26: 0x65de126f libdispatch.dylib`_dispatch_client_callout + 14
    frame #27: 0x65de3788 libdispatch.dylib`_dispatch_continuation_pop + 421
    frame #28: 0x65de2ee3 libdispatch.dylib`_dispatch_async_redirect_invoke + 818
    frame #29: 0x65df087d libdispatch.dylib`_dispatch_root_queue_drain + 354
    frame #30: 0x65df0ff3 libdispatch.dylib`_dispatch_worker_thread2 + 109
    frame #31: 0x66024fa0 libsystem_pthread.dylib`_pthread_wqthread + 208
    frame #32: 0x66024e44 libsystem_pthread.dylib`start_wqthread + 36
```
which is a thread in a "light" detach state (aka. coop detach), where we only unset the domain:
https://github.com/mono/mono/blob/4cefdcb7ce2d939ee78fb45d1b4913eb3bc064fd/mono/metadata/threads.c#L6084-L6111

Fixes mono#17926





Backport of mono#18105.

/cc @lewurm
akoeplinger pushed a commit that referenced this pull request Feb 13, 2020
When the debugger tries to suspend all the VM threads, it can happen with cooperative suspend that it tries to suspend a thread that was previously attached but then did a "light" detach (which only unsets the domain). With the domain set to `NULL`, looking up a JitInfo for a given `ip` will result in a crash, and it isn't even necessarily needed for the purpose of suspending the thread.

More details: Consider the following:
```
(lldb) c
error: Process is running.  Use 'process interrupt' to pause execution.
Process 12832 stopped
* thread #9, name = 'tid_a31f', queue = 'NSOperationQueue 0x8069b670 (QOS: UTILITY)', stop reason = breakpoint 1.1
    frame #0: 0x00222870 WatchWCSessionAppWatchOSExtension`debugger_interrupt_critical(info=0x81365400, user_data=0xb04dc718) at debugger-agent.c:2716:6
   2713         MonoDomain *domain = (MonoDomain *) mono_thread_info_get_suspend_state (info)->unwind_data [MONO_UNWIND_DATA_DOMAIN];
   2714         if (!domain) {
   2715                 /* not attached */
-> 2716                 ji = NULL;
   2717         } else {
   2718                 ji = mono_jit_info_table_find_internal ( domain, MONO_CONTEXT_GET_IP (&mono_thread_info_get_suspend_state (info)->ctx), TRUE, TRUE);
   2719         }
Target 0: (WatchWCSessionAppWatchOSExtension) stopped.
(lldb) bt
* thread #9, name = 'tid_a31f', queue = 'NSOperationQueue 0x8069b670 (QOS: UTILITY)', stop reason = breakpoint 1.1
  * frame #0: 0x00222870 WatchWCSessionAppWatchOSExtension`debugger_interrupt_critical(info=0x81365400, user_data=0xb04dc718) at debugger-agent.c:2716:6
    frame #1: 0x004e177b WatchWCSessionAppWatchOSExtension`mono_thread_info_safe_suspend_and_run(id=0xb0767000, interrupt_kernel=0, callback=(WatchWCSessionAppWatchOSExtension`debugger_interrupt_critical at debugger-agent.c:2708), user_data=0xb04dc718) at mono-threads.c:1358:19
    frame #2: 0x00222799 WatchWCSessionAppWatchOSExtension`notify_thread(key=0x03994508, value=0x81378c00, user_data=0x00000000) at debugger-agent.c:2747:2
    frame #3: 0x00355bd8 WatchWCSessionAppWatchOSExtension`mono_g_hash_table_foreach(hash=0x8007b4e0, func=(WatchWCSessionAppWatchOSExtension`notify_thread at debugger-agent.c:2733), user_data=0x00000000) at mono-hash.c:310:4
    frame #4: 0x0021e955 WatchWCSessionAppWatchOSExtension`suspend_vm at debugger-agent.c:2844:3
    frame #5: 0x00225ff0 WatchWCSessionAppWatchOSExtension`process_event(event=EVENT_KIND_THREAD_START, arg=0x039945d0, il_offset=0, ctx=0x00000000, events=0x805b28e0, suspend_policy=2) at debugger-agent.c:4012:3
    frame #6: 0x00227c7c WatchWCSessionAppWatchOSExtension`process_profiler_event(event=EVENT_KIND_THREAD_START, arg=0x039945d0) at debugger-agent.c:4072:2
    frame #7: 0x0021b174 WatchWCSessionAppWatchOSExtension`thread_startup(prof=0x00000000, tid=2957889536) at debugger-agent.c:4149:2
    frame #8: 0x0037912d WatchWCSessionAppWatchOSExtension`mono_profiler_raise_thread_started(tid=2957889536) at profiler-events.h:103:1
    frame #9: 0x003d53da WatchWCSessionAppWatchOSExtension`fire_attach_profiler_events(tid=0xb04dd000) at threads.c:1120:2
    frame #10: 0x003d4d83 WatchWCSessionAppWatchOSExtension`mono_thread_attach(domain=0x801750a0) at threads.c:1547:2
    frame #11: 0x003df1a1 WatchWCSessionAppWatchOSExtension`mono_threads_attach_coop_internal(domain=0x801750a0, cookie=0xb04dcc0c, stackdata=0xb04dcba8) at threads.c:6008:3
    frame #12: 0x003df287 WatchWCSessionAppWatchOSExtension`mono_threads_attach_coop(domain=0x00000000, dummy=0xb04dcc0c) at threads.c:6045:9
    frame #13: 0x005034b8 WatchWCSessionAppWatchOSExtension`::xamarin_switch_gchandle(self=0x80762c20, to_weak=false) at runtime.m:1805:2
    frame #14: 0x005065c1 WatchWCSessionAppWatchOSExtension`::xamarin_retain_trampoline(self=0x80762c20, sel="retain") at trampolines.m:693:2
    frame #15: 0x657ea520 libobjc.A.dylib`objc_retain + 64
    frame #16: 0x4b4d9caa WatchConnectivity`__66-[WCSession onqueue_handleDictionaryMessageRequest:withPairingID:]_block_invoke + 279
    frame #17: 0x453c7df7 Foundation`__NSBLOCKOPERATION_IS_CALLING_OUT_TO_A_BLOCK__ + 12
    frame #18: 0x453c7cf4 Foundation`-[NSBlockOperation main] + 88
    frame #19: 0x453cacee Foundation`__NSOPERATION_IS_INVOKING_MAIN__ + 27
    frame #20: 0x453c6ebd Foundation`-[NSOperation start] + 835
    frame #21: 0x453cb606 Foundation`__NSOPERATIONQUEUE_IS_STARTING_AN_OPERATION__ + 27
    frame #22: 0x453cb12e Foundation`__NSOQSchedule_f + 194
    frame #23: 0x453cb26e Foundation`____addOperations_block_invoke_4 + 20
    frame #24: 0x65de007b libdispatch.dylib`_dispatch_call_block_and_release + 15
    frame #25: 0x65de126f libdispatch.dylib`_dispatch_client_callout + 14
    frame #26: 0x65de3788 libdispatch.dylib`_dispatch_continuation_pop + 421
    frame #27: 0x65de2ee3 libdispatch.dylib`_dispatch_async_redirect_invoke + 818
    frame #28: 0x65df087d libdispatch.dylib`_dispatch_root_queue_drain + 354
    frame #29: 0x65df0ff3 libdispatch.dylib`_dispatch_worker_thread2 + 109
    frame #30: 0x66024fa0 libsystem_pthread.dylib`_pthread_wqthread + 208
    frame #31: 0x66024e44 libsystem_pthread.dylib`start_wqthread + 36
```

Going further, `info` is about this thread:
```
(lldb) p/x *(int *)(((char *) info->node.key) + 0xa0)
(int) $2 = 0x01243f93
(lldb) thread list
Process 12832 stopped
  thread #1: tid = 0x1243ee1, 0x65f7e396 libsystem_kernel.dylib`mach_msg_trap + 10, name = 'tid_303', queue = 'com.apple.main-thread'
  thread #2: tid = 0x1243ee6, 0x65f816e2 libsystem_kernel.dylib`__recvfrom + 10
  thread #3: tid = 0x1243ee7, 0x65f81aea libsystem_kernel.dylib`__psynch_cvwait + 10, name = 'SGen worker'
  thread #4: tid = 0x1243ee9, 0x65f7e3d2 libsystem_kernel.dylib`semaphore_wait_trap + 10, name = 'Finalizer'
  thread #5: tid = 0x1243eea, 0x65f816e2 libsystem_kernel.dylib`__recvfrom + 10, name = 'Debugger agent'
  thread #6: tid = 0x1243f1d, 0x65f7e396 libsystem_kernel.dylib`mach_msg_trap + 10, name = 'com.apple.uikit.eventfetch-thread'
  thread #8: tid = 0x1243f93, 0x65f7e396 libsystem_kernel.dylib`mach_msg_trap + 10, name = 'tid_6d0f', queue = 'NSOperationQueue 0x8069b300 (QOS: UTILITY)'
* thread #9: tid = 0x12443a9, 0x00222870 WatchWCSessionAppWatchOSExtension`debugger_interrupt_critical(info=0x81365400, user_data=0xb04dc718) at debugger-agent.c:2716:6, name = 'tid_a31f', queue = 'NSOperationQueue 0x8069b670 (QOS: UTILITY)', stop reason = breakpoint 1.1
  thread #10: tid = 0x1244581, 0x65f7fd32 libsystem_kernel.dylib`__workq_kernreturn + 10
(lldb) thread select 8
* thread #8, name = 'tid_6d0f', queue = 'NSOperationQueue 0x8069b300 (QOS: UTILITY)'
    frame #0: 0x65f7e396 libsystem_kernel.dylib`mach_msg_trap + 10
libsystem_kernel.dylib`mach_msg_trap:
->  0x65f7e396 <+10>: retl
    0x65f7e397 <+11>: nop

libsystem_kernel.dylib`mach_msg_overwrite_trap:
    0x65f7e398 <+0>:  movl   $0xffffffe0, %eax         ; imm = 0xFFFFFFE0
    0x65f7e39d <+5>:  calll  0x65f85f44                ; _sysenter_trap
(lldb) bt
* thread #8, name = 'tid_6d0f', queue = 'NSOperationQueue 0x8069b300 (QOS: UTILITY)'
  * frame #0: 0x65f7e396 libsystem_kernel.dylib`mach_msg_trap + 10
    frame #1: 0x65f7e8ff libsystem_kernel.dylib`mach_msg + 47
    frame #2: 0x66079679 libxpc.dylib`_xpc_send_serializer + 104
    frame #3: 0x660794da libxpc.dylib`_xpc_pipe_simpleroutine + 80
    frame #4: 0x66079852 libxpc.dylib`xpc_pipe_simpleroutine + 43
    frame #5: 0x66043a8f libsystem_trace.dylib`___os_activity_stream_reflect_block_invoke_2 + 30
    frame #6: 0x65de126f libdispatch.dylib`_dispatch_client_callout + 14
    frame #7: 0x65de3d71 libdispatch.dylib`_dispatch_block_invoke_direct + 257
    frame #8: 0x65de3c62 libdispatch.dylib`dispatch_block_perform + 112
    frame #9: 0x6604349a libsystem_trace.dylib`_os_activity_stream_reflect + 725
    frame #10: 0x6604ef19 libsystem_trace.dylib`_os_log_impl_stream + 468
    frame #11: 0x6604e44d libsystem_trace.dylib`_os_log_impl_flatten_and_send + 6410
    frame #12: 0x6604cb3b libsystem_trace.dylib`_os_log + 137
    frame #13: 0x6604f4aa libsystem_trace.dylib`_os_log_impl + 31
    frame #14: 0x4b4eb4e9 WatchConnectivity`WCSerializePayloadDictionary + 393
    frame #15: 0x4b4d7c4d WatchConnectivity`-[WCSession onqueue_sendResponseDictionary:identifier:] + 195
    frame #16: 0x4b4da435 WatchConnectivity`__66-[WCSession onqueue_handleDictionaryMessageRequest:withPairingID:]_block_invoke.411 + 35
    frame #17: 0x453c7df7 Foundation`__NSBLOCKOPERATION_IS_CALLING_OUT_TO_A_BLOCK__ + 12
    frame #18: 0x453c7cf4 Foundation`-[NSBlockOperation main] + 88
    frame #19: 0x453cacee Foundation`__NSOPERATION_IS_INVOKING_MAIN__ + 27
    frame #20: 0x453c6ebd Foundation`-[NSOperation start] + 835
    frame #21: 0x453cb606 Foundation`__NSOPERATIONQUEUE_IS_STARTING_AN_OPERATION__ + 27
    frame #22: 0x453cb12e Foundation`__NSOQSchedule_f + 194
    frame #23: 0x453cb067 Foundation`____addOperations_block_invoke_2 + 20
    frame #24: 0x65dedf49 libdispatch.dylib`_dispatch_block_async_invoke2 + 77
    frame #25: 0x65de4461 libdispatch.dylib`_dispatch_block_async_invoke_and_release + 17
    frame #26: 0x65de126f libdispatch.dylib`_dispatch_client_callout + 14
    frame #27: 0x65de3788 libdispatch.dylib`_dispatch_continuation_pop + 421
    frame #28: 0x65de2ee3 libdispatch.dylib`_dispatch_async_redirect_invoke + 818
    frame #29: 0x65df087d libdispatch.dylib`_dispatch_root_queue_drain + 354
    frame #30: 0x65df0ff3 libdispatch.dylib`_dispatch_worker_thread2 + 109
    frame #31: 0x66024fa0 libsystem_pthread.dylib`_pthread_wqthread + 208
    frame #32: 0x66024e44 libsystem_pthread.dylib`start_wqthread + 36
```
which is a thread in a "light" detach state (a.k.a. coop detach), where only the domain is unset:
https://github.com/mono/mono/blob/4cefdcb7ce2d939ee78fb45d1b4913eb3bc064fd/mono/metadata/threads.c#L6084-L6111
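For illustration only, here is a minimal C sketch of the idea behind such a "light" detach. The names (`ThreadState`, `light_detach`, `coop_attach`) are hypothetical stand-ins, not the actual mono APIs linked above: the thread stays registered with the runtime, only its current domain is cleared, so a later coop attach goes through the full attach path again (including the profiler thread-start events seen in the backtrace above).

```c
/* Hypothetical sketch of a "light" (coop) detach versus a later coop attach.
 * All names are illustrative; the real logic lives in mono/metadata/threads.c
 * (see the link above). */
#include <stddef.h>
#include <stdio.h>

typedef struct {
    void *current_domain;   /* set while the thread is coop-attached */
} ThreadState;

/* Light detach: only the domain is unset; the thread itself stays
 * registered with the runtime. */
static void light_detach (ThreadState *ts)
{
    ts->current_domain = NULL;
}

/* Coop attach: a thread with no current domain goes through the full
 * attach path again, which (as in the backtrace above) can fire
 * profiler thread-start events and hit the debugger's suspend logic. */
static void coop_attach (ThreadState *ts, void *domain)
{
    if (!ts->current_domain) {
        printf ("full attach path: profiler thread-start events fire here\n");
        ts->current_domain = domain;
    }
}

int main (void)
{
    int dummy_domain;
    ThreadState ts = { &dummy_domain };
    light_detach (&ts);                 /* thread is now "lightly" detached  */
    coop_attach (&ts, &dummy_domain);   /* re-attaching runs the attach path */
    return 0;
}
```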

Fixes mono#17926
akoeplinger pushed a commit that referenced this pull request Feb 13, 2020
…ono#18536)

`jit_tls->interp_context` is initialized lazily, i.e. upon the first interpreter execution on a specific thread (e.g. via interp_runtime_invoke). However, in mixed mode, execution can happen purely in AOT code up to the first interaction with the managed debugger, so the context may still be unset when the debugger agent needs it.
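As a hedged illustration of the failure mode (the names `JitTls`, `get_interp_context` and `get_resume_state` below are simplified stand-ins, not the actual mono declarations): if the per-thread context is only created on the first interpreter entry, any debugger-agent path that touches it before that entry sees `NULL` and trips the `context` assertion shown in the trace below.

```c
/* Simplified sketch of lazy per-thread context initialization; the names
 * are stand-ins for jit_tls->interp_context, not the actual mono code. */
#include <assert.h>
#include <stdlib.h>

typedef struct { int dummy; } InterpContext;

typedef struct {
    InterpContext *interp_context;  /* NULL until the interpreter runs on this thread */
} JitTls;

/* Runs on the first interpreter execution on a thread (e.g. via a
 * runtime invoke that lands in interpreted code). */
static InterpContext *get_interp_context (JitTls *jit_tls)
{
    if (!jit_tls->interp_context)
        jit_tls->interp_context = calloc (1, sizeof (InterpContext));
    return jit_tls->interp_context;
}

/* Debugger-agent style consumer that assumes the context already exists.
 * In mixed mode the thread may have executed only AOT code so far, so
 * this assertion fires -- the crash shown in the trace below. */
static void get_resume_state (const JitTls *jit_tls)
{
    assert (jit_tls->interp_context != NULL);
    /* ... read resume state ... */
}

int main (void)
{
    JitTls tls = { NULL };
    get_interp_context (&tls);   /* normal path: interpreter ran first     */
    get_resume_state (&tls);     /* fine here, because the context exists  */
    free (tls.interp_context);
    return 0;
}
```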

Stack trace:

```
  thread #1, name = 'tid_407', queue = 'com.apple.main-thread'
    frame #0: 0x0000000190aedc94 libsystem_kernel.dylib`__psynch_cvwait + 8
    frame #1: 0x0000000190a0f094 libsystem_pthread.dylib`_pthread_cond_wait$VARIANT$armv81 + 672
    frame #2: 0x000000010431318c reloadcontext.iOS`mono_os_cond_wait(cond=0x0000000104b9ba78, mutex=0x0000000104b9ba30) at mono-os-mutex.h:219:8
    frame #3: 0x0000000104312a68 reloadcontext.iOS`mono_coop_cond_wait(cond=0x0000000104b9ba78, mutex=0x0000000104b9ba30) at mono-coop-mutex.h:91:2
    frame #4: 0x0000000104312858 reloadcontext.iOS`suspend_current at debugger-agent.c:3021:4
    frame #5: 0x000000010431be18 reloadcontext.iOS`process_event(event=EVENT_KIND_BREAKPOINT, arg=0x0000000145d09ae8, il_offset=0, ctx=0x0000000149015c20, events=0x0000000000000000, suspend_policy=2) at debugger-agent.c:4058:3
    frame #6: 0x0000000104310cf4 reloadcontext.iOS`process_breakpoint_events(_evts=0x000000028351a680, method=0x0000000145d09ae8, ctx=0x0000000149015c20, il_offset=0) at debugger-agent.c:4722:3
    frame #7: 0x000000010432f1c8 reloadcontext.iOS`mono_de_process_breakpoint(void_tls=0x0000000149014e00, from_signal=0) at debugger-engine.c:1141:2
    frame #8: 0x000000010430f238 reloadcontext.iOS`debugger_agent_breakpoint_from_context(ctx=0x000000016f656790) at debugger-agent.c:4938:2
    frame #9: 0x00000001011b73a4 reloadcontext.iOS`sdb_breakpoint_trampoline + 148
    frame #10: 0x00000001008511b4 reloadcontext.iOS`reloadcontext_iOS_Application_Main_string__(args=0x000000010703a030) at Main.cs:14
    frame #11: 0x00000001010f9730 reloadcontext.iOS`wrapper_runtime_invoke_object_runtime_invoke_dynamic_intptr_intptr_intptr_intptr + 272
    frame #12: 0x00000001042fd8b8 reloadcontext.iOS`mono_jit_runtime_invoke(method=0x0000000145d09ae8, obj=0x0000000000000000, params=0x000000016f656f20, exc=0x0000000000000000, error=0x000000016f656ff8) at mini-runtime.c:3162:3
    frame #13: 0x0000000104411950 reloadcontext.iOS`do_runtime_invoke(method=0x0000000145d09ae8, obj=0x0000000000000000, params=0x000000016f656f20, exc=0x0000000000000000, error=0x000000016f656ff8) at object.c:3052:11
    frame #14: 0x000000010440c4dc reloadcontext.iOS`mono_runtime_invoke_checked(method=0x0000000145d09ae8, obj=0x0000000000000000, params=0x000000016f656f20, error=0x000000016f656ff8) at object.c:3220:9
    frame #15: 0x0000000104415ae0 reloadcontext.iOS`do_exec_main_checked(method=0x0000000145d09ae8, args=0x000000010703a030, error=0x000000016f656ff8) at object.c:5184:3
    frame #16: 0x00000001044144ac reloadcontext.iOS`mono_runtime_exec_main_checked(method=0x0000000145d09ae8, args=0x000000010703a030, error=0x000000016f656ff8) at object.c:5281:9
    frame #17: 0x0000000104414500 reloadcontext.iOS`mono_runtime_run_main_checked(method=0x0000000145d09ae8, argc=1, argv=0x000000016f6570d0, error=0x000000016f656ff8) at object.c:4734:9
    frame #18: 0x00000001042d3b54 reloadcontext.iOS`mono_jit_exec_internal(domain=0x0000000145f00130, assembly=0x0000000281ba2900, argc=1, argv=0x000000016f6570d0) at driver.c:1320:13
    frame #19: 0x00000001042d39a4 reloadcontext.iOS`mono_jit_exec(domain=0x0000000145f00130, assembly=0x0000000281ba2900, argc=1, argv=0x000000016f6570d0) at driver.c:1265:7
    frame #20: 0x0000000104597994 reloadcontext.iOS`::xamarin_main(argc=5, argv=0x000000016f657a80, launch_mode=XamarinLaunchModeApp) at monotouch-main.m:483:8
    frame #21: 0x00000001008510dc reloadcontext.iOS`main(argc=5, argv=0x000000016f657a80) at main.m:104:11
    frame #22: 0x0000000190af8360 libdyld.dylib`start + 4
[...]
* thread #5, name = 'Debugger agent', stop reason = signal SIGABRT
  * frame #0: 0x0000000190aedec4 libsystem_kernel.dylib`__pthread_kill + 8
    frame #1: 0x0000000190a0d724 libsystem_pthread.dylib`pthread_kill$VARIANT$armv81 + 216
    frame #2: 0x000000019095d844 libsystem_c.dylib`abort + 100
    frame #3: 0x00000001045871b4 reloadcontext.iOS`log_callback(log_domain=0x0000000000000000, log_level="error", message="* Assertion at ../../../../../mono/mini/interp/interp.c:7176, condition `context' not met\n", fatal=4, user_data=0x0000000000000000) at runtime.m:1213:3
    frame #4: 0x0000000104544fc8 reloadcontext.iOS`eglib_log_adapter(log_domain=0x0000000000000000, log_level=G_LOG_LEVEL_ERROR, message="* Assertion at ../../../../../mono/mini/interp/interp.c:7176, condition `context' not met\n", user_data=0x0000000000000000) at mono-logger.c:405:2
    frame #5: 0x000000010456093c reloadcontext.iOS`monoeg_g_logstr(log_domain=0x0000000000000000, log_level=G_LOG_LEVEL_ERROR, msg="* Assertion at ../../../../../mono/mini/interp/interp.c:7176, condition `context' not met\n") at goutput.c:134:2
    frame #6: 0x0000000104560598 reloadcontext.iOS`monoeg_g_logv_nofree(log_domain=0x0000000000000000, log_level=G_LOG_LEVEL_ERROR, format="* Assertion at %s:%d, condition `%s' not met\n", args="e\x12z\x04\x01") at goutput.c:149:2
    frame #7: 0x000000010456061c reloadcontext.iOS`monoeg_assertion_message(format="* Assertion at %s:%d, condition `%s' not met\n") at goutput.c:184:22
    frame #8: 0x0000000104560674 reloadcontext.iOS`mono_assertion_message(file="../../../../../mono/mini/interp/interp.c", line=7176, condition="context") at goutput.c:203:2
    frame #9: 0x000000010459b570 reloadcontext.iOS`interp_get_resume_state(jit_tls=0x000000014900d000, has_resume_state=0x000000016fc7a9f4, interp_frame=0x000000016fc7a9e8, handler_ip=0x000000016fc7a9e0) at interp.c:7176:2
    frame #10: 0x0000000104319420 reloadcontext.iOS`compute_frame_info(thread=0x0000000104fe4130, tls=0x0000000149014e00, force_update=1) at debugger-agent.c:3422:3
    frame #11: 0x0000000104320d40 reloadcontext.iOS`thread_commands(command=1, p="", end="", buf=0x000000016fc7acf8) at debugger-agent.c:9048:3
    frame #12: 0x000000010431cca0 reloadcontext.iOS`debugger_thread(arg=0x0000000000000000) at debugger-agent.c:10132:10
    frame #13: 0x000000010447eb04 reloadcontext.iOS`start_wrapper_internal(start_info=0x0000000000000000, stack_ptr=0x000000016fc7b000) at threads.c:1232:3
    frame #14: 0x000000010447e788 reloadcontext.iOS`start_wrapper(data=0x000000028203ef40) at threads.c:1305:8
    frame #15: 0x0000000190a11d8c libsystem_pthread.dylib`_pthread_start + 15
[...]
```

Thanks to @drasticactions for helping me to reproduce.

Fixes https://devdiv.visualstudio.com/DevDiv/_workitems/edit/1050615

Co-authored-by: Bernhard Urban-Forster <[email protected]>