diff --git a/docs/design/coreclr/botr/method-descriptor.md b/docs/design/coreclr/botr/method-descriptor.md index 496ecc792af6f3..4bf3358928d581 100644 --- a/docs/design/coreclr/botr/method-descriptor.md +++ b/docs/design/coreclr/botr/method-descriptor.md @@ -85,7 +85,9 @@ DWORD MethodDesc::GetAttrs() Method Slots ------------ -Each MethodDesc has a slot, which contains the entry point of the method. The slot and entry point must exist for all methods, even the ones that never run like abstract methods. There are multiple places in the runtime that depend on the 1:1 mapping between entry points and MethodDescs, making this relationship an invariant. +Each MethodDesc has a slot, which contains the current entry point of the method. The slot must exist for all methods, even ones that never run, such as abstract methods. There are multiple places in the runtime that depend on the mapping between entry points and MethodDescs. + +Each MethodDesc logically has an entry point, but we do not allocate it eagerly at MethodDesc creation time. The invariant is that the entry point is allocated once the method is identified as one that will run, or once it is used in virtual overriding. The slot is either in MethodTable or in MethodDesc itself. The location of the slot is determined by `mdcHasNonVtableSlot` bit on MethodDesc. @@ -185,8 +187,6 @@ The target of the temporary entry point is a PreStub, which is a special kind of The **stable entry point** is either the native code or the precode. The **native code** is either jitted code or code saved in NGen image. It is common to talk about jitted code when we actually mean native code. -Temporary entry points are never saved into NGen images. All entry points in NGen images are stable entry points that are never changed. It is an important optimization that reduced private working set. - ![Figure 2](images/methoddesc-fig2.png) Figure 2 Entry Point State Diagram @@ -208,6 +208,7 @@ The methods to get callable entry points from MethodDesc are: - `MethodDesc::GetSingleCallableAddrOfCode` - `MethodDesc::GetMultiCallableAddrOfCode` +- `MethodDesc::TryGetMultiCallableAddrOfCode` - `MethodDesc::GetSingleCallableAddrOfVirtualizedCode` - `MethodDesc::GetMultiCallableAddrOfVirtualizedCode` @@ -220,7 +221,7 @@ The type of precode has to be cheaply computable from the instruction sequence. **StubPrecode** -StubPrecode is the basic precode type. It loads MethodDesc into a scratch register and then jumps. It must be implemented for precodes to work. It is used as fallback when no other specialized precode type is available. +StubPrecode is the basic precode type. It loads MethodDesc into a scratch register2 and then jumps. It must be implemented for precodes to work. It is used as a fallback when no other specialized precode type is available. All other precodes types are optional optimizations that the platform specific files turn on via HAS\_XXX\_PRECODE defines. @@ -236,7 +237,7 @@ StubPrecode looks like this on x86: FixupPrecode is used when the final target does not require MethodDesc in scratch register2. The FixupPrecode saves a few cycles by avoiding loading MethodDesc into the scratch register. -The most common usage of FixupPrecode is for method fixups in NGen images. +Most stubs use this more efficient form; when a specialized form of Precode is not required, we can currently use it for everything except interop methods.
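+
+As an aside, here is a minimal sketch of how the stub emitters updated in this change pick a call target under the lazy entry point allocation described above (illustration only, not the verbatim runtime code; `EmitDirectCall` and `EmitCallViaSlot` are hypothetical helpers):
+
+    // Ask for an address that is safe to embed into generated code. With this
+    // flag, TryGetMultiCallableAddrOfCode first ensures an entry point exists in
+    // the slot, then returns NULL instead of handing out a temporary entry point.
+    PCODE multiCallableAddr = pMD->TryGetMultiCallableAddrOfCode(CORINFO_ACCESS_PREFER_SLOT_OVER_TEMPORARY_ENTRYPOINT);
+    if (multiCallableAddr != (PCODE)NULL)
+        EmitDirectCall(multiCallableAddr);  // stable target: safe to embed directly
+    else
+        EmitCallViaSlot(pMD);               // call through the slot so later patching is observed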
The initial state of the FixupPrecode on x86: @@ -254,67 +255,6 @@ Once it has been patched to point to final target: 2 Passing MethodDesc in scratch register is sometimes referred to as **MethodDesc Calling Convention**. -**FixupPrecode chunks** - -FixupPrecode chunk is a space efficient representation of multiple FixupPrecodes. It mirrors the idea of MethodDescChunk by hoisting the similar MethodDesc pointers from multiple FixupPrecodes to a shared area. - -The FixupPrecode chunk saves space and improves code density of the precodes. The code density improvement from FixupPrecode chunks resulted in 1% - 2% gain in big server scenarios on x64. - -The FixupPrecode chunks looks like this on x86: - - jmp Target2 - pop edi // dummy instruction that marks the type of the precode - db MethodDescChunkIndex - db 2 (PrecodeChunkIndex) - - jmp Target1 - pop edi - db MethodDescChunkIndex - db 1 (PrecodeChunkIndex) - - jmp Target0 - pop edi - db MethodDescChunkIndex - db 0 (PrecodeChunkIndex) - - dw pMethodDescBase - -One FixupPrecode chunk corresponds to one MethodDescChunk. There is no 1:1 mapping between the FixupPrecodes in the chunk and MethodDescs in MethodDescChunk though. Each FixupPrecode has index of the method it belongs to. It allows allocating the FixupPrecode in the chunk only for methods that need it. - -**Compact entry points** - -Compact entry point is a space efficient implementation of temporary entry points. - -Temporary entry points implemented using StubPrecode or FixupPrecode can be patched to point to the actual code. Jitted code can call temporary entry point directly. The temporary entry point can be multicallable entry points in this case. - -Compact entry points cannot be patched to point to the actual code. Jitted code cannot call them directly. They are trading off speed for size. Calls to these entry points are indirected via slots in a table (FuncPtrStubs) that are patched to point to the actual entry point eventually. A request for a multicallable entry point allocates a StubPrecode or FixupPrecode on demand in this case. - -The raw speed difference is the cost of an indirect call for a compact entry point vs. the cost of one direct call and one direct jump on the given platform. The later used to be faster by a few percent in large server scenario since it can be predicted by the hardware better (2005). It is not always the case on current (2015) hardware. - -The compact entry points have been historically implemented on x86 only. Their additional complexity, space vs. speed trade-off and hardware advancements made them unjustified on other platforms. - -The compact entry point on x86 looks like this: - - entrypoint0: - mov al,0 - jmp short Dispatch - - entrypoint1: - mov al,1 - jmp short Dispatch - - entrypoint2: - mov al,2 - jmp short Dispatch - - Dispatch: - movzx eax,al - shl eax, 3 - add eax, pBaseMD - jmp PreStub - -The allocation of temporary entry points always tries to pick the smallest temporary entry point from the available choices. For example, a single compact entry point is bigger than a single StubPrecode on x86. The StubPrecode will be preferred over the compact entry point in this case. The allocation of the precode for a stable entry point will try to reuse an allocated temporary entry point precode if one exists of the matching type. - **ThisPtrRetBufPrecode** ThisPtrRetBufPrecode is used to switch a return buffer and the this pointer for open instance delegates returning valuetypes. 
It is used to convert the calling convention of MyValueType Bar(Foo x) to the calling convention of MyValueType Foo::Bar(). diff --git a/src/coreclr/debug/daccess/daccess.cpp b/src/coreclr/debug/daccess/daccess.cpp index 6dd0f52fa2e55f..5a91413e58fa1c 100644 --- a/src/coreclr/debug/daccess/daccess.cpp +++ b/src/coreclr/debug/daccess/daccess.cpp @@ -3239,6 +3239,10 @@ ClrDataAccess::QueryInterface(THIS_ { ifaceRet = static_cast<ISOSDacInterface14*>(this); } + else if (IsEqualIID(interfaceId, __uuidof(ISOSDacInterface15))) + { + ifaceRet = static_cast<ISOSDacInterface15*>(this); + } else { *iface = NULL; @@ -8340,6 +8344,44 @@ HRESULT DacMemoryEnumerator::Next(unsigned int count, SOSMemoryRegion regions[], return i < count ? S_FALSE : S_OK; } +HRESULT DacMethodTableSlotEnumerator::Skip(unsigned int count) +{ + mIteratorIndex += count; + return S_OK; +} + +HRESULT DacMethodTableSlotEnumerator::Reset() +{ + mIteratorIndex = 0; + return S_OK; +} + +HRESULT DacMethodTableSlotEnumerator::GetCount(unsigned int* pCount) +{ + if (!pCount) + return E_POINTER; + + *pCount = mMethods.GetCount(); + return S_OK; +} + +HRESULT DacMethodTableSlotEnumerator::Next(unsigned int count, SOSMethodData methods[], unsigned int* pFetched) +{ + if (!pFetched) + return E_POINTER; + + if (!methods) + return E_POINTER; + + unsigned int i = 0; + while (i < count && mIteratorIndex < mMethods.GetCount()) + { + methods[i++] = mMethods.Get(mIteratorIndex++); + } + + *pFetched = i; + return i < count ? S_FALSE : S_OK; +} HRESULT DacGCBookkeepingEnumerator::Init() { diff --git a/src/coreclr/debug/daccess/dacimpl.h b/src/coreclr/debug/daccess/dacimpl.h index 2d2aad5bd1f965..e80e0dd27301ea 100644 --- a/src/coreclr/debug/daccess/dacimpl.h +++ b/src/coreclr/debug/daccess/dacimpl.h @@ -818,7 +818,8 @@ class ClrDataAccess public ISOSDacInterface11, public ISOSDacInterface12, public ISOSDacInterface13, - public ISOSDacInterface14 + public ISOSDacInterface14, + public ISOSDacInterface15 { public: ClrDataAccess(ICorDebugDataTarget * pTarget, ICLRDataTarget * pLegacyTarget=0); @@ -1223,6 +1224,9 @@ class ClrDataAccess virtual HRESULT STDMETHODCALLTYPE GetThreadStaticBaseAddress(CLRDATA_ADDRESS methodTable, CLRDATA_ADDRESS thread, CLRDATA_ADDRESS *nonGCStaticsAddress, CLRDATA_ADDRESS *GCStaticsAddress); virtual HRESULT STDMETHODCALLTYPE GetMethodTableInitializationFlags(CLRDATA_ADDRESS methodTable, MethodTableInitializationFlags *initializationStatus); + // ISOSDacInterface15 + virtual HRESULT STDMETHODCALLTYPE GetMethodTableSlotEnumerator(CLRDATA_ADDRESS mt, ISOSMethodEnum **enumerator); + // // ClrDataAccess.
// @@ -1991,6 +1995,29 @@ class DacMemoryEnumerator : public DefaultCOMImpl<ISOSMemoryEnum, &IID_ISOSMemoryEnum> +class DacMethodTableSlotEnumerator : public DefaultCOMImpl<ISOSMethodEnum, &IID_ISOSMethodEnum> +{ +public: + DacMethodTableSlotEnumerator() : mIteratorIndex(0) + { + } + + virtual ~DacMethodTableSlotEnumerator() {} + + HRESULT Init(PTR_MethodTable mTable); + + HRESULT STDMETHODCALLTYPE Skip(unsigned int count); + HRESULT STDMETHODCALLTYPE Reset(); + HRESULT STDMETHODCALLTYPE GetCount(unsigned int *pCount); + HRESULT STDMETHODCALLTYPE Next(unsigned int count, SOSMethodData methods[], unsigned int *pFetched); + +protected: + DacReferenceList<SOSMethodData> mMethods; + +private: + unsigned int mIteratorIndex; +}; + class DacHandleTableMemoryEnumerator : public DacMemoryEnumerator { public: diff --git a/src/coreclr/debug/daccess/request.cpp b/src/coreclr/debug/daccess/request.cpp index 291d048eed7ed7..46020eef2f6b11 100644 --- a/src/coreclr/debug/daccess/request.cpp +++ b/src/coreclr/debug/daccess/request.cpp @@ -214,11 +214,15 @@ BOOL DacValidateMD(PTR_MethodDesc pMD) if (retval) { - MethodDesc *pMDCheck = MethodDesc::GetMethodDescFromStubAddr(pMD->GetTemporaryEntryPoint(), TRUE); - - if (PTR_HOST_TO_TADDR(pMD) != PTR_HOST_TO_TADDR(pMDCheck)) + PCODE tempEntryPoint = pMD->GetTemporaryEntryPointIfExists(); + if (tempEntryPoint != (PCODE)NULL) { - retval = FALSE; + MethodDesc *pMDCheck = MethodDesc::GetMethodDescFromStubAddr(tempEntryPoint, TRUE); + + if (PTR_HOST_TO_TADDR(pMD) != PTR_HOST_TO_TADDR(pMDCheck)) + { + retval = FALSE; + } } } @@ -419,7 +423,11 @@ ClrDataAccess::GetMethodTableSlot(CLRDATA_ADDRESS mt, unsigned int slot, CLRDATA else if (slot < mTable->GetNumVtableSlots()) { // Now get the slot: - *value = mTable->GetRestoredSlot(slot); + *value = mTable->GetSlot(slot); + if (*value == 0) + { + hr = S_FALSE; + } } else { @@ -430,8 +438,16 @@ MethodDesc * pMD = it.GetMethodDesc(); if (pMD->GetSlot() == slot) { - *value = pMD->GetMethodEntryPoint(); - hr = S_OK; + *value = pMD->GetMethodEntryPointIfExists(); + if (*value == 0) + { + hr = S_FALSE; + } + else + { + hr = S_OK; + } + break; } } } @@ -440,6 +456,89 @@ return hr; } +HRESULT +ClrDataAccess::GetMethodTableSlotEnumerator(CLRDATA_ADDRESS mt, ISOSMethodEnum **enumerator) +{ + if (mt == 0 || enumerator == NULL) + return E_INVALIDARG; + + SOSDacEnter(); + + PTR_MethodTable mTable = PTR_MethodTable(TO_TADDR(mt)); + BOOL bIsFree = FALSE; + if (!DacValidateMethodTable(mTable, bIsFree)) + { + hr = E_INVALIDARG; + } + else + { + DacMethodTableSlotEnumerator *methodTableSlotEnumerator = new (nothrow) DacMethodTableSlotEnumerator(); + *enumerator = methodTableSlotEnumerator; + if (*enumerator == NULL) + { + hr = E_OUTOFMEMORY; + } + else + { + hr = methodTableSlotEnumerator->Init(mTable); + } + } + + SOSDacLeave(); + return hr; +} + +HRESULT DacMethodTableSlotEnumerator::Init(PTR_MethodTable mTable) +{ + unsigned int slot = 0; + + WORD numVtableSlots = mTable->GetNumVtableSlots(); + while (slot < numVtableSlots) + { + MethodDesc* pMD = mTable->GetMethodDescForSlot_NoThrow(slot); + SOSMethodData methodData = {0}; + methodData.MethodDesc = HOST_CDADDR(pMD); + methodData.Entrypoint = mTable->GetSlot(slot); + methodData.DefininingMethodTable = PTR_CDADDR(pMD->GetMethodTable()); + methodData.DefiningModule = HOST_CDADDR(pMD->GetModule()); + methodData.Token = pMD->GetMemberDef(); + + methodData.Slot = slot++; + + if (!mMethods.Add(methodData)) + return E_OUTOFMEMORY; + } + + MethodTable::IntroducedMethodIterator
it(mTable); + for (; it.IsValid(); it.Next()) + { + MethodDesc* pMD = it.GetMethodDesc(); + WORD slot = pMD->GetSlot(); + if (slot >= numVtableSlots) + { + SOSMethodData methodData = {0}; + methodData.MethodDesc = HOST_CDADDR(pMD); + methodData.Entrypoint = pMD->GetMethodEntryPointIfExists(); + methodData.DefininingMethodTable = PTR_CDADDR(pMD->GetMethodTable()); + methodData.DefiningModule = HOST_CDADDR(pMD->GetModule()); + methodData.Token = pMD->GetMemberDef(); + + if (slot == MethodTable::NO_SLOT) + { + methodData.Slot = 0xFFFFFFFF; + } + else + { + methodData.Slot = slot; + } + + if (!mMethods.Add(methodData)) + return E_OUTOFMEMORY; + } + } + + return S_OK; +} HRESULT ClrDataAccess::GetCodeHeapList(CLRDATA_ADDRESS jitManager, unsigned int count, struct DacpJitCodeHeapInfo codeHeaps[], unsigned int *pNeeded) diff --git a/src/coreclr/inc/corinfo.h b/src/coreclr/inc/corinfo.h index cc1ca62f47485e..3103aa2ec00e3d 100644 --- a/src/coreclr/inc/corinfo.h +++ b/src/coreclr/inc/corinfo.h @@ -894,7 +894,7 @@ enum CORINFO_ACCESS_FLAGS { CORINFO_ACCESS_ANY = 0x0000, // Normal access CORINFO_ACCESS_THIS = 0x0001, // Accessed via the this reference - // UNUSED = 0x0002, + CORINFO_ACCESS_PREFER_SLOT_OVER_TEMPORARY_ENTRYPOINT = 0x0002, // Prefer access to a method via slot over using the temporary entrypoint CORINFO_ACCESS_NONNULL = 0x0004, // Instance is guaranteed non-null diff --git a/src/coreclr/inc/gfunc_list.h b/src/coreclr/inc/gfunc_list.h index d5c5b67d9633e3..b7bfa5dc6a5ebd 100644 --- a/src/coreclr/inc/gfunc_list.h +++ b/src/coreclr/inc/gfunc_list.h @@ -13,10 +13,6 @@ DEFINE_DACGFN(DACNotifyCompilationFinished) DEFINE_DACGFN(ThePreStub) -#ifdef TARGET_ARM -DEFINE_DACGFN(ThePreStubCompactARM) -#endif - DEFINE_DACGFN(ThePreStubPatchLabel) #ifdef FEATURE_COMINTEROP DEFINE_DACGFN(Unknown_AddRef) diff --git a/src/coreclr/inc/sospriv.idl b/src/coreclr/inc/sospriv.idl index 98cfa0afe9a515..141f597dcb4e94 100644 --- a/src/coreclr/inc/sospriv.idl +++ b/src/coreclr/inc/sospriv.idl @@ -519,3 +519,46 @@ interface ISOSDacInterface14 : IUnknown HRESULT GetThreadStaticBaseAddress(CLRDATA_ADDRESS methodTable, CLRDATA_ADDRESS thread, CLRDATA_ADDRESS *nonGCStaticsAddress, CLRDATA_ADDRESS *GCStaticsAddress); HRESULT GetMethodTableInitializationFlags(CLRDATA_ADDRESS methodTable, MethodTableInitializationFlags *initializationStatus); } + +cpp_quote("#ifndef _SOS_MethodData") +cpp_quote("#define _SOS_MethodData") + +typedef struct _SOSMethodData +{ + // At least one of MethodDesc, Entrypoint, or Token/DefiningMethodTable/DefiningModule is guaranteed to be set. 
+ // Multiple of them may be set as well + CLRDATA_ADDRESS MethodDesc; + + CLRDATA_ADDRESS Entrypoint; + + CLRDATA_ADDRESS DefininingMethodTable; // Useful for when the method is inherited from a parent type which is instantiated + CLRDATA_ADDRESS DefiningModule; + unsigned int Token; + + // Slot data, a given MethodDesc may be present in multiple slots for a single MethodTable + unsigned int Slot; // Will be set to 0xFFFFFFFF for EnC added methods +} SOSMethodData; + +cpp_quote("#endif //_SOS_MethodData") + +[ + object, + local, + uuid(3c0fe725-c324-4a4f-8100-d399588a662e) +] +interface ISOSMethodEnum : ISOSEnum +{ + HRESULT Next([in] unsigned int count, + [out, size_is(count), length_is(*pNeeded)] SOSMethodData handles[], + [out] unsigned int *pNeeded); +} + +[ + object, + local, + uuid(7ed81261-52a9-4a23-a358-c3313dea30a8) +] +interface ISOSDacInterface15 : IUnknown +{ + HRESULT GetMethodTableSlotEnumerator(CLRDATA_ADDRESS mt, ISOSMethodEnum **enumerator); +} diff --git a/src/coreclr/pal/prebuilt/idl/sospriv_i.cpp b/src/coreclr/pal/prebuilt/idl/sospriv_i.cpp index f070ae5816a8a8..579be51d356f7f 100644 --- a/src/coreclr/pal/prebuilt/idl/sospriv_i.cpp +++ b/src/coreclr/pal/prebuilt/idl/sospriv_i.cpp @@ -121,6 +121,12 @@ MIDL_DEFINE_GUID(IID, IID_ISOSDacInterface13,0x3176a8ed,0x597b,0x4f54,0xa7,0x1f, MIDL_DEFINE_GUID(IID, IID_ISOSDacInterface14,0x9aa22aca,0x6dc6,0x4a0c,0xb4,0xe0,0x70,0xd2,0x41,0x6b,0x98,0x37); + +MIDL_DEFINE_GUID(IID, IID_ISOSMethodEnum,0x3c0fe725,0xc324,0x4a4f,0x81,0x00,0xd3,0x99,0x58,0x8a,0x66,0x2e); + + +MIDL_DEFINE_GUID(IID, IID_ISOSDacInterface15,0x7ed81261,0x52a9,0x4a23,0xa3,0x58,0xc3,0x31,0x3d,0xea,0x30,0xa8); + #undef MIDL_DEFINE_GUID #ifdef __cplusplus diff --git a/src/coreclr/pal/prebuilt/inc/sospriv.h b/src/coreclr/pal/prebuilt/inc/sospriv.h index 64db79c7921cc1..a3d741f740defa 100644 --- a/src/coreclr/pal/prebuilt/inc/sospriv.h +++ b/src/coreclr/pal/prebuilt/inc/sospriv.h @@ -3333,6 +3333,27 @@ EXTERN_C const IID IID_ISOSDacInterface13; #define ISOSDacInterface13_TraverseLoaderHeap(This,loaderHeapAddr,kind,pCallback) \ ( (This)->lpVtbl -> TraverseLoaderHeap(This,loaderHeapAddr,kind,pCallback) ) +#define ISOSDacInterface13_GetDomainLoaderAllocator(This,domainAddress,pLoaderAllocator) \ + ( (This)->lpVtbl -> GetDomainLoaderAllocator(This,domainAddress,pLoaderAllocator) ) + +#define ISOSDacInterface13_GetLoaderAllocatorHeapNames(This,count,ppNames,pNeeded) \ + ( (This)->lpVtbl -> GetLoaderAllocatorHeapNames(This,count,ppNames,pNeeded) ) + +#define ISOSDacInterface13_GetLoaderAllocatorHeaps(This,loaderAllocator,count,pLoaderHeaps,pKinds,pNeeded) \ + ( (This)->lpVtbl -> GetLoaderAllocatorHeaps(This,loaderAllocator,count,pLoaderHeaps,pKinds,pNeeded) ) + +#define ISOSDacInterface13_GetHandleTableMemoryRegions(This,ppEnum) \ + ( (This)->lpVtbl -> GetHandleTableMemoryRegions(This,ppEnum) ) + +#define ISOSDacInterface13_GetGCBookkeepingMemoryRegions(This,ppEnum) \ + ( (This)->lpVtbl -> GetGCBookkeepingMemoryRegions(This,ppEnum) ) + +#define ISOSDacInterface13_GetGCFreeRegions(This,ppEnum) \ + ( (This)->lpVtbl -> GetGCFreeRegions(This,ppEnum) ) + +#define ISOSDacInterface13_LockedFlush(This) \ + ( (This)->lpVtbl -> LockedFlush(This) ) + #endif /* COBJMACROS */ @@ -3456,6 +3477,214 @@ EXTERN_C const IID IID_ISOSDacInterface14; #endif /* __ISOSDacInterface14_INTERFACE_DEFINED__ */ +/* interface __MIDL_itf_sospriv_0000_0019 */ +/* [local] */ + +#ifndef _SOS_MethodData +#define _SOS_MethodData +typedef struct _SOSMethodData + { + CLRDATA_ADDRESS MethodDesc; + CLRDATA_ADDRESS 
Entrypoint; + CLRDATA_ADDRESS DefininingMethodTable; + CLRDATA_ADDRESS DefiningModule; + unsigned int Token; + unsigned int Slot; + } SOSMethodData; + +#endif //_SOS_MethodData + + +extern RPC_IF_HANDLE __MIDL_itf_sospriv_0000_0019_v0_0_c_ifspec; +extern RPC_IF_HANDLE __MIDL_itf_sospriv_0000_0019_v0_0_s_ifspec; + +#ifndef __ISOSMethodEnum_INTERFACE_DEFINED__ +#define __ISOSMethodEnum_INTERFACE_DEFINED__ + +/* interface ISOSMethodEnum */ +/* [uuid][local][object] */ + + +EXTERN_C const IID IID_ISOSMethodEnum; + +#if defined(__cplusplus) && !defined(CINTERFACE) + + MIDL_INTERFACE("3c0fe725-c324-4a4f-8100-d399588a662e") + ISOSMethodEnum : public ISOSEnum + { + public: + virtual HRESULT STDMETHODCALLTYPE Next( + /* [in] */ unsigned int count, + /* [length_is][size_is][out] */ SOSMethodData handles[ ], + /* [out] */ unsigned int *pNeeded) = 0; + + }; + + +#else /* C style interface */ + + typedef struct ISOSMethodEnumVtbl + { + BEGIN_INTERFACE + + HRESULT ( STDMETHODCALLTYPE *QueryInterface )( + ISOSMethodEnum * This, + /* [in] */ REFIID riid, + /* [annotation][iid_is][out] */ + _COM_Outptr_ void **ppvObject); + + ULONG ( STDMETHODCALLTYPE *AddRef )( + ISOSMethodEnum * This); + + ULONG ( STDMETHODCALLTYPE *Release )( + ISOSMethodEnum * This); + + HRESULT ( STDMETHODCALLTYPE *Skip )( + ISOSMethodEnum * This, + /* [in] */ unsigned int count); + + HRESULT ( STDMETHODCALLTYPE *Reset )( + ISOSMethodEnum * This); + + HRESULT ( STDMETHODCALLTYPE *GetCount )( + ISOSMethodEnum * This, + /* [out] */ unsigned int *pCount); + + HRESULT ( STDMETHODCALLTYPE *Next )( + ISOSMethodEnum * This, + /* [in] */ unsigned int count, + /* [length_is][size_is][out] */ SOSMethodData handles[ ], + /* [out] */ unsigned int *pNeeded); + + END_INTERFACE + } ISOSMethodEnumVtbl; + + interface ISOSMethodEnum + { + CONST_VTBL struct ISOSMethodEnumVtbl *lpVtbl; + }; + + + +#ifdef COBJMACROS + + +#define ISOSMethodEnum_QueryInterface(This,riid,ppvObject) \ + ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) + +#define ISOSMethodEnum_AddRef(This) \ + ( (This)->lpVtbl -> AddRef(This) ) + +#define ISOSMethodEnum_Release(This) \ + ( (This)->lpVtbl -> Release(This) ) + + +#define ISOSMethodEnum_Skip(This,count) \ + ( (This)->lpVtbl -> Skip(This,count) ) + +#define ISOSMethodEnum_Reset(This) \ + ( (This)->lpVtbl -> Reset(This) ) + +#define ISOSMethodEnum_GetCount(This,pCount) \ + ( (This)->lpVtbl -> GetCount(This,pCount) ) + + +#define ISOSMethodEnum_Next(This,count,handles,pNeeded) \ + ( (This)->lpVtbl -> Next(This,count,handles,pNeeded) ) + +#endif /* COBJMACROS */ + + +#endif /* C style interface */ + + + + +#endif /* __ISOSMethodEnum_INTERFACE_DEFINED__ */ + + +#ifndef __ISOSDacInterface15_INTERFACE_DEFINED__ +#define __ISOSDacInterface15_INTERFACE_DEFINED__ + +/* interface ISOSDacInterface15 */ +/* [uuid][local][object] */ + + +EXTERN_C const IID IID_ISOSDacInterface15; + +#if defined(__cplusplus) && !defined(CINTERFACE) + + MIDL_INTERFACE("7ed81261-52a9-4a23-a358-c3313dea30a8") + ISOSDacInterface15 : public IUnknown + { + public: + virtual HRESULT STDMETHODCALLTYPE GetMethodTableSlotEnumerator( + CLRDATA_ADDRESS mt, + ISOSMethodEnum **enumerator) = 0; + + }; + + +#else /* C style interface */ + + typedef struct ISOSDacInterface15Vtbl + { + BEGIN_INTERFACE + + HRESULT ( STDMETHODCALLTYPE *QueryInterface )( + ISOSDacInterface15 * This, + /* [in] */ REFIID riid, + /* [annotation][iid_is][out] */ + _COM_Outptr_ void **ppvObject); + + ULONG ( STDMETHODCALLTYPE *AddRef )( + ISOSDacInterface15 * This); + + ULONG ( 
STDMETHODCALLTYPE *Release )( + ISOSDacInterface15 * This); + + HRESULT ( STDMETHODCALLTYPE *GetMethodTableSlotEnumerator )( + ISOSDacInterface15 * This, + CLRDATA_ADDRESS mt, + ISOSMethodEnum **enumerator); + + END_INTERFACE + } ISOSDacInterface15Vtbl; + + interface ISOSDacInterface15 + { + CONST_VTBL struct ISOSDacInterface15Vtbl *lpVtbl; + }; + + + +#ifdef COBJMACROS + + +#define ISOSDacInterface15_QueryInterface(This,riid,ppvObject) \ + ( (This)->lpVtbl -> QueryInterface(This,riid,ppvObject) ) + +#define ISOSDacInterface15_AddRef(This) \ + ( (This)->lpVtbl -> AddRef(This) ) + +#define ISOSDacInterface15_Release(This) \ + ( (This)->lpVtbl -> Release(This) ) + + +#define ISOSDacInterface15_GetMethodTableSlotEnumerator(This,mt,enumerator) \ + ( (This)->lpVtbl -> GetMethodTableSlotEnumerator(This,mt,enumerator) ) + +#endif /* COBJMACROS */ + + +#endif /* C style interface */ + + + + +#endif /* __ISOSDacInterface15_INTERFACE_DEFINED__ */ + + /* Additional Prototypes for ALL interfaces */ /* end of Additional Prototypes */ diff --git a/src/coreclr/tools/Common/JitInterface/CorInfoTypes.cs b/src/coreclr/tools/Common/JitInterface/CorInfoTypes.cs index 12e2af5bd3a36a..8b6c585721754e 100644 --- a/src/coreclr/tools/Common/JitInterface/CorInfoTypes.cs +++ b/src/coreclr/tools/Common/JitInterface/CorInfoTypes.cs @@ -546,7 +546,7 @@ public enum CORINFO_ACCESS_FLAGS { CORINFO_ACCESS_ANY = 0x0000, // Normal access CORINFO_ACCESS_THIS = 0x0001, // Accessed via the this reference - // CORINFO_ACCESS_UNUSED = 0x0002, + CORINFO_ACCESS_PREFER_SLOT_OVER_TEMPORARY_ENTRYPOINT = 0x0002, // Prefer access to a method via slot over using the temporary entrypoint CORINFO_ACCESS_NONNULL = 0x0004, // Instance is guaranteed non-null diff --git a/src/coreclr/vm/arm/asmhelpers.S b/src/coreclr/vm/arm/asmhelpers.S index 81d92b7a107f09..efd2bcd074bc1e 100644 --- a/src/coreclr/vm/arm/asmhelpers.S +++ b/src/coreclr/vm/arm/asmhelpers.S @@ -210,24 +210,6 @@ LOCAL_LABEL(LNullThis): NESTED_END ThePreStub, _TEXT -// ------------------------------------------------------------------ - NESTED_ENTRY ThePreStubCompactARM, _TEXT, NoHandler - - // r12 - address of compact entry point + PC_REG_RELATIVE_OFFSET - - PROLOG_WITH_TRANSITION_BLOCK - - mov r0, r12 - - bl C_FUNC(PreStubGetMethodDescForCompactEntryPoint) - - mov r12, r0 // pMethodDesc - - EPILOG_WITH_TRANSITION_BLOCK_TAILCALL - - b C_FUNC(ThePreStub) - - NESTED_END ThePreStubCompactARM, _TEXT // ------------------------------------------------------------------ // This method does nothing. It's just a fixed function for the debugger to put a breakpoint on. 
LEAF_ENTRY ThePreStubPatch, _TEXT diff --git a/src/coreclr/vm/arm/cgencpu.h b/src/coreclr/vm/arm/cgencpu.h index 7c2e3b8160bc67..c98d94ad6affa3 100644 --- a/src/coreclr/vm/arm/cgencpu.h +++ b/src/coreclr/vm/arm/cgencpu.h @@ -71,8 +71,6 @@ EXTERN_C void checkStack(void); #define JUMP_ALLOCATE_SIZE 8 // # bytes to allocate for a jump instruction #define BACK_TO_BACK_JUMP_ALLOCATE_SIZE 8 // # bytes to allocate for a back to back jump instruction -#define HAS_COMPACT_ENTRYPOINTS 1 - #define HAS_NDIRECT_IMPORT_PRECODE 1 EXTERN_C void getFPReturn(int fpSize, INT64 *pRetVal); diff --git a/src/coreclr/vm/arm/stubs.cpp b/src/coreclr/vm/arm/stubs.cpp index e6302b08bc3c8b..fe58e072c1195b 100644 --- a/src/coreclr/vm/arm/stubs.cpp +++ b/src/coreclr/vm/arm/stubs.cpp @@ -1381,11 +1381,14 @@ VOID StubLinkerCPU::EmitShuffleThunk(ShuffleEntry *pShuffleEntryArray) void StubLinkerCPU::ThumbEmitTailCallManagedMethod(MethodDesc *pMD) { + STANDARD_VM_CONTRACT; + + PCODE multiCallableAddr = pMD->TryGetMultiCallableAddrOfCode(CORINFO_ACCESS_PREFER_SLOT_OVER_TEMPORARY_ENTRYPOINT); // Use direct call if possible. - if (pMD->HasStableEntryPoint()) + if (multiCallableAddr != (PCODE)NULL) { // mov r12, #entry_point - ThumbEmitMovConstant(ThumbReg(12), (TADDR)pMD->GetStableEntryPoint()); + ThumbEmitMovConstant(ThumbReg(12), (TADDR)multiCallableAddr); } else { diff --git a/src/coreclr/vm/arm64/stubs.cpp b/src/coreclr/vm/arm64/stubs.cpp index d1c41a1309f6f7..f12caa85834886 100644 --- a/src/coreclr/vm/arm64/stubs.cpp +++ b/src/coreclr/vm/arm64/stubs.cpp @@ -1614,6 +1614,8 @@ VOID StubLinkerCPU::EmitComputedInstantiatingMethodStub(MethodDesc* pSharedMD, s void StubLinkerCPU::EmitCallLabel(CodeLabel *target, BOOL fTailCall, BOOL fIndirect) { + STANDARD_VM_CONTRACT; + BranchInstructionFormat::VariationCodes variationCode = BranchInstructionFormat::VariationCodes::BIF_VAR_JUMP; if (!fTailCall) variationCode = static_cast<BranchInstructionFormat::VariationCodes>(variationCode | BranchInstructionFormat::VariationCodes::BIF_VAR_CALL); @@ -1626,10 +1628,14 @@ void StubLinkerCPU::EmitCallLabel(CodeLabel *target, BOOL fTailCall, BOOL fIndir void StubLinkerCPU::EmitCallManagedMethod(MethodDesc *pMD, BOOL fTailCall) { + STANDARD_VM_CONTRACT; + + PCODE multiCallableAddr = pMD->TryGetMultiCallableAddrOfCode(CORINFO_ACCESS_PREFER_SLOT_OVER_TEMPORARY_ENTRYPOINT); + // Use direct call if possible.
- if (pMD->HasStableEntryPoint()) + if (multiCallableAddr != (PCODE)NULL) { - EmitCallLabel(NewExternalCodeLabel((LPVOID)pMD->GetStableEntryPoint()), fTailCall, FALSE); + EmitCallLabel(NewExternalCodeLabel((LPVOID)multiCallableAddr), fTailCall, FALSE); } else { diff --git a/src/coreclr/vm/array.cpp b/src/coreclr/vm/array.cpp index 546b2292b35270..b034357ef3c13a 100644 --- a/src/coreclr/vm/array.cpp +++ b/src/coreclr/vm/array.cpp @@ -185,7 +185,6 @@ void ArrayClass::InitArrayMethodDesc( PCCOR_SIGNATURE pShortSig, DWORD cShortSig, DWORD dwVtableSlot, - LoaderAllocator *pLoaderAllocator, AllocMemTracker *pamTracker) { STANDARD_VM_CONTRACT; @@ -198,7 +197,7 @@ void ArrayClass::InitArrayMethodDesc( pNewMD->SetStoredMethodSig(pShortSig, cShortSig); _ASSERTE(!pNewMD->MayHaveNativeCode()); - pNewMD->SetTemporaryEntryPoint(pLoaderAllocator, pamTracker); + pNewMD->SetTemporaryEntryPoint(pamTracker); #ifdef _DEBUG _ASSERTE(pNewMD->GetMethodName() && GetDebugClassName()); @@ -509,7 +508,7 @@ MethodTable* Module::CreateArrayMethodTable(TypeHandle elemTypeHnd, CorElementTy pClass->GenerateArrayAccessorCallSig(dwFuncRank, dwFuncType, &pSig, &cSig, pAllocator, pamTracker, FALSE); - pClass->InitArrayMethodDesc(pNewMD, pSig, cSig, numVirtuals + dwMethodIndex, pAllocator, pamTracker); + pClass->InitArrayMethodDesc(pNewMD, pSig, cSig, numVirtuals + dwMethodIndex, pamTracker); dwMethodIndex++; } diff --git a/src/coreclr/vm/class.cpp b/src/coreclr/vm/class.cpp index 9d85bc141e115a..e7aec3bce335e8 100644 --- a/src/coreclr/vm/class.cpp +++ b/src/coreclr/vm/class.cpp @@ -801,7 +801,7 @@ HRESULT EEClass::AddMethodDesc( COMMA_INDEBUG(NULL) ); - pNewMD->SetTemporaryEntryPoint(pAllocator, &dummyAmTracker); + pNewMD->SetTemporaryEntryPoint(&dummyAmTracker); // [TODO] if an exception is thrown, asserts will fire in EX_CATCH_HRESULT() // during an EnC operation due to the debugger thread not being able to @@ -1407,7 +1407,7 @@ void ClassLoader::ValidateMethodsWithCovariantReturnTypes(MethodTable* pMT) { // The real check is that the MethodDesc's must not match, but a simple VTable check will // work most of the time, and is far faster than the GetMethodDescForSlot method. - _ASSERTE(pMT->GetMethodDescForSlot(i) == pParentMT->GetMethodDescForSlot(i)); + _ASSERTE(pMT->GetMethodDescForSlot_NoThrow(i) == pParentMT->GetMethodDescForSlot_NoThrow(i)); continue; } MethodDesc* pMD = pMT->GetMethodDescForSlot(i); @@ -1525,7 +1525,7 @@ void ClassLoader::PropagateCovariantReturnMethodImplSlots(MethodTable* pMT) { // The real check is that the MethodDesc's must not match, but a simple VTable check will // work most of the time, and is far faster than the GetMethodDescForSlot method. - _ASSERTE(pMT->GetMethodDescForSlot(i) == pParentMT->GetMethodDescForSlot(i)); + _ASSERTE(pMT->GetMethodDescForSlot_NoThrow(i) == pParentMT->GetMethodDescForSlot_NoThrow(i)); continue; } @@ -1575,7 +1575,7 @@ void ClassLoader::PropagateCovariantReturnMethodImplSlots(MethodTable* pMT) // This is a vtable slot that needs to be updated to the new overriding method because of the // presence of the attribute. 
pMT->SetSlot(j, pMT->GetSlot(i)); - _ASSERT(pMT->GetMethodDescForSlot(j) == pMD); + _ASSERT(pMT->GetMethodDescForSlot_NoThrow(j) == pMD); if (!hMTData.IsNull()) hMTData->UpdateImplMethodDesc(pMD, j); diff --git a/src/coreclr/vm/class.h b/src/coreclr/vm/class.h index 74c66714555f36..1eff90b672bb0b 100644 --- a/src/coreclr/vm/class.h +++ b/src/coreclr/vm/class.h @@ -1983,7 +1983,6 @@ class ArrayClass : public EEClass PCCOR_SIGNATURE pShortSig, DWORD cShortSig, DWORD dwVtableSlot, - LoaderAllocator *pLoaderAllocator, AllocMemTracker *pamTracker); // Generate a short sig for an array accessor @@ -2064,17 +2063,6 @@ inline PCODE GetPreStubEntryPoint() return GetEEFuncEntryPoint(ThePreStub); } -#if defined(HAS_COMPACT_ENTRYPOINTS) && defined(TARGET_ARM) - -EXTERN_C void STDCALL ThePreStubCompactARM(); - -inline PCODE GetPreStubCompactARMEntryPoint() -{ - return GetEEFuncEntryPoint(ThePreStubCompactARM); -} - -#endif // defined(HAS_COMPACT_ENTRYPOINTS) && defined(TARGET_ARM) - PCODE TheUMThunkPreStub(); PCODE TheVarargNDirectStub(BOOL hasRetBuffArg); diff --git a/src/coreclr/vm/clsload.cpp b/src/coreclr/vm/clsload.cpp index f694bce1ade52f..c2a2f9c7a90ad1 100644 --- a/src/coreclr/vm/clsload.cpp +++ b/src/coreclr/vm/clsload.cpp @@ -2775,6 +2775,16 @@ TypeHandle ClassLoader::PublishType(const TypeKey *pTypeKey, TypeHandle typeHnd) } CONTRACTL_END; +#ifdef _DEBUG + if (!typeHnd.IsTypeDesc()) + { + // The IsPublished flag is used by various asserts to assure that allocations of + // MethodTable associated memory which do not use the AllocMemTracker of the MethodTableBuilder + // aren't permitted until the MethodTable is in a state where the MethodTable object + // cannot be freed (except by freeing an entire LoaderAllocator) + typeHnd.AsMethodTable()->GetAuxiliaryDataForWrite()->SetIsPublished(); + } +#endif if (pTypeKey->IsConstructed()) { diff --git a/src/coreclr/vm/comutilnative.cpp b/src/coreclr/vm/comutilnative.cpp index a281ac7505d089..5136cb83994b39 100644 --- a/src/coreclr/vm/comutilnative.cpp +++ b/src/coreclr/vm/comutilnative.cpp @@ -1503,7 +1503,7 @@ extern "C" void QCALLTYPE Interlocked_MemoryBarrierProcessWide() static BOOL HasOverriddenMethod(MethodTable* mt, MethodTable* classMT, WORD methodSlot) { CONTRACTL{ - NOTHROW; + THROWS; GC_NOTRIGGER; MODE_ANY; } CONTRACTL_END; @@ -1811,7 +1811,7 @@ static WORD g_slotBeginWrite, g_slotEndWrite; static bool HasOverriddenStreamMethod(MethodTable * pMT, WORD slot) { CONTRACTL{ - NOTHROW; + THROWS; GC_NOTRIGGER; MODE_ANY; } CONTRACTL_END; diff --git a/src/coreclr/vm/dynamicmethod.cpp b/src/coreclr/vm/dynamicmethod.cpp index 12c5d6f0386f74..6d674130010f8c 100644 --- a/src/coreclr/vm/dynamicmethod.cpp +++ b/src/coreclr/vm/dynamicmethod.cpp @@ -189,7 +189,7 @@ void DynamicMethodTable::AddMethodsToList() pResolver->m_DynamicMethodTable = this; pNewMD->m_pResolver = pResolver; - pNewMD->SetTemporaryEntryPoint(m_pDomain->GetLoaderAllocator(), &amt); + pNewMD->SetTemporaryEntryPoint(&amt); #ifdef _DEBUG pNewMD->m_pDebugMethodTable = m_pMethodTable; diff --git a/src/coreclr/vm/frames.cpp b/src/coreclr/vm/frames.cpp index 62a1a78c6c1283..650d1ba485f589 100644 --- a/src/coreclr/vm/frames.cpp +++ b/src/coreclr/vm/frames.cpp @@ -566,7 +566,7 @@ BOOL PrestubMethodFrame::TraceFrame(Thread *thread, BOOL fromPatch, // native code versions, even if they aren't the one that was reported by this trace, see // DebuggerController::PatchTrace() under case TRACE_MANAGED. This alleviates the StubManager from having to prevent the // race that occurs here. 
- trace->InitForStub(GetFunction()->GetMethodEntryPoint()); + trace->InitForStub(GetFunction()->GetMethodEntryPointIfExists()); } else { @@ -612,7 +612,7 @@ MethodDesc* StubDispatchFrame::GetFunction() { if (m_pRepresentativeMT != NULL) { - pMD = m_pRepresentativeMT->GetMethodDescForSlot(m_representativeSlot); + pMD = m_pRepresentativeMT->GetMethodDescForSlot_NoThrow(m_representativeSlot); #ifndef DACCESS_COMPILE m_pMD = pMD; #endif diff --git a/src/coreclr/vm/genmeth.cpp b/src/coreclr/vm/genmeth.cpp index a4d28d12eff614..64394585c8fc3d 100644 --- a/src/coreclr/vm/genmeth.cpp +++ b/src/coreclr/vm/genmeth.cpp @@ -440,7 +440,7 @@ InstantiatedMethodDesc::NewInstantiatedMethodDesc(MethodTable *pExactMT, // Check that whichever field holds the inst. got setup correctly _ASSERTE((PVOID)pNewMD->GetMethodInstantiation().GetRawArgs() == (PVOID)pInstOrPerInstInfo); - pNewMD->SetTemporaryEntryPoint(pAllocator, &amt); + pNewMD->SetTemporaryEntryPoint(&amt); { // The canonical instantiation is exempt from constraint checks. It's used as the basis @@ -905,7 +905,7 @@ MethodDesc::FindOrCreateAssociatedMethodDesc(MethodDesc* pDefMD, pResultMD->SetIsUnboxingStub(); pResultMD->AsInstantiatedMethodDesc()->SetupWrapperStubWithInstantiations(pMDescInCanonMT, 0, NULL); - pResultMD->SetTemporaryEntryPoint(pAllocator, &amt); + pResultMD->SetTemporaryEntryPoint(&amt); amt.SuppressRelease(); @@ -986,7 +986,7 @@ MethodDesc::FindOrCreateAssociatedMethodDesc(MethodDesc* pDefMD, pNonUnboxingStub->GetNumGenericMethodArgs(), (TypeHandle *)pNonUnboxingStub->GetMethodInstantiation().GetRawArgs()); - pResultMD->SetTemporaryEntryPoint(pAllocator, &amt); + pResultMD->SetTemporaryEntryPoint(&amt); amt.SuppressRelease(); diff --git a/src/coreclr/vm/i386/cgencpu.h b/src/coreclr/vm/i386/cgencpu.h index e99b8f542b5900..05013a5018512b 100644 --- a/src/coreclr/vm/i386/cgencpu.h +++ b/src/coreclr/vm/i386/cgencpu.h @@ -51,8 +51,6 @@ EXTERN_C void SinglecastDelegateInvokeStub(); #define JUMP_ALLOCATE_SIZE 8 // # bytes to allocate for a jump instruction #define BACK_TO_BACK_JUMP_ALLOCATE_SIZE 8 // # bytes to allocate for a back to back jump instruction -#define HAS_COMPACT_ENTRYPOINTS 1 - // Needed for PInvoke inlining in ngened images #define HAS_NDIRECT_IMPORT_PRECODE 1 diff --git a/src/coreclr/vm/i386/stublinkerx86.cpp b/src/coreclr/vm/i386/stublinkerx86.cpp index cfe9eec74af2e5..cfd814678fdc11 100644 --- a/src/coreclr/vm/i386/stublinkerx86.cpp +++ b/src/coreclr/vm/i386/stublinkerx86.cpp @@ -2977,9 +2977,13 @@ VOID StubLinkerCPU::EmitComputedInstantiatingMethodStub(MethodDesc* pSharedMD, s #ifdef TARGET_AMD64 VOID StubLinkerCPU::EmitLoadMethodAddressIntoAX(MethodDesc *pMD) { - if (pMD->HasStableEntryPoint()) + STANDARD_VM_CONTRACT; + + PCODE multiCallableAddr = pMD->TryGetMultiCallableAddrOfCode(CORINFO_ACCESS_PREFER_SLOT_OVER_TEMPORARY_ENTRYPOINT); + + if (multiCallableAddr != (PCODE)NULL) { - X86EmitRegLoad(kRAX, pMD->GetStableEntryPoint());// MOV RAX, DWORD + X86EmitRegLoad(kRAX, multiCallableAddr);// MOV RAX, DWORD } else { @@ -2992,14 +2996,17 @@ VOID StubLinkerCPU::EmitLoadMethodAddressIntoAX(MethodDesc *pMD) VOID StubLinkerCPU::EmitTailJumpToMethod(MethodDesc *pMD) { + STANDARD_VM_CONTRACT; + #ifdef TARGET_AMD64 EmitLoadMethodAddressIntoAX(pMD); Emit16(X86_INSTR_JMP_EAX); #else + PCODE multiCallableAddr = pMD->TryGetMultiCallableAddrOfCode(CORINFO_ACCESS_PREFER_SLOT_OVER_TEMPORARY_ENTRYPOINT); // Use direct call if possible - if (pMD->HasStableEntryPoint()) + if (multiCallableAddr != (PCODE)NULL) { - 
X86EmitNearJump(NewExternalCodeLabel((LPVOID) pMD->GetStableEntryPoint())); + X86EmitNearJump(NewExternalCodeLabel((LPVOID)multiCallableAddr)); } else { diff --git a/src/coreclr/vm/ilstubcache.cpp b/src/coreclr/vm/ilstubcache.cpp index 65afb93865dcd1..cf97371eee30e0 100644 --- a/src/coreclr/vm/ilstubcache.cpp +++ b/src/coreclr/vm/ilstubcache.cpp @@ -193,7 +193,7 @@ MethodDesc* ILStubCache::CreateNewMethodDesc(LoaderHeap* pCreationHeap, MethodTa // the no metadata part of the method desc pMD->m_pszMethodName = (PTR_CUTF8)"IL_STUB"; pMD->InitializeFlags(DynamicMethodDesc::FlagPublic | DynamicMethodDesc::FlagIsILStub); - pMD->SetTemporaryEntryPoint(pMT->GetLoaderAllocator(), pamTracker); + pMD->SetTemporaryEntryPoint(pamTracker); // // convert signature to a compatible signature if needed diff --git a/src/coreclr/vm/jithelpers.cpp b/src/coreclr/vm/jithelpers.cpp index 1bfeaf2b039289..c5798f2b1605f0 100644 --- a/src/coreclr/vm/jithelpers.cpp +++ b/src/coreclr/vm/jithelpers.cpp @@ -5313,7 +5313,7 @@ HCIMPL3(void, JIT_VTableProfile32, Object* obj, CORINFO_METHOD_HANDLE baseMethod WORD slot = pBaseMD->GetSlot(); _ASSERTE(slot < pBaseMD->GetMethodTable()->GetNumVirtuals()); - MethodDesc* pMD = pMT->GetMethodDescForSlot(slot); + MethodDesc* pMD = pMT->GetMethodDescForSlot_NoThrow(slot); MethodDesc* pRecordedMD = (MethodDesc*)DEFAULT_UNKNOWN_HANDLE; if (!pMD->GetLoaderAllocator()->IsCollectible() && !pMD->IsDynamicMethod()) @@ -5362,7 +5362,7 @@ HCIMPL3(void, JIT_VTableProfile64, Object* obj, CORINFO_METHOD_HANDLE baseMethod WORD slot = pBaseMD->GetSlot(); _ASSERTE(slot < pBaseMD->GetMethodTable()->GetNumVirtuals()); - MethodDesc* pMD = pMT->GetMethodDescForSlot(slot); + MethodDesc* pMD = pMT->GetMethodDescForSlot_NoThrow(slot); MethodDesc* pRecordedMD = (MethodDesc*)DEFAULT_UNKNOWN_HANDLE; if (!pMD->GetLoaderAllocator()->IsCollectible() && !pMD->IsDynamicMethod()) diff --git a/src/coreclr/vm/jitinterface.cpp b/src/coreclr/vm/jitinterface.cpp index 935f1ccde8202c..945a254aecbcdf 100644 --- a/src/coreclr/vm/jitinterface.cpp +++ b/src/coreclr/vm/jitinterface.cpp @@ -8585,14 +8585,15 @@ void CEEInfo::getMethodVTableOffset (CORINFO_METHOD_HANDLE methodHnd, bool * isRelative) { CONTRACTL { - NOTHROW; - GC_NOTRIGGER; + THROWS; + GC_TRIGGERS; MODE_PREEMPTIVE; } CONTRACTL_END; - JIT_TO_EE_TRANSITION_LEAF(); + JIT_TO_EE_TRANSITION(); MethodDesc* method = GetMethod(methodHnd); + method->EnsureTemporaryEntryPoint(); //@GENERICS: shouldn't be doing this for instantiated methods as they live elsewhere _ASSERTE(!method->HasMethodInstantiation()); @@ -8606,7 +8607,7 @@ void CEEInfo::getMethodVTableOffset (CORINFO_METHOD_HANDLE methodHnd, *pOffsetAfterIndirection = MethodTable::GetIndexAfterVtableIndirection(method->GetSlot()) * TARGET_POINTER_SIZE /* sizeof(MethodTable::VTableIndir2_t) */; *isRelative = false; - EE_TO_JIT_TRANSITION_LEAF(); + EE_TO_JIT_TRANSITION(); } /*********************************************************************/ diff --git a/src/coreclr/vm/loongarch64/stubs.cpp b/src/coreclr/vm/loongarch64/stubs.cpp index a4006f99e94f7a..8f9b46325db37c 100644 --- a/src/coreclr/vm/loongarch64/stubs.cpp +++ b/src/coreclr/vm/loongarch64/stubs.cpp @@ -1465,6 +1465,8 @@ VOID StubLinkerCPU::EmitComputedInstantiatingMethodStub(MethodDesc* pSharedMD, s void StubLinkerCPU::EmitCallLabel(CodeLabel *target, BOOL fTailCall, BOOL fIndirect) { + STANDARD_VM_CONTRACT; + BranchInstructionFormat::VariationCodes variationCode = BranchInstructionFormat::VariationCodes::BIF_VAR_JUMP; if (!fTailCall) variationCode = 
static_cast<BranchInstructionFormat::VariationCodes>(variationCode | BranchInstructionFormat::VariationCodes::BIF_VAR_CALL); @@ -1477,10 +1479,14 @@ void StubLinkerCPU::EmitCallLabel(CodeLabel *target, BOOL fTailCall, BOOL fIndir void StubLinkerCPU::EmitCallManagedMethod(MethodDesc *pMD, BOOL fTailCall) { + STANDARD_VM_CONTRACT; + + PCODE multiCallableAddr = pMD->TryGetMultiCallableAddrOfCode(CORINFO_ACCESS_PREFER_SLOT_OVER_TEMPORARY_ENTRYPOINT); + // Use direct call if possible. - if (pMD->HasStableEntryPoint()) + if (multiCallableAddr != (PCODE)NULL) { - EmitCallLabel(NewExternalCodeLabel((LPVOID)pMD->GetStableEntryPoint()), fTailCall, FALSE); + EmitCallLabel(NewExternalCodeLabel((LPVOID)multiCallableAddr), fTailCall, FALSE); } else { diff --git a/src/coreclr/vm/method.cpp b/src/coreclr/vm/method.cpp index 628b8c6f6e45dc..4662b86d8300cd 100644 --- a/src/coreclr/vm/method.cpp +++ b/src/coreclr/vm/method.cpp @@ -209,7 +209,7 @@ LoaderAllocator * MethodDesc::GetDomainSpecificLoaderAllocator() } -HRESULT MethodDesc::EnsureCodeDataExists() +HRESULT MethodDesc::EnsureCodeDataExists(AllocMemTracker *pamTracker) { CONTRACTL { @@ -218,13 +218,20 @@ } CONTRACTL_END; + // Assert that the associated type is published. This isn't quite sufficient to cover the case of allocating + // this while creating a standalone MethodDesc, but it catches most of the cases where lost allocations could easily happen. + _ASSERTE(pamTracker != NULL || GetMethodTable()->GetAuxiliaryData()->IsPublished()); + if (m_codeData != NULL) return S_OK; LoaderHeap* heap = GetLoaderAllocator()->GetHighFrequencyHeap(); AllocMemTracker amTracker; - MethodDescCodeData* alloc = (MethodDescCodeData*)amTracker.Track_NoThrow(heap->AllocMem_NoThrow(S_SIZE_T(sizeof(MethodDescCodeData)))); + if (pamTracker == NULL) + pamTracker = &amTracker; + + MethodDescCodeData* alloc = (MethodDescCodeData*)pamTracker->Track_NoThrow(heap->AllocMem_NoThrow(S_SIZE_T(sizeof(MethodDescCodeData)))); + if (alloc == NULL) return E_OUTOFMEMORY; @@ -240,7 +247,7 @@ HRESULT MethodDesc::SetMethodDescVersionState(PTR_MethodDescVersioningState stat WRAPPER_NO_CONTRACT; HRESULT hr; - IfFailRet(EnsureCodeDataExists()); + IfFailRet(EnsureCodeDataExists(NULL)); _ASSERTE(m_codeData != NULL); if (InterlockedCompareExchangeT(&m_codeData->VersioningState, state, NULL) != NULL) @@ -254,9 +261,10 @@ PTR_MethodDescVersioningState MethodDesc::GetMethodDescVersionState() { WRAPPER_NO_CONTRACT; - if (m_codeData == NULL) + PTR_MethodDescCodeData codeData = VolatileLoadWithoutBarrier(&m_codeData); + if (codeData == NULL) return NULL; - return m_codeData->VersioningState; + return VolatileLoadWithoutBarrier(&codeData->VersioningState); } //******************************************************************************* @@ -493,7 +501,7 @@ Signature MethodDesc::GetSignature() return Signature(pSig, cSig); } -PCODE MethodDesc::GetMethodEntryPoint() +PCODE MethodDesc::GetMethodEntryPointIfExists() { CONTRACTL { @@ -507,7 +515,7 @@ // Similarly to SetMethodEntryPoint(), it is up to the caller to ensure that calls to this function are appropriately // synchronized - // Keep implementations of MethodDesc::GetMethodEntryPoint and MethodDesc::GetAddrOfSlot in sync! + // Keep implementations of MethodDesc::GetMethodEntryPoint, MethodDesc::GetMethodEntryPointIfExists, and MethodDesc::GetAddrOfSlot in sync!
if (HasNonVtableSlot()) { @@ -522,6 +530,42 @@ return GetMethodTable()->GetSlot(GetSlot()); } +#ifndef DACCESS_COMPILE +PCODE MethodDesc::GetMethodEntryPoint() +{ + CONTRACTL + { + THROWS; + GC_NOTRIGGER; + MODE_ANY; + SUPPORTS_DAC; + } + CONTRACTL_END; + + // Similarly to SetMethodEntryPoint(), it is up to the caller to ensure that calls to this function are appropriately + // synchronized + + // Keep implementations of MethodDesc::GetMethodEntryPoint, MethodDesc::GetMethodEntryPointIfExists, and MethodDesc::GetAddrOfSlot in sync! + + if (HasNonVtableSlot()) + { + SIZE_T size = GetBaseSize(); + + TADDR pSlot = dac_cast<TADDR>(this) + size; + + if (*PTR_PCODE(pSlot) == (PCODE)NULL) + { + EnsureTemporaryEntryPoint(); + _ASSERTE(*PTR_PCODE(pSlot) != (PCODE)NULL); + } + return *PTR_PCODE(pSlot); + } + + _ASSERTE(GetMethodTable()->IsCanonicalMethodTable()); + return GetMethodTable()->GetRestoredSlot(GetSlot()); +} +#endif // DACCESS_COMPILE + PTR_PCODE MethodDesc::GetAddrOfSlot() { CONTRACTL { @@ -533,7 +577,7 @@ } CONTRACTL_END; - // Keep implementations of MethodDesc::GetMethodEntryPoint and MethodDesc::GetAddrOfSlot in sync! + // Keep implementations of MethodDesc::GetMethodEntryPoint, MethodDesc::GetMethodEntryPointIfExists, and MethodDesc::GetAddrOfSlot in sync! if (HasNonVtableSlot()) { SIZE_T size = GetBaseSize(); @@ -918,6 +962,95 @@ WORD MethodDesc::InterlockedUpdateFlags3(WORD wMask, BOOL fSet) return wOldState; } +BYTE MethodDesc::InterlockedUpdateFlags4(BYTE bMask, BOOL fSet) +{ + LIMITED_METHOD_CONTRACT; + + BYTE bOldState = m_bFlags4; + DWORD dwMask = bMask; + + // We need to make this operation atomic (multiple threads can play with the flags field at the same time). But the flags field + // is a byte and we only have interlock operations over dwords. So we round down the flags field address to the nearest aligned + // dword (along with the intended bitfield mask). Note that we make the assumption that the flags byte is aligned itself, so we + // only have four possibilities: the field already lies on a dword boundary or it's 1, 2 or 3 bytes out + LONG* pdwFlags = (LONG*)((ULONG_PTR)&m_bFlags4 - (offsetof(MethodDesc, m_bFlags4) & 0x3)); + +#ifdef _PREFAST_ +#pragma warning(push) +#pragma warning(disable:6326) // "Suppress PREFast warning about comparing two constants" +#endif // _PREFAST_ + +#if BIGENDIAN + if ((offsetof(MethodDesc, m_bFlags4) & 0x3) == 0) { +#else // !BIGENDIAN + if ((offsetof(MethodDesc, m_bFlags4) & 0x3) == 3) { +#endif // !BIGENDIAN + dwMask <<= 24; + } +#if BIGENDIAN + else if ((offsetof(MethodDesc, m_bFlags4) & 0x3) == 1) { +#else // !BIGENDIAN + else if ((offsetof(MethodDesc, m_bFlags4) & 0x3) == 2) { +#endif // !BIGENDIAN + dwMask <<= 16; + } +#if BIGENDIAN + else if ((offsetof(MethodDesc, m_bFlags4) & 0x3) == 2) { +#else // !BIGENDIAN + else if ((offsetof(MethodDesc, m_bFlags4) & 0x3) == 1) { +#endif // !BIGENDIAN + dwMask <<= 8; + } +#ifdef _PREFAST_ +#pragma warning(pop) +#endif + + if (fSet) + InterlockedOr(pdwFlags, dwMask); + else + InterlockedAnd(pdwFlags, ~dwMask); + + return bOldState; +} + +WORD MethodDescChunk::InterlockedUpdateFlags(WORD wMask, BOOL fSet) +{ + LIMITED_METHOD_CONTRACT; + + WORD wOldState = m_flagsAndTokenRange; + DWORD dwMask = wMask; + + // We need to make this operation atomic (multiple threads can play with the flags field at the same time). But the flags field + // is a word and we only have interlock operations over dwords.
So we round down the flags field address to the nearest aligned + // dword (along with the intended bitfield mask). Note that we make the assumption that the flags word is aligned itself, so we + // only have two possibilities: the field already lies on a dword boundary or it's precisely one word out. + LONG* pdwFlags = (LONG*)((ULONG_PTR)&m_flagsAndTokenRange - (offsetof(MethodDescChunk, m_flagsAndTokenRange) & 0x3)); + +#ifdef _PREFAST_ +#pragma warning(push) +#pragma warning(disable:6326) // "Suppress PREFast warning about comparing two constants" +#endif // _PREFAST_ + +#if BIGENDIAN + if ((offsetof(MethodDescChunk, m_flagsAndTokenRange) & 0x3) == 0) { +#else // !BIGENDIAN + if ((offsetof(MethodDescChunk, m_flagsAndTokenRange) & 0x3) != 0) { +#endif // !BIGENDIAN + static_assert_no_msg(sizeof(m_flagsAndTokenRange) == 2); + dwMask <<= 16; + } +#ifdef _PREFAST_ +#pragma warning(pop) +#endif + + if (fSet) + InterlockedOr(pdwFlags, dwMask); + else + InterlockedAnd(pdwFlags, ~dwMask); + + return wOldState; +} + #endif // !DACCESS_COMPILE //******************************************************************************* @@ -1736,13 +1869,6 @@ MethodDescChunk *MethodDescChunk::CreateChunk(LoaderHeap *pHeap, DWORD methodDes DWORD maxMethodDescsPerChunk = (DWORD)(MethodDescChunk::MaxSizeOfMethodDescs / oneSize); - // Limit the maximum MethodDescs per chunk by the number of precodes that can fit to a single memory page, - // since we allocate consecutive temporary entry points for all MethodDescs in the whole chunk. - DWORD maxPrecodesPerPage = Precode::GetMaxTemporaryEntryPointsCount(); - - if (maxPrecodesPerPage < maxMethodDescsPerChunk) - maxMethodDescsPerChunk = maxPrecodesPerPage; - if (methodDescCount == 0) methodDescCount = maxMethodDescsPerChunk; @@ -1753,10 +1879,10 @@ MethodDescChunk *MethodDescChunk::CreateChunk(LoaderHeap *pHeap, DWORD methodDes DWORD count = min(methodDescCount, maxMethodDescsPerChunk); void * pMem = pamTracker->Track( - pHeap->AllocMem(S_SIZE_T(sizeof(TADDR) + sizeof(MethodDescChunk) + oneSize * count))); + pHeap->AllocMem(S_SIZE_T(sizeof(MethodDescChunk) + oneSize * count))); // Skip pointer to temporary entrypoints - MethodDescChunk * pChunk = (MethodDescChunk *)((BYTE*)pMem + sizeof(TADDR)); + MethodDescChunk * pChunk = (MethodDescChunk *)((BYTE*)pMem); pChunk->SetSizeAndCount(oneSize * count, count); pChunk->SetMethodTable(pInitialMT); @@ -1765,7 +1891,6 @@ MethodDescChunk *MethodDescChunk::CreateChunk(LoaderHeap *pHeap, DWORD methodDes for (DWORD i = 0; i < count; i++) { pMD->SetChunkIndex(pChunk); - pMD->SetMethodDescIndex(i); pMD->SetClassification(classification); if (fNonVtableSlot) @@ -2031,6 +2156,9 @@ PCODE MethodDesc::TryGetMultiCallableAddrOfCode(CORINFO_ACCESS_FLAGS accessFlags _ASSERTE((accessFlags & ~CORINFO_ACCESS_LDFTN) == 0); } + if (RequiresStableEntryPoint() && !HasStableEntryPoint()) + GetOrCreatePrecode(); + // We create stable entrypoints for these upfront if (IsWrapperStub() || IsEnCAddedMethod()) return GetStableEntryPoint(); @@ -2070,6 +2198,10 @@ PCODE MethodDesc::TryGetMultiCallableAddrOfCode(CORINFO_ACCESS_FLAGS accessFlags if (IsVersionableWithVtableSlotBackpatch()) { // Caller has to call via slot or allocate funcptr stub + + // But we need to ensure that some entrypoint is allocated and present in the slot, so that + // it can be used. 
+ EnsureTemporaryEntryPoint(); return (PCODE)NULL; } @@ -2077,16 +2209,24 @@ PCODE MethodDesc::TryGetMultiCallableAddrOfCode(CORINFO_ACCESS_FLAGS accessFlags if (MayHavePrecode()) return GetOrCreatePrecode()->GetEntryPoint(); -#ifdef HAS_COMPACT_ENTRYPOINTS - // Caller has to call via slot or allocate funcptr stub - return NULL; -#else // HAS_COMPACT_ENTRYPOINTS - // - // Embed call to the temporary entrypoint into the code. It will be patched - // to point to the actual code later. - // - return GetTemporaryEntryPoint(); -#endif // HAS_COMPACT_ENTRYPOINTS + _ASSERTE(!RequiresStableEntryPoint()); + + if (accessFlags & CORINFO_ACCESS_PREFER_SLOT_OVER_TEMPORARY_ENTRYPOINT) + { + // If this access flag is set, prefer returning NULL over returning the temporary entrypoint + // But we need to ensure that some entrypoint is allocated and present in the slot, so that + // it can be used. + EnsureTemporaryEntryPoint(); + return (PCODE)NULL; + } + else + { + // + // Embed call to the temporary entrypoint into the code. It will be patched + // to point to the actual code later. + // + return GetTemporaryEntryPoint(); + } } //******************************************************************************* @@ -2214,7 +2354,8 @@ BOOL MethodDesc::IsPointingToPrestub() { if (IsVersionableWithVtableSlotBackpatch()) { - return GetMethodEntryPoint() == GetTemporaryEntryPoint(); + PCODE methodEntrypoint = GetMethodEntryPointIfExists(); + return methodEntrypoint == GetTemporaryEntryPointIfExists() && methodEntrypoint != (PCODE)NULL; } return TRUE; } @@ -2321,6 +2462,24 @@ BOOL MethodDesc::RequiresMethodDescCallingConvention(BOOL fEstimateForChunk /*=F //******************************************************************************* BOOL MethodDesc::RequiresStableEntryPoint(BOOL fEstimateForChunk /*=FALSE*/) +{ + BYTE bFlags4 = VolatileLoadWithoutBarrier(&m_bFlags4); + if (bFlags4 & enum_flag4_ComputedRequiresStableEntryPoint) + { + return (bFlags4 & enum_flag4_RequiresStableEntryPoint) != 0; + } + else + { + if (fEstimateForChunk) + return RequiresStableEntryPointCore(fEstimateForChunk); + BOOL fRequiresStableEntryPoint = RequiresStableEntryPointCore(FALSE); + BYTE requiresStableEntrypointFlags = (BYTE)(enum_flag4_ComputedRequiresStableEntryPoint | (fRequiresStableEntryPoint ? 
enum_flag4_RequiresStableEntryPoint : 0)); + InterlockedUpdateFlags4(requiresStableEntrypointFlags, TRUE); + return fRequiresStableEntryPoint; + } +} + +BOOL MethodDesc::RequiresStableEntryPointCore(BOOL fEstimateForChunk) { LIMITED_METHOD_CONTRACT; @@ -2464,14 +2623,6 @@ MethodDesc* MethodDesc::GetMethodDescFromStubAddr(PCODE addr, BOOL fSpeculative MethodDesc * pMD = NULL; -#ifdef HAS_COMPACT_ENTRYPOINTS - if (MethodDescChunk::IsCompactEntryPointAtAddress(addr)) - { - pMD = MethodDescChunk::GetMethodDescFromCompactEntryPoint(addr, fSpeculative); - RETURN(pMD); - } -#endif // HAS_COMPACT_ENTRYPOINTS - // Otherwise this must be some kind of precode // PTR_Precode pPrecode = Precode::GetPrecodeFromEntryPoint(addr, fSpeculative); @@ -2485,485 +2636,167 @@ MethodDesc* MethodDesc::GetMethodDescFromStubAddr(PCODE addr, BOOL fSpeculative RETURN(NULL); // Not found } -#ifdef HAS_COMPACT_ENTRYPOINTS - -#if defined(TARGET_X86) - -#include -static const struct CentralJumpCode { - BYTE m_movzxEAX[3]; - BYTE m_shlEAX[3]; - BYTE m_addEAX[1]; - MethodDesc* m_pBaseMD; - BYTE m_jmp[1]; - INT32 m_rel32; - - inline void Setup(CentralJumpCode* pCodeRX, MethodDesc* pMD, PCODE target, LoaderAllocator *pLoaderAllocator) { - WRAPPER_NO_CONTRACT; - m_pBaseMD = pMD; - m_rel32 = rel32UsingJumpStub(&pCodeRX->m_rel32, target, pMD, pLoaderAllocator); - } - - inline BOOL CheckTarget(TADDR target) { - LIMITED_METHOD_CONTRACT; - TADDR addr = rel32Decode(PTR_HOST_MEMBER_TADDR(CentralJumpCode, this, m_rel32)); - return (addr == target); +//******************************************************************************* +#ifndef DACCESS_COMPILE +PCODE MethodDesc::GetTemporaryEntryPoint() +{ + CONTRACTL + { + THROWS; + GC_NOTRIGGER; + MODE_ANY; } -} -c_CentralJumpCode = { - { 0x0F, 0xB6, 0xC0 }, // movzx eax,al - { 0xC1, 0xE0, MethodDesc::ALIGNMENT_SHIFT }, // shl eax, MethodDesc::ALIGNMENT_SHIFT - { 0x05 }, NULL, // add eax, pBaseMD - { 0xE9 }, 0 // jmp PreStub -}; -#include - -#elif defined(TARGET_ARM) - -#include -struct CentralJumpCode { - BYTE m_ldrPC[4]; - BYTE m_short[2]; - MethodDescChunk *m_pChunk; - PCODE m_target; + CONTRACTL_END; - inline void Setup(PCODE target, MethodDescChunk *pChunk) { - WRAPPER_NO_CONTRACT; + _ASSERTE(GetMethodTable()->GetAuxiliaryData()->IsPublished()); - m_target = target; - m_pChunk = pChunk; - } + PCODE pEntryPoint = GetTemporaryEntryPointIfExists(); + if (pEntryPoint != (PCODE)NULL) + return pEntryPoint; - inline BOOL CheckTarget(TADDR target) { - WRAPPER_NO_CONTRACT; - return ((TADDR)m_target == target); - } -} -c_CentralJumpCode = { - { 0xDF, 0xF8, 0x08, 0xF0 }, // ldr pc, =pTarget - { 0x00, 0x00 }, // short offset for alignment - 0, // pChunk - 0 // pTarget -}; -#include + EnsureTemporaryEntryPoint(); + pEntryPoint = GetTemporaryEntryPointIfExists(); + _ASSERTE(pEntryPoint != (PCODE)NULL); -#else -#error Unsupported platform +#ifdef _DEBUG + MethodDesc * pMD = MethodDesc::GetMethodDescFromStubAddr(pEntryPoint); + _ASSERTE(PTR_HOST_TO_TADDR(this) == PTR_HOST_TO_TADDR(pMD)); #endif -typedef DPTR(struct CentralJumpCode) PTR_CentralJumpCode; -#define TEP_CENTRAL_JUMP_SIZE sizeof(c_CentralJumpCode) -static_assert_no_msg((TEP_CENTRAL_JUMP_SIZE & 1) == 0); - -#define TEP_ENTRY_SIZE 4 - -#ifdef TARGET_ARM - -#define TEP_HALF_ENTRY_SIZE (TEP_ENTRY_SIZE / 2) - -// Compact entry point on arm consists of two thumb instructions: -// mov r12, pc -// b CentralJumpCode - -// First instruction 0x46fc -#define TEP_ENTRY_INSTR1_BYTE1 0xFC -#define TEP_ENTRY_INSTR1_BYTE2 0x46 - -// Mask for unconditional 
branch opcode -#define TEP_ENTRY_INSTR2_MASK1 0xE0 - -// Mask for opcode -#define TEP_ENTRY_INSTR2_MASK2 0xF8 - -// Bit used for ARM to identify compact entry points -#define COMPACT_ENTRY_ARM_CODE 0x2 - -/* static */ int MethodDescChunk::GetCompactEntryPointMaxCount () -{ - LIMITED_METHOD_DAC_CONTRACT; - - return MAX_OFFSET_UNCONDITIONAL_BRANCH_THUMB / TEP_ENTRY_SIZE; -} - -// Get offset from the start of current compact entry point to the CentralJumpCode -static uint16_t DecodeOffsetFromBranchToCentralJump (uint16_t instr) -{ - int16_t offset = decodeUnconditionalBranchThumb ((LPBYTE) &instr); - - offset += PC_REG_RELATIVE_OFFSET + TEP_HALF_ENTRY_SIZE; - - _ASSERTE (offset >= TEP_ENTRY_SIZE && (offset % TEP_ENTRY_SIZE == 0)); - - return (uint16_t) offset; -} - -#ifndef DACCESS_COMPILE - -// Encode branch instruction to central jump for current compact entry point -static uint16_t EncodeBranchToCentralJump (int16_t offset) -{ - _ASSERTE (offset >= 0 && (offset % TEP_ENTRY_SIZE == 0)); - - offset += TEP_HALF_ENTRY_SIZE - PC_REG_RELATIVE_OFFSET; - - uint16_t instr; - emitUnconditionalBranchThumb ((LPBYTE) &instr, offset); - - return instr; + return pEntryPoint; } - -#endif // DACCESS_COMPILE - -#else // TARGET_ARM - -#define TEP_MAX_BEFORE_INDEX (1 + (127 / TEP_ENTRY_SIZE)) -#define TEP_MAX_BLOCK_INDEX (TEP_MAX_BEFORE_INDEX + (128 - TEP_CENTRAL_JUMP_SIZE) / TEP_ENTRY_SIZE) -#define TEP_FULL_BLOCK_SIZE (TEP_MAX_BLOCK_INDEX * TEP_ENTRY_SIZE + TEP_CENTRAL_JUMP_SIZE) - -#endif // TARGET_ARM - -BOOL MethodDescChunk::IsCompactEntryPointAtAddress(PCODE addr) -{ - LIMITED_METHOD_DAC_CONTRACT; - -#if defined(TARGET_X86) || defined(TARGET_AMD64) - - // Compact entrypoints start at odd addresses - return (addr & 1) != 0; - -#elif defined(TARGET_ARM) - - // Compact entrypoints start at odd addresses (thumb) with second bit set to 1 - uint8_t compactEntryPointMask = THUMB_CODE | COMPACT_ENTRY_ARM_CODE; - return (addr & compactEntryPointMask) == compactEntryPointMask; - -#else - #error Unsupported platform #endif -} +#ifndef DACCESS_COMPILE //******************************************************************************* -/* static */ MethodDesc* MethodDescChunk::GetMethodDescFromCompactEntryPoint(PCODE addr, BOOL fSpeculative /*=FALSE*/) +void MethodDesc::SetTemporaryEntryPoint(AllocMemTracker *pamTracker) { - LIMITED_METHOD_CONTRACT; + WRAPPER_NO_CONTRACT; -#ifdef DACCESS_COMPILE - // Always use speculative checks with DAC - fSpeculative = TRUE; + _ASSERTE(pamTracker != NULL); + EnsureTemporaryEntryPointCore(pamTracker); + +#ifdef _DEBUG + PTR_PCODE pSlot = GetAddrOfSlot(); + _ASSERTE(*pSlot != (PCODE)NULL); #endif - // Always do consistency check in debug - if (fSpeculative INDEBUG(|| TRUE)) + if (RequiresStableEntryPoint()) { -#ifdef TARGET_ARM - TADDR instrCodeAddr = PCODEToPINSTR(addr); - if (!IsCompactEntryPointAtAddress(addr) || - *PTR_BYTE(instrCodeAddr) != TEP_ENTRY_INSTR1_BYTE1 || - *PTR_BYTE(instrCodeAddr+1) != TEP_ENTRY_INSTR1_BYTE2) -#else // TARGET_ARM - if ((addr & 3) != 1 || - *PTR_BYTE(addr) != X86_INSTR_MOV_AL || - *PTR_BYTE(addr+2) != X86_INSTR_JMP_REL8) -#endif // TARGET_ARM - { - if (fSpeculative) return NULL; - _ASSERTE(!"Unexpected code in temporary entrypoint"); - } + // The rest of the system assumes that certain methods always have stable entrypoints. 
+ // Mark the precode as such + MarkPrecodeAsStableEntrypoint(); } +} -#ifdef TARGET_ARM - - // On ARM compact entry points are thumb - _ASSERTE ((addr & THUMB_CODE) != 0); - addr = addr - THUMB_CODE; - - // Get offset for CentralJumpCode from current compact entry point - PTR_UINT16 pBranchInstr = (PTR_UINT16(addr)) + 1; - uint16_t offset = DecodeOffsetFromBranchToCentralJump (*pBranchInstr); - - TADDR centralJump = addr + offset; - int index = (centralJump - addr - TEP_ENTRY_SIZE) / TEP_ENTRY_SIZE; - -#else // TARGET_ARM - - int index = *PTR_BYTE(addr+1); - TADDR centralJump = addr + 4 + *PTR_SBYTE(addr+3); - -#endif // TARGET_ARM - - CentralJumpCode* pCentralJumpCode = PTR_CentralJumpCode(centralJump); - - // Always do consistency check in debug - if (fSpeculative INDEBUG(|| TRUE)) +void MethodDesc::EnsureTemporaryEntryPoint() +{ + CONTRACTL { - SIZE_T i; - for (i = 0; i < TEP_CENTRAL_JUMP_SIZE; i++) - { - BYTE b = ((BYTE*)&c_CentralJumpCode)[i]; - if (b != 0 && b != *PTR_BYTE(centralJump+i)) - { - if (fSpeculative) return NULL; - _ASSERTE(!"Unexpected code in temporary entrypoint"); - } - } - -#ifdef TARGET_ARM - - _ASSERTE_IMPL(pCentralJumpCode->CheckTarget(GetPreStubCompactARMEntryPoint())); - -#else // TARGET_ARM - - _ASSERTE_IMPL(pCentralJumpCode->CheckTarget(GetPreStubEntryPoint())); - -#endif // TARGET_ARM + THROWS; + GC_NOTRIGGER; + MODE_ANY; } + CONTRACTL_END; -#ifdef TARGET_ARM - // Go through all MethodDesc in MethodDescChunk and find the one with the required index - PTR_MethodDescChunk pChunk = *((DPTR(PTR_MethodDescChunk))(centralJump + offsetof(CentralJumpCode, m_pChunk))); - TADDR pMD = PTR_HOST_TO_TADDR (pChunk->GetFirstMethodDesc ()); - - _ASSERTE (index >= 0 && index < ((int) pChunk->GetCount ())); - - index = ((int) pChunk->GetCount ()) - 1 - index; - - SIZE_T totalSize = 0; - int curIndex = 0; + // Since this can allocate memory that won't be freed, we need to make sure that the associated MethodTable + // is fully allocated and permanent. + _ASSERTE(GetMethodTable()->GetAuxiliaryData()->IsPublished()); - while (index != curIndex) + if (GetTemporaryEntryPointIfExists() == (PCODE)NULL) { - SIZE_T sizeCur = (PTR_MethodDesc (pMD))->SizeOf (); - totalSize += sizeCur; - - pMD += sizeCur; - ++curIndex; + EnsureTemporaryEntryPointCore(NULL); } - - return PTR_MethodDesc (pMD); -#else // TARGET_ARM - return PTR_MethodDesc((TADDR)pCentralJumpCode->m_pBaseMD + index * MethodDesc::ALIGNMENT); -#endif // TARGET_ARM -} - -//******************************************************************************* -SIZE_T MethodDescChunk::SizeOfCompactEntryPoints(int count) -{ - LIMITED_METHOD_DAC_CONTRACT; - -#ifdef TARGET_ARM - - return COMPACT_ENTRY_ARM_CODE + count * TEP_ENTRY_SIZE + TEP_CENTRAL_JUMP_SIZE; - -#else // TARGET_ARM - - int fullBlocks = count / TEP_MAX_BLOCK_INDEX; - int remainder = count % TEP_MAX_BLOCK_INDEX; - - return 1 + (fullBlocks * TEP_FULL_BLOCK_SIZE) + - (remainder * TEP_ENTRY_SIZE) + ((remainder != 0) ? 
TEP_CENTRAL_JUMP_SIZE : 0);
-
-#endif // TARGET_ARM
 }

-#ifndef DACCESS_COMPILE
-TADDR MethodDescChunk::AllocateCompactEntryPoints(LoaderAllocator *pLoaderAllocator, AllocMemTracker *pamTracker)
+void MethodDesc::EnsureTemporaryEntryPointCore(AllocMemTracker *pamTracker)
 {
-    CONTRACTL {
+    CONTRACTL
+    {
         THROWS;
         GC_NOTRIGGER;
-    } CONTRACTL_END;
-
-    int count = GetCount();
-
-    SIZE_T size = SizeOfCompactEntryPoints(count);
-
-    TADDR temporaryEntryPoints = (TADDR)pamTracker->Track(pLoaderAllocator->GetPrecodeHeap()->AllocAlignedMem(size, sizeof(TADDR)));
-    ExecutableWriterHolder<void> temporaryEntryPointsWriterHolder((void *)temporaryEntryPoints, size);
-    size_t rxOffset = temporaryEntryPoints - (TADDR)temporaryEntryPointsWriterHolder.GetRW();
-
-#ifdef TARGET_ARM
-    BYTE* p = (BYTE*)temporaryEntryPointsWriterHolder.GetRW() + COMPACT_ENTRY_ARM_CODE;
-    int relOffset = count * TEP_ENTRY_SIZE - TEP_ENTRY_SIZE; // relative offset for the short jump
-
-    _ASSERTE (relOffset < MAX_OFFSET_UNCONDITIONAL_BRANCH_THUMB);
-#else // TARGET_ARM
-    // make the temporary entrypoints unaligned, so they are easy to identify
-    BYTE* p = (BYTE*)temporaryEntryPointsWriterHolder.GetRW() + 1;
-    int indexInBlock = TEP_MAX_BLOCK_INDEX; // recompute relOffset in first iteration
-    int relOffset = 0; // relative offset for the short jump
-#endif // TARGET_ARM
-
-    MethodDesc * pBaseMD = 0; // index of the start of the block
+        MODE_ANY;
+    }
+    CONTRACTL_END;

-    MethodDesc * pMD = GetFirstMethodDesc();
-    for (int index = 0; index < count; index++)
+    if (GetTemporaryEntryPointIfExists() == (PCODE)NULL)
     {
-#ifdef TARGET_ARM
+        GetMethodDescChunk()->DetermineAndSetIsEligibleForTieredCompilation();
+        PTR_PCODE pSlot = GetAddrOfSlot();

-        uint8_t *pMovInstrByte1 = (uint8_t *)p;
-        uint8_t *pMovInstrByte2 = (uint8_t *)p+1;
-        uint16_t *pBranchInstr = ((uint16_t *)p)+1;
-
-        *pMovInstrByte1 = TEP_ENTRY_INSTR1_BYTE1;
-        *pMovInstrByte2 = TEP_ENTRY_INSTR1_BYTE2;
-        *pBranchInstr = EncodeBranchToCentralJump ((int16_t) relOffset);
-
-        p += TEP_ENTRY_SIZE;
-
-#else // TARGET_ARM
-
-        if (indexInBlock == TEP_MAX_BLOCK_INDEX)
-        {
-            relOffset = (min(count - index, TEP_MAX_BEFORE_INDEX) - 1) * TEP_ENTRY_SIZE;
-            indexInBlock = 0;
-            pBaseMD = pMD;
-        }
+        AllocMemTracker amt;
+        AllocMemTracker *pamTrackerPrecode = pamTracker != NULL ?
pamTracker : &amt; + Precode* pPrecode = Precode::Allocate(GetPrecodeType(), this, GetLoaderAllocator(), pamTrackerPrecode); - *(p+0) = X86_INSTR_MOV_AL; - int methodDescIndex = pMD->GetMethodDescChunkIndex() - pBaseMD->GetMethodDescChunkIndex(); - _ASSERTE(FitsInU1(methodDescIndex)); - *(p+1) = (BYTE)methodDescIndex; + IfFailThrow(EnsureCodeDataExists(pamTracker)); - *(p+2) = X86_INSTR_JMP_REL8; - _ASSERTE(FitsInI1(relOffset)); - *(p+3) = (BYTE)relOffset; + if (InterlockedCompareExchangeT(&m_codeData->TemporaryEntryPoint, pPrecode->GetEntryPoint(), (PCODE)NULL) == (PCODE)NULL) + amt.SuppressRelease(); // We only need to suppress the release if we are working with a MethodDesc which is not newly allocated - p += TEP_ENTRY_SIZE; static_assert_no_msg(TEP_ENTRY_SIZE == 4); + PCODE tempEntryPoint = m_codeData->TemporaryEntryPoint; + _ASSERTE(tempEntryPoint != (PCODE)NULL); - if (relOffset == 0) + if (*pSlot == (PCODE)NULL) { - CentralJumpCode* pCode = (CentralJumpCode*)p; - CentralJumpCode* pCodeRX = (CentralJumpCode*)(p + rxOffset); - - memcpy(pCode, &c_CentralJumpCode, TEP_CENTRAL_JUMP_SIZE); - - pCode->Setup(pCodeRX, pBaseMD, GetPreStubEntryPoint(), pLoaderAllocator); - - p += TEP_CENTRAL_JUMP_SIZE; - - relOffset -= TEP_CENTRAL_JUMP_SIZE; + InterlockedCompareExchangeT(pSlot, tempEntryPoint, (PCODE)NULL); } - - indexInBlock++; - -#endif // TARGET_ARM - - relOffset -= TEP_ENTRY_SIZE; - pMD = (MethodDesc *)((BYTE *)pMD + pMD->SizeOf()); + InterlockedUpdateFlags4(enum_flag4_TemporaryEntryPointAssigned, TRUE); } - -#ifdef TARGET_ARM - - CentralJumpCode* pCode = (CentralJumpCode*)p; - memcpy(pCode, &c_CentralJumpCode, TEP_CENTRAL_JUMP_SIZE); - pCode->Setup (GetPreStubCompactARMEntryPoint(), this); - - _ASSERTE(p + TEP_CENTRAL_JUMP_SIZE == (BYTE*)temporaryEntryPointsWriterHolder.GetRW() + size); - -#else // TARGET_ARM - - _ASSERTE(p == (BYTE*)temporaryEntryPointsWriterHolder.GetRW() + size); - -#endif // TARGET_ARM - - ClrFlushInstructionCache((LPVOID)temporaryEntryPoints, size); - - SetHasCompactEntryPoints(); - return temporaryEntryPoints; } -#endif // !DACCESS_COMPILE - -#endif // HAS_COMPACT_ENTRYPOINTS //******************************************************************************* -PCODE MethodDescChunk::GetTemporaryEntryPoint(int index) +void MethodDescChunk::DetermineAndSetIsEligibleForTieredCompilation() { - LIMITED_METHOD_CONTRACT; + WRAPPER_NO_CONTRACT; -#ifdef HAS_COMPACT_ENTRYPOINTS - if (HasCompactEntryPoints()) + if (!DeterminedIfMethodsAreEligibleForTieredCompilation()) { -#ifdef TARGET_ARM - - return GetTemporaryEntryPoints() + COMPACT_ENTRY_ARM_CODE + THUMB_CODE + index * TEP_ENTRY_SIZE; - -#else // TARGET_ARM - - int fullBlocks = index / TEP_MAX_BLOCK_INDEX; - int remainder = index % TEP_MAX_BLOCK_INDEX; + int count = GetCount(); - return GetTemporaryEntryPoints() + 1 + (fullBlocks * TEP_FULL_BLOCK_SIZE) + - (remainder * TEP_ENTRY_SIZE) + ((remainder >= TEP_MAX_BEFORE_INDEX) ? 
TEP_CENTRAL_JUMP_SIZE : 0);
-
-#endif // TARGET_ARM
-    }
-#endif // HAS_COMPACT_ENTRYPOINTS
+        // Determine eligibility for tiered compilation
+        {
+            MethodDesc *pMD = GetFirstMethodDesc();
+            bool chunkContainsEligibleMethods = pMD->DetermineIsEligibleForTieredCompilationInvariantForAllMethodsInChunk();

-    return Precode::GetPrecodeForTemporaryEntryPoint(GetTemporaryEntryPoints(), index)->GetEntryPoint();
-}
+            #ifdef _DEBUG
+            // Validate every MethodDesc has the same result for DetermineIsEligibleForTieredCompilationInvariantForAllMethodsInChunk
+            MethodDesc *pMDDebug = GetFirstMethodDesc();
+            for (int i = 0; i < count; ++i)
+            {
+                _ASSERTE(chunkContainsEligibleMethods == pMDDebug->DetermineIsEligibleForTieredCompilationInvariantForAllMethodsInChunk());
+                pMDDebug = (MethodDesc *)(dac_cast<TADDR>(pMDDebug) + pMDDebug->SizeOf());
+            }
+            #endif
+            if (chunkContainsEligibleMethods)
+            {
+                for (int i = 0; i < count; ++i)
+                {
+                    bool isEligible = pMD->DetermineAndSetIsEligibleForTieredCompilation();
+                    _ASSERTE(isEligible == pMD->IsEligibleForTieredCompilation_NoCheckMethodDescChunk());
-                    pMD = (MethodDesc *)(dac_cast<TADDR>(pMD) + pMD->SizeOf());
+                }
+            }
+        }
-
-PCODE MethodDesc::GetTemporaryEntryPoint()
-{
-    CONTRACTL
-    {
-        NOTHROW;
-        GC_NOTRIGGER;
-        MODE_ANY;
-    }
-    CONTRACTL_END;
+        InterlockedUpdateFlags(enum_flag_DeterminedIsEligibleForTieredCompilation, TRUE);

-    MethodDescChunk* pChunk = GetMethodDescChunk();
-    TADDR pEntryPoint = pChunk->GetTemporaryEntryPoint(GetMethodDescIndex());
 #ifdef _DEBUG
-    MethodDesc * pMD = MethodDesc::GetMethodDescFromStubAddr(pEntryPoint);
-    _ASSERTE(PTR_HOST_TO_TADDR(this) == PTR_HOST_TO_TADDR(pMD));
+        {
+            MethodDesc *pMD = GetFirstMethodDesc();
+            for (int i = 0; i < count; ++i)
+            {
+                _ASSERTE(pMD->IsEligibleForTieredCompilation() == pMD->IsEligibleForTieredCompilation_NoCheckMethodDescChunk());
+                if (pMD->IsEligibleForTieredCompilation())
+                {
+                    _ASSERTE(!pMD->IsVersionableWithPrecode() || pMD->RequiresStableEntryPoint());
+                }
+            }
+        }
 #endif
-
-    return pEntryPoint;
-}
-
-#ifndef DACCESS_COMPILE
-//*******************************************************************************
-void MethodDesc::SetTemporaryEntryPoint(LoaderAllocator *pLoaderAllocator, AllocMemTracker *pamTracker)
-{
-    WRAPPER_NO_CONTRACT;
-
-    GetMethodDescChunk()->EnsureTemporaryEntryPointsCreated(pLoaderAllocator, pamTracker);
-
-    PTR_PCODE pSlot = GetAddrOfSlot();
-    _ASSERTE(*pSlot == (PCODE)NULL);
-    *pSlot = GetTemporaryEntryPoint();
-
-    if (RequiresStableEntryPoint())
-    {
-        // The rest of the system assumes that certain methods always have stable entrypoints.
-        // Create them now.
- GetOrCreatePrecode(); } } -//******************************************************************************* -void MethodDescChunk::CreateTemporaryEntryPoints(LoaderAllocator *pLoaderAllocator, AllocMemTracker *pamTracker) -{ - WRAPPER_NO_CONTRACT; - - _ASSERTE(GetTemporaryEntryPoints() == (TADDR)NULL); - - TADDR temporaryEntryPoints = Precode::AllocateTemporaryEntryPoints(this, pLoaderAllocator, pamTracker); - -#ifdef HAS_COMPACT_ENTRYPOINTS - // Precodes allocated only if they provide more compact representation or if it is required - if (temporaryEntryPoints == NULL) - { - temporaryEntryPoints = AllocateCompactEntryPoints(pLoaderAllocator, pamTracker); - } -#endif // HAS_COMPACT_ENTRYPOINTS - - *(((TADDR *)this)-1) = temporaryEntryPoints; - - _ASSERTE(GetTemporaryEntryPoints() != (TADDR)NULL); -} //******************************************************************************* Precode* MethodDesc::GetOrCreatePrecode() @@ -2971,40 +2804,45 @@ Precode* MethodDesc::GetOrCreatePrecode() WRAPPER_NO_CONTRACT; _ASSERTE(!IsVersionableWithVtableSlotBackpatch()); + // Since this can allocate memory that won't be freed, we need to make sure that the associated MethodTable + // is fully allocated and permanent. + _ASSERTE(GetMethodTable()->GetAuxiliaryData()->IsPublished()); + if (HasPrecode()) { return GetPrecode(); } - PTR_PCODE pSlot = GetAddrOfSlot(); PCODE tempEntry = GetTemporaryEntryPoint(); +#ifdef _DEBUG + PTR_PCODE pSlot = GetAddrOfSlot(); PrecodeType requiredType = GetPrecodeType(); - PrecodeType availableType = PRECODE_INVALID; - - if (!GetMethodDescChunk()->HasCompactEntryPoints()) - { - availableType = Precode::GetPrecodeFromEntryPoint(tempEntry)->GetType(); - } + PrecodeType availableType = Precode::GetPrecodeFromEntryPoint(tempEntry)->GetType(); + _ASSERTE(requiredType == availableType); + _ASSERTE(*pSlot != (PCODE)NULL); + _ASSERTE(*pSlot == tempEntry); +#endif - // Allocate the precode if necessary - if (requiredType != availableType) - { - // code:Precode::AllocateTemporaryEntryPoints should always create precode of the right type for dynamic methods. - // If we took this path for dynamic methods, the precode may leak since we may allocate it in domain-neutral loader heap. 
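The allocate-aside, compare-exchange, suppress-release sequence used by EnsureTemporaryEntryPointCore above is a recurring runtime idiom: every racing thread builds a candidate, exactly one interlocked publish wins, and only the winner keeps its allocation alive. A minimal standalone sketch of the idiom follows; the names (Tracker, EnsureTemporaryEntryPoint, g_temporaryEntryPoint) are illustrative stand-ins, not runtime APIs, and std::atomic stands in for InterlockedCompareExchangeT.

    #include <atomic>
    #include <cassert>
    #include <memory>

    struct Precode { int bytes[4]; }; // opaque stand-in for a real precode

    // Models AllocMemTracker: frees the tracked allocation on scope exit
    // unless SuppressRelease() is called.
    struct Tracker {
        std::unique_ptr<Precode> pending;
        Precode* Track(std::unique_ptr<Precode> p) { pending = std::move(p); return pending.get(); }
        void SuppressRelease() { (void)pending.release(); }
    };

    std::atomic<Precode*> g_temporaryEntryPoint{nullptr};

    Precode* EnsureTemporaryEntryPoint()
    {
        if (Precode* existing = g_temporaryEntryPoint.load(std::memory_order_acquire))
            return existing; // fast path: already published

        Tracker amt;
        Precode* fresh = amt.Track(std::make_unique<Precode>());

        Precode* expected = nullptr;
        if (g_temporaryEntryPoint.compare_exchange_strong(
                expected, fresh, std::memory_order_release, std::memory_order_acquire))
        {
            amt.SuppressRelease(); // we won: the allocation becomes permanent
            return fresh;
        }
        return expected; // we lost: Tracker frees 'fresh'; use the winner's value
    }

    int main()
    {
        Precode* a = EnsureTemporaryEntryPoint();
        Precode* b = EnsureTemporaryEntryPoint();
        assert(a == b); // all callers observe a single published precode
    }

The deliberate "leak" on the winning path mirrors loader-heap allocations, which live as long as their LoaderAllocator.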
- _ASSERTE(!IsLCGMethod()); + // Set the flags atomically + InterlockedUpdateFlags3(enum_flag3_HasStableEntryPoint | enum_flag3_HasPrecode, TRUE); - AllocMemTracker amt; - Precode* pPrecode = Precode::Allocate(requiredType, this, GetLoaderAllocator(), &amt); + return Precode::GetPrecodeFromEntryPoint(tempEntry); +} - if (InterlockedCompareExchangeT(pSlot, pPrecode->GetEntryPoint(), tempEntry) == tempEntry) - amt.SuppressRelease(); - } +void MethodDesc::MarkPrecodeAsStableEntrypoint() +{ +#if _DEBUG + PCODE tempEntry = GetTemporaryEntryPointIfExists(); + _ASSERTE(tempEntry != (PCODE)NULL); + PrecodeType requiredType = GetPrecodeType(); + PrecodeType availableType = Precode::GetPrecodeFromEntryPoint(tempEntry)->GetType(); + _ASSERTE(requiredType == availableType); +#endif + _ASSERTE(!HasPrecode()); + _ASSERTE(RequiresStableEntryPoint()); - // Set the flags atomically InterlockedUpdateFlags3(enum_flag3_HasStableEntryPoint | enum_flag3_HasPrecode, TRUE); - - return Precode::GetPrecodeFromEntryPoint(*pSlot); } bool MethodDesc::DetermineIsEligibleForTieredCompilationInvariantForAllMethodsInChunk() @@ -3060,8 +2898,7 @@ bool MethodDesc::DetermineAndSetIsEligibleForTieredCompilation() // Functions with NoOptimization or AggressiveOptimization don't participate in tiering !IsJitOptimizationLevelRequested()) { - m_wFlags3AndTokenRemainder |= enum_flag3_IsEligibleForTieredCompilation; - _ASSERTE(IsVersionable()); + InterlockedUpdateFlags3(enum_flag3_IsEligibleForTieredCompilation, TRUE); return true; } #endif @@ -3252,9 +3089,12 @@ void MethodDesc::SetCodeEntryPoint(PCODE entryPoint) // can continue assuming it was successful, similarly to it successfully updating the target and another thread // updating the target again shortly afterwards. } - else if (HasPrecode()) + else if (HasPrecode() || RequiresStableEntryPoint()) { - GetPrecode()->SetTargetInterlocked(entryPoint); + // Use this path if there already exists a Precode, OR if RequiresStableEntryPoint is set. + // + // RequiresStableEntryPoint currently requires that the entrypoint must be a Precode + GetOrCreatePrecode()->SetTargetInterlocked(entryPoint); } else if (!HasStableEntryPoint()) { @@ -3367,6 +3207,7 @@ BOOL MethodDesc::SetStableEntryPointInterlocked(PCODE addr) BOOL fResult = InterlockedCompareExchangeT(pSlot, addr, pExpected) == pExpected; InterlockedUpdateFlags3(enum_flag3_HasStableEntryPoint, TRUE); + _ASSERTE(!RequiresStableEntryPoint()); // The RequiresStableEntryPoint scenarios should all result in a stable entry point which is a PreCode, so that it can be replaced and adjusted over time. return fResult; } @@ -3858,21 +3699,6 @@ MethodDescChunk::EnumMemoryRegions(CLRDataEnumMemoryFlags flags) pMT->EnumMemoryRegions(flags); } - SIZE_T size; - -#ifdef HAS_COMPACT_ENTRYPOINTS - if (HasCompactEntryPoints()) - { - size = SizeOfCompactEntryPoints(GetCount()); - } - else -#endif // HAS_COMPACT_ENTRYPOINTS - { - size = Precode::SizeOfTemporaryEntryPoints(GetTemporaryEntryPoints(), GetCount()); - } - - DacEnumMemoryRegion(GetTemporaryEntryPoints(), size); - MethodDesc * pMD = GetFirstMethodDesc(); MethodDesc * pOldMD = NULL; while (pMD != NULL && pMD != pOldMD) diff --git a/src/coreclr/vm/method.hpp b/src/coreclr/vm/method.hpp index 7c229b146a5382..168ccaef52d62d 100644 --- a/src/coreclr/vm/method.hpp +++ b/src/coreclr/vm/method.hpp @@ -162,8 +162,7 @@ enum MethodDescFlags struct MethodDescCodeData final { PTR_MethodDescVersioningState VersioningState; - - // [TODO] Move temporary entry points here. 
+ PCODE TemporaryEntryPoint; }; using PTR_MethodDescCodeData = DPTR(MethodDescCodeData); @@ -208,26 +207,72 @@ class MethodDesc _ASSERTE(HasStableEntryPoint()); _ASSERTE(!IsVersionableWithVtableSlotBackpatch()); - return GetMethodEntryPoint(); + return GetMethodEntryPointIfExists(); } void SetMethodEntryPoint(PCODE addr); BOOL SetStableEntryPointInterlocked(PCODE addr); +#ifndef DACCESS_COMPILE PCODE GetTemporaryEntryPoint(); +#endif - void SetTemporaryEntryPoint(LoaderAllocator *pLoaderAllocator, AllocMemTracker *pamTracker); + PCODE GetTemporaryEntryPointIfExists() + { + LIMITED_METHOD_CONTRACT; + BYTE flags4 = VolatileLoad(&m_bFlags4); + if (flags4 & enum_flag4_TemporaryEntryPointAssigned) + { + PTR_MethodDescCodeData codeData = VolatileLoadWithoutBarrier(&m_codeData); + _ASSERTE(codeData != NULL); + PCODE temporaryEntryPoint = codeData->TemporaryEntryPoint; + _ASSERTE(temporaryEntryPoint != (PCODE)NULL); + return temporaryEntryPoint; + } + else + { + return (PCODE)NULL; + } + } + + void SetTemporaryEntryPoint(AllocMemTracker *pamTracker); - PCODE GetInitialEntryPointForCopiedSlot() +#ifndef DACCESS_COMPILE + PCODE GetInitialEntryPointForCopiedSlot(MethodTable *pMTBeingCreated, AllocMemTracker* pamTracker) { - WRAPPER_NO_CONTRACT; + CONTRACTL + { + THROWS; + GC_NOTRIGGER; + MODE_ANY; + } + CONTRACTL_END; + + if (pMTBeingCreated != GetMethodTable()) + { + pamTracker = NULL; + } + // If EnsureTemporaryEntryPointCore is called, then + // both GetTemporaryEntryPointIfExists and GetSlot() + // are guaranteed to return a NON-NULL PCODE. + EnsureTemporaryEntryPointCore(pamTracker); + + PCODE result; if (IsVersionableWithVtableSlotBackpatch()) { - return GetTemporaryEntryPoint(); + result = GetTemporaryEntryPointIfExists(); } - return GetMethodEntryPoint(); + else + { + _ASSERTE(GetMethodTable()->IsCanonicalMethodTable()); + result = GetMethodTable()->GetSlot(GetSlot()); + } + _ASSERTE(result != (PCODE)NULL); + + return result; } +#endif inline BOOL HasPrecode() { @@ -270,6 +315,8 @@ class MethodDesc } Precode* GetOrCreatePrecode(); + void MarkPrecodeAsStableEntrypoint(); + // Given a code address return back the MethodDesc whenever possible // @@ -616,7 +663,11 @@ class MethodDesc #endif // !FEATURE_COMINTEROP // Update flags in a thread safe manner. +#ifndef DACCESS_COMPILE WORD InterlockedUpdateFlags(WORD wMask, BOOL fSet); + WORD InterlockedUpdateFlags3(WORD wMask, BOOL fSet); + BYTE InterlockedUpdateFlags4(BYTE bMask, BOOL fSet); +#endif // If the method is in an Edit and Continue (EnC) module, then // we DON'T want to backpatch this, ever. We MUST always call @@ -635,11 +686,13 @@ class MethodDesc return (m_wFlags & mdfNotInline); } +#ifndef DACCESS_COMPILE inline void SetNotInline(BOOL set) { WRAPPER_NO_CONTRACT; InterlockedUpdateFlags(mdfNotInline, set); } +#endif // DACCESS_COMPILE #ifndef DACCESS_COMPILE VOID EnsureActive(); @@ -659,11 +712,13 @@ class MethodDesc //================================================================ // +#ifndef DACCESS_COMPILE inline void ClearFlagsOnUpdate() { WRAPPER_NO_CONTRACT; SetNotInline(FALSE); } +#endif // DACCESS_COMPILE // Restore the MethodDesc to it's initial, pristine state, so that // it can be reused for new code (eg. for EnC, method rental, etc.) 
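GetTemporaryEntryPointIfExists above can read m_codeData->TemporaryEntryPoint without a second barrier only because writers publish in the opposite order: the entry point is stored first, and enum_flag4_TemporaryEntryPointAssigned is set afterwards with an interlocked update. A self-contained model of that flag-gated publication, using standard C++ atomics in place of the runtime's VolatileLoad helpers (all names here are illustrative):

    #include <atomic>
    #include <cassert>
    #include <cstdint>

    struct CodeData { std::uintptr_t temporaryEntryPoint = 0; };

    static CodeData g_codeData;
    static std::atomic<std::uint8_t> g_flags4{0};
    constexpr std::uint8_t kTemporaryEntryPointAssigned = 0x04; // same bit value as the patch's flag

    void Publish(std::uintptr_t entryPoint)
    {
        g_codeData.temporaryEntryPoint = entryPoint;  // 1. write the payload
        g_flags4.fetch_or(kTemporaryEntryPointAssigned,
                          std::memory_order_release); // 2. then set the flag
    }

    std::uintptr_t GetIfExists()
    {
        if (g_flags4.load(std::memory_order_acquire) & kTemporaryEntryPointAssigned)
        {
            // Seeing the flag guarantees the payload store is visible,
            // so this plain read needs no additional barrier.
            std::uintptr_t ep = g_codeData.temporaryEntryPoint;
            assert(ep != 0);
            return ep;
        }
        return 0; // no temporary entry point allocated yet
    }

    int main()
    {
        assert(GetIfExists() == 0);
        Publish(0x1234);
        assert(GetIfExists() == 0x1234);
    }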
@@ -1070,16 +1125,8 @@ class MethodDesc public: - bool IsEligibleForTieredCompilation() - { - LIMITED_METHOD_DAC_CONTRACT; - -#ifdef FEATURE_TIERED_COMPILATION - return (m_wFlags3AndTokenRemainder & enum_flag3_IsEligibleForTieredCompilation) != 0; -#else - return false; -#endif - } + bool IsEligibleForTieredCompilation(); + bool IsEligibleForTieredCompilation_NoCheckMethodDescChunk(); // This method must return the same value for all methods in one MethodDescChunk bool DetermineIsEligibleForTieredCompilationInvariantForAllMethodsInChunk(); @@ -1188,6 +1235,7 @@ class MethodDesc private: +#ifndef DACCESS_COMPILE // Gets the prestub entry point to use for backpatching. Entry point slot backpatch uses this entry point as an oracle to // determine if the entry point actually changed and warrants backpatching. PCODE GetPrestubEntryPointToBackpatch() @@ -1199,7 +1247,9 @@ class MethodDesc _ASSERTE(IsVersionableWithVtableSlotBackpatch()); return GetTemporaryEntryPoint(); } +#endif // DACCESS_COMPILE +#ifndef DACCESS_COMPILE // Gets the entry point stored in the primary storage location for backpatching. Entry point slot backpatch uses this entry // point as an oracle to determine if the entry point actually changed and warrants backpatching. PCODE GetEntryPointToBackpatch_Locked() @@ -1212,6 +1262,7 @@ class MethodDesc _ASSERTE(IsVersionableWithVtableSlotBackpatch()); return GetMethodEntryPoint(); } +#endif // DACCESS_COMPILE // Sets the entry point stored in the primary storage location for backpatching. Entry point slot backpatch uses this entry // point as an oracle to determine if the entry point actually changed and warrants backpatching. @@ -1246,11 +1297,13 @@ class MethodDesc BackpatchEntryPointSlots(entryPoint, false /* isPrestubEntryPoint */); } +#ifndef DACCESS_COMPILE void BackpatchToResetEntryPointSlots() { WRAPPER_NO_CONTRACT; BackpatchEntryPointSlots(GetPrestubEntryPointToBackpatch(), true /* isPrestubEntryPoint */); } +#endif // DACCESS_COMPILE private: void BackpatchEntryPointSlots(PCODE entryPoint, bool isPrestubEntryPoint) @@ -1358,6 +1411,7 @@ class MethodDesc ULONG GetRVA(); public: +#ifndef DACCESS_COMPILE // Returns address of code to call. The address is good for one immediate invocation only. // Use GetMultiCallableAddrOfCode() to get address that can be invoked multiple times. // @@ -1371,6 +1425,7 @@ class MethodDesc _ASSERTE(!IsGenericMethodDefinition()); return GetMethodEntryPoint(); } +#endif // This one is used to implement "ldftn". PCODE GetMultiCallableAddrOfCode(CORINFO_ACCESS_FLAGS accessFlags = CORINFO_ACCESS_LDFTN); @@ -1392,6 +1447,7 @@ class MethodDesc PCODE GetSingleCallableAddrOfVirtualizedCode(OBJECTREF *orThis, TypeHandle staticTH); PCODE GetMultiCallableAddrOfVirtualizedCode(OBJECTREF *orThis, TypeHandle staticTH); +#ifndef DACCESS_COMPILE // The current method entrypoint. It is simply the value of the current method slot. // GetMethodEntryPoint() should be used to get an opaque method entrypoint, for instance // when copying or searching vtables. It should not be used to get address to call. @@ -1399,7 +1455,26 @@ class MethodDesc // GetSingleCallableAddrOfCode() and GetStableEntryPoint() are aliases with stricter preconditions. // Use of these aliases is as appropriate. // + // Calling this function will allocate an Entrypoint and associate it with the MethodDesc if it + // doesn't already exist. PCODE GetMethodEntryPoint(); +#endif + + // The current method entrypoint. It is simply the value of the current method slot. 
+ // GetMethodEntryPoint() should be used to get an opaque method entrypoint, for instance + // when copying or searching vtables. It should not be used to get address to call. + // + // GetSingleCallableAddrOfCode() and GetStableEntryPoint() are aliases with stricter preconditions. + // Use of these aliases is as appropriate. + // + PCODE GetMethodEntryPointIfExists(); + + // Ensure that the temporary entrypoint is allocated, and the slot is filled with some value + void EnsureTemporaryEntryPoint(); + + // pamTracker must be NULL for a MethodDesc which cannot be freed by an external AllocMemTracker + // OR must be set to point to the same AllocMemTracker that controls allocation of the MethodDesc + void EnsureTemporaryEntryPointCore(AllocMemTracker *pamTracker); //******************************************************************************* // Returns the address of the native code. @@ -1545,6 +1620,9 @@ class MethodDesc // Returns true if the method has to have stable entrypoint always. BOOL RequiresStableEntryPoint(BOOL fEstimateForChunk = FALSE); +private: + BOOL RequiresStableEntryPointCore(BOOL fEstimateForChunk); +public: // // Backpatch method slots @@ -1616,7 +1694,15 @@ class MethodDesc UINT16 m_wFlags3AndTokenRemainder; BYTE m_chunkIndex; - BYTE m_methodIndex; // Used to hold the index into the chunk of this MethodDesc. Currently all 8 bits are used, but we could likely work with only 7 bits + + enum { + enum_flag4_ComputedRequiresStableEntryPoint = 0x01, + enum_flag4_RequiresStableEntryPoint = 0x02, + enum_flag4_TemporaryEntryPointAssigned = 0x04, + }; + + void InterlockedSetFlags4(BYTE mask, BYTE newValue); + BYTE m_bFlags4; // Used to hold more flags WORD m_wSlotNumber; // The slot number of this MethodDesc in the vtable array. WORD m_wFlags; // See MethodDescFlags @@ -1627,21 +1713,10 @@ class MethodDesc void EnumMemoryRegions(CLRDataEnumMemoryFlags flags); #endif - BYTE GetMethodDescIndex() - { - LIMITED_METHOD_CONTRACT; - return m_methodIndex; - } - - void SetMethodDescIndex(COUNT_T index) - { - LIMITED_METHOD_CONTRACT; - _ASSERTE(index <= 255); - m_methodIndex = (BYTE)index; - } - #ifndef DACCESS_COMPILE - HRESULT EnsureCodeDataExists(); + // pamTracker must be NULL for a MethodDesc which cannot be freed by an external AllocMemTracker + // OR must be set to point to the same AllocMemTracker that controls allocation of the MethodDesc + HRESULT EnsureCodeDataExists(AllocMemTracker *pamTracker); HRESULT SetMethodDescVersionState(PTR_MethodDescVersioningState state); #endif //!DACCESS_COMPILE @@ -1737,19 +1812,19 @@ class MethodDesc SIZE_T SizeOf(); - WORD InterlockedUpdateFlags3(WORD wMask, BOOL fSet); - inline BOOL HaveValueTypeParametersBeenWalked() { LIMITED_METHOD_DAC_CONTRACT; return (m_wFlags & mdfValueTypeParametersWalked) != 0; } +#ifndef DACCESS_COMPILE inline void SetValueTypeParametersWalked() { LIMITED_METHOD_CONTRACT; InterlockedUpdateFlags(mdfValueTypeParametersWalked, TRUE); } +#endif // DACCESS_COMPILE inline BOOL HaveValueTypeParametersBeenLoaded() { @@ -1757,11 +1832,13 @@ class MethodDesc return (m_wFlags & mdfValueTypeParametersLoaded) != 0; } +#ifndef DACCESS_COMPILE inline void SetValueTypeParametersLoaded() { LIMITED_METHOD_CONTRACT; InterlockedUpdateFlags(mdfValueTypeParametersLoaded, TRUE); } +#endif // DACCESS_COMPILE #ifdef FEATURE_TYPEEQUIVALENCE inline BOOL DoesNotHaveEquivalentValuetypeParameters() @@ -1770,11 +1847,13 @@ class MethodDesc return (m_wFlags & mdfDoesNotHaveEquivalentValuetypeParameters) != 0; } +#ifndef DACCESS_COMPILE inline 
void SetDoesNotHaveEquivalentValuetypeParameters() { LIMITED_METHOD_CONTRACT; InterlockedUpdateFlags(mdfDoesNotHaveEquivalentValuetypeParameters, TRUE); } +#endif // DACCESS_COMPILE #endif // FEATURE_TYPEEQUIVALENCE // @@ -2121,10 +2200,14 @@ class MethodDescChunk // These are separate to allow the flags space available and used to be obvious here // and for the logic that splits the token to be algorithmically generated based on the // #define - enum_flag_HasCompactEntrypoints = 0x4000, // Compact temporary entry points + enum_flag_DeterminedIsEligibleForTieredCompilation = 0x4000, // Has this chunk had its methods been determined eligible for tiered compilation or not // unused = 0x8000, }; +#ifndef DACCESS_COMPILE + WORD InterlockedUpdateFlags(WORD wMask, BOOL fSet); +#endif + public: // // Allocates methodDescCount identical MethodDescs in smallest possible number of chunks. @@ -2137,55 +2220,13 @@ class MethodDescChunk MethodTable *initialMT, class AllocMemTracker *pamTracker); - TADDR GetTemporaryEntryPoints() - { - LIMITED_METHOD_CONTRACT; - return *(dac_cast(this) - 1); - } - - PCODE GetTemporaryEntryPoint(int index); - - void EnsureTemporaryEntryPointsCreated(LoaderAllocator *pLoaderAllocator, AllocMemTracker *pamTracker) + bool DeterminedIfMethodsAreEligibleForTieredCompilation() { - CONTRACTL - { - THROWS; - GC_NOTRIGGER; - MODE_ANY; - } - CONTRACTL_END; - - if (GetTemporaryEntryPoints() == (TADDR)0) - CreateTemporaryEntryPoints(pLoaderAllocator, pamTracker); + LIMITED_METHOD_DAC_CONTRACT; + return (VolatileLoadWithoutBarrier(&m_flagsAndTokenRange) & enum_flag_DeterminedIsEligibleForTieredCompilation) != 0; } - void CreateTemporaryEntryPoints(LoaderAllocator *pLoaderAllocator, AllocMemTracker *pamTracker); - -#ifdef HAS_COMPACT_ENTRYPOINTS - // - // There two implementation options for temporary entrypoints: - // - // (1) Compact entrypoints. They provide as dense entrypoints as possible, but can't be patched - // to point to the final code. The call to unjitted method is indirect call via slot. - // - // (2) Precodes. The precode will be patched to point to the final code eventually, thus - // the temporary entrypoint can be embedded in the code. The call to unjitted method is - // direct call to direct jump. - // - // We use (1) for x86 and (2) for 64-bit to get the best performance on each platform. - // For ARM (1) is used. 
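Of the two options described in the comment above, this change keeps only (2): a precode is a tiny per-method stub whose target begins at the prestub and is patched once the real code exists. The one-time patching can be modeled in portable C++ as below; Precode, Prestub, and FinalCode are toy stand-ins, not the runtime's implementations.

    #include <atomic>
    #include <cassert>

    using Target = void (*)();

    static bool g_jitted = false;
    static void FinalCode() { /* stands in for the jitted method body */ }

    struct Precode {
        std::atomic<Target> target;
        static Precode* instance;

        Precode() : target(&Prestub) { instance = this; }

        // First call lands here; it "jits" the method and patches the
        // precode so every later call jumps straight to the final code.
        static void Prestub()
        {
            g_jitted = true;
            instance->target.store(&FinalCode, std::memory_order_release);
            FinalCode();
        }

        void Call() { target.load(std::memory_order_acquire)(); }
    };
    Precode* Precode::instance = nullptr;

    int main()
    {
        Precode p;
        p.Call();         // routed through the prestub, which patches the target
        assert(g_jitted);
        p.Call();         // now reaches FinalCode directly
    }

Because the stub itself is the stable address, callers and vtable slots never need updating when the target changes, which is why methods with RequiresStableEntryPoint must sit behind a precode.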
-
-    TADDR AllocateCompactEntryPoints(LoaderAllocator *pLoaderAllocator, AllocMemTracker *pamTracker);
-
-    static MethodDesc* GetMethodDescFromCompactEntryPoint(PCODE addr, BOOL fSpeculative = FALSE);
-    static SIZE_T SizeOfCompactEntryPoints(int count);
-
-    static BOOL IsCompactEntryPointAtAddress(PCODE addr);
-
-#ifdef TARGET_ARM
-    static int GetCompactEntryPointMaxCount ();
-#endif // TARGET_ARM
-#endif // HAS_COMPACT_ENTRYPOINTS
+    void DetermineAndSetIsEligibleForTieredCompilation();

     FORCEINLINE PTR_MethodTable GetMethodTable()
     {
@@ -2240,17 +2281,6 @@ class MethodDescChunk
         return m_count + 1;
     }

-    inline BOOL HasCompactEntryPoints()
-    {
-        LIMITED_METHOD_DAC_CONTRACT;
-
-#ifdef HAS_COMPACT_ENTRYPOINTS
-        return (m_flagsAndTokenRange & enum_flag_HasCompactEntrypoints) != 0;
-#else
-        return FALSE;
-#endif
-    }
-
     inline UINT16 GetTokRange()
     {
         LIMITED_METHOD_DAC_CONTRACT;
@@ -2277,12 +2307,6 @@ class MethodDescChunk
 #endif

 private:
-    void SetHasCompactEntryPoints()
-    {
-        LIMITED_METHOD_CONTRACT;
-        m_flagsAndTokenRange |= enum_flag_HasCompactEntrypoints;
-    }
-
     void SetTokenRange(UINT16 tokenRange)
     {
         LIMITED_METHOD_CONTRACT;
diff --git a/src/coreclr/vm/method.inl b/src/coreclr/vm/method.inl
index 8d9e973b967407..bf25d6f6955705 100644
--- a/src/coreclr/vm/method.inl
+++ b/src/coreclr/vm/method.inl
@@ -6,6 +6,28 @@
 #ifndef _METHOD_INL_
 #define _METHOD_INL_

+inline bool MethodDesc::IsEligibleForTieredCompilation()
+{
+    LIMITED_METHOD_DAC_CONTRACT;
+
+#ifdef FEATURE_TIERED_COMPILATION
+    _ASSERTE(GetMethodDescChunk()->DeterminedIfMethodsAreEligibleForTieredCompilation());
+#endif
+    return IsEligibleForTieredCompilation_NoCheckMethodDescChunk();
+}
+
+inline bool MethodDesc::IsEligibleForTieredCompilation_NoCheckMethodDescChunk()
+{
+    LIMITED_METHOD_DAC_CONTRACT;
+
+    // Just like above, but without the assert. This is used in the path which initializes the flag.
+#ifdef FEATURE_TIERED_COMPILATION
+    return (VolatileLoadWithoutBarrier(&m_wFlags3AndTokenRemainder) & enum_flag3_IsEligibleForTieredCompilation) != 0;
+#else
+    return false;
+#endif
+}
+
 inline InstantiatedMethodDesc* MethodDesc::AsInstantiatedMethodDesc() const
 {
     WRAPPER_NO_CONTRACT;
diff --git a/src/coreclr/vm/methodimpl.cpp b/src/coreclr/vm/methodimpl.cpp
index 13a4b73e352757..92428ab33c882c 100644
--- a/src/coreclr/vm/methodimpl.cpp
+++ b/src/coreclr/vm/methodimpl.cpp
@@ -90,62 +90,12 @@ PTR_MethodDesc MethodImpl::GetMethodDesc(DWORD slotIndex, PTR_MethodDesc default
     TADDR base = dac_cast<TADDR>(pRelPtrForSlot) + slotIndex * sizeof(MethodDesc *);
     PTR_MethodDesc result = *dac_cast<DPTR(PTR_MethodDesc)>(base);

-    // Prejitted images may leave NULL in this table if
-    // the methoddesc is declared in another module.
-    // In this case we need to manually compute & restore it
-    // from the slot number.
-
-    if (result == NULL)
-#ifndef DACCESS_COMPILE
-        result = RestoreSlot(slotIndex, defaultReturn->GetMethodTable());
-#else // DACCESS_COMPILE
-        DacNotImpl();
-#endif // DACCESS_COMPILE
-
+    _ASSERTE(result != NULL);
     return result;
 }

 #ifndef DACCESS_COMPILE

-MethodDesc *MethodImpl::RestoreSlot(DWORD index, MethodTable *pMT)
-{
-    CONTRACTL
-    {
-        NOTHROW;
-        GC_NOTRIGGER;
-        FORBID_FAULT;
-        PRECONDITION(pdwSlots != NULL);
-    }
-    CONTRACTL_END
-
-    MethodDesc *result;
-
-    PREFIX_ASSUME(pdwSlots != NULL);
-    DWORD slot = GetSlots()[index];
-
-    // Since the overridden method is in a different module, we
-    // are guaranteed that it is from a different class.
It is - // either an override of a parent virtual method or parent-implemented - // interface, or of an interface that this class has introduced. - - // In the former 2 cases, the slot number will be in the parent's - // vtable section, and we can retrieve the implemented MethodDesc from - // there. In the latter case, we can search through our interface - // map to determine which interface it is from. - - MethodTable *pParentMT = pMT->GetParentMethodTable(); - CONSISTENCY_CHECK(pParentMT != NULL && slot < pParentMT->GetNumVirtuals()); - { - result = pParentMT->GetMethodDescForSlot(slot); - } - - _ASSERTE(result != NULL); - - pImplementedMD[index] = result; - - return result; -} - /////////////////////////////////////////////////////////////////////////////////////// void MethodImpl::SetSize(LoaderHeap *pHeap, AllocMemTracker *pamTracker, DWORD size) { diff --git a/src/coreclr/vm/methodtable.cpp b/src/coreclr/vm/methodtable.cpp index dde37703a384ce..e6c78910ded7cc 100644 --- a/src/coreclr/vm/methodtable.cpp +++ b/src/coreclr/vm/methodtable.cpp @@ -748,6 +748,7 @@ MethodTable* CreateMinimalMethodTable(Module* pContainingModule, #ifdef _DEBUG pClass->SetDebugClassName("dynamicClass"); pMT->SetDebugClassName("dynamicClass"); + pMT->GetAuxiliaryDataForWrite()->SetIsPublished(); #endif LOG((LF_BCL, LL_INFO10, "Level1 - MethodTable created {0x%p}\n", pClass)); @@ -1644,7 +1645,7 @@ MethodTable::DebugDumpVtable(LPCUTF8 szClassName, BOOL fDebug) name, pszName, IsMdFinal(dwAttrs) ? " (final)" : "", - (VOID *)pMD->GetMethodEntryPoint(), + (VOID *)pMD->GetMethodEntryPointIfExists(), pMD->GetSlot() ); OutputDebugStringUtf8(buff); @@ -1658,7 +1659,7 @@ MethodTable::DebugDumpVtable(LPCUTF8 szClassName, BOOL fDebug) pMD->GetClass()->GetDebugClassName(), pszName, IsMdFinal(dwAttrs) ? 
" (final)" : "", - (VOID *)pMD->GetMethodEntryPoint(), + (VOID *)pMD->GetMethodEntryPointIfExists(), pMD->GetSlot() )); } @@ -1771,9 +1772,9 @@ MethodTable::Debug_DumpDispatchMap() nInterfaceIndex, pInterface->GetDebugClassName(), nInterfaceSlotNumber, - pInterface->GetMethodDescForSlot(nInterfaceSlotNumber)->GetName(), + pInterface->GetMethodDescForSlot_NoThrow(nInterfaceSlotNumber)->GetName(), nImplementationSlotNumber, - GetMethodDescForSlot(nImplementationSlotNumber)->GetName())); + GetMethodDescForSlot_NoThrow(nImplementationSlotNumber)->GetName())); it.Next(); } @@ -3448,7 +3449,7 @@ BOOL MethodTable::RunClassInitEx(OBJECTREF *pThrowable) MethodTable * pCanonMT = GetCanonicalMethodTable(); // Call the code method without touching MethodDesc if possible - PCODE pCctorCode = pCanonMT->GetSlot(pCanonMT->GetClassConstructorSlot()); + PCODE pCctorCode = pCanonMT->GetRestoredSlot(pCanonMT->GetClassConstructorSlot()); if (pCanonMT->IsSharedByGenericInstantiations()) { @@ -6274,19 +6275,6 @@ void MethodTable::SetCl(mdTypeDef token) _ASSERTE(GetCl() == token); } -//========================================================================================== -MethodDesc * MethodTable::GetClassConstructor() -{ - CONTRACTL - { - NOTHROW; - GC_NOTRIGGER; - MODE_ANY; - } - CONTRACTL_END; - return GetMethodDescForSlot(GetClassConstructorSlot()); -} - //========================================================================================== DWORD MethodTable::HasFixedAddressVTStatics() { @@ -6475,6 +6463,8 @@ InteropMethodTableData *MethodTable::GetComInteropData() GC_TRIGGERS; } CONTRACTL_END; + _ASSERTE(GetAuxiliaryData()->IsPublished()); + InteropMethodTableData *pData = LookupComInteropData(); if (!pData) @@ -6753,13 +6743,13 @@ MethodDesc *MethodTable::MethodDataObject::GetImplMethodDesc(UINT32 slotNumber) if (pMDRet == NULL) { _ASSERTE(slotNumber < GetNumVirtuals()); - pMDRet = m_pDeclMT->GetMethodDescForSlot(slotNumber); + pMDRet = m_pDeclMT->GetMethodDescForSlot_NoThrow(slotNumber); _ASSERTE(CheckPointer(pMDRet)); pEntry->SetImplMethodDesc(pMDRet); } else { - _ASSERTE(slotNumber >= GetNumVirtuals() || pMDRet == m_pDeclMT->GetMethodDescForSlot(slotNumber)); + _ASSERTE(slotNumber >= GetNumVirtuals() || pMDRet == m_pDeclMT->GetMethodDescForSlot_NoThrow(slotNumber)); } return pMDRet; @@ -6795,7 +6785,7 @@ void MethodTable::MethodDataObject::InvalidateCachedVirtualSlot(UINT32 slotNumbe MethodDesc *MethodTable::MethodDataInterface::GetDeclMethodDesc(UINT32 slotNumber) { WRAPPER_NO_CONTRACT; - return m_pDeclMT->GetMethodDescForSlot(slotNumber); + return m_pDeclMT->GetMethodDescForSlot_NoThrow(slotNumber); } //========================================================================================== @@ -6972,6 +6962,14 @@ DispatchSlot MethodTable::MethodDataInterfaceImpl::GetImplSlot(UINT32 slotNumber return m_pImpl->GetImplSlot(implSlotNumber); } +//========================================================================================== +bool MethodTable::MethodDataInterfaceImpl::IsImplSlotNull(UINT32 slotNumber) +{ + WRAPPER_NO_CONTRACT; + UINT32 implSlotNumber = MapToImplSlotNumber(slotNumber); + return (implSlotNumber == INVALID_SLOT_NUMBER); +} + //========================================================================================== UINT32 MethodTable::MethodDataInterfaceImpl::GetImplSlotNumber(UINT32 slotNumber) { @@ -7625,18 +7623,37 @@ Module *MethodTable::GetDefiningModuleForOpenType() PCODE MethodTable::GetRestoredSlot(DWORD slotNumber) { CONTRACTL { - NOTHROW; + THROWS; 
GC_NOTRIGGER; MODE_ANY; SUPPORTS_DAC; } CONTRACTL_END; + // Since this can allocate memory that won't be freed until the LoaderAllocator is release, we need + // to make sure that the associated MethodTable is fully allocated and permanent. + _ASSERTE(GetAuxiliaryData()->IsPublished()); + // // Keep in sync with code:MethodTable::GetRestoredSlotMT // PCODE slot = GetCanonicalMethodTable()->GetSlot(slotNumber); +#ifndef DACCESS_COMPILE + if (slot == (PCODE)NULL) + { + // This is a slot that has not been filled in yet. This can happen if we are + // looking at a slot which has not yet been given a temporary entry point. + MethodDesc *pMD = GetCanonicalMethodTable()->GetMethodDescForSlot_NoThrow(slotNumber); + PCODE temporaryEntryPoint = pMD->GetTemporaryEntryPoint(); + slot = GetCanonicalMethodTable()->GetSlot(slotNumber); + if (slot == (PCODE)NULL) + { + InterlockedCompareExchangeT(GetCanonicalMethodTable()->GetSlotPtrRaw(slotNumber), temporaryEntryPoint, (PCODE)NULL); + slot = GetCanonicalMethodTable()->GetSlot(slotNumber); + } + } _ASSERTE(slot != (PCODE)NULL); +#endif // DACCESS_COMPILE return slot; } @@ -7712,7 +7729,7 @@ MethodDesc* MethodTable::GetParallelMethodDesc(MethodDesc* pDefMD) return GetParallelMethodDescForEnC(this, pDefMD); #endif // FEATURE_METADATA_UPDATER - return GetMethodDescForSlot(pDefMD->GetSlot()); + return GetMethodDescForSlot_NoThrow(pDefMD->GetSlot()); // TODO! We should probably use the throwing variant where possible } #ifndef DACCESS_COMPILE @@ -7785,7 +7802,7 @@ BOOL MethodTable::HasExplicitOrImplicitPublicDefaultConstructor() return FALSE; } - MethodDesc * pCanonMD = GetMethodDescForSlot(GetDefaultConstructorSlot()); + MethodDesc * pCanonMD = GetMethodDescForSlot_NoThrow(GetDefaultConstructorSlot()); return pCanonMD != NULL && pCanonMD->IsPublic(); } diff --git a/src/coreclr/vm/methodtable.h b/src/coreclr/vm/methodtable.h index 80abb784df1faa..66ecee9073d80c 100644 --- a/src/coreclr/vm/methodtable.h +++ b/src/coreclr/vm/methodtable.h @@ -332,7 +332,11 @@ struct MethodTableAuxiliaryData enum_flag_CanCompareBitsOrUseFastGetHashCode = 0x0004, // Is any field type or sub field type overridden Equals or GetHashCode enum_flag_HasApproxParent = 0x0010, - // enum_unused = 0x0020, +#ifdef _DEBUG + // The MethodTable is in the right state to be published, and will be inevitably. + // Currently DEBUG only as it does not affect behavior in any way in a release build + enum_flag_IsPublished = 0x0020, +#endif enum_flag_IsNotFullyLoaded = 0x0040, enum_flag_DependenciesLoaded = 0x0080, // class and all dependencies loaded up to CLASS_LOADED_BUT_NOT_VERIFIED @@ -506,6 +510,25 @@ struct MethodTableAuxiliaryData } +#ifdef _DEBUG +#ifndef DACCESS_COMPILE + // Used in DEBUG builds to indicate that the MethodTable is in the right state to be published, and will be inevitably. + void SetIsPublished() + { + LIMITED_METHOD_CONTRACT; + m_dwFlags |= (MethodTableAuxiliaryData::enum_flag_IsPublished); + } +#endif + + // The MethodTable is in the right state to be published, and will be inevitably. + // Currently DEBUG only as it does not affect behavior in any way in a release build + bool IsPublished() const + { + LIMITED_METHOD_CONTRACT; + return (VolatileLoad(&m_dwFlags) & enum_flag_IsPublished); + } +#endif // _DEBUG + // The NonVirtualSlots array grows backwards, so this pointer points at just AFTER the first entry in the array // To access, use a construct like... 
GetNonVirtualSlotsArray(pAuxiliaryData)[-(1 + index)] static inline PTR_PCODE GetNonVirtualSlotsArray(PTR_Const_MethodTableAuxiliaryData pAuxiliaryData) @@ -1129,8 +1152,6 @@ class MethodTable // THE CLASS CONSTRUCTOR // - MethodDesc * GetClassConstructor(); - BOOL HasClassConstructor(); void SetHasClassConstructor(); WORD GetClassConstructorSlot(); @@ -1650,8 +1671,16 @@ class MethodTable // Slots <-> the MethodDesc associated with the slot. // + // Get the MethodDesc that implements a given slot + // NOTE: Since this may fill in the slot with a temporary entrypoint if that hasn't happened + // yet, when writing asserts, GetMethodDescForSlot_NoThrow should be used to avoid + // the presence of an assert hiding bugs. MethodDesc* GetMethodDescForSlot(DWORD slot); + // This api produces the same result as GetMethodDescForSlot, but it uses a variation on the + // algorithm that does not allocate a temporary entrypoint for the slot if it doesn't exist. + MethodDesc* GetMethodDescForSlot_NoThrow(DWORD slot); + static MethodDesc* GetMethodDescForSlotAddress(PCODE addr, BOOL fSpeculative = FALSE); PCODE GetRestoredSlot(DWORD slot); @@ -3015,6 +3044,7 @@ public : virtual MethodData *GetImplMethodData() = 0; MethodTable *GetImplMethodTable() { return m_pImplMT; } virtual DispatchSlot GetImplSlot(UINT32 slotNumber) = 0; + virtual bool IsImplSlotNull(UINT32 slotNumber) = 0; // Returns INVALID_SLOT_NUMBER if no implementation exists. virtual UINT32 GetImplSlotNumber(UINT32 slotNumber) = 0; virtual MethodDesc *GetImplMethodDesc(UINT32 slotNumber) = 0; @@ -3127,6 +3157,7 @@ public : virtual MethodData *GetImplMethodData() { LIMITED_METHOD_CONTRACT; return this; } virtual DispatchSlot GetImplSlot(UINT32 slotNumber); + virtual bool IsImplSlotNull(UINT32 slotNumber) { LIMITED_METHOD_CONTRACT; return false; } // Every valid slot on an actual MethodTable has a MethodDesc which is associated with it virtual UINT32 GetImplSlotNumber(UINT32 slotNumber); virtual MethodDesc *GetImplMethodDesc(UINT32 slotNumber); virtual void InvalidateCachedVirtualSlot(UINT32 slotNumber); @@ -3267,6 +3298,12 @@ public : { LIMITED_METHOD_CONTRACT; return this; } virtual DispatchSlot GetImplSlot(UINT32 slotNumber) { WRAPPER_NO_CONTRACT; return DispatchSlot(m_pDeclMT->GetRestoredSlot(slotNumber)); } + virtual bool IsImplSlotNull(UINT32 slotNumber) + { + // Every valid slot on an actual MethodTable has a MethodDesc which is associated with it + LIMITED_METHOD_CONTRACT; + return false; + } virtual UINT32 GetImplSlotNumber(UINT32 slotNumber) { LIMITED_METHOD_CONTRACT; return slotNumber; } virtual MethodDesc *GetImplMethodDesc(UINT32 slotNumber); @@ -3313,6 +3350,7 @@ public : virtual MethodTable *GetImplMethodTable() { WRAPPER_NO_CONTRACT; return m_pImpl->GetImplMethodTable(); } virtual DispatchSlot GetImplSlot(UINT32 slotNumber); + virtual bool IsImplSlotNull(UINT32 slotNumber); virtual UINT32 GetImplSlotNumber(UINT32 slotNumber); virtual MethodDesc *GetImplMethodDesc(UINT32 slotNumber); virtual void InvalidateCachedVirtualSlot(UINT32 slotNumber); @@ -3437,6 +3475,7 @@ public : inline BOOL IsVirtual() const; inline UINT32 GetNumVirtuals() const; inline DispatchSlot GetTarget() const; + inline bool IsTargetNull() const; // Can be called only if IsValid()=TRUE inline MethodDesc *GetMethodDesc() const; diff --git a/src/coreclr/vm/methodtable.inl b/src/coreclr/vm/methodtable.inl index 4557c652b213bc..37600f26440085 100644 --- a/src/coreclr/vm/methodtable.inl +++ b/src/coreclr/vm/methodtable.inl @@ -408,7 +408,7 @@ inline MethodDesc* 
MethodTable::GetMethodDescForSlot(DWORD slot) { CONTRACTL { - NOTHROW; + THROWS; GC_NOTRIGGER; MODE_ANY; } @@ -426,6 +426,49 @@ inline MethodDesc* MethodTable::GetMethodDescForSlot(DWORD slot) return MethodTable::GetMethodDescForSlotAddress(pCode); } +//========================================================================================== +inline MethodDesc* MethodTable::GetMethodDescForSlot_NoThrow(DWORD slot) +{ + CONTRACTL + { + NOTHROW; + GC_NOTRIGGER; + MODE_ANY; + } + CONTRACTL_END; + + PCODE pCode = GetCanonicalMethodTable()->GetSlot(slot); + + if (pCode == (PCODE)NULL) + { + // This code path should only be hit for methods which have not been overriden + MethodTable *pMTToSearchForMethodDesc = this->GetCanonicalMethodTable(); + while (pMTToSearchForMethodDesc != NULL) + { + IntroducedMethodIterator it(pMTToSearchForMethodDesc); + for (; it.IsValid(); it.Next()) + { + if (it.GetMethodDesc()->GetSlot() == slot) + { + return it.GetMethodDesc(); + } + } + + pMTToSearchForMethodDesc = pMTToSearchForMethodDesc->GetParentMethodTable()->GetCanonicalMethodTable(); + } + _ASSERTE(!"We should never reach here, as there should always be a MethodDesc for a slot"); + } + + // This is an optimization that we can take advantage of if we're trying to get the MethodDesc + // for an interface virtual, since their slots point to stub. + if (IsInterface() && slot < GetNumVirtuals()) + { + return MethodDesc::GetMethodDescFromStubAddr(pCode); + } + + return MethodTable::GetMethodDescForSlotAddress(pCode); +} + #ifndef DACCESS_COMPILE //========================================================================================== @@ -435,8 +478,8 @@ inline void MethodTable::CopySlotFrom(UINT32 slotNumber, MethodDataWrapper &hSou MethodDesc *pMD = hSourceMTData->GetImplMethodDesc(slotNumber); _ASSERTE(CheckPointer(pMD)); - _ASSERTE(pMD == pSourceMT->GetMethodDescForSlot(slotNumber)); - SetSlot(slotNumber, pMD->GetInitialEntryPointForCopiedSlot()); + _ASSERTE(pMD == pSourceMT->GetMethodDescForSlot_NoThrow(slotNumber)); + SetSlot(slotNumber, pMD->GetInitialEntryPointForCopiedSlot(NULL, NULL)); } //========================================================================================== @@ -544,6 +587,12 @@ inline DispatchSlot MethodTable::MethodIterator::GetTarget() const { return m_pMethodData->GetImplSlot(m_iCur); } +inline bool MethodTable::MethodIterator::IsTargetNull() const { + LIMITED_METHOD_CONTRACT; + CONSISTENCY_CHECK(IsValid()); + return m_pMethodData->IsImplSlotNull(m_iCur); +} + //========================================================================================== inline MethodDesc *MethodTable::MethodIterator::GetMethodDesc() const { LIMITED_METHOD_CONTRACT; diff --git a/src/coreclr/vm/methodtablebuilder.cpp b/src/coreclr/vm/methodtablebuilder.cpp index 549031c53ba059..def33bc6c063fd 100644 --- a/src/coreclr/vm/methodtablebuilder.cpp +++ b/src/coreclr/vm/methodtablebuilder.cpp @@ -6848,7 +6848,7 @@ VOID MethodTableBuilder::ValidateInterfaceMethodConstraints() // Grab the method token MethodTable * pMTItf = pItf->GetMethodTable(); - CONSISTENCY_CHECK(CheckPointer(pMTItf->GetMethodDescForSlot(it.GetSlotNumber()))); + CONSISTENCY_CHECK(CheckPointer(pMTItf->GetMethodDescForSlot_NoThrow(it.GetSlotNumber()))); mdMethodDef mdTok = pItf->GetMethodTable()->GetMethodDescForSlot(it.GetSlotNumber())->GetMemberDef(); // Default to the current module. 
The code immediately below determines if this @@ -6935,9 +6935,6 @@ VOID MethodTableBuilder::AllocAndInitMethodDescs() SIZE_T sizeOfMethodDescs = 0; // current running size of methodDesc chunk int startIndex = 0; // start of the current chunk (index into bmtMethod array) - // Limit the maximum MethodDescs per chunk by the number of precodes that can fit to a single memory page, - // since we allocate consecutive temporary entry points for all MethodDescs in the whole chunk. - DWORD maxPrecodesPerPage = Precode::GetMaxTemporaryEntryPointsCount(); DWORD methodDescCount = 0; DeclaredMethodIterator it(*this); @@ -6978,8 +6975,7 @@ VOID MethodTableBuilder::AllocAndInitMethodDescs() } if (tokenRange != currentTokenRange || - sizeOfMethodDescs + size > MethodDescChunk::MaxSizeOfMethodDescs || - methodDescCount + currentSlotMethodDescCount > maxPrecodesPerPage) + sizeOfMethodDescs + size > MethodDescChunk::MaxSizeOfMethodDescs) { if (sizeOfMethodDescs != 0) { @@ -7021,10 +7017,10 @@ VOID MethodTableBuilder::AllocAndInitMethodDescChunk(COUNT_T startIndex, COUNT_T PTR_LoaderHeap pHeap = GetLoaderAllocator()->GetHighFrequencyHeap(); void * pMem = GetMemTracker()->Track( - pHeap->AllocMem(S_SIZE_T(sizeof(TADDR) + sizeof(MethodDescChunk) + sizeOfMethodDescs))); + pHeap->AllocMem(S_SIZE_T(sizeof(MethodDescChunk) + sizeOfMethodDescs))); // Skip pointer to temporary entrypoints - MethodDescChunk * pChunk = (MethodDescChunk *)((BYTE*)pMem + sizeof(TADDR)); + MethodDescChunk * pChunk = (MethodDescChunk *)((BYTE*)pMem); COUNT_T methodDescCount = 0; @@ -7045,8 +7041,6 @@ VOID MethodTableBuilder::AllocAndInitMethodDescChunk(COUNT_T startIndex, COUNT_T MethodDesc * pMD = (MethodDesc *)((BYTE *)pChunk + offset); pMD->SetChunkIndex(pChunk); - pMD->SetMethodDescIndex(methodDescCount); - InitNewMethodDesc(pMDMethod, pMD); #ifdef _PREFAST_ @@ -7089,7 +7083,6 @@ VOID MethodTableBuilder::AllocAndInitMethodDescChunk(COUNT_T startIndex, COUNT_T // Reset the chunk index pUnboxedMD->SetChunkIndex(pChunk); - pUnboxedMD->SetMethodDescIndex(methodDescCount); if (bmtGenerics->GetNumGenericArgs() == 0) { pUnboxedMD->SetHasNonVtableSlot(); @@ -9232,7 +9225,7 @@ void MethodTableBuilder::CopyExactParentSlots(MethodTable *pMT) // fix up wrongly-inherited method descriptors MethodDesc* pMD = hMTData->GetImplMethodDesc(i); CONSISTENCY_CHECK(CheckPointer(pMD)); - CONSISTENCY_CHECK(pMD == pMT->GetMethodDescForSlot(i)); + CONSISTENCY_CHECK(pMD == pMT->GetMethodDescForSlot_NoThrow(i)); if (pMD->GetMethodTable() == pMT) continue; @@ -10821,9 +10814,8 @@ MethodTableBuilder::SetupMethodTable2( { for (MethodDescChunk *pChunk = GetHalfBakedClass()->GetChunks(); pChunk != NULL; pChunk = pChunk->GetNextChunk()) { - // Make sure that temporary entrypoints are create for methods. NGEN uses temporary - // entrypoints as surrogate keys for precodes. 
- pChunk->EnsureTemporaryEntryPointsCreated(GetLoaderAllocator(), GetMemTracker()); + // Make sure that eligibility for versionability is computed + pChunk->DetermineAndSetIsEligibleForTieredCompilation(); } } @@ -10863,7 +10855,7 @@ MethodTableBuilder::SetupMethodTable2( // DWORD indirectionIndex = MethodTable::GetIndexOfVtableIndirection(iCurSlot); if (GetParentMethodTable()->GetVtableIndirections()[indirectionIndex] != pMT->GetVtableIndirections()[indirectionIndex]) - pMT->SetSlot(iCurSlot, pMD->GetInitialEntryPointForCopiedSlot()); + pMT->SetSlot(iCurSlot, pMD->GetInitialEntryPointForCopiedSlot(pMT, GetMemTracker())); } else { @@ -10872,7 +10864,13 @@ MethodTableBuilder::SetupMethodTable2( // _ASSERTE(iCurSlot >= bmtVT->cVirtualSlots || ChangesImplementationOfVirtualSlot(iCurSlot)); - PCODE addr = pMD->GetTemporaryEntryPoint(); + if ((pMD->GetSlot() == iCurSlot) && (GetParentMethodTable() == NULL || iCurSlot >= GetParentMethodTable()->GetNumVirtuals())) + continue; // For cases where the method is defining the method desc slot, we don't need to fill it in yet + + pMD->EnsureTemporaryEntryPointCore(GetMemTracker()); + // Use the IfExists variant, as GetTemporaryEntrypoint isn't safe to call during MethodTable construction, as it might allocate + // without using the MemTracker. + PCODE addr = pMD->GetTemporaryEntryPointIfExists(); _ASSERTE(addr != (PCODE)NULL); if (pMD->HasNonVtableSlot()) @@ -10888,7 +10886,7 @@ MethodTableBuilder::SetupMethodTable2( { // The rest of the system assumes that certain methods always have stable entrypoints. // Create them now. - pMD->GetOrCreatePrecode(); + pMD->MarkPrecodeAsStableEntrypoint(); } } } @@ -10937,7 +10935,7 @@ MethodTableBuilder::SetupMethodTable2( MethodDesc* pMD = hMTData->GetImplMethodDesc(i); CONSISTENCY_CHECK(CheckPointer(pMD)); - CONSISTENCY_CHECK(pMD == pMT->GetMethodDescForSlot(i)); + CONSISTENCY_CHECK(pMD == pMT->GetMethodDescForSlot_NoThrow(i)); // This indicates that the method body in this slot was copied here through a methodImpl. // Thus, copy the value of the slot from which the body originally came, in case it was @@ -10947,11 +10945,11 @@ MethodTableBuilder::SetupMethodTable2( { MethodDesc *pOriginalMD = hMTData->GetImplMethodDesc(originalIndex); CONSISTENCY_CHECK(CheckPointer(pOriginalMD)); - CONSISTENCY_CHECK(pOriginalMD == pMT->GetMethodDescForSlot(originalIndex)); + CONSISTENCY_CHECK(pOriginalMD == pMT->GetMethodDescForSlot_NoThrow(originalIndex)); if (pMD != pOriginalMD) { // Copy the slot value in the method's original slot. - pMT->SetSlot(i, pOriginalMD->GetInitialEntryPointForCopiedSlot()); + pMT->SetSlot(i, pOriginalMD->GetInitialEntryPointForCopiedSlot(pMT, GetMemTracker())); hMTData->InvalidateCachedVirtualSlot(i); // Update the pMD to the new method desc we just copied over ourselves with. This will @@ -11008,8 +11006,7 @@ MethodTableBuilder::SetupMethodTable2( // If we fail to find an _IMPLEMENTATION_ for the interface MD, then // we are a ComImportMethod, otherwise we still be a ComImportMethod or // we can be a ManagedMethod. 
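The IsTargetNull checks introduced below exist because reading a slot is no longer free: with lazily allocated temporary entry points a slot may legitimately still be NULL, and paths like GetRestoredSlot fill it on first read with an interlocked compare-exchange. A compact sketch of that fill-on-first-read pattern, with illustrative names and standard atomics in place of the runtime's helpers:

    #include <array>
    #include <atomic>
    #include <cassert>
    #include <cstdint>

    using PCODE = std::uintptr_t;

    // Zero means "not filled yet", matching a freshly allocated slot array.
    static std::array<std::atomic<PCODE>, 4> g_slots{};

    PCODE GetOrFillSlot(std::size_t index, PCODE lazyValue)
    {
        PCODE slot = g_slots[index].load(std::memory_order_acquire);
        if (slot == 0)
        {
            // Racing fillers may both reach here; the compare-exchange lets
            // the first writer win, and everyone re-reads the winning value.
            PCODE expected = 0;
            g_slots[index].compare_exchange_strong(
                expected, lazyValue, std::memory_order_release, std::memory_order_acquire);
            slot = g_slots[index].load(std::memory_order_acquire);
        }
        assert(slot != 0);
        return slot;
    }

    int main()
    {
        assert(GetOrFillSlot(2, 0x42) == 0x42); // first reader installs its value
        assert(GetOrFillSlot(2, 0x99) == 0x42); // later readers see the installed one
    }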
-                DispatchSlot impl(it.GetTarget());
-                if (!impl.IsNull())
+                if (!it.IsTargetNull())
                 {
                     pClsMD = it.GetMethodDesc();
 
@@ -11250,7 +11247,7 @@ void MethodTableBuilder::VerifyVirtualMethodsImplemented(MethodTable::MethodData
     MethodTable::MethodIterator it(hData);
     for (; it.IsValid() && it.IsVirtual(); it.Next())
     {
-        if (it.GetTarget().IsNull())
+        if (it.IsTargetNull())
         {
             MethodDesc *pMD = it.GetDeclMethodDesc();
 
diff --git a/src/coreclr/vm/precode.cpp b/src/coreclr/vm/precode.cpp
index 9e9b36ede97b97..4dbc3e43948346 100644
--- a/src/coreclr/vm/precode.cpp
+++ b/src/coreclr/vm/precode.cpp
@@ -199,32 +199,6 @@ PCODE Precode::TryToSkipFixupPrecode(PCODE addr)
     return 0;
 }
 
-Precode* Precode::GetPrecodeForTemporaryEntryPoint(TADDR temporaryEntryPoints, int index)
-{
-    WRAPPER_NO_CONTRACT;
-    PrecodeType t = PTR_Precode(temporaryEntryPoints)->GetType();
-    SIZE_T oneSize = SizeOfTemporaryEntryPoint(t);
-    return PTR_Precode(temporaryEntryPoints + index * oneSize);
-}
-
-SIZE_T Precode::SizeOfTemporaryEntryPoints(PrecodeType t, int count)
-{
-    WRAPPER_NO_CONTRACT;
-    SUPPORTS_DAC;
-
-    SIZE_T oneSize = SizeOfTemporaryEntryPoint(t);
-    return count * oneSize;
-}
-
-SIZE_T Precode::SizeOfTemporaryEntryPoints(TADDR temporaryEntryPoints, int count)
-{
-    WRAPPER_NO_CONTRACT;
-    SUPPORTS_DAC;
-
-    PrecodeType precodeType = PTR_Precode(temporaryEntryPoints)->GetType();
-    return SizeOfTemporaryEntryPoints(precodeType, count);
-}
-
 #ifndef DACCESS_COMPILE
 
 Precode* Precode::Allocate(PrecodeType t, MethodDesc* pMD,
@@ -384,144 +358,6 @@ void Precode::Reset()
     }
 }
 
-/* static */
-TADDR Precode::AllocateTemporaryEntryPoints(MethodDescChunk * pChunk,
-                                            LoaderAllocator * pLoaderAllocator,
-                                            AllocMemTracker * pamTracker)
-{
-    WRAPPER_NO_CONTRACT;
-
-    MethodDesc* pFirstMD = pChunk->GetFirstMethodDesc();
-
-    int count = pChunk->GetCount();
-
-    // Determine eligibility for tiered compilation
-#ifdef HAS_COMPACT_ENTRYPOINTS
-    bool hasMethodDescVersionableWithPrecode = false;
-#endif
-    {
-        MethodDesc *pMD = pChunk->GetFirstMethodDesc();
-        bool chunkContainsEligibleMethods = pMD->DetermineIsEligibleForTieredCompilationInvariantForAllMethodsInChunk();
-
-#ifdef _DEBUG
-        // Validate every MethodDesc has the same result for DetermineIsEligibleForTieredCompilationInvariantForAllMethodsInChunk
-        MethodDesc *pMDDebug = pChunk->GetFirstMethodDesc();
-        for (int i = 0; i < count; ++i)
-        {
-            _ASSERTE(chunkContainsEligibleMethods == pMDDebug->DetermineIsEligibleForTieredCompilationInvariantForAllMethodsInChunk());
-            pMDDebug = (MethodDesc *)(dac_cast<TADDR>(pMDDebug) + pMDDebug->SizeOf());
-        }
-#endif
-#ifndef HAS_COMPACT_ENTRYPOINTS
-        if (chunkContainsEligibleMethods)
-#endif
-        {
-            for (int i = 0; i < count; ++i)
-            {
-                if (chunkContainsEligibleMethods && pMD->DetermineAndSetIsEligibleForTieredCompilation())
-                {
-                    _ASSERTE(pMD->IsEligibleForTieredCompilation());
-                    _ASSERTE(!pMD->IsVersionableWithPrecode() || pMD->RequiresStableEntryPoint());
-                }
-
-#ifdef HAS_COMPACT_ENTRYPOINTS
-                if (pMD->IsVersionableWithPrecode())
-                {
-                    _ASSERTE(pMD->RequiresStableEntryPoint());
-                    hasMethodDescVersionableWithPrecode = true;
-                }
-#endif
-
-                pMD = (MethodDesc *)(dac_cast<TADDR>(pMD) + pMD->SizeOf());
-            }
-        }
-    }
-
-    PrecodeType t = PRECODE_STUB;
-    bool preallocateJumpStubs = false;
-
-#ifdef HAS_FIXUP_PRECODE
-    // Default to faster fixup precode if possible
-    t = PRECODE_FIXUP;
-#endif // HAS_FIXUP_PRECODE
-
-    SIZE_T totalSize = SizeOfTemporaryEntryPoints(t, count);
-
-#ifdef HAS_COMPACT_ENTRYPOINTS
-    // Note that these are just best guesses to save memory. If we guessed wrong,
-    // we will allocate a new exact type of precode in GetOrCreatePrecode.
-    BOOL fForcedPrecode = hasMethodDescVersionableWithPrecode || pFirstMD->RequiresStableEntryPoint(count > 1);
-
-#ifdef TARGET_ARM
-    if (pFirstMD->RequiresMethodDescCallingConvention(count > 1)
-        || count >= MethodDescChunk::GetCompactEntryPointMaxCount ())
-    {
-        // We do not pass method desc on scratch register
-        fForcedPrecode = TRUE;
-    }
-#endif // TARGET_ARM
-
-    if (!fForcedPrecode && (totalSize > MethodDescChunk::SizeOfCompactEntryPoints(count)))
-        return NULL;
-#endif
-
-    TADDR temporaryEntryPoints;
-    SIZE_T oneSize = SizeOfTemporaryEntryPoint(t);
-    MethodDesc * pMD = pChunk->GetFirstMethodDesc();
-
-    if (t == PRECODE_FIXUP || t == PRECODE_STUB)
-    {
-        LoaderHeap *pStubHeap;
-        if (t == PRECODE_FIXUP)
-        {
-            pStubHeap = pLoaderAllocator->GetFixupPrecodeHeap();
-        }
-        else
-        {
-            pStubHeap = pLoaderAllocator->GetNewStubPrecodeHeap();
-        }
-
-        temporaryEntryPoints = (TADDR)pamTracker->Track(pStubHeap->AllocAlignedMem(totalSize, 1));
-        TADDR entryPoint = temporaryEntryPoints;
-        for (int i = 0; i < count; i++)
-        {
-            ((Precode *)entryPoint)->Init((Precode *)entryPoint, t, pMD, pLoaderAllocator);
-
-            _ASSERTE((Precode *)entryPoint == GetPrecodeForTemporaryEntryPoint(temporaryEntryPoints, i));
-            entryPoint += oneSize;
-
-            pMD = (MethodDesc *)(dac_cast<TADDR>(pMD) + pMD->SizeOf());
-        }
-    }
-    else
-    {
-        _ASSERTE(FALSE);
-        temporaryEntryPoints = (TADDR)pamTracker->Track(pLoaderAllocator->GetPrecodeHeap()->AllocAlignedMem(totalSize, AlignOf(t)));
-        ExecutableWriterHolder<void> entryPointsWriterHolder((void*)temporaryEntryPoints, totalSize);
-
-        TADDR entryPoint = temporaryEntryPoints;
-        TADDR entryPointRW = (TADDR)entryPointsWriterHolder.GetRW();
-        for (int i = 0; i < count; i++)
-        {
-            ((Precode *)entryPointRW)->Init((Precode *)entryPoint, t, pMD, pLoaderAllocator);
-
-            _ASSERTE((Precode *)entryPoint == GetPrecodeForTemporaryEntryPoint(temporaryEntryPoints, i));
-            entryPoint += oneSize;
-            entryPointRW += oneSize;
-
-            pMD = (MethodDesc *)(dac_cast<TADDR>(pMD) + pMD->SizeOf());
-        }
-    }
-
-#ifdef FEATURE_PERFMAP
-    PerfMap::LogStubs(__FUNCTION__, "PRECODE_STUB", (PCODE)temporaryEntryPoints, count * oneSize);
-#endif
-
-    ClrFlushInstructionCache((LPVOID)temporaryEntryPoints, count * oneSize);
-
-    return temporaryEntryPoints;
-}
-
 #endif // !DACCESS_COMPILE
 
 #ifdef DACCESS_COMPILE
@@ -801,13 +637,6 @@ BOOL DoesSlotCallPrestub(PCODE pCode)
     TADDR pInstr = dac_cast<TADDR>(PCODEToPINSTR(pCode));
 
-#ifdef HAS_COMPACT_ENTRYPOINTS
-    if (MethodDescChunk::GetMethodDescFromCompactEntryPoint(pCode, TRUE) != NULL)
-    {
-        return TRUE;
-    }
-#endif
-
     if (!IS_ALIGNED(pInstr, PRECODE_ALIGNMENT))
     {
         return FALSE;
diff --git a/src/coreclr/vm/precode.h b/src/coreclr/vm/precode.h
index 3093ff80a28655..22ae9b1adaf189 100644
--- a/src/coreclr/vm/precode.h
+++ b/src/coreclr/vm/precode.h
@@ -467,12 +467,6 @@ class Precode {
     {
         SUPPORTS_DAC;
         unsigned int align = PRECODE_ALIGNMENT;
-
-#if defined(TARGET_ARM) && defined(HAS_COMPACT_ENTRYPOINTS)
-        // Precodes have to be aligned to allow fast compact entry points check
-        _ASSERTE (align >= sizeof(void*));
-#endif // TARGET_ARM && HAS_COMPACT_ENTRYPOINTS
-
         return align;
     }
 
@@ -585,22 +579,6 @@ class Precode {
         return ALIGN_UP(SizeOf(t), AlignOf(t));
     }
 
-    static Precode * GetPrecodeForTemporaryEntryPoint(TADDR temporaryEntryPoints, int index);
-
-    static SIZE_T SizeOfTemporaryEntryPoints(PrecodeType t, int count);
-    static SIZE_T SizeOfTemporaryEntryPoints(TADDR temporaryEntryPoints, int count);
-
-    static TADDR AllocateTemporaryEntryPoints(MethodDescChunk* pChunk,
-        LoaderAllocator *pLoaderAllocator, AllocMemTracker *pamTracker);
-
-    static DWORD GetMaxTemporaryEntryPointsCount()
-    {
-        SIZE_T maxPrecodeCodeSize = Max(FixupPrecode::CodeSize, StubPrecode::CodeSize);
-        SIZE_T count = GetStubCodePageSize() / maxPrecodeCodeSize;
-        _ASSERTE(count < MAXDWORD);
-        return (DWORD)count;
-    }
-
 #ifdef DACCESS_COMPILE
     void EnumMemoryRegions(CLRDataEnumMemoryFlags flags);
 #endif
diff --git a/src/coreclr/vm/prestub.cpp b/src/coreclr/vm/prestub.cpp
index ad00f7c5b1bbff..de1f7fd09007ba 100644
--- a/src/coreclr/vm/prestub.cpp
+++ b/src/coreclr/vm/prestub.cpp
@@ -145,10 +145,8 @@ PCODE MethodDesc::DoBackpatch(MethodTable * pMT, MethodTable *pDispatchingMT, BO
             }
         }
 
-#ifndef HAS_COMPACT_ENTRYPOINTS
         // Patch the fake entrypoint if necessary
         Precode::GetPrecodeFromEntryPoint(pExpected)->SetTargetInterlocked(pTarget);
-#endif // HAS_COMPACT_ENTRYPOINTS
     }
 
     if (HasNonVtableSlot())
@@ -2553,21 +2551,6 @@ Stub * MakeInstantiatingStubWorker(MethodDesc *pMD)
 }
 #endif // defined(FEATURE_SHARE_GENERIC_CODE)
 
-#if defined (HAS_COMPACT_ENTRYPOINTS) && defined (TARGET_ARM)
-
-extern "C" MethodDesc * STDCALL PreStubGetMethodDescForCompactEntryPoint (PCODE pCode)
-{
-    _ASSERTE (pCode >= PC_REG_RELATIVE_OFFSET);
-
-    pCode = (PCODE) (pCode - PC_REG_RELATIVE_OFFSET + THUMB_CODE);
-
-    _ASSERTE (MethodDescChunk::IsCompactEntryPointAtAddress (pCode));
-
-    return MethodDescChunk::GetMethodDescFromCompactEntryPoint(pCode, FALSE);
-}
-
-#endif // defined (HAS_COMPACT_ENTRYPOINTS) && defined (TARGET_ARM)
-
 //=============================================================================
 // This function generates the real code when from Preemptive mode.
 // It is specifically designed to work with the UnmanagedCallersOnlyAttribute.
@@ -2859,7 +2842,7 @@ PCODE MethodDesc::DoPrestub(MethodTable *pDispatchingMT, CallerGCMode callerGCMo
     {
         pCode = GetStubForInteropMethod(this);
 
-        GetPrecode()->SetTargetInterlocked(pCode);
+        GetOrCreatePrecode()->SetTargetInterlocked(pCode);
 
         RETURN GetStableEntryPoint();
     }
@@ -3284,6 +3267,7 @@ EXTERN_C PCODE STDCALL ExternalMethodFixupWorker(TransitionBlock * pTransitionBl
         if (pMD->IsVtableMethod())
         {
             slot = pMD->GetSlot();
+            pMD->GetMethodTable()->GetRestoredSlot(slot); // Ensure that the target slot has an entrypoint
             pMT = th.IsNull() ? pMD->GetMethodTable() : th.GetMethodTable();
 
             fVirtual = true;
diff --git a/src/coreclr/vm/riscv64/stubs.cpp b/src/coreclr/vm/riscv64/stubs.cpp
index 32d4dc088c4394..507248848c6bb5 100644
--- a/src/coreclr/vm/riscv64/stubs.cpp
+++ b/src/coreclr/vm/riscv64/stubs.cpp
@@ -1511,6 +1511,8 @@ VOID StubLinkerCPU::EmitComputedInstantiatingMethodStub(MethodDesc* pSharedMD, s
 
 void StubLinkerCPU::EmitCallLabel(CodeLabel *target, BOOL fTailCall, BOOL fIndirect)
 {
+    STANDARD_VM_CONTRACT;
+
     BranchInstructionFormat::VariationCodes variationCode = BranchInstructionFormat::VariationCodes::BIF_VAR_JUMP;
     if (!fTailCall)
         variationCode = static_cast<BranchInstructionFormat::VariationCodes>(variationCode | BranchInstructionFormat::VariationCodes::BIF_VAR_CALL);
@@ -1522,10 +1524,14 @@ void StubLinkerCPU::EmitCallLabel(CodeLabel *target, BOOL fTailCall, BOOL fIndir
 
 void StubLinkerCPU::EmitCallManagedMethod(MethodDesc *pMD, BOOL fTailCall)
 {
+    STANDARD_VM_CONTRACT;
+
+    PCODE multiCallableAddr = pMD->TryGetMultiCallableAddrOfCode(CORINFO_ACCESS_PREFER_SLOT_OVER_TEMPORARY_ENTRYPOINT);
+
     // Use direct call if possible.
- if (pMD->HasStableEntryPoint()) + if (multiCallableAddr != (PCODE)NULL) { - EmitCallLabel(NewExternalCodeLabel((LPVOID)pMD->GetStableEntryPoint()), fTailCall, FALSE); + EmitCallLabel(NewExternalCodeLabel((LPVOID)multiCallableAddr), fTailCall, FALSE); } else { diff --git a/src/coreclr/vm/stubmgr.cpp b/src/coreclr/vm/stubmgr.cpp index c24e0c277fd91f..2bcd39957555ee 100644 --- a/src/coreclr/vm/stubmgr.cpp +++ b/src/coreclr/vm/stubmgr.cpp @@ -1009,13 +1009,6 @@ BOOL PrecodeStubManager::DoTraceStub(PCODE stubStartAddress, MethodDesc* pMD = NULL; -#ifdef HAS_COMPACT_ENTRYPOINTS - if (MethodDescChunk::IsCompactEntryPointAtAddress(stubStartAddress)) - { - pMD = MethodDescChunk::GetMethodDescFromCompactEntryPoint(stubStartAddress); - } - else -#endif // HAS_COMPACT_ENTRYPOINTS { // When the target slot points to the fixup part of the fixup precode, we need to compensate // for that to get the actual stub address diff --git a/src/coreclr/vm/virtualcallstub.cpp b/src/coreclr/vm/virtualcallstub.cpp index d24d4182126127..63669da442526c 100644 --- a/src/coreclr/vm/virtualcallstub.cpp +++ b/src/coreclr/vm/virtualcallstub.cpp @@ -985,6 +985,7 @@ PCODE VirtualCallStubManager::GetCallStub(TypeHandle ownerType, DWORD slot) GCX_COOP(); // This is necessary for BucketTable synchronization MethodTable * pMT = ownerType.GetMethodTable(); + pMT->GetRestoredSlot(slot); DispatchToken token; if (pMT->IsInterface()) @@ -2131,7 +2132,7 @@ VirtualCallStubManager::GetRepresentativeMethodDescFromToken( token = DispatchToken::CreateDispatchToken(token.GetSlotNumber()); } CONSISTENCY_CHECK(token.IsThisToken()); - RETURN (pMT->GetMethodDescForSlot(token.GetSlotNumber())); + RETURN (pMT->GetMethodDescForSlot_NoThrow(token.GetSlotNumber())); } //---------------------------------------------------------------------------- @@ -2163,7 +2164,7 @@ MethodDesc *VirtualCallStubManager::GetInterfaceMethodDescFromToken(DispatchToke MethodTable * pMT = GetTypeFromToken(token); PREFIX_ASSUME(pMT != NULL); CONSISTENCY_CHECK(CheckPointer(pMT)); - return pMT->GetMethodDescForSlot(token.GetSlotNumber()); + return pMT->GetMethodDescForSlot_NoThrow(token.GetSlotNumber()); #else // DACCESS_COMPILE