Running the Benchmarks:
This is the process for running dotnet/performance benchmarks on the Mono Runtime. This uses the default AOT configuration with LLVM enabled. This isn’t the fullaot profile used on mobile.
Todo for fullAOT-based tests
- (Completed) Get dotnet/performance tests compiling with the netstandard2.0 runtime: https://gist.github.com/alexanderkyte/592a60fe15c0783485b699d2932ab5e2
- (Not yet completed) Get NS2.0 working with the testing_aot_full profile (was working for microbenchmarks, has since broken.)
Current --aot=llvm Process:
mkdir $HOME/perf_mono && cd $HOME/perf_mono
git clone git@github.com:dotnet/performance.git
git clone git@github.com:mono/mono.git

cd $HOME/perf_mono/mono
./autogen.sh --with-runtime-preset=aot_llvm --disable-boehm --enable-llvm=yes
make -j32

cd $HOME/perf_mono/performance/src/benchmarks/micro

Now you’re going to want to edit common.props to add net462, in order to test mono's desktop AOT profile:
<!-- The Python script can narrow down the TFMs to what the user has asked for -->
<TargetFrameworks>$(PYTHON_SCRIPT_TARGET_FRAMEWORKS)</TargetFrameworks>
<TargetFrameworks Condition="'$(TargetFrameworks)' == '' AND '$(OS)' == 'Windows_NT'">net461;netcoreapp2.0;netcoreapp2.1;netcoreapp2.2;netcoreapp3.0</TargetFrameworks>
- <TargetFrameworks Condition="'$(TargetFrameworks)' == ''">netcoreapp2.0;netcoreapp2.1;netcoreapp2.2;netcoreapp3.0</TargetFrameworks>
+ <TargetFrameworks Condition="'$(TargetFrameworks)' == ''">netcoreapp2.0;netcoreapp2.1;netcoreapp2.2;netcoreapp3.0;net462</TargetFrameworks>
<LangVersion>latest</LangVersion>
<GenerateDocumentationFile>False</GenerateDocumentationFile>
<WarningLevel>4</WarningLevel>

And then continue to execute:
sudo dotnet restore
sudo msbuild /p:Configuration=Release MicroBenchmarks.sln

Now let’s AOT everything. We’re running with sudo in this folder because the microbenchmarks have to be run with AOT under sudo, so everything in this folder ends up owned by root. The -E argument to sudo preserves the environment. Thus you want to run the AOT command described in:
https://gist.github.com/alexanderkyte/e13cce60028bb3c7d8d34f8e770ac777
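As a rough illustration of what that AOT step does, here is a hedged sketch. The exact commands live in the gist above; the directory layout, file names, and per-assembly loop below are assumptions, not the gist's contents, and the sketch only prints what it would do.

```shell
# Create a stand-in layout so this sketch runs anywhere; the real directory
# is produced by the msbuild step above.
mkdir -p artifacts/bin/Tests/Release/net462
touch artifacts/bin/Tests/Release/net462/Example.Tests.dll

# Walk the build output and AOT-compile each assembly with LLVM.
for dll in artifacts/bin/Tests/Release/net462/*.dll; do
  # In a real run this would be something like: sudo -E mono --llvm --aot=llvm "$dll"
  echo "would AOT: $dll"
done > aot-plan.txt
cat aot-plan.txt
```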
And then proceed to run the mono benchmarks by making and running the following script.
https://gist.github.com/alexanderkyte/5b270c5e5e29c90f769de9487bede3c4
We’re doing it this way because it enables us to enforce a disable list for the benchmarks that time out, hang, or otherwise crash the benchmark suite. Save the script as run-mono.sh and execute it with:

export MONO_PATH=$HOME/perf_mono/mono/mcs/class/lib/net_4_x-macos/ && sudo -E bash run-mono.sh ~/perf_mono/mono/mono/mini/mono-sgen

This will leave files in ~/perf_mono/performance/artifacts/bin/Tests/Release/net462/BenchmarkDotNet.Artifacts/results. Copy them over with:
sudo cp -r ~/perf_mono/performance/artifacts/bin/Tests/Release/net462/BenchmarkDotNet.Artifacts ~/perf_mono/mono_BenchmarkDotNet.Artifacts
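For reference, a minimal sketch of a run-mono.sh-style wrapper with a disable list. The actual script is the gist linked above; the disable-list format, file names, and assembly glob here are hypothetical, and the sketch only prints what it would run.

```shell
# Hypothetical disable list: one benchmark assembly name per line.
cat > disabled.txt <<'EOF'
Hanging.Tests.dll
EOF

# Stand-ins for the benchmark assemblies so the sketch runs anywhere.
touch Good.Tests.dll Hanging.Tests.dll

MONO=${1:-mono}   # the real script takes the path to mono-sgen as $1
for dll in *.Tests.dll; do
  # Skip anything on the disable list (benchmarks that hang or crash).
  if grep -qx "$dll" disabled.txt; then
    echo "skipping $dll (on disable list)"
    continue
  fi
  # In a real run this would be: "$MONO" --llvm "$dll"
  echo "would run: $MONO --llvm $dll"
done > run-plan.txt
cat run-plan.txt
```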
To compare performance against .NET Core, run the tests with the following script:
https://gist.github.com/alexanderkyte/f19035d26419471a14289135723833c2
with
sudo bash run-dotnet.sh ~/perf_mono
which will leave the BenchmarkDotNet.Artifacts at ~/perf_mono/performance/artifacts/bin/MicroBenchmarks/Debug/netcoreapp2.0/BenchmarkDotNet.Artifacts. Copy it to ~/perf_mono/dotnet_BenchmarkDotNet.Artifacts:

sudo cp -r ~/perf_mono/performance/artifacts/bin/MicroBenchmarks/Debug/netcoreapp2.0/BenchmarkDotNet.Artifacts ~/perf_mono/dotnet_BenchmarkDotNet.Artifacts
Reporting Results
Now that we have our two BenchmarkDotNet.Artifacts folders, place the following files in the same directory and build them: https://gist.github.com/alexanderkyte/6a85e0a6882685d84e7c97c66cc4d4e8
Now let's run it.
mkdir -p $HOME/perf_mono/BenchmarkParser/mono_output
dotnet run ./obj/Debug/netcoreapp2.1/BenchmarkParser.dll $HOME/perf_mono/mono_BenchmarkDotNet.Artifacts $HOME/perf_mono/BenchmarkParser/mono_output
mkdir -p $HOME/perf_mono/BenchmarkParser/dotnet_output
dotnet run ./obj/Debug/netcoreapp2.1/BenchmarkParser.dll $HOME/perf_mono/dotnet_BenchmarkDotNet.Artifacts $HOME/perf_mono/BenchmarkParser/dotnet_output
Now you'll have two CSV files to work with. Load them into Excel and add a column containing the ratio of the dotnet execution time divided by the mono execution time. Both dotnet and mono will have some degree of missing data, requiring manual scrubbing and removal of failed tests.
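The ratio column can also be computed outside Excel. A minimal sketch with join and awk, assuming a simplified two-column Method,Mean layout; real BenchmarkDotNet CSV reports carry more columns, and the sample values here are made up.

```shell
# Tiny sample reports so the sketch is self-contained.
cat > mono.csv <<'EOF'
Method,Mean
Foo,200
Bar,100
EOF
cat > dotnet.csv <<'EOF'
Method,Mean
Foo,100
Bar,50
EOF

# Sort both reports by benchmark name, join on it, and append the ratio
# of dotnet time to mono time.
tail -n +2 mono.csv   | sort -t, -k1,1 > mono.sorted
tail -n +2 dotnet.csv | sort -t, -k1,1 > dotnet.sorted
join -t, mono.sorted dotnet.sorted \
  | awk -F, 'BEGIN { print "Method,MonoMean,DotnetMean,Ratio" }
             { printf "%s,%s,%s,%.3f\n", $1, $2, $3, $3 / $2 }' > ratio.csv
cat ratio.csv
```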
A script that makes the direct comparison CSV you're looking for can be run by making the below changes to Program.cs: https://gist.github.com/alexanderkyte/94151b7d778723d59bc8bb0ea91b58ab
Results for 02/20/2019
The .xls will be attached, along with the intermediate results. I had to rename Report.csv to Report.csv.txt to make Github happy.
A much more colorful version can be found here. (Rename to .xls)
But it may be easier to work with the version that is sorted by the ratio of performance difference.
Report-sorted-perf.xlsx
The LLVM textual IR of the Benchmarks assembly can be found here:
temp.ll.txt.zip
and it has names that correlate with the source names from the Report and from
https://github.com/dotnet/performance/tree/master/src/benchmarks/micro
Related Issues
Methodology
When using this as a central place to organize work, do the following:
- Identify a slower suite or test.
- Post a profile of running time; identify where the time is spent.
- If the time is spent in emitted code:
  - Post the LLVM IR for the emitted code.
  - Use the label "epic: LLVM CodeGen Quality".
- When done:
  - Post the updated running time of the benchmark. The PR should have before/after numbers. Describe the change in terms of standard deviations from the mean, as well as objective time scores.
  - Mention the issue as fixed by the PR / leave a Github trail. Mark the checkbox on this PR.
- Periodically rerun all benchmarks. Indicate tests that got faster or slower by 1.5 standard deviations from the mean since the last run.
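The "1.5 standard deviations" check in the last step can be sketched with awk. The runs.csv layout (Method,OldMean,NewMean) and its values are hypothetical; the sketch computes each test's relative change between runs, then flags tests whose change sits more than 1.5 population standard deviations from the mean change.

```shell
# Hypothetical before/after means for four benchmarks.
cat > runs.csv <<'EOF'
Method,OldMean,NewMean
A,100,101
B,100,99
C,100,100
D,100,180
EOF

tail -n +2 runs.csv | awk -F, '
  # Relative change per test: new mean divided by old mean.
  { name[NR] = $1; chg[NR] = $3 / $2; sum += chg[NR]; sumsq += chg[NR] * chg[NR]; n++ }
  END {
    mean = sum / n
    sd = sqrt(sumsq / n - mean * mean)   # population std dev of the changes
    for (i = 1; i <= n; i++) {
      if (sd > 0 && chg[i] - mean >  1.5 * sd) print name[i] ",slower"
      if (sd > 0 && chg[i] - mean < -1.5 * sd) print name[i] ",faster"
    }
  }' > flagged.csv
cat flagged.csv
```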
Issues list
- GetMember suite: [perf] Much slower GetMethod and GetField than .NET Core #13029
- Array.Copy slowdown spotted by the Perf.Array suite [perf] Major array copying performance difference #13133
- CultureInfo String comparison is consistently much slower [perf] Needed optimization around CultureInfo.IsSuffix #13136