Commit a602838

Update link format and example sections in readme (microsoft#1729)
* Fix broken link and minor wording updates
* Update links to use relative paths
* Update sample section organization
* Fix a few more links
* Update links to relative paths
* Fix link urls
* Update links to relative paths
* Update link to perf test doc page
* Update links to relative paths
* Update to relative paths for links
* Update link
1 parent a0ba25f commit a602838

6 files changed

Lines changed: 68 additions & 55 deletions


README.md

Lines changed: 41 additions & 33 deletions
@@ -35,53 +35,53 @@
***
# Key Features
## Run any ONNX model
- ONNX Runtime provides comprehensive support of the ONNX spec and can be used to run all models based on ONNX v1.2.1 and higher. See version compatibility details [here](https://github.com/microsoft/onnxruntime/blob/master/docs/Versioning.md).
+ ONNX Runtime provides comprehensive support of the ONNX spec and can be used to run all models based on ONNX v1.2.1 and higher. See version compatibility details [here](./docs/Versioning.md).

**Traditional ML support**

In addition to DNN models, ONNX Runtime fully supports the [ONNX-ML profile](https://github.com/onnx/onnx/blob/master/docs/Operators-ml.md) of the ONNX spec for traditional ML scenarios.

- For the full set of operators and types supported, please see [operator documentation](https://github.com/microsoft/onnxruntime/blob/master/docs/OperatorKernels.md)
+ For the full set of operators and types supported, please see [operator documentation](./docs/OperatorKernels.md)

- *Note: Some operators not supported in the current ONNX version may be available as a [Contrib Operator](https://github.com/microsoft/onnxruntime/blob/master/docs/ContribOperators.md)*
+ *Note: Some operators not supported in the current ONNX version may be available as a [Contrib Operator](./docs/ContribOperators.md)*


## High Performance
ONNX Runtime supports both CPU and GPU. Using various graph optimizations and accelerators, ONNX Runtime can provide lower latency compared to other runtimes for faster end-to-end customer experiences and minimized machine utilization costs.

Currently ONNX Runtime supports the following accelerators:
* MLAS (Microsoft Linear Algebra Subprograms)
- * [MKL-DNN](https://github.com/microsoft/onnxruntime/blob/master/docs/execution_providers/MKL-DNN-ExecutionProvider.md) - [subgraph optimization](https://github.com/microsoft/onnxruntime/blob/master/docs/execution_providers/MKL-DNN-Subgraphs.md)
+ * [MKL-DNN](./docs/execution_providers/MKL-DNN-ExecutionProvider.md) - [subgraph optimization](./docs/execution_providers/MKL-DNN-Subgraphs.md)
* MKL-ML
- * [Intel nGraph](https://github.com/microsoft/onnxruntime/blob/master/docs/execution_providers/nGraph-ExecutionProvider.md)
+ * [Intel nGraph](./docs/execution_providers/nGraph-ExecutionProvider.md)
* CUDA
- * [TensorRT](https://github.com/microsoft/onnxruntime/blob/master/docs/execution_providers/TensorRT-ExecutionProvider.md)
- * [OpenVINO](https://github.com/microsoft/onnxruntime/blob/master/docs/execution_providers/OpenVINO-ExecutionProvider.md)
- * [Nuphar](docs/execution_providers/Nuphar-ExecutionProvider.md)
+ * [TensorRT](./docs/execution_providers/TensorRT-ExecutionProvider.md)
+ * [OpenVINO](./docs/execution_providers/OpenVINO-ExecutionProvider.md)
+ * [Nuphar](./docs/execution_providers/Nuphar-ExecutionProvider.md)

- Not all variations are supported in the [official release builds](#apis-and-official-builds), but can be built from source following [these instructions](https://github.com/Microsoft/onnxruntime/blob/master/BUILD.md). Find Dockerfiles [here](https://github.com/microsoft/onnxruntime/tree/master/dockerfiles).
+ Not all variations are supported in the [official release builds](#apis-and-official-builds), but can be built from source following [these instructions](./BUILD.md). Find Dockerfiles [here](./dockerfiles).
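
Which of these accelerators a given package actually supports depends on how it was built. As a quick check, here is a minimal sketch (not part of this commit) that queries an installed `onnxruntime` or `onnxruntime-gpu` Python wheel; the exact output varies by build:

```python
# Illustrative sketch: inspect what an installed ONNX Runtime wheel was built with.
import onnxruntime as rt

print(rt.__version__)                 # installed ONNX Runtime version
print(rt.get_device())                # 'CPU' or 'GPU', depending on the build
print(rt.get_available_providers())   # execution providers compiled into this build
```
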

We are continuously working to integrate new execution providers for further improvements in latency and efficiency. If you are interested in contributing a new execution provider, please see [this page](docs/AddingExecutionProvider.md).


## Cross Platform
- [API documentation and package installation](https://github.com/microsoft/onnxruntime#installation)
+ [API documentation and package installation](#installation)

- ONNX Runtime is available for Linux, Windows, Mac with Python, C#, and C APIs, with more to come!
- If you have specific scenarios that are not currently supported, please share your suggestions and scenario details via [Github Issues](https://github.com/microsoft/onnxruntime/issues).
+ ONNX Runtime is currently available for Linux, Windows, and Mac with Python, C#, C++, and C APIs.
+ If you have specific scenarios that are not supported, please share your suggestions and scenario details via [Github Issues](https://github.com/microsoft/onnxruntime/issues).
***
# Installation
**Quick Start:** The [ONNX-Ecosystem Docker container image](https://github.com/onnx/onnx-docker/tree/master/onnx-ecosystem) is available on Dockerhub and includes ONNX Runtime (CPU, Python), dependencies, tools to convert from various frameworks, and Jupyter notebooks to help get started.

- Additional dockerfiles for some features can be found [here](https://github.com/microsoft/onnxruntime/tree/master/dockerfiles).
+ Additional dockerfiles for some features can be found [here](./dockerfiles).

## APIs and Official Builds

### API Documentation
* [Python](https://aka.ms/onnxruntime-python)
* [C](docs/C_API.md)
* [C#](docs/CSharp_API.md)
- * [C++](https://github.com/microsoft/onnxruntime/blob/master/include/onnxruntime/core/session/onnxruntime_cxx_api.h)
+ * [C++](./include/onnxruntime/core/session/onnxruntime_cxx_api.h)
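
For orientation, a minimal sketch of the Python API listed above; `model.onnx` and the 1x3x224x224 dummy input are placeholders for your own model and input shape:

```python
# Illustrative sketch: load an ONNX model and run one inference with the Python API.
import numpy as np
import onnxruntime as rt

sess = rt.InferenceSession("model.onnx")                # path to your ONNX model
input_name = sess.get_inputs()[0].name                  # discover the model's first input name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)   # dummy input; match your model's shape
outputs = sess.run(None, {input_name: x})               # None -> return all model outputs
print(outputs[0].shape)
```
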

### Official Builds
| | CPU (MLAS+Eigen) | CPU (MKL-ML) | GPU (CUDA)
@@ -100,7 +100,7 @@ system.
* Version: **CUDA 10.0** and **cuDNN 7.3**
* Linux Python packages require **CUDA 10.1** and **cuDNN 7.6**
* Older ONNX Runtime releases: used **CUDA 9.1** and **cuDNN 7.1** - please refer to [prior release notes](https://github.com/microsoft/onnxruntime/releases) for more details.
- * Python binaries are compatible with **Python 3.5-3.7**. See [Python Dev Notes](https://github.com/microsoft/onnxruntime/blob/master/docs/Python_Dev_Notes.md). If using `pip` to download the Python binaries, run `pip install --upgrade pip` prior to downloading.
+ * Python binaries are compatible with **Python 3.5-3.7**. See [Python Dev Notes](./docs/Python_Dev_Notes.md). If using `pip` to download the Python binaries, run `pip install --upgrade pip` prior to downloading.
* Certain operators make use of system locales. Installation of the **English language package** and configuring `en_US.UTF-8 locale` is required.
* For Ubuntu, install the [language-pack-en package](https://packages.ubuntu.com/search?keywords=language-pack-en)
* Run the following commands:
@@ -111,7 +111,7 @@ system.
## Building from Source
If additional build flavors are needed, please find instructions on building from source at [Build ONNX Runtime](BUILD.md). For production scenarios, it's strongly recommended to build from an [official release branch](https://github.com/microsoft/onnxruntime/releases).

- Dockerfiles are available [here](https://github.com/microsoft/onnxruntime/tree/faxu-doc-updates/tools/ci_build/github/linux/docker) to help you get started.
+ Dockerfiles are available [here](./tools/ci_build/github/linux/docker) to help you get started.

***
# Usage
@@ -127,24 +127,25 @@ Dockerfiles are available [here](https://github.com/microsoft/onnxruntime/tree/f
## Deploying ONNX Runtime
ONNX Runtime can be deployed to the cloud for model inferencing using [Azure Machine Learning Services](https://azure.microsoft.com/en-us/services/machine-learning-service). See [detailed instructions](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-build-deploy-onnx) and [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/deployment/onnx).

- **ONNX Runtime Server (beta)** is a hosted application for serving ONNX models using ONNX Runtime, providing a REST API for prediction. Usage details can be found [here](https://github.com/microsoft/onnxruntime/blob/master/docs/ONNX_Runtime_Server_Usage.md), and image installation instructions are [here](https://github.com/microsoft/onnxruntime/tree/master/dockerfiles#onnx-runtime-server-preview).
+ **ONNX Runtime Server (beta)** is a hosted application for serving ONNX models using ONNX Runtime, providing a REST API for prediction. Usage details can be found [here](./docs/ONNX_Runtime_Server_Usage.md), and image installation instructions are [here](./dockerfiles#onnx-runtime-server-preview).

## Performance Tuning
- ONNX Runtime is open and extensible, supporting a broad set of configurations and execution providers for model acceleration. For performance tuning guidance, please see [this page](https://github.com/microsoft/onnxruntime/blob/master/docs/ONNX_Runtime_Perf_Tuning.md).
+ ONNX Runtime is open and extensible, supporting a broad set of configurations and execution providers for model acceleration. For performance tuning guidance, please see [this page](./docs/ONNX_Runtime_Perf_Tuning.md).
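
As a companion to the perf tuning guide linked above, a hedged sketch of common knobs exposed through `SessionOptions` in the Python API; option names can differ slightly between releases, the best values are model- and hardware-dependent, and `model.onnx` is a placeholder:

```python
# Illustrative sketch: common session-level tuning options in the Python API.
import onnxruntime as rt

so = rt.SessionOptions()
so.intra_op_num_threads = 4                                        # threads used within an operator
so.graph_optimization_level = rt.GraphOptimizationLevel.ORT_ENABLE_ALL  # enable all graph optimizations
so.enable_profiling = True                                         # write a JSON profile for later analysis

sess = rt.InferenceSession("model.onnx", sess_options=so)
```
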

***
# Examples and Tutorials
## Python
- * [Basic Inferencing Sample](https://github.com/onnx/onnx-docker/blob/master/onnx-ecosystem/inference_demos/simple_onnxruntime_inference.ipynb)
- * [Inferencing (Resnet50)](https://github.com/onnx/onnx-docker/blob/master/onnx-ecosystem/inference_demos/resnet50_modelzoo_onnxruntime_inference.ipynb)
- * [Inferencing samples](https://github.com/onnx/onnx-docker/tree/master/onnx-ecosystem/inference_demos) using [ONNX-Ecosystem Docker image](https://github.com/onnx/onnx-docker/tree/master/onnx-ecosystem)
- * [Train, Convert, and Inference a SKL pipeline](https://microsoft.github.io/onnxruntime/auto_examples/plot_train_convert_predict.html#sphx-glr-auto-examples-plot-train-convert-predict-py)
- * [Convert and Inference a Keras model](https://microsoft.github.io/onnxruntime/auto_examples/plot_dl_keras.html#sphx-glr-auto-examples-plot-dl-keras-py)
- * [ONNX Runtime Server: SSD Single Shot MultiBox Detector](https://github.com/onnx/tutorials/blob/master/tutorials/OnnxRuntimeServerSSDModel.ipynb)
- * [Running ONNX model tests](https://github.com/microsoft/onnxruntime/blob/master/docs/Model_Test.md)
+ **Inference only**
+ * [Model Inferencing (single node Sigmoid)](https://github.com/onnx/onnx-docker/blob/master/onnx-ecosystem/inference_demos/simple_onnxruntime_inference.ipynb)
+ * [Model Inferencing (Resnet50)](https://github.com/onnx/onnx-docker/blob/master/onnx-ecosystem/inference_demos/resnet50_modelzoo_onnxruntime_inference.ipynb)
+ * [Model Inferencing](https://github.com/onnx/onnx-docker/tree/master/onnx-ecosystem/inference_demos) using [ONNX-Ecosystem Docker image](https://github.com/onnx/onnx-docker/tree/master/onnx-ecosystem)
+ * [Model Inferencing using ONNX Runtime Server (SSD Single Shot MultiBox Detector)](https://github.com/onnx/tutorials/blob/master/tutorials/OnnxRuntimeServerSSDModel.ipynb)

+ **Inference with model conversion**
+ * [SKL Pipeline: Train, Convert, and Inference](https://microsoft.github.io/onnxruntime/auto_examples/plot_train_convert_predict.html#sphx-glr-auto-examples-plot-train-convert-predict-py)
+ * [Keras: Convert and Inference](https://microsoft.github.io/onnxruntime/auto_examples/plot_dl_keras.html#sphx-glr-auto-examples-plot-dl-keras-py)

- **Deployment with AzureML**
+ **Inference and deploy through AzureML**
* Inferencing using [ONNX Model Zoo](https://github.com/onnx/models) models:
  * [Facial Expression Recognition](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/onnx-inference-facial-expression-recognition-deploy.ipynb)
  * [MNIST Handwritten Digits](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/onnx-inference-mnist-deploy.ipynb)
@@ -153,19 +154,26 @@ ONNX Runtime is open and extensible, supporting a broad set of configurations an
  * [TinyYolo](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/onnx-convert-aml-deploy-tinyyolo.ipynb)
* Train a model with PyTorch and Inferencing:
  * [MNIST](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/onnx-train-pytorch-aml-deploy-mnist.ipynb)
-
- * Inferencing with TensorRT Execution Provider on GPU (AKS)
-   * [FER+](https://github.com/microsoft/onnxruntime/blob/master/docs/python/notebooks/onnx-inference-byoc-gpu-cpu-aks.ipynb)
+
+ * GPU: Inferencing with TensorRT Execution Provider (AKS)
+   * [FER+](./docs/python/notebooks/onnx-inference-byoc-gpu-cpu-aks.ipynb)
+
+ **Inference and Deploy with Azure IoT Edge**
+ * [Intel OpenVINO](http://aka.ms/onnxruntime-openvino)
+ * [NVIDIA TensorRT on Jetson Nano (ARM64)](http://aka.ms/onnxruntime-arm64)
+
+ **Other**
+ * [Running ONNX model tests](./docs/Model_Test.md)
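
To complement the conversion tutorials grouped above, a minimal convert-then-infer sketch using the `skl2onnx` converter; it assumes `scikit-learn`, `skl2onnx`, and `onnxruntime` are installed, and the input name `float_input` is just a placeholder:

```python
# Illustrative sketch: train a scikit-learn model, convert it to ONNX, run it with ONNX Runtime.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
import onnxruntime as rt

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=200).fit(X, y)

# Convert the fitted scikit-learn model to an ONNX ModelProto.
onnx_model = convert_sklearn(
    clf, initial_types=[("float_input", FloatTensorType([None, 4]))])

# Run the converted model with ONNX Runtime.
sess = rt.InferenceSession(onnx_model.SerializeToString())
preds = sess.run(None, {"float_input": X[:5].astype(np.float32)})
print(preds[0])   # predicted labels for the first five rows
```
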


## C#
- * [Inferencing Tutorial](https://github.com/microsoft/onnxruntime/blob/master/docs/CSharp_API.md#getting-started)
+ * [Inferencing Tutorial](./docs/CSharp_API.md#getting-started)


## C/C++
- * [Basic Inferencing (SqueezeNet) - C](https://github.com/microsoft/onnxruntime/blob/master/csharp/test/Microsoft.ML.OnnxRuntime.EndToEndTests.Capi/C_Api_Sample.cpp)
- * [Basic Inferencing (SqueezeNet) - C++](https://github.com/microsoft/onnxruntime/blob/master/csharp/test/Microsoft.ML.OnnxRuntime.EndToEndTests.Capi/CXX_Api_Sample.cpp)
- * [Inferencing (MNIST) - C++](https://github.com/microsoft/onnxruntime/tree/master/samples/c_cxx/MNIST)
+ * [C - Inferencing (SqueezeNet)](./csharp/test/Microsoft.ML.OnnxRuntime.EndToEndTests.Capi/C_Api_Sample.cpp)
+ * [C++ - Inferencing (SqueezeNet)](./csharp/test/Microsoft.ML.OnnxRuntime.EndToEndTests.Capi/CXX_Api_Sample.cpp)
+ * [C++ - Inferencing (MNIST)](./samples/c_cxx/MNIST)

***
# Technical Design Details

docs/CSharp_API.md

Lines changed: 1 addition & 1 deletion
@@ -34,7 +34,7 @@ You can load your input data into Tensor<T> objects in several ways. A simple ex
int[] dimensions; // and the dimensions of the input are stored here
Tensor<float> t1 = new DenseTensor<float>(sourceData, dimensions);

- Here is a [complete sample code](https://github.com/Microsoft/onnxruntime/tree/master/csharp/sample/Microsoft.ML.OnnxRuntime.InferenceSample) that runs inference on a pretrained model.
+ Here is a [complete sample code](../csharp/sample/Microsoft.ML.OnnxRuntime.InferenceSample) that runs inference on a pretrained model.

## Running on GPU (Optional)
If using the GPU package, simply use the appropriate SessionOptions when creating an InferenceSession.

docs/ONNX_Runtime_Server_Usage.md

Lines changed: 4 additions & 4 deletions
@@ -1,4 +1,4 @@
- <h1><span style="color:red">Note: ONNX Runtime Server is still in beta state. It's currently not ready for production environments.</span></h1>
+ <h1><span style="color:red">Note: ONNX Runtime Server is still in beta state and may not be ready for production environments.</span></h1>

# How to Use ONNX Runtime Server for Prediction

@@ -43,7 +43,7 @@ http://<your_ip_address>:<port>/v1/models/<your-model-name>/versions/<your-versi

### Request and Response Payload

- The request and response need to be a protobuf message. The Protobuf definition can be found [here](https://github.com/Microsoft/onnxruntime/blob/master/onnxruntime/server/protobuf/predict.proto).
+ The request and response need to be a protobuf message. The Protobuf definition can be found [here](../onnxruntime/server/protobuf/predict.proto).

A protobuf message can have two formats: binary and JSON. The binary payload usually has better latency, while the JSON format is easier for humans to read.

@@ -74,7 +74,7 @@ A simple Jupyter notebook demonstrating the usage of ONNX Runtime server to host

## GRPC Endpoint

- If you prefer using the GRPC endpoint, the protobuf can be found [here](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/server/protobuf/prediction_service.proto). You can generate your client from it and make a GRPC call to the server. To learn more about how to generate the client code and call the server, please refer to [the GRPC tutorials](https://grpc.io/docs/tutorials/).
+ If you prefer using the GRPC endpoint, the protobuf can be found [here](../onnxruntime/server/protobuf/prediction_service.proto). You can generate your client from it and make a GRPC call to the server. To learn more about how to generate the client code and call the server, please refer to [the GRPC tutorials](https://grpc.io/docs/tutorials/).

## Advanced Topics

@@ -91,7 +91,7 @@ For easy tracking of requests, we provide the following header fields:

### rsyslog Support

- If you prefer using an ONNX Runtime Server with [rsyslog](https://www.rsyslog.com/) support ([build instructions](https://github.com/microsoft/onnxruntime/blob/master/BUILD.md#build-onnx-runtime-server-on-linux)), you should be able to see the log in `/var/log/syslog` after the ONNX Runtime Server runs. For details about how to use rsyslog, please refer to [this guide](https://www.rsyslog.com/category/guides-for-rsyslog/).
+ If you prefer using an ONNX Runtime Server with [rsyslog](https://www.rsyslog.com/) support ([build instructions](../BUILD.md#build-onnx-runtime-server-on-linux)), you should be able to see the log in `/var/log/syslog` after the ONNX Runtime Server runs. For details about how to use rsyslog, please refer to [this guide](https://www.rsyslog.com/category/guides-for-rsyslog/).

## Report Issues

docs/execution_providers/MKL-DNN-ExecutionProvider.md

Lines changed: 4 additions & 4 deletions
@@ -26,10 +26,10 @@ InferenceSession session_object{so};
session_object.RegisterExecutionProvider(std::make_unique<::onnxruntime::MKLDNNExecutionProvider>());
status = session_object.Load(model_file_name);
```
- The C API details are [here](https://github.com/Microsoft/onnxruntime/blob/master/docs/C_API.md#c-api).
+ The C API details are [here](../C_API.md#c-api).

## Python
- When using the Python wheel from an ONNX Runtime build that includes the MKL-DNN execution provider, it will be automatically prioritized over the CPU execution provider. Python API details are [here](https://github.com/Microsoft/onnxruntime/blob/master/docs/python/api_summary.rst#api-summary).
+ When using the Python wheel from an ONNX Runtime build that includes the MKL-DNN execution provider, it will be automatically prioritized over the CPU execution provider. Python API details are [here](https://aka.ms/onnxruntime-python).
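
A quick way to confirm that prioritization from Python is to inspect the providers reported by the runtime and by a session. This is an illustrative sketch only: provider names vary between releases, and `model.onnx` is a placeholder path.

```python
# Illustrative sketch: check which execution providers are available and which a session uses.
import onnxruntime as rt

print(rt.get_available_providers())        # providers compiled into this build

sess = rt.InferenceSession("model.onnx")   # "model.onnx" is a placeholder path
print(sess.get_providers())                # providers in use, highest priority first
```
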

- ## Using onnxruntime_perf_test and onnx_test_runner
- You can test the performance of your ONNX Model with the MKL-DNN execution provider. Use the flag -e mkldnn in [onnxruntime_perf_test](https://github.com/Microsoft/onnxruntime/tree/master/onnxruntime/test/perftest#onnxruntime-performance-test) and [onnx_test_runner](https://github.com/Microsoft/onnxruntime/tree/master/onnxruntime/test/onnx/README.txt)..
+ ## Performance Tuning
+ For performance tuning, please see guidance on this page: [ONNX Runtime Perf Tuning](../ONNX_Runtime_Perf_Tuning.md)
