Intel® Distribution of OpenVINO™ toolkit Docker dev image based on Red Hat* UBI 8.
The Intel® Distribution of OpenVINO™ toolkit enables quick deployment of applications and solutions that emulate human vision. Based on convolutional neural networks (CNNs), the toolkit extends computer vision (CV) workloads across Intel® hardware, maximizing performance.
- Website
- Release Notes
- Documentation
- Support forum
You can use the Docker CI Framework for Intel® Distribution of OpenVINO™ toolkit to generate a Dockerfile, then build, test, and deploy an image with the Intel® Distribution of OpenVINO™ toolkit. You can reuse the available Dockerfiles, add your own layers, and customize the OpenVINO™ image for your needs, as shown in the sketch below.
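For example, the simplest way to customize the image without the framework is to put your own layer on top of the published dev image. This is a minimal sketch, not the framework's own workflow: the extra Python package, the Dockerfile.custom file name, and the my-openvino-dev tag are hypothetical placeholders for your additions.
cat > Dockerfile.custom <<'EOF'
# Start from the published OpenVINO dev image (pin a specific tag for reproducible builds)
FROM registry.connect.redhat.com/intel/openvino-dev
# Hypothetical extra dependency installed for the image's default (non-root) user
RUN python3 -m pip install --no-cache-dir --user numpy
EOF
docker build -f Dockerfile.custom -t my-openvino-dev .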
Content:
- Since the 2021.4.1 release, images include the Inference Engine, OpenCV, samples and demos, and Python tools for development: Model Optimizer, Post-Training Optimization Tool, Accuracy Checker, and Open Model Zoo tools (Downloader, Converter). CPU and GPU devices are supported.
- Since the 2022.1.0 release, images include OpenVINO Runtime (Inference Engine core, nGraph), samples, and Python tools for development: Model Optimizer, Post-Training Optimization Tool, Accuracy Checker, and Open Model Zoo tools (Downloader, Converter). CPU and GPU devices are supported.
The OpenVINO Docker container natively supports inference on the following devices and earlier generations:
- 11th Generation Intel® Core™ Processor Family for Internet of Things (IoT) Applications (formerly codenamed Tiger Lake)
The same images are available in the OpenVINO organization on Red Hat* Quay.io.
Usage:
This Docker image supports CPU-only inference on Windows*/macOS* hosts. If your host machine is Linux-based, inference inside the image is available for both CPU and GPU targets. To access the GPU target, you need to pass the GPU device into the container:
docker run -it --device /dev/dri --rm registry.connect.redhat.com/intel/openvino-dev
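As a quick sanity check, you can list the devices the runtime detects inside the container. This is a minimal sketch; it assumes python3 and the OpenVINO Python package are on the default PATH of the dev image (for older 2022.x tags the import path is openvino.runtime.Core). Add --device /dev/dri on a Linux host to have the GPU listed as well.
docker run -it --rm registry.connect.redhat.com/intel/openvino-dev python3 -c "from openvino import Core; print(Core().available_devices)"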
Licenses:
Copyright © 2018-2022 Intel Corporation
LEGAL NOTICE: Your use of this software and any required dependent software (the "Software Package") is subject to the terms and conditions of the software license agreements for the Software Package, which may also include notices, disclaimers, or license terms for third party or open source software included in or with the Software Package, and your use indicates your acceptance of all such terms. Please refer to the "third-party-programs.txt" or other similarly-named text file included with the Software Package for additional details.
Intel is committed to the respect of human rights and avoiding complicity in human rights abuses, a policy reflected in the Intel Global Human Rights Principles. Accordingly, by accessing the Intel material on this platform you agree that you will not use the material in a product or application that causes or contributes to a violation of an internationally recognized human right.
By downloading and using this container and the included software, you agree to the terms and conditions of the software license agreements located here. Please review the content of the <openvino_install_root>/licensing folder for more details.
As with any pre-built image, it is the image user's responsibility to ensure that any use of this image complies with any relevant licenses and potential fees for all software contained within. We will have no indemnity or warranty coverage from suppliers.
Components:
---
* Other names and brands may be claimed as the property of others.
The following information was extracted from the containerfile and other sources.
Summary | Provides the latest release of Intel(R) Distribution of OpenVINO(TM) toolkit.
Description | The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.
Provider | INTEL CORP
Maintainer | [email protected]
Repository name | rhel8_dev |
Image version | 2025.3.0.19807 |
Architecture | amd64 |
Use the following instructions to get images from a Red Hat container registry using registry service account tokens. You will need to create a registry service account before completing any of the following tasks.
First, you will need to add a reference to the appropriate secret and repository to your Kubernetes pod configuration via an imagePullSecrets field.
Then, use the following from the command line or from the OpenShift Dashboard GUI.
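As a minimal command-line sketch, assuming you have downloaded the service account secret as a YAML file from the registry service account page (the file and secret names below are placeholders), create the pull secret and link it to the service account that your pods reference through imagePullSecrets:
oc create -f <registry-service-account-secret>.yaml
oc secrets link default <registry-service-account-secret-name> --for=pull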
Use the following command(s) from a system with podman installed
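For example, a sketch of the typical token-based login-and-pull sequence; the username and token placeholders come from your registry service account, and the image path matches the usage example above:
podman login -u '<registry-service-account-username>' -p '<registry-service-account-token>' registry.connect.redhat.com
podman pull registry.connect.redhat.com/intel/openvino-dev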
Use the following command(s) from a system with docker service installed and running
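The docker equivalent, again with placeholder credentials:
docker login -u '<registry-service-account-username>' -p '<registry-service-account-token>' registry.connect.redhat.com
docker pull registry.connect.redhat.com/intel/openvino-dev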
Use the following instructions to get images from a Red Hat container registry using your Red Hat login.
As a best practice, registry service account tokens are recommended when pulling content for OpenShift deployments.
Use the following command(s) from a system with podman installed
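For example (podman prompts for your Red Hat username and password; the image path matches the usage example above):
podman login registry.connect.redhat.com
podman pull registry.connect.redhat.com/intel/openvino-dev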
Use the following command(s) from a system with docker service installed and running
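And the docker equivalent:
docker login registry.connect.redhat.com
docker pull registry.connect.redhat.com/intel/openvino-dev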