Published in OpenVINO-toolkit: Running your GenAI App locally on Intel GPU and NPU with OpenVINO™ Model Server. Get the best performance from GenAI models on different Intel hardware accelerators using OpenVINO™ Model Server. (15h ago)
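For readers who want to try the Model Server route right away: once a GenAI model is being served, it can be queried through the server's OpenAI-compatible chat endpoint. A minimal sketch follows; the port, model name, and prompt are assumptions, so substitute whatever your server instance was started with.

```python
# Query a locally running OpenVINO Model Server through its
# OpenAI-compatible API (served under the /v3 path).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v3",  # assumed host/port of the local server
    api_key="unused",                     # a local OVMS does not validate the key
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.2-1B-Instruct",  # assumed served model name
    messages=[{"role": "user", "content": "Hello from OpenVINO Model Server!"}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```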
Published in OpenVINO-toolkit: Deploying the Flux.1 Kontext Model on Intel® Arc™ Pro B60 Graphics GPU. How to use Optimum-Intel, which leverages OpenVINO™ Runtime, to deploy the Flux.1 Kontext dev model on the Intel® Arc™ Pro B60 Graphics GPU. (Sep 10)
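As a rough sketch of that Optimum-Intel route: the snippet below assumes your installed optimum-intel version dispatches FLUX pipelines through OVDiffusionPipeline and that "GPU" resolves to the Arc card on your system; the model id, input image, and prompt are placeholders.

```python
# Load Flux.1 Kontext via Optimum-Intel, converting to OpenVINO IR on the fly,
# then run an image-editing pass on the Intel GPU.
from optimum.intel import OVDiffusionPipeline
from diffusers.utils import load_image

pipe = OVDiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",  # assumed model id
    export=True,                             # convert to OpenVINO IR at load time
)
pipe.to("GPU")  # target the Intel Arc GPU (device string may vary per system)

image = load_image("input.png")  # Kontext is an editing model, so it takes an input image
result = pipe(prompt="make the sky a sunset", image=image).images[0]
result.save("edited.png")
```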
Published in OpenVINO-toolkit: Deploying the Qwen3-Embedding Model Series with Optimum-Intel. This article shows how to use Optimum-Intel to quickly deploy the Qwen3-Embedding series models on Intel platforms. (Sep 10)
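A minimal sketch of that flow, assuming the Qwen3-Embedding checkpoints export cleanly through Optimum-Intel's feature-extraction task; the model id and the last-token pooling step are conventions of the model family, not details taken from this article.

```python
# Export a Qwen3-Embedding model to OpenVINO and compute one embedding.
import torch
from transformers import AutoTokenizer
from optimum.intel import OVModelForFeatureExtraction

model_id = "Qwen/Qwen3-Embedding-0.6B"  # assumed model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForFeatureExtraction.from_pretrained(model_id, export=True)

inputs = tokenizer(["OpenVINO makes local inference easy."], return_tensors="pt")
outputs = model(**inputs)
# Qwen3-Embedding pools the last token's hidden state into the sentence vector.
embedding = torch.nn.functional.normalize(outputs.last_hidden_state[:, -1], dim=-1)
print(embedding.shape)
```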
Published in OpenVINO-toolkit: OpenVINO™ 2025.3: More GenAI, More Possibilities. Discover OpenVINO 2025.3: new models, GenAI pipelines, and model server updates for faster, easier AI deployment on Intel hardware. (Sep 4)
Published in OpenVINO-toolkit: Accelerate LLMs on Intel® GPUs: A Practical Guide to Dynamic Quantization. Optimize Transformer inference on Intel® GPUs with dynamic quantization in OpenVINO™ 2025.2. (Aug 4)
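In practice, dynamic quantization is exposed as a compile-time hint. A hedged sketch, assuming an OpenVINO IR model already on disk and a group size of 32 (both placeholders; the guide itself discusses how to pick the group size):

```python
# Compile a model for the Intel GPU with dynamic quantization enabled
# via the DYNAMIC_QUANTIZATION_GROUP_SIZE execution hint.
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")  # assumed path to the converted IR
compiled = core.compile_model(
    model,
    "GPU",
    {"DYNAMIC_QUANTIZATION_GROUP_SIZE": "32"},  # assumed group size
)
```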
Published in OpenVINO-toolkit: Deploying the Qwen2.5-Omni Multimodal Model Locally Using OpenVINO. Deploy Qwen2.5-Omni for real-time multimodal inference using the OpenVINO™ toolkit. (Jul 24)
Published in OpenVINO-toolkit: How to Deploy Your LangChain Application on Intel® NPU. Optimize LLM workflows with OpenVINO™ and Intel® NPU for efficient LangChain deployment. (Jul 24)
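LangChain already ships an OpenVINO backend in its Hugging Face integration, so the deployment described here can be sketched as below. The model id is a placeholder, and running on "NPU" assumes your driver and model quantization support it; swap in "CPU" or "GPU" to test the same code elsewhere.

```python
# Run a LangChain LLM on Intel hardware through the OpenVINO backend
# of the langchain-huggingface integration.
from langchain_huggingface import HuggingFacePipeline

ov_llm = HuggingFacePipeline.from_model_id(
    model_id="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # assumed model id
    task="text-generation",
    backend="openvino",
    model_kwargs={"device": "NPU"},          # target device; NPU support is model-dependent
    pipeline_kwargs={"max_new_tokens": 64},
)
print(ov_llm.invoke("What is OpenVINO?"))
```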
Published in OpenVINO-toolkit: Transforming Prompts into Storytelling and Design Generation with Qwen, FLUX, and OpenVINO™. Learn how to get the Multimodal AI Visual Generator up and running on your machine. (Jul 16)
Published in OpenVINO-toolkit: Announcing OpenVINO™ 2025.2: New Models, Generative AI Pipelines, and Performance Improvements. OpenVINO 2025.2 provides the foundation to bring AI capabilities to production environments efficiently. (Jun 18)
Published in OpenVINO-toolkit: Efficient Inference of MiniCPM4 Series Models Using the OpenVINO™ Toolkit. How to deploy the MiniCPM 4.0 series models locally using OpenVINO™ GenAI. (Jun 9)
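The OpenVINO™ GenAI route mentioned here comes down to a few lines once the model is converted to OpenVINO IR. A minimal sketch, assuming a MiniCPM4 checkpoint exported with optimum-cli into a local folder (the model id and folder name are assumptions):

```python
# One-off export step (shell), e.g.:
#   optimum-cli export openvino --model openbmb/MiniCPM4-0.5B minicpm4-ov
import openvino_genai

# Load the exported IR folder and pick a device ("CPU" or "GPU").
pipe = openvino_genai.LLMPipeline("minicpm4-ov", "CPU")
print(pipe.generate("What is OpenVINO?", max_new_tokens=100))
```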