Faster and Lighter Model Inference with ONNX Runtime from Cloud to Client | AI Show

Channel 9 - A podcast by Microsoft


ONNX Runtime is a high-performance inference and training engine for machine learning models. This show focuses on ONNX Runtime for model inference. ONNX Runtime has been widely adopted by a variety of Microsoft products, including Bing, Office 365, and Azure Cognitive Services, achieving an average 2.9x inference speedup. Now we are glad to introduce ONNX Runtime quantization and ONNX Runtime Mobile, which further accelerate model inference with even smaller model and runtime sizes (see the code sketch after the links below). ONNX Runtime keeps evolving, not only for cloud-based inference but also for on-device inference.

Jump To:
[01:02] ONNX and ONNX Runtime overview
[02:26] Model operationalization with ONNX Runtime
[04:04] ONNX Runtime adoption
[05:07] ONNX Runtime INT8 quantization for model size reduction and inference speedup
[09:46] Demo of ONNX Runtime INT8 quantization
[16:00] ONNX Runtime Mobile for runtime size reduction

Learn More:
ONNX Runtime
Faster and smaller quantized NLP with Hugging Face and ONNX Runtime
ONNX Runtime for Mobile Platforms
ONNX Runtime Inference on Azure Machine Learning
Create a Free account (Azure)
Deep Learning vs. Machine Learning
Get Started with Machine Learning

Don't miss new episodes, subscribe to the AI Show
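The episode's INT8 quantization segment describes converting a model's FP32 weights to INT8 to shrink the file and speed up inference. Below is a minimal sketch of that workflow using ONNX Runtime's dynamic quantization API and an inference session; the model path "model.onnx" and the input shape are hypothetical placeholders, not anything specific from the episode.

import numpy as np
import onnxruntime as ort
from onnxruntime.quantization import quantize_dynamic, QuantType

# Dynamic quantization: convert FP32 weights to INT8 offline;
# activations are quantized on the fly at inference time.
quantize_dynamic(
    model_input="model.onnx",        # hypothetical path to an FP32 ONNX model
    model_output="model.int8.onnx",  # quantized model, typically ~4x smaller
    weight_type=QuantType.QInt8,
)

# Run the quantized model with an ONNX Runtime inference session.
session = ort.InferenceSession(
    "model.int8.onnx",
    providers=["CPUExecutionProvider"],
)
input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed image-like shape
outputs = session.run(None, {input_name: dummy_input})
print(outputs[0].shape)

Dynamic quantization needs no calibration data, which makes it the simplest starting point; the static (calibrated) variant discussed in the episode's demo can recover more accuracy and speed at the cost of a calibration step.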
