9 Profiling Insights


This chapter covers

  • Profiling ONNX-ported models.
  • Transforming raw ONNX profiling data into insights.
  • Optimizing ONNX graphs for LLMs.

Chapter 4 covered the ONNX format and the capabilities of the ONNX Runtime (first in general, then with specific reference to LLMs), while chapter 5 detailed, among other methods, how to perform 8-bit quantization through the ONNX API. This chapter explores further ONNX capabilities, such as profiling LLMs that have been ported to the ONNX format and utilities for extracting useful insights from the raw profiling data.

9.1 Profiling ONNX-ported LLMs

In chapters 4 and 5 we learned that the ONNX Runtime delivers high performance when running machine learning and deep learning models on a wide range of hardware. But additional model optimization techniques and runtime configurations may be needed to improve performance for specific use cases, models, and hardware/devices, depending on the given KPIs for latency, throughput, and memory utilization.

The ONNX Runtime (ORT) allows in-code performance profiling. Profiling is disabled by default, but it can be enabled through the session options as follows:

import onnxruntime as rt
 
# Profiling is off by default; enable it before creating the session
sess_options = rt.SessionOptions()
sess_options.enable_profiling = True

9.2 Transforming raw ONNX profiling data into insights
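Once a profiled session ends (via sess.end_profiling(), which returns the name of the JSON file that was written), the raw data is a list of trace events in Chrome trace format. A minimal sketch of turning such a trace into a per-operator time summary is shown below; the event contents here are synthetic stand-ins for illustration, assuming the usual layout of ORT node events ("cat", "dur" in microseconds, and an "args" dictionary carrying the operator type):

```python
import json
from collections import defaultdict

def summarize_profile(events):
    """Aggregate total reported time (microseconds) per operator type
    from a list of ONNX Runtime profiling events."""
    totals = defaultdict(int)
    for ev in events:
        # Node-level events carry the operator type in args["op_name"];
        # session-level events (e.g. model_run) are skipped here
        if ev.get("cat") == "Node" and "dur" in ev:
            op = ev.get("args", {}).get("op_name", ev.get("name", "unknown"))
            totals[op] += ev["dur"]
    # Sort descending by total time so the hottest ops come first
    return dict(sorted(totals.items(), key=lambda kv: -kv[1]))

# Synthetic events mimicking a trace; with a real file you would use
# events = json.load(open(prof_file)) on the name end_profiling() returned
events = [
    {"cat": "Node", "name": "MatMul_1_kernel_time", "dur": 120,
     "args": {"op_name": "MatMul"}},
    {"cat": "Node", "name": "Add_1_kernel_time", "dur": 30,
     "args": {"op_name": "Add"}},
    {"cat": "Node", "name": "MatMul_2_kernel_time", "dur": 200,
     "args": {"op_name": "MatMul"}},
    {"cat": "Session", "name": "model_run", "dur": 400},
]
print(summarize_profile(events))  # prints {'MatMul': 320, 'Add': 30}
```

Summaries like this make it immediately visible which operator types dominate inference time, which is the natural starting point for the optimizations discussed in the next section.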

9.3 Optimization of ONNX graphs for LLMs

9.4 Summary