Exploring the Boundaries of On-Device Inference: When Tiny Falls Short, Go Hierarchical
Date: 2025-07

Abstract
On-device inference offers significant benefits in edge ML systems, such as improved energy efficiency, responsiveness, and privacy, compared to traditional centralized approaches. However, the resource constraints of embedded devices limit them to simple inference tasks, creating a trade-off between efficiency and capability. In this context, Hierarchical Inference (HI) has emerged as a promising solution that augments the capabilities of the local ML model by offloading selected samples to an edge server or cloud for remote ML inference. Existing works, primarily based on simulations, demonstrate that HI improves accuracy. However, they fail to account for the latency and energy consumption of real-world deployments, nor do they consider three key heterogeneous components that characterize ML-enabled IoT systems: hardware, network connectivity, and models. To bridge this gap, this paper systematically evaluates HI against standalone on-device inference by analyzing accuracy, latency, and energy trade-offs across five devices and three image classification datasets. Our findings show that, for a given accuracy requirement, the HI approach we designed achieved up to 73% lower latency and up to 77% lower device energy consumption than an on-device inference system. Despite these gains, HI incurs a fixed latency and energy overhead because every sample is first processed by the on-device model. To address this, we propose a hybrid system called Early Exit with HI (EE-HI) and demonstrate that, compared to HI, EE-HI reduces latency by up to 59.7% and lowers the device's energy consumption by up to 60.4%. These findings demonstrate the potential of HI and EE-HI to enable more efficient ML in IoT systems.
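As a rough illustration of the idea only (not the authors' implementation), the sketch below shows one common way such a pipeline can be structured: a cheap early-exit head handles easy samples, the full on-device model handles the rest locally when it is confident, and remaining samples are offloaded to a remote model. The confidence-threshold decision rule, function names, and threshold values are illustrative assumptions, since the abstract does not specify the offloading criterion.

```python
# Hypothetical sketch of Hierarchical Inference (HI) with an early-exit branch (EE-HI).
# The threshold-based offloading rule and all names/values here are assumptions for
# illustration, not the paper's actual method.

from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class Prediction:
    label: int
    confidence: float  # e.g., top-class softmax probability
    source: str        # "early-exit", "on-device", or "remote"


def ee_hi_classify(
    sample: Sequence[float],
    early_exit_model: Callable[[Sequence[float]], Prediction],
    on_device_model: Callable[[Sequence[float]], Prediction],
    remote_model: Callable[[Sequence[float]], Prediction],
    exit_threshold: float = 0.95,
    offload_threshold: float = 0.80,
) -> Prediction:
    """Classify one sample, escalating to costlier stages only when needed."""
    # EE-HI: a lightweight early-exit head may settle easy samples before the
    # full on-device model runs, avoiding its fixed latency/energy overhead.
    early = early_exit_model(sample)
    if early.confidence >= exit_threshold:
        return early

    # HI: run the full on-device model and accept the result if it is confident.
    local = on_device_model(sample)
    if local.confidence >= offload_threshold:
        return local

    # Otherwise offload the sample to the edge server / cloud model.
    return remote_model(sample)
```

Under this (assumed) design, the two thresholds control the trade-off the abstract describes: a lower offload threshold keeps more samples on-device (cheaper but less accurate), while a higher one sends more samples to the remote model (more accurate but with added latency, energy, and network cost).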