
Exploring the Boundaries of On-Device Inference: When Tiny Falls Short, Go Hierarchical

Files
Exploring_the_Boundaries_of_On_Device_Inference__When_Tiny_Falls_Short__Go_Hierarchical (1).pdf (5.087 MB)
Identifiers
URI: https://hdl.handle.net/20.500.12761/1958
DOI: 10.1109/JIOT.2025.3583477
Metadata
Author(s)
Behera, Adarsh Prasad; Daubaris, Paulius; Bravo Aramburu, Iñaki; Gallego Delgado, José; Morabito, Roberto; Widmer, Joerg; Champati, Jaya Prakash
Date
2025-07
Abstract
On-device inference offers significant benefits in edge ML systems, such as improved energy efficiency, responsiveness, and privacy, compared to traditional centralized approaches. However, the resource constraints of embedded devices limit their use to simple inference tasks, creating a trade-off between efficiency and capability. In this context, the Hierarchical Inference (HI) system has emerged as a promising solution that augments the capabilities of the local ML model by offloading selected samples to an edge server/cloud for remote ML inference. Existing works, primarily based on simulations, demonstrate that HI improves accuracy. However, they fail to account for the latency and energy consumption of real-world deployments, nor do they consider three key heterogeneous components that characterize ML-enabled IoT systems: hardware, network connectivity, and models. To bridge this gap, this paper systematically evaluates HI against standalone on-device inference by analyzing accuracy, latency, and energy trade-offs across five devices and three image classification datasets. Our findings show that, for a given accuracy requirement, the HI approach we designed achieved up to 73% lower latency and up to 77% lower device energy consumption than an on-device inference system. Despite these gains, HI introduces a fixed energy and latency overhead from on-device inference for all samples. To address this, we propose a hybrid system called Early Exit with HI (EE-HI) and demonstrate that, compared to HI, EE-HI reduces latency by up to 59.7% and lowers the device's energy consumption by up to 60.4%. These findings demonstrate the potential of HI and EE-HI to enable more efficient ML in IoT systems.
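
The abstract describes HI and EE-HI only at a high level. As a rough illustration of the two mechanisms, the Python sketch below shows one plausible confidence-threshold formulation; it is an assumption for clarity, not the authors' implementation, and all names (local_model, remote_infer, backbone_blocks, exit_heads, CONF_THRESHOLD) are hypothetical.

    import numpy as np

    # Hypothetical confidence threshold; the paper tunes the offloading
    # decision per accuracy requirement, so treat this as a placeholder knob.
    CONF_THRESHOLD = 0.8

    def hierarchical_inference(sample, local_model, remote_infer):
        # Plain HI: every sample pays the full on-device forward pass,
        # then low-confidence samples are offloaded to the edge/cloud model.
        probs = local_model(sample)
        if float(np.max(probs)) >= CONF_THRESHOLD:
            return int(np.argmax(probs))   # accept the local prediction
        return remote_infer(sample)        # offload for remote inference

    def ee_hi_inference(sample, backbone_blocks, exit_heads, remote_infer):
        # EE-HI: early-exit heads let easy samples terminate mid-network,
        # avoiding the fixed on-device cost that HI incurs for all samples.
        x = sample
        for block, head in zip(backbone_blocks, exit_heads):
            x = block(x)
            probs = head(x)
            if float(np.max(probs)) >= CONF_THRESHOLD:
                return int(np.argmax(probs))  # confident early exit on-device
        return remote_infer(sample)           # still uncertain: offload

The design point highlighted in the abstract is visible in this sketch: plain HI always runs the complete local model before deciding whether to offload, whereas EE-HI can stop at an intermediate exit head, which is where the reported latency and energy savings over HI would come from.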
