
Regret Bounds for Online Learning for Hierarchical Inference

Files
RegretBoundsforOnlineLearningforHierarchicalInference-fontsembedded.pdf (294.9Kb)
Identifiers
URI: https://hdl.handle.net/20.500.12761/1897
Author(s)
Al Atat, Ghina; Datta, Puranjay; Moharir, Sharayu; Champati, Jaya Prakash
Date
2024-10
Abstract
Hierarchical Inference (HI) has emerged as a promising approach for efficient distributed inference between end devices deployed with small pre-trained Deep Learning (DL) models and edge/cloud servers running large DL models. Under HI, a device uses the local DL model to perform inference on the data samples it collects, and only the data samples on which this inference is likely to be incorrect are offloaded to a remote DL model running on the server. Thus, gauging the likelihood of incorrect local inference is key to implementing HI. A natural approach is to compute a confidence metric for the local DL inference and then use a threshold on this confidence metric to determine whether or not to offload. Recently, the HI online learning problem was studied to learn an optimal threshold for the confidence metric over a sequence of data samples collected over time. However, existing algorithms have computation complexity that grows with the number of rounds and do not exhibit a sub-linear regret bound. In this work, we propose the Hedge-HI algorithm and prove that it has $O\left(T^{2/3}\,\mathbb{E}[N_T]^{1/3}\right)$ regret, where $T$ is the number of rounds and $N_T$ is the number of distinct confidence metric values observed up to round $T$. Further, under a mild assumption, we propose Hedge-HI-Restart, which has an $O\left(T^{2/3}\log(\mathbb{E}[N_T])^{1/3}\right)$ regret bound with high probability and has a much lower computation complexity that grows sub-linearly in the number of rounds. Using runtime measurements on a Raspberry Pi, we demonstrate that Hedge-HI-Restart has a runtime lower by an order of magnitude and achieves cumulative loss close to that of the alternatives.
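The threshold-learning idea in the abstract can be sketched as a standard Hedge (multiplicative-weights) update over a grid of candidate confidence thresholds. This is a minimal illustrative simplification, not the paper's Hedge-HI algorithm: the function name, the fixed threshold grid, and the offload-cost-vs-misclassification loss model are assumptions made here for the sketch (Hedge-HI operates over the confidence values actually observed, which is what drives its complexity analysis).

```python
import math
import random

def hedge_threshold_learner(samples, thresholds, eta=0.5, offload_cost=0.2):
    """Hedge over candidate confidence thresholds (illustrative sketch).

    samples    : iterable of (confidence, local_correct) pairs
    thresholds : candidate thresholds (the "experts")
    eta        : Hedge learning rate
    offload_cost : assumed fixed cost of offloading to the server
    """
    weights = [1.0] * len(thresholds)
    total_loss = 0.0
    for conf, local_correct in samples:
        def loss(th):
            # Below the threshold the sample is offloaded at a fixed cost;
            # otherwise we pay 1 if the local inference was wrong, 0 if right.
            return offload_cost if conf < th else (0.0 if local_correct else 1.0)

        # Play a threshold drawn in proportion to the current weights.
        chosen = random.choices(thresholds, weights=weights)[0]
        total_loss += loss(chosen)

        # Hedge update: exponentially down-weight thresholds that incur loss.
        weights = [w * math.exp(-eta * loss(th))
                   for w, th in zip(weights, thresholds)]
    return total_loss, weights
```

On a stream where low-confidence samples are wrong locally and high-confidence samples are right, the weight mass concentrates on the threshold separating the two, mirroring how an optimal offloading threshold would be learned.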