Event-based Vision: Understanding Network Traffic Characteristics
Date: 2020-08-31

Abstract
Event-based vision fosters a new way of sensing reality. Event-based cameras work radically differently from legacy frame-based cameras: they continuously measure brightness changes at per-pixel granularity (i.e., events) rather than capturing snapshots of intensity measurements (i.e., frames). Event-based cameras are used in robotics and augmented- and virtual-reality applications thanks to their low latency, high temporal resolution, and high dynamic range. For example, they greatly improve unmanned aerial vehicle (UAV) navigation and collision avoidance. While event-based vision is currently restricted to local devices, applications involving distributed systems, such as the coordination of swarms of UAVs or robots, will gain momentum in the near future. However, the network traffic characteristics of event-based vision systems are largely unexplored. In this paper, we aim to fill this gap by providing the first study of the network traffic generated by event-based cameras. To this end, we employ publicly available data sets and experimentally study properties such as the impact of packet/event losses on typical computer vision operations like tracking, and the implications of medium access under contention. We find that complex scenes, which incur a high event generation rate, are more robust against packet loss due to transmission errors or wireless contention. Conversely, packet loss or delay is more harmful to tracking and visualization operations when the event generation rate is low.
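To make the events-versus-frames distinction concrete, the following is a minimal sketch of the idealized event-generation model commonly used to describe such sensors: a pixel emits an event `(t, x, y, polarity)` whenever its log-brightness has changed by more than a contrast threshold since the pixel last fired. This is an illustrative simulation, not the paper's code; the function name, the threshold value, and the use of discrete input frames are assumptions for the example.

```python
import numpy as np

def generate_events(frames, timestamps, threshold=0.2):
    """Emit (t, x, y, polarity) events where the per-pixel log-brightness
    change since the pixel last fired exceeds +/- threshold.

    Idealized event-camera model: a real sensor works asynchronously in
    continuous time; here we approximate it by comparing discrete frames.
    """
    # Reference log-brightness per pixel, initialized from the first frame.
    log_ref = np.log(frames[0].astype(np.float64) + 1e-6)
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_f = np.log(frame.astype(np.float64) + 1e-6)
        diff = log_f - log_ref
        # Pixels whose log-brightness change crossed the contrast threshold.
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for x, y in zip(xs, ys):
            polarity = 1 if diff[y, x] > 0 else -1
            events.append((t, int(x), int(y), polarity))
            # Reset the reference only for pixels that fired.
            log_ref[y, x] = log_f[y, x]
    return events
```

With two 2x2 frames in which a single pixel doubles in brightness, only that pixel produces an event (positive polarity), illustrating why scene complexity, rather than frame rate, drives the event (and hence traffic) generation rate that the study examines.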