We consider a finite-state Discrete-Time Markov Chain (DTMC) source that can be sampled to detect the events in which the DTMC transitions to a new state. Our goal is to study the trade-off between the sampling frequency and the staleness in detecting these events.
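As a point of reference, a minimal formalization of this setup under assumed notation (none of these symbols appear in the abstract): the source is a DTMC $(X_t)_{t \ge 0}$ on a finite state space $\mathcal{S}$ with transition matrix $P$, a sampling policy $\pi$ decides at each slot whether to sample ($a_t = 1$) or stay idle ($a_t = 0$), and the average sampling frequency is

\[
\bar f(\pi) \;=\; \limsup_{T \to \infty} \frac{1}{T}\, \mathbb{E}_\pi\!\Big[\sum_{t=1}^{T} a_t\Big].
\]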
We argue that, for the problem at hand, using Age of Information (AoI) to quantify the staleness of a sample is conservative; we therefore study another freshness metric, the age penalty, defined as the time elapsed since the first transition out of the most recently observed state.
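A sketch of this definition in the notation above, with $\sigma_t$ the time of the most recent sample and $\hat{s}_t = X_{\sigma_t}$ the most recently observed state (again assumed symbols): writing $U_t = \min\{u > \sigma_t : X_u \neq \hat{s}_t\}$ for the time of the first transition out of $\hat{s}_t$, the age penalty and the AoI at time $t$ are

\[
p_t \;=\; \big(t - U_t\big)^{+}, \qquad \Delta_t \;=\; t - \sigma_t,
\]

so that $p_t \le \Delta_t$ always: AoI keeps growing even while the source remains in the observed state, which is why it overstates staleness in this setting.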
study two optimization problems: minimize average age penalty subject
to an average sampling frequency constraint, and minimize average
sampling frequency subject to an average age penalty constraint; both
are Constrained Markov Decision Problems. We solve them using the
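With $\bar p(\pi)$ the long-run average age penalty, defined analogously to $\bar f(\pi)$, and thresholds $f_{\max}$, $p_{\max}$ (assumed names), the two problems can be written as

\[
\text{(P1)}\quad \min_{\pi} \ \bar p(\pi) \ \ \text{s.t.} \ \ \bar f(\pi) \le f_{\max},
\qquad
\text{(P2)}\quad \min_{\pi} \ \bar f(\pi) \ \ \text{s.t.} \ \ \bar p(\pi) \le p_{\max}.
\]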
We solve them using the Lagrangian MDP approach, where we also provide structural results that reduce the search space.
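A standard reading of the Lagrangian MDP approach, sketched here for (P1) since the abstract does not spell out the formulation: a multiplier $\lambda \ge 0$ folds the sampling constraint into the per-slot cost, yielding the unconstrained average-cost MDP

\[
\min_{\pi} \ \limsup_{T \to \infty} \frac{1}{T}\, \mathbb{E}_\pi\!\Big[\sum_{t=1}^{T} \big(p_t + \lambda\, a_t\big)\Big],
\]

with $\lambda$ then tuned (e.g. by bisection) until the resulting policy meets $\bar f(\pi) \le f_{\max}$; standard CMDP results allow a randomization between the two policies bracketing the constraint to attain the constrained optimum.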
Our numerical results demonstrate that the computed Markov policies not only outperform optimal periodic sampling policies, but also achieve sampling frequencies close to, or lower than, that of an optimal clairvoyant (non-causal) sampling policy, provided a small age penalty is allowed.