dc.description.abstract | We consider an Internet-based Master-Worker framework for machine-oriented computing tasks (e.g.,
SETI@home) or human intelligence tasks (e.g., Amazon’s Mechanical Turk). In this framework, a master sends
tasks to unreliable workers, who execute them and report back the results. We model such computations using
evolutionary dynamics and consider three types of workers: altruistic, malicious, and rational. Altruistic workers
always return the correct result, malicious workers always return an incorrect result, and rational (selfish) workers
decide whether to be truthful depending on what increases their benefit. The goal of the master is to reach eventual
correctness, that is, a stable state of the system in which it always obtains the correct results. To this end, we
propose a mechanism that uses reinforcement learning to induce correct behavior in rational workers, while coping with
malice by leveraging reputation schemes. We analyze our system as a Markov chain and give provable guarantees
under which truthful behavior can be ensured. Simulation results, obtained using parameter values similar to those
observed in real systems, reveal interesting trade-offs between various metrics and parameters, such as cost,
time of convergence to truthful behavior, tolerance to cheaters, and the type of reputation metric employed. | |