Distributed Learning for Cooperative Inference

Cesar A. Uribe, University of Illinois


We study the problem of cooperative inference, where a group of agents interacting over a network seeks to estimate a common parameter that explains a set of observations distributed across the network. Agents are agnostic to the network topology and to the observations of other agents. The complex interactions over the network render centralized estimation computationally intractable. We exploit a variational interpretation of the Bayesian posterior, and its relation to the stochastic mirror descent algorithm, to propose a new distributed learning algorithm. The beliefs generated by the proposed algorithm concentrate around the true parameter exponentially fast, and we provide explicit non-asymptotic bounds for this concentration. In particular, we develop explicit and computationally efficient algorithms for observation models belonging to the exponential family.
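To make the setting concrete, the following is a minimal illustrative sketch (not the paper's algorithm itself) of a standard distributed non-Bayesian learning update over a finite hypothesis set: each agent geometrically averages its neighbors' beliefs using a doubly stochastic mixing matrix and then performs a local Bayesian update with its private observation. All numbers (three agents on a cycle, Bernoulli observation models, the mixing weights) are hypothetical choices for the example.

```python
import numpy as np

# Hypothetical setup: 3 agents, 2 hypotheses; the true parameter is theta* = 0.
# Row i gives the probability that agent i observes "1" under each hypothesis.
# Agent 2's row is identical under both hypotheses, so it cannot identify
# theta* alone and must rely on cooperation with its neighbors.
p = np.array([[0.3, 0.7],
              [0.4, 0.8],
              [0.5, 0.5]])
true_theta = 0

# Doubly stochastic mixing matrix for a 3-cycle (lazy Metropolis weights).
A = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

rng = np.random.default_rng(0)
n_agents, n_hyp = p.shape
beliefs = np.full((n_agents, n_hyp), 1.0 / n_hyp)  # uniform priors

for t in range(300):
    x = rng.random(n_agents) < p[:, true_theta]     # private observations
    # Likelihood of each agent's observation under every hypothesis.
    lik = np.where(x[:, None], p, 1.0 - p)
    # Geometric (log-linear) averaging of neighbors' beliefs, followed by
    # a local Bayesian update; done in log space for numerical stability.
    log_b = A @ np.log(beliefs) + np.log(lik)
    log_b -= log_b.max(axis=1, keepdims=True)
    beliefs = np.exp(log_b)
    beliefs /= beliefs.sum(axis=1, keepdims=True)

print(beliefs[:, true_theta])  # each agent's belief on the true parameter
```

Under this update, every agent's belief on the true parameter approaches one, including the agent whose own observations are uninformative, which illustrates the exponential concentration of beliefs described in the abstract.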