Satyen Kale
Proceedings of 27th Conference on Learning Theory (COLT), 2014

We consider the problem of minimizing regret in the setting of advice-efficient multiarmed bandits with expert advice. We give an algorithm for the setting of $$K$$ arms and $$N$$ experts, of which only $$M$$ experts’ advice may be queried and used in each round. The algorithm attains a regret bound of $$\tilde{O}\left(\sqrt{\frac{\min\{K, M\} N}{M} T}\right)$$ after $$T$$ rounds. We also prove that any algorithm for this problem must have expected regret at least $$\tilde{\Omega}\left(\sqrt{\frac{\min\{K, M\} N}{M}T}\right)$$, showing that our upper bound is nearly tight. This resolves the COLT 2013 open problem of Seldin et al.
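As a sanity check on the bound (an illustration added here, not part of the original abstract), substituting the two extreme values of $$M$$ recovers familiar regimes:

$$M = N:\quad \tilde{O}\left(\sqrt{\frac{\min\{K, N\}\, N}{N}\, T}\right) = \tilde{O}\left(\sqrt{\min\{K, N\}\, T}\right),$$

which matches the full-advice (EXP4-style) rate up to logarithmic factors, while

$$M = 1:\quad \tilde{O}\left(\sqrt{\frac{\min\{K, 1\}\, N}{1}\, T}\right) = \tilde{O}\left(\sqrt{N T}\right),$$

so with a single queried expert per round the dependence on the number of arms disappears and the bound scales with the number of experts instead.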