Author : Stefano Tracà
Total Pages : 173 pages
Book Synopsis: Regulating Exploration in Multi-armed Bandit Problems with Time Patterns and Dying Arms, by Stefano Tracà
Download or read book Regulating Exploration in Multi-armed Bandit Problems with Time Patterns and Dying Arms, written by Stefano Tracà. This book was released in 2018 with a total of 173 pages. Available in PDF, EPUB and Kindle.

Book excerpt: In retail, there are predictable yet dramatic time-dependent patterns in customer behavior, such as periodic changes in the number of visitors or surges in customers just before major holidays. The standard multi-armed bandit paradigm does not take these known patterns into account, so for retail applications, where prices are fixed for periods of time, current bandit algorithms will not suffice. This work provides a framework and methods that account for time-dependent patterns. In the corrected methods, exploitation (greed) is regulated over time, so that more exploitation occurs during high-reward periods and more exploration occurs during low-reward periods. To explain why the corrected methods reduce regret, a set of bounds on the expected regret is presented, along with insights into why exploiting during periods of high reward is desirable.

When the set of available options changes over time, mortal bandit algorithms have proven extremely useful in a number of settings, for example in news article recommendation or automated online advertising campaigns. Previous work on this problem showed how to regulate exploration of new arms when they have recently appeared, but did not adapt when arms are about to disappear. Since in most applications we can determine, either exactly or approximately, when arms will disappear, we can leverage this information to improve performance: we should not explore arms that are about to disappear. For this framework too, adaptations of algorithms and regret bounds are provided.
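The regulated-greed idea can be illustrated with a minimal sketch: a UCB1-style rule whose exploration bonus shrinks during periods that a known time pattern marks as high-reward, so pulls in those periods are mostly exploitation. The function names, the `reward_multiplier` pattern, and the `1/m` scaling rule below are illustrative assumptions, not the thesis's exact algorithm.

```python
import math
import random

def regulated_ucb(n_rounds, arm_means, reward_multiplier):
    """UCB1 variant with a time-regulated exploration bonus (sketch).

    arm_means: true base mean reward of each arm (used only to simulate pulls).
    reward_multiplier(t): the known time-dependent pattern, e.g. 2.0 during
    a holiday period and 1.0 otherwise (hypothetical form of the pattern).
    """
    n_arms = len(arm_means)
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    total_reward = 0.0
    for t in range(1, n_rounds + 1):
        m = reward_multiplier(t)
        scale = 1.0 / m  # high-reward period -> smaller bonus -> more greed
        scores = []
        for i in range(n_arms):
            if counts[i] == 0:
                scores.append(float("inf"))  # pull each arm at least once
            else:
                bonus = scale * math.sqrt(2 * math.log(t) / counts[i])
                scores.append(sums[i] / counts[i] + bonus)
        i = max(range(n_arms), key=lambda j: scores[j])
        reward = m * arm_means[i] + random.gauss(0, 0.1)  # simulated pull
        counts[i] += 1
        sums[i] += reward / m  # estimate the base mean, pattern factored out
        total_reward += reward
    return total_reward, counts
```

For example, `regulated_ucb(2000, [0.2, 0.5, 0.9], lambda t: 2.0 if t % 100 < 20 else 1.0)` simulates a periodic high-reward window every 100 rounds; the best arm should accumulate the large majority of pulls.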
The proposed methods perform well in experiments, and were inspired by a high-scoring entry in the Exploration and Exploitation 3 contest using data from Yahoo! Front Page. That entry relied heavily on time-series methods to regulate greed over time, which proved substantially more effective than other contextual bandit methods.
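The dying-arms adaptation described in the synopsis, not exploring arms that are about to disappear, can be sketched as a selection rule that withholds the exploration bonus from any arm whose known removal time is within a guard window. The `horizon_guard` parameter and the all-or-nothing bonus cutoff are illustrative assumptions, not the exact adaptation from the thesis.

```python
import math

def select_arm(t, counts, means, death_times, horizon_guard=50):
    """Pick an arm UCB-style, but give no exploration bonus to arms that
    will disappear within `horizon_guard` rounds (sketch of the idea:
    do not spend exploration on arms about to die).

    counts: number of pulls so far per arm.
    means: current estimated mean reward per arm.
    death_times: known (or approximate) round at which each arm disappears.
    """
    best, best_score = None, -float("inf")
    for i in range(len(means)):
        if death_times[i] <= t:
            continue  # arm has already disappeared
        long_lived = death_times[i] - t > horizon_guard
        if counts[i] == 0:
            # Force exploration only for arms that will stay around.
            bonus = float("inf") if long_lived else 0.0
        elif long_lived:
            bonus = math.sqrt(2 * math.log(max(t, 2)) / counts[i])
        else:
            bonus = 0.0  # arm dying soon: exploit only
        score = means[i] + bonus
        if score > best_score:
            best, best_score = i, score
    return best
```

With this rule, an unexplored arm that dies in a few rounds is skipped in favor of either a long-lived unexplored arm or the best-known survivor, whereas standard mortal-bandit rules would still explore it.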