Event

PhD defence of Nima Akbarzadeh - Indexability, Planning, and Learning in Restless Bandits

Thursday, July 14, 2022, 10:00 to 12:00
McConnell Engineering Building Room 603, 3480 rue University, Montreal, QC, H3A 0E9, CA

Abstract

Restless bandits are a class of sequential resource allocation problems in which one or more resources must be allocated among several alternative processes, where each process evolves in a Markovian manner that depends on the resources allocated to it. In 1988, Whittle developed an index heuristic for restless bandit problems, which has emerged as a popular solution approach due to its simplicity and strong empirical performance. The Whittle index heuristic is applicable if the model satisfies a technical condition known as indexability.
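As a rough illustration of how the index heuristic is typically applied (the function and variable names below are illustrative, not from the thesis): each arm is assigned an index computed from its own model and current state, and the heuristic activates the arms with the largest indices. A minimal sketch in Python:

    import numpy as np

    def whittle_policy(indices, m):
        """Whittle index heuristic: given the current index of every arm,
        activate the m arms whose indices are largest (ties broken arbitrarily).

        indices: length-n array, indices[i] is the index of arm i at its
                 current state.
        m:       number of arms that may be activated this time step."""
        indices = np.asarray(indices, dtype=float)
        order = np.argsort(indices)               # ascending order of indices
        return set(order[len(indices) - m:])      # top-m arms by index

    # Example: with indices [0.2, 1.5, -0.3, 0.9] and m = 2,
    # the heuristic activates arms 1 and 3.
    print(whittle_policy([0.2, 1.5, -0.3, 0.9], m=2))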

In this thesis, we focus on three general setups: fully observable, partially observable, and learning models. For the fully observable setup, we present two general sufficient conditions for indexability and identify simpler-to-verify refinements of these conditions. We then revisit a previously proposed algorithm, the adaptive greedy algorithm, which is known to compute the Whittle index for a subclass of restless bandits. We show that a generalization of the adaptive greedy algorithm computes the Whittle index for all indexable restless bandits, and we present an efficient implementation that computes the Whittle index of an arm with K states in O(K^3) computations. Finally, we present a detailed numerical study which affirms the strong performance of the Whittle index heuristic.
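The adaptive greedy algorithm and its generalization are developed in the thesis itself. As a generic illustration of what "computing the Whittle index of a state" means, the sketch below estimates the index of one state of a single known arm by bisection on the passivity subsidy, solving the subsidized single-arm problem by value iteration; the discount factor, the subsidy bracket, and all names are assumptions made only for this example.

    import numpy as np

    def whittle_index(P_active, P_passive, r_active, r_passive, state,
                      discount=0.95, tol=1e-4):
        """Estimate the Whittle index of `state` for a single indexable arm
        by bisection on the passivity subsidy (an illustrative method,
        not the adaptive greedy algorithm of the thesis)."""
        P_active = np.asarray(P_active, dtype=float)
        P_passive = np.asarray(P_passive, dtype=float)
        r_active = np.asarray(r_active, dtype=float)
        r_passive = np.asarray(r_passive, dtype=float)
        K = len(r_active)

        def active_minus_passive(lam):
            # Value iteration for the subsidized single-arm problem:
            # the passive action earns an extra subsidy lam per step.
            V = np.zeros(K)
            for _ in range(5000):
                Q_active = r_active + discount * (P_active @ V)
                Q_passive = r_passive + lam + discount * (P_passive @ V)
                V_new = np.maximum(Q_active, Q_passive)
                if np.max(np.abs(V_new - V)) < 1e-9:
                    V = V_new
                    break
                V = V_new
            Q_active = r_active + discount * (P_active @ V)
            Q_passive = r_passive + lam + discount * (P_passive @ V)
            return Q_active[state] - Q_passive[state]

        lo, hi = -100.0, 100.0        # assumed bracket containing the index
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if active_minus_passive(mid) > 0:
                lo = mid              # active still preferred: index is larger
            else:
                hi = mid
        return 0.5 * (lo + hi)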

For the partially observable setup, we consider restart bandits with two observational models: in the first, the state of each bandit is not observable at all; in the second, the state of each bandit is observable only when it is chosen. For both models, we show that the system is indexable. For the first model, we derive a closed-form expression for the Whittle index. For the second model, we propose an efficient algorithm to compute the Whittle index by exploiting qualitative properties of the optimal policy. We present detailed numerical experiments which indicate that the Whittle index policy outperforms the myopic policy and can be close to optimal in different setups.
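The exact restart dynamics and the index computation are specified in the thesis; the sketch below only illustrates the belief bookkeeping implied by the second observational model, assuming per-action transition matrices P_passive and P_active (illustrative names): a chosen arm reveals its state, while an unchosen arm's belief is propagated through its passive dynamics.

    import numpy as np

    def next_belief(belief, P_passive, P_active, chosen, observed_state=None):
        """Predictive belief over an arm's next state when the state is
        observed only if the arm is chosen (a generic sketch, not the
        thesis's model specification)."""
        belief = np.asarray(belief, dtype=float)
        if chosen:
            # The observation collapses the belief onto the revealed state,
            # which then evolves under the active-action dynamics.
            return np.asarray(P_active, dtype=float)[observed_state].copy()
        # Unchosen arms stay hidden; the belief is propagated through
        # the passive transition matrix.
        return belief @ np.asarray(P_passive, dtype=float)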

For the learning setup, we assume the true system dynamics are unknown and present a Thompson sampling reinforcement learning (RL) algorithm whose regret scales polynomially with the number of arms. This contrasts with most RL algorithms, where the cumulative regret scales exponentially with the number of arms. We present two characterizations of the regret of the proposed algorithm with respect to the Whittle index policy. Depending on the reward model and technical assumptions, we show that for a restless bandit with n arms and at most m activations at each time, the regret scales as O(nm T^{1/2}), O(n^2 T^{1/2}), or O(n^{1.5} T^{1/2}). We present numerical examples to illustrate the salient features of the algorithm.
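The episode structure, priors, and regret analysis belong to the thesis; as a generic, hedged sketch of the Thompson sampling ingredient, the code below draws one plausible model of an arm from Dirichlet posteriors over its transition rows (a uniform prior is an assumption made for the example). In the overall loop, one would compute Whittle indices under the sampled models, play the resulting index policy for a while, and update the counts with the observed transitions.

    import numpy as np

    def sample_arm_model(counts, n_states, rng):
        """Draw one model of a restless-bandit arm from its posterior.

        counts: (2, K, K) array of observed transition counts, indexed by
                action (0 = passive, 1 = active), current state, next state.
        Returns sampled transition matrices drawn from per-row
        Dirichlet(1 + counts) posteriors."""
        sampled = np.empty_like(counts, dtype=float)
        for a in range(2):
            for s in range(n_states):
                sampled[a, s] = rng.dirichlet(1.0 + counts[a, s])
        return sampled

    # Illustrative usage: an arm with 4 states and no data yet.
    rng = np.random.default_rng(0)
    counts = np.zeros((2, 4, 4))
    model = sample_arm_model(counts, n_states=4, rng=rng)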
