arXiv Open Access 2021

Learn to Intervene: An Adaptive Learning Policy for Restless Bandits in Application to Preventive Healthcare

Arpita Biswas Gaurav Aggarwal Pradeep Varakantham Milind Tambe

Abstract

In many public health settings, it is important for patients to adhere to health programs, such as taking medications and periodic health checks. Unfortunately, beneficiaries may gradually disengage from such programs, which is detrimental to their health. A concrete example of gradual disengagement has been observed by an organization that carries out a free automated call-based program for spreading preventive care information among pregnant women. Many women stop picking up calls after being enrolled for a few months. To avoid such disengagements, it is important to provide timely interventions. Such interventions are often expensive and can be provided to only a small fraction of the beneficiaries. We model this scenario as a restless multi-armed bandit (RMAB) problem, where each beneficiary is assumed to transition from one state to another depending on the intervention. Moreover, since the transition probabilities are unknown a priori, we propose a Whittle index based Q-Learning mechanism and show that it converges to the optimal solution. Our method improves over existing learning-based methods for RMABs on multiple benchmarks from the literature and also on the maternal healthcare dataset.
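The core idea of combining Q-learning with Whittle indices can be sketched as follows. This is a minimal illustrative sketch, not the authors' exact algorithm: it assumes tabular per-arm Q-values, a 1/n decaying learning rate, and the common approximation of an arm's Whittle index in state s as the advantage Q(s, active) − Q(s, passive). Function names and the epsilon-greedy selection scheme are hypothetical choices for illustration.

```python
import random

def update_q(Q, n, s, a, r, s_next, discount=0.0):
    """One tabular Q-learning update for a single arm.

    Q maps (state, action) pairs to value estimates; n counts visits,
    giving a decaying learning rate alpha = 1 / n(s, a)."""
    n[(s, a)] = n.get((s, a), 0) + 1
    alpha = 1.0 / n[(s, a)]
    best_next = max(Q.get((s_next, 0), 0.0), Q.get((s_next, 1), 0.0))
    target = r + discount * best_next
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (target - Q.get((s, a), 0.0))

def whittle_estimate(Q, s):
    """Estimated Whittle index of an arm in state s: the advantage of
    intervening (a=1) over staying passive (a=0)."""
    return Q.get((s, 1), 0.0) - Q.get((s, 0), 0.0)

def select_arms(Qs, states, k, eps=0.1, rng=random):
    """Pick k of the arms to intervene on: with probability eps explore
    uniformly, otherwise choose the k arms whose current states have the
    largest estimated Whittle indices."""
    n_arms = len(states)
    if rng.random() < eps:
        return rng.sample(range(n_arms), k)
    ranked = sorted(range(n_arms),
                    key=lambda i: whittle_estimate(Qs[i], states[i]),
                    reverse=True)
    return ranked[:k]
```

In each round, `select_arms` allocates the limited intervention budget k across beneficiaries, the environment returns per-arm rewards and next states, and `update_q` refines each arm's estimates, so index estimates improve as transition dynamics are learned.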


Authors (4)

Arpita Biswas

Gaurav Aggarwal

Pradeep Varakantham

Milind Tambe

Citation Format

Biswas, A., Aggarwal, G., Varakantham, P., & Tambe, M. (2021). Learn to Intervene: An Adaptive Learning Policy for Restless Bandits in Application to Preventive Healthcare. arXiv:2105.07965. https://arxiv.org/abs/2105.07965

Journal Information
Publication Year: 2021
Language: English
Source Database: arXiv
Access: Open Access