Modified Index Policies for Multi-Armed Bandits with Network-like Markovian Dependencies
Sequential decision-making in dynamic and interconnected environments is a cornerstone of numerous applications, ranging from communication networks and finance to distributed blockchain systems and IoT frameworks. The multi-armed bandit (MAB) problem is a fundamental model in this domain that traditionally assumes independent and identically distributed (i.i.d.) rewards, which limits its effectiveness in capturing the inherent dependencies and state dynamics present in some real-world scenarios. In this paper, we develop a theoretical framework for a modified MAB model in which each arm’s reward is generated by a hidden Markov process. In our model, each arm undergoes Markov state transitions independent of play, resulting in varying reward distributions and heightened uncertainty in reward observations. Each arm can have up to three states. A key challenge arises from the fact that the underlying states governing each arm’s rewards remain hidden at the time of selection. To address this, we adapt traditional index-based policies and develop a modified index approach tailored to accommodate Markovian transitions and enhance selection efficiency for our model. Our proposed Markovian Upper Confidence Bound (MC-UCB) policy achieves logarithmic regret. Comparative analysis with the classical UCB algorithm reveals that MC-UCB consistently achieves approximately a 15% reduction in cumulative regret. This work provides significant theoretical insights and lays a robust foundation for future research aimed at optimizing decision-making processes in complex, networked systems with hidden state dependencies.
| Main Authors: | Abdalaziz Sawwan, Jie Wu |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-01-01 |
| Series: | Network |
| Subjects: | dynamic distributions; learning theory; Markov chain; multi-armed bandit |
| Online Access: | https://www.mdpi.com/2673-8732/5/1/3 |
| _version_ | 1850090949928026112 |
|---|---|
| author | Abdalaziz Sawwan; Jie Wu |
| affiliation | Department of Computer and Information Science, Temple University, Philadelphia, PA 19122, USA (both authors) |
| collection | DOAJ |
| description | Sequential decision-making in dynamic and interconnected environments is a cornerstone of numerous applications, ranging from communication networks and finance to distributed blockchain systems and IoT frameworks. The multi-armed bandit (MAB) problem is a fundamental model in this domain that traditionally assumes independent and identically distributed (i.i.d.) rewards, which limits its effectiveness in capturing the inherent dependencies and state dynamics present in some real-world scenarios. In this paper, we develop a theoretical framework for a modified MAB model in which each arm’s reward is generated by a hidden Markov process. In our model, each arm undergoes Markov state transitions independent of play, resulting in varying reward distributions and heightened uncertainty in reward observations. Each arm can have up to three states. A key challenge arises from the fact that the underlying states governing each arm’s rewards remain hidden at the time of selection. To address this, we adapt traditional index-based policies and develop a modified index approach tailored to accommodate Markovian transitions and enhance selection efficiency for our model. Our proposed Markovian Upper Confidence Bound (MC-UCB) policy achieves logarithmic regret. Comparative analysis with the classical UCB algorithm reveals that MC-UCB consistently achieves approximately a 15% reduction in cumulative regret. This work provides significant theoretical insights and lays a robust foundation for future research aimed at optimizing decision-making processes in complex, networked systems with hidden state dependencies. |
| doi | 10.3390/network5010003 |
| format | Article |
| id | doaj-art-c89ca8b19c6942319e1a06cfc41c21f0 |
| issn | 2673-8732 |
| language | English |
| publishDate | 2025-01-01 |
| publisher | MDPI AG |
| series | Network |
| title | Modified Index Policies for Multi-Armed Bandits with Network-like Markovian Dependencies |
| topic | dynamic distributions; learning theory; Markov chain; multi-armed bandit |
| url | https://www.mdpi.com/2673-8732/5/1/3 |
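The abstract describes arms whose rewards follow hidden Markov chains that transition independently of play, with up to three states per arm, and a UCB-style index policy. The record does not give the paper's exact MC-UCB index, so the sketch below shows only the setting it modifies: classical UCB1 run on Markov-modulated arms. All transition matrices and state-dependent reward means are assumed illustrative values, not taken from the paper.

```python
import math
import random

def simulate_ucb1(horizon=5000, seed=0):
    """Classical UCB1 on arms whose Bernoulli reward means are driven by
    hidden 3-state Markov chains that evolve whether or not the arm is played."""
    rng = random.Random(seed)
    # Two illustrative arms. Each row of P is the transition distribution
    # out of one hidden state; `means` holds the per-state reward probability.
    arms = [
        {"P": [[0.90, 0.05, 0.05], [0.05, 0.90, 0.05], [0.05, 0.05, 0.90]],
         "means": [0.2, 0.5, 0.8], "state": 0},
        {"P": [[0.80, 0.10, 0.10], [0.10, 0.80, 0.10], [0.10, 0.10, 0.80]],
         "means": [0.1, 0.4, 0.6], "state": 0},
    ]
    counts = [0] * len(arms)   # pulls per arm
    sums = [0.0] * len(arms)   # cumulative reward per arm
    total_reward = 0.0
    for t in range(1, horizon + 1):
        # Every arm's hidden state transitions each round, played or not.
        for a in arms:
            a["state"] = rng.choices([0, 1, 2], weights=a["P"][a["state"]])[0]
        if t <= len(arms):
            i = t - 1  # initialization: play each arm once
        else:
            # UCB1 index: empirical mean + sqrt(2 ln t / n_i).
            i = max(range(len(arms)),
                    key=lambda k: sums[k] / counts[k]
                    + math.sqrt(2.0 * math.log(t) / counts[k]))
        r = 1.0 if rng.random() < arms[i]["means"][arms[i]["state"]] else 0.0
        counts[i] += 1
        sums[i] += r
        total_reward += r
    return counts, total_reward
```

Per the abstract, MC-UCB replaces the index in the `else` branch with one adjusted for the Markovian transitions; that modified index would be the piece to consult the full article for.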