An Efficient Framework for Peer Selection in Dynamic P2P Network Using Q Learning with Fuzzy Linear Programming
This paper proposes a new approach that integrates Q learning into the fuzzy linear programming (FLP) paradigm to improve peer selection in P2P networks. The proposed method uses real-time feedback from Q learning to adjust and update peer-selection policies, while the FLP framework handles imprecise information through fuzzy logic. It pursues multiple objectives, such as increasing throughput, reducing delay, and guaranteeing reliable connections, and this integration addresses network uncertainty, making the network configuration more stable and flexible. During operation, the Q-learning agent observes and records state metrics for each node, including available bandwidth, latency, packet drop rate, and node connectivity. It then acts by choosing optimal peers for each node and updating a Q table defined over these states and actions; a reward derived from the performance indices guides the agent's learning, refining its peer-selection policy over time. The FLP framework supports the Q-learning agent by providing optimized solutions that balance conflicting objectives under uncertainty: fuzzy parameters capture variability in the network metrics, and solving the resulting fuzzy linear program yields guidelines for the agent's decisions. The proposed method is evaluated under different experimental settings using Erdos–Renyi model simulations, where throughput increased by 21% and latency decreased by 40%; computational efficiency also improved notably, with computation times falling by up to five orders of magnitude compared to traditional methods.
| Main Authors: | Mahalingam Anandaraj, Tahani Albalawi, Mohammad Alkhatib |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-04-01 |
| Series: | Journal of Sensor and Actuator Networks |
| Subjects: | Erdos–Renyi model; fuzzy linear programming; Q learning; P2P network; Q table; reinforcement learning |
| Online Access: | https://www.mdpi.com/2224-2708/14/2/38 |
| ISSN: | 2224-2708 |
| DOI: | 10.3390/jsan14020038 |
| Volume/Issue: | 14(2), article 38 |
| Author Affiliations: | Mahalingam Anandaraj: Department of Information Technology, PSNA College of Engineering and Technology, Dindigul 624 622, Tamilnadu, India. Tahani Albalawi, Mohammad Alkhatib: Department of Computer Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia |
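The abstract describes a Q-learning agent that observes link metrics (bandwidth, latency, packet loss), selects peers, and updates a Q table, with an FLP-derived score balancing throughput, delay, and reliability. The sketch below illustrates that loop in minimal form; the metric discretization, the fixed reward weights standing in for the FLP solution, and all function names are assumptions for illustration, not the authors' exact formulation.

```python
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

def discretize(metrics):
    """Bucket continuous link metrics (bandwidth Mbps, latency ms, loss rate)
    into a coarse state tuple; a stand-in for the paper's state metrics."""
    bw, lat, loss = metrics
    return (bw // 20, lat // 50, int(loss * 10))

def fuzzy_reward(metrics, weights=(0.5, 0.3, 0.2)):
    """Weighted score rewarding throughput and penalizing delay and loss.
    The fixed weights stand in for FLP guidance; a real implementation would
    obtain them by solving the fuzzy linear program."""
    bw, lat, loss = metrics
    w_bw, w_lat, w_loss = weights
    return w_bw * (bw / 100) - w_lat * (lat / 500) - w_loss * loss

def select_peer(q_table, state, peers):
    """Epsilon-greedy choice over candidate peers."""
    if random.random() < EPSILON:
        return random.choice(peers)
    return max(peers, key=lambda p: q_table.get((state, p), 0.0))

def update(q_table, state, peer, reward, next_state, peers):
    """Standard Q-learning update on the (state, peer) entry."""
    best_next = max((q_table.get((next_state, p), 0.0) for p in peers), default=0.0)
    old = q_table.get((state, peer), 0.0)
    q_table[(state, peer)] = old + ALPHA * (reward + GAMMA * best_next - old)

random.seed(0)
peers = ["A", "B", "C"]
# Synthetic per-peer link conditions: (bandwidth Mbps, latency ms, loss rate)
link = {"A": (80, 40, 0.01), "B": (30, 200, 0.05), "C": (55, 90, 0.02)}
q = {}
state = discretize((50, 100, 0.02))
for _ in range(500):
    peer = select_peer(q, state, peers)
    metrics = link[peer]
    next_state = discretize(metrics)
    update(q, state, peer, fuzzy_reward(metrics), next_state, peers)
    state = next_state
best = max(peers, key=lambda p: q.get((state, p), 0.0))
print(best)
```

With these synthetic link conditions the agent's Q values come to favor the best-provisioned peer; the paper's full method additionally re-solves the FLP model as network conditions drift, which this toy loop omits.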