Reinforcement Learning-Based Formulations With Hamiltonian-Inspired Loss Functions for Combinatorial Optimization Over Graphs
Quadratic Unconstrained Binary Optimization (QUBO) is a versatile approach for representing a wide range of NP-hard Combinatorial Optimization (CO) problems through binary variables. The transformation of a QUBO to an Ising Hamiltonian is recognized as an effective method for solving key optimization problems with quantum algorithms. Recently, PI-GNN, a generic framework, has been proposed to address CO problems over graphs based on QUBO, using a Hamiltonian loss function to train the underlying GNN architecture. Although PI-GNN is highly scalable, it exhibits a noticeable drop in the number of satisfied constraints as graph density increases. In this paper, we first identify these limitations and empirically present a strategy to improve PI-GNN's performance. Second, we formulate and evaluate two strategies for integrating the QUBO Hamiltonian as a generic loss function in reinforcement learning (RL)-based frameworks. The major contribution of our work lies in understanding the feasibility and quality of a QUBO-based generic reward function in an unsupervised RL setup for addressing graph-based CO problems. Through our empirical evaluation (our implementation is available at https://tinyurl.com/5apnymz7), we observe up to a 44% improvement in the number of satisfied constraints over PI-GNN on a representative Max-Cut problem.
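For context on the loss the abstract refers to: PI-GNN relaxes the binary QUBO variables to soft assignments p in [0, 1]^n produced by a GNN and minimizes the Hamiltonian p^T Q p directly, while the change of variables x_i = (1 + s_i)/2 with s_i in {-1, +1} maps the same QUBO to an Ising Hamiltonian. Below is a minimal NumPy sketch of this loss for Max-Cut, the representative problem in the paper; the function names and the toy graph are illustrative assumptions, not taken from the authors' implementation.

```python
import numpy as np

def maxcut_qubo(edges, n):
    """Build a Max-Cut QUBO matrix Q such that x^T Q x equals
    -(number of cut edges) for any binary assignment x in {0,1}^n."""
    Q = np.zeros((n, n))
    for i, j in edges:
        # Cutting edge (i, j) contributes x_i + x_j - 2*x_i*x_j to the cut
        # size; negate so that minimizing x^T Q x maximizes the cut.
        Q[i, i] -= 1.0
        Q[j, j] -= 1.0
        Q[i, j] += 1.0
        Q[j, i] += 1.0
    return Q

def hamiltonian_loss(Q, p):
    """Relaxed QUBO Hamiltonian p^T Q p over soft assignments p in [0, 1]^n.
    With GNN outputs in place of p, this quantity is differentiable and can
    be minimized by gradient descent, in the style of PI-GNN training."""
    return p @ Q @ p

# Toy usage: a triangle graph, where at most 2 of the 3 edges can be cut.
edges = [(0, 1), (1, 2), (0, 2)]
Q = maxcut_qubo(edges, n=3)
x = np.array([1.0, 0.0, 1.0])  # partition {0, 2} vs {1}: cuts (0,1) and (1,2)
print(hamiltonian_loss(Q, x))  # -2.0, i.e., 2 satisfied (cut) constraints
```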
| Main Authors: | Redwan Ahmed Rizvee, Raheeb Hassan, Md. Mosaddek Khan |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2024-01-01 |
| Series: | IEEE Access |
| Subjects: | Deep reinforcement learning; Graph neural network; Hamiltonian function; Monte Carlo tree search |
| Online Access: | https://ieeexplore.ieee.org/document/10752916/ |
| author | Redwan Ahmed Rizvee; Raheeb Hassan; Md. Mosaddek Khan |
|---|---|
| author_sort | Redwan Ahmed Rizvee |
| collection | DOAJ |
| description | Quadratic Unconstrained Binary Optimization (QUBO) is a versatile approach for representing a wide range of NP-hard Combinatorial Optimization (CO) problems through binary variables. The transformation of a QUBO to an Ising Hamiltonian is recognized as an effective method for solving key optimization problems with quantum algorithms. Recently, PI-GNN, a generic framework, has been proposed to address CO problems over graphs based on QUBO, using a Hamiltonian loss function to train the underlying GNN architecture. Although PI-GNN is highly scalable, it exhibits a noticeable drop in the number of satisfied constraints as graph density increases. In this paper, we first identify these limitations and empirically present a strategy to improve PI-GNN's performance. Second, we formulate and evaluate two strategies for integrating the QUBO Hamiltonian as a generic loss function in reinforcement learning (RL)-based frameworks. The major contribution of our work lies in understanding the feasibility and quality of a QUBO-based generic reward function in an unsupervised RL setup for addressing graph-based CO problems. Through our empirical evaluation (our implementation is available at https://tinyurl.com/5apnymz7), we observe up to a 44% improvement in the number of satisfied constraints over PI-GNN on a representative Max-Cut problem. |
| format | Article |
| id | doaj-art-4a5f6a7a3da242a5b01c397b21975fe7 |
| institution | OA Journals |
| issn | 2169-3536 |
| language | English |
| publishDate | 2024-01-01 |
| publisher | IEEE |
| record_format | Article |
| series | IEEE Access |
| spelling | doaj-art-4a5f6a7a3da242a5b01c397b21975fe7; indexed 2025-08-20T02:01:57Z; eng; IEEE; IEEE Access; ISSN 2169-3536; published 2024-01-01; vol. 12, pp. 171055-171065; DOI 10.1109/ACCESS.2024.3497955; article no. 10752916. Reinforcement Learning-Based Formulations With Hamiltonian-Inspired Loss Functions for Combinatorial Optimization Over Graphs. Redwan Ahmed Rizvee (https://orcid.org/0000-0001-5056-1701), Raheeb Hassan (https://orcid.org/0009-0003-2074-1113), and Md. Mosaddek Khan (https://orcid.org/0000-0002-7871-7111), all with the Department of Computer Science and Engineering, University of Dhaka, Dhaka, Bangladesh. Abstract and subject terms as above; full text: https://ieeexplore.ieee.org/document/10752916/ |
| title | Reinforcement Learning-Based Formulations With Hamiltonian-Inspired Loss Functions for Combinatorial Optimization Over Graphs |
| topic | Deep reinforcement learning; graph neural network; Hamiltonian function; Monte Carlo tree search |
| url | https://ieeexplore.ieee.org/document/10752916/ |