PanoSCU: A Simulation-Based Dataset for Panoramic Indoor Scene Understanding
Panoramic images offer a comprehensive spatial view that is crucial for indoor robotics tasks such as visual room rearrangement, where an agent must restore objects to their original positions or states. Unlike existing 2D scene change understanding datasets, which rely on single-view images, panoramic views capture richer spatial context, object relationships, and occlusions, making them better suited for embodied artificial intelligence (AI) applications. To address this gap, we introduce Panoramic Scene Change Understanding (PanoSCU), a dataset designed to advance the visual object rearrangement task. The dataset comprises 5,300 panoramas generated in an embodied simulator and covers 48 common indoor object classes. PanoSCU supports eight research tasks: single-view and panoramic detection, single-view and panoramic segmentation, single-view and panoramic change understanding, embodied object tracking, and change reversal. We also present PanoStitch, a training-free method for automatic panoramic data collection within embodied environments. We evaluate state-of-the-art methods on the panoramic segmentation and change understanding tasks and find that existing methods, which are not designed for panoramic inputs, struggle with the varying object ratios and sizes that arise from the unique challenges of visual object rearrangement. These findings reveal current limitations and underscore PanoSCU's potential to drive progress toward models capable of robust panoramic reasoning and fine-grained scene change understanding.
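The abstract notes that the panoramas are collected by an embodied agent inside a simulator via image stitching (the training-free PanoStitch method). As a rough illustration of that idea only, the sketch below concatenates egocentric frames captured while an agent rotates in place; it is not the authors' PanoStitch implementation, and the simulator call `get_rgb(yaw_deg)`, the dummy simulator class, and all parameter choices are hypothetical stand-ins.

```python
# Minimal illustrative sketch (NOT the authors' PanoStitch method): build a
# panorama-like strip by concatenating egocentric RGB frames captured while an
# embodied agent rotates in place. Assumes a hypothetical simulator API
# `sim.get_rgb(yaw_deg)` returning an H x W x 3 uint8 frame for a given heading;
# real simulators expose similar but differently named calls.

import numpy as np


def collect_panorama(sim, num_views: int = 8, hfov_deg: float = 90.0) -> np.ndarray:
    """Rotate the agent through 360 degrees and stitch the views side by side.

    With num_views * step = 360 and step <= hfov_deg, adjacent frames overlap;
    here each frame is simply cropped to its central `step`-degree slice, which
    avoids blending but ignores lens distortion (acceptable for pinhole renders).
    """
    step = 360.0 / num_views
    slices = []
    for i in range(num_views):
        frame = sim.get_rgb(yaw_deg=i * step)            # H x W x 3, uint8
        h, w, _ = frame.shape
        keep = int(round(w * step / hfov_deg))           # central columns spanning `step` degrees
        start = (w - keep) // 2
        slices.append(frame[:, start:start + keep])
    return np.concatenate(slices, axis=1)                # H x (num_views * keep) x 3


class _DummySim:
    """Stand-in simulator that renders flat-colored frames for testing."""

    def get_rgb(self, yaw_deg: float) -> np.ndarray:
        shade = int(yaw_deg / 360.0 * 255)
        return np.full((256, 256, 3), shade, dtype=np.uint8)


if __name__ == "__main__":
    pano = collect_panorama(_DummySim())
    print(pano.shape)  # (256, 1024, 3) for 8 views of width 256 at 90-degree HFOV
```

A real collection pipeline would additionally handle overlap blending, exposure differences, and projection to a true equirectangular format, none of which are modeled in this sketch.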
| Main Authors: | Mariia Khan, Yue Qiu, Yuren Cong, Jumana Abu-Khalaf, David Suter, Bodo Rosenhahn |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access (vol. 13, pp. 72456-72476) |
| ISSN: | 2169-3536 |
| DOI: | 10.1109/ACCESS.2025.3561055 |
| Subjects: | Object segmentation; change detection algorithms; embodied artificial intelligence; image stitching |
| Online Access: | https://ieeexplore.ieee.org/document/10965672/ |
| Author affiliations: | |
|---|---|
| Mariia Khan (ORCID: 0000-0001-6662-4607) | School of Science, Centre for AI and Machine Learning, Edith Cowan University, Joondalup, WA, Australia |
| Yue Qiu | Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology, Chiyoda, Japan |
| Yuren Cong | Institute for Information Processing, Leibniz University Hannover, Hannover, Germany |
| Jumana Abu-Khalaf (ORCID: 0000-0002-6651-2880) | School of Science, Centre for AI and Machine Learning, Edith Cowan University, Joondalup, WA, Australia |
| David Suter (ORCID: 0000-0001-6306-3023) | School of Science, Centre for AI and Machine Learning, Edith Cowan University, Joondalup, WA, Australia |
| Bodo Rosenhahn (ORCID: 0000-0003-3861-1424) | Institute for Information Processing, Leibniz University Hannover, Hannover, Germany |