From Pixels to Insights: Unsupervised Knowledge Graph Generation with Large Language Model


Bibliographic Details
Main Authors: Lei Chen, Zhenyu Chen, Wei Yang, Shi Liu, Yong Li
Format: Article
Language: English
Published: MDPI AG, 2025-04-01
Series: Information
Online Access:https://www.mdpi.com/2078-2489/16/5/335
Description
Summary: The role of image data in knowledge extraction and representation has become increasingly significant. This study introduces a novel methodology, termed Image to Graph via Large Language Model (ImgGraph-LLM), which constructs a knowledge graph for each image in a dataset. Unlike existing methods that rely on text descriptions or multimodal data to build a comprehensive knowledge graph, our approach focuses solely on unlabeled individual image data, representing a distinct form of unsupervised knowledge graph construction. To tackle the challenge of generating a knowledge graph from individual images in an unsupervised manner, we first design two self-supervised operations to generate training data from unlabeled images. We then propose an iterative fine-tuning process that uses this self-supervised information, enabling the fine-tuned LLM to recognize the triplets needed to construct the knowledge graph. To improve the accuracy of triplet extraction, we introduce filtering strategies that effectively remove low-confidence training data. Finally, experiments on two large-scale real-world datasets demonstrate the superiority of our proposed model.
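The abstract's pipeline (self-supervised triplet extraction, confidence-based filtering, iterative fine-tuning) can be sketched in outline. This is a minimal illustrative sketch, not the paper's implementation: the function names, the scoring interface, and the 0.8 confidence threshold are all hypothetical placeholders.

```python
# Illustrative sketch of the ImgGraph-LLM training loop described in the
# abstract. All names and the threshold value are hypothetical; the paper's
# exact filtering criterion and fine-tuning procedure are not specified here.

def filter_triplets(scored_triplets, threshold=0.8):
    """Keep only (subject, relation, object) triplets whose extraction
    confidence meets the threshold -- the low-confidence filtering step."""
    return [t for t, score in scored_triplets if score >= threshold]

def iterative_finetune(extract, finetune, images, rounds=3, threshold=0.8):
    """Alternate between extracting scored triplets from unlabeled images
    (self-supervised labeling) and fine-tuning on the retained subset."""
    model_state = None
    for _ in range(rounds):
        scored = extract(model_state, images)            # self-supervised data
        train_data = filter_triplets(scored, threshold)  # drop low confidence
        model_state = finetune(model_state, train_data)  # refine the LLM
    return model_state
```

The retained high-confidence triplets from the final round would then form the per-image knowledge graph.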
ISSN: 2078-2489