Learning Face Pareidolia via Global Feature Transfer


Bibliographic Details
Main Authors: Usfita Kiftiyani, Seungkyu Lee
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/11071290/
Description
Summary: Convolutional Neural Networks learn features at different levels of detail across their layers, progressively extracting representations from low-level aspects such as edges to high-level semantic concepts. In this work, we investigate which feature representations are most effective for pareidolic face detection. We demonstrate that high-level human face representations best enhance both pareidolic face classification and localization. In contrast, animal face transfers fail to provide significant improvements over a no-transfer model, because species-specific features do not match the abstract patterns of pareidolia. Class activation maps of the human-face feature-transferred models and of the last-layer-transferred animal feature model reveal that these models focus on the precise locations of face-like patterns, whereas the no-transfer model fails to do so. Our results yield two main findings. First, effective transfer requires alignment between source and target feature abstractions. Second, human face topology provides ideal priors for face pareidolia. These findings offer valuable insights for challenging visual recognition tasks using cross-domain feature transfer methodologies.
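The class activation maps mentioned in the summary can be illustrated with a minimal sketch: for a CNN with global average pooling, the activation map for a class is a weighted sum of the last convolutional layer's feature maps, using that class's classifier weights. The code below is an illustrative NumPy sketch under these assumptions, not the authors' implementation; the shapes and names (`feats`, `w`) are hypothetical.

```python
import numpy as np

def class_activation_map(feats: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Compute a CAM as the channel-weighted sum of feature maps.

    feats: last-conv-layer feature maps, shape (C, H, W)
    w:     classifier weights for one class, shape (C,)
    Returns an (H, W) map normalized to [0, 1].
    """
    cam = np.tensordot(w, feats, axes=([0], [0]))  # sum over channels -> (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy example: one channel strongly activated at a single "face-like" location.
feats = np.zeros((2, 4, 4))
feats[0, 1, 2] = 5.0          # hypothetical face-pattern channel
feats[1] = 0.1                # uniform background channel
w = np.array([1.0, 0.5])      # hypothetical classifier weights for the class
cam = class_activation_map(feats, w)
print(np.unravel_index(cam.argmax(), cam.shape))  # → (1, 2)
```

A model that localizes well, as the transferred models are reported to do, would place the CAM peak on the face-like region rather than spreading activation across the image.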
ISSN: 2169-3536