Employing the concept of stacking ensemble learning to generate deep dream images using multiple CNN variants

Bibliographic Details
Main Authors: Lafta Alkhazraji, Ayad R. Abbas, Abeer S. Jamil, Zahraa Saddi Kadhim, Wissam Alkhazraji, Sabah Abdulazeez Jebur, Bassam Noori Shaker, Mohammed Abdallazez Mohammed, Mohanad A. Mohammed, Basim Mohammed Al-Araji, Abdulkareem Z. Mohmmed, Wasiq Khan, Bilal Khan, Abir Jaafar Hussain
Format: Article
Language: English
Published: Elsevier 2025-03-01
Series: Intelligent Systems with Applications
Subjects:
Online Access: http://www.sciencedirect.com/science/article/pii/S2667305325000146
Description
Summary: Addiction and adverse effects resulting from schizophrenia are rapidly becoming a global issue, necessitating the development of advanced approaches that can support psychiatrists and psychologists in understanding and replicating the hallucinations and imagery experienced by patients. Such approaches can also be useful for promoting interest in human artwork, particularly surrealist images. Accordingly, in the present study, a stacking ensemble Deep Dream model was developed to aid psychiatrists and psychologists in addressing the challenge of mimicking hallucinations. The dream-like images generated in this study possess an aesthetic quality reminiscent of surrealist art. For model development, five pre-trained Convolutional Neural Network (CNN) architectures (VGG-19, Inception v3, VGG-16, Inception-ResNet-V2, and Xception) were stacked in an ensemble learning approach to create Deep Dream images, whereby the upper hidden layers of the architectures were activated and the models were trained via the Adam optimizer. The performance of the proposed model was evaluated across three octaves to amplify as many patterns and features of the base image as possible. The resulting dream-like images contain shapes that reflect elements of the ImageNet dataset on which the pre-trained models were trained. Each base image was manipulated to generate several dreamed images, each with three octaves, which were then combined to construct the final image and its loss. The final Deep Dream image showed a loss of 47.5821 while still retaining some features of the base image.
ISSN: 2667-3053
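
For readers who want a concrete picture of the technique summarized above, the sketch below illustrates the generic Deep Dream procedure the abstract describes: gradient ascent on the activations of upper hidden layers, repeated over three octaves, using a pre-trained ImageNet backbone. It is a minimal sketch, not the authors' stacked ensemble of five CNNs; the single backbone (InceptionV3), the layer names, the step size, and the octave scale are assumptions, and plain normalized gradient ascent stands in for the Adam optimizer used in the paper.

# Minimal Deep Dream sketch in TensorFlow/Keras (illustrative only, not the
# authors' stacked-ensemble code). Backbone, layer names, and hyperparameters
# are assumed values.
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Pre-trained backbone trained on ImageNet, as referenced in the summary.
base_model = keras.applications.InceptionV3(weights="imagenet", include_top=False)

# "Activate" a few upper hidden layers: the loss maximizes their mean activation.
layer_names = ["mixed8", "mixed9"]  # hypothetical layer choices
outputs = [base_model.get_layer(name).output for name in layer_names]
dream_model = keras.Model(inputs=base_model.input, outputs=outputs)


def compute_loss(image):
    # Deep Dream loss: sum of mean activations over the selected upper layers.
    activations = dream_model(image)
    return tf.add_n([tf.reduce_mean(act) for act in activations])


def gradient_ascent_step(image, step_size):
    # One step of gradient ascent on the input image (the weights stay frozen).
    with tf.GradientTape() as tape:
        tape.watch(image)
        loss = compute_loss(image)
    grads = tape.gradient(loss, image)
    grads /= tf.math.reduce_std(grads) + 1e-8  # normalize the gradient
    image = tf.clip_by_value(image + step_size * grads, -1.0, 1.0)
    return loss, image


def deep_dream(image, steps=50, step_size=0.01, octaves=3, octave_scale=1.3):
    # Run gradient ascent at three successive scales ("octaves") so that
    # patterns of different sizes get amplified, as described in the summary.
    base_shape = tf.cast(tf.shape(image)[1:3], tf.float32)
    loss = tf.constant(0.0)
    for octave in range(octaves):
        new_size = tf.cast(base_shape * (octave_scale ** octave), tf.int32)
        image = tf.image.resize(image, new_size)
        for _ in range(steps):
            loss, image = gradient_ascent_step(image, step_size)
    return loss, image


# Usage with a placeholder base image scaled to [-1, 1] (InceptionV3's input range).
img = np.random.uniform(-1.0, 1.0, (1, 299, 299, 3)).astype("float32")
final_loss, dreamed = deep_dream(tf.constant(img))
print("final loss:", float(final_loss))

Resizing the image upward between octaves lets textures amplified at lower resolutions persist while larger structures are enhanced at higher resolutions, which is the usual motivation for the multi-octave schedule mentioned in the summary.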