Comprehensive review of dimensionality reduction algorithms: challenges, limitations, and innovative solutions

Bibliographic Details
Main Author: Aasim Ayaz Wani
Format: Article
Language: English
Published: PeerJ Inc., 2025-07-01
Series: PeerJ Computer Science
Online Access: https://peerj.com/articles/cs-3025.pdf
Description
Summary: Dimensionality reduction (DR) simplifies complex data from genomics, imaging, sensors, and language into interpretable forms that support visualization, clustering, and modeling. Yet widely used methods like principal component analysis, t-distributed stochastic neighbor embedding, uniform manifold approximation and projection, and autoencoders are often applied as “black boxes,” neglecting interpretability, fairness, stability, and privacy. This review introduces a unified classification (linear, nonlinear, hybrid, and ensemble approaches) and assesses these methods against eight core challenges: dimensionality selection, overfitting, instability, noise sensitivity, bias, scalability, privacy risks, and ethical compliance. We outline solutions such as intrinsic dimensionality estimation, robust neighborhood graphs, fairness-aware embeddings, scalable algorithms, and automated tuning. Drawing on case studies from bioinformatics, vision, language, and Internet of Things analytics, we offer a practical roadmap for deploying dimensionality reduction methods that are scalable, interpretable, and ethically sound, advancing responsible artificial intelligence in high-stakes applications.
ISSN: 2376-5992
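
Among the solutions the summary names is intrinsic dimensionality estimation. As a minimal illustrative sketch (not drawn from the article itself, with arbitrary synthetic data and a 95% variance threshold chosen for the example), scikit-learn's PCA can pick the target dimensionality from a cumulative explained-variance threshold instead of a fixed guess:

# Hypothetical sketch: estimate how many dimensions to keep via explained variance.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Synthetic data: 5 informative latent directions embedded in 50 noisy features.
latent = rng.normal(size=(500, 5))
mixing = rng.normal(size=(5, 50))
X = latent @ mixing + 0.1 * rng.normal(size=(500, 50))

# Passing a float in (0, 1) as n_components tells PCA to keep the smallest
# number of components whose cumulative explained variance reaches that level.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)
print(f"Selected {pca.n_components_} of 50 dimensions "
      f"({pca.explained_variance_ratio_.sum():.1%} variance retained)")

On data like the above, this recovers roughly the five informative directions; the same thresholding idea underlies many of the dimensionality-selection heuristics the review surveys.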