Residual Vision Transformer and Adaptive Fusion Autoencoders for Monocular Depth Estimation
Precise depth estimation plays a key role in many applications, including 3D scene reconstruction, virtual reality, autonomous driving, and human–computer interaction. Through recent advancements in deep learning, monocular depth estimation, with its simplicity, has surpassed the tradi...
| Main Authors: | Wei-Jong Yang, Chih-Chen Wu, Jar-Ferr Yang |
| --- | --- |
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2024-12-01 |
| Series: | Sensors |
| Online Access: | https://www.mdpi.com/1424-8220/25/1/80 |
Similar Items
- Recognition and localization of ratoon rice rolled stubble rows based on monocular vision and model fusion
  by: Yuanrui Li, et al. Published: (2025-01-01)
- Breaking New Ground in Monocular Depth Estimation with Dynamic Iterative Refinement and Scale Consistency
  by: Akmalbek Abdusalomov, et al. Published: (2025-01-01)
- Monocular Depth Estimation: A Review on Hybrid Architectures, Transformers and Addressing Adverse Weather Conditions
  by: Kumara Lakindu, et al. Published: (2025-01-01)
- Pictorial depth cues elicit the perception of tridimensionality in dogs
  by: Anna Broseghini, et al. Published: (2024-07-01)
- Boosting Depth Estimation for Self-Driving in a Self-Supervised Framework via Improved Pose Network
  by: Yazan Dayoub, et al. Published: (2025-01-01)