Leveraging two-dimensional pre-trained vision transformers for three-dimensional model generation via masked autoencoders
Abstract: Although the Transformer architecture has established itself as the de facto standard for natural language processing tasks, its applications in computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks or used to replace individual convolutional netw...
Main Authors: Muhammad Sajid, Kaleem Razzaq Malik, Ateeq Ur Rehman, Tauqeer Safdar Malik, Masoud Alajmi, Ali Haider Khan, Amir Haider, Seada Hussen
Format: Article
Language: English
Published: Nature Portfolio, 2025-01-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-025-87376-y
Similar Items
- Analysing UPQC performance with dual NPC converters: Three-dimensional and two-dimensional space vector modulation
  by: Palanisamy R, et al.
  Published: (2025-03-01)
- Two-Dimensional Ferroelectric Materials: From Prediction to Applications
  by: Shujuan Jiang, et al.
  Published: (2025-01-01)
- Device-to-Device Communication as an Underlay in Coordinated Heterogenous Cellular Network
  by: Chen Xu, et al.
  Published: (2013-06-01)
- Survey of software defined D2D and V2X communication
  by: Wenjuan SHAO, et al.
  Published: (2019-04-01)
- Latent space improved masked reconstruction model for human skeleton-based action recognition
  by: Enqing Chen, et al.
  Published: (2025-02-01)