About Me (Yuiga Wada / YuWd)
OTTER: Data Efficient Language-Supervised Zero-Shot Recognition with Optimal Transport Distillation
Graphormer: Do Transformers Really Perform Badly for Graph Representation?
TokenGT: Pure Transformers are Powerful Graph Learners
Why do tree-based models still outperform deep learning on tabular data?
Deformable Attention Transformer
Prototypical Contrastive Learning of Unsupervised Representations
GSAM: Surrogate Gap Minimization Improves Sharpness-Aware Training
Toronto Paper Matching System
RegionCLIP: Region-based Language-Image Pretraining
When Shift Operation Meets Vision Transformer: An Extremely Simple Alternative to Attention Mechanism