Self-attention in Transformers comes with a high computational cost because of its quadratic complexity in the sequence length, but the architecture's effectiveness on language and vision problems has sparked extensive research aimed at enhancing its efficiency. However, diverse experimental conditions, spanning multiple input domains, prevent a fair comparison based solely on reported results, posing challenges for model selection. To address this gap in comparability, we perform a large-scale benchmark of more than 45 models for image classification, evaluating key efficiency aspects, including accuracy, speed, and memory usage. Our benchmark provides a standardized baseline for efficiency-oriented transformers. We analyze the results based on the Pareto front – the boundary of optimal models. Surprisingly, despite claims of other models being more efficient, ViT remains Pareto optimal across multiple metrics. We observe that hybrid attention-CNN models exhibit remarkable inference memory- and parameter-efficiency. Moreover, our benchmark shows that using a larger model is generally more efficient than using higher-resolution images. Thanks to our holistic evaluation, we provide a centralized resource for practitioners and researchers, facilitating informed decisions when selecting or developing efficient transformers.
The Transformer architecture [7] is one of the most successful models in deep learning, outperforming traditional models in multiple domains from language modeling to computer vision. However, a major challenge in working with Transformer models is their computational complexity of $\mathcal O(N^2)$ in the size of the input $N$. Therefore, researchers have proposed a multitude of modifications to overcome this hurdle and make Transformers more efficient.
However, it is unclear which modifications and overall strategies are the most efficient. In this paper, we therefore answer the following questions for the domain of image classification:
We tackle these questions by training more than 45 transformer variants from scratch, ensuring fair and comparable evaluation conditions. These variants have been proposed to increase efficiency in the language or vision domains. We then measure their speed and memory requirements, both at training and inference time, and additionally compare them to the theoretical metrics of parameter count and FLOPs. Our analysis is based on the Pareto front, the set of models that provide an optimal tradeoff between model performance and one aspect of efficiency. It lets us analyze the complex multidimensional tradeoffs involved in judging efficiency. In our plots, Pareto-optimal models have a black dot, while the others have a white dot. For an example, see here.
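To make the notion of Pareto optimality concrete, here is a minimal sketch in plain Python that keeps only the models not dominated in both accuracy and throughput (higher is better for both); the model entries are made up for illustration and are not benchmark results:

```python
# Minimal sketch: extract the Pareto front for (accuracy, throughput).
# The entries below are hypothetical, purely for illustration.
models = [
    {"name": "model_a", "accuracy": 82.5, "throughput": 1200},
    {"name": "model_b", "accuracy": 83.1, "throughput": 800},
    {"name": "model_c", "accuracy": 81.0, "throughput": 1000},  # dominated by model_a
]

def pareto_front(entries, metrics=("accuracy", "throughput")):
    """Return all entries that are not dominated in every metric by another entry."""
    front = []
    for e in entries:
        dominated = any(
            all(o[m] >= e[m] for m in metrics) and any(o[m] > e[m] for m in metrics)
            for o in entries if o is not e
        )
        if not dominated:
            front.append(e)
    return front

print([e["name"] for e in pareto_front(models)])  # -> ['model_a', 'model_b']
```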
We briefly describe the key elements of ViT [4], the Transformer baseline for image classification, that have been targeted to make it more efficient, as well as its key bottleneck: the $\mathcal O(N^2)$ computational complexity of self-attention.
ViT is an adaptation of the original Transformer that takes an image as input and converts it into a sequence of non-overlapping patches of size $p \times p$ (usually $p = 16$).
Each patch is linearly embedded into a token of size $d$, with a positional encoding being added.
A classification token [CLS] is appended to the sequence, which is then fed through a Transformer encoder.
There, the self-attention mechanism computes the attention weights $A$ from the queries $Q \in \mathbb R^{N \times d}$ and keys $K \in \mathbb R^{N \times d}$ for each token from the sequence:
$$ A = \text{softmax}\left( \frac{QK^\top}{\sqrt{d_\text{head}}} \right) \in \mathbb R^{N \times N} $$
This matrix encodes the global interactions between every possible pair of tokens, but it’s also the reason for the inherent $\mathcal O(N^2)$ computational complexity of the attention mechanism.
The output of attention is a sum over the values $V$ weighted by the attention weights: $X_\text{out} = AV$.
After self-attention, the sequence elements are passed through a 2-layer MLP.
In the end, only the [CLS] token is used for the classification decision.
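To make this description concrete, here is a minimal PyTorch sketch of a single encoder block; patch embedding, positional encoding, and the classification head are omitted, and the hyperparameters (a single head, $d = 384$, 197 tokens) are illustrative choices rather than the exact ViT-S configuration:

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Single-head self-attention; the N x N attention matrix is the O(N^2) bottleneck."""
    def __init__(self, d):
        super().__init__()
        self.d = d
        self.to_qkv = nn.Linear(d, 3 * d)

    def forward(self, x):                      # x: (batch, N, d)
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        # Scaled by sqrt(d), which equals sqrt(d_head) since we use a single head here.
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d ** 0.5, dim=-1)  # (batch, N, N)
        return attn @ v                        # sum over the values weighted by attention

class EncoderBlock(nn.Module):
    """Self-attention followed by a 2-layer MLP, each with a residual connection."""
    def __init__(self, d, mlp_ratio=4):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(d), nn.LayerNorm(d)
        self.attn = SelfAttention(d)
        self.mlp = nn.Sequential(nn.Linear(d, mlp_ratio * d), nn.GELU(), nn.Linear(mlp_ratio * d, d))

    def forward(self, x):
        x = x + self.attn(self.norm1(x))
        return x + self.mlp(self.norm2(x))

tokens = torch.randn(1, 197, 384)              # 196 patch tokens + [CLS], illustrative d = 384
print(EncoderBlock(384)(tokens).shape)         # torch.Size([1, 197, 384])
```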
We systematically classify the efficient models using a two-step approach:
The first and most popular approach is to change the token mixing mechanism, which directly tackles the $\mathcal O(N^2)$ computational complexity of self-attention. We identify 7 strategies for changing the token mixing mechanism to make it more efficient:
Models that modify the token sequence are more prevalent in CV than in NLP. The idea is to remove redundant information and, in doing so, turn the $\mathcal O(N^2)$ complexity to our advantage: because the cost of self-attention scales quadratically with the sequence length, removing 30% of the tokens reduces it to roughly $0.7^2 \approx 0.49$ of the original, i.e. by approximately 50%. The strategies we identify are:
The final way of changing the architecture is only taken by two methods. Their idea is to move computation from self-attention into the efficient MLP blocks, either by expanding the MLPs or by exchanging self-attention layers for additional MLPs.
We conduct a series of over 200 experiments on more than 45 models.
We compare models on even grounds by training from scratch with a standardized pipeline. This pipeline is based on DeiT III [6], an updated version of DeiT [5]. To reduce bias, our pipeline is relatively simple and only consists of elements commonly used in CV. In particular, we refrain from using knowledge distillation to prevent introducing bias from the choice of teacher model. Any orthogonal techniques, like quantization, sample selection, and others, are not included as they can be applied to every model and would manifest as a systematic offset in the results. To avoid issues from limited training data, we pre-train all models on ImageNet-21k [3].
|  | Pretrain | Finetune |
|---|---|---|
| Dataset | ImageNet-21k | ImageNet-1k |
| Epochs | 90 | 50 |
| LR | $3 \times 10^{-3}$ | $3 \times 10^{-4}$ |
| Schedule | cosine decay | cosine decay |
| Batch Size | 2048 | 2048 |
| Warmup Schedule | linear | linear |
| Warmup Epochs | 5 | 5 |
| Weight Decay | 0.02 | 0.02 |
| Gradient Clipping | 1.0 | 1.0 |
| Label Smoothing | 0.1 | 0.1 |
| Drop Path Rate | 0.05 | 0.05 |
| Optimizer | Lamb | Lamb |
| Dropout Rate | 0.0 | 0.0 |
| Mixed Precision | ✅ | ✅ |
| Augmentation | 3-Augment | 3-Augment |
| Image Resolution | $224 \times 224$ or $192 \times 192$ | $224 \times 224$ or $384 \times 384$ |
| GPUs | 4 NVIDIA A100 | 4 or 8 NVIDIA A100 |
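As an illustration of how the recipe above could be wired together, the sketch below uses timm's Lamb optimizer and cosine schedule with PyTorch mixed precision; `model` and `loader` are placeholders, and details such as 3-Augment, drop path, and multi-GPU training are omitted, so this is a simplified sketch of the pipeline rather than the exact benchmark code:

```python
import torch
from timm.optim import Lamb
from timm.scheduler import CosineLRScheduler

def train(model, loader, epochs=90, lr=3e-3, warmup_epochs=5, device="cuda"):
    """Illustrative pre-training loop following the hyperparameters in the table above."""
    model = model.to(device)
    criterion = torch.nn.CrossEntropyLoss(label_smoothing=0.1)
    optimizer = Lamb(model.parameters(), lr=lr, weight_decay=0.02)
    scheduler = CosineLRScheduler(optimizer, t_initial=epochs,
                                  warmup_t=warmup_epochs, warmup_lr_init=1e-6)
    scaler = torch.cuda.amp.GradScaler()                 # mixed precision

    for epoch in range(epochs):
        scheduler.step(epoch)                            # cosine decay with linear warmup
        for images, targets in loader:
            images, targets = images.to(device), targets.to(device)
            optimizer.zero_grad()
            with torch.cuda.amp.autocast():
                loss = criterion(model(images), targets)
            scaler.scale(loss).backward()
            scaler.unscale_(optimizer)                   # so clipping sees the true gradients
            torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
            scaler.step(optimizer)
            scaler.update()
```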
We track the following metrics for evaluating the model efficiency:
For comparability, the empirical metrics are measured using the same setup.
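As an example of how such measurements can be taken, the following sketch (our own illustration, not the benchmark's exact measurement code) times inference throughput and records peak GPU memory for a given model and batch size:

```python
import time
import torch

@torch.no_grad()
def measure_inference(model, batch_size=64, resolution=224, steps=50, device="cuda"):
    """Measure images/second and peak GPU memory for one model (illustrative)."""
    model = model.to(device).eval()
    x = torch.randn(batch_size, 3, resolution, resolution, device=device)

    for _ in range(10):                         # warm-up, excluded from the measurement
        model(x)
    torch.cuda.synchronize()
    torch.cuda.reset_peak_memory_stats()

    start = time.perf_counter()
    for _ in range(steps):
        model(x)
    torch.cuda.synchronize()                    # wait for all kernels before stopping the clock
    elapsed = time.perf_counter() - start

    throughput = steps * batch_size / elapsed                   # images per second
    peak_mem = torch.cuda.max_memory_allocated() / 2**20        # MiB
    return throughput, peak_mem
```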
To validate the fairness of our training pipeline, we compare our ImageNet-1k accuracy against the accuracy reported in the original papers (whenever available).
| Model | Orig. DeiT | Orig. Acc. | Our Acc. | Model | Orig. DeiT | Orig. Acc. | Our Acc. |
|---|---|---|---|---|---|---|---|
| ViT-S (DeiT) | ✅ | 79.8 | 82.54 | ViT-S (DeiT III) | | 82.6 | 82.54 |
| XCiT-S | ✅ | 82.0 | 83.65 | Swin-S | ✅ | 83.0 | 84.87 |
| Swin-V2-Ti | | 81.7 | 83.09 | Wave-ViT-S | | 82.7 | 83.61 |
| Poly-SA-ViT-S | | 71.48 | 78.34 | SLAB-S | ✅ | 80.0 | 78.70 |
| EfficientFormer-V2-S0 | | 75.7${}^D$ | 71.53 | CvT-13 | | 83.3$\uparrow$ | 82.35 |
| CoaT-Ti | ✅ | 78.37 | 78.42 | EfficientViT-B2 | | 82.7$\uparrow$ | 81.53 |
| NextViT-S | | 82.5 | 83.92 | ResT-S | ✅ | 79.6 | 79.92 |
| FocalNet-S | | 83.4 | 84.91 | SwiftFormer-S | | 78.5${}^D$ | 76.41 |
| FastViT-S12 | ✅ | 79.8$\uparrow$ | 78.77 | EfficientMod-S | ✅ | 81.0 | 80.21 |
| GFNet-S | | 80.0 | 81.33 | EViT-S | ✅ | 79.4 | 82.29 |
| DynamicViT-S | | 83.0${}^D$ | 81.09 | EViT Fuse | ✅ | 79.5 | 81.96 |
| ToMe-ViT-S | ✅ | 79.42 | 82.11 | TokenLearner-ViT-8 | | 77.87$\downarrow$ | 80.66 |
| STViT-Swin-Ti | ✅ | 80.8 | 82.22 | CaiT-S24 | ✅ | 82.7 | 84.91 |
We find that 13 out of 26 papers base their training pipelines on DeiT, making our pipeline a good fit. Additionally, we see that our pipeline increases accuracy by $0.85$% on average. Most models that report higher accuracy with their original pipeline were trained with knowledge distillation (which we avoid to reduce bias) or at a larger image resolution (which we show below is inefficient).
We find that, in general, accuracy per parameter goes down as models get larger. This is especially true for the ViT models, which are more parameter-efficient than models of similar accuracy at smaller sizes (ViT-Ti), but less parameter-efficient at larger sizes (ViT-B). The most parameter-efficient models are Hybrid Attention models (EfficientFormerV2-S0, CoaT-Ti) and other Non-attention shuffling models that incorporate convolutions (SwiftFormer, FastViT).
Generally, ViT is still a solid choice for speed.
Our observations reveal that fine-tuning at a higher resolution is inefficient. While it may result in improved accuracy, it entails a significant increase in computational cost, leading to a substantial reduction in throughput. In turn, scaling up the model ends up being more efficient. This can be seen when comparing the corresponding Pareto fronts for throughput, training speed, and training memory.
A few examples of scaling up the model vs. scaling up the image size:
We see that scaling up the model size is always more efficient than scaling up the image resolution.

| $\text{corr}(x, y)$ | Params | Training Time | Training Memory | Inference Time | Inference Memory |
|---|---|---|---|---|---|
| FLOPs | 0.30 | 0.72 | 0.85 | 0.48 | 0.42 |
| Params | | 0.05 | 0.18 | 0.02 | 0.40 |
| Training Time | | | 0.89 | 0.81 | 0.17 |
| Training Memory | | | | 0.71 | 0.48 |
| Inference Time | | | | | 0.13 |
The highest correlation of 0.89 is between fine-tuning time and training memory. This suggests a common underlying factor or bottleneck, possibly the memory reads required during training. Like [1, 2] before us, we find that theoretical metrics such as FLOPs and parameter count alone do not reliably estimate practical computational costs. Consequently, assessing model efficiency in practice requires the empirical measurement of throughput and memory requirements.
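For reference, pairwise correlations like the ones above can be computed from a table of per-model measurements with a few lines of pandas; the column names and values below are hypothetical placeholders, not benchmark results:

```python
import pandas as pd

# Hypothetical per-model measurements; in the benchmark each row would be one trained model.
df = pd.DataFrame({
    "flops": [4.6, 8.7, 17.6],
    "params": [22.0, 28.3, 86.6],
    "train_time": [1.0, 1.4, 2.1],
    "train_memory": [8.2, 10.5, 15.9],
    "inference_time": [0.9, 1.2, 1.8],
    "inference_memory": [1.1, 1.6, 2.4],
})

# Pearson correlation between every pair of efficiency metrics.
print(df.corr(method="pearson").round(2))
```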
Our benchmark offers actionable insights for answering the question of which transformer to favor in the form of models and strategies to use. We have compiled an overview of these in the flowchart above. ViT remains the preferred choice overall. However, Token Sequence methods can become viable alternatives when speed and training efficiency are of importance. For scenarios with significant inference memory constraints, considering Hybrid CNN-attention models can prove advantageous.
We additionally find that it is much more efficient to scale up the model size than to scale up the image resolution. This goes against the trend of efficient models being evaluated using higher resolution images, which cancels out possible efficiency gains.
If you use this information, method or the associated code, please cite our paper:
@misc{Nauen2024WTFBenchmark,
title = {Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers},
author = {Tobias Christian Nauen and Sebastian Palacio and Federico Raue and Andreas Dengel},
year = {2024},
eprint = {2308.09372},
archivePrefix = {arXiv},
primaryClass = {cs.CV},
note = {Accepted at WACV 2025},
}
For references and links to the efficient transformer models, see the list of models.