Tobias Nauen
Publications
Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers
A comprehensive benchmark and analysis of more than 45 transformer models for image classification, evaluating their efficiency across various performance metrics. We identify the optimal architectures to use and find that scaling the model is more efficient than scaling the image resolution.
Tobias Christian Nauen, Sebastian Palacio, Federico Raue, Andreas Dengel
PDF · Cite · Code · Project · Data Explorer · Supplementary Material
TaylorShift: Shifting the Complexity of Self-Attention from Squared to Linear (and Back) using Taylor-Softmax
This paper introduces TaylorShift, a reformulation of the attention mechanism using Taylor softmax that enables computing full token-to-token interactions in linear time. We analytically and empirically determine the crossover points at which TaylorShift becomes more efficient than conventional attention. TaylorShift outperforms the standard transformer architecture on 4 out of 5 tasks. (The core linearization idea is sketched after this entry.)
Tobias Christian Nauen, Sebastian Palacio, Andreas Dengel
PDF · Cite · Code · Project · Slides · DOI · Appendix
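
The core trick behind TaylorShift can be sketched in a few lines of NumPy: the degree-2 Taylor expansion exp(s) ≈ 1 + s + s²/2 factors into an explicit feature map φ with φ(q)·φ(k) = 1 + q·k + (q·k)²/2, which lets the key/value aggregation be shared across all queries. This is a minimal sketch of the linearization only; the function names are mine, and the paper's actual implementation adds normalization and the analysis of when to switch between the linear and quadratic formulations.

import numpy as np

def phi(X):
    # Degree-2 Taylor feature map: phi(x) = [1, x, vec(x x^T)/sqrt(2)],
    # so that phi(q) . phi(k) = 1 + q.k + (q.k)^2 / 2 > 0 for all q, k.
    n, d = X.shape
    outer = np.einsum("ni,nj->nij", X, X).reshape(n, d * d) / np.sqrt(2.0)
    return np.concatenate([np.ones((n, 1)), X, outer], axis=1)

def taylor_attention(Q, K, V):
    # Attention with Taylor-softmax weights in O(N d^2) time instead of O(N^2 d).
    fq, fk = phi(Q), phi(K)            # (N, 1 + d + d^2)
    kv = fk.T @ V                      # aggregate keys and values once
    z = fk.sum(axis=0)                 # shared normalizer
    return (fq @ kv) / (fq @ z)[:, None]

For long sequences (N much larger than d²) this linear path wins; the paper derives exactly where the crossover lies.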
Zoomed In, Diffused Out: Towards Local Degradation-Aware Multi-Diffusion for Extreme Image Super-Resolution
We extend pretrained super-resolution models to larger images using local, degradation-aware prompts.
Brian B. Moser, Stanislav Frolov, Tobias Christian Nauen, Federico Raue, Andreas Dengel
PDF · Cite · Project
Just Leaf It: Accelerating Diffusion Classifiers with Hierarchical Class Pruning
We speed up diffusion classifiers by exploiting a label hierarchy and pruning unrelated branches (sketched below).
Arundhati S Shanbhag, Brian Bernhard Moser, Tobias Christian Nauen, Stanislav Frolov, Federico Raue, Andreas Dengel
PDF · Cite · Project
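
A generic version of the pruning idea, independent of the paper's exact procedure: classify coarse-to-fine over a class tree, scoring each node with the diffusion model and descending only into the best branch. The Node layout and the diffusion_error callback are hypothetical placeholders for illustration, not the paper's API.

from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    children: list = field(default_factory=list)

def hierarchical_classify(x, node, diffusion_error):
    # Greedy coarse-to-fine descent: at each level, score only the children
    # of the current node (diffusion_error stands in for the per-class
    # reconstruction error a diffusion classifier computes; lower is better)
    # and follow the best one, skipping all other subtrees entirely.
    while node.children:
        node = min(node.children, key=lambda c: diffusion_error(x, c.label))
    return node.label

Instead of scoring every leaf class, only the branching factors along one root-to-leaf path are evaluated; keeping a small beam of candidates per level is a natural variant.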
Distill the Best, Ignore the Rest: Improving Dataset Distillation with Loss-Value-Based Pruning
We improve dataset distillation by distilling only a representative coreset, selected by per-sample loss values (the selection step is sketched below).
Brian Bernhard Moser, Federico Raue, Tobias Christian Nauen, Stanislav Frolov, Andreas Dengel
PDF · Cite · Project
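
The selection step can be sketched independently of any distillation backend: score every training sample with a pretrained model's per-sample loss and keep only a fraction as the coreset to distill. Which end of the loss spectrum counts as representative is the paper's design choice; keeping the lowest-loss samples below is an assumption for illustration.

import numpy as np

def select_coreset(per_sample_losses, keep_frac=0.5, keep="low"):
    # Return the indices of the coreset chosen by loss value.
    # per_sample_losses: 1-D array of losses from a pretrained scoring model.
    # keep="low" retains the easiest samples -- an assumption here, not
    # necessarily the paper's choice.
    order = np.argsort(per_sample_losses)
    if keep == "high":
        order = order[::-1]
    n_keep = int(len(order) * keep_frac)
    return order[:n_keep]   # distill only these samples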
A Low-Resolution Image is Worth 1x1 Words: Enabling Fine Image Super-Resolution with Transformers and TaylorShift
We use the TaylorShift attention mechanism for global pixel-wise attention in image super-resolution.
Sanath Budakegowdanadoddi Nagaraju, Brian Bernhard Moser, Tobias Christian Nauen, Stanislav Frolov, Federico Raue, Andreas Dengel
PDF · Cite · Project
Stochastic Control with Signatures
This paper proposes a new method for parameterizing open-loop controls in stochastic optimal control problems using path signatures. We show that these controls are dense in the space of all admissible controls and establish conditions for the stability of the controlled dynamics and the target functional. (The basic construction is sketched after this entry.)
Peter Bank, Christian Bayer, Paul Peter Hager, Sebastian Riedel, Tobias Christian Nauen
PDF · Code
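
To make "controls parameterized by signatures" concrete, here is a minimal sketch under strong simplifications: the signature of a piecewise-linear path, truncated at level 2, is accumulated via Chen's identity, and a control is read off as a linear functional ⟨ℓ, Sig(X)⟩ of it. The truncation level and coefficient names are illustrative; the paper works with general truncations (and time-augmented paths) and proves the density and stability results this sketch does not touch.

import numpy as np

def signature_level2(path):
    # Signature of a piecewise-linear path, truncated at level 2.
    # path: (T+1, d) array of sample points.
    d = path.shape[1]
    s1 = np.zeros(d)          # level 1: increments
    s2 = np.zeros((d, d))     # level 2: iterated integrals
    for dx in np.diff(path, axis=0):
        # Chen's identity for appending one linear segment
        s2 += np.outer(s1, dx) + np.outer(dx, dx) / 2.0
        s1 += dx
    return s1, s2

def linear_control(ell0, ell1, ell2, path_so_far):
    # Open-loop control u_t = <ell, Sig(X)_{0,t}>: a linear functional of
    # the truncated signature of the path observed up to time t.
    s1, s2 = signature_level2(path_so_far)
    return ell0 + ell1 @ s1 + np.sum(ell2 * s2)

Optimizing over the finitely many coefficients (ell0, ell1, ell2) then replaces optimizing over the infinite-dimensional space of admissible controls.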
Stochastic Optimal Control using Signatures
We consider a stochastic optimal control problem and approach it using the signature method.
Tobias Christian Nauen, Sebastian Riedel
PDF · Cite · Code
Explaining Graph Neural Networks
We extend and test KEdge, an interpretable-by-design approach for graph neural networks, and compare it to gradient-based attribution techniques (one such baseline is sketched below).
Tobias Christian Nauen, Thorben Funke, Avishek Anand
PDF · Cite · DOI
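
For context on the gradient-based baselines: a saliency-style attribution for a GNN takes the gradient of the target class score with respect to the node features and aggregates per node. The model(x, edge_index) interface below is a generic assumption, not KEdge's API, and KEdge itself (learned edge masking) is not reproduced here.

import torch

def gradient_node_attribution(model, x, edge_index, target_class):
    # Saliency over nodes: |d score / d x|, summed over feature dimensions.
    # Assumes model(x, edge_index) returns graph-level class logits.
    x = x.clone().requires_grad_(True)
    logits = model(x, edge_index)
    logits[target_class].backward()
    return x.grad.abs().sum(dim=-1)   # one importance score per node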