Tobias Nauen
Deep Learning
PRISM: Diversifying Dataset Distillation by Decoupling Architectural Priors
We introduce PRISM, a framework that disentangles architectural priors for dataset distillation, outperforming single-teacher setups.
Brian Bernhard Moser
,
Shalini Sarode
,
Federico Raue
,
Krzysztof Adamkiewicz
,
Arundhati Shanbhag
,
Joachim Folz
,
Tobias Christian Nauen
,
Andreas Dengel
PDF
Cite
HyperCore: Coreset Selection under Noise via Hypersphere Models
We present HyperCore, a lightweight adaptive coreset selection framework designed for noisy environments. HyperCore uses per-class hypersphere models and adaptively selects pruning thresholds. A rough code sketch of this idea follows this entry.
Brian Bernhard Moser
,
Arundhati Shanbhag
,
Tobias Christian Nauen
,
Stanislav Frolov
,
Federico Raue
,
Joachim Folz
,
Andreas Dengel
PDF
Cite
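The per-class hypersphere idea can be illustrated in a few lines of code. The snippet below is a minimal sketch under assumed details (feature centroids as sphere centers, a quantile rule for the adaptive radius, the hypothetical function name `hypersphere_coreset`); it is not the paper's exact algorithm.

```python
import numpy as np

def hypersphere_coreset(features, labels, keep_quantile=0.8):
    # Illustrative sketch only, not HyperCore's exact algorithm:
    # model each class as a hypersphere around its feature centroid and
    # keep the samples inside an adaptively chosen per-class radius.
    keep = np.zeros(len(labels), dtype=bool)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        center = features[idx].mean(axis=0)                    # class centroid = sphere center
        dist = np.linalg.norm(features[idx] - center, axis=1)  # distance of each sample to the center
        radius = np.quantile(dist, keep_quantile)              # assumed adaptive threshold rule
        keep[idx[dist <= radius]] = True                       # samples far outside are treated as likely noise
    return np.where(keep)[0]

# toy usage with random embeddings
rng = np.random.default_rng(0)
coreset_idx = hypersphere_coreset(rng.normal(size=(100, 16)), rng.integers(0, 5, 100))
```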
SubZeroCore: A Submodular Approach with Zero Training for Coreset Selection
We introduce SubZeroCore, a novel, training-free coreset selection method that integrates submodular coverage and density into a single, unified objective. An illustrative sketch of such a greedy selection follows this entry.
Brian Bernhard Moser
,
Tobias Christian Nauen
,
Arundhati Shanbhag
,
Federico Raue
,
Stanislav Frolov
,
Joachim Folz
,
Andreas Dengel
PDF
Cite
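To give a feel for what a combined coverage-and-density objective can look like, here is a minimal greedy sketch. The facility-location coverage term, the Gaussian-kernel density estimate, the weighting `alpha`, and the function name are illustrative assumptions, not the paper's actual objective.

```python
import numpy as np

def greedy_coverage_density_selection(features, budget, alpha=0.5, sigma=1.0):
    # Illustrative sketch only, not SubZeroCore's actual objective:
    # greedily maximize a submodular facility-location coverage term
    # plus a kernel-density bonus, with no training involved.
    n = len(features)
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    sim = np.exp(-d2 / (2 * sigma ** 2))       # pairwise Gaussian similarities
    density = sim.mean(axis=1)                 # how "typical" each sample is
    selected, covered = [], np.zeros(n)
    for _ in range(budget):
        gain = np.maximum(sim - covered, 0).sum(axis=1) + alpha * density  # marginal gain per candidate
        gain[selected] = -np.inf               # never pick the same sample twice
        best = int(np.argmax(gain))
        selected.append(best)
        covered = np.maximum(covered, sim[best])
    return selected

# toy usage
print(greedy_coverage_density_selection(np.random.default_rng(0).normal(size=(50, 8)), budget=5))
```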
When 512×512 is not Enough: Local Degradation-Aware Multi-Diffusion for Extreme Image Super-Resolution
We extend pretrained super-resolution models to larger images by using local degradation-aware prompts.
Brian B. Moser
,
Stanislav Frolov
,
Tobias Christian Nauen
,
Federico Raue
,
Andreas Dengel
PDF
Cite
Code
Project
DOI
ForAug: Recombining Foregrounds and Backgrounds to Improve Vision Transformer Training with Bias Mitigation
We improve the training of vision transformers by segmenting foreground objects and recombining them with backgrounds from other images in the dataset. This makes the transformers both more accurate and more robust. A simplified illustration of the recombination step appears below this entry.
Tobias Christian Nauen
,
Brian Moser
,
Federico Raue
,
Stanislav Frolov
,
Andreas Dengel
PDF
Cite
Code
Dataset
Project
Supplementary Material
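The foreground-background recombination step can be sketched as a simple compositing operation. Everything here (the function name, the toy arrays, the omission of placement and scaling) is an assumption for illustration; the actual ForAug pipeline handles segmentation, placement, and blending far more carefully.

```python
import numpy as np

def recombine(fg_image, fg_mask, bg_image):
    # Illustrative sketch of the recombination idea, not the full ForAug pipeline
    # (which also handles segmentation, placement, and scaling of the foreground).
    mask = fg_mask[..., None].astype(fg_image.dtype)     # broadcast binary mask over channels
    return mask * fg_image + (1.0 - mask) * bg_image     # composite the object onto the new background

# toy usage with random "images"
rng = np.random.default_rng(0)
augmented = recombine(rng.random((224, 224, 3)),
                      rng.random((224, 224)) > 0.5,
                      rng.random((224, 224, 3)))
```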
Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers
We benchmark and analyze more than 45 transformer models for image classification to evaluate their efficiency across a range of performance metrics. We identify the optimal architectures to use and find that scaling the model is more efficient than scaling the input image.
Tobias Christian Nauen
,
Sebastian Palacio
,
Federico Raue
,
Andreas Dengel
PDF
Cite
Code
Project
Poster
Slides
Data Explorer
Supplementary Material
A Study in Dataset Distillation for Image Super-Resolution
We conduct the first systematic study of dataset distillation for Super-Resolution.
Tobias Dietz
,
Brian Bernhard Moser
,
Tobias Christian Nauen
,
Federico Raue
,
Stanislav Frolov
,
Andreas Dengel
PDF
Cite
Project
TaylorShift: Shifting the Complexity of Self-Attention from Squared to Linear (and Back) using Taylor-Softmax
This paper introduces TaylorShift, a novel reformulation of the attention mechanism using Taylor softmax that enables computing full token-to-token interactions in linear time. We analytically and empirically determine the crossover points where employing TaylorShift becomes more efficient than traditional attention. TaylorShift outperforms the traditional transformer architecture in 4 out of 5 tasks. A schematic sketch of the underlying linearization idea follows this entry.
Tobias Christian Nauen
,
Sebastian Palacio
,
Andreas Dengel
PDF
Cite
Code
Project
Slides
DOI
Appendix
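As a rough illustration of why a Taylor expansion of the softmax can make attention linear in sequence length (this sketches the generic idea only, not TaylorShift's exact, efficiency-preserving reformulation):

$$
\exp(q_i^{\top} k_j) \approx 1 + q_i^{\top} k_j + \tfrac{1}{2}\,(q_i^{\top} k_j)^2
\quad\Longrightarrow\quad
\sum_j \exp(q_i^{\top} k_j)\, v_j \approx \sum_j v_j + \Big(\sum_j v_j k_j^{\top}\Big) q_i + \tfrac{1}{2} \sum_j \big(q_i^{\top} k_j\big)^2 v_j .
$$

All sums over $j$ (the quadratic term via the third-order tensor $\sum_j k_j \otimes k_j \otimes v_j$) can be precomputed once and reused for every query $q_i$, so the cost grows linearly rather than quadratically with the number of tokens, at the price of larger intermediate features; the same trick applies to the softmax denominator.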
Distill the Best, Ignore the Rest: Improving Dataset Distillation with Loss-Value-Based Pruning
We improve dataset distillation by distilling only a representative coreset.
Brian Bernhard Moser
,
Federico Raue
,
Tobias Christian Nauen
,
Stanislav Frolov
,
Andreas Dengel
PDF
Cite
Code
Project
Just Leaf It: Accelerating Diffusion Classifiers with Hierarchical Class Pruning
We speed up diffusion classifiers by utilizing a label hierarchy and pruning unrelated paths.
Arundhati S Shanbhag
,
Brian Bernhard Moser
,
Tobias Christian Nauen
,
Stanislav Frolov
,
Federico Raue
,
Andreas Dengel
PDF
Cite
Project