
Tobias Christian Nauen
PhD Student
My research interests include the efficiency of machine learning models, multimodal learning, and transformers.
Publications
We extend pretrained super-resolution models to larger images by using local-aware prompts.
Brian Bernhard Moser, Stanislav Frolov, Tobias Christian Nauen, Federico Raue, Andreas Dengel
We improve the training of vision transformers by segmenting objects and backgrounds in the training data and recombining them, making the models both more accurate and more robust.
Tobias Christian Nauen, Brian Bernhard Moser, Federico Raue, Stanislav Frolov, Andreas Dengel
We conduct the first systematic study of dataset distillation for super-resolution.
Tobias Dietz, Brian Bernhard Moser, Tobias Christian Nauen, Federico Raue, Stanislav Frolov, Andreas Dengel
We improve dataset distillation by distilling only a representative coreset.
Brian Bernhard Moser, Federico Raue, Tobias Christian Nauen, Stanislav Frolov, Andreas Dengel
We speed up diffusion classifiers by utilizing a label hierarchy and pruning unrelated paths.
Arundhati S Shanbhag, Brian Bernhard Moser, Tobias Christian Nauen, Stanislav Frolov, Federico Raue, Andreas Dengel
We use the TaylorShift attention mechanism to enable global pixel-wise attention in image super-resolution.
Sanath Budakegowdanadoddi Nagaraju, Brian Bernhard Moser, Tobias Christian Nauen, Stanislav Frolov, Federico Raue, Andreas Dengel
