Topics for Theses

Available Topics

Generating ground truth data can be very costly and time-consuming. For instance, generating the labels for a single image in semantic segmentation can take up to one hour. Synthetic data, in contrast, can be generated automatically, much faster and at much lower cost. However, neural networks trained on synthetic data generalize poorly to real data. In this thesis we assume that we have a labeled synthetic dataset and an unlabeled real dataset consisting only of images. The task is to develop methods that allow a model trained on synthetic data to generalize to real data as well. Various approaches can be used to solve this task, e.g. Generative Adversarial Networks (GANs) or a self-supervised training pipeline.
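A minimal sketch of one possible direction, adversarial feature alignment between the synthetic and the real domain: a shared backbone is trained for segmentation on the labeled synthetic images, while a domain discriminator pushes the features of real images towards the synthetic feature distribution. All modules, shapes, and hyperparameters below are illustrative assumptions, not a prescribed solution.

```python
# Illustrative adversarial domain adaptation sketch (assumed modules and sizes).
import torch
import torch.nn as nn

features = nn.Sequential(                 # shared feature extractor
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
)
seg_head = nn.Conv2d(64, 19, 1)           # per-pixel class logits (e.g. 19 classes)
discriminator = nn.Sequential(            # classifies features: synthetic (0) vs. real (1)
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 1),
)

task_loss, adv_loss = nn.CrossEntropyLoss(), nn.BCEWithLogitsLoss()
opt_task = torch.optim.Adam(list(features.parameters()) + list(seg_head.parameters()), lr=1e-4)
opt_disc = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

def training_step(syn_img, syn_label, real_img, lam=0.01):
    # 1) supervised loss on synthetic data + "fool the discriminator" loss on real features
    f_syn, f_real = features(syn_img), features(real_img)
    d_real = discriminator(f_real)
    loss = task_loss(seg_head(f_syn), syn_label) + lam * adv_loss(d_real, torch.zeros_like(d_real))
    opt_task.zero_grad(); loss.backward(); opt_task.step()

    # 2) the discriminator learns to distinguish the two domains
    d_syn, d_real = discriminator(features(syn_img).detach()), discriminator(features(real_img).detach())
    d_loss = adv_loss(d_syn, torch.zeros_like(d_syn)) + adv_loss(d_real, torch.ones_like(d_real))
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()
    return loss.item(), d_loss.item()

# usage with dummy batches
syn, lbl, real = torch.randn(2, 3, 64, 64), torch.randint(0, 19, (2, 64, 64)), torch.randn(2, 3, 64, 64)
print(training_step(syn, lbl, real))
```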

If you are interested and want more information, please contact Sebastian Scherer

In this work we assume the following scenario: we have a large number of images, but only a small subset has annotated ground truth labels. Supervised approaches can only make use of this small labeled subset. The question we ask in this work is: can we make use of all images? We will pre-train our models on a self-supervised task on the large amount of unlabeled data before adapting the model to the target task. Possible target tasks are semantic segmentation, human pose estimation, or 3D object detection.
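As a rough illustration of the pre-train/fine-tune pipeline, the sketch below uses rotation prediction as an example pretext task; the backbone, heads, and data are placeholder assumptions and not the actual models or target tasks of the thesis.

```python
# Minimal pre-train / fine-tune sketch with a rotation-prediction pretext task.
import torch
import torch.nn as nn

backbone = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
rotation_head = nn.Linear(32, 4)                         # predict 0/90/180/270 degrees

def pretext_batch(images):
    """Rotate each image by a random multiple of 90 degrees; the rotation is the label."""
    k = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2)) for img, r in zip(images, k)])
    return rotated, k

# --- stage 1: self-supervised pre-training on the large unlabeled set ---
opt = torch.optim.Adam(list(backbone.parameters()) + list(rotation_head.parameters()), lr=1e-3)
unlabeled = torch.randn(8, 3, 64, 64)                    # stand-in for unlabeled images
x, y = pretext_batch(unlabeled)
loss = nn.functional.cross_entropy(rotation_head(backbone(x)), y)
opt.zero_grad(); loss.backward(); opt.step()

# --- stage 2: fine-tune the pre-trained backbone on the small labeled subset ---
target_head = nn.Linear(32, 10)                          # e.g. 10 target classes
opt_ft = torch.optim.Adam(list(backbone.parameters()) + list(target_head.parameters()), lr=1e-4)
labeled, labels = torch.randn(4, 3, 64, 64), torch.randint(0, 10, (4,))
ft_loss = nn.functional.cross_entropy(target_head(backbone(labeled)), labels)
opt_ft.zero_grad(); ft_loss.backward(); opt_ft.step()
```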

If you are interested and want more information, please contact Sebastian Scherer

Supervised training of deep neural networks requires large labeled datasets. However, the label generation process can be noisy and error-prone, in the sense that some samples are labeled incorrectly. In addition, there are self-supervised methods that generate pseudo labels on non-annotated data, which are then used for training. Training on noisy labels can lead to poor performance. In this work, we will investigate the effect of incorrect annotations during training and design approaches that overcome this issue. Possible target tasks are image classification, semantic segmentation, human pose estimation, or 3D object detection.
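As one example of the kind of counter-measure the thesis could investigate, the sketch below applies the common "small-loss" heuristic: only the fraction of samples with the lowest per-sample loss in a batch is used for the update, under the assumption that noisy labels tend to produce the largest losses. Model, data, and the assumed noise rate are placeholders.

```python
# Hedged sketch of small-loss sample selection against noisy labels.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def noisy_label_step(images, labels, keep_ratio=0.7):
    """Train only on the keep_ratio fraction of lowest-loss samples in the batch."""
    per_sample = nn.functional.cross_entropy(model(images), labels, reduction="none")
    k = max(1, int(keep_ratio * labels.size(0)))
    keep = torch.topk(per_sample, k, largest=False).indices   # cleanest-looking samples
    loss = per_sample[keep].mean()
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

# usage with dummy data (some labels implicitly "wrong")
imgs, lbls = torch.randn(16, 3, 32, 32), torch.randint(0, 10, (16,))
print(noisy_label_step(imgs, lbls))
```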

If you are interested and want more information, please contact Sebastian Scherer

Human Pose Estimation is the task of detecting human keypoints in images or videos. 2D Human Pose Estimation means the localization of these keypoints in 2D coordinates in the image or video frame. Convolutional neural networks are the most common architectures for such tasks. Recently, the Transformer architecture has made its way from natural language processing to vision tasks. It has the benefit of a global view, in contrast to the local view of convolution operations. As it was originally not designed for vision tasks, some adaptations have to be made to make this architecture feasible for them. Many variants have been proposed recently, but most of them are not evaluated for Human Pose Estimation. Theses on this topic should analyze the performance of different Transformer variants for Human Pose Estimation. Variants could include different basic architectures, target heads, architectural nuances/hyperparameters, etc.
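To make the kind of adaptation concrete, here is a minimal sketch of a ViT-style pose network: patch embedding, a standard Transformer encoder, and a simple heatmap head. All layer sizes and the head design are arbitrary assumptions, not one of the variants to be evaluated.

```python
# Rough ViT-style pose estimation sketch with assumed layer sizes.
import torch
import torch.nn as nn

class TransformerPoseNet(nn.Module):
    def __init__(self, img_size=256, patch=16, dim=256, depth=4, num_keypoints=17):
        super().__init__()
        self.grid = img_size // patch                                   # tokens per side
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, self.grid * self.grid, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Conv2d(dim, num_keypoints, kernel_size=1)        # per-token keypoint logits

    def forward(self, x):
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)         # (B, N, dim)
        tokens = self.encoder(tokens + self.pos)
        fmap = tokens.transpose(1, 2).reshape(x.size(0), -1, self.grid, self.grid)
        heatmaps = self.head(fmap)                                      # (B, K, grid, grid)
        return nn.functional.interpolate(heatmaps, scale_factor=4, mode="bilinear")

net = TransformerPoseNet()
print(net(torch.randn(1, 3, 256, 256)).shape)                           # -> (1, 17, 64, 64)
```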

If you are interested and want more information, please contact Katja Ludwig

Semi-Supervised Learning is an active research field in computer vision with the goal of training neural networks with only a small labeled dataset and a lot of unlabeled data. For human pose estimation, this means that a large dataset with images of people is available, but only a small subset has annotated keypoints. Semi-supervised human pose estimation uses different techniques to train jointly on labeled and unlabeled images in order to improve the detection performance of the network. Popular methods are pseudo labels - the usage of network predictions as annotations - and teacher-student approaches, where one network is enhanced by being trained by a second network.
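The sketch below combines the two mentioned ideas in their simplest form: an exponential-moving-average (EMA) teacher produces pseudo heatmaps on unlabeled images that act as training targets for the student. The networks, the loss weighting, and the absence of any augmentation or confidence filtering are simplifying assumptions.

```python
# Minimal teacher-student / pseudo-label sketch for heatmap-based pose estimation.
import copy
import torch
import torch.nn as nn

student = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.Conv2d(32, 17, 1))
teacher = copy.deepcopy(student)                       # teacher = EMA copy of the student
for p in teacher.parameters():
    p.requires_grad_(False)

mse = nn.MSELoss()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)

def semi_supervised_step(labeled_img, gt_heatmaps, unlabeled_img, unsup_weight=1.0, ema=0.99):
    with torch.no_grad():
        pseudo = teacher(unlabeled_img)                # pseudo labels from the teacher
    loss = mse(student(labeled_img), gt_heatmaps) + unsup_weight * mse(student(unlabeled_img), pseudo)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    with torch.no_grad():                              # EMA update of the teacher weights
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(ema).add_(s, alpha=1 - ema)
    return loss.item()

# usage with dummy tensors (17 keypoint heatmaps at input resolution)
lab, gt, unlab = torch.randn(2, 3, 64, 64), torch.rand(2, 17, 64, 64), torch.randn(2, 3, 64, 64)
print(semi_supervised_step(lab, gt, unlab))
```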

If you are interested and want more information, please contact Katja Ludwig

Note: The following topics are currently only available for practical courses (internships, "Projektmodul", ...) and possibly Bachelor's theses.

Introduction

Driven by the massive progress in 2D Human Pose Estimation and related detection-based tasks over the last years, active research is steadily advancing to the next logical step: the reconstruction of the human pose in 3D space. And while existing multi-view or RGB-D motion capture systems are perfectly capable of this task, current research focuses on the difficult and highly under-constrained case of single-view RGB images and videos. Reliably estimating the 3D pose of a human from a single consumer-grade camera opens up a vast area of practical applications.

Like in many computer vision topics, all current state-of-the-art methods in 3D human pose estimation (HPE) revolve around some form of convolutional neural network (CNN), Transformer network, or a combination of both. The main differences come from

  • the specific task definition
  • the pose representation, especially within the CNN/Transformer
  • the type of supervision (configuration and quantity of data and labels)
  • the runtime vs. fidelity trade-off

There is ongoing research on all of these topics, with year-to-year gains in precision, reliability, and efficiency.

Thesis Topics

The research at our chair covers all the aforementioned concepts. We always have specific research questions that are suitable for a Bachelor's or Master's thesis as well as practical courses and internships. Given the speed of new developments and advancements in this field, the detailed topic for a thesis will be defined on demand, based on the current research at our chair and the prerequisites and interests of the student. Below are some topics that are suitable for a potential thesis. If you are interested in the overall research field or one of the following topics, please contact Moritz Einfalt.

Pose Representations for Multi-Person 3D HPE

Coming from the current state in 2D human pose estimation, the quasi-standard way to represent the detection targets for pose-defining human keypoints in CNN/Transformer models is spatial 2D heatmaps. Retaining the spatial dimensions from input (image) to output (heatmaps, one per keypoint) is currently the best-performing mode. The naive transfer of this concept to the 3D task (i.e. the detection of keypoints in 3D space) is volumetric 3D heatmaps. However, the additional dimension in the network output makes this representation very costly, especially when a high spatial resolution in the predicted heatmaps is required. And while the approach can be feasible for single-person 3D HPE on tight image crops, it completely breaks down in the multi-person case on large images.
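A quick back-of-the-envelope computation illustrates the cost argument: adding a depth axis multiplies the output size by the depth resolution. The numbers below are arbitrary example values.

```python
# Output memory for 2D heatmaps vs. a naive volumetric 3D representation (example values).
num_keypoints, batch = 17, 16
h = w = 128                      # spatial heatmap resolution
d = 128                          # additional depth resolution for the 3D case
bytes_per_value = 4              # float32

mem_2d = batch * num_keypoints * h * w * bytes_per_value
mem_3d = batch * num_keypoints * d * h * w * bytes_per_value
print(f"2D heatmaps : {mem_2d / 2**20:.1f} MiB per batch")
print(f"3D volumes  : {mem_3d / 2**30:.1f} GiB per batch")   # a factor of d larger
```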

Current solutions try to factorize the 3D volume into smaller, more efficient 1D and 2D components [1]. This divides the learning task into a detection part (2D heatmaps) and a regression part (e.g. numerical regression of the depth component), see figure 1. Other approaches use a learned, compact representation of 3D heatmaps from an integrated autoencoder [2], see figure 2. Both variants have disadvantages and can lead to ambiguities in the encoding of 3D keypoints of tightly grouped people in the image. Potential topics for theses under this research question include the comparison of different existing representation methods and the development of new representations under the constraints of efficiency or precision.
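As a minimal illustration of the factorized idea (not the actual architecture of [1]), the sketch below pairs a 2D heatmap branch for the image-plane location with a small regression branch that predicts one depth value per keypoint; the backbone features and channel sizes are assumed placeholders.

```python
# Factorized 3D pose head sketch: 2D heatmaps + per-keypoint depth regression.
import torch
import torch.nn as nn

class FactorizedPoseHead(nn.Module):
    def __init__(self, in_ch=64, num_keypoints=17):
        super().__init__()
        self.heatmap_branch = nn.Conv2d(in_ch, num_keypoints, 1)     # detection: 2D heatmaps
        self.depth_branch = nn.Sequential(                           # regression: one depth per keypoint
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(in_ch, num_keypoints)
        )

    def forward(self, feats):
        heatmaps = self.heatmap_branch(feats)                        # (B, K, H, W)
        depths = self.depth_branch(feats)                            # (B, K), e.g. root-relative depth
        return heatmaps, depths

feats = torch.randn(2, 64, 64, 64)                                   # dummy backbone features
hm, z = FactorizedPoseHead()(feats)
print(hm.shape, z.shape)                                             # (2, 17, 64, 64) (2, 17)
```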

Figure 1: Mixed pose representation: Spatial detection task with 2D heatmaps + sparse regression of 3D keypoint locations. Image taken from [1].

Figure 2: Reconstructed volumetric heatmap (summed over the z-axis for visualization) with the autoencoder from [2]. Image taken from the JTA dataset [3].

Real-Time 3D HPE on Edge Devices

The current state-of-the-art in monocular 3D HPE is already at a level of precision and reliability where it can be integrated into actual applications. These range from analytical applications, where the motion of humans in 3D space is inferred and evaluated, to interactive scenarios, where the human body is used as an input modality to control other agents (robots, virtual characters, ...). However, most of the current best-performing monocular 3D HPE methods rely on very deep CNNs, large spatial input and output sizes, and sometimes even the combination of multiple CNN/Transformer models for two-step person detection and pose estimation. Aside from requiring entire GPU servers for training, these models still need a dedicated high-end consumer or even professional GPU during inference (i.e. in the application) to reach real-time capabilities. This constraint massively hinders the development of new applications: It restricts the usage to stationary scenarios, where the recording device is connected to a powerful GPU machine. The true application potential lies in mobile applications, where the 3D HPE is performed directly on the recording device (read: smartphone).

One highly relevant research question is therefore the transfer of the current state-of-the-art in 3D HPE to less powerful edge devices like smartphones. Existing approaches focus on single-shot architectures [4] (see figure 3), low-resolution image crops, or CNN model compression [5] (see figure 4). Potential topics for theses under this research question include benchmarking and adapting existing methods as well as developing new strategies in teacher-student model compression.
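A minimal sketch of the teacher-student compression idea mentioned above: a small, mobile-friendly student is trained to imitate the heatmaps of a large, frozen teacher, optionally mixed with the ground-truth loss. Both networks and the loss weighting are placeholder assumptions, not the setup of [5].

```python
# Hedged teacher-student distillation sketch for compact pose models.
import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Conv2d(3, 128, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(128, 17, 1)).eval()        # large, pre-trained, frozen
student = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 17, 1))                # small, mobile-friendly

mse = nn.MSELoss()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)

def distillation_step(images, gt_heatmaps, alpha=0.5):
    with torch.no_grad():
        teacher_maps = teacher(images)                        # soft targets from the teacher
    student_maps = student(images)
    loss = alpha * mse(student_maps, gt_heatmaps) + (1 - alpha) * mse(student_maps, teacher_maps)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

imgs, gts = torch.randn(2, 3, 128, 128), torch.rand(2, 17, 128, 128)
print(distillation_step(imgs, gts))
```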

>> Mobile Application for Real-Time 3D HPE <<

We are currently looking for students who are interested in working on a standalone iOS mobile demo application with a complete 3D HPE pipeline. This project covers model compression of 2D and 3D HPE models, platform conversion, as well as the development of the final application. The project is best suited for Bachelor's or Master's students who want to complete their internship ("Betriebspraktikum") or practical course ("Forschungs-/Projektmodul") at our lab and have some prior experience with iOS development. Ideally, two students will tackle the project as a team. Please contact Moritz Einfalt for further details and prerequisites.

Figure 3: Single-shot multi-person 3D HPE with Pandanet. Image taken from [4].

Figure 4: Real-time 3D HPE directly on a smartphone via CNN model compression. Image taken from [5].

Literature

[1] Mehta, Dushyant, et al. "XNect: Real-time multi-person 3D motion capture with a single RGB camera." ACM Transactions on Graphics (TOG) 39.4 (2020): 82-1.

[2] Fabbri, Matteo, et al. "Compressed volumetric heatmaps for multi-person 3D pose estimation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.

[3] Fabbri, Matteo, et al. "Learning to detect and track visible and occluded body joints in a virtual world." Proceedings of the European Conference on Computer Vision (ECCV). 2018.

[4] Benzine, Abdallah, et al. "Pandanet: Anchor-based single-shot multi-person 3D pose estimation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.

[5] Hwang, Dong-Hyun, et al. "Lightweight 3D human pose estimation network training using teacher-student learning." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2020.

Scene Graph Generation is about detecting relationships in images. Relationships are described as triplets of subject, predicate, and object, e.g. "person-driving-car". Current methods are evaluated with the Recall@k metric and its variants. However, these metrics have various drawbacks, which should be addressed in this thesis.
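For reference, a stripped-down version of the Recall@k idea for a single image: the fraction of ground-truth triplets that appear among the k highest-scoring predictions. Real benchmarks additionally match subject and object boxes via IoU, which is omitted here.

```python
# Simplified Recall@k for scene graph triplets (no box matching).
def recall_at_k(predictions, ground_truth, k=50):
    """predictions: list of (subject, predicate, object, score); ground_truth: set of triplets."""
    top_k = sorted(predictions, key=lambda p: p[3], reverse=True)[:k]
    predicted = {(s, p, o) for s, p, o, _ in top_k}
    if not ground_truth:
        return 1.0
    return len(predicted & ground_truth) / len(ground_truth)

preds = [("person", "driving", "car", 0.9), ("person", "near", "car", 0.6),
         ("car", "on", "street", 0.4)]
gt = {("person", "driving", "car"), ("car", "on", "street")}
print(recall_at_k(preds, gt, k=2))   # -> 0.5: only one of the two GT triplets is in the top-2
```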

You will create an overview of existing scene graph generation metrics and compare their properties in various experiments on different datasets. Based on your findings, you will design a new metric that tackles the drawbacks of the existing ones. Your code will be published as a Python package on PyPI to make it accessible to other researchers in the field.

To succeed, you should be proactive, creative, and bring your own ideas. Additionally, it is recommended to have a solid knowledge of neural networks for computer vision and to be familiar with PyTorch.

If you are interested or want more information, feel free to contact Julian Lorenz

Scene graph generation models can detect interactions and relationships in images. A relationship is defined as a triplet of subject-predicate-object, e.g. "person-driving-car".

However, current scene graph datasets that are used to train such models struggle with incomplete annotations and unbalanced predicate class distributions. With synthetic datasets, we do not have these problems, because we can generate as many images as we want and decide what they should look like.

To generate a synthetic dataset for scene graph generation, you will use the Unity game engine and its Perception Package. The Perception Package enables us to construct and automatically annotate synthetic data. However, only standard annotations like depth masks or bounding boxes are supported. You will have to write a custom extension to the Perception Package to support annotations for scene graph datasets.

Additionally, you will develop algorithms to position objects in a virtual environment to create images that contain predefined sets of predicate classes, useful for scene graph generation.

Finally, you will evaluate your dataset with a state-of-the-art scene graph model to demonstrate its effectiveness.

Previous knowledge of C# and the Unity game engine is recommended to get started quickly. To succeed, you will have to bring in your own ideas and be able to understand and modify existing code bases like the Perception Package.

If you are interested or want more information, feel free to contact Julian Lorenz

This topic is suitable for a Master's thesis.

Scene graph generation models are trained to find interactions and relationships in images. A relationship is defined as a triplet of subject-predicate-object, e.g. "person-playing-piano".

However, current scene graph models are still limited to a fixed set of subject/object classes, even though object detectors for open-vocabulary classification and detection already exist. Open vocabulary means that a model is not restricted to a fixed set of classes seen during training but can handle arbitrary labels. Your task will be to integrate such an open-vocabulary detection model into a scene graph generation pipeline.
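Conceptually, open-vocabulary classification replaces a fixed classifier with a comparison between a region embedding and text embeddings of arbitrary class names. The sketch below only illustrates this matching step; the embedding functions are random stand-ins for a real vision-language model (e.g. CLIP) and not part of any specific pipeline.

```python
# Conceptual open-vocabulary matching sketch with stand-in encoders.
import torch

def embed_text(labels, dim=512):
    torch.manual_seed(0)                       # stand-in for a real text encoder
    return torch.nn.functional.normalize(torch.randn(len(labels), dim), dim=-1)

def embed_region(image_crop, dim=512):
    return torch.nn.functional.normalize(torch.randn(1, dim), dim=-1)   # stand-in image encoder

labels = ["person", "piano", "car", "dog"]     # arbitrary labels, not a fixed training vocabulary
text_emb = embed_text(labels)
region_emb = embed_region(torch.zeros(3, 64, 64))
scores = (region_emb @ text_emb.T).squeeze(0)  # cosine similarities between region and class names
print(labels[int(scores.argmax())], scores.softmax(dim=-1))
```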

Previous knowledge of PyTorch is required. Additionally, you should have experience with object detection models and with incorporating large foreign code bases into your own work.

If you are interested or want more information, feel free to contact Julian Lorenz

Scene graph generation models are trained to find interactions and relationships in images. A relationship is defined as a triplet of subject-predicate-object, e.g. "person-playing-piano". Recently, the HiLo architecture achieved a drastic improvement on panoptic scene graph generation.

Your task will be to build a new scene graph generation architecture based on the state-of-the-art HiLo model. Analyze HiLo's different building blocks and their effectiveness to find out which parts can be improved or even removed.

Fundamental knowledge of PyTorch is required, as well as experience working with foreign code bases. To build your own model, you will have to be creative and resourceful.

If you are interested or want more information, feel free to contact Julian Lorenz.
