Topics for Theses

Available Topics

Human Pose Estimation (HPE) is the task of detecting human keypoints in images or videos. 2D Human Pose Estimation localizes these keypoints in pixel coordinates of the image or video frame. 3D Human Pose Estimation estimates a three-dimensional pose for the humans in the image or video. Most commonly, this task is accomplished by uplifting estimated 2D poses to the third dimension, e.g., by leveraging the temporal context in videos.
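The uplifting idea can be sketched in a few lines: a temporal window of 2D keypoints is mapped to a 3D pose for the center frame. The toy linear projection below stands in for a real uplifting network (e.g., a transformer); all shapes and names are illustrative assumptions, not a specific published model.

```python
import numpy as np

def uplift_center_frame(pose_2d_seq, weights):
    """Toy 2D-to-3D uplifting: flatten a temporal window of 2D poses and
    project it to a 3D pose for the center frame with one linear layer
    (a stand-in for a real uplifting model).
    pose_2d_seq: (T, J, 2) 2D keypoints over T frames, J joints.
    weights:     (T*J*2, J*3) linear projection.
    Returns:     (J, 3) 3D pose for the center frame.
    """
    T, J, _ = pose_2d_seq.shape
    flat = pose_2d_seq.reshape(-1)   # concatenate the whole time window
    return (flat @ weights).reshape(J, 3)

rng = np.random.default_rng(0)
seq = rng.standard_normal((9, 17, 2))          # 9 frames, 17 joints (COCO-style)
W = rng.standard_normal((9 * 17 * 2, 17 * 3))  # untrained toy weights
pose_3d = uplift_center_frame(seq, W)
print(pose_3d.shape)  # (17, 3)
```

A real model would replace the single matrix with a learned temporal architecture, but the input/output contract stays the same.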

Transformer architectures are currently the most common choice for these tasks. They have the benefit of a global view, in contrast to the local view of convolution operations. Thesis topics in this field could include analyzing 3D HPE architectures, improving or adapting them, e.g., for different domains or target applications, analyzing different input or training modes like semi-supervised learning, etc.

Semi-Supervised Learning is an active research field in computer vision with the goal of training neural networks with only a small labeled dataset and a large amount of unlabeled data. For human pose estimation, this means that a large dataset with images of people is available, but only a small subset has annotated keypoints. Semi-supervised human pose estimation uses different techniques to train jointly on labeled and unlabeled images in order to improve the detection performance of the network. Popular methods are pseudo labels - the use of network predictions as annotations - and teacher-student approaches, where one network is enhanced by being trained by a second network.
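A common ingredient of pseudo-labeling is a confidence threshold: only predictions the network is sure about are turned into training annotations. The helper below is a minimal sketch of that filtering step, assuming keypoints come as (x, y, confidence) tuples; it is not taken from a specific paper.

```python
def select_pseudo_labels(predictions, confidence_threshold=0.8):
    """Confidence-based pseudo-label selection (illustrative sketch):
    keep only predicted keypoints whose confidence exceeds the
    threshold, so training on unlabeled images uses reliable
    predictions only.
    predictions: list of (x, y, confidence) keypoint tuples.
    Returns:     list of (x, y) pseudo-labels.
    """
    return [(x, y) for x, y, c in predictions if c >= confidence_threshold]

preds = [(120.0, 64.0, 0.95), (80.0, 200.0, 0.42), (33.0, 90.0, 0.88)]
pseudo = select_pseudo_labels(preds)
print(pseudo)  # [(120.0, 64.0), (33.0, 90.0)]
```

In a teacher-student setup, the teacher's filtered predictions would be fed to the student as targets on the unlabeled images.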

 

If you are interested and want more information, please contact Katja Ludwig

Scene graph generation models can detect interactions and relationships in images. A relationship is defined as a subject-predicate-object triplet, e.g. "person-driving-car".
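As a data structure, such a triplet is straightforward; the small sketch below shows one possible in-memory representation (field names are my choice, not a fixed dataset format).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Relationship:
    """One scene graph relationship as a subject-predicate-object triplet."""
    subject: str
    predicate: str
    object: str

rel = Relationship("person", "driving", "car")
print(f"{rel.subject}-{rel.predicate}-{rel.object}")  # person-driving-car
```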

However, current scene graph datasets that are used to train such models suffer from incomplete annotations and unbalanced predicate class distributions. Synthetic datasets do not have these problems because we can generate as many images as we want and decide exactly how they should look.

To generate a synthetic dataset for scene graph generation, you will use the Unity game engine and its Perception Package. The Perception Package enables us to construct and automatically annotate synthetic data. However, only standard annotations like depth masks or bounding boxes are supported. You will have to write a custom extension to the Perception Package to support annotations for scene graph datasets.
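The Perception Package writes its annotations to JSON files, so a custom scene graph annotation could serialize object detections together with relationship triplets that reference them by id. The record below is a hypothetical schema to illustrate the idea; the field names are an assumption, not the Perception Package's actual output format.

```python
import json

# Hypothetical scene graph annotation record (illustrative schema only,
# not the real Perception Package format): objects carry ids, and
# relationships reference those ids as subject/object.
annotation = {
    "frame": 42,
    "objects": [
        {"id": 0, "label": "person", "bbox": [10, 20, 50, 120]},
        {"id": 1, "label": "car", "bbox": [60, 40, 200, 150]},
    ],
    "relationships": [
        {"subject_id": 0, "predicate": "driving", "object_id": 1},
    ],
}

serialized = json.dumps(annotation)     # what the custom extension would write
restored = json.loads(serialized)       # what a dataset loader would read back
print(restored["relationships"][0]["predicate"])  # driving
```

Keeping relationships as id references (rather than copying labels) avoids ambiguity when the same class appears multiple times in one image.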

Additionally, you will develop algorithms to position objects in a virtual environment to create images that contain predefined sets of predicate classes, useful for scene graph generation.
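As a toy example of such a placement algorithm, the predicate "on" can be realized by sampling a position on the top surface of a base object. The geometry below is a simplified sketch with axis-aligned cubes and a y-up coordinate system; none of it is Unity API code.

```python
import random

def place_on_top(base_position, base_size, obj_size, seed=0):
    """Toy placement rule for the predicate "on": sample a position on
    the base cube's top face so the pair realizes an "object-on-base"
    relationship. Coordinates are (x, y, z) with y pointing up; sizes
    are cube edge lengths. Illustrative only, not Unity code.
    """
    rng = random.Random(seed)
    bx, by, bz = base_position
    half = base_size / 2 - obj_size / 2      # keep the object on the face
    x = bx + rng.uniform(-half, half)
    z = bz + rng.uniform(-half, half)
    y = by + base_size / 2 + obj_size / 2    # rest on the top face
    return (x, y, z)

# A 0.2-unit cube placed "on" a 1-unit cube centered at (0, 0.5, 0):
pos = place_on_top(base_position=(0.0, 0.5, 0.0), base_size=1.0, obj_size=0.2)
print(pos)
```

A full generator would combine several such rules (one per predicate class) and re-sample until the requested predicate distribution is met.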

Finally, you will evaluate your dataset using a state of the art scene graph model to demonstrate the effectiveness of your dataset.

Previous knowledge of C# and the Unity game engine is recommended to get started quickly. To succeed, you will have to bring in your own ideas and be able to understand and modify existing code bases like the Perception Package.

 

If you are interested or want more information, feel free to contact Julian Lorenz

This topic is suitable for a master's thesis.

Scene graph generation models are trained to find interactions and relationships in images. A relationship is defined as a subject-predicate-object triplet, e.g. "person-playing-piano".

However, current scene graph models are still limited to a fixed set of subject/object classes even though object detectors exist for open vocabulary classification and detection. Open vocabulary means that a model is not trained on a fixed set of classes but on arbitrary labels. Your task will be to integrate such an open vocabulary detection model into a scene graph generation pipeline.
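The core mechanism behind open-vocabulary detectors is usually embedding similarity: a region's visual feature is scored against text embeddings of arbitrary labels instead of a fixed classifier head. The sketch below shows that scoring step with made-up orthogonal "text embeddings"; it illustrates the general CLIP-style idea, not a specific detector's API.

```python
import numpy as np

def classify_open_vocab(region_feature, label_embeddings, labels):
    """Open-vocabulary classification sketch: score a detected region's
    feature against arbitrary text-label embeddings via cosine
    similarity and return the best-matching label. The embeddings used
    below are toy values for illustration.
    """
    region = region_feature / np.linalg.norm(region_feature)
    embs = label_embeddings / np.linalg.norm(label_embeddings, axis=1, keepdims=True)
    return labels[int(np.argmax(embs @ region))]

labels = ["piano", "guitar", "car"]
label_embs = np.eye(3, 8)        # toy orthogonal "text embeddings", one per label
region = label_embs[0] + 0.05    # a region feature closest to "piano"
print(classify_open_vocab(region, label_embs, labels))  # piano
```

Because the label set is just a list of strings, swapping in new classes requires only new text embeddings, which is exactly the property a scene graph pipeline would inherit from such a detector.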

Previous knowledge of PyTorch is required. Additionally, you should have experience with object detection models and with incorporating large foreign code bases into your own work.

 

If you are interested or want more information, feel free to contact Julian Lorenz

Scene graph generation models are trained to find interactions and relationships in images. A relationship is defined as a triplet of subject-predicate-object, e.g. "person-playing-piano". Recently, the HiLo architecture achieved a drastic improvement on panoptic scene graph generation.

Your task will be to build a new scene graph generation architecture based on the state-of-the-art HiLo model. Analyze HiLo's different building blocks and their effectiveness to find out which parts can be improved or even removed.

Fundamental knowledge of PyTorch is required, as well as experience working with foreign code bases. To build your own model, you will have to be creative and resourceful.

If you are interested or want more information, feel free to contact Julian Lorenz.

The computer vision task of Human Pose Estimation estimates keypoints of humans in either 2D or 3D. These keypoints can be connected to create a skeleton model of the human. This skeleton model is sufficient for some tasks, but it does not reflect the body shape of the person. Human Mesh Estimation overcomes this issue: it estimates not only keypoints, but a whole mesh representing the pose and the body shape of humans. This task is more challenging than pure 3D Human Pose Estimation, as many more parameters need to be estimated. In order to keep the number of parameters relatively small, body models like SMPL and its successors are common in this field. Thesis topics could include the analysis of Human Mesh architectures, slight adaptations to the models or training routines, analyses or conversion of body models, etc.
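The parameter-saving trick of SMPL-like body models is linear blend shapes: the full mesh is a fixed template deformed by a handful of shape coefficients, so a network only has to predict those coefficients instead of every vertex. The sketch below illustrates that shape-space idea with toy array sizes (not SMPL's real 6890 vertices and 10 shape parameters, and without the pose-dependent terms).

```python
import numpy as np

def shaped_template(template_vertices, shape_basis, betas):
    """Minimal SMPL-style shape-space sketch: the body mesh is a template
    plus a linear combination of learned shape blend shapes, so only a
    few coefficients (betas) parameterize the body shape.
    template_vertices: (V, 3), shape_basis: (V, 3, B), betas: (B,)
    Returns: (V, 3) deformed mesh vertices.
    """
    return template_vertices + shape_basis @ betas

V, B = 100, 10                       # toy vertex and shape-parameter counts
rng = np.random.default_rng(0)
T = rng.standard_normal((V, 3))      # toy template mesh
S = rng.standard_normal((V, 3, B))   # toy shape blend-shape basis
betas = np.zeros(B)                  # zero betas recover the mean template
mesh = shaped_template(T, S, betas)
print(np.allclose(mesh, T))  # True
```

A full body model adds pose blend shapes and linear blend skinning on top, but the shape term above is where the parameter count drops from thousands of vertices to a few betas.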

 

If you are interested and want more information, please contact
