Thursday, 21 January 2021, 3:00 p.m.

Learned Embeddings for Geometric Data



Solving high-level tasks on 3D shapes, such as classification, segmentation, computing vertex-to-vertex maps, or estimating the perceived style similarity between shapes, requires methods that can extract the necessary information from geometric data and describe the relevant properties. Constructing such functions by hand is challenging because it is unclear which information to extract for a given task, and how. Furthermore, it is difficult to determine how the extracted information should be used to answer the questions being asked about shapes (e.g. which category a shape belongs to). To this end, we propose to learn functions that map geometric data to an embedding space. The outputs of these maps are compressed encodings of the input geometric data that can be optimized to contain all the necessary task-dependent information. These encodings can then be compared directly (e.g. via the Euclidean distance) or by other fairly simple functions to answer the questions being asked.
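As a minimal, hypothetical illustration of such a direct comparison, the sketch below compares fixed stand-in encodings via the Euclidean distance. The vectors and shape names are invented for illustration; in practice a learned encoder would produce the encodings from shape data.

```python
import numpy as np

# Hypothetical embeddings of three shapes; the 4-dimensional vectors
# are illustrative stand-ins for the output of a learned encoder.
chair_a = np.array([0.9, 0.1, 0.0, 0.2])
chair_b = np.array([0.8, 0.2, 0.1, 0.3])
table   = np.array([0.1, 0.9, 0.7, 0.0])

def euclidean(u, v):
    """Compare two encodings directly via their Euclidean distance."""
    return float(np.linalg.norm(u - v))

# In a well-designed embedding space, shapes of the same category
# end up closer together than shapes of different categories.
assert euclidean(chair_a, chair_b) < euclidean(chair_a, table)
```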

Neural networks can be used to implement such maps and comparison functions, with the benefit of being both flexible and expressive. Furthermore, information extraction and comparison can be automated by designing appropriate objective functions that are used to optimize the parameters of the neural networks on collections of geometric data with task-related meta-information provided by humans.
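One common family of such objective functions (named here as an example, not necessarily the one used in the talk) is metric-learning losses. The triplet loss below, sketched in plain NumPy, turns human-provided "same/different" labels into a geometric constraint on the embedding space: encodings with the same label are pulled together, those with different labels are pushed apart by at least a margin.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.5):
    """Triplet margin loss on embedding vectors.

    `anchor` and `positive` share a human-provided label,
    `negative` carries a different one; the loss is zero once the
    negative is at least `margin` farther away than the positive.
    """
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return float(max(d_pos - d_neg + margin, 0.0))

# A triplet that already satisfies the margin incurs no loss:
loss = triplet_loss(np.array([0.0, 0.0]),
                    np.array([0.1, 0.0]),
                    np.array([1.0, 0.0]))
assert loss == 0.0
```

Minimizing this quantity over many labeled triplets is one way the meta-information shapes the learned embedding space.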

We therefore have to answer two questions. Firstly, given the often irregular representations of 3D shapes, how can geometric data be represented as input to neural networks, and how should such networks be constructed? Secondly, how can we design the embedding space produced by the neural networks so that we achieve good results on high-level tasks on 3D shapes?

In this talk we provide answers to these two questions. Concretely, depending on the availability of the data sources and the task-specific requirements, we compute encodings from geometric data represented as images, point clouds, and triangle meshes. Once we have a suitable way to encode the input, we explore different ways to design the learned embedding space through the careful construction of appropriate objective functions that extend beyond straightforward cross-entropy minimization on categorical distributions. We show that these approaches achieve good results in both discriminative and generative tasks.
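For point clouds in particular, one established way to encode the irregular input is a PointNet-style encoder: a shared per-point transform followed by symmetric max pooling, which makes the encoding independent of point order. The sketch below uses randomly initialized placeholder weights and an arbitrary feature size; it is not the architecture from the talk, only an illustration of the idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy point cloud: 128 points with 3 coordinates each.
points = rng.normal(size=(128, 3))

# Placeholder weights of a shared per-point linear layer;
# in practice these would be learned.
W = rng.normal(size=(3, 16))
b = np.zeros(16)

def encode(point_cloud):
    """Shared per-point ReLU layer + max pooling over points.

    The pooling is a symmetric function, so the resulting encoding
    does not depend on the order of the points in the cloud.
    """
    features = np.maximum(point_cloud @ W + b, 0.0)  # shared layer
    return features.max(axis=0)                      # order-invariant pooling

code = encode(points)
# Permuting the points leaves the encoding unchanged.
shuffled = points[rng.permutation(len(points))]
assert np.allclose(code, encode(shuffled))
```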


Invitation by: the lecturers of Computer Science