Computer Science Colloquium

Wednesday December 1, 2021, 1:30pm

Learning with Graphs: From Theory to Applications



Graph-structured data is ubiquitous across domains ranging from chemo- and bioinformatics to image and social network analysis. To develop successful machine learning models in these domains, we need techniques that map a graph's structure to a vectorial representation in a meaningful way---so-called graph embeddings. Beginning in the 1960s in chemoinformatics, different research communities have worked on this problem under various guises, often rediscovering the same ideas.

Moreover, triggered by the resurgence of (deep) neural networks, there is an ongoing trend in the machine learning community to design permutation-invariant or -equivariant neural architectures capable of handling graph input, often denoted as graph neural networks (GNNs). However, although often successful in practice, the capabilities and limits of GNNs are less well understood. In this talk, we give an overview of results that shed light on the limitations and capabilities of GNNs by leveraging tools from graph theory and related areas. To complement the theory, we show how GNNs can act as an inductive bias to enhance state-of-the-art solvers for combinatorial optimization in a data-driven way.
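To illustrate the equivariance property mentioned above, here is a minimal sketch (not from the talk) of one message-passing GNN layer with sum aggregation; the graph, features, and update rule are toy choices. Because the neighborhood sum does not depend on node ordering, relabeling the nodes permutes the output features in exactly the same way:

```python
def gnn_layer(adj, feats):
    """One toy message-passing layer: each node's new feature is its own
    feature plus the sum of its neighbors' features. Sum aggregation is
    invariant to the order in which neighbors are visited."""
    n = len(feats)
    return [feats[i] + sum(feats[j] for j in range(n) if adj[i][j])
            for i in range(n)]

# Path graph 0 - 1 - 2 with scalar node features (illustrative values).
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
feats = [1.0, 2.0, 3.0]
out = gnn_layer(adj, feats)  # [3.0, 6.0, 5.0]

# Relabel the nodes with the permutation pi: old node i becomes pi[i].
pi = [2, 0, 1]
n = len(feats)
adj_p = [[0] * n for _ in range(n)]
feats_p = [0.0] * n
for i in range(n):
    feats_p[pi[i]] = feats[i]
    for j in range(n):
        adj_p[pi[i]][pi[j]] = adj[i][j]

out_p = gnn_layer(adj_p, feats_p)
# Equivariance: the output is permuted the same way as the input.
assert all(out_p[pi[i]] == out[i] for i in range(n))
```

Real GNN layers additionally apply learned weight matrices and nonlinearities, but those operations act on each node's feature vector independently, so the equivariance argument is the same.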


The computer science faculty invite all interested parties to attend.