Women in Data Science Chemnitz

WiDS Chemnitz 2022

May 19, 2022


"On May 19th the regional WiDS Chemnitz event took place for the first time. It was entirely held in-person and attracted 50 data scientists, students and professionals from industry across several disciplines mostly from Germany but also other European countries. WiDS Chemnitz was a conference day full of highlights including 6 plenary talks from excellent speakers, a career panel, a poster session in which the participants were able to present their work and fantastic networking sessions. Since the event was rather small networking was super easy and the exchange went great and was very successful. The WiDS Chemnitz ambassadors Franziska Nestler and Theresa Wagner are thankful for the great feedback. Both are very satisfied with how everything worked out and took a lot from that day about how the next edition might turn out even better. They are very looking forward to future collaborations whose foundation was laid at WiDS Chemnitz. Franziska and Theresa are very grateful about the great WiDS network and all the support. 2022 was certainly not the last time WiDS Chemnitz took place."


Program

9:15 - 9:30    Opening
9:30 - 10:15   Plenary Talk: Marika Kaden
               "Interpretable Machine Learning Methods and Their Application to Bioinformatics and Medicine"
10:15 - 11:00  Plenary Talk: Nicole Mücke
               "Deep Learning Needs Math!"
11:00 - 11:15  Coffee break
11:15 - 12:00  Poster session / networking
12:00 - 13:30  Lunch break at Yasmin Imbiss
13:30 - 14:15  Plenary Talk: Anke Stoll
               "AI-Based Trajectory Prediction for Autonomous Driving"
14:15 - 15:00  Plenary Talk: Leena Chennuru Vankadara
               "Interpolation and Regularization for Causal Learning"
15:00 - 15:30  Networking / coffee break
15:30 - 16:15  Plenary Talk: Franziska Diez
               "A Study on Text Classification for Public Administration"
16:15 - 17:00  Plenary Talk: Melina Freitag
               "From Models to Data and Back - An Introduction to Data Assimilation Algorithms"
17:00 - 17:30  Networking / break
17:30 - 18:00  Career Panel
               Panelists: Nicole Mücke, Anke Stoll, Stefanie Schwaar, Leena Chennuru Vankadara
18:00 - 18:05  Closing
19:00          Conference dinner at Miramar

Speakers


Marika Kaden

(Hochschule Mittweida)

"Interpretable Machine Learning Methods and Their Application to Bioinformatics and Medicine"
Machine learning is applied in virtually all areas. Deep neural networks in particular are experiencing a special hype, since their performance is often very impressive. However, many users remain skeptical, because these models are often so-called black boxes, i.e. the decision-making process is difficult to understand and it is hardly possible to interpret how the network arrives at a decision. The focus of our research group at the Saxon Institute for Computational Intelligence and Machine Learning (SICIM) at UAS Mittweida is the (further) development of interpretable and transparent machine learning models. In my talk I will present some of these models together with their broad range of applications.

Nicole Mücke

(TU Braunschweig)

"Deep Learning Needs Math!"
The field of deep learning has undoubtedly established itself as an outstanding machine learning approach over the past few years. This dominant position is motivated by a series of overwhelming successes in widely different application areas. In my talk, I review those areas and show some of the shortcomings of neural networks in current practical applications. This will build a bridge to the mathematical research questions that I will highlight and summarize at the end.

Anke Stoll

(FDTech Chemnitz)

"AI-Based Trajectory Prediction for Autonomous Driving"
We address the problem of forecasting vehicle trajectories in a parking garage environment, conditioned on their past motion and a bird's eye view representation of the scene around them. As rule-based approaches struggle to scale to complex driving scenarios, an AI-based approach for trajectory prediction is proposed. The data set, which needs to represent a large variety of scenarios, was generated using simulation. For parking tasks within different parking garages, the simulation visualizes sensor responses from the ego vehicle and/or the parking environment, which help with the interpretation of the situation. These images are then used to train a CNN model that predicts the ego vehicle's trajectory for the next few seconds in order to safely position the vehicle in a parking space.
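
For readers who want a concrete picture of the kind of model described, here is a minimal, purely illustrative sketch: a small CNN that maps a bird's eye view image to a short sequence of future (x, y) waypoints. The layer sizes and the `horizon` parameter are assumptions for illustration, not the speaker's actual architecture.

```python
# Illustrative sketch (not the speaker's model): a CNN that maps a
# bird's-eye-view image of the parking scene to future (x, y) waypoints.
import torch
import torch.nn as nn

class TrajectoryCNN(nn.Module):
    def __init__(self, horizon: int = 10):  # number of future waypoints (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2 * horizon)  # (x, y) per future time step
        self.horizon = horizon

    def forward(self, bev_image: torch.Tensor) -> torch.Tensor:
        z = self.features(bev_image).flatten(1)
        return self.head(z).view(-1, self.horizon, 2)

# Example: one 128x128 RGB bird's-eye-view image -> 10 predicted waypoints.
model = TrajectoryCNN()
waypoints = model(torch.randn(1, 3, 128, 128))  # shape (1, 10, 2)
```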

Leena Chennuru Vankadara

(Universität Tübingen)

"Is Memorization Compatible with Causal Learning? The Case of High-Dimensional Linear Regression."
Deep learning models exhibit a rather curious phenomenon. They optimize over hugely complex model classes and are often trained to memorize the training data. This is seemingly contradictory to classical statistical wisdom which suggests avoiding interpolation in favor of reducing the complexity of the prediction rules. A large body of recent work partially resolves this contradiction and suggests that interpolation does not necessarily harm statistical generalization and it may even be necessary for optimal statistical generalization in some settings. This is however an incomplete picture. In modern ML, we care about more than building good statistical models. We want to learn models which are reliable and have good causal implications. Under a simple linear model in high dimensions, we will discuss the role of interpolation and its counterpart --- regularization --- in learning better causal models. 
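
To make the contrast between interpolation and regularization concrete, the two estimators for a linear model with more parameters than samples can be written in their standard textbook form (notation chosen here, not taken from the talk):

```latex
% Linear model y = X\beta + \varepsilon with n samples and d > n parameters.
% Ridge regression (explicit regularization):
\hat{\beta}_{\lambda} = (X^{\top} X + \lambda I)^{-1} X^{\top} y .
% Its limit for \lambda -> 0 is the minimum-norm interpolator, which fits the
% training data exactly whenever X has full row rank:
\hat{\beta}_{\mathrm{mn}} = \lim_{\lambda \to 0^{+}} \hat{\beta}_{\lambda}
                          = X^{\top} (X X^{\top})^{-1} y ,
\qquad X \hat{\beta}_{\mathrm{mn}} = y .
```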

Franziska Diez

(Fraunhofer ITWM Kaiserslautern)

"A Study on Text Classification for Public Administration"

Text classification is one of the typical tasks in natural language processing. In German public administration this task arises from the fact that every municipality is obliged to operate a special messaging infrastructure, a so-called "besonderes elektronisches Behördenpostfach" (special electronic institutional post box). The received messages need to be forwarded to one of up to 45 different offices. With the number of received messages expected to grow substantially in the near future, finding ways to forward at least a large share of them automatically is crucial. Furthermore, a high accuracy of the algorithm is important due to possibly sensitive data. Because incoming messages often lack structure, the classification algorithm needs to be based on content. Moreover, it must be able to deal with starkly different document lengths. One way to address this is to split documents into segments and classify them separately with standard methods. However, the classification results then have to be aggregated back to the document level. We present our comparative study of different aggregation algorithms and classification techniques in the context of text classification for public administration.

joint work with Michael Trebing and Stefanie Schwaar
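
As an illustration of the split-and-aggregate idea described above (not the authors' actual pipeline), a minimal sketch using scikit-learn could segment each document, classify the segments with a bag-of-words model, and aggregate the per-segment predictions by majority vote. All helper names and the segment length are assumptions.

```python
# Illustrative sketch of segment-wise classification with document-level
# aggregation (majority vote). Not the authors' pipeline; names are assumed.
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def split_into_segments(text: str, words_per_segment: int = 200) -> list[str]:
    words = text.split()
    return [" ".join(words[i:i + words_per_segment])
            for i in range(0, len(words), words_per_segment)] or [""]

def train(documents: list[str], labels: list[str]):
    # Train on individual segments; each segment inherits its document's label.
    seg_texts, seg_labels = [], []
    for doc, label in zip(documents, labels):
        for seg in split_into_segments(doc):
            seg_texts.append(seg)
            seg_labels.append(label)
    vec = TfidfVectorizer()
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vec.fit_transform(seg_texts), seg_labels)
    return vec, clf

def classify_document(doc: str, vec, clf) -> str:
    # Classify each segment, then aggregate to a single document-level label.
    segment_preds = clf.predict(vec.transform(split_into_segments(doc)))
    return Counter(segment_preds).most_common(1)[0][0]
```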


Melina Freitag

(Universität Potsdam)

"From Models to Data and Back - An Introduction to Data Assimilation Algorithms"
Data assimilation is a method that combines observations (e.g. real world data) of a state of a system with model output for that system in order to improve the estimate of the state of the system. The model is usually represented by discretised time dependent partial differential equations. The data assimilation problem can be formulated as a large scale Bayesian inverse problem. Based on this interpretation we derive the most important variational and sequential data assimilation approaches, in particular three-dimensional and four-dimensional variational data assimilation (3D-Var and 4D-Var), and the Kalman filter. The final part reviews advances and challenges for data assimilation.
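
For orientation, the 3D-Var approach mentioned above estimates the state by minimizing a cost function that balances the background (model) state against the observations. This is the standard textbook form, not taken from the talk:

```latex
% Standard 3D-Var cost function: x_b is the background (model) state, y the
% observations, H the observation operator, B and R the background and
% observation error covariance matrices.
J(x) = \tfrac{1}{2}\,(x - x_b)^{\top} B^{-1} (x - x_b)
     + \tfrac{1}{2}\,(y - Hx)^{\top} R^{-1} (y - Hx),
\qquad x_a = \operatorname*{arg\,min}_{x} J(x).
```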

Career Panel

Nicole Mücke

(TU Braunschweig)

Anke Stoll

(FDTech Chemnitz)

Stefanie Schwaar

(Fraunhofer ITWM Kaiserslautern)

Leena Chennuru Vankadara

(Universität Tübingen)


Poster Sessions

Melanie Kircheis

"Efficient inversion of the multivariate nonequispaced fast Fourier transform"
The well-known discrete Fourier transform (DFT) can easily be generalized to arbitrary nodes in the spatial domain. The fast procedure for this generalization is referred to as nonequispaced fast Fourier transform (NFFT). Various applications such as MRI, solution of PDEs, etc. are interested in the inverse problem, i.e., computing Fourier coefficients from given nonequispaced data. In contrast to iterative solvers we study direct methods for this inversion in the overdetermined setting. For this purpose, we use the matrix representation of the NFFT and introduce a new method using least-squares minimization. Modifying one of the matrix factors of the NFFT leads to an optimization problem, which can simply be solved in a precomputational step. Thereby, we are able to compute an inverse NFFT up to a certain accuracy by means of a modified adjoint NFFT, which preserves its arithmetic complexity.
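
For readers unfamiliar with the setting, the forward problem and the least-squares formulation of its inverse can be written as follows (standard notation from the NFFT literature, up to the sign convention in the exponent; not taken from the poster):

```latex
% Forward NFFT: evaluate a trigonometric polynomial with coefficients
% \hat{f}_k, k in the index set I_N, at nonequispaced nodes x_j:
f_j = \sum_{k \in I_N} \hat{f}_k \, \mathrm{e}^{-2\pi\mathrm{i}\, k x_j},
\quad j = 1,\dots,M, \qquad \text{in matrix form } f = A\hat{f}.
% Inverse NFFT in the overdetermined setting (more nodes than coefficients):
% recover the Fourier coefficients by least-squares minimization,
\hat{f}^{\,*} = \operatorname*{arg\,min}_{\hat{f}} \;\| A\hat{f} - f \|_2^2 .
```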

Kirandeep Kour

"Tensor Kernel Approximation"

When data are high-dimensional, efficient learning algorithms must utilize the tensorial structure as much as possible. The ever-present curse of dimensionality for high-dimensional data and the loss of structure when vectorizing the data motivate the use of tailored low-rank tensor methods. Moreover, defining a basic machine learning (ML) model for such data is not straightforward: exploiting the tensorial structure and extracting features to find the best-fitted tensorial ML model cannot be achieved with a generic vectorized approach. One of these well-known classification ML models is the Support Tensor Machine, where a non-linear boundary value problem leads to finding the best kernel approximation. Hence, dealing with these models is a two-step problem that includes finding a best-suited low-rank approximation for the best-suited kernel approximation. The classical tensor factorization methods, such as Tucker and tensor-train, fail to capture the right features for the state-of-the-art Dual Structure-preserving Kernel (DuSK). We have observed that these classical tensor factorizations lack very specific qualities such as norm equilibration. In this work, we try to close the gap between the two steps, the low-rank approximation and the kernel approximation, to achieve a state-of-the-art classification model. We introduce a new tensor factorization in Tucker form, as well as a new kernel approximation function, the Weighted Subspace Kernel. With numerical examples, we show that this approach can outperform state-of-the-art techniques.

joint work with P. Benner, S. Dolgov, M. Pfeffer and M. Stoll
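
As background for the low-rank step, a classical way to compute a Tucker decomposition is the truncated higher-order SVD. The sketch below is a generic textbook version, not the tailored factorization proposed on the poster:

```python
# Illustrative truncated HOSVD (a standard way to compute a Tucker
# decomposition); not the poster's tailored factorization.
import numpy as np

def unfold(tensor: np.ndarray, mode: int) -> np.ndarray:
    # Mode-n unfolding: move axis `mode` to the front and flatten the rest.
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd(tensor: np.ndarray, ranks: list[int]):
    # Factor matrices: leading left singular vectors of each unfolding.
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        factors.append(u[:, :r])
    # Core tensor: project the data onto the factor subspaces, mode by mode.
    core = tensor
    for mode, u in enumerate(factors):
        core = np.moveaxis(np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

# Example: compress a random 20 x 30 x 40 tensor to Tucker ranks (5, 5, 5).
x = np.random.rand(20, 30, 40)
core, factors = hosvd(x, [5, 5, 5])   # core has shape (5, 5, 5)
```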


Fatima Antarou Ba

"Sparse Mixture Model"
We aim to learn a sparse mixture model in which each component depends only on a small number of variable interactions, using the Expectation-Maximization algorithm together with statistical tests (Kolmogorov-Smirnov and correlation tests). This has the advantage of overcoming the curse of dimensionality and, at the same time, improving the approximation accuracy.
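
For context, a mixture model and the E-step responsibilities used by the EM algorithm have the following standard form (textbook notation; the sparse variable-interaction structure of each component is the subject of the poster):

```latex
% Mixture density with K components, weights \pi_k and parameters \theta_k,
% and the E-step responsibilities of the EM algorithm:
p(x) = \sum_{k=1}^{K} \pi_k \, p_k(x \mid \theta_k), \qquad
\gamma_{ik} = \frac{\pi_k \, p_k(x_i \mid \theta_k)}
                   {\sum_{\ell=1}^{K} \pi_\ell \, p_\ell(x_i \mid \theta_\ell)} .
```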

Franziska Nestler

"Junior Research Group SALE: Fast Algorithms for Explainable Recommendation Systems"
The ongoing digitization of all aspects of our society goes hand in hand with the generation of data on an enormous scale. Commonly, the recorded data sets consist of many data points, each of which is characterized by a large number of characteristics, the so-called features. Analyzing these data in a reasonable fashion and efficiently extracting the required information is of enormous importance in many applications. With the growing amount of recorded data, the requirements on procedures for their meaningful and transparent analysis are also increasing. In particular, many known AI methods provide the traceability of their analysis results only partially or not at all. The project SALE addresses this point and aims at developing efficient algorithms for data analysis that ensure the traceability of the obtained results based on the underlying data.

Theresa Wagner

"Learning in High-Dimensional Feature Spaces Using ANOVA-Based Fast Matrix-Vector Multiplication"

Kernel matrices are crucial in many learning tasks and are typically dense and large-scale. Depending on the dimension of the feature space, even computing all of their entries in reasonable time becomes a challenging task. For such dense matrices the cost of a matrix-vector product scales quadratically with the matrix size, if no customized methods are applied. In basically all kernel methods, a linear system must be solved. Our approach exploits the computational power of the non-equispaced fast Fourier transform (NFFT), which is of linear complexity for fixed accuracy. The ANOVA kernel has proved to be a viable tool to group the features into smaller pieces that are then amenable to the NFFT-based summation technique. Multiple kernels based on lower-dimensional feature spaces are combined, such that kernel-vector products can be realized by this fast approximation algorithm. Based on this feature grouping approach, the fast multiplication can be embedded into a CG solver within a learning method, and we nearly reach linear scaling. This makes it possible to run learning tasks using kernel methods for large-scale data on a standard laptop computer in reasonable time, with no or only a very benign loss of accuracy. The approach can be embedded into methods that rely on kernel matrices or even graph Laplacians; examples are support vector machines or graph neural networks, which can then benefit from the fast matrix-vector products. Moreover, we make use of this method for Gaussian process regressors, where the covariance matrix is a kernel matrix. By applying our NFFT-based fast summation technique, both fitting the kernel by tuning the hyperparameters and making predictions by posterior inference can be accelerated.

joint work with Franziska Nestler and Martin Stoll
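
The key computational pattern, a Krylov solver driven purely by fast matrix-vector products, can be sketched as follows. This uses SciPy's generic CG interface with a placeholder matvec and only illustrates the structure; the NFFT/ANOVA-based multiplication of the poster is not reproduced here.

```python
# Illustrative structure only: solve K alpha = y with CG, where K is never
# formed inside the solver and every product K @ v goes through a (here: dummy)
# matvec routine. The NFFT/ANOVA-based fast summation would replace it.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 8))                      # toy data
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_dense = np.exp(-sq_dists / 2) + 1e-3 * np.eye(500)   # Gaussian kernel + ridge
y = rng.standard_normal(500)

def fast_kernel_matvec(v: np.ndarray) -> np.ndarray:
    # Placeholder: in the actual method this would be the NFFT-based fast
    # summation over the grouped (low-dimensional) ANOVA kernel terms.
    return K_dense @ v

K_op = LinearOperator((500, 500), matvec=fast_kernel_matvec, dtype=np.float64)
alpha, info = cg(K_op, y)                              # info == 0 on success
```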


Angela Thränhardt & Martina Hentschel

"Data-driven Physics at Chemnitz University of Technology"
Computational science and simulation techniques have a long tradition at the TUC Physics Institute. Presently, our staff includes six female professors, an outstanding number across Germany given the intermediate institute size. The range of our data-driven research ranges from self organisation studies of planar and helical molecules via the complex dynamics of mesoscopic systems and Monte Carlo simulations of organic semiconductors and structure formation processes to analysis of ellipsometry and magneto-optical measurements by numerical optical layer modelling. On the poster, we give an overview of the Physics Institute and exemplarily introduce a number of data-driven research approaches.