# (2) Data analytics

[PIs: Kramer, Memmesheimer, Mukherjee, Neese, Schultze, Suarez, Urbach, Wrobel]

Recent progress in machine learning is largely due to deep neural networks and recurrent neural networks with millions of parameters, whose optimization requires substantial computational effort. Deep learning has become feasible only since high-performance computing hardware became affordable. However, training algorithms still rely on poorly understood heuristics such as dropout, which ease the computational burden or work around fundamental problems of non-linear dynamical systems.

We will investigate the use of finite element techniques and Monte Carlo (MC) methods to speed up deep learning. We will then address recurrent neural networks and study new energy functions whose minimization yields novel training algorithms for these non-linear dynamical systems. Furthermore, we aim to improve neurocomputing techniques. The methods and techniques developed this way will be tested and applied to practical problems in physics and the life sciences.

How such deep learning methods will perform on future exascale compute systems is presently an open question. We will therefore address memory access times, fault tolerance, and the cost of data movement for the resulting large-scale parallel machine learning algorithms. In addition, the above-mentioned multilevel algorithms will be employed to help cope with the issues arising on exascale systems. It should also be stressed that machine learning necessarily raises philosophical questions, which will be dealt with within the CST.
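To make the dropout heuristic mentioned above concrete, the following minimal sketch implements "inverted" dropout in NumPy; the function name and parameters are our own and serve only as an illustration of the standard technique, not of any method proposed here.

```python
import numpy as np

def dropout(activations, p=0.5, rng=None):
    """Inverted dropout: zero each unit with probability p and rescale
    the survivors by 1/(1-p) so the expected activation is unchanged."""
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(activations.shape) >= p  # True = unit is kept
    return activations * mask / (1.0 - p)

rng = np.random.default_rng(0)
x = np.ones(10_000)
y = dropout(x, p=0.5, rng=rng)
# Roughly half the units are zeroed, while the mean stays near 1.
```

At test time the layer is simply skipped; the 1/(1-p) rescaling during training is what keeps the expected pre-activations consistent between the two regimes.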
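The idea of driving a non-linear dynamical system by minimizing an energy function can be illustrated with a classical Hopfield network, whose asynchronous sign updates never increase the energy E(x) = -½ xᵀWx. This is a minimal textbook sketch for orientation only; it does not represent the novel energy functions to be developed in this project.

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian weight matrix storing +/-1 patterns; zero diagonal."""
    n = len(patterns[0])
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def energy(W, x):
    """Hopfield energy E(x) = -1/2 x^T W x."""
    return -0.5 * x @ W @ x

def recall(W, x, sweeps=10):
    """Asynchronous sign updates; each flip descends the energy landscape."""
    x = x.copy()
    for _ in range(sweeps):
        for i in range(len(x)):
            x[i] = 1.0 if W[i] @ x >= 0 else -1.0
    return x

# Store one random +/-1 pattern, corrupt three bits, and recover it.
rng = np.random.default_rng(1)
p = np.where(rng.random(16) < 0.5, 1.0, -1.0)
W = hebbian_weights([p])
x = p.copy()
x[:3] *= -1  # corrupt three units
r = recall(W, x)
```

The recalled state coincides with the stored pattern and has lower energy than the corrupted input, illustrating how minimization of an energy function can serve as a training or inference principle for such systems.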