FR1.R9.3

A Mathematical Framework for Computability Aspects of Algorithmic Transparency

Holger Boche, Technical University of Munich, Germany; Adalbert Fono, Gitta Kutyniok, Ludwig-Maximilians-Universität München, Germany

Session:
Complexity and Computation Theory 1

Track:
21: Other topics

Location:
Lamda

Presentation Time:
Fri, 12 Jul, 10:25 - 10:45

Session Chair:
Shuki Bruck, California Institute of Technology

Abstract
The lack of trustworthiness is a major downside of deep learning. To mitigate the associated risks, clear obligations for deep learning models have been proposed via regulatory guidelines. A crucial question is therefore to what extent trustworthy deep learning can be realized. Establishing trustworthiness requires that the factors influencing an algorithmic computation can be retraced, i.e., that the algorithmic implementation is transparent. Motivated by the observation that the current evolution of deep learning models necessitates a change in computing technology, we derive a mathematical framework that enables us to analyze whether a transparent implementation is feasible in a given computing model. As an example, we apply our trustworthiness framework to analyze deep learning approaches for inverse problems in digital and analog computing models, represented by Turing and Blum-Shub-Smale machines, respectively. Based on previous results, we find that Blum-Shub-Smale machines have the potential to establish trustworthy solvers for inverse problems under fairly general conditions, whereas Turing machines cannot guarantee trustworthiness to the same degree. For a longer version of this paper with more details and proofs, we refer to [1].
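The distinction between the two computing models can be given a rough intuition, though the following sketch is not from the paper and is only an informal illustration: a Turing machine manipulates finite (digital) representations, so real-valued computations must be approximated, whereas a Blum-Shub-Smale machine idealizes exact arithmetic on real numbers as atomic operations. Exact rational arithmetic in Python mimics this exactness for rational inputs, while floating point exhibits the approximation inherent to digital representations.

```python
from fractions import Fraction

# Digital-style computation on finite approximations: binary floating point
# cannot represent 0.1 exactly, so intermediate rounding obscures the
# exact chain of values that produced the result.
digital = 0.1 + 0.2
print(digital == 0.3)  # False: rounding error in the finite representation

# Exact arithmetic (mimicking the BSS idealization for rational inputs):
# every operation is performed exactly, so each step is reproducible.
exact = Fraction(1, 10) + Fraction(2, 10)
print(exact == Fraction(3, 10))  # True: no approximation at any step
```

This contrast is only a heuristic for why the two machine models support different transparency guarantees; the paper's actual analysis concerns computability-theoretic properties of solvers for inverse problems.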