Where: 362B (ACS building)
Speaker: Roummel Marcia
Title: Interpretability of ReLU for Inversion
Abstract: In this talk, we focus on the mathematical interpretability of fully connected neural networks, especially those that use the rectified linear unit (ReLU) activation function. Our analysis elucidates the difficulty of approximating the reciprocal function. Even so, using the ReLU activation function halves the error compared with a linear model. One might have expected the errors to grow only toward the singular point, but both the linear and ReLU errors are fairly oscillatory and increase near both endpoints of the interval.
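The contrast between a linear model and a ReLU approximant can be illustrated with a minimal sketch. The interval [0.5, 2] and the number of knots below are illustrative assumptions, not values from the talk: we fit 1/x with an affine least-squares model, and with a piecewise-linear interpolant, which a one-hidden-layer ReLU network can represent exactly.

```python
import numpy as np

# Illustrative target: the reciprocal function on an interval away from x = 0.
# (Interval and knot count are assumptions for this sketch.)
x = np.linspace(0.5, 2.0, 1001)
y = 1.0 / x

# Linear (affine) model fit by least squares: y ~ a*x + b.
A = np.vstack([x, np.ones_like(x)]).T
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
lin_err = np.max(np.abs(A @ coef - y))

# A small ReLU approximant: the piecewise-linear interpolant of 1/x at a few
# knots, written as an affine term plus a sum of ReLU units (any continuous
# piecewise-linear function has this one-hidden-layer form).
knots = np.linspace(0.5, 2.0, 6)
vals = 1.0 / knots
slopes = np.diff(vals) / np.diff(knots)

relu = lambda z: np.maximum(z, 0.0)
relu_pred = vals[0] + slopes[0] * (x - knots[0])
for k in range(len(slopes) - 1):
    # Each interior knot contributes one ReLU unit that changes the slope.
    relu_pred += (slopes[k + 1] - slopes[k]) * relu(x - knots[k + 1])
relu_err = np.max(np.abs(relu_pred - y))

print(f"max linear error: {lin_err:.4f}, max ReLU error: {relu_err:.4f}")
```

In this toy setup the ReLU approximant's maximum error is well below the affine model's, consistent with the direction of the abstract's claim, though the talk's precise factor-of-two result concerns a different, trained setting.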