When
2:30 – 3:30 p.m., Dec. 1, 2025
Where
ENR2 S215
Title: Recurrent Neural Networks for Nonlinear Time Series
Abstract:
This paper develops theoretical guarantees for recurrent neural networks trained on time series generated by a nonlinear vector autoregressive moving-average model with exogenous variables. We derive upper bounds for the predictive risk that decompose into approximation and estimation terms. The approximation error depends on the smoothness of the conditional mean and the effective input dimension, decreasing as network capacity scales with the sample size. The estimation error is determined by the network architecture and likewise vanishes with more data. Under an invertibility condition, we show that recurrence efficiently encodes dependence information in the hidden state while capturing nonlinear temporal structure, and achieves sharper convergence rates than deep feedforward networks and conventional nonparametric regressions based on high-order autoregressive truncations.