In this talk, we discuss the mathematical foundations of regularized learning algorithms in reproducing kernel Hilbert spaces (RKHS). Given a finite set of data points, we aim to construct an approximator that converges to the regression function as the sample size grows. Specifically, we present an explicit construction of the approximator and analyze its convergence properties.
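
As an illustration only (the talk's construction may differ), a standard regularized estimator of this type is the Tikhonov-regularized least-squares (kernel ridge regression) solution in an RKHS $\mathcal{H}_K$ with kernel $K$; by the representer theorem it admits the explicit finite-dimensional form

% Illustrative sketch: Tikhonov-regularized least squares in an RKHS
% (a standard construction, not necessarily the one presented in the talk).
\[
  f_{\mathbf{z},\lambda}
  = \arg\min_{f \in \mathcal{H}_K}
    \frac{1}{m}\sum_{i=1}^{m}\bigl(f(x_i) - y_i\bigr)^2
    + \lambda\,\|f\|_{\mathcal{H}_K}^2 ,
\]
\[
  f_{\mathbf{z},\lambda}(x) = \sum_{i=1}^{m} c_i\, K(x, x_i),
  \qquad
  (\mathbf{K} + \lambda m\, I)\,\mathbf{c} = \mathbf{y},
\]

where $\mathbf{z} = \{(x_i, y_i)\}_{i=1}^{m}$ is the sample, $\mathbf{K}_{ij} = K(x_i, x_j)$ is the kernel matrix, and $\lambda > 0$ is the regularization parameter.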