This paper presents a detailed examination of the convergence properties of algorithms for approximating feedback Nash equilibria in nonlinear dynamic games. Specifically, we extend the Value Iteration for Games (VIt-G) algorithm by incorporating Radial Basis Functions (RBFs) to improve the approximation of value functions, and we rigorously analyze the convergence of the resulting algorithm, RaBVIt-G. Our findings are compared with results in the literature, particularly those combining the Chebyshev spectral collocation method with policy iteration, with emphasis on convergence for linear-quadratic games.
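To make the core idea concrete, the following is a minimal, hypothetical sketch of value iteration with a Gaussian RBF value-function approximation. It is deliberately much simpler than RaBVIt-G: a single controller (not a game), a scalar linear-quadratic problem, a discretized control grid, and assumed parameter values (dynamics coefficients, RBF shape parameter, node placement) chosen only for illustration.

```python
# Hypothetical sketch: value iteration with Gaussian RBF interpolation of the
# value function. Single-player scalar LQ problem; all parameters are
# illustrative assumptions, not taken from the paper.
import numpy as np

a, b = 0.9, 0.5          # dynamics: x' = a*x + b*u
q, r = 1.0, 0.1          # stage reward = -(q*x^2 + r*u^2)
beta = 0.95              # discount factor

centers = np.linspace(-1.0, 1.0, 9)    # RBF centers / collocation nodes
eps = 2.0                               # Gaussian RBF shape parameter (assumed)
controls = np.linspace(-1.0, 1.0, 21)   # discretized control grid

def phi(x):
    """Gaussian RBF features of state(s) x, one column per center."""
    return np.exp(-(eps * (np.asarray(x)[..., None] - centers)) ** 2)

Phi = phi(centers)                      # interpolation matrix at the nodes
w = np.zeros_like(centers)              # RBF weights: V(x) ~ phi(x) @ w
v = np.zeros_like(centers)
converged = False
for _ in range(1000):
    # Bellman update at every collocation node, maximizing over the
    # control grid; next states are clipped to the node interval to
    # avoid extrapolating the RBF interpolant.
    xn = np.clip(a * centers[:, None] + b * controls[None, :], -1.0, 1.0)
    rewards = -(q * centers[:, None] ** 2 + r * controls[None, :] ** 2)
    v_new = np.max(rewards + beta * (phi(xn) @ w), axis=1)
    w = np.linalg.solve(Phi, v_new)     # refit the RBF interpolant
    if np.max(np.abs(v_new - v)) < 1e-6:
        v = v_new
        converged = True
        break
    v = v_new
```

Under these assumptions the iteration behaves as a discounted contraction composed with interpolation: the value at the origin stays near zero (no cost is incurred by remaining at rest), while values at the boundary states are strictly negative. The game-theoretic version replaces the single maximization with a coupled best-response step for each player.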