The natural method for solving nonlinear least squares (NLS) problems is the classical Gauss-Newton (GN) method. By using only first-derivative information (the Jacobian J), it often achieves fast convergence. However, it is well known that the pure GN method breaks down if J is rank-deficient or ill-conditioned. Existing techniques try to stabilize the iteration locally, where the difficulties occur. Our approach is to regularize the original problem, i.e., to solve a well-conditioned problem that is close to the original one. This smoothing technique enables us to solve problems that are rank-deficient or ill-conditioned everywhere in the solution space. In this talk, the basic ideas and theory of truncated SVD and Tikhonov regularization will be discussed. The optimization group in Umeå currently works extensively with Tikhonov regularization of NLS problems, and examples of applications will be given for parameter identification problems in signal processing, ODEs, PDEs, and neural network training.
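
To illustrate the idea, the sketch below shows one way a Tikhonov-regularized Gauss-Newton iteration can be written: each step solves the damped normal equations (J^T J + lambda*I) dx = -J^T r via an equivalent augmented least-squares problem. This is a minimal illustration, not the method discussed in the talk; the residual/Jacobian functions, the fixed regularization parameter lam, and the exponential-fit example are assumptions made for demonstration.

    import numpy as np

    def gauss_newton_tikhonov(residual, jacobian, x0, lam=1e-3, max_iter=50, tol=1e-10):
        """Minimize 0.5*||r(x)||^2 with Tikhonov-damped Gauss-Newton steps:
        at each iterate, solve (J^T J + lam*I) dx = -J^T r."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            r = residual(x)
            J = jacobian(x)
            # Augmented least-squares form of the regularized normal equations:
            #   min_dx || [J; sqrt(lam)*I] dx + [r; 0] ||_2
            A = np.vstack([J, np.sqrt(lam) * np.eye(x.size)])
            b = np.concatenate([-r, np.zeros(x.size)])
            dx, *_ = np.linalg.lstsq(A, b, rcond=None)
            x = x + dx
            if np.linalg.norm(dx) < tol:
                break
        return x

    # Illustrative parameter identification: fit y ~ a*exp(b*t) to data.
    t = np.linspace(0.0, 1.0, 20)
    y = 2.0 * np.exp(-1.5 * t)
    residual = lambda p: p[0] * np.exp(p[1] * t) - y
    jacobian = lambda p: np.column_stack([np.exp(p[1] * t),
                                          p[0] * t * np.exp(p[1] * t)])
    print(gauss_newton_tikhonov(residual, jacobian, x0=[1.0, -1.0]))

In practice the regularization parameter is chosen more carefully (e.g., adaptively or by a discrepancy-type criterion) rather than fixed as in this sketch; the point here is only that the damping term keeps the linearized subproblem well-conditioned even when J is rank-deficient.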