non-linear system solving for a non-square matrix

Discussion in 'Numerical Analysis' started by Richard, Sep 21, 2005.

  1. Richard

    Richard Guest

    hey

    It has been a long time since I have had to put my maths to use, so forgive
    the stupid questions.

    Anywho, I have a system of k non-linear eqns in n variables. I have been
    reminding myself of the Newton-Raphson method for root finding, but all
    the literature I have looked at is for systems of n eqns in n variables.

    Is it possible to apply the Newton root-finding procedure to such a
    system? If not, can anyone point me in the right direction?

    thanks.
     
    Richard, Sep 21, 2005
    #1

  2. If k > n, you might minimize f(x) := sum_{i=1}^{k} F_i(x)^2
    using e.g. the Gauss-Newton or Levenberg-Marquardt methods; see
    http://plato.la.asu.edu/topics/problems/nlolsq.html
    for codes (minpack1, lmder, lmdif, elsunc, ...). A small sketch of this
    approach follows this post.
    If however k < n, then the solution, if it exists at all, will not be
    unique, and you may want to compute the solution of minimal norm, which is
    also possible using a modified Newton method; see the work of Graves in
    Duke Math. J. 17 (1950) and B. T. Polyak in USSR Comput. Math. Math. Phys. 4 (1964)
    (in principle the constructive application of the proof of the implicit
    function theorem).
    hth
    peter
     
    Peter Spellucci, Sep 21, 2005
    #2
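    For the k > n case described above, a minimal sketch in Python using SciPy's
    least_squares wrapper around MINPACK's Levenberg-Marquardt code (lmdif/lmder,
    two of the codes listed at the link); the three-equation, two-unknown system
    below is made up purely for illustration.

        # Minimize f(x) = sum_i F_i(x)^2 for k = 3 equations in n = 2 unknowns
        # with Levenberg-Marquardt; the residuals are a toy example.
        import numpy as np
        from scipy.optimize import least_squares

        def residuals(x):
            # F : R^2 -> R^3, an overdetermined nonlinear system F(x) = 0
            return np.array([x[0]**2 + x[1]**2 - 1.0,   # circle
                             x[0] - x[1]**3,            # cubic
                             np.exp(x[0]) - 1.5])       # exponential

        sol = least_squares(residuals, x0=np.array([0.5, 0.5]), method='lm')
        print(sol.x)      # minimizer of the sum of squares
        print(sol.cost)   # half the sum of squared residuals at sol.x

    Note that method='lm' requires at least as many residuals as unknowns,
    matching the k > n case.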

  3. Well, if you write your equations as an overdetermined system f(x) = 0,
    you can solve it just as in the regular case:

    x_{k+1} = x_k - Df(x_k)^{+} f(x_k), where Df(x_k)^{+} is a pseudoinverse of
    the Jacobian matrix.

    It is called the "Newton method for overdetermined systems" and its
    convergence properties are not so well studied; see for example a paper by
    Dedieu and Shub. You do not need to evaluate second-order derivatives,
    unlike what happens if you treat the problem as a nonlinear least-squares
    problem. The convergence speed is no longer quadratic in general, but: if
    there is a root y with f(y) = 0, the convergence is still quadratic; if
    there is only an approximate root with f(y) = eps, then the convergence is
    linear, with a rate proportional to eps.
    This method has proved to be rather efficient in some applications of mine;
    a small sketch of the iteration follows this post.
     
    Ivan Oseledets, Oct 9, 2005
    #3
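    A minimal sketch of the pseudoinverse-Newton iteration above, with
    Df(x_k)^{+} applied via numpy.linalg.pinv and the Jacobian approximated by
    finite differences; the overdetermined test system, which has an exact root
    at x = (1, 1), is invented purely for illustration.

        # Newton's method for a non-square system via the Moore-Penrose
        # pseudoinverse of the Jacobian: x_{k+1} = x_k - Df(x_k)^+ f(x_k).
        # k = 3 equations in n = 2 unknowns, with an exact root at (1, 1),
        # so convergence should be quadratic as noted above.
        import numpy as np

        def f(x):
            return np.array([x[0]**2 + x[1]**2 - 2.0,
                             x[0] - x[1],
                             x[0]*x[1] - 1.0])

        def jac(x, h=1e-7):
            # forward-difference approximation of the Jacobian Df(x)
            fx = f(x)
            J = np.empty((fx.size, x.size))
            for j in range(x.size):
                xp = x.copy()
                xp[j] += h
                J[:, j] = (f(xp) - fx) / h
            return J

        x = np.array([1.3, 0.7])
        for _ in range(20):
            step = np.linalg.pinv(jac(x)) @ f(x)   # pseudoinverse Newton step
            x = x - step
            if np.linalg.norm(step) < 1e-12:
                break
        print(x, f(x))   # x close to (1, 1), f(x) close to zero

    For k < n the same formula applies: the pseudoinverse then picks the
    minimal-norm correction at each step, which connects to the k < n remark
    in the previous reply.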
  4. carlos

    carlos Guest

    To get back up to speed, read Fletcher's Practical Methods of Optimization,
    2nd ed. (1987), Ch. 6. After 18 years it is IMO still by far the best book
    for learning residual-minimization methods for solving NL equations
    when n <= k. Fletcher has that unique British gift of blending
    theory and practice in the right proportion while writing complete
    sentences with correct grammar. He favors Gauss-Newton,
    and I do too.

    The chapters are largely self-contained, so you won't waste too much time.
    If k < n you need n - k constraints; that falls under constrained
    minimization, which is covered in the second part of that book. A small
    sketch of the k < n idea follows this post.

    Numerical Recipes Chs. 9-10 might also be consulted but offer no theory.
    For open-source code see minpack on netlib.
     
    carlos, Oct 9, 2005
    #4
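    For the k < n case mentioned above, a minimal sketch of one way to pin down
    a particular solution: treat the k equations as equality constraints and
    minimize the squared norm of x over the remaining degrees of freedom, which
    yields the minimal-norm solution discussed earlier in the thread. The use of
    SciPy's SLSQP solver and the one-equation, two-unknown example are
    assumptions for illustration only, not the constrained methods Fletcher
    covers.

        # k < n: one equation in two unknowns, so the solution set is a curve.
        # Pin down a particular solution by minimizing ||x||^2 subject to the
        # equation as an equality constraint (a small constrained-minimization
        # problem); the equation below is a made-up example.
        import numpy as np
        from scipy.optimize import minimize

        def g(x):
            # one nonlinear equation in two unknowns: x0^2 + x1 - 2 = 0
            return np.array([x[0]**2 + x[1] - 2.0])

        res = minimize(lambda x: x @ x,              # objective: ||x||^2
                       x0=np.array([1.0, 1.0]),
                       method='SLSQP',
                       constraints=[{'type': 'eq', 'fun': g}])
        print(res.x, g(res.x))   # a minimal-norm point on the solution curve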
