Conj. concerning infinite sums of powertowers of like height (tetration)

Discussion in 'Math Research' started by Gottfried Helms, Aug 7, 2007.

  1. I just uploaded a conjecture about an identity concerning

    infinite alternating sums of powertowers of like height (tetration)

    at
    http://go.helms-net.de/math/pdf/Tetration_GS.pdf

    It is based on the previous discussions here and elsewhere,
    where I introduced my general method for handling tetration
    with a matrix concept.
    I recall the basic definitions in this article, so references
    to my previous articles are not needed.

    The notation for powertowers in ASCII is too tedious, so
    I put the material in .pdf format on my website instead of
    trying to squeeze it into a post here.

    It is an excerpt from a manuscript in which I'm collecting all my
    results about tetration and series of powertowers, and I'd like
    to get comments and criticism before I possibly insert errors
    into that work in progress.


    Gottfried
     
    Gottfried Helms, Aug 7, 2007
    #1

  2. As in the previous "power-tower" summation, it is wise to compare your
    summation results with standard methods.

    The sum

    1^^n - 2^^n + 3^^n - 4^^n + ...

    is Shanks summable for n = 0, 1, or 2. Obviously the Euler method can
    also be used for the first two cases, giving values of 1/2 for n = 0 and
    1/4 for n = 1. With n = 2, iterating the Shanks transformation shows
    convergence to a value between 0.29 and 0.30, using MS Excel and taking
    the first 13 partial sums as inputs. (The inputs that are large in
    absolute value damp out quickly under successive transformations,
    leading to said convergence.) By treating the second entry of each
    column as a linear function of the first and finding the fixed point of
    this function, I obtain the estimate S = 0.29629.
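
    (For reference, here is a minimal sketch, my own and not part of the
    original post, of such an iterated Shanks transformation in Python with
    mpmath; the number of partial sums, the precision and the printed digits
    are assumptions chosen to mirror the description above.)

    # Iterated Shanks transformation applied to the partial sums of
    # 1^^2 - 2^^2 + 3^^2 - 4^^2 + ... = 1 - 4 + 27 - 256 + ...
    from mpmath import mp, mpf, nstr

    mp.dps = 60   # generous precision: the raw partial sums span many magnitudes

    def shanks_step(seq):
        """One Shanks transform: S'_n = (S_{n+1}*S_{n-1} - S_n^2) / (S_{n+1} + S_{n-1} - 2*S_n)."""
        return [(seq[i + 1] * seq[i - 1] - seq[i] ** 2) /
                (seq[i + 1] + seq[i - 1] - 2 * seq[i])
                for i in range(1, len(seq) - 1)]

    # partial sums of sum_{k>=1} (-1)^(k-1) * k^k   (since k^^2 = k^k)
    terms = [mpf(-1) ** (k - 1) * mpf(k) ** k for k in range(1, 14)]
    partial = [sum(terms[:i + 1]) for i in range(len(terms))]

    table = partial
    while len(table) >= 3:
        table = shanks_step(table)
        print([nstr(x, 8) for x in table])   # successive rows should settle near the value quoted above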

    --OL
     
    Oscar Lanzi III, Aug 13, 2007
    #2

  3. On 14.08.2007 00:48, Oscar Lanzi III wrote:
    Well, I haven't found a special way to evaluate such series yet,
    although I have tried some approaches.
    What I already have is the following.

    Let
    la(s) = log(1)^s - log(2)^s + log(3)^s - + ...

    This can be approximated by Euler summation, and I computed that function
    for the first 64 values of s, i.e. s = 0, 1, 2, ..., 63.
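
    (A minimal sketch of such an Euler summation in Python with mpmath, not
    the routine actually used; the term count and the precision are
    assumptions. For s = 0 the result can be checked against eta(0) = 1/2,
    and in general the Euler-summed la(s) should agree with the value
    (-1)^s * eta^(s)(0) supplied by analytic continuation.)

    # Euler transform of the alternating series
    #   la(s) = log(1)^s - log(2)^s + log(3)^s - ...
    from mpmath import mp, mpf, log, nstr

    mp.dps = 50        # enough to absorb the cancellation in the differencing

    def euler_sum(a):
        """Euler transform of sum_{k>=0} (-1)^k a[k]:
        sum_{n>=0} (-1)^n (Delta^n a)(0) / 2^(n+1), Delta = forward difference."""
        diffs, total = list(a), mpf(0)
        for n in range(len(a)):
            total += (-1) ** n * diffs[0] / mpf(2) ** (n + 1)
            diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        return total

    def la(s, terms=100):
        # k-th raw term (k = 0, 1, 2, ...) is log(k+1)^s
        return euler_sum([log(k + 1) ** s for k in range(terms)])

    print(nstr(la(0), 15))   # should give 1/2 = eta(0)
    print(nstr(la(1), 15))   # should agree with -eta'(0)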

    Define then the diagonal-matrix:

    LA = diag(la(0), la(1), la(2), ... )

    Then I applied that to the matrix-formula, where B is the "unparametrized"
    version ( B = F^-1 * VZ ), to obtain the alternating sum of powers of
    like height and increasing base, aka the eta-function (alternating zeta)


    V(1)~ * LA * B = [ eta(0) , eta(-1), eta(-2), ...]

    so numerically the Euler-sums in LA seem to be justified.
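
    (The eta values on the right-hand side can be cross-checked independently
    of the matrix machinery, since eta(s) = (1 - 2^(1-s)) * zeta(s) gives
    eta(0) = 1/2, eta(-1) = 1/4, eta(-2) = 0, and so on. A short check with
    mpmath, added here only as an illustration:)

    # Values of the alternating zeta (eta) function at 0, -1, -2, ...
    from mpmath import mp, altzeta, nstr

    mp.dps = 25
    for n in range(6):
        print(n, nstr(altzeta(-n), 12))   # eta(0) = 1/2, eta(-1) = 1/4, eta(-2) = 0, ...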

    The nice effect is that this is parametrizable; if we take V(s)
    instead of V(1), we get

    V(s)~ * LA * B = [ eta(0) , eta(-s), eta(-2s), ...]

    - not surprising, given the formal evaluation.

    But this reflects only the powertower of height 1 (using V(1)):

    1^^1 - 2^^1 + 3^^1 - 4^^1 + ... = 1 - 2 + 3 - 4 + - ...

    or using V(s)

    {1,s}^^1 - {2,s}^^1 + {3,s}^^1 - ... = 1^s - 2^s + 3^s - 4^s + - ...

    To work with higher powertowers, we would need to introduce powers
    of LA and B, like

    V(s)~ * (LA * B)^2 = [ ... ]

    but this does not produce correct results, since the product

    (LA * B)^2 = (LA * B * LA * B)

    is no longer a linear combination of operators with different
    parameters, but is spoiled by the product of the sums
    of parameters (because LA itself represents a sum:

    LA = V(log(1)) - V(log(2)) + V(log(3)) - ... ).

    So I don't have a special summing method yet to compete
    meaningfully with your proposal.


    What I tried anyway is simply to add the 2nd powers of
    the matrices B(s) with parameters s = 1..64:

    M = sum(s=1,64, (-1)^(s-1) * (BsInit(s))^2 )

    and then summing the second column of M to obtain S:

    EulerSum(7.5) * M[,1]

    Here are the 62nd..64th partial sums approximating S,
    where S = 1^1 - 2^2 + 3^3 - 4^4 + ...

    0.296297340140
    0.296316010848
    0.296313457511

    so S should lie between the first and second value;
    an estimate with a slightly changed EulerSum order gives

    S ~ 0.296312653685

    If I also take care to sum the components of M by Euler summation,
    to smooth out the effect of the alternating signs, I get the last
    three partial sums for S as:

    0.296400343380
    0.296416104504
    0.296415016281

    There is a difference..., so this method does not yet
    provide much more insight for that type of series, at least not
    using dimension=64.

    Gottfried
     
    Gottfried Helms, Aug 15, 2007
    #3
  4. On 14.08.2007 00:48, Oscar Lanzi III wrote:
    I tried some rearranging of the explicit formula for the entries
    of the second column of Bs^2, written here as B(s)^2.

    I have the explicit formula as

    s^^2 = sum{r=0..inf} B(s)^2[r,1]

         = sum{r=0..inf} sum{k=0..inf} ( log(s)^r/r! * log(s)^k/k! * k^r )

    Collecting the two log-terms and multiplying numerator and denominator
    by (r+k)! to obtain a binomial expression, this is

    s^^2 = sum{r=0..inf} sum{k=0..inf}
           ( binomial(r+k,k) * k^r * log(s)^(r+k)/(r+k)! )
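
    (As a sanity check of my own, not part of the original post, this double
    sum can be verified numerically for a sample base; the truncation order N
    and the precision below are assumptions.)

    # Numerical check of
    #   s^^2 = s^s = sum_{r,k>=0} binomial(r+k,k) * k^r * log(s)^(r+k) / (r+k)!
    from mpmath import mp, mpf, log, binomial, factorial, nstr

    mp.dps = 30
    s = mpf(3)            # sample base: 3^^2 = 3^3 = 27
    x = log(s)
    N = 60                # truncation of both summation indices

    total = mpf(0)
    for r in range(N):
        for k in range(N):
            kr = mpf(1) if r == 0 else mpf(k) ** r   # k^r with the convention 0^0 = 1
            total += binomial(r + k, k) * kr * x ** (r + k) / factorial(r + k)

    print(nstr(total, 20), nstr(s ** s, 20))   # both should be close to 27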

    Formally, denote the alternating sum of all the matrices B(s)^2 by

    Bh = sum(s=1,inf, (-1)^(s-1) B(s)^2 )

    = B(1)^2 - B(2)^2 + B(3)^2 - + ...


    Then the sum of the entries of the second column of Bh is

    S = 1^^2 - 2^^2 + 3^^2 - 4^^2 + - ...

      = sum{r=0..inf} Bh[r,1]

      = sum{r=0..inf} sum{k=0..inf}
        ( binomial(r+k,k) * k^r * sum(s=1,inf, (-1)^(s-1)*log(s)^(r+k)) / (r+k)! )

    Precomputing the alternating sum of the (r+k)'th powers of log(s)

    lh(m) = sum(s=1,inf, (-1)^(s-1)*log(s)^m )

    = log(1)^m - log(2)^m + log(3)^m ...

    we have

    S = sum{r=0..inf} sum{k=0..inf}
        ( binomial(r+k,k) * k^r * lh(r+k)/(r+k)! )


    If the lh(m) are precomputed, for instance by Euler summation (which
    is possible), we may generate a matrix M which contains all the terms
    of the above sum separately.

    The elements of M in dimension 32 are all of absolute value < 1, with
    different signs.

    Euler-sum the rows and then Euler-sum the results again (the order
    of summation may be relevant).
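
    (A sketch of this kind of evaluation, again my own and not the routine
    used in the post: lh(m) is precomputed by an Euler transform, the double
    sum is grouped by m = r+k so that each group is finite and exact once
    lh(m) is known, and the remaining slowly settling series over m is
    accelerated by iterated Shanks transforms in place of the nested Euler
    summations. M_MAX, the term count and the precision are assumptions; as
    noted below, the result is sensitive to them.)

    # Sketch:  S = sum_{m>=0} lh(m)/m! * sum_{k=0..m} binomial(m,k) * k^(m-k),
    # which is the double sum above grouped by m = r+k.
    from mpmath import mp, mpf, log, binomial, factorial, nstr

    mp.dps = 120      # generous: the repeated differencing cancels heavily
    TERMS = 120       # raw terms behind each Euler-transformed lh(m)
    M_MAX = 30        # highest m = r+k kept

    def euler_sum(a):
        """Euler transform of sum_{k>=0} (-1)^k a[k]."""
        diffs, total = list(a), mpf(0)
        for n in range(len(a)):
            total += (-1) ** n * diffs[0] / mpf(2) ** (n + 1)
            diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        return total

    def lh(m):
        # lh(m) = log(1)^m - log(2)^m + log(3)^m - ...
        return euler_sum([log(k + 1) ** m for k in range(TERMS)])

    def shanks_step(seq):
        return [(seq[i + 1] * seq[i - 1] - seq[i] ** 2) /
                (seq[i + 1] + seq[i - 1] - 2 * seq[i])
                for i in range(1, len(seq) - 1)]

    # t_m = lh(m)/m! * c_m  with  c_m = sum_{k=0..m} binomial(m,k) * k^(m-k)
    t = []
    for m in range(M_MAX + 1):
        c_m = sum(binomial(m, k) * (mpf(1) if k == m else mpf(k) ** (m - k))
                  for k in range(m + 1))
        t.append(lh(m) * c_m / factorial(m))

    est = [sum(t[:i + 1]) for i in range(len(t))]   # partial sums over m
    for _ in range(6):                              # a few Shanks iterations
        if len(est) < 3:
            break
        est = shanks_step(est)
    print(nstr(est[-1], 10))   # to be compared with the ~0.296... estimates above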

    With dimension 32 only, I get the value

    S ~ 0.29710507

    this way on a first try.

    The last few partial sums were:
    0.29588341 terms=27
    0.29635409 terms=28
    0.29672625 terms=29
    0.29697953 terms=30
    0.29710507 terms=31
    0.29710507 terms=32

    This result, however, needs crosschecking with higher dimensions,
    since the lh(m)-function grows fast for m > 20. (But note that in
    this computation m = 63 was already involved.) Also, the terms

    lh(r+k)/(r+k)!

    decrease in absolute value, so log( abs(lh(m))/m! ) ~ -m .


    Gottfried
     
    Gottfried Helms, Aug 15, 2007
    #4
  5. On 14.08.2007 00:48, Oscar Lanzi III wrote:
    Some series evaluations.
    Let

    AP(s) = 1 - 2^(2^s) + 3^(3^s) - 4^(4^s) + - ...
    AM(s) = 1 - 2^(1/2^s) + 3^(1/3^s) - 4^(1/4^s) + - ...

    then,

    (AP(s) computed by Euler summation using the explicit matrix formula
    and the infinite alternating sums of powers of logarithms,
    AM(s) crosschecked by Pari/GP's alternating-sum procedure)

    conjectured

    s           AP(s)         AM(s)
    ---------------------------------------
    1/e         0.25851293    0.26288470
    1/2         0.26554503    0.27151141
    1           0.29710507    0.31211976    <-- your value was AP(1) ~ 0.29629
    sqrt(2)     0.33123572    0.34649710
    e^(1/e)     0.33429771    0.34888936
    1.5         0.34008000    0.35317524
    Pi/2        0.34789084    0.35854021
    phi         0.35335742    0.36204316
    sqrt(3)     0.36736414    0.37023697
    ??          ??          = ??            <-- point of equality (AP = AM) not found
    2           0.40428749    0.38800225
    2.5         0.48280000    0.41569498
    3           0.56384865    0.43709211
    3.5         0.63917890    0.45337711
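
    (For the AM column, a minimal sketch of such an alternating-series
    crosscheck in Python with mpmath rather than Pari/GP, and not the
    routine actually used above; the term count and the precision are
    assumptions.)

    # Crosscheck of AM(s) = 1 - 2^(1/2^s) + 3^(1/3^s) - 4^(1/4^s) + - ...
    # The terms n^(1/n^s) tend to 1, so the alternating series is summed
    # with an Euler transform.
    from mpmath import mp, mpf, nstr

    mp.dps = 30

    def euler_sum(a):
        """Euler transform of sum_{k>=0} (-1)^k a[k]."""
        diffs, total = list(a), mpf(0)
        for n in range(len(a)):
            total += (-1) ** n * diffs[0] / mpf(2) ** (n + 1)
            diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        return total

    def AM(s, terms=60):
        a = []
        for k in range(terms):
            n = mpf(k + 1)                 # base n = 1, 2, 3, ...
            a.append(n ** (1 / n ** s))    # n^(1/n^s)
        return euler_sum(a)

    for s in (mpf(1) / 2, mpf(1), mpf(2)):
        print(nstr(s, 6), nstr(AM(s), 10))   # compare with the AM(s) column above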

    Gottfried Helms
     
    Gottfried Helms, Aug 15, 2007
    #5
  6. Since I can only use numbers with limited precision, my own estimate for
    AP(s) with s = 1 may well be subject to truncation/roundoff errors; that
    may account for the difference in the third significant digit.

    For s = 1/2 the Shanks convergence is faster and the calculations are
    better conditioned. For that case the Shanks limit I find is 0.265544;
    the result quoted by Helms agrees almost perfectly with this.

    Further calculations indicate that s = 0, sum = 0.25 is not exactly the
    minimum. There is a slightly lower Shanks sum for small positive s,
    with a minimum around s = 0.037.
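
    (A sketch, again my own, of how such a scan could look in Python with
    mpmath, using an iterated Shanks transformation on the partial sums of
    AP(s); the grid, the depth and the precision are assumptions, and the
    location of the minimum should be read off only roughly.)

    # Scan AP(s) = 1 - 2^(2^s) + 3^(3^s) - 4^(4^s) + ... for small s >= 0.
    from mpmath import mp, mpf, nstr

    mp.dps = 60

    def shanks_step(seq):
        return [(seq[i + 1] * seq[i - 1] - seq[i] ** 2) /
                (seq[i + 1] + seq[i - 1] - 2 * seq[i])
                for i in range(1, len(seq) - 1)]

    def AP(s, terms=15, depth=6):
        t = [(-1) ** k * mpf(k + 1) ** (mpf(k + 1) ** s) for k in range(terms)]
        part = [sum(t[:i + 1]) for i in range(len(t))]
        for _ in range(depth):
            if len(part) < 3:
                break
            part = shanks_step(part)
        return part[-1]

    for i in range(11):
        s = mpf(i) / 100                   # s = 0.00, 0.01, ..., 0.10
        print(nstr(s, 4), nstr(AP(s), 8))  # look for the shallow minimum near s ~ 0.037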

    --OL
     
    Oscar Lanzi III, Aug 17, 2007
    #6
