# Tetration / Powertowers again

Discussion in 'Math Research' started by Gottfried Helms, Jul 16, 2007.

1. ### Gottfried Helms (Guest)

Finding Andrew Robbins' "tetration" pages recently,
I was inspired to check whether my previously
described method to compute the alternating sum
of increasing powertowers would also allow
interpolation of the tetration operator to fractional
iterations, in Robbins' notation:

z(x,y,a) = x^^y.a

for fractional y (and a=1, for instance).

This is indeed possible, at least in some region of
parameters x and y.

If I use the matrix logarithm of my Bs-matrix (as
described in my previous posts), multiply it by a fractional
y and exponentiate, such that

By = exp(y*log(Bs))

then I can use By to compute

z(x,y) := z(x,y,1)

for any real y in a reasonable domain.

With dim=24 (providing only 24 coefficients for the
series to be summed) I get the following examples:

-- fractional computation applied--- -- original computation applied ---
z(sqrt(2),1/2) ~ 1.24362162767
z(sqrt(2),2/2) ~ 1.41421356237 z(sqrt(2),1) ~ 1.41421356237
z(sqrt(2),3/2) ~ 1.53880541744
z(sqrt(2),4/2) ~ 1.63252691944 z(sqrt(2),2) ~ 1.63252691944

For the less well approximable, but still easily manageable, case x=2:

-- fractional computation applied--- -- original computation applied ---
z( 2 ,1/2) ~ 1.45878179055
z( 2 ,2/2) ~ 2.00000000000 z( 2 ,1) ~ 2.00000000000
z( 2 ,3/2) ~ 2.74876151102
z( 2 ,4/2) ~ 3.99999999999 z( 2 ,2) ~ 4.00000000000
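
As a sanity check using only the tabulated values (no matrix machinery needed), stepping y by 1 should correspond to a single exponentiation with base x; a minimal sketch:

```python
# The tabulated values should satisfy z(x, y+1) = x^z(x, y):
x = 2 ** 0.5
assert abs(x ** 1.24362162767 - 1.53880541744) < 1e-6   # y: 1/2 -> 3/2
assert abs(x ** 1.41421356237 - 1.63252691944) < 1e-6   # y: 2/2 -> 4/2

x = 2.0
assert abs(x ** 1.45878179055 - 2.74876151102) < 1e-6   # y: 1/2 -> 3/2
assert abs(x ** 2.00000000000 - 3.99999999999) < 1e-6   # y: 2/2 -> 4/2
```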

Critical here is the possibility of approximating
log(Bs) to a good degree - and, formally, of describing the
range of parameters for which log(Bs), and thus exp(y*log(Bs)),
is also formally defined.

Reading Andrew Robbins' articles (*1) cursorily, it seems that the
core computations are equivalent, if not identical.
This matrix method, which computes not only a single
value z(x,y) for parameters x and y, but always a complete powerseries
of that value z(x,y), meaning

Z(x,y) = [1 , z(x,y) , z(x,y)^2 , z(x,y)^3 , ....]

seems then to be the best description of a complete operator
performing continuous tetration and its inverses.

Gottfried Helms

(*1) Andrew Robbins at http://tetration.itgo.com/

-----------------------------------------------------------------
For an impression, a protocol:
the first 6 terms of the powerseries vectors for Z(x,y) are given.
Z(x,y) = [1 , z(x,y) , z(x,y)^2 , z(x,y)^3 , ....]

x = sqrt(2)

Z(sqrt(2),1/2) ~ 1.00000000000 1.24362162767 1.54659475280 1.92337868383 2.39195532940 2.97468738006
Z(sqrt(2),2/2) ~ 1.00000000000 1.41421356237 2.00000000000 2.82842712475 4.00000000000 5.65685424949
Z(sqrt(2),3/2) ~ 1.00000000000 1.53880541744 2.36792211274 3.64377137516 5.60705513201 8.62816681301
Z(sqrt(2),4/2) ~ 1.00000000000 1.63252691944 2.66514414269 4.35091955712 7.10299330132 11.5958277730

Z(sqrt(2), 1 ) ~ 1.00000000000 1.41421356237 2.00000000000 2.82842712475 4.00000000000 5.65685424949
Z(sqrt(2), 2 ) ~ 1.00000000000 1.63252691944 2.66514414269 4.35091955712 7.10299330132 11.5958277730

-----------------------------------------------------------------
x = 2

Z( 2 ,1/2) ~ 1.00000000000 1.45878179055 2.12804354412 3.10435127462 4.52857888176 6.60624045681
Z( 2 ,2/2) ~ 1.00000000000 2.00000000000 4.00000000000 8.00000000000 16.0000000000 32.0000000000
Z( 2 ,3/2) ~ 1.00000000000 2.74876151102 7.55569462067 20.7688507896 57.0888799638 156.924683512
Z( 2 ,4/2) ~ 1.00000000000 3.99999999999 15.9999999846 63.9999975683 255.999858206 1023.99518770

Z( 2 ,1 ) ~ 1.00000000000 2.00000000000 4.00000000000 8.00000000000 16.0000000000 32.0000000000
Z( 2 ,2 ) ~ 1.00000000000 4.00000000000 16.0000000000 64.0000000000 256.000000000 1024.00000000

===============================================================================================================

Gottfried Helms, Jul 16, 2007

2. ### Oscar Lanzi III (Guest)

Intuitively one would expect to see the identity

x(x,y+1) = x^z(x,y)

That seems to work for your examples, but showing it either numerically
or analytically would make the argument more powerful.

--OL

Oscar Lanzi III, Jul 21, 2007

3. ### Gottfried Helms (Guest)

Oscar -

On 21.07.2007 16:18, Oscar Lanzi III wrote:
Hmm, I have difficulty understanding this, since
on the left-hand side a "z" seems to be missing; did you mean

z(x,y+1) = x^z(x,y) ?

It works exactly like this. One other numerical example:

x=s=1.2 (I'm using s for the base in my examples).
Then for

V(1)~ * Bs^k (for k=0..9)

in the 2nd column of the result vector I get the entries
documented in the table column "matrix",
corresponding to the direct scalar computation (table
column "skalar").

In the table column "step 3 matrix" I get the results for

V(1)~ * (Bs^3)^j for j=0..3

which are all perfectly consistent.

VPT(s,9) step 3
matrix skalar matrix
B=Bs x^z(x,y) C= Bs^3
B^0 -> 1 1 C^0 -> 1
B^1 -> 1.200000000 1.200000000
B^2 -> 1.244564747 1.244564747
B^3 -> 1.254718171 1.254718171 C^1 -> 1.254718171
B^4 -> 1.257043041 1.257043041
B^5 -> 1.257575982 1.257575982
B^6 -> 1.257698182 1.257698182 C^2 -> 1.257698182
B^7 -> 1.257726204 1.257726204
B^8 -> 1.257732629 1.257732629
B^9 -> 1.257734103 1.257734103 C^3 -> 1.257734103
.... ->
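
The "skalar" column is just the scalar recursion x -> s^x starting at 1, so it can be reproduced without the matrix machinery; a minimal sketch:

```python
# Reproduce the "skalar" column: iterate x -> s^x starting from x = 1.
s = 1.2
vals = [1.0]
for _ in range(9):
    vals.append(s ** vals[-1])

assert abs(vals[1] - 1.200000000) < 1e-9
assert abs(vals[3] - 1.254718171) < 1e-8
assert abs(vals[6] - 1.257698182) < 1e-8
assert abs(vals[9] - 1.257734103) < 1e-8
```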

It seems that the eigenvalues of Bs are the consecutive
powers of w = 1.257734541, where w^w = s = 1.2

------------------------------------------------------------
Commenting on the analytical side, where I made some
progress in finding promising heuristics and one
possible pathway to a final analytical deciphering (see 4):

1) binary operator vs matrix-operator - notation
for tetration?

I've found the collection of notations at A. Robbins'
site and tried to reformulate my results in terms
of one of these "classical" forms. They don't fit
the notation exactly, so I came to the provisional
conclusion/consideration:

- With the given form of notation for binary operators
we use a somehow "anonymous" operator, which is thought
of as an abstraction, independent of its operands.
The common use of "+", "*", "^" or "^^" keeps the
effect of the binary operator somehow independent of
its operands.

- The matrix approach tells me that this view of
things may not be sufficient. For tetration it would be
better to think of the operator itself as having
a parameter "imprinted", such that in any
notation with a chosen binary operator symbol this symbol
must carry a connotation of this "imprinted" parameter -
which is the parameter s in my case (the base parameter),
and which may be a constant for the classical binary operations.

I'm checking how this concept of an operator
with "imprint" is meaningful for the other
common binary operators, too.
As I have shown in the other article (*1), the pathway
to the eta- and zeta/Bernoulli-polynomials can
also be seen as iteration of an operator (the
binomial matrix, which implements x^k -> (x+1)^k
for all k). But I haven't formulated an explicit
conclusion yet.

The notation of an "imprinted" binary operator
then seemed a bit artificial, and I abandoned
it because of its only relative benefit, or even drawback,
in flexibility and in precisely expressing what's going
on; I stick to the simple matrix formula instead.

So my notation will be

V(z)~ = V(x)~ * Bs^y

using the second column of the result for the scalar
value z. This is nearest to one of the notations
cited by A. Robbins:

s^^y.x = s^s^s...^s^x (y-fold occurrence of s)

-----------------------------------------------

2) Continuous version by Eigensystem-decomposition

Analytically it appears that from the finite-dimensional
approximations of Bs with parameter s for the convergent
cases of the infinite powertower (I will call them
the "simple cases") one can conclude that Bs (dim->oo) has
a discrete spectrum, and that in the simple cases eigenvalues
between 1..0 or 1..-1, possibly in a simple geometric
progression, occur.

In the simple cases an eigendecomposition makes sense, and
from

Bs = Qs * Ds * Qs^-1 , so Bs^y = Qs * Ds^y * Qs^-1

I can operate with a continuous parameter y.
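
A minimal double-precision sketch of this (assuming the entrywise reconstruction Bs[i,j] = (j*log(s))^i / i!, which matches the decomposition Bs = dV(log(s)) * fS2F * P~ given in section 4; dimension and tolerances are modest because no high-precision arithmetic is used here):

```python
import numpy as np
from math import log, factorial

# Truncated Carleman matrix of x -> s^x:  Bs[i, j] = (j*log(s))^i / i!,
# so that V(x)~ * Bs = V(s^x)~ up to truncation.
def carleman(s, dim):
    return np.array([[(j * log(s)) ** i / factorial(i)
                      for j in range(dim)] for i in range(dim)])

s, dim = 2 ** 0.5, 20
d, Q = np.linalg.eig(carleman(s, dim))   # Bs = Qs * Ds * Qs^-1
d = d.astype(complex)                    # allow fractional powers safely
Qinv = np.linalg.inv(Q)

def z(y):
    # second entry of V(1)~ * Qs * Ds^y * Qs^-1  =  z(s, y)
    return ((np.ones(dim) @ (Q * d ** y)) @ Qinv)[1].real

assert abs(z(1.0) - 2 ** 0.5) < 1e-5        # integer case: s^^1 = s
assert abs(z(0.5) - 1.24362162767) < 1e-3   # fractional value from the table
assert abs(s ** z(0.5) - z(1.5)) < 1e-3     # z(s, y+1) = s^z(s, y)
```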

2.a) Inverse operation

I can even formulate the inverse operation: given x, z and s,
search for y in

V(z)~ = V(x)~ * Qs * Ds^y * Qs^-1
(V(z)~*Qs) = (V(x)~*Qs) * Ds^y

Then the parentheses are known row vectors, which
may be rewritten as the unit row vector V(1)~
multiplied by known diagonal matrices,
for instance

V(1)~ * Ts = V(1)~ * Us * Ds^y
V(1)~ * Ts = V(1)~ * Ds^y * Us // interchanging the rightmost
diagonal matrices

and then

Ds^y = Ts * Us^-1

and y follows by taking the logarithms of the entries
of Ds^y relative to the entries of Ds.

Just today I found that the eigenvalues for the
simple cases seem to be the consecutive powers of a value
t, where t is defined by the tetration expression

s = t^t

I'll check the cases where s > e^(e^-1) next
time, but it is difficult to compute reliable
eigensystem decompositions for the current matrices when
dim > ~20 using Pari/GP, even with a float precision
of 400 or 800 digits.

--------------------------------------------------

3) the infinite alternating sum of powertowers of
increasing height

The heuristics about the spectrum of Bs also
explain the "niceness" of the cases of infinite
summation of powertowers of increasing height. It was
expressed as

Z~ = V(x)~ * Ms

where Ms = (I + Bs)^-1

The spectrum for the simple cases is in the range

r = [1..0[ // discrete

and even the cases where it ranges over r = [1..-1[
and has an accumulation point for the eigenvalues of
high index will be shifted to r' = [2..0[ if the identity
matrix is added to Bs; I + Bs can then be inverted,
still having an analytically accessible set of
finite eigenvalues.

The surprising ability of the method to sum power-
towers of increasing height even in some divergent
cases is then due to this discreteness
of the spectrum of the operator and to this shifting,
which then allows inversion also for a range of
non-simple cases, as long as eigenvalues r_inf -> 0
do not occur.
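
A small sketch of this summation (same assumed reconstruction Bs[i,j] = (j*log(s))^i / i!; the scalar reference value is the averaged-partial-sums limit of the oscillating series, a Cesàro-type regularization):

```python
import numpy as np
from math import log, factorial

def carleman(s, dim):
    # Bs[i, j] = (j*log(s))^i / i!
    return np.array([[(j * log(s)) ** i / factorial(i)
                      for j in range(dim)] for i in range(dim)])

s, dim = 1.2, 20
Ms = np.linalg.inv(np.eye(dim) + carleman(s, dim))
z_matrix = (np.ones(dim) @ Ms)[1]     # regularized 1 - s + s^s - s^s^s + ...

# Scalar comparison: the partial sums oscillate, but the averages of
# consecutive partial sums converge (the towers converge geometrically).
x, sign, S, prev = 1.0, 1, 0.0, 0.0
for _ in range(60):
    prev, S = S, S + sign * x
    sign, x = -sign, s ** x
z_scalar = (S + prev) / 2

assert abs(z_matrix - z_scalar) < 1e-5
```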

--------------------------

4) A true triangular operator for a very closely related
operation possibly provides better analytical access
to the eigensystem properties

I also just found another pathway to possibly
confirm these heuristics and support them with
analytical arguments.

This will then be due to the fact that Bs can be
decomposed into the product of two well-known
matrices, which are triangular; their product has
eigenvalues all equal to 1 if s=exp(1), or generally the
powers of log(s).

Given the known decomposition

Bs = dV(log(s)) * fS2F * P~

where

- dV(log(s)) is a diagonal matrix containing
the powers of log(s),

- fS2F is the factorial similarity scaling of
the lower triangular matrix of Stirling numbers of the
2nd kind (which thus has all eigenvalues equal to 1),
using F = diag(0!,1!,2!,...) and f = F^-1,

- P is the lower triangular Pascal matrix, and
P~ its transpose,

one may use a version free of P~ :

As = dV(log(s)) * fS2F

This is then an operator which is triangular, has a known
spectrum (the powers of log(s)), but still behaves very
similarly to Bs. It gives, in one iteration,

V(x)~ * As = V(s^x - 1)~

meaning I get the powers of (s^x - 1) instead of s^x
as in the case of Bs.

Letting s=e and x=1, I get from powers of As, in the
second column of the result, the iterated exponentiation
for the scalar parameter e:

(V(1)~ * As ) [1]= e^1-1
(V(1)~ * As^2 ) [1]= e^(e^1-1)-1
(V(1)~ * As^3 ) [1]= e^(e^(e^1-1)-1)-1
....
or letting s=2

(V(1)~ * As ) [1]= 2^1-1 =1
(V(1)~ * As^2 ) [1]= 2^(2^1-1)-1 =1
(V(1)~ * As^3 ) [1]= 2^(2^(2^1-1)-1)-1 =1
....

which is an identical type of result, and it can also
be used to compute alternating infinite sums of
powertowers of increasing height for some s.

Possibly I can find an analytical proof that
the spectrum of Bs is indeed the powers of
its parameter s (or of some shift or function
of it) from its close relation to As.

--------------------------------------------

Well, I didn't add more new numerical examples here -
I thought the mentioned analytical aspects are
the more interesting ones at this stage of the discussion.
If you would like to see some numerical examples,
I can provide them in my documentation of power-
tower tables (the link was given in a previous post).
Gottfried

Gottfried Helms, Jul 23, 2007
4. ### Oscar Lanzi III (Guest)

I did in fact mean z(y+1,x) on the LHS in my posting. Also for w =
1.257+ it is w^(1/w) that equals 1.2, not w^w.

I'm curious about what happens when s drops below 1 and the
integer-tetration values acquire an oscillatory character. Does this
lead to the introduction of complex numbers for fractional tetrates?

--OL

Oscar Lanzi III, Jul 24, 2007
5. ### Gottfried Helms (Guest)

Sorry,

at some places I mixed things up during the second
editing of the posting.

On 23.07.2007 15:05, Gottfried Helms wrote:
This is wrong; this value is in fact the exponential of
the value that should have been given here.

The ordered eigenvalues of Bs are

[ 1.00000000000 0.229312119617 0.0525840482028 0.0120581594905 ....]

Let r be the second eigenvalue, r = 0.229312119617; then

exp(r) = w = 1.257734541

and
w^(1/w) = 1.2 = s
or
exp(r * exp(-r)) = 1.2 = s

And the second place is wrong in the same way, by my quickly
inserting new results into the nearly finished posting.
It should be the same as above,
first correcting

s = t^(1/t)

and then

t = exp(r)

where r is the second-highest eigenvalue, and the sequence
of eigenvalues consists of the powers of r.

These corrections don't affect the other contents
of the article.
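
A double-precision cross-check of the corrected statement (assuming the reconstruction Bs[i,j] = (j*log(s))^i / i!; only the leading eigenvalues are compared, since the small ones are numerically unreliable):

```python
import numpy as np
from math import log, factorial

def carleman(s, dim):
    # Bs[i, j] = (j*log(s))^i / i!
    return np.array([[(j * log(s)) ** i / factorial(i)
                      for j in range(dim)] for i in range(dim)])

ev = np.linalg.eigvals(carleman(1.2, 24))
ev = ev[np.argsort(-np.abs(ev))].real   # sort by absolute value

r = 0.229312119617     # = log(t), where t = exp(r) and t^(1/t) = 1.2
for k in range(4):     # leading eigenvalues: 1, r, r^2, r^3
    assert abs(ev[k] - r ** k) < 1e-6
```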

Gottfried

Gottfried Helms, Jul 24, 2007
6. ### Gottfried Helms (Guest)

On 24.07.2007 02:59, Oscar Lanzi III wrote:
Again from heuristics, I assume the following.

I observed negative eigenvalues for such cases, and
also that the sequence of negative eigenvalues might have
an accumulation point in [-1..0[ for e^-e < s < 1.

In the light of the previous post, it seems these
"convergence points" are simply the negative values
of the geometric sequence for a negative argument r.

This would fit the observations perfectly, as far as
I can calculate reasonably reliable eigendecompositions
for parameters s with e^-e <= s <= e^(e^-1).

Example:
s=0.9; Bs = PT_Bs(s); \\
MEig(VE(Bs,20)) \\ using dimension 20
Mat(CEV): \\ cached vector of Eigenvalues
1.00000000000
0.00916634448180
0.0000840218711590
0.000000770173415049
0.00000000705967485049
6.47093874029E-11
5.66714157626E-13
6.50472212310E-15
9.15796133789E-16
1.60199559534E-18
-1.53871877961E-18
-1.07883869814E-16
-4.32453870387E-14
-9.24289969572E-13
-6.21585192123E-12
-0.000000000675900550369
-0.0000000737371946520
-0.00000804434033649
-0.000877595245571
-0.0957410282053

-----------------------------------------------
Assume the second-highest /absolute/ value indicates r,
the parameter of the geometric sequence; then

tmpr=-0.0957410282053
tmpw=exp(tmpr)
%1375 = 0.908699313092
tmpw^(1/tmpw)
%1376 = 0.900000000000

which equals s (accepting this as a reasonable approximation).
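
The same kind of check works in double precision (a sketch with the reconstruction Bs[i,j] = (j*log(s))^i / i!; only the leading eigenvalues are compared, since the tail comes out unreliably, much as in the listing above):

```python
import numpy as np
from math import log, factorial, exp

def carleman(s, dim):
    # Bs[i, j] = (j*log(s))^i / i!
    return np.array([[(j * log(s)) ** i / factorial(i)
                      for j in range(dim)] for i in range(dim)])

ev = np.linalg.eigvals(carleman(0.9, 20))
ev = ev[np.argsort(-np.abs(ev))].real   # sort by absolute value

r = -0.0957410282053                 # negative, since s = 0.9 < 1
assert abs(ev[0] - 1.0) < 1e-8
assert abs(ev[1] - r) < 1e-6         # second-largest absolute value
assert abs(ev[2] - r ** 2) < 1e-6    # 0.00916634448...

w = exp(r)
assert abs(w ** (1 / w) - 0.9) < 1e-9   # w^(1/w) = s
```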

-------------------

2) s < 1 and towards the lower bound

Already for s < 2/3 I have surprising results, where
some eigenvalues deviate from the expectation,
and I haven't found a specific critical point yet;
it apparently lies already in the range

0.64 < s < 0.666... = 2/3

and possibly at s = 1 - log(2)/2 ~ 0.653426409720

I sometimes get small complex eigenvalues, and
also often one or two exceeding -1 or +1, depending on
whether I select an odd or even dimension. With s=1/2
this is already very clear, as I have documented
in the appendix (*1). These exceptional eigenvalues
cannot be assigned to the geometric sequence built
by the remaining "normal" eigenvalues.
Trying with Bs^2, where the entries are determined
analytically (thus *not* by simply squaring the
finite-dimensional examples of Bs), things seem a bit
better, but still numerically questionable.

This may be due to a numerical instability of the
eigensystem method in Pari/GP, and I would be
interested in how Mathematica or Maple would perform
here.

------------------
3) s exceeding the upper bound

For s > e^(e^-1) the sequence of eigenvalues diverges
as expected; additionally, the sequence is then no longer
a geometric one, and I also have eigenvalues in
the range [1..0[ (finite dimension). Generally
I get for the k'th eigenvalue:

0 < r_k

The alternating sum of powertowers of increasing height
is still computable due to this result, since for
(I + Bs) I then get eigenvalues

1 < r_k

and correspondingly for the eigenvalues of
Ms = (I + Bs)^-1

r_k < 1

which then explains the nice and stable computations
even for those parameters s which would produce
divergent cases for the single infinite powertower.

------

Gottfried

(*1) last page of
http://go.helms-net.de/math/binomial_new/powertower/powertowertables.htm

Gottfried Helms, Jul 25, 2007
7. ### Gottfried Helms (Guest)

A new(?) interesting intermediate result:

Take the assumption concerning the character
of the spectrum of the matrix Bs: that it is a geometric
sequence related to its parameter s.

Decompose the given parameter s as follows:

s = t^(1/t) // for admissible s

Then my assumption concerning the asymptotic spectrum
for infinite dimension is, with

v = log(t)

for the k'th eigenvalue (index k=0 for the first eigenvalue):

v_k = v^k

We know that the sum of the eigenvalues equals the trace of
a matrix, so under this assumption it should be:

trace(Bs) = sum {k=0..inf} v_k
= sum {k=0..inf} v^k
= (1-v)^-1 (*1)

The structure of Bs is simple; the sum of its diagonal
elements is

trace(Bs) = sum {r=0..inf} log(s)^r/r! * r^r
= sum {r=0..inf} (log(t)/t)^r * r^r / r! (*2)

(*1) and (*2) should thus be equal - and the
equality can be tested for arbitrary dimension;
the observed equality backs my assumption numerically.

Is the resulting equality

sum {r=0..inf} (log(t)/t )^r* r^r / r! = (1-log(t))^-1

a known one?
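
The numerical test is easy to replicate; a sketch using the value t ~ 1.257734541 (so s = t^(1/t) ~ 1.2) from earlier in the thread:

```python
from math import log, factorial

t = 1.257734541        # s = t^(1/t) ~ 1.2, an admissible parameter
u = log(t) / t         # = log(s)

# sum_{r>=0} (log(t)/t)^r * r^r / r!  should equal  1/(1 - log(t))
lhs = sum(u ** r * r ** r / factorial(r) for r in range(80))
assert abs(lhs - 1 / (1 - log(t))) < 1e-9
```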

Gottfried

Gottfried Helms, Jul 31, 2007
8. ### I.N. Galidakis (Guest)

[snip for brevity]
Yes. It can be derived directly from one of the identities in the paper "On An
Application of Lambert's W Function to Infinite Exponentials".

Specifically, the last of the three identities on page 773 gives:

sum((log(x)/x)^(n-1)*n^(n-1)/n!,n=1..oo)=h(x^(1/x)), when |log(x)/x|<=1/e.

Multiplying the above by log(x)/x, we get:

sum((log(x)/x)^n*n^(n-1)/n!,n=1..oo)=log(x)/x*h(x^(1/x)).

h is a partial inverse of the function f(x)=x^(1/x); therefore, when x is in the
above range,

A=sum((log(x)/x)^n*n^(n-1)/n!,n=1..oo)=log(x)/x*x=log(x).

Let a_n be the coefficients of A, i.e., a_n=n^(n-1)/n!.

Then the coefficients b_n of the series for B=A/(1-A) are exactly:

b_n=n^(n-1)/(n-1)!

which are the coefficients of

sum((log(x)/x)^n*n^(n-1)/(n-1)!,n=1..oo).

Therefore,

sum((log(x)/x)^n*n^n/n!,n=1..oo)=log(x)/(1-log(x)), from which it follows:

sum((log(x)/x)^n*n^n/n!,n=0..oo)=1/(1-log(x))
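
Both identities are easy to verify numerically inside the stated range; a sketch with x = 1.5, so |log(x)/x| ~ 0.27 <= 1/e:

```python
from math import log, factorial

x = 1.5
u = log(x) / x                   # |u| <= 1/e, so the series converge

# sum_{n>=1} (log(x)/x)^n * n^(n-1)/n!  =  log(x)
A = sum(u ** n * n ** (n - 1) / factorial(n) for n in range(1, 90))
assert abs(A - log(x)) < 1e-9

# sum_{n>=0} (log(x)/x)^n * n^n/n!  =  1/(1 - log(x))
B = sum(u ** n * n ** n / factorial(n) for n in range(90))
assert abs(B - 1 / (1 - log(x))) < 1e-9
```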

I.N. Galidakis, Aug 1, 2007