# Why is integration harder than differentiation?

Discussion in 'Math Research' started by Zig, Feb 16, 2004.

1. ### Greg KuperbergGuest

If your question is "how can the inverse of differentiation be so much
harder than differentiation itself", then I can say that it happens all
the time in mathematics that an operation is easier to apply than invert.
For example, multiplying prime numbers is easier than factoring a number
into primes. This example is more relevant than you might suppose.
Factoring a number into primes is similar to factoring a polynomial;
integrating a rational function requires factoring its denominator.
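
A small SymPy sketch of this point (my addition, not part of the original post): multiplying primes is one line, factoring is the hard inverse direction, and the antiderivative of a rational function exposes the roots of its denominator.

```python
import sympy as sp

x = sp.symbols('x')

# Multiplying primes is trivial...
print(3 * 5 * 17)             # 255
# ...while factoring is the hard inverse direction.
print(sp.factorint(255))      # {3: 1, 5: 1, 17: 1}

# The same asymmetry in exact integration: the antiderivative of
# 1/(x**2 - 2) involves the roots +/- sqrt(2) of the denominator.
print(sp.integrate(1 / (x**2 - 2), x))
# -> sqrt(2)*log(x - sqrt(2))/4 - sqrt(2)*log(x + sqrt(2))/4
```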

To be sure, there are fast algorithms to numerically factor polynomials.
But that dodges your question, because there are also fast algorithms
to numerically integrate any analytic function. Your question
is really about exact integration.
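
To see the numerical side concretely, here is a minimal sketch (my addition, using SciPy's standard `quad` routine): exp(-x^2) has no elementary antiderivative, yet numerically integrating this analytic function is fast and accurate.

```python
import math
from scipy.integrate import quad

# Adaptive quadrature handles the analytic integrand easily.
value, abserr = quad(lambda x: math.exp(-x**2), 0.0, 1.0)
print(value, abserr)    # ~0.7468241328, with a tiny error estimate

# The exact answer needs the non-elementary erf function:
# int_0^1 exp(-x**2) dx = (sqrt(pi)/2) * erf(1).
print(math.sqrt(math.pi) / 2 * math.erf(1.0))   # matches
```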

Greg Kuperberg, Feb 19, 2004

2. ### Greg KuperbergGuest

Readers may be interested in my new paper in which I fight the
curse of dimensionality for numerical integration in high dimensions:

http://front.math.ucdavis.edu/math.NA/0402047

Granted, it's impossible to completely defeat the curse. I derive
good methods for functions that are well-approximated by low-degree
polynomials.

Greg Kuperberg, Feb 19, 2004

3. ### Mitch HarrisGuest

The key terms are: integration in finite terms, Risch's algorithm, Liouville's theorem.
The authors associated with it are Liouville, Ostrowski, Ritt, Risch, and Rosenlicht.

The main references I know of are

Robert Risch, "The Problem of Integration in Finite Terms", Trans. Amer. Math. Soc. 139 (1969), 167-189.
(the main results)

and

Max Rosenlicht, "Integration in Finite Terms", Amer. Math. Monthly 79 (1972), 936-972.
(a good exposition, with classic examples such as \int e^{-x^2} dx)
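
As a hedged illustration of these results (my addition; `risch_integrate` lives in `sympy.integrals.risch` in recent SymPy versions and implements only part of the full algorithm):

```python
import sympy as sp
from sympy.integrals.risch import risch_integrate

x = sp.symbols('x')

# For purely transcendental integrands like this one, a
# NonElementaryIntegral result is a proof that no elementary
# antiderivative exists -- Rosenlicht's classic example.
print(risch_integrate(sp.exp(-x**2), x))
```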

Mitch Harris, Feb 19, 2004
4. ### Russell BlackadarGuest

...

This answer seems right to me.

And (to spell it out more explicitly) we find the situation
puzzling mainly because we don't take ease of differentiation
as our criterion for selecting this class. Rather, we generate
this class "naturally" by combining a small number of elementary
functions via algebraic ops (multiplication and division being
the pesky ones) and by other kinds of functional composition.
The chain rule makes differentiation a piece of cake. On the
other hand, to apply the chain rule in reverse we first have to
solve a factorization problem, and that's where the difficulty
lies.
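
A quick SymPy illustration of this asymmetry (mine, not Russell's): differentiating a composition is one mechanical chain-rule step, while the reverse direction may have no closed form at all.

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(sp.sin(x))

print(sp.diff(f, x))       # exp(sin(x))*cos(x): one chain-rule step
print(sp.integrate(f, x))  # comes back unevaluated: Integral(exp(sin(x)), x)
```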

Russell Blackadar, Feb 19, 2004
5. ### tchowGuest

Those are indeed classical references, and the keywords Mitch listed are the
right ones. However, to dispel a common misconception, let me mention that
the general problem of integration in finite terms is not completely solved.
See Bronstein's book, which I mentioned in my other article in this thread,
for more information.

tchow, Feb 19, 2004
6. ### Herman RubinGuest

Even in one dimension, there are lots of cases of
numerical integration where the integrand is not
well-approximated by even high degree polynomials.

One such example I have encountered is \int (F(x+c))^n dF(x),
where dF(x) = exp(-x^2/2)/sqrt(2\pi) dx. For moderate
values of n, using 32-point Gauss-Hermite integration, which is
exact for all polynomials of degree at most 63 with respect to dF,
the results are not too good.
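
A sketch of this experiment (my reconstruction; the choices n = 20 and c = 0.1 are illustrative, not Herman's):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

n, c = 20, 0.1

# 32-point Gauss-Hermite rule for the weight exp(-t**2); the
# substitution x = sqrt(2)*t converts it to the standard normal
# weight dF(x) = exp(-x**2/2)/sqrt(2*pi) dx.
t, w = np.polynomial.hermite.hermgauss(32)
gh = np.sum(w * norm.cdf(np.sqrt(2.0) * t + c)**n) / np.sqrt(np.pi)

# Adaptive quadrature as a reference value.
ref, _ = quad(lambda x: norm.cdf(x + c)**n * norm.pdf(x),
              -np.inf, np.inf)

# Per Herman, the agreement is worse than the rule's degree-63
# polynomial exactness would suggest, because F(x+c)**n is nothing
# like a low-degree polynomial.
print(gh, ref)
```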

Herman Rubin, Feb 19, 2004
7. ### gowanGuest

A couple more ideas ... First, integration and differentiation are
equally easy for polynomials, and polynomials are uniformly dense
among, say, the continuous functions on an interval. That suggests that
the problem comes from trying to work with specific algebraic
representations of functions. Also, there are plenty of functions not
easily definable in closed form (maybe most functions are of this
sort) for which differentiation is as difficult as integration unless
a power series is available. So the question really seems to be about
functions built up from basic algebraic operations and composition,
including the standard transcendental functions. You are asking
why integration isn't well related to the basic operations for
building up these functions.
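
gowan's first point in two lines (my addition): for polynomials, both operations are mechanical manipulations of coefficients.

```python
import numpy as np

p = np.polynomial.Polynomial([1.0, 2.0, 3.0])   # 1 + 2x + 3x**2
print(p.deriv())   # 2 + 6x
print(p.integ())   # x + x**2 + x**3, integration constant 0
```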

gowan, Feb 20, 2004
8. ### Greg KuperbergGuest

I should study the literature on this, and your reference to your book
is certainly helpful. However, I suspect that cubature which is exact
for polynomials is a very reasonable limiting case of information-based
complexity. The extra generality from not taking the limit may or may
not be important, depending on the application.

To be precise, in your article "Information-based complexity and
information-based optimization" with Traub on your home page, you
discuss both worst-case and average-case error of sampling methods.
Your example of average-case error assumes a function f on the interval
[0,1] whose rth derivative is a Brownian path. This amounts to assuming
a spectral decay rate for f. That is, if f were a function on the circle
instead of the interval, you would say that the Fourier components are
i.i.d. Gaussian with some decay rate for their standard deviations.

In high dimensions you have to assume an abrupt decay for the problem
to be tractable. It is then very easy to construct natural-looking
spectral assumptions that limit to the polynomial cubature problem.
For example, you might very reasonably expand f in Hermite polynomials
times a Gaussian. Assuming a decay rate for this expansion seems to me
very similar to polynomial cubature with a Gaussian weight function.
This is one of the main cases of the cubature problem, and the results
in my paper apply to it.
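
Here is a toy version of the kind of spectral assumption I mean (my addition; the test function cos is arbitrary): expand f in probabilists' Hermite polynomials against the Gaussian weight and watch the coefficients decay.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

f = np.cos                      # any smooth test function
x, w = He.hermegauss(60)        # Gaussian quadrature, weight exp(-x**2/2)

for k in range(8):
    He_k = He.hermeval(x, [0.0] * k + [1.0])   # He_k(x)
    # c_k = <f, He_k> / ||He_k||^2, with ||He_k||^2 = sqrt(2*pi) * k!
    c_k = np.sum(w * f(x) * He_k) / (np.sqrt(2.0 * np.pi) * math.factorial(k))
    print(k, c_k)
# Rapid decay of c_k is precisely the regime in which polynomial
# cubature becomes a natural limiting case.
```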

Greg Kuperberg, Feb 20, 2004
9. ### Greg KuperbergGuest

Yes, because the integrand (without its Gaussian weight) is
approximately a step function. If you take the vector space spanned by
this integrand for different c, then the information-based complexity of
integration is high, because the integrand can take abrupt steps.
Any numerical method is going to be bad for this class. I'm more
interested in cases that information-based complexity doesn't show to
be impossible.

Greg Kuperberg, Feb 20, 2004
10. ### Herman RubinGuest

This is not a typical integral arising in deriving
statistical procedures, but it is typical in evaluating them.
The integral is wanted for "small" c, where how small
depends on n.

A more important integral, which is needed for
asymptotic (already an approximation) purposes,
is the probability that

max_t (B(t) - L(B)(t)) > q,

where B is the Brownian bridge, and L is a linear
functional from B to smooth functions. The probability
exceeds the maximum of the probabilities over t, which is
the normal tail integral from hq to infinity. This is
approximately exp(-(hq)^2/2)/(hq*sqrt(2\pi)). If L=0, it
is known that h=2 and the precise answer just drops the
denominator. For other L, the h can be computed, and it
is known that the correction should be of a similar form,
but is not that simple. Even a decent first-order
approximation for large q would be quite useful. The
methods used for L=0 cannot be used here. Note that B is
continuous but not differentiable, and we are interested
in small probabilities, while L(B) is differentiable.
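
A Monte Carlo sanity check for the L = 0 case (my addition): there P(max_t B(t) > q) = exp(-2q^2) exactly, which is the h = 2 formula with the denominator dropped.

```python
import numpy as np

rng = np.random.default_rng(0)
steps, trials, q = 1000, 10_000, 1.0

# Brownian bridge on a grid: B(t) = W(t) - t*W(1).
dt = 1.0 / steps
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(trials, steps)), axis=1)
t = np.linspace(dt, 1.0, steps)
B = W - t * W[:, -1:]

print((B.max(axis=1) > q).mean())   # empirical tail probability
print(np.exp(-2.0 * q**2))          # exact for L = 0: ~0.1353
```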

Herman Rubin, Feb 21, 2004
11. ### Narasimham G.L.Guest

Differentiation is a breaking-up procedure, but integration puts it
all together. A quote from Sam Rayburn: "Any donkey can kick down a
barn, but it takes a good carpenter to build one." [Absolutely no
offence meant to the first activity, though.]

A differential equation links the two, and performing the integration
into an analytical form is as difficult as this relationship is
complex: y'' + y' = 1 is easily solved, but
log(y'' + y')^sin(x y') = 1 is much harder to solve than it was
to write down.
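
SymPy dispatches the easy case mechanically (my illustration); nothing comparable exists for the second equation.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# y'' + y' = 1
print(sp.dsolve(sp.Eq(y(x).diff(x, 2) + y(x).diff(x), 1), y(x)))
# -> Eq(y(x), C1 + C2*exp(-x) + x)
```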

I too have often wondered why there is no proper mathematical
definition of the "degree of hardness" of solvability of differential
equations, with the accent on relational complexity. All we have is
the categorization into ordinary, partial, non-linear, etc. Has the
arrival of numerical machine solvers such as Runge-Kutta and implicit
methods rendered such an enquiry needless... once and for all?

Narasimham G.L., Feb 24, 2004
12. ### Axel VogtGuest

Is there an example available to look at for computing a cumulative
multivariate normal distribution?
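
A modern sketch of an answer (my addition; `scipy.stats.multivariate_normal` gained a `cdf` method long after this thread, computed internally by Genz's quasi-Monte Carlo algorithm):

```python
import numpy as np
from scipy.stats import multivariate_normal

mean = np.zeros(3)
cov = np.array([[1.0, 0.5, 0.2],
                [0.5, 1.0, 0.3],
                [0.2, 0.3, 1.0]])

# P(X1 <= 0, X2 <= 0, X3 <= 0) for X ~ N(mean, cov)
print(multivariate_normal(mean, cov).cdf(np.zeros(3)))
```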

Axel Vogt, Feb 22, 2005
