# why is integration harder than differentiation

Discussion in 'Math Research' started by Zig, Feb 16, 2004.

1. ### Zig (Guest)

someone asked this question, and i have been thinking about it for a
while. "why is integration harder than differentiation?"

first of all, is it true? integration is defined on a larger class of
functions than differentiation, so in some sense, it is easier to show
the existence of the integral than the derivative.

but what we really want to know is why, given a function built out of
certain "elementary functions", it is easy to construct the derivative
in terms of those elementary functions, but hard (and sometimes
impossible) to construct the antiderivative in terms of those elementary
functions.

from a practical standpoint, the reason is clear: there are rules for
the derivatives of the two constructions you can do to elementary
functions, namely the product and the composition. if there were a rule
for the integral of the composition of two functions and for the product
of two functions, then from that, we could write any integrals of
elementary functions in terms of elementary functions.
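For reference, the two rules in question are the product rule and the chain rule:

```latex
(fg)' = f'g + fg', \qquad (f \circ g)' = (f' \circ g)\cdot g'
```

There is no analogous pair of formulas expressing $\int fg$ or $\int f\circ g$ in terms of $\int f$ and $\int g$.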

but we cannot. so why not? what is different about integration that
makes it not have these rules? on the surface, the definitions of
differentiation and integration seem at least slightly similar: take
the limit as epsilon goes to zero of some algebraic operation on your
function.

i wanted to mutter something about differential Galois theory, but i
think that would just have been a cover for the more honest "i don't know".

so, what do you say?

-z

Zig, Feb 16, 2004

2. ### Norm Dresner (Guest)

One major difference is that differentiation is a local operation while
integration involves "finite intervals" -- okay, measurable sets, but
intuitively it is defined over an extended region of the domain while
differentiation isn't.

Norm

Norm Dresner, Feb 16, 2004

3. ### Zig (Guest)

differentiation is local and integration is nonlocal. hmm. i like that
idea, but i am not convinced. actually, didn't i once read that one of
the amazing things about de Rham cohomology is that the differential
forms on a manifold, which are all locally defined objects, can encode
global information about the manifold? but i am not sure if that has
any relevance here.

anyway, just thinking out loud here, but the difference between
integration and differentiation that makes the former hard and the
latter easy (in the elementary calculus sense) is that differentiation
is a sort of "forward" operation, and integration is an "inverse"
operation, in a way that is not symmetric.

i think the analogy with algebraic operations is good. anyone can
square a small number in her head, but who can take the square root?
solving the quadratic equation is hard, and solving certain quintics is
impossible.

but why does differentiation have to be the forward operation and
integration have to be its inverse? could we make it go the other way?
in other words, is there some fundamental property of integration that
rules out the possibility of writing the integral of a product in terms
of the integrals of the multiplicands? if we had a rule like that, and
one for composition of functions, then integration would be as
algorithmic as differentiation, and the integral of every elementary
function would be elementary.

since i know that not all elementary functions have elementary
integrals, i must conclude that such a product rule cannot exist.
but this is a very indirect way to see this, and sort of relies on the
property i am trying to understand to explain this property. it is a
bit circular. is there an obvious reason why there can be no
integration rule for products? (the familiar product rule is not good
enough here)

Zig, Feb 17, 2004
4. ### Norm Dresner (Guest)

Concerning the "difficulty"
I guess we have to differentiate [no pun intended] between the definite
integral which is always with respect to a given set in the function's
domain and the process of indefinite integration which is just the reverse
of differentiation. In an algebraic sense I don't think that inverse
differentiation is all that much more difficult than forward
differentiation. We have a set of rules to apply to "elementary" functions
and algebraic (and some transcendental) combinations of them to
differentiate a given function. If none of our usual rules apply, then
differentiation is -- at least symbolically -- impossible and we're stuck
with a numerical approximation. Ditto the process of inverse
differentiation. This is generally thought to be difficult because most
people simply don't recognize the inverse rules as readily as the forward
ones. There are tables to use in both cases.

Concerning the inverse relationship
Assuming that we define the indefinite integral in terms of the definite
integral, it's relatively easy to see that integration doesn't really care
if a function has, for example, jump discontinuities; the function
is still integrable. But that same function fails to be differentiable at
the points of discontinuity, so in that case I'd have to say that the process
of integration is easier than the process of differentiation.
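That asymmetry can be made concrete with the unit step function (the helper names here are just for illustration):

```python
def step(x):
    # Unit step: a single jump discontinuity at x = 0.
    return 1.0 if x >= 0 else 0.0

def riemann(f, a, b, n):
    # Midpoint Riemann sum: the jump is no obstacle, the area converges.
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

area = riemann(step, -1.0, 1.0, 100_000)  # converges to 1.0

# But the symmetric difference quotient at the jump diverges as h shrinks:
quotients = [(step(h) - step(-h)) / (2 * h) for h in (1e-1, 1e-3, 1e-5)]
```

The area under the step comes out to 1, while the difference quotient at the jump grows like 1/(2h), so no derivative exists there.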

Norm

Norm Dresner, Feb 17, 2004
5. ### Ken Pledger (Guest)

This is not an explanation of why it's harder to integrate, but a
comment on the major historical effect of that fact.

Integrals came first. There are quite a few arguments in Euclid
(probably due to Eudoxus) and in Archimedes, which find various areas,
volumes and centres of gravity by using limits of sums of little bits.
Modern writers often mention that Archimedes also handled a tangent to a
spiral, but this isolated case is very unlike modern differentiation.
Most of what we see as calculus-style arguments in the ancient and early
modern periods were integrations. But integration is hard, so each new
integral was a new research problem.

Then came the 17th-century development of differentiation, and the
Fundamental Theorem of the Calculus. It was Newton and Leibniz who
appreciated that differentiation admits a collection of easy algorithms,
and antidifferentiating is a practical way to find a lot of integrals.
_That_ is the sense in which Newton and Leibniz "invented the calculus".
Derivatives and integrals were already there before them, but those two
men showed how easy differentiation is, and what a lot of integrals it
lets you find.

Ken Pledger.

Ken Pledger, Feb 17, 2004
6. ### Timothy M. Brauch (Guest)

If you are dealing with a finite precision machine, then integration is
actually easier than differentiation. Integration is a stable process
while differentiation is ill-posed.

I'll leave it to you to find out why (hint: think about the definitions).

- Tim

Timothy M. Brauch, Feb 18, 2004
7. ### gowan (Guest)

A few thoughts ... First, there is a rule of sorts for integrating
the product of two functions, namely integration by parts. I'm not
sure whether this is important in this context, but integration
doesn't give a unique answer; it's only unique up to an additive
constant. Differentiation, when it can be done at all, gives a unique
answer. Finally, if we restrict ourselves to a suitable class of
functions, then integration and differentiation are equally
"difficult", each being a termwise operation on power series.

gowan, Feb 18, 2004
8. ### mathman (Guest)

Integration in general is harder. For derivatives, there are many
straightforward rules that can be used for complicated expressions,
e.g. derivatives of products, quotients, functions of functions, etc.
For indefinite integrals, there are no such general rules.

mathman, Feb 18, 2004
9. ### Stephen Montgomery-Smith (Guest)

If you have a sequence a(n), it is generally much easier to compute its difference:

a(n+1)-a(n)

than the partial sums:

a(1)+a(2)+...+a(n).
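The discrete analogy can be made concrete: differencing a closed form is mechanical, while summing requires discovering an "antidifference" first (the names below are just for illustration):

```python
def a(n):
    # the sequence to be differenced / summed
    return n * n

def diff(f, n):
    # forward difference: the discrete analogue of differentiation -- mechanical
    return f(n + 1) - f(n)

def A(n):
    # an "antidifference" of a: A(n) = 1^2 + 2^2 + ... + n^2.
    # finding this closed form takes insight, not a mechanical rule.
    return n * (n + 1) * (2 * n + 1) // 6

# telescoping check: differencing the partial-sum function recovers the terms
ok = all(diff(A, n) == a(n + 1) for n in range(100))
```

Anyone can compute `diff(a, n)` by the rule above, but producing `A` in closed form is a research problem for each new sequence, just as each new integral was in the pre-calculus era.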

Stephen Montgomery-Smith, Feb 18, 2004
10. ### Suvrit (Guest)

gowan> A few thoughts ... First, there is a rule of sorts for integrating
gowan> the product of two functions, namely integration by parts. I'm not
gowan> sure whether this is something important in this context but
gowan> integration doesn't give a unique answer, it's only unique up to an
gowan> additive constant. Differentiation, when it can be done at all gives
gowan> a unique answer. Finally, if we restrict ourselves to a suitable
gowan> class of functions then integration and differentiation are equally
gowan> "difficult", each being a termwise operation on power series.

A few thoughts echoing some of the opinions/ideas already posted here.
Whether we consider an integral easy or difficult depends on the class of
functions we are willing to accept as *closed form* solutions. The
simplest examples are the error function and certain logarithmic
integrals...

That reminds me: long ago somebody on this group pointed out some
theory that enables one to decide whether a particular integral is
evaluable (evaluatable?) "in closed form" or not -- could somebody
please resend some of that information? It might help to shed light on
the difficulty of integration!

Grüße,
-suvrit.

Suvrit, Feb 18, 2004
11. ### Zig (Guest)

I think the math that tells you when some integrals can be expressed in
terms of elementary functions is called differential Galois theory.
this theory tells you, for example, that the error function is not a
finite composition of elementary functions. it uses the same types of
ideas as used in showing that the quintic cannot be solved in terms of
simple root extractions.

i don't know much about it beyond what i have said here, so it will be
cool if someone who knows a lot about it weighs in.

Zig, Feb 18, 2004
12. ### Zig (Guest)

sure, but this is not good enough. for example, consider the integral

\int 1/sqrt(b^2-x^2)*1/sqrt(a^2-x^2) dx

this is the product of two functions, each of whose integrals i know in
terms of elementary functions (arcsine). and yet, the integral of their
product is not an elementary function (elliptic function).
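For what it's worth, a standard substitution makes the elliptic character explicit (a sketch, assuming 0 < x <= b < a): with x = b sin(theta),

```latex
\int \frac{dx}{\sqrt{b^2-x^2}\,\sqrt{a^2-x^2}}
  = \int \frac{d\theta}{a\sqrt{1-(b/a)^2\sin^2\theta}}
  = \frac{1}{a}\,F\!\left(\arcsin\frac{x}{b},\; \frac{b}{a}\right)
```

where $F(\varphi, k)$ is the incomplete elliptic integral of the first kind, which is known not to be elementary.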

so i know that integration by parts will never help me solve this
integral.

compare this with differentiation, where i could immediately write down
the total derivative, knowing the individual derivatives. this
difference between integration and differentiation makes differentiation
an algorithm, and integration an art form.

but why the difference?

yes. perhaps that has something to do with it. it is a good suggestion.

yes, of course, if we view functions in terms of their power series,
then neither one is easier or harder. so i guess i am only talking
about finite compositions of elementary functions, not power series.

perhaps there is something unnatural about restricting yourself to only
closed-form expressions like that? but differentiation doesn't care, why
should integration?

Zig, Feb 18, 2004
13. ### Zig (Guest)

this seems related to Norm Dresner's suggestion that basically
integration is nonlocal; you have to know a lot more about the function.
i wonder if this connection could be made more explicit?

Zig, Feb 18, 2004
14. ### Zig (Guest)

but this is exactly the issue; differentiation never gets "stuck" in
this way. given any finite composition of elementary functions, i can
use the chain rule and product rule to algorithmically reduce this
completely. i can differentiate any such function with impunity.

not so with integration (or inverse differentiation, if you would like
to make the distinction).
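That "reduce completely" point can be sketched as a toy recursive differentiator over a hypothetical mini expression language (nested tuples; 'x' is the variable, numbers are constants):

```python
def d(e):
    # Differentiate expression e with respect to 'x'.
    if e == 'x':
        return 1
    if isinstance(e, (int, float)):
        return 0
    op = e[0]
    if op == '+':                       # linearity
        return ('+', d(e[1]), d(e[2]))
    if op == '*':                       # product rule
        return ('+', ('*', d(e[1]), e[2]), ('*', e[1], d(e[2])))
    if op == 'sin':                     # chain rule
        return ('*', ('cos', e[1]), d(e[1]))
    if op == 'cos':
        return ('*', ('*', -1, ('sin', e[1])), d(e[1]))
    raise ValueError('unknown operator: %r' % (op,))

# Every recursive call works on a strictly smaller subexpression, so d
# always terminates. No analogous recursion exists for antiderivatives:
# the candidate "rules" (like integration by parts) produce integrals
# that are not smaller.
```

For instance, `d(('*', 'x', ('sin', 'x')))` mechanically produces the product-rule expansion, with no cleverness required.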

yes, i mentioned something about this in the original post. in some
sense, integration is easier, because more functions are integrable than
are differentiable. a lot more, i think. it is easier for a function
to have an integral than a derivative. if we chose a function at
random, it would be more likely to be integrable than differentiable
(although i suspect actually both probabilities would be zero, eh?)

anyway, maybe this is also exactly the reason why the process of finding
that elementary antiderivative is harder. simply because the
antiderivative has to exist for a broader class of functions? somehow?
i dunno...

thanks for your responses
-z

Zig, Feb 18, 2004
15. ### Zig (Guest)

i am not sure right away what the answer to your question is; something
about needing more significant digits to get an accurate difference
between two close numbers than you do to take the sum of a bunch of
numbers?

thinking about this, it struck me as surprising that derivatives are
defined in terms of subtraction and division, which are inverse
operations in algebra, whereas (Riemann) integration is defined in terms
of multiplication and summation, which are "primitive" operations
algebraically, so to speak. and yet differentiation turns out to be the
primitive operation on functions, and integration the inverse operation.

Zig, Feb 18, 2004
16. ### tchow (Guest)

A good reference is "Symbolic Integration I" by Manuel Bronstein.

I wonder if the question being asked in this thread is a non-question.
It seems to presuppose that the class of elementary functions is the
"right" set of functions to consider. That is, we pick a class of
functions that behaves well under differentiation, and puzzle over
why it doesn't behave well under antidifferentiation. But given that
we can pick other classes of functions for which differentiation
and antidifferentiation are equally easy, why puzzle? Presumably
one can also pick another class of functions that behaves well under
antidifferentiation and not under differentiation (derivatives of
C^1 functions, say). So some classes have an affinity for one process
and others have an affinity for the other process. Unless there's
some reason to think that elementary functions are "canonical" in some
sense, rather than an arbitrary artifact of notation, why would we
expect an answer beyond what has already been stated?

tchow, Feb 18, 2004
17. ### Vries de J (Guest)

At this moment I have no library at hand so I cannot be very
specific. But I remember something like the Risch method, which is
used in Maple and which (here I have to rely on memory, so I may be
wrong) enables one (Maple?) to say whether a function built from
elementary functions has an elementary integral or not.

Vries de J, Feb 18, 2004
18. ### Timothy M. Brauch (Guest)

It is exactly for these reasons that, on a computer, differentiation is
"harder" than integration. If you look at the definition of the derivative,
lim(dx->0) [f(x+dx)-f(x)]/dx: if dx really is small, almost zero, you are
subtracting two almost equal values f(x+dx) and f(x). This result is then
also close to zero, so you lose many, many digits of accuracy. Then you
are dividing a number that is almost zero by another number that is almost
zero. This again leads to a loss of many digits of accuracy. In fact,
depending on the function f(x), the result could blow up towards positive
infinity or negative infinity with only a small change in how you put it
into your computer.

Integration, on the other hand, is just addition, which for the most part
does not lose any accuracy (sure, there are some cases), and
multiplication, which, depending, can increase accuracy (and occasionally
decrease it).

Differentiation is an example of an ill-posed or unstable problem. Maybe a
more concrete example: think of tan(3^x). For x=1.411 it is positive,
nearly 4000. For x=1.412 it is negative, about -200. Just a small
difference in x and there is a huge difference in the answer.
Differentiation, for a computer, is a lot like this.
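A sketch of the effect, using the standard forward difference and trapezoid rule (nothing library-specific):

```python
import math

def forward_diff(f, x, h):
    # Subtracts two nearly equal numbers, then divides by a tiny one:
    # the two accuracy-destroying steps described above.
    return (f(x + h) - f(x)) / h

def trapezoid(f, a, b, n):
    # Plain summation: numerically benign.
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

x = 1.0
exact_deriv = math.cos(x)  # d/dx sin(x)
errors = {h: abs(forward_diff(math.sin, x, h) - exact_deriv)
          for h in (1e-2, 1e-8, 1e-14)}
# Shrinking h helps at first (truncation error falls), then cancellation
# in f(x+h) - f(x) destroys the answer.

exact_int = 1 - math.cos(1.0)  # integral of sin over [0, 1]
int_error = abs(trapezoid(math.sin, 0.0, 1.0, 1000) - exact_int)
```

The derivative error is smallest near h = 1e-8 and blows back up at h = 1e-14, while the trapezoid sum stays accurate to many digits.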

Here's something else to think about: when you learned about the
exponential and natural logarithm, which did you learn first, and how was
the other one presented to you? In most cases you learn exp first, then ln
as the inverse function of exp. But this is misleading. In fact, exp is
the inverse function of ln. Why? ln exists as the integral of 1/x whether
exp exists or not.

But, don't think about it too much.

- Tim

Timothy M. Brauch, Feb 18, 2004
19. ### Arnold Neumaier (Guest)

More precisely: The subtraction is done without error but it turns the
small relative error in the function values into a small absolute but large
relative error. The division has a small relative error, and has little
effect on the relative error; but it turns the small absolute error
into a large absolute error. The result is useless.

Nevertheless, one can get full accuracy numerical derivatives using
extrapolation procedures, as described in my numerical analysis book.
One just needs care.
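A minimal sketch of one such extrapolation step (standard Richardson extrapolation on the central difference; not necessarily the specific procedure from the book):

```python
import math

def central(f, x, h):
    # Central difference: error O(h^2).
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f, x, h):
    # One Richardson step combines evaluations at h and h/2 to cancel
    # the leading O(h^2) error term, giving O(h^4) accuracy.
    return (4 * central(f, x, h / 2) - central(f, x, h)) / 3

x, h = 1.0, 1e-2
exact = math.cos(x)  # d/dx sin(x) at x = 1
err_central = abs(central(math.sin, x, h) - exact)
err_rich = abs(richardson(math.sin, x, h) - exact)
```

With a moderate h the extrapolated value is several orders of magnitude more accurate than the plain difference quotient, without ever pushing h into the cancellation regime.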

Arnold Neumaier

Arnold Neumaier, Feb 18, 2004
20. ### Arnold Neumaier (Guest)

The rules for differentiation write a derivative in terms of derivatives
of simpler pieces (using fewer letters to represent them), hence the
differentiation of any expression is a finite process.

Already integration by parts (the product rule for integration)
does not have this property.
The theory of differential fields gives a fairly precise answer as to
which classes of expressions can be integrated in terms of elementary
functions (and, if desired, a finite number of new functions defined as
solutions of differential equations, generalizing the definitions of exp,
sin, etc.). Thus it delineates the limits of symbolic integration packages.

Numerically, integration is simpler than differentiation in one dimension,
but in higher dimensions integration suffers from the curse of
dimensionality while differentiation doesn't. In particular, it is very
hard to get accurate integrals in dimensions > 100, say.

This has to do with the global aspects of integration and the infinitesimal
aspect of differentiation.

Arnold Neumaier

Arnold Neumaier, Feb 18, 2004
