The problem of differential calculus

A discussion of geometry and its lack of basis in phenomenal reality reminded me of this problem, mentioned in passing by an authority on the matter, on which I will elaborate a little.

The problem of differentiation from first principles: it appears at first that division by 0 is a problem (cue Dunning-Kruger scoffing), though mathematicians (or those who use maths) tend to brush this aside and tell you to go and learn about “limits”, after which there is no problem. The Penguin Dictionary of Mathematics (presumably hard mathematicians write this stuff) says:

“Differential calculus is concerned with the rates of change of functions with respect to changes in the independent variable. It came out of problems of finding tangents to curves”

A change in the independent variable necessitates a non-zero difference between two points on a curve, hence we have a chord rather than a tangent. A tangent to the curve seems satisfactory to provide an exact value for the slope of the curve at one point, whereby there is a change of ‘direction’ of points entering and leaving the point in question. However, this is exposed as fallacious: we require an infinitesimal approach of neighbouring points, and there can be no variation in the direction of such points.
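
A rough numerical sketch of my own (not from the dictionary): take y = x^2 at x = 1 and let the second point of the chord close in. The separation h is always non-zero, and the chord slope only ever approaches 2.

def f(x):
    return x * x

# chord (secant) slope between x = 1 and x = 1 + h, h shrinking but never 0
for h in [0.1, 0.01, 0.001, 1e-6, 1e-9]:
    print(h, (f(1 + h) - f(1)) / h)  # tends towards 2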

They go on to say:

“In the 1820s, Cauchy put the differential and integral calculus on a more secure footing by using the concept of a limit. Differentiation he defined by the limit of a ratio”

Limit is defined as:

“A value that can be approached arbitrarily closely by the dependent variable when some restriction is placed on the independent variable of a function”

The example is given of ‘the limit of 1/x as x tends to infinity is 0’.

As is clear in this example, the limit will not be reached by the dependent variable. “Arbitrary” is used to describe the lack of a determined value of separation between the DV and the limit; it’s just really, really close. So we do seem to merge, or rather fudge, the necessity of a change in the IV with the reduction of the delta to 0 in the algebra. Not sure about a ‘secure footing’. Perhaps this is why the idea of ‘linear approximation’ is used. The arbitrary limit is not actually reducing the delta to 0, as the algebra would suggest.
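
To put numbers on the dictionary's example (a sketch of my own): 1/x gets as close to 0 as you like, but equals 0 for no finite x.

# the limit of 1/x as x tends to infinity is 0, yet 1/x is never 0
for x in [10, 1000, 10**6, 10**9]:
    print(x, 1 / x)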

by stremba70:

1&onlybillyshears,

I think maybe the problem you are having is that you are focused on infinitesimal quantities. While the notion of infinitesimals was indeed used by Leibniz in his original development of calculus, the modern definition of limit has completely eliminated the idea. If f is any function, then the limit as x->c of f(x) is equal to L if and only if for all possible values epsilon (which can be arbitrarily small, but not infinitesimal) there is some value delta such that |f(c+h)-L| < epsilon for every h with 0 < |h| < delta.
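
In code, the definition can be checked numerically. A sketch of my own, for f(x) = x^2 at c = 2, where L = 4; the choice delta = min(1, epsilon/5) is ad hoc and specific to this f and c:

def f(x):
    return x * x

c, L = 2, 4
for eps in [1.0, 0.1, 0.001]:
    # |x^2 - 4| = |x - 2| * |x + 2| < 5 * delta <= eps when |x - 2| < delta <= 1
    delta = min(1, eps / 5)
    for x in [c + delta / 2, c - delta / 3, c + delta / 10]:
        assert 0 < abs(x - c) < delta and abs(f(x) - L) < eps
    print("eps =", eps, "is met with delta =", delta)

Note that epsilon can be as small as you like, but it is always a plain real number, never an infinitesimal.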

Let's examine this property in PlainSpeak:

"Magnitudes are said to have a ratio to one another which can, when multiplied, exceed one another."

an axiom we are not obliged to adhere to; nonetheless, let's accept it for the sake of argument. Let us also, if agreeable, have an underpinning of classical logic. Probably there will be no agreement, see below re skepticism in education: "Others still can prove that 1/3 = 0.333..., but, upon being confronted by the fractional proof, insist that "logic" supersedes the mathematical calculations."... yes, indeed it does, if we are to avoid strait-jacketing. The offence is ofc continuity and its opposite, discontinuity; rejection of infinitesimals and its opposite in the reduction of delta to 0; indefinite continuation and its opposite, termination, etc. etc.
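
(Read numerically, a sketch of my own: for any two positive magnitudes a and b, some multiple of a exceeds b, however lopsided the pair.)

from fractions import Fraction

def exceeding_multiple(a, b):
    # smallest n with n*a > b; the axiom guarantees the loop terminates
    n = 1
    while n * a <= b:
        n += 1
    return n

print(exceeding_multiple(Fraction(1, 7), Fraction(100)))  # 701, since 701/7 > 100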

I see I am not alone. All power to the student body; the student ultimately surpasses the master. Emperor's clothes etc.

https://en.wikipedia.org/wiki/0.999...#I...

Students of mathematics often reject the equality of 0.999... and 1, for reasons ranging from their disparate appearance to deep misgivings over the limit concept and disagreements over the nature of infinitesimals. There are many common contributing factors to the confusion:

Students are often "mentally committed to the notion that a number can be represented in one and only one way by a decimal". Seeing two manifestly different decimals representing the same number appears to be a paradox, which is amplified by the appearance of the seemingly well-understood number 1.[g]
Some students interpret "0.999..." (or similar notation) as a large but finite string of 9s, possibly with a variable, unspecified length. If they accept an infinite string of nines, they may still expect a last 9 "at infinity".[51]
Intuition and ambiguous teaching lead students to think of the limit of a sequence as a kind of infinite process rather than a fixed value since a sequence need not reach its limit. Where students accept the difference between a sequence of numbers and its limit, they might read "0.999..." as meaning the sequence rather than its limit.[52]
These ideas are mistaken in the context of the standard real numbers, although some may be valid in other number systems, either invented for their general mathematical utility or as instructive counterexamples to better understand 0.999...; see § In alternative number systems below.

Many of these explanations were found by David Tall, who has studied characteristics of teaching and cognition that lead to some of the misunderstandings he has encountered with his college students. Interviewing his students to determine why the vast majority initially rejected the equality, he found that "students continued to conceive of 0.999... as a sequence of numbers getting closer and closer to 1 and not a fixed value, because 'you haven't specified how many places there are' or 'it is the nearest possible decimal below 1'".[23]

The elementary argument of multiplying 0.333... = 1/3 by 3 can convince reluctant students that 0.999... = 1. Still, when confronted with the conflict between their belief in the first equation and their disbelief in the second, some students either begin to disbelieve the first equation or simply become frustrated.[53] Nor are more sophisticated methods foolproof: students who are fully capable of applying rigorous definitions may still fall back on intuitive images when they are surprised by a result in advanced mathematics, including 0.999.... For example, one real analysis student was able to prove that 0.333... = 1/3 using a supremum definition but then insisted that 0.999... < 1 based on her earlier understanding of long division.[54] Others still can prove that 1/3 = 0.333..., but, upon being confronted by the fractional proof, insist that "logic" supersedes the mathematical calculations.

Mazur (2005) tells the tale of an otherwise brilliant calculus student of his who "challenged almost everything I said in class but never questioned his calculator", and who had come to believe that nine digits are all one needs to do mathematics, including calculating the square root of 23. The student remained uncomfortable with a limiting argument that 9.99... = 10, calling it a "wildly imagined infinite growing process".[55]

As part of the APOS Theory of mathematical learning, Dubinsky et al. (2005) propose that students who conceive of 0.999... as a finite, indeterminate string with an infinitely small distance from 1 have "not yet constructed a complete process conception of the infinite decimal". Other students who have a complete process conception of 0.999... may not yet be able to "encapsulate" that process into an "object conception", like the object conception they have of 1, and so they view the process 0.999... and the object 1 as incompatible. They also link this mental ability of encapsulation to viewing 1/3 as a number in its own right and to dealing with the set of natural numbers as a whole.[56]


The Greeks might've entertained 0.999... = 1 for the reason that infinity is an attribute of the One, though equating seems a step too far. It's also ugly.

A strawman in any case. Instantaneous and change are mutually exclusive. The tangent is the linear approximation to the curve at that point, where a point disallows a delta (let's say a non-zero delta to emphasise the craziness) notwithstanding replacement by the roman d.


Have you derived trig from circles for yourself, or worked through the proofs for the exponential function, all that jazz?


The idea that a limit of a converging sequence exists should be inarguable.

What I think some here are stumbling on is how that limit is expressed.

Take a simple example:

9 + 9/10 + 9/100 + ... = 10.

We know exactly what 10 is because 10 is an algebraic number that can be defined exactly by algebraic processes. Nobody ever describes 10 as a limit of a converging sequence because it is unnecessary.
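
A quick sketch of my own with exact rationals, showing how the partial sums fall short of 10 by an exactly known amount:

from fractions import Fraction

# partial sum of the first 12 terms of 9 + 9/10 + 9/100 + ...
total = sum(Fraction(9, 10**k) for k in range(12))
print(float(total))  # 9.99999999999
print(10 - total)    # 1/100000000000, the exact shortfall after 12 terms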

Now, take this series:

4 - 4/3 + 4/5 - 4/7 + 4/9 - ... = ???

The limit is pi, but pi cannot be expressed exactly algebraically. Hence humans will never know exactly what pi is, and likewise for all transcendental numbers.

Basically, when anyone discusses pi they are actually describing a limit of a converging sequence.

So one should be able to recognize that limits of converging sequences work for all real numbers, both those we can express exactly and those we cannot.
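
A sketch of my own using the standard math module; this alternating series converges slowly, roughly one digit per tenfold increase in terms:

import math

# partial sum of 4 - 4/3 + 4/5 - 4/7 + ... over the first million terms
total = 0.0
for k in range(10**6):
    total += (-1) ** k * 4 / (2 * k + 1)
print(total)    # agrees with pi to about six decimal places
print(math.pi)  # 3.141592653589793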


by 1&onlybillyshears:

The Greeks might've entertained 0.999... = 1 for the reason that infinity is an attribute of the One, though equating seems a step too far. It's also ugly.

A strawman in any case. Instantaneous and change are mutually exclusive. The tangent is the linear approximation to the curve at that point, where a point disallows a delta (let's say a non-zero delta to emphasise the craziness) notwithstanding replacement by the roman d.

Let z=0.999… What is the value of 1-z? If they are not equal, then there must be some c<>0 such that c=1-z. Let (0.9)n be equal to a decimal point followed by n 9’s. Clearly for any n, (0.9)n < z, and 1 - (0.9)n = 10^-n.

But c<>0 by assumption. Remembering how c was defined, we take c>0 (if we don’t think 0.999… is equal to 1, surely it should be <1, since all of the finite partial sums are). So we have c<10^-n for all n. Multiply both sides by 10^n and you get 10^n * c < 1 for all n. But for any x, no matter how large, 10^n > x for sufficiently large n. By the Archimedean property there exists an x such that for any b>x, bc>1. We just showed that there exists n such that 10^n > x, so this implies that for some n, 10^n * c > 1.

But this is a contradiction, since we found above that 10^n * c < 1 for all n. Our assumption that c<>0 is therefore incorrect, and c=0. But c = 1-0.999…, hence 1=0.999….
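
A numerical illustration of the collision (mine; c = 1/10^50 is an arbitrary stand-in for a supposedly positive 1 - z):

from fractions import Fraction

c = Fraction(1, 10**50)  # any fixed positive c meets the same fate
n = 0
while 10**n * c < 1:
    n += 1
print(n)  # 50: here 10^n * c reaches 1, so "10^n * c < 1 for all n" fails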

The Greeks’ opinion is irrelevant. Proper logical inference from the axioms of real numbers is what matters, not the writings of people who lived over 2000 years ago. And BTW, calculus as a branch of mathematics does not incorporate the ideas of point or instantaneous change - these are descriptions of physical uses for the derivative. Calculus is formulated mathematically using only the concepts of limit, real number, and function. The derivative of a given function is a new function that is defined as a limit of a quantity calculated using the given function. We regard it as a rate of change because we make sense of functions and their behavior by drawing graphs, looking at curves, etc. The derivative happens to be the slope of the tangent line to a curve that represents the given function, but that is NOT what the derivative actually is. Even if we had no notion of analytic geometry and couldn’t draw curves and tangents to represent functions and derivatives, we would still be able to define derivatives. Any issues arising from that analytic-geometrical representation are not problematic for calculus.


by wazz:

Have you derived trig from circles for yourself, or worked through the proofs for the exponential function, all that jazz?

naturally.


by stremba70:

Let z=0.999… What is the value of 1-z? If they are not equal, then there must be some c<>0 such that c=1-z. Let (0.9)n be equal to a decimal point followed by n 9’s. Clearly for any n, (0.9)n

But c<>0 by assumption. Remembering how c was defined, we take c>0 (if we don’t think 0.999… is equal to 1, surely it should be <1, since

gotcha alright, already!

The Greeks’ opinion is irrelevant. Proper logical inference from the axioms of real numbers is what matters, not the writings of people who lived over 2000 years ago. And BTW, calculus as a branch of mathematics does not incorporate the ideas of point or instantaneous change - these are descriptions of physical uses for the derivative. Calculus is formulated mathematically using only the concepts of limit, real number, and function. The derivative of a given function is a new function that is defined as a limit of a quantity calculated using the given function. We regard it as a rate of change because we make sense of functions and their behavior by drawing graphs, looking at curves, etc. The derivative happens to be the slope of the tangent line to a curve that represents the given function, but that is NOT what the derivative actually is. Even if we had no notion of analytic geometry and couldn’t draw curves and tangents to represent functions and derivatives, we would still be able to define derivatives. Any issues arising from that analytic-geometrical representation are not problematic for calculus.

Rate of change is fundamental ofc.

derivative
from The Penguin Dictionary of Mathematics

The rate of change of a function with respect to the independent variable...


by 1&onlybillyshears:

gotcha alright, already!

Rate of change is fundamental ofc.

That’s a useless definition, or at least an incomplete one. What does “rate of change of a function with respect to the independent variable” mean? We have a good intuitive grasp of what we think that means, but for formal mathematics that isn’t good enough. A precise definition is needed. That precise definition is

lim h->0 [f(x+h) - f(x)]/h.

I have defined limit already and shown that there is nothing problematic about that definition. It uses only logical quantifiers and operations on real numbers. The function f is taken as given, and the only remaining component of this definition is standard operations of arithmetic. These are valid because, under the definition of limit, when we take the limit as h goes to zero, h is never allowed to actually be zero. Therefore division by h is not problematic. The modern formalism was developed specifically to deal with the issues you raised. The idea of infinitesimals no longer plays any role in calculus.
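
Numerically (a sketch of my own, for f = sin at x = 0, where the limit is cos(0) = 1): h shrinks but is never zero, so the division is always legitimate.

import math

# difference quotient [f(x+h) - f(x)]/h for f = sin at x = 0
for h in [0.1, 0.001, 1e-6, 1e-9]:
    print(h, (math.sin(0 + h) - math.sin(0)) / h)  # tends towards 1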
