The problem of differential calculus
A discussion of geometry and its lack of basis in phenomenal reality reminded me of this problem, mentioned in passing by an authority on the matter, which I have elaborated on a little.
The problem of differentiation from first principles: it appears at first that division by 0 is a problem (cue Dunning-Kruger scoffing), though mathematicians (or those who use maths) tend to brush this aside and tell you to go and learn about “limits”, after which there is no problem. The Penguin Dictionary of Mathematics (presumably hard mathematicians write this stuff) says:
“Differential calculus is concerned with the rates of change of functions with respect to changes in the independent variable. It came out of problems of finding tangents to curves”
A change in the independent variable necessitates a non-zero difference between two points on the curve, hence we have a chord rather than a tangent. A tangent to the curve seems a satisfactory way to provide an exact value for the slope of the curve at a single point, whereby there is a change of ‘direction’ between points entering and leaving the point in question. However, this is exposed as fallacious: since we require an infinitesimal approach of neighbouring points, there can be no variation in the direction of such points.
They go on to say:
“In the 1820s, Cauchy put the differential and integral calculus on a more secure footing by using the concept of a limit. Differentiation he defined by the limit of a ratio”
Limit is defined as:
“A value that can be approached arbitrarily closely by the dependent variable when some restriction is placed on the independent variable of a function”
The example is given of ‘the limit of 1/x as x tends to infinity is 0’.
As is clear in this example, the limit will not be reached by the dependent variable. “Arbitrary” describes the lack of a determined value of separation between the DV and the limit; it's just really, really close. So we do seem to merge, or rather fudge, the requirement of a change in the IV with reducing the delta to 0 in the algebra. Not sure about a ‘secure footing’. Perhaps this is why the idea of ‘linear approximation’ is used. The arbitrary limit is not actually reducing the delta to 0 as the algebra would suggest.
Have you actually ever looked at formal definitions of limits? For your example (limit of 1/x = 0 as x goes to infinity), we obviously can't take x to actually BE infinite. We set a value of delta, very close to zero, say 0.001, and then prove that for some large enough value of x, 0 < 1/x < delta.
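A minimal Python sketch of that check, with epsilon = 0.001 chosen purely as an example:

epsilon = 0.001              # the "delta, very close to zero" from the post
x = 1 / epsilon + 1          # any x larger than 1/epsilon will do
print(0 < 1 / x < epsilon)   # True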
Regarding the limit of the quotient for differentiation it is similar. We can't divide by zero, but we can divide by numbers very close to zero. For example, suppose we want to find the derivative of f(x) = x^2 at x = 2. We take some very small delta, again say 0.001, and calculate {f(2+delta) - f(2)}/delta. Numerically, (2.001^2 - 4)/0.001 = 4.001. We notice that this is close to 4. We try another delta, 0.000001, make the same calculation, and get 4.000001. We generalize by saying that for ANY epsilon, no matter how small, we can find a delta such that our differential quotient differs from 4 by no more than this epsilon. As shown above, for epsilon = 0.001 we can use any delta < 0.001; for epsilon < 0.000001 we must use delta < 0.000001. More generally, we can use algebra to express our differential quotient: {(x+h)^2 - x^2}/h = (x^2 + 2xh + h^2 - x^2)/h = (2xh + h^2)/h = 2x + h. This proves that our differential quotient differs from 2x by exactly the value h - that is, if we take epsilon = h above we must use delta <= h.
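Since the algebra gives exactly 2x + h, the gap between the quotient and 4 at x = 2 is exactly the delta used, which is easy to confirm with exact rational arithmetic; a small Python sketch, assuming only the standard library:

from fractions import Fraction

def f(x):
    return x * x

# Exact rational arithmetic: the quotient comes out as exactly 4 + delta,
# so its gap from 4 is exactly the delta used.
for delta in (Fraction(1, 1000), Fraction(1, 10 ** 6)):
    quotient = (f(2 + delta) - f(2)) / delta
    print(quotient, quotient - 4)    # 4001/1000 1/1000, then 4000001/1000000 1/1000000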
Note that none of this requires a notion of “arbitrarily close”, nor does it require any division by zero. It only requires a proof that for ANY epsilon we can find SOME delta such that the quotient differs from the limit by no more than epsilon when that delta is used to calculate the quotient.
Hell, why even use a metric? In a topological space, x is a limit of the sequence x_1, x_2, ... if for every neighbourhood of x there is some i such that, for all j > i, x_j is in the neighbourhood.
It works the same as defining 10 as 10 or 9.9 recurring. It doesn't matter that you're never actually dividing 0 by 0 to get to the gradient. The limit process allows us to bypass that. Thus, dy/dx actually = m. It is not an approximation.
You may be interested to learn about 'rough path theory'. These are the theoretical cases in which the above is not always true. For all continuous functions, the above is true.
You might be interested in the set theoretic construction of the Real Numbers from the rational numbers by way of Dedekind Cuts.
"A Dedekind cut is a partition of the rational numbers into two sets A and B, such that each element of A is less than every element of B, and A contains no greatest element. The set B may or may not have a smallest element among the rationals."
Wiki-
https://en.wikipedia.org/wiki/Dedekind_c....
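As a rough sketch of the idea (the predicate below is the classic lower set for sqrt(2); the helper name and the halving search are just illustrative choices), a Dedekind cut can be modelled in Python as a membership test on the rationals, and the "no greatest element" clause can be probed directly:

from fractions import Fraction

# The lower set A of the cut that pins down sqrt(2): negative rationals plus
# those whose square is below 2.
def in_A(q):
    return q < 0 or q * q < 2

# A has no greatest element: for any q already in A, a halving search finds a
# strictly larger rational that is still in A.
def something_bigger_in_A(q):
    step = Fraction(1, 2)
    while not in_A(q + step):
        step /= 2
    return q + step

q = Fraction(7, 5)                      # 1.4 squares to 1.96 < 2, so it is in A
print(in_A(q), something_bigger_in_A(q))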
liminf and limsup are fun concepts.
PairTheBoard
This merely restates the definition I cited above, which you appear not to have read when you ask me to look at the definition of limit, albeit with a lot more waffle added in for good measure. You have just expanded on what they mean by arbitrary.
I also predicted the "go look at limits" scoffing. A mathematician of some authority pointed the above problem out from a much more informed perspective, but this is of course the instinctive response.
You cannot have 9.9 recurring as stremba inelegantly tells us. You also cannot have dy/dx = m. The gradient of the tangent is not the same as the gradient of the curve, for the same reasons as before. For some reason we cannot be happy with an approximation here. I don't see why not.
Stremba said nothing of the sort. 9.9 recurring is the same as 3 x 3.3 recurring which is the same as 3 x 3 and a third which is the same as 10. You can have dy/dx = m, because that is the only way to derive it. That's what it is. Unless you can disprove that or come up with a 'real' way to find the gradient of a curve, all you're doing is saying 'it doesn't work' when it's been conclusively mathematically proven that it does work and is not an approximation. This isn't trial and error where you rabbit and tortoise your way to a best guess until you get too bored. This is *finding the gradient using calculus*. No guesswork, no fudges. At this point you're just making up problems using words, in a way that's approaching the direction of Terence Howard.
You want to grapple with a real version of this problem, you want -1/12. Or look up rough path theory.
I believe Isaac Newton developed calculus using infinitesimals. While infinitesimals make for a more awkward mathematics, I think it's been shown that a robust number system can be developed which includes them. I think Leibniz developed calculus independently using the dy/dx notation. The results are the same as with the modern use of limits.
Wiki - Infinitesimals
PairTheBoard
You can if you regard it as a potentially infinite sequence, e.g. the output of a non-halting computer program. That 10 is the limit of the sequence 9, 9.9, 9.99, ... is also well-defined.
It is infinite sequences where there is no rule for generating one value after another that do not have limits, despite the consensus that they do.
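Read that way, "well-defined" cashes out as: for any epsilon you name, there is an index beyond which every term of 9, 9.9, 9.99, ... stays within epsilon of 10. A small Python sketch of that search, using exact rationals to avoid floating-point quibbles (the epsilon is just an example):

from fractions import Fraction

def term(n):                          # n = 0, 1, 2, ... gives 9, 9.9, 9.99, ...
    return 10 - Fraction(1, 10 ** n)

def index_for(epsilon):
    n = 0
    while abs(term(n) - 10) >= epsilon:
        n += 1
    return n

print(index_for(Fraction(1, 1000)))   # 4: every term from index 4 onward is within 1/1000 of 10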
Well that’s because the definition of derivative is a perfectly well-founded mathematical definition. I’m not sure why you think it isn’t. It does not involve division by zero. It does not involve the notion (which I agree is undefined) of “arbitrarily close”. It involves standard mathematical operations that are well-defined and standard logical quantifiers (“all” and “some”), so I fail to see what the problem with the standard definition of either the derivative or limit is.
Also, 0.999 repeating was not something I recall mentioning, but it is equal to 1 exactly per the axioms of the real numbers. It is actually a representation of an infinite sum: 9/10 + 9/100 + 9/1000 + … The value of such an infinite sum can only exist if the set of partial sums is bounded. If it is, then the value of the sum is the supremum of that set of partial sums (which is guaranteed to exist via the completeness axiom). For any bounded set, an upper bound is any value greater than or equal to all members of the set. There are infinitely many upper bounds; the supremum is the least of these.
In the specific example of the sum 9/10 + 9/100 + 9/1000 + …, the partial sums are 9/10, 99/100, 999/1000, etc. The nth partial sum can be expressed as (10^n - 1)/10^n. Clearly 1 is an upper bound of this set since 10^n - 1 < 10^n. We must prove that there is no lesser value that is an upper bound for this set.
To do so, assume that x < 1 is an upper bound. Then for any n, (10^n - 1)/10^n <= x, i.e. 1 - 1/10^n <= x, which means 1/10^n >= 1 - x > 0 for every n. But 1/10^n can be made smaller than any fixed positive quantity by taking n large enough, which is a contradiction. So no x < 1 is an upper bound, 1 is the supremum of the partial sums, and the value of the sum is exactly 1.
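The squeeze is easy to see numerically as well; a short Python sketch of the partial sums, using exact rationals (the particular n values are arbitrary):

from fractions import Fraction

# Partial sums of 9/10 + 9/100 + 9/1000 + ...: the nth is (10^n - 1)/10^n,
# and its gap below 1 is exactly 1/10^n, which drops below any epsilon you pick.
for n in (1, 2, 3, 6):
    s_n = Fraction(10 ** n - 1, 10 ** n)
    print(n, s_n, 1 - s_n)            # the gap 1 - s_n is exactly 1/10^n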
Excellent point to illustrate the problem. In order to equate 0.3 recurring with 1/3, infinity is required. "Gradient of a curve" is the very problem ofc, the "Instantaneous Rate of Change"; the universe doesn't like us doing this.
Stumbling into a rabbit hole known as Bishop Berkeley now...
Ya I believe Newton developed x dot notation, i.e. dx/dt, being interested in rates of change. Prime notation was Lagrange.
Seems a problem ultimately of rest and motion... being and becoming perhaps.
Here is an example of a sequence where there is no rule for generating successive values.
Let x_0 = 0, and begin flipping a coin indefinitely. For n > 0, let x_n = x_(n-1) + 1/(2^n) if the nth flip is Heads, and let x_n = x_(n-1) - 1/(2^n) if the nth flip is Tails.
For example, if the sequence of coinflips begins H, T, H, H, T, ..., then the sequence of numbers obtained begins 0, 1/2, 1/4, 3/8, 7/16, 13/32....
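For concreteness, here is a small Python sketch of that construction (the function name and the pseudo-random flips are illustrative only; any actual run only ever produces finitely many terms):

import random
from fractions import Fraction

# x_0 = 0, then add or subtract 1/2^n according to the nth flip.
def random_sequence(num_flips):
    x, sequence = Fraction(0), [Fraction(0)]
    for n in range(1, num_flips + 1):
        sign = 1 if random.random() < 0.5 else -1
        x += Fraction(sign, 2 ** n)
        sequence.append(x)
    return sequence

print([str(x) for x in random_sequence(5)])
# e.g. ['0', '1/2', '1/4', '3/8', '7/16', '13/32'] if the flips come up H, T, H, H, T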
The sequence of numbers is always a Cauchy sequence, and it has been proved that every such sequence of real numbers has a limit. But I don't think it makes sense to say that sequences obtained without rules have limits, because it makes no sense to speak of all the coinflips, or to know in advance the outcomes of coinflips. Even if it did make sense, it is certainly impossible to know what the limits are.

The thing is, if you're in a car under constant acceleration from 0 to 60mph, the graph of your miles traveled wrt time elapsed, M(t), will be a parabola. Then your speed at time t will be the derivative of the parabola, M'(t). Suppose you reach 30mph at time t=1. You can declare that there's no such thing as an instantaneous rate of change for the parabola at t=1, but it's hard to ignore the fact that as you watch your speedometer it passes through the mark 30mph at time 1.
I think this rejection of the derivative as an impossible instantaneous rate of change is another form of Zeno's Paradox. Of course, under that perspective the car ride above couldn't even get started. Not only does Achilles never catch the Tortoise, but the arrow never leaves the bow, Achilles never starts the race, and the car never moves.
PairTheBoard
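To put a number on the speedometer picture: taking M(t) = 15*t^2 (one concrete parabola that fits the story, with units glossed over) gives M'(t) = 30*t, so the reading at t = 1 is 30, and the difference quotient closes in on it. A small Python sketch:

from fractions import Fraction

# One concrete parabola matching the story: M(t) = 15*t^2, so M'(t) = 30*t and
# the speedometer reads 30 at t = 1.
def M(t):
    return 15 * t * t

for h in (Fraction(1, 10), Fraction(1, 1000), Fraction(1, 10 ** 6)):
    print(h, (M(1 + h) - M(1)) / h)   # exactly 30 + 15*h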
Y'all might like these. This is the first of a five-part series on the Pythagorean theory of number and numeration. The rest can be found on the Manly Hall Society YouTube channel (Manly P. Hall, masterful speaker of the 33rd degree).
Are you making either claim, that a) you know this subject matter better than we do, or that b) this subject matter refutes what we've said and backs up what you've said?
the royal 'we'? Regarding the original question, I have it on certain authority - from a mathematics academic - that a problem exists. This agrees with my instinct also. It may be that the royal we have it on higher authority still that the problem is resolved, though nothing here indicates this imo. An interesting thing nonetheless.
Re the Pythagoreans, this is not directly related to the problem, though it could offer some deeper insight. It is fascinating in any case and might appeal to the philosophically minded here. If you think maths is just some functional thing to make stuff work then you might not be interested. The principal idea is the symbolism of number. He explains first the importance of the monad, the most important number, representing one-ness, totality, indivisibility, divinity etc. Then the duad: division, separateness, the fall into generation etc; the Pythagoreans despised the number 2. Such things could be important when considering problems of infinity: infinity is a symbol which refuses to partake of the finite, making it impossible to reconcile problems such as calculus.
If a problem exists here, you haven't found it. Second-hand authority isn't worth much, I'm afraid. The world of maths is tricky and counter-intuitive, so we can't play by our own intuition. Our own intuition says 'nothing can be infinitesimally small', but we get round that by just saying it does. That's what we're doing a lot of in maths. Just saying 'what if', and following the logic. It really doesn't matter that we can't do it in reality either. The maths doesn't care. The gradient is derived from a curve with the exact same process we derive the gradient of a straight line. That holds true whether it's a square or exponential or a trig function.
There are plenty of holes in maths. -1/12, Godel's incompleteness. If there's a hole at the bottom of differential calculus, we'd all be delighted to see it.😮
If you want to get into numerology, I suggest Kabbalah. Or the story of how we discovered the number 0. Or debates about what's the biggest number you could create within a limited dataframe. Or Planck Length, and whether that implies the universe operates on a grid formation.
Have you ever derived for yourself the rules of calculus, i.e. why differential equations work the way they do, and integrals?
No-thing can absolutely be infinitesimally small. It is 'things' such as the distance between 2 points that cannot. (yes, yes, it's all theoretical and yes yes there's no such 'thing' as a point, and we can do what we want etc but we must remain within our own laws or else we are being illogical, which is an unstated ground rule I assume we have here).
Second-hand authority is what 99.999% (yes that decimal terminates) of so-called mathematicians have. I throw it in because the discussion is not something that can be dismissed with 'limits don't cha know'. Because at least one person who has studied limits in great depth is unsatisfied. I have done my best to outline the problem by citing a source on the definitions of the terms which expose the contradiction quite well. You choose to ignore this, your prerogative.
We can ofc apply our own intuition to anything and everything, particularly mathematics which is inherently, let's say transcendental... We intuitively acknowledge the infinite in both directions, the small and the big or else we end up in straight-jackets... We can agree there are things we can do mathematically that cannot be done 'in reality'. Then we must have a meaningful definition of 'reality' - let's say 'in reality' means time dependent phenomena, in which case you have no leg to stand on.
How do you mean derived the rules of calculus - from first principles? Yes, naturally, that is rudimentary. "The gradient of a curve is derived from the same process as a straight line"? This is senseless. The gradient of a straight line is simply delta y/delta x - that's big bad capital Delta, make it as big as you like. The gradient of the curve is the gradient of the tangent to the curve, a straight line, yes, which is the linear approximation to the curve (that again is a citation from the authoritative literature; ignore it if you wish).
I am a student of mathematics, I am currently studying differential equations, integrating vector fields etc, but this is all applied stuff as it relates to physics, which is my core study. We use 'real' numbers (as in, measured values) so all this is really immaterial to my primary concern since the sig figs we are dealing with are long since abandoned. Probably herein lies the problem, the calculus was developed as a tool for rates of change etc, not as a pure mathematical pursuit.
Like in programming, we get to create our own laws. In our laws, 'infinitesimally small' gets to exist in a mathematical sense that we can do things with. This is all fully logical and consistent. I have no more ways to say that to you.
I choose to ignore this because all you've presented is 'I know a guy with authority that has a problem with this' and all you've done is vaguely handwave in its direction. Bring us the problem. All you've brought is the marketing for the problem. We want some meat on the bone. That's my prerogative - I'm not going to sit at the dinner table and look at an empty plate and pretend it's a meal. WHERE'S MY BURRITO?
1&onlybillyshears,
I think maybe the problem you are having is that you are focused on infinitesimal quantities. While the notion of infinitesimals was indeed used by Leibniz in his original development of calculus, the modern definition of limit has completely eliminated the idea. If f is any function, then the limit as x -> c of f(x) is equal to L if and only if for all possible values epsilon (which can be arbitrarily small, but not infinitesimal) there is some value delta such that |f(c+delta) - L| < epsilon.
The derivative is then simply defined in terms of a limit, namely the limit as h -> 0 of [f(x+h) - f(x)]/h. Again, since this is simply a limit as defined above, there is no need for any infinitesimal quantities.
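To make the quantifier order concrete, here is a rough Python sketch (purely illustrative: checking finitely many offsets can never prove the "for all", only fail to refute it; the function names and sample values are arbitrary):

def square(x):
    return x ** 2

# Given an epsilon, propose a delta and spot-check the difference quotient at a
# few offsets h with 0 < h < delta.
def quotient_within(f, x, L, epsilon, delta, samples=5):
    offsets = [delta / 10 ** k for k in range(1, samples + 1)]
    return all(abs((f(x + h) - f(x)) / h - L) < epsilon for h in offsets)

print(quotient_within(square, 2.0, 4.0, epsilon=0.001, delta=0.0005))   # True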
Indeed, if you read my earlier post regarding the proof that 0.9 (repeating) = 1, it included a property of the real numbers that contradicts the existence of infinitesimal values, namely the Archimedean property. In case you didn't read or remember it, this is the property that for any positive reals a and c there exists b such that ab > c. The negation of this property can be used as a reasonable definition of infinitesimal: if a is infinitesimal then there exists c such that for any b, ab <= c.
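Concretely, the Archimedean property says any positive a, however small, can be multiplied past any c; a tiny Python sketch with exact rationals (the particular a and c are just examples):

from fractions import Fraction
from math import floor

# For positive a and c, the integer b = floor(c/a) + 1 already gives a*b > c,
# however small a is.
def archimedean_witness(a, c):
    b = floor(c / a) + 1
    return b, a * b > c

print(archimedean_witness(Fraction(1, 1000), 10000))   # (10000001, True)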
The issue is stated in the OP. Y'all cried about the definition of a limit being a value that can be approached arbitrarily closely by the DV and rejected this outright, with nothing understandable forthcoming. Perhaps you ought to plainly state, or better still quote, as I have done, something resembling readable sense.
Okay