Conditions for differentiability in many variables

For single-variable functions we learn that being differentiable implies being continuous. We also learn that being continuous doesn't imply being differentiable, because there are counterexamples. For multivariable functions differentiability also requires continuity, but there are cases where we can calculate partial derivatives at a point even though the function is discontinuous there.

[math]\displaystyle{ f(x) = |x| }[/math]. This single-variable example shows that the limit exists at the origin and the function is continuous there, and yet we can't differentiate there because the rates of change to the left and to the right don't match. Visually, there is a sharp edge at the origin. [math]\displaystyle{ f(x,y) = \sqrt{x^2 + y^2} }[/math] shows the same behaviour in two variables: the function is continuous at the origin, but the sharp edge (the tip of a cone) means that we can't differentiate it there.
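
A quick numerical sketch of the mismatch (the step size and variable names here are my own, chosen for illustration): estimating the one-sided difference quotients of [math]\displaystyle{ f(x) = |x| }[/math] at the origin produces two different rates of change.

<syntaxhighlight lang="python">
# Numerical check: one-sided difference quotients of f(x) = |x| at the origin.
# The mismatch between the right and left rates is the "sharp edge".

def f(x):
    return abs(x)

h = 1e-6
right = (f(0 + h) - f(0)) / h     # forward quotient, tends to +1
left = (f(0 - h) - f(0)) / (-h)   # backward quotient, tends to -1
print(right, left)  # 1.0 -1.0: the one-sided rates disagree, so f'(0) doesn't exist
</syntaxhighlight>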

[math]\displaystyle{ f(x,y) = \begin{cases} 0 & \text{if } (x,y) = (0,0) \\ \dfrac{xy}{x^2 + y^2} & \text{if } (x,y) \neq (0,0) \end{cases} }[/math]

If we calculate [math]\displaystyle{ \frac{\partial f(0,0)}{\partial x} }[/math] and [math]\displaystyle{ \frac{\partial f(0,0)}{\partial y} }[/math] using the definition, we find that both exist (in fact both are zero, since the function vanishes along both axes). But we also know, from calculating limits along different paths, that the function is discontinuous at the origin. With this example we have shown that the existence of partial derivatives doesn't guarantee that the function is continuous.
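
A minimal numerical sketch of this phenomenon (the step sizes and paths are my own choices): the partials at the origin come out as zero, while two different paths towards the origin give two different limiting values.

<syntaxhighlight lang="python">
# Sketch: the partial derivatives at the origin exist by the limit definition,
# yet the limit of f along different paths disagrees, so f is discontinuous there.

def f(x, y):
    if (x, y) == (0, 0):
        return 0.0
    return x * y / (x**2 + y**2)

h = 1e-8
df_dx = (f(h, 0) - f(0, 0)) / h  # f(h, 0) = 0 for all h, so this quotient is 0
df_dy = (f(0, h) - f(0, 0)) / h  # likewise 0
print(df_dx, df_dy)              # 0.0 0.0: both partials exist

# Approach the origin along y = 0 and along y = x:
print(f(1e-8, 0))     # 0.0
print(f(1e-8, 1e-8))  # 0.5: a different value, so the limit (and continuity) fails
</syntaxhighlight>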

Differentiability

From a geometrical perspective, when a single-variable function is differentiable it means that we can approximate it by a straight line in a small interval around a point. That's the idea of finding the tangent line. For two variables, what is the equivalent of a line with one more dimension? It's a plane. Therefore, for a multivariable function to be differentiable it has to admit a linear approximation, a tangent plane around a point (beyond 3D the algebra still works, but we can't visualize the function). If the function's rate of change in some direction is abrupt, much like a sharp edge, then we can't use a linear approximation there and the function is not differentiable at that point.
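
As a concrete example (mine, not from the original text): for the smooth function [math]\displaystyle{ f(x,y) = x^2 + y^2 }[/math] at the point [math]\displaystyle{ (1,1) }[/math], the partial derivatives are [math]\displaystyle{ f_x = 2x = 2 }[/math] and [math]\displaystyle{ f_y = 2y = 2 }[/math], so the tangent plane is

[math]\displaystyle{ Z = f(1,1) + 2(x - 1) + 2(y - 1) = 2 + 2(x - 1) + 2(y - 1) }[/math]

Near [math]\displaystyle{ (1,1) }[/math] the paraboloid and this plane are nearly indistinguishable, which is exactly what differentiability means geometrically.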

For one variable we have to prove that [math]\displaystyle{ \lim_{x \ \to \ x_0} f(x) = f(x_0) }[/math], which is precisely the definition of continuity at [math]\displaystyle{ x_0 }[/math]; the proof relies on the derivative's formal definition. With the linear approximation, what we do is write the difference between the function and its tangent line near a point as an error term. When the error vanishes faster than the increment, the linear approximation and the function agree. For two and more variables the idea is the same:

[math]\displaystyle{ \lim_{(x, \ y) \ \to \ (x_0,\ y_0)}f(x,\ y) = f(x_0,\ y_0) }[/math].
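
For reference, the one-variable error form (a standard identity, spelled out here because the text above only alludes to it) makes the mechanism transparent:

[math]\displaystyle{ f(x_0 + h) = f(x_0) + f'(x_0)h + \varepsilon(h)h, \quad \varepsilon(h) \to 0 \ \text{as} \ h \to 0 }[/math]

Letting [math]\displaystyle{ h \to 0 }[/math] kills every term on the right except [math]\displaystyle{ f(x_0) }[/math], which is exactly continuity. The multivariable proof below mirrors this structure.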

With multivariable functions the derivatives have direction, which means that partial derivatives won't suffice. We need the directional derivative for the proof to work, because it captures rates of change in arbitrary directions.

[math]\displaystyle{ f(x,y) = \sqrt{x^2 + y^2} }[/math]. This function is continuous at the origin, but not differentiable there. If we try infinitely many paths towards the origin, all of them yield the same limit, zero. However, if we restrict the function to paths parallel to the axes, as partial derivatives do, we see that it behaves exactly like [math]\displaystyle{ f(x) = |x| }[/math]: we get [math]\displaystyle{ f(x,0) = \sqrt{x^2 + 0} = |x| }[/math] and [math]\displaystyle{ f(0,y) = \sqrt{0 + y^2} = |y| }[/math]. From there we calculate the derivatives to the right and to the left as we'd do with [math]\displaystyle{ f(x) = |x| }[/math] and conclude that the rates of change on the two sides differ. This should explain why partial derivatives alone aren't suitable to prove that if a multivariable function is differentiable, it's continuous too.
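
A short sketch of both claims (my own check, with arbitrarily chosen paths and step sizes): the value goes to zero along any path, but the difference quotient along the x-axis has mismatched one-sided limits.

<syntaxhighlight lang="python">
import math

def f(x, y):
    return math.hypot(x, y)  # sqrt(x**2 + y**2)

# Along any path to the origin the value shrinks to 0, matching f(0,0): continuity.
for t in (1e-2, 1e-4, 1e-8):
    print(f(t, t), f(t, -2 * t))  # both columns tend to 0

# Restricted to the x-axis, f(x, 0) = |x|: one-sided quotients at 0 disagree.
h = 1e-6
print((f(h, 0) - f(0, 0)) / h)     # +1.0
print((f(-h, 0) - f(0, 0)) / -h)   # -1.0: the partial derivative fails to exist
</syntaxhighlight>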

Proof that differentiability implies continuity

We say that [math]\displaystyle{ z = f(x,y) }[/math] is differentiable at [math]\displaystyle{ (x_0,y_0) }[/math] if there is a plane with equation

[math]\displaystyle{ Z = f(x_0,y_0) + A(x - x_0) + B(y - y_0) }[/math]

containing the point [math]\displaystyle{ ((x_0,y_0),f(x_0,y_0)) }[/math], such that the difference [math]\displaystyle{ f(x,y) - Z }[/math] is an infinitesimal of higher order than

[math]\displaystyle{ r = \sqrt{(x - x_0)^2 + (y - y_0)^2} }[/math] when [math]\displaystyle{ r \to 0 }[/math]

In case you got lost with the equation of the plane, don't look for two direction vectors and their coordinates. With a function we walk over its surface by incrementing the variables by some quantity. In the case of a function of two variables we have two coordinates and two increments, one for each direction. We can say that [math]\displaystyle{ (x - x_0) }[/math] is one small step in the direction of [math]\displaystyle{ x }[/math], while [math]\displaystyle{ A }[/math] is the rate that scales that step. Repeat for [math]\displaystyle{ B }[/math] and [math]\displaystyle{ y }[/math]. Many textbooks make the substitution [math]\displaystyle{ h = x - x_0 }[/math] and [math]\displaystyle{ k = y - y_0 }[/math]. I'm adopting the same substitution here, with [math]\displaystyle{ \eta }[/math] denoting the error relative to the radius:

[math]\displaystyle{ \eta = \frac{f(x_0 + h,y_0 + k) - f(x_0, y_0) - Ah - Bk}{r} }[/math]

In the same way we consider an infinitely small interval around a point for a single variable, we consider an infinitely small circle around a point for two variables. The difference [math]\displaystyle{ f(x,y) - Z }[/math] is the vertical gap between the function and the plane, and it should be an infinitesimal of higher order than the radius. If you look closely, the terms that make up the numerator must add up to a quantity that shrinks to zero faster than the denominator, so that [math]\displaystyle{ \eta \to 0 }[/math] as [math]\displaystyle{ r \to 0 }[/math]. That is the defining condition of differentiability.
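
As a concrete check (an example of mine, not from the original text): take [math]\displaystyle{ f(x,y) = x^2 + y^2 }[/math] at the origin with [math]\displaystyle{ A = B = 0 }[/math], so the candidate plane is [math]\displaystyle{ Z = 0 }[/math]. Then

[math]\displaystyle{ \eta = \frac{(x^2 + y^2) - 0}{\sqrt{x^2 + y^2}} = r \to 0 \ \text{as} \ r \to 0 }[/math]

so the paraboloid is differentiable at the origin and its tangent plane there is horizontal.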

Now we want the tangent plane to coincide with the function at the given point, which equates to calculating the limit as [math]\displaystyle{ r \to 0 }[/math]. But the radius depends on the variables [math]\displaystyle{ x }[/math] and [math]\displaystyle{ y }[/math], which means that making the radius approach zero is the same as [math]\displaystyle{ x \to x_0 }[/math] and [math]\displaystyle{ y \to y_0 }[/math]. Therefore, solving the definition of [math]\displaystyle{ \eta }[/math] for the function (a quick algebraic manipulation):

[math]\displaystyle{ f(x_0 + h, y_0 + k) = f(x_0,y_0) + Ah + Bk + \eta r }[/math]

Calculating the limit as [math]\displaystyle{ r \to 0 }[/math] (so that [math]\displaystyle{ h \to 0 }[/math], [math]\displaystyle{ k \to 0 }[/math], and [math]\displaystyle{ \eta r \to 0 }[/math]) yields:

[math]\displaystyle{ \lim_{(x,y) \to (x_0,y_0)}f(x,y) = f(x_0,y_0) }[/math]
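
Term by term, the intermediate step reads:

[math]\displaystyle{ \lim_{r \to 0} \left[ f(x_0,y_0) + Ah + Bk + \eta r \right] = f(x_0,y_0) + 0 + 0 + 0 }[/math]

[math]\displaystyle{ A }[/math] and [math]\displaystyle{ B }[/math] are constants multiplying vanishing increments, and [math]\displaystyle{ \eta r }[/math] is a product of two factors that both go to zero.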

With this we have shown that if the function is differentiable at a point, it's continuous there.

Let's go one step further and show that if the function is differentiable at a point, then both first-order partial derivatives exist at that point. Going back to the definition of [math]\displaystyle{ \eta }[/math] (which tends to zero), let's set [math]\displaystyle{ k = 0 }[/math]:

[math]\displaystyle{ \lim_{h \to 0}\frac{f(x_0 + h,y_0) - f(x_0, y_0) - Ah}{|h|} = 0 }[/math]

If you are asking where the radius went: with [math]\displaystyle{ k = 0 }[/math], [math]\displaystyle{ r = \sqrt{h^2 + 0^2} = |h| }[/math]. Notice that the above limit is almost the partial derivative's definition, except for the extra term. And since the whole expression tends to zero, replacing [math]\displaystyle{ |h| }[/math] by [math]\displaystyle{ h }[/math] at most flips the sign, so the limit is still zero. If you remember the proof for the single-variable case, we are repeating the same reasoning with one more variable.

[math]\displaystyle{ \lim_{h \to 0}\frac{f(x_0 + h,y_0) - f(x_0, y_0)}{h} - A = 0 }[/math]

[math]\displaystyle{ \lim_{h \to 0}\frac{f(x_0 + h,y_0) - f(x_0, y_0)}{h} = A }[/math]

With this we have shown that [math]\displaystyle{ \frac{\partial f}{\partial x} (x_0,y_0) = A }[/math]. Repeat with [math]\displaystyle{ h = 0 }[/math] to show that [math]\displaystyle{ \frac{\partial f}{\partial y} (x_0,y_0) = B }[/math]. We have proven that if the function is differentiable at a point, then both first-order partial derivatives exist at that point.
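
As a closing sketch (a hypothetical check of mine, not from the original): for a smooth function the coefficients [math]\displaystyle{ A }[/math] and [math]\displaystyle{ B }[/math] really are the partial derivatives, and the error term [math]\displaystyle{ \eta }[/math] vanishes as [math]\displaystyle{ r \to 0 }[/math].

<syntaxhighlight lang="python">
import math

# Sketch: for f(x, y) = x**2 * y at (1, 2), the plane coefficients are
# A = df/dx = 2*x0*y0 = 4 and B = df/dy = x0**2 = 1, and the error term
# eta = (f(x0+h, y0+k) - f(x0, y0) - A*h - B*k) / r shrinks with r.

def f(x, y):
    return x**2 * y

x0, y0, A, B = 1.0, 2.0, 4.0, 1.0
for h in (1e-1, 1e-3, 1e-5):
    k = h                       # approach along the diagonal, one path among many
    r = math.hypot(h, k)
    eta = (f(x0 + h, y0 + k) - f(x0, y0) - A * h - B * k) / r
    print(eta)                  # tends to 0, confirming differentiability at (1, 2)
</syntaxhighlight>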