Derivatives of Exponentials & Logarithms

We now turn to differentiating exponential and logarithmic functions. Knowing how to compute the derivatives of these functions is especially useful in applied mathematics. For example, we frequently see exponential functions in future value calculations in finance, and logarithmic functions when discounting to present value. We also see exponential functions in population growth models. In all of these situations, knowing how to compute various rates of change is invaluable. And as we've seen, that knowledge is provided through the derivative.

We begin by recalling some of the basic rules of exponents. First, where a,b>0,{a, b > 0,} the following rules hold:

Exponent-product Rule. The product of two powers with the same base is the base raised to the sum of the exponents:

ax1β‹…ax2=ax1+x2 a^{x_1} \cdot a^{x_2} = a^{x_1 + x_2}

where a∈Z+,{a \in \uint^{+},} x1,x2∈Z.{x_1, x_2 \in \uint.}

Exponent-quotient Rule. If the bases are the same, keep the base and subtract the exponent in the denominator from the exponent in the numerator.

ax1ax2=ax1βˆ’x2=1ax2βˆ’x1 \dfrac{a^{x_1}}{a^{x_2}} = a^{x_1 - x_2} = \dfrac{1}{a^{x_2 - x_1}}

where a∈Z+,{a \in \uint^{+},} x1,x2∈Z.{x_1, x_2 \in \uint.}

Raising a Product to a Power. To raise a product to the nth{n^{\text{\scriptsize{th}}}} power, raise each factor to the nth{n^{\text{\scriptsize{th}}}} power.

(ab)x=axbx (ab)^{x} = a^xb^x

where a,b∈Z+,{a, b \in \uint^{+},} x∈Z.{x \in \uint.}

Exponent-power Rule. To raise a power to a power, multiply the exponents.

(ax1)x2=ax1β‹…x2 (a^{x_1})^{x_2} = a^{x_1 \cdot x_2}

where a,x1,x2∈Z+{a, x_1, x_2 \in \uint^{+}}

Raising a Quotient to a Power. To raise a quotient to the nth{n^{\text{\scriptsize{th}}}} power, raise both the numerator and the denominator to the nth{n^{\text{\scriptsize{th}}}} power.

(ab)x=axbx \left(\dfrac{a}{b}\right)^{x} = \dfrac{a^x}{b^x}

where a,x∈Z+{a, x \in \uint^{+}}

Exponent of 1. Any number raised to the first power returns the original number.

a1=a a^1 = a

where a∈R.{a \in \reals.}

Exponent of 0. Any number raised to the zero power returns 1.

a0=1 a^0 = 1

where {a∈R:aβ‰ 0};{\{a \in \reals : a \neq 0\};} 00=undefined.{0^0 = \text{undefined}.}

Negative Integer Exponent Rule. A negative exponent is equivalent to the reciprocal of the power with the corresponding positive exponent.

aβˆ’x=1ax a^{-x} = \dfrac{1}{a^x}

where a,x∈Z+.{a, x \in \uint^{+}.}

Fractional Exponent Rule. Fractional exponents are equivalent to the denomth{\text{denom}^{\text{\scriptsize{th}}}} root of the numerth{\text{numer}^{\text{\scriptsize{th}}}} power.

ax1x2=ax1x2 a^{\frac{x_1}{x_2}} = \sqrt[x_2]{a^{x_1}}

where a,x1,x2∈Z+{a, x_1, x_2 \in \uint^{+}}
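
Before moving on, here's a small Python sanity check of these rules; the particular bases and exponents below are arbitrary sample values, not part of the rules themselves.

```python
# A minimal sanity check of the exponent rules above. The sample bases and
# exponents are arbitrary choices.
import math

a, b = 2.0, 5.0
x1, x2 = 3.0, 4.0

assert math.isclose(a**x1 * a**x2, a**(x1 + x2))        # exponent-product rule
assert math.isclose(a**x1 / a**x2, a**(x1 - x2))        # exponent-quotient rule
assert math.isclose((a * b)**x1, a**x1 * b**x1)         # product to a power
assert math.isclose((a**x1)**x2, a**(x1 * x2))          # exponent-power rule
assert math.isclose((a / b)**x1, a**x1 / b**x1)         # quotient to a power
assert math.isclose(a**(-x1), 1 / a**x1)                # negative exponent
assert math.isclose(a**(x1 / x2), (a**x1)**(1 / x2))    # fractional exponent
print("all exponent rules check out for these sample values")
```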

Keeping these rules in mind, let's look at a concrete function. How about f(x)=2x:{f(x) = 2^x:}

Graph of two raised to the x.

To find the derivative of this function (and, more generally, of f(x)=ax{f(x) = a^x}), let's think about how Newton's quotient applies. It would look something like this:

fβ€²(x)=lim⁑Δxβ†’0ax+Ξ”xβˆ’axΞ”x f'(x) = \lim\limits_{\Delta x \to 0} \dfrac{a^{x + \Delta x} -a^x}{\Delta x}

What should we do next? Look at the term ax+Ξ”xβˆ’ax.{a^{x + \Delta x} -a^x.} We can rewrite this term as (ax)(aΞ”x).{(a^{x})(a^{\Delta x}).} Incorporating this rewrite:

fβ€²(x)=lim⁑Δxβ†’0(ax)(aΞ”x)βˆ’axΞ”x f'(x) = \lim\limits_{\Delta x \to 0} \dfrac{(a^{x})(a^{\Delta x})- a^x}{\Delta x}

Now we see a common factor we can factor out, ax{a^x}:

fβ€²(x)=lim⁑Δxβ†’0(ax)(aΞ”x)βˆ’axΞ”x=lim⁑Δxβ†’0(ax)(aΞ”xβˆ’1Ξ”x) \begin{aligned} f'(x) &= \lim\limits_{\Delta x \to 0} \dfrac{(a^{x})(a^{\Delta x})- a^x}{\Delta x} \\[1em] &= \lim\limits_{\Delta x \to 0} (a^x) \left(\dfrac{a^{\Delta x} - 1}{\Delta x}\right) \end{aligned}

At this point, we have something to work with conceptually. Let's recall how the limit lim⁑Δxβ†’0{\lim\limits_{\Delta x \to 0}} applies here. With this limit, we know that: (1) a{a} is fixed, and (2) x{x} is fixed. Because of these two facts, it follows that ax{a^x} is constant. And because ax{a^x} is constant, we can factor it out of the limit β€” it has no bearing on our analysis of the limit:

fβ€²(x)=axlim⁑Δxβ†’0aΞ”xβˆ’1Ξ”x f'(x) = a^x \lim\limits_{\Delta x \to 0} \dfrac{a^{\Delta x} - 1}{\Delta x}

Thus, our analysis is really focused on what the limit term is. Because of how important the term is, we'll write this expression as:

fβ€²(x)=axβ‹…L(a) f'(x) = a^x \cdot L(a)

where:

L(a)=lim⁑Δxβ†’0aΞ”xβˆ’1Ξ”x L(a) = \lim\limits_{\Delta x \to 0} \dfrac{a^{\Delta x} - 1}{\Delta x}

So, what exactly is L(a)?{L(a)?} To begin, we state an observation. If we plug in the value x=0,{x = 0,} we get:

ddxax∣x=0=a0β‹…L(a)=1β‹…L(a)=L(a) \left. \dfrac{d}{dx} a^x \right\vert_{x = 0} = a^0 \cdot L(a) = 1 \cdot L(a) = L(a)

This observation tells us that at x=0,{x = 0,} the slope of ax{a^x} is this limit we're calling L(a).{L(a).} More specifically, since a=2,{a = 2,} we have a slope of L(2){L(2)} (at x=0.{x = 0.}) This tells us that if we instead had the function f(x)=3x,{f(x) = 3^x,} at x=0{x = 0} we'd get L(3).{L(3).} For f(x)=13x,{f(x) = 13^x,} at x=0{x = 0} we'd get L(13),{L(13),} and so on. This leads to a broader observation: If we know the slope at x=0{x = 0} for some function f(x)=ax,{f(x) = a^x,} we can determine the slope everywhere else. We've seen this phenomenon before with sines and cosines: We knew the slope for sine and cosine at x=0,{x = 0,} and from the trigonometric formulas, we determined the slope everywhere else.
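
As an aside, we can get a numerical feel for L(a){L(a)} by plugging a small (but nonzero) step into the difference quotient. The sketch below is illustrative only; the bases and the step size are arbitrary choices.

```python
# A minimal numerical sketch of L(a): evaluate (a**dx - 1)/dx for a small dx.
# The bases and the step size are arbitrary choices.
def L(a, dx=1e-8):
    return (a**dx - 1) / dx

for a in (2, 3, 13):
    print(a, L(a))   # prints values near 0.6931, 1.0986, and 2.5649
```

The slopes clearly depend on the base, but exactly what function of a{a} they are is the question we take up next.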

The difference with exponentials: we're stuck right off the bat. With sine and cosine, we had the benefit of using radians, which allowed us to interpret the limits geometrically. Here, we're dealing directly with a number. There are no clever shortcuts to rely on. So we're going to need a different tactic. In this case, we're going to beg the question.1

We start by stating a proposition:

There exists a base e,{e,} where e{e} is a unique real number, such that L(e)=1.{L(e) = 1.}

From the proposition above, we have the following:

ddxex=ex \dfrac{d}{dx} e^x = e^x

This is a very important deduction (so much so that we will encapsulate it in a formal definition later). Furthermore, from our previous analysis, we know that:

ddxex∣x=0=1 \left. \frac{d}{dx} e^x \right\vert_{x = 0} = 1

As we know all too well, in mathematics we should never take things for granted. Why does e{e} exist? How do we know it exists? We do know that f(x)=2x{f(x) = 2^x} exists. That is, after all, what we started with. Furthermore, we know that it has the property fβ€²(0)=L(2).{f'(0) = L(2).} Now let's rescale the function f(x)=2x{f(x) = 2^x} horizontally. We can do so by multiplying its input by a factor k{k}:

f(kx)=2kx f(kx) = 2^{kx}

By the exponent-power rule, the expression 2kx{2^{kx}} is the same as (2k)x.{(2^k)^x.} And since 2{2} and k{k} are both real numbers, 2k{2^k} is a real number, which we can denote as b=2k.{b = 2^k.} This means we can rewrite the function above as:

f(kx)=bx,Β Β Β (b=2k) f(kx) = b^x, \space \space \space (b = 2^k)

Now, what happens when we compress a function's graph horizontally? Well, we essentially shrink the x{x}-axis, which in turn tilts the graph's slope upward. Thus, as kβ†’+∞{k \to \texttt{+}\infty} (k{k} gets bigger and bigger), the slope of f(kx){f(kx)} gets steeper and steeper. This is confirmed by applying the chain rule to f(kx):{f(kx):}

ddxbx=ddxf(kx)=kβ‹…fβ€²(kx) \dfrac{d}{dx} b^x = \dfrac{d}{dx}f(kx) = k \cdot f'(kx)

Evaluating at 0:{0:}

ddxbx∣x=0=kβ‹…fβ€²(0)=kβ‹…L(2) \left. \dfrac{d}{dx} b^x \right\vert_{x = 0} = k \cdot f'(0) = k \cdot L(2)

This gives us a way to produce e.{e.} Choose k=1L(2).{k = \dfrac{1}{L(2)}.} Then:

ddxbx∣x=0=kβ‹…fβ€²(0)=1L(2)β‹…L(2)=1 \left. \dfrac{d}{dx} b^x \right\vert_{x = 0} = k \cdot f'(0) = \dfrac{1}{L(2)} \cdot L(2) = 1

In other words, the base b=2k=21/L(2){b = 2^k = 2^{1/L(2)}} has slope 1{1} at x=0.{x = 0.} That base is exactly the number we've been calling e.{e.} Hence e{e} exists, and in fact e=21/L(2).{e = 2^{1/L(2)}.}
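
To make this concrete, here's a small numerical sketch of the construction: approximate L(2){L(2)} with a small step, then raise 2{2} to the power 1/L(2).{1/L(2).} The step size is an arbitrary choice.

```python
# A minimal numerical sketch of the construction above: approximate L(2),
# then form b = 2**(1/L(2)). The step size dx is an arbitrary choice.
dx = 1e-8
L2 = (2**dx - 1) / dx      # approximates L(2)
b = 2**(1 / L2)            # the base whose slope at x = 0 is 1
print(b)                   # prints a value close to 2.718281..., i.e. e
```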

Knowing that e{e} exists, we can continue with our derivative. We must now put all of these observations together. To do so, we rely on a particularly useful operation in mathematics: the natural logarithm. The natural log is defined as follows:

y=ex⇔ln⁑y=x y = e^x \iff \ln y = x

The natural logarithm has several useful properties that are worth revisiting:

  • ln⁑(x1β‹…x2)=ln⁑x1+ln⁑x2{\ln (x_1 \cdot x_2) = \ln x_1 + \ln x_2}
  • ln⁑1=0{\ln 1 = 0}
  • ln⁑e=1{\ln e = 1}

And just to remind ourselves, the graphs of f(x)=ex{f(x) = e^x} and g(x)=ln⁑x:{g(x) = \ln x:}

Graphs of ex{e^x} and ln⁑x.{\ln x.}

With the graphs above, we can see that g(x)=ln⁑x{g(x) = \ln x} is indeed the inverse of f(x)=ex.{f(x) = e^x.} We're switching the roles of x{x} and y.{y.} Notice further that g(x)=ln⁑x{g(x) = \ln x} is only defined when x>0.{x > 0.} This corresponds to the fact that f(x)=ex{f(x) = e^x} is always positive.

Since g(x)=ln⁑x{g(x) = \ln x} is the inverse function of f(x)=ex,{f(x) = e^x,} we can use implicit differentiation to find the derivative of the natural logarithm. To do so, we'll write w=ln⁑x.{w = \ln x.} We want to find the derivative of w.{w.} But we don't know how to do that directly, so what we'll do is exponentiate:

w=ln⁑xβ‡’ew=xβ‡’eln⁑x=x w = \ln x \nc e^w = x \nc e^{\ln x} = x

Given that ew=eln⁑x=x,{e^w = e^{\ln x} = x,} we can differentiate both sides with respect to x:{x:}

ddxew=ddxx=1 \begin{aligned} \dfrac{d} {dx} e^w &= \dfrac{d} {dx} x \\[1em] &= 1 \end{aligned}

Applying implicit differentiation, we have:

(ddwew)(dwdx)=1(ew)(dwdx)=1dwdx=1ew \begin{align*} \left( \dfrac{d}{dw} e^w \right)\left( \dfrac{dw}{dx} \right) &= 1\\[1em] \left( e^w \right)\left( \dfrac{dw}{dx} \right) &= 1\\[1em] \dfrac{dw}{dx} &= \dfrac{1}{e^w} \end{align*}

Substitute w{w} with ln⁑x{\ln x} (since we defined w=ln⁑x{w = \ln x}):

dwdx=1eln⁑x=1x \begin{align*} \dfrac{dw}{dx} &= \dfrac{1}{e^{\ln x}} \\[1em] &= \dfrac{1}{x} \\ \end{align*}

Hence, we have the following:

ddxln⁑x=1x \dfrac{d}{dx} \ln x = \dfrac{1}{x}
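
As a quick sanity check, the sketch below compares this derivative (and the earlier one for ex{e^x}) against symmetric difference quotients; the sample point and step size are arbitrary choices.

```python
# A minimal numerical check of (ln x)' = 1/x and (e^x)' = e^x, using a
# symmetric difference quotient at an arbitrarily chosen point.
import math

x, h = 1.7, 1e-6
print((math.log(x + h) - math.log(x - h)) / (2 * h), 1 / x)        # both ≈ 0.5882
print((math.exp(x + h) - math.exp(x - h)) / (2 * h), math.exp(x))  # both ≈ 5.4739
```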

We now have two formulas to work with:

  • (ex)β€²=ex{(e^x)' = e^x}
  • (ln⁑x)β€²=1x{(\ln x)' = \dfrac{1}{x}}

We now return to our pesky function y=ax.{y = a^x.} If we can find the derivative of this function, we can find the derivative of any exponential function. Knowing the two formulas above, we can compute the derivative by first rewriting ax{a^x} in terms of base e.{e.} How do we convert ax{a^x} to base e?{e?} We want to write ax{a^x} as e?,{e^{?},} that is, e{e} raised to some power:

ax=(eln⁑a)x=exln⁑a a^x = (e^{\ln a})^x = e^{x \ln a}

Now we can carry out the differentiation:

ddxax=ddxexln⁑a \dfrac{d}{dx}a^x = \dfrac{d}{dx} e^{x \ln a}

The first step is often a point of great confusion, so it's critical that we are as comfortable with it as possible. First, we emphasize that e{e} and ln⁑a{\ln a} are constants. They're fixed. No movement. The only thing that's moving is x.{x.} If we were asked to compute the derivative of e2x,{e^{2x},} the derivative would look like:

(e2x)β€²=2e2x (e^{2x})' = 2e^{2x}

by the chain rule. The same idea applies to the derivative of exln⁑a.{e^{x \ln a}.} Hence:

ddxax=ddxexln⁑a=(ln⁑a)exln⁑a \dfrac{d}{dx}a^x = \dfrac{d}{dx} e^{x \ln a} = (\ln a)e^{x \ln a}

We know that exln⁑a=ax,{e^{x \ln a} = a^x,} so we have the derivative of ax:{a^x:}

ddxax=(ln⁑a)ax \dfrac{d}{dx} a^x = (\ln a)a^x

Now we can go back to our very first problem: y=2x.{y = 2^x.} Recall we asked, what exactly is L(a)?{L(a)?} Well, now we have it:

L(a)=lim⁑Δxβ†’0aΞ”xβˆ’1Ξ”x=ln⁑a L(a) = \lim\limits_{\Delta x \to 0} \dfrac{a^{\Delta x} - 1}{\Delta x} = \ln a

Accordingly, we have:

ddx2x=(ln⁑2)2x \dfrac{d}{dx} 2^x = (\ln 2)2^x

Similarly:

ddx10x=(ln⁑10)10x \dfrac{d}{dx} 10^x = (\ln 10)10^x
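
A brief numerical confirmation of these two results, with an arbitrarily chosen sample point and step size:

```python
# A minimal numerical check of d/dx a^x = (ln a) * a^x for a = 2 and a = 10.
# The sample point x and step size h are arbitrary choices.
import math

x, h = 0.9, 1e-6
for a in (2, 10):
    diff_quotient = (a**(x + h) - a**(x - h)) / (2 * h)
    formula = math.log(a) * a**x
    print(a, diff_quotient, formula)   # the two values agree for each base
```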

This is one of the reasons why the natural logarithm carries the adjective “natural.” No matter what base we choose, the derivative of the corresponding exponential function contains a natural logarithm; it appears whether or not we ever asked for base e.{e.}

Logarithmic Differentiation

In the method above, we arrived at the derivative of y=ax{y = a^x} by rewriting the base in terms of e.{e.} There is, however, another way to compute the derivative: the method of logarithmic differentiation.

With logarithmic differentiation, rather than directly computing the derivative of some function u,{u,} we instead attempt to compute the derivative of its logarithm. More explicitly, instead of computing:

ddxu \dfrac{d}{dx} u

we instead compute:

ddxln⁑u \dfrac{d}{dx} \ln u

Applying the chain rule:

ddxln⁑u=(dln⁑udu)dudx \dfrac{d}{dx} \ln u = \left(\dfrac{d \ln u}{du}\right) \dfrac{du}{dx}

Note that the expression dln⁑udu{\dfrac{d \ln u}{du}} is just another way to write dduln⁑u.{\dfrac{d}{du} \ln u.} We know that this term evaluates to 1u,{\dfrac{1}{u},} so we have:

ddxln⁑u=1ududx \dfrac{d}{dx} \ln u = \dfrac{1}{u} \dfrac{du}{dx}

This is the controlling principle of logarithmic differentiation. We state it explicitly:

(ln⁑u)β€²=uβ€²u (\ln u)' = \dfrac{u'}{u}
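
To see the principle in action on a concrete (and arbitrarily chosen) sample function, here is a small numerical sketch with u(x)=x2+1:{u(x) = x^2 + 1:}

```python
# A minimal numerical check of (ln u)' = u'/u for the sample function
# u(x) = x^2 + 1. The function, point, and step size are arbitrary choices.
import math

def u(x):
    return x**2 + 1

x, h = 1.3, 1e-6
lhs = (math.log(u(x + h)) - math.log(u(x - h))) / (2 * h)   # (ln u)' numerically
rhs = (2 * x) / u(x)                                        # u'/u, with u' = 2x
print(lhs, rhs)   # both ≈ 0.9665
```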

Applying this to ddxax,{\dfrac{d}{dx} a^x,} we set u=ax{u = a^x} and take the natural log of both sides:

ln⁑u=ln⁑(ax)=xln⁑a \ln u = \ln (a^x) = x \ln a

And as we know from our rules, (ln⁑u)β€²=(xln⁑a)β€²=ln⁑a.{(\ln u)' = (x \ln a)' = \ln a.} Why? Because ln⁑a{\ln a} is just a constant factor on x.{x.} It doesn't move. Performing the rest of the computation:

uβ€²u=(ln⁑u)β€²=ln⁑a \dfrac{u'}{u} = (\ln u)' = \ln a

Solving for uβ€²:{u':}

uβ€²=uln⁑a u' = u \ln a

Substituting u=ax:{u = a^x:}

ddxax=(ln⁑a)ax \dfrac{d}{dx} a^x = (\ln a)a^x

We get the same derivative we saw with the rewriting-in-terms-of-e{e} method. Logarithmic differentiation allows us to compute even more complex derivatives. Consider the function:

(y=xx) (y = x ^ x)

This is a fairly nasty function. It's got both a moving base and a moving exponent. Once more, we apply logarithmic differentiation. First, we express xx{x^x} as a variable:

(v=xx) (v = x ^ x)

Next, we rewrite the expression as a logarithm:

ln⁑v=xln⁑x \ln v = x \ln x

Then we compute the derivative, applying the product rule to xln⁑x:{x \ln x:}

(ln⁑v)β€²=ln⁑x+x(1x)=1+ln⁑x \begin{align*} (\ln v)' &= \ln x + x \left( \dfrac{1}{x} \right) \\[1em] &= 1 + \ln x \end{align*}

Accordingly, we now have:

vβ€²v=1+ln⁑x \dfrac{v'}{v} = 1 + \ln x

Solving for vβ€²:{v':}

vβ€²=v(1+ln⁑x) v' = v(1 + \ln x)

Substituting with v=xx,{v = x^x,} we have the following rule:

ddxxx=xx(1+ln⁑x) \dfrac{d}{dx} x^x = x^x(1 + \ln x)
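
A quick numerical confirmation of this rule at an arbitrarily chosen point:

```python
# A minimal numerical check of d/dx x^x = x^x (1 + ln x) at a sample point.
# The point x and step size h are arbitrary choices.
import math

x, h = 1.5, 1e-6
diff_quotient = ((x + h)**(x + h) - (x - h)**(x - h)) / (2 * h)
formula = x**x * (1 + math.log(x))
print(diff_quotient, formula)   # both ≈ 2.582
```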

Euler's Number

Having seen how useful the natural logarithm is, we now take a closer look at the number e.{e.} After all, this entire discussion has been premised on the existence of e,{e,} and although we proved that it exists, we didn't delve any deeper. We do so now.

What does this limit evaluate to:

lim⁑nβ†’βˆž(1+1n)n \lim\limits_{n \to \infty} \left( 1 + \dfrac{1}{n} \right)^n

To evaluate this limit, we first take the natural logarithm of the expression inside it:

ln⁑((1+1n)n)=nln⁑(1+1n) \ln \left( \left( 1 + \dfrac{1}{n} \right)^n \right) = n \ln \left( 1 + \dfrac{1}{n} \right)

Let's rewrite this expression in a more recognizable form. Suppose Ξ”x=1n.{\Delta x = \dfrac{1}{n}.} This means that as n{n} tends to infinity, Ξ”x{\Delta x} tends to zero. Rewriting the expression:

ln⁑((1+1n)n)=nln⁑(1+1n)=1Ξ”xln⁑(1+Ξ”x) \ln \left( \left( 1 + \dfrac{1}{n} \right)^n \right) = n \ln \left( 1 + \dfrac{1}{n} \right) = \dfrac{1}{\Delta x}\ln(1 + \Delta x)

Now we perform the classic trick of subtracting 0{0} from the expression. In this case, we will represent 0{0} as ln⁑1:{\ln 1:}

ln⁑((1+1n)n)=nln⁑(1+1n)=1Ξ”x(ln⁑(1+Ξ”x)βˆ’ln⁑1) \ln \left( \left( 1 + \dfrac{1}{n} \right)^n \right) = n \ln \left( 1 + \dfrac{1}{n} \right) = \dfrac{1}{\Delta x}(\ln(1 + \Delta x) - \ln 1)

Do we see the pattern in the last expression? Let's rewrite it one more time:

ln⁑(1+Ξ”x)βˆ’ln⁑1Ξ”x \dfrac{\ln(1 + \Delta x) - \ln 1}{\Delta x}

This is just an application of Newton's Quotient. It's the expression we get from computing:

ddxln⁑x∣x=1 \left. \dfrac{d}{dx} \ln x \right\vert_{x = 1}

And we know the result of this computation:

ddxln⁑x∣x=1=1x∣x=1=1 \left. \dfrac{d}{dx} \ln x \right\vert_{x = 1} = \left. \dfrac{1}{x} \right\vert_{x = 1} = 1

Finally, given x=1,{x = 1,} the limit evaluates to 1.{1.} Now we just have to work backwards to determine our original limit, lim⁑nβ†’βˆž(1+1n)n.{\lim\limits_{n \to \infty} (1 + \frac{1}{n})^n.} First, we remind ourselves of the limit we just computed:

lim⁑nβ†’βˆžln⁑((1+1n)n)=1 \lim\limits_{n \to \infty} \ln \left( \left( 1 + \dfrac{1}{n} \right)^n \right) = 1

This is the limit of the natural logarithm of our original expression. To evaluate our original limit, we must “undo” the natural logarithm. And how do we do so? By rewriting the natural logarithm in terms of its inverse: a power with the base e:{e:}

lim⁑nβ†’βˆž(1+1n)n=elim⁑nβ†’βˆžln⁑((1+1n)n) \Large \lim\limits_{n \to \infty} \left( 1 + \dfrac{1}{n} \right)^n = e^{^{\lim\limits_{n \to \infty} \ln \left( \left( 1 + \frac{1}{n} \right)^n \right)}}

Note that this isn't anything new. We're just undoing what we've done. Because the exponential function is continuous, we may pass the limit into the exponent: the limit of the exponential is the exponential of the limit. And given that we've solved for the limit of the log, we see e{e} in all its glory:

lim⁑nβ†’βˆž(1+1n)n=elim⁑nβ†’βˆžln⁑((1+1n)n)=e1=e \Large \lim\limits_{n \to \infty} \left( 1 + \dfrac{1}{n} \right)^n = e^{^{\lim\limits_{n \to \infty} \ln \left( \left( 1 + \frac{1}{n} \right)^n \right)}} = e^1 = e

This is such an important result that we must state it from a different perspective:

e=lim⁑nβ†’βˆž(1+1n)n e = \lim\limits_{n \to \infty} \left( 1 + \dfrac{1}{n} \right)^n

As an aside, notice that reading the equation the other way leads to another key insight: An approximation of e.{e.} In mathematics, we should always read our equations (and really, any proposition) both β€œforward” and β€œbackward”. If we have equality, rewrite it from both directions. Inequality, state it in terms of what's less and what's greater. A conditional, state the contrapositive. Switching perspectives is the first step towards greater insight.
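
Reading the equation backward as an approximation scheme, the sketch below watches (1+1n)n{\left(1 + \frac{1}{n}\right)^n} creep toward e{e} as n{n} grows; the particular values of n{n} are arbitrary choices.

```python
# A minimal sketch of e = lim (1 + 1/n)^n as n grows; the chosen values of n
# are arbitrary, and the approximation converges slowly.
import math

for n in (1, 10, 100, 10_000, 1_000_000):
    print(n, (1 + 1 / n)**n)
print("math.e =", math.e)
```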

Power Functions

Recall our rule for computing the derivative of a power function:

(xn)β€²=nβ‹…xnβˆ’1 (x^n)' = n \cdot x^{n-1}

Our rule worked, but we restricted it to a particular condition: n∈Q.{n \in \mathbb{Q}.} In other words, the power rule, as we originally formulated it, was limited to rational powers. We couldn't use it to compute the derivatives of functions like xΟ€,{x^{\pi},} x2,{x^{\sqrt{2}},} or xe.{x^{e}.} These all have irrational exponents. Now that we know how to rewrite bases in terms of e{e} and how to use logarithmic differentiation, we can extend the power rule to all real exponents.

With what we know now, we have a few ways of computing the derivative of f(x)=xr,{f(x) = x^r,} where r∈R.{r \in \reals.} One way is to rewrite the base in terms of e:{e:}

f(x)=xr=(eln⁑x)r=erln⁑x f(x) = x^r = (e^{\ln x})^r = e^{r \ln x}

Using prime notation to express the derivative:

ddxxr=(erln⁑x)β€² \dfrac{d}{dx} x^r = (e^{r \ln x})'

we see that we can compute the derivative by applying the chain rule:

(erln⁑x)β€²=erln⁑x(rln⁑x)β€² (e^{r \ln x})' = e^{r \ln x} (r \ln x)'

Simplifying, the derivative of the exponential is just itself, so no change there. For the exponent, r{r} is a constant factor, so (rln⁑x)β€²=r(ln⁑x)β€²=rx,{(r \ln x)' = r (\ln x)' = \dfrac{r}{x},} since the derivative of ln⁑x{\ln x} is 1/x,{1/x,} as we know.

(erln⁑x)β€²=erln⁑x(rln⁑x)β€²=erln⁑xrx=xrrx=rβ‹…xrβˆ’1 \begin{aligned} (e^{r \ln x})' &= e^{r \ln x} (r \ln x)' \\[1em] &= e^{r \ln x} \dfrac{r}{x} \\[1em] &= \dfrac{x^r r}{x} \\[1em] &= r \cdot x^{r - 1} \end{aligned}
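
And a brief numerical confirmation of the extended rule with an irrational exponent; the sample point, step size, and choice of π{\pi} are arbitrary.

```python
# A minimal numerical check of the extended power rule with an irrational
# exponent: d/dx x^pi = pi * x^(pi - 1). Sample point and step size are arbitrary.
import math

x, h = 2.0, 1e-6
diff_quotient = ((x + h)**math.pi - (x - h)**math.pi) / (2 * h)
formula = math.pi * x**(math.pi - 1)
print(diff_quotient, formula)   # both ≈ 13.86
```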

We've now extended the power rule to all real numbers. We could've also done the same with logarithmic differentiation. First, we denote xr{x^r} with the variable u:{u:}

(u=xr) (u = x ^ r)

Taking the natural logarithm of u:{u:}

ln⁑u=rln⁑x \ln u = r \ln x

Then, we differentiate:

(ln⁑u)β€²=(rln⁑x)β€²=rx \begin{aligned} (\ln u)' &= (r \ln x)' \\[2em] &= \dfrac{r} {x} \end{aligned}

Next, we use the fact that (ln⁑u)β€²{(\ln u)'} is the same as uβ€²u:{\dfrac{u'}{u}:}

uβ€²u=rx \dfrac{u'}{u} = \dfrac{r}{x}

Solving for uβ€²:{u':}

uβ€²=urx u' = u \dfrac{r} {x}

Then substituting u{u} with xr:{x^r:}

uβ€²=xrrx=rxrβˆ’1 \begin{aligned} u' &= x^r \dfrac{r} {x} \\[2em] &= r x^{r - 1} \end{aligned}

This confirms that the power rule extends to all real numbers. It also demonstrates that logarithmic differentiation and converting to base e{e} are essentially the same method. The biggest differences: with converting to base e,{e,} we must deal with exponents, and with logarithmic differentiation, we often must introduce new symbols.

Footnotes

  1. “Begging the question” is a type of informal fallacy: essentially, smuggling the conclusion into the premises. For example: God must exist. How do you know? Because the Bible says so. Why should I believe the Bible? Because the Bible was written by God. Another example: The belief in God is universal. After all, everyone believes in God. Begging the question is not a formal logical fallacy; arguments that beg the question are logically valid. They are, however, typically unpersuasive (as seen in the preceding examples), even though they are perfectly logical. In other words, we're going to go at this problem in a roundabout way. This is a circuitous approach, not a circular one. Effectively, we're going to refuse to answer “What is L(a)?{L(a)?}” directly for now, but we will eventually get to the question. ↩