
Friday, July 4, 2014

Question of the week #8

   Given that
$$
siny+cosy = x
$$
find the first $\frac{dy}{dx}$ and the second derivative $\frac{d^{2}y}{dx^{2}}$ as functions of x.

   Hint: It pays to be persistent (and to use some trigonometric identities as well) in order to get rid of the $y$!
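   $\bullet$ For readers who want to cross-check their pencil-and-paper work, here is a minimal SymPy sketch (my own addition, assuming a standard SymPy installation). It leaves the answers in terms of $y$; eliminating $y$ is still up to you!

```python
# Implicit differentiation of sin(y) + cos(y) - x = 0 via SymPy's idiff.
from sympy import symbols, sin, cos, idiff, simplify

x, y = symbols('x y')
eq = sin(y) + cos(y) - x        # F(x, y) = 0 form of sin(y) + cos(y) = x

dy1 = idiff(eq, y, x)           # dy/dx, still containing y
dy2 = idiff(eq, y, x, 2)        # d^2y/dx^2, still containing y
print(simplify(dy1))            # 1/(cos(y) - sin(y))
print(simplify(dy2))
```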


Thursday, July 3, 2014

Question of the week #7: the answer

$\bullet$ Last week's exercise was the following:

Exercise
(a). Find the general solution of the following differential equation:
$$\frac{dy}{dx}lnx+\frac{y}{x}= cotx$$
(b). Find also a special solution passing through $(0,1)$.


$\bullet$ In today's  post let us see how this could have been worked out: 

Solution:
(a). Let us begin by noticing that $(lnx)'=\frac{1}{x}$. Thus we have that:

$\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \  \ \ \ \ \ \  \frac{dy}{dx}lnx+\frac{y}{x}= cotx  \ \ \ \Leftrightarrow \ \ \  y' lnx + y (lnx)' = cotx \Leftrightarrow$

$\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \Leftrightarrow (y lnx)' = \frac{cosx}{sinx} \ \ \ \Leftrightarrow \ \ \ (y lnx)' = \frac{(sinx)'}{sinx}   \Leftrightarrow $

$\ \ \ \ \ \ \  \ \ \ \ \ \ \ \ \ \ \Leftrightarrow  (y lnx)' = \big( ln(sinx) \big)'  \ \ \ \Leftrightarrow \ \ \ y lnx = ln(sinx) + c$

$\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \Leftrightarrow lnx^{y} = ln(sinx)+c \ \ \ \Leftrightarrow \ \ \ x^{y} = d sinx$

where $x > 0$ with $sinx > 0$ (so that $ln(sinx)$ is defined) and both $c$ and $d=e^{c}$ are integration constants.
   So the general solution consists of all the functions defined (in implicit form) by the parametric family of equations:
\begin{equation} \label{implgensol}
x^{y} = d sinx
\end{equation}
where $x > 0$.
(b). In order to determine the special solution passing through $(0,1)$ we have to substitute $x=0$, $y=1$ into the general solution \eqref{implgensol}, getting:  $ \ \ \ 0=d sin0$, which holds identically.
   Thus all solutions \eqref{implgensol}, for any $d \in \mathbb{R}$, pass through $(0,1)$.
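$\bullet$ As a cross-check, one can feed the equation to SymPy's ODE solver. This is a hedged sketch (assuming a standard SymPy installation), not part of the original solution:

```python
# Cross-check of part (a) with SymPy's dsolve.
from sympy import Function, symbols, Eq, dsolve, log, cot

x = symbols('x', positive=True)
y = Function('y')

ode = Eq(log(x)*y(x).diff(x) + y(x)/x, cot(x))
print(dsolve(ode, y(x)))
# Expected (up to the constant's name): y(x) = (C1 + log(sin(x)))/log(x),
# i.e. y*ln(x) = ln(sin(x)) + c, in agreement with the computation above.
```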

Sunday, June 29, 2014

Parametric curves and second derivatives: methods

   Let us now come to face the problem of computing the second derivative for functions defined through a pair of parametric equations:
Utilizing the results of the previous theorem on the first derivative, we can proceed -after having obtained the first derivative $\frac{dy}{dx}=\frac{dy/dt}{dx/dt}$ as a function of the parameter $t$-  to the computation of the second derivative either as
\begin{equation} \label{secder1st}
\frac{d^{2}y}{dx^{2}} = \frac{dy'}{dx} = \frac{dy'/dt}{dx/dt}
\end{equation}
where $y'=\frac{dy}{dx}$, or as
\begin{equation}  \label{secder2nd}
\frac{d^{2}y}{dx^{2}} = \frac{d}{dx}\Big[ \frac{dy}{dx} \Big] = \frac{d}{dt}\Big[ \frac{dy}{dx} \Big] \cdot \frac{dt}{dx}
\end{equation}
Remarks:
1. Notice that -according to a previous remark in the computation of the first derivative- if we wish to apply \eqref{secder2nd}, a suitable partition of the domain $E \subseteq \mathbb{R}$ of $x=f(t)$ must be considered in order for $x=f(t)$ to be bijective ("1-1"). Thus we will have $t=f_{i}^{-1}(x)$ (in the corresponding interval of $E$'s partition) and the corresponding formula for $\frac{dt}{dx}$ must be substituted.
2. Notice that in general $$\frac{d^{2}y}{dx^{2}} \neq \frac{d^{2}y/dt^{2}}{d^{2}x/dt^{2}}$$

Let us now proceed to a couple of clarifying examples:

Example 1: Let us consider the pair of parametric equations $\left\{%
                               \begin{array}{l}
                               x = sint  \\
                               y = cos2t
                               \end{array}
                                \right. \ \ $, for $\ t \in [-\frac{\pi}{2}, \frac{\pi}{2}]$.

It is clear that $x=sint$ is invertible in the domain $[-\frac{\pi}{2}, \frac{\pi}{2}]$ and thus we can directly apply \eqref{secder2nd}:

   Since $\frac{dx}{dt}=cost$ and $\frac{dy}{dt}=-2sin2t$, we have:
$$
\frac{dy}{dx} = \frac{dy/dt}{dx/dt} = \frac{-2sin2t}{cost} = \frac{-4 sint cost}{cost} = -4 sint
$$
therefore:
$$
\frac{d^{2}y}{dx^{2}} = \frac{d}{dx}\Big[ \frac{dy}{dx} \Big] = \frac{d}{dx}(-4 sint) =  \frac{d}{dt}(-4 sint)  \cdot \frac{dt}{dx} = (-4 cost) \cdot (\frac{1}{cost}) = -4
$$
where we have made use of the fact that $x=sint \Leftrightarrow t=arcsinx$ for $t \in [-\frac{\pi}{2}, \frac{\pi}{2}]$ and thus $\frac{dt}{dx} = (arcsinx)' = \frac{1}{cost} = \frac{1}{\sqrt{1-x^{2}}}$ for $t \in (-\frac{\pi}{2}, \frac{\pi}{2})$. (see previous post on the derivatives of the inverse trigonometric functions).
   Notice that $\frac{dx}{dt} \Big|_{\pm \frac{\pi}{2}} = \frac{dy}{dt} \Big|_{\pm \frac{\pi}{2}}=0$ so these points are singular points and the above result does not apply there. Can you figure out what is happening at these points? (plotting a graph of the parametric equations will probably help you understand the behavior at these singular points).

Example 2: The curve given by the parametric equations $\left\{%
                               \begin{array}{l}
                               x = t^{2}  \\
                               y = t^{3}
                               \end{array}
                                \right. \ \ $, for $\ t \in (-\infty, \infty)$ is called a semicubic parabola.

   This curve (and the functions defined by it) can be equivalently described in implicit form by the equation $y^{2}=x^{3}$ (square $y(t)$ and cube $x(t)$ to see this!).
   The graph of these parametric equations consists of two branches: the upper branch corresponds to the function $y=x^{3/2}$, while the lower branch is the graph of the function $y=-x^{3/2}$. These two branches meet at the origin, which corresponds to the value $t=0$.
   Since $\frac{dx}{dt}=2t$ and $\frac{dy}{dt}=3t^{2}$ we can clearly see that the origin corresponds to a singular point of the graph.
   We can readily apply the theorem to compute the first derivative (for either branch):
$$
\frac{dy}{dx} = \frac{dy/dt}{dx/dt} = \frac{\frac{d}{dt}(t^{3})}{\frac{d}{dt}(t^{2})}  = \frac{3t^{2}}{2t} = \frac{3}{2}t
$$
while we will follow \eqref{secder1st} to compute the second derivative:
$$
\frac{d^{2}y}{dx^{2}} = \frac{dy'}{dx} = \frac{dy'/dt}{dx/dt} = \frac{\frac{d}{dt}(3t/2)}{\frac{d}{dt}(t^{2})} = \frac{3}{4t}
$$
   Can you apply \eqref{secder2nd}, after suitably dividing the domain $(-\infty, \infty)$, to obtain the same results?
   Can you figure out what is happening at the singular point $O(0,0)$?
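$\bullet$ For the impatient reader, here is a hedged SymPy sketch (my own cross-check, assuming SymPy is available) that reproduces both examples through formula \eqref{secder1st}:

```python
# First and second derivatives of parametric curves via
# dy/dx = (dy/dt)/(dx/dt) and d2y/dx2 = (d(dy/dx)/dt)/(dx/dt).
from sympy import symbols, sin, cos, simplify

t = symbols('t')

for xt, yt in [(sin(t), cos(2*t)),   # Example 1
               (t**2, t**3)]:        # Example 2 (semicubic parabola)
    yp = simplify(yt.diff(t) / xt.diff(t))    # dy/dx as a function of t
    ypp = simplify(yp.diff(t) / xt.diff(t))   # d2y/dx2 as a function of t
    print(yp, ypp)
# Expected (after simplification): (-4*sin(t), -4) for Example 1
# and (3*t/2, 3/(4*t)) for Example 2.
```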



Tuesday, June 24, 2014

Question of the week #7

(a). Find the general solution of the following differential equation:
$$
\frac{dy}{dx}lnx+\frac{y}{x}= cotx
$$
(b). Find also a special solution passing through $(0,1)$.

Any thoughts or solutions would be highly appreciated!

Enjoy!


Saturday, June 7, 2014

Parametric curves: some theory and a few examples

   In today's post, I will move on to discuss the topic of parametric equations: the graph of a pair of parametric equations, the function(s) defined by such a pair -and the conditions under which they are defined- and finally I will state and prove a theorem concerning the differentiation of parametric curves and functions. Cases broadening the scope of the theorem's applicability will also be investigated.
$\bullet$ Let us consider the real functions
\begin{equation} \label{parameq1}
\begin{array}{ccc}
x(t):E \rightarrow \mathbb{R} & & y(t):E \rightarrow \mathbb{R}
\end{array}
\end{equation}
where $E \subseteq \mathbb{R}$. In such a case, any value of the variable $t$ is mapped to a point $(x(t),y(t))$ of the Cartesian plane.
   The functions $x(t)$, $y(t)$ are called parametric equations and the real variable $t$ will be called the parameter.
   The set of points of the Cartesian plane represented by the ordered pairs of the set
\begin{equation} \label{parameq2}
\mathcal{C} = \{ (x(t),y(t)) / t \in E \}
\end{equation}
will be called the graph or the curve of the parametric equations \eqref{parameq1}.
$\bullet$ Let us now consider the parametric equations
\begin{equation}    \label{parameq3}
\begin{array}{ccc}
x=f(t), \ f : E \rightarrow \mathbb{R}     &     & y=g(t),  \ g : E \rightarrow \mathbb{R}
\end{array}
\end{equation}
and let us suppose that there is $A \subseteq E \subseteq \mathbb{R}$, such that
$$
\forall x \in f(A) \ , \ \exists \ ! \ y \in g(A) \ , \ (x,y) \in \{ (f(t),g(t)) / t \in A \}
$$
It is clear now that a function $\phi: f(A) \rightarrow \mathbb{R}$ has been formed with formula $y=\phi(x)$, whose graph $\mathcal{C}_{\phi}$ is a part of the graph $\mathcal{C}$.
   We will then say that the parametric equations \eqref{parameq3} define the function $\phi$.
Remark: The conditions of the above definition are obviously met in case $x=f(t)$ is an invertible (thus: bijective or $``1-1"$) function: in such a case $t=f^{-1}(x)$ and $\phi(x)=g(f^{-1}(x))$; in other words: $\phi = g \circ f^{-1}$. However, we stress the fact that this is not the only case: the conditions described earlier are much broader than demanding the existence of an inverse function for $f$. For example, we can readily see that the parametric functions $x = f(t) = t^{2}$, $y = g(t) = t^{4}+1$ define a (unique) function $y = \phi(x) = x^{2}+1$. Of course, in this case the $f$ function is clearly not invertible; however, the conditions described above are met!
Examples:
$\boxed{1}$ If we consider the parametric equations $x(\theta) = \rho cos\theta$, $y(\theta) = \rho sin\theta$ for $\theta \in [0, 2\pi]$, then the graph (or: the curve) of these parametric equations is a circle of radius $\rho$ centered at the origin. It clearly is not the graph of a function (at least not of a single one!).
$\boxed{2}$ If we consider the parametric equations $x(\theta) = \rho cos\theta$, $y(\theta) = \rho sin\theta$ for $\theta \in [0, \pi]$, then the graph (or: the curve) of these parametric equations is the upper semicircle of radius $\rho$ centered at the origin. We say that the parametric equations define the function $y=\sqrt{\rho^{2}-x^{2}}$, whose graph completely coincides with the graph of the parametric equations.
$\boxed{3}$ If we consider the parametric equations $x(\theta) = \rho cos\theta$, $y(\theta) = \rho sin\theta$ for $\theta \in [\pi, 2\pi]$, then the graph (or: the curve) of these parametric equations is the lower semicircle of radius $\rho$ centered at the origin. We say that the parametric equations define the function $y=-\sqrt{\rho^{2}-x^{2}}$, whose graph completely coincides with the graph of the parametric equations.
   It should be noted at this point that the intervals $[0, \pi]$, $[\pi, 2\pi]$ in which $[0, 2\pi]$ was divided in order for the parametric equations to define a single function each time, are exactly those intervals for which the $x(\theta) = \rho cos\theta$ function is bijective (i.e. $``1-1"$) and thus invertible.
$\boxed{4}$ The following figure displays the graph of the parametric functions $x(t)=t-3sint$, $y(t)=4-3sint$ for $t \in [0,10]$.
$\bullet$ The inverse of a function in parametric form: The above discussion implies that given any  function $y=f(x)$, $x \in D_{f}$, this can be written in parametric form as: $x=t$, $y=f(t)$, $t \in D_{f}$.
   The inverse function (in case it exists) can be written as $y=f^{-1}(x)$ (we assume that we are using a common coordinate system for both the initial and the inverse) which is equivalent to saying $x=f(y)$. This (inverse) function in turn, can be written -following exactly what we did earlier- in parametric form as: $x=f(t)$, $y=t$, $t \in f(D_{f})$, where $f(D_{f})$ is the range of $f$ or equivalently the domain of $f^{-1}$.
$\bullet$ Theorem: (derivative of a function defined in parametric form) 
If:
        1. the parametric functions

        $
        x=f(t), \ f : I \rightarrow \mathbb{R}  \ , \ \  y=g(t),  \ g : I \rightarrow \mathbb{R}
        $

        where $I$ is an interval, are differentiable functions
        2. the function $x=f(t)$ is $``1-1"$ (and thus invertible)
        3. For any $t \in I$, $f'(t) \neq 0$
then the parametric equations $x=f(t)$, $y=g(t)$, $t \in I$, define the function
$$
y = \phi(x) = g(f^{-1}(x)) = (g \circ f^{-1})(x):  f(I) \rightarrow \mathbb{R}
$$
which is differentiable; moreover for any $x \in f(I)$
\begin{equation} \label{parameq4}
\frac{d\phi}{dx} = \frac{dy/dt}{dx/dt}
\end{equation}
or equivalently $\phi'(x) = \frac{g'(t)}{f'(t)}$.
   We will provide a proof for this theorem in some subsequent post.
Remarks:
1. Two different situations may occur at points of the domain where $dx/dt=0$:
   If at a given point $P$ we have $dx/dt = 0$ and $dy/dt \neq 0$, then at such a point $dy/dx \big|_{P}=\phi'(P)$ will be infinite; we will say that the slope is infinite at the given point and that the tangent to the graph of the parametric equations at $P$ is vertical.
   If at a given point we have $dx/dt = dy/dt = 0$ then the rhs of \eqref{parameq4} becomes an indeterminate form; such points are called singular points. Unfortunately, we have no general statement available for the behavior of parametric equations at singular points: they have to be investigated case-by-case.
2. In the case that we are working with parametric equations $x=f(t)$, $y=g(t)$ and the $f$ function is not invertible (i.e. $f$ is a ``many-to-one" function) we can still invert the $x=f(t)$ formula, obtaining more than one function of the form $t=f_{i}^{-1}(x)$ (for various values of $i$). This usually amounts to a suitable partition of the initial domain $D_{f}$ (see examples 1-3 earlier) into suitable domains $D_{f_{i}}$. Then the above theorem is valid for each one of the $f_{i}$ functions and, since they all stem from the single function $x=f(t)$, $t \in D_{f}$, it can be applied once and for all!
   The following example is supposed to shed some light on this last remark:
Example: The unit circle can be written in parametric form (see example 1) as: $x(\theta) = cos\theta$, $y(\theta) = sin\theta$ for $\theta \in [0, 2\pi]$. An arbitrary point on the circumference has coordinates $(cos\theta, sin\theta)$.
   According to the previous theorem (and the last remark), the derivative of either of the two functions defined (i.e. the upper and the lower semicircle respectively, see ex.2,3) at the given point will be
                $\frac{dy}{dx}=\frac{dy/d\theta}{dx/d\theta}=-\frac{cos\theta}{sin\theta}$
thus the equation of the tangent at the (arbitrary) point  $(cos\theta, sin\theta)$ will be:
      $y-sin\theta = -\big( \frac{cos\theta}{sin\theta} \big)(x-cos\theta)$
Notice that the situation is exactly the same no matter which semicircle the point belongs to! Thus, in accordance with the last remark earlier, the theorem has been applied once and for all, covering both the $y = f_{1}(x) = \sqrt{1-x^{2}}$ and the $y = f_{2}(x) = -\sqrt{1-x^{2}}$ functions defined by the initial parametric equations.
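$\bullet$ The following hedged SymPy sketch (my own addition, assuming SymPy is available) confirms that the single parametric slope $-cos\theta / sin\theta$ covers both explicit branches:

```python
# Compare the parametric slope with the slopes of the two explicit semicircles.
from sympy import symbols, sin, cos, sqrt, simplify

theta, x = symbols('theta x')

slope_param = -cos(theta)/sin(theta)   # (dy/d0)/(dx/d0) at (cos0, sin0)

# explicit branches y = +-sqrt(1 - x^2), slopes evaluated at x = cos(theta):
upper = simplify(sqrt(1 - x**2).diff(x).subs(x, cos(theta)))
lower = simplify((-sqrt(1 - x**2)).diff(x).subs(x, cos(theta)))
print(slope_param, upper, lower)
# upper agrees with slope_param for theta in (0, pi) (where sin(theta) > 0)
# and lower agrees for theta in (pi, 2*pi) (where sin(theta) < 0).
```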

Wednesday, June 4, 2014

Implicit functions: some remarks

$\bullet$ Let us consider the equation
\begin{equation} \label{eq1}
F(x,y)=0
\end{equation}
and the set
\begin{equation}
A=\{(x,y)/ x,y \in \mathbb{R} \ \ and \ \ F(x,y)=0 \} \subset  \mathbb{R}^{2}
\end{equation}
of real solutions of \eqref{eq1}.
   The set of points of the Cartesian plane represented by the ordered pairs of $A$ is called the graph of  \eqref{eq1}.
$\bullet$ If there is a set $E \subset  \mathbb{R}$ such that
$$
\begin{array}{ccc}
 \forall x \in E,  & \exists \ \ y  \in \mathbb{R}, &   F(x,y)=0
\end{array}
$$
then to every $x \in E$ we can assign a single $y=f(x)$ such that
$$
F(x,f(x))=0
$$
In this way a function $f:E \rightarrow \mathbb{R}$ with formula $y=f(x)$ is defined and this function will be called an implicit function defined by \eqref{eq1}.
$\bullet$ If this is the case, the graph of the implicit function $y=f(x)$, $f:E \rightarrow \mathbb{R}$ defined by \eqref{eq1} will be a part of the graph of \eqref{eq1}.
Remarks: 
1. It is interesting to note that an implicit function may be defined through an equation even if there is no way to obtain an analytic expression for the function's formula by solving the equation. This is the case (why?) for example for the equation below; a numerical sketch is given after these remarks:
\begin{equation}
2y-2x-siny=0
\end{equation}
2. Even in cases in which the formula is easy to handle, many "unexpected" functions may be defined implicitly through an equation; most of them may not even be continuous: this is the case for example with the equation of a circle
\begin{equation}
x^{2}+y^{2}=1
\end{equation}
when it is solved with respect to $y$ a host of functions arise; some of them are continuous:
$$
\begin{array}{ccc}
 \begin{array}{l}
y=f_{1}(x)=\sqrt{1-x^{2}}  \\
[-1,1] \rightarrow \mathbb{R}
 \end{array}   &  \begin{array}{l}
                            y=f_{2}(x)=-\sqrt{1-x^{2}}  \\
                            [-1,1] \rightarrow \mathbb{R}
                            \end{array}
\end{array}
$$

while others are discontinuous, either at a single point:



$
 y=f_{3}(x)= \left\{%
                               \begin{array}{l}
                               \sqrt{1-x^{2}}, \ \ x \in [-1,0) \\
                              -\sqrt{1-x^{2}}, \ \ x \in [0,1]
                               \end{array}
                                \right.$


or at every point of their domain:
$$
 y=f_{4}(x)=\left\{%
                            \begin{array}{l}
                            \sqrt{1-x^{2}}, \ \ x \in [-1,1],  x: \ rational \\
                           -\sqrt{1-x^{2}}, \ \ x \in [-1,1], x: \ irrational
                            \end{array}
                            \right.
$$
Note that all the above functions share the common domain $D_{f_{i}}=[-1,1]$, $i=1,2,3,4$.
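$\bullet$ Coming back to remark 1: here is a hedged numerical sketch (my own addition, assuming SymPy is available) showing that $2y-2x-siny=0$ really does define $y$ as a function of $x$, even though no closed formula exists:

```python
# Solve 2y - 2x - sin(y) = 0 numerically for y at a few sample values of x.
from sympy import symbols, sin, nsolve

x, y = symbols('x y')
F = 2*y - 2*x - sin(y)

for xv in [0.0, 0.5, 1.0, 2.0]:
    yv = nsolve(F.subs(x, xv), y, xv)   # starting guess y ~ xv
    print(xv, yv)
# Uniqueness (the "why?" of remark 1): dF/dy = 2 - cos(y) >= 1 > 0, so for each
# x the expression is strictly increasing in y and has exactly one root.
```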

Wednesday, May 14, 2014

Conics and tangents: a general method

   A conic or a $2^{nd}$ degree planar curve has the following general form in Cartesian coordinates
$
\boxed{ax^{2}+by^{2}+cxy+dx+ey+f=0}
$
   If we denote with $C$ the graph of this equation, we are going to show that the equation of the tangent to the curve $C$ at the point $(x_{1},y_{1}) \in C$ can be immediately derived from the equation of the curve, using the substitutions:
$
\boxed{
\begin{array}{c}
x^{2} \rightsquigarrow xx_{1}, \  y^{2} \rightsquigarrow yy_{1} \\
 xy \rightsquigarrow \frac{1}{2}(xy_{1}+x_{1}y)  \\
x \rightsquigarrow \frac{x+x_{1}}{2}, \ y \rightsquigarrow \frac{y+y_{1}}{2} \\
\end{array}
}
$
Consequently, the equation of the tangent to the curve $C$ at the point $(x_{1},y_{1}) \in C$ can be immediately written as
\begin{equation} \label{tangentconic}
\boxed{
axx_{1}+byy_{1}+c\frac{xy_{1}+x_{1}y}{2}+d\frac{x+x_{1}}{2}+e\frac{y+y_{1}}{2}+f=0
}
\end{equation}
   Proof: $\bullet$ Since $(x_{1},y_{1}) \in C$ its coordinates should satisfy the equation of the curve, thus
\begin{equation} \label{pointonconic}
ax_{1}^{2}+by_{1}^{2}+cx_{1}y_{1}+dx_{1}+ey_{1}+f=0
\end{equation}
   $\bullet$ Now we can proceed in implicitly differentiating the initial equation with respect to $x$, obtaining thus an expression for its slope at the (arbitrary) point $(x_{1},y_{1}) \in C$:
$$
\begin{array}{c}
(ax^{2}+by^{2}+cxy+dx+ey+f)'=0 \Leftrightarrow \\
   \\
\Leftrightarrow 2ax+2byy'+cy+cxy'+d+ey'=0 \Leftrightarrow \\
    \\
\Leftrightarrow (2by+cx+e)y'=-2ax-cy-d \Rightarrow \\
    \\
\Rightarrow y'(x_{1})=\frac{dy}{dx}\mid_{(x_{1},y_{1})}=-\frac{2ax_{1}+cy_{1}+d}{2by_{1}+cx_{1}+e}
\end{array}
$$
for all $(x_{1},y_{1})$ for which  $2by_{1}+cx_{1}+e \neq 0$.
   $\bullet$ Consequently, the equation of the tangent to the curve $C$ at the point $(x_{1},y_{1}) \in C$ will be
$
y-y_{1}=\frac{dy}{dx}\mid_{(x_{1},y_{1})}\cdot(x-x_{1})
$
Thus we can now readily work out
$$
\begin{array}{c}
y-y_{1}=\frac{dy}{dx}\mid_{(x_{1},y_{1})}\cdot(x-x_{1}) \Leftrightarrow\\
    \\
\Leftrightarrow y-y_{1}=-\frac{2ax_{1}+cy_{1}+d}{2by_{1}+cx_{1}+e}(x-x_{1}) \Leftrightarrow \\
    \\
\Leftrightarrow (2by_{1}+cx_{1}+e)(y-y_{1})=-(2ax_{1}+cy_{1}+d)(x-x_{1}) \Leftrightarrow  \\
     \\
\Leftrightarrow 2by_{1}y-2by_{1}^{2}+cx_{1}y-cx_{1}y_{1}+ey-ey_{1}=-2ax_{1}x+2ax_{1}^{2}-cy_{1}x+cy_{1}x_{1} \\   -dx+dx_{1} \Leftrightarrow  \\
     \\
\Leftrightarrow 2by_{1}y+cx_{1}y+ey+2ax_{1}x+cy_{1}x+dx=2ax_{1}^{2}+cy_{1}x_{1}+dx_{1}+2by_{1}^{2}+ \\ cy_{1}x_{1}+ey_{1} \Leftrightarrow   \\
     \\
\Leftrightarrow 2ax_{1}x+2by_{1}y+c(x_{1}y+xy_{1})+dx+ey=  \\
  =\underbrace{ax_{1}^{2}+by_{1}^{2}+cx_{1}y_{1}}_{=-dx_{1}-ey_{1}-f}+\underbrace{ax_{1}^{2}+by_{1}^{2}+cx_{1}y_{1}+dx_{1}+ey_{1}}_{=-f} \Leftrightarrow   \\
    \\
\Leftrightarrow axx_{1}+byy_{1}+c\frac{xy_{1}+x_{1}y}{2}+d\frac{x+x_{1}}{2}+e\frac{y+y_{1}}{2}+f=0
\end{array}
$$
which finally completes the proof!
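$\bullet$ Those who would rather let a computer algebra system grind through the algebra can try the following hedged SymPy sketch (my own addition; the variable names are mine):

```python
# Check: the substitution rule agrees with the tangent line obtained from the
# implicit slope, modulo the condition that (x1, y1) lies on the conic.
from sympy import symbols, expand, simplify

a, b, c, d, e, f, x, y, x1, y1 = symbols('a b c d e f x y x1 y1')

# tangent from the substitution rule \eqref{tangentconic}:
rule = a*x*x1 + b*y*y1 + c*(x*y1 + x1*y)/2 + d*(x + x1)/2 + e*(y + y1)/2 + f

# tangent line with slope -(2a*x1 + c*y1 + d)/(2b*y1 + c*x1 + e),
# written with the denominator cleared:
line = (2*b*y1 + c*x1 + e)*(y - y1) + (2*a*x1 + c*y1 + d)*(x - x1)

onconic = a*x1**2 + b*y1**2 + c*x1*y1 + d*x1 + e*y1 + f   # = 0 on the curve
print(simplify(expand(rule - line/2) - onconic))          # expected: 0
```

So the difference between the two tangent equations is exactly the left-hand side of \eqref{pointonconic}, which vanishes for $(x_{1},y_{1}) \in C$.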

Wednesday, April 16, 2014

Implicit differentiation: a motivating example

   Without getting into technical definitions of what an implicit function is and of when a given equation in two variables implicitly defines one or more -differentiable or not- functions (I will leave such a discussion for some subsequent post on theory), I will just examine a simple yet illuminating example:
   Let us consider the function $y=f(x)= \frac{1}{x} \equiv x^{-1}$. It is well known that the power rule of differentiation $(x^{a})' = ax^{a-1}$ applies for any real value of $a$ (in its respective domain of course). So we can readily conclude that $f'(x) = - \frac{1}{x^{2}}$ in $\mathbb{R}^{*}$.
   But let us momentarily think a little differently: since $y=\frac{1}{x} \Rightarrow xy=1$, we have that $$xf(x)=1$$
We differentiate this last relation, applying the product rule of differentiation to both sides, and we get $$f(x)+xf'(x)=0 \Leftrightarrow y+xy'=0$$ thus $y'=f'(x)=-\frac{f(x)}{x}=-\frac{y}{x}$. Using the definition $y=f(x)= \frac{1}{x}$ we finally arrive at $$y'=f'(x) = - \frac{1}{x^{2}}$$
in $\mathbb{R}^{*}$.
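$\bullet$ The same computation can be replayed in SymPy; a minimal hedged sketch (assuming SymPy is available):

```python
# Implicit differentiation of x*y - 1 = 0.
from sympy import symbols, idiff, simplify

x, y = symbols('x y')
dy = idiff(x*y - 1, y, x)          # -y/x
print(dy)
print(simplify(dy.subs(y, 1/x)))   # -1/x**2, the power-rule result
```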

Thursday, April 10, 2014

Derivative of the inverse: a couple of examples

   Let us proceed in today's post to a couple of applications of the theorem on the differentiation of the inverse function:
   Example 1 (exp-log):  $\bullet$ Let us consider the case of $y=f(x)=e^{x}$. It is well known that $f'(x)=e^{x}=f(x)$ and that the inverse function $f^{-1}$ can be written as: $x=f^{-1}(y)=lny$. The domain and the range of these functions are
$$
\begin{array}{c}
D_{f} = (-\infty, \infty) = f^{-1}(D_{f^{-1}})    \\
    \\
f(D_{f}) = (0, \infty) = D_{f^{-1}}
\end{array}
$$
   One can easily check that the conditions of the theorem apply and thus for any $x_{0} \in D_{f}$ and for any $y_{0}=f(x_{0}) \in f(D_{f}) \equiv D_{f^{-1}}$ we have:
$$
(f^{-1})'(y) = (lny)' = \frac{dx}{dy} = \frac{1}{\frac{dy}{dx}} = \frac{1}{f'(x)} = \frac{1}{(e^{x})'} = \frac{1}{e^{x}} = \frac{1}{y}
$$
where -following the same notation conventions as we did in the development of the theory- $(f^{-1})'(y) = (lny)'$ denotes differentiation with respect to $y$ while $f'(x) = (e^{x})'$ denotes differentiation with respect to $x$. So we finally arrive at: $(lny)' = \frac{1}{y}$ and, since the independent variable can always be renamed at our convenience, we arrive at the familiar rule:
$$
(lnx)' = \frac{1}{x}
$$
   $\bullet$ Let us see how we could have worked alternatively, thinking through the chain rule for differentiating composite functions: we could differentiate the composition $f\circ f^{-1}$ with respect to (the independent variable) $y$. So we get $y = e^{x} = e^{x(y)} = e^{lny}$. Differentiating both sides with respect to $y$ and applying the chain rule in the r.h.s of the last relation, we get
$$
1 = \frac{dy}{dy} = e^{lny} \frac{d(lny)}{dy} \Rightarrow \frac{d(lny)}{dy} = \frac{1}{e^{lny}} = \frac{1}{y}
$$
and finally, renaming (as usual) the independent variable from $y$ to $x$ we get the familiar relation
$$
(lnx)' \equiv \frac{d(lnx)}{dx} = \frac{1}{x}
$$
   Example 2 (inverse trigonometric functions): $\bullet$ Let $y=f(x)=sinx$. Its inverse function can be written as $x=f^{-1}(y)=arcsin(y) \equiv sin^{-1}(y)$. Of course $sinx$, when considered in its natural domain $\mathbb{R}$, is not $``1-1"$ and thus not an invertible function. However, a suitable restriction is: we consider the domains and ranges as follows:

$
\begin{array}{l}
D_{f} = [-\frac{\pi}{2}, \frac{\pi}{2}] = f^{-1}(D_{f^{-1}})    \\
    \\
f(D_{f}) = [-1, 1] = D_{f^{-1}}
\end{array}
$
 
   Notice that, if displayed in a common set of axes (that is: with the same independent variable $x$), the graphs of $sinx$ and $arcsinx$ are as in the following figure:

   However, we will proceed with the computation using the notation $y= f(x) = sinx$ and $x = f^{-1}(y) = arcsiny$.
   We first have to note that the theorem provides us with the derivative of the inverse function $x = f^{-1}(y) = arcsiny$ for any $y \in (-1,1)$, thus excluding the points $sin(-\frac{\pi}{2})=-1$, $sin(\frac{\pi}{2})=1$, simply because $f'(x) = (sinx)' = cosx$ and thus $f'(-\frac{\pi}{2})=f'(\frac{\pi}{2})=0$. However, the conditions of the theorem are valid in $(-1,1)$ so we get:
$$
\begin{array}{c}
(f^{-1})'(y) = (arcsiny)' = \frac{dx}{dy} = \frac{1}{\frac{dy}{dx}} = \frac{1}{f'(x)} = \frac{1}{(sinx)'} = \\
   \\
= \frac{1}{cosx} = \frac{1}{\sqrt{1-sin^{2}x}} = \frac{1}{\sqrt{1-y^{2}}}
\end{array}
$$
for $y$ in $(-1,1)$. Notice that we have used $cosx = \sqrt{1-sin^{2}x} \geq 0$ for $x \in (-\frac{\pi}{2},\frac{\pi}{2})$. We thus have shown (after renaming the independent variable as is customary) that:
$$
(arcsinx)' = \frac{1}{\sqrt{1-x^{2}}}
$$
for any $x$ in $(-1,1)$.
   $\bullet$ Let us now try to work alternatively (through the chain rule) as before: We differentiate $f\circ f^{-1}$ with respect to (the independent  variable) $y$ using the chain rule and keeping in mind that $y = sinx \equiv sinx(y) = sin(arcsin y)$
$$
\begin{array}{c}
1 = \frac{dy}{dy} = cos(arcsiny) \frac{d(arcsiny)}{dy} \Rightarrow \\
\\
\Rightarrow \frac{d(arcsiny)}{dy} = \frac{1}{cos(arcsiny)} = \frac{1}{\sqrt{1-sin^{2}(arcsiny)}} = \frac{1}{\sqrt{1-y^{2}}}
\end{array}
$$
for $y$ in $(-1,1)$. With the -customary now- change in the independent variable from $y$ to $x$ we arrive at our desired formula:
$$
(arcsinx)' = \frac{1}{\sqrt{1-x^{2}}}
$$
for any $x$ in $(-1,1)$.   
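$\bullet$ A one-line SymPy confirmation of both computations above (a hedged sketch, assuming SymPy is available):

```python
# Check that d/dx arcsin(x) = 1/sqrt(1 - x^2) symbolically.
from sympy import symbols, asin, sqrt, diff, simplify

x = symbols('x')
print(simplify(diff(asin(x), x) - 1/sqrt(1 - x**2)))   # expected: 0
```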

Saturday, April 5, 2014

Theoretical Remarks #5 - the derivative of the inverse function

   In today's post (after quite a long time) I am going to discuss the differentiation of an inverse function $f^{-1}$ given that we know the derivative of $f$. I will start by laying down the theorem which will be the basic ingredient for working with the derivatives of inverse functions:
Theorem: Let $f:\Delta \rightarrow \mathbb{R}$ be a function defined on an interval $\Delta \subseteq \mathbb{R}$ and let $f$ be
  • One-to-one ("1-1") and continuous
  • differentiable at a point $\xi \in \Delta$ 
  • $f'(\xi) \neq 0$
then the inverse function $f^{-1}:f(\Delta) \rightarrow \Delta \subseteq \mathbb{R}$ is also differentiable at the point $\zeta = f(\xi) \in f(\Delta)$ and we have 
\begin{equation} \label{invder1}(f^{-1})'( \zeta ) = \frac{1}{f'( \xi )}\end{equation}
which can be equivalently written (in Leibniz's notation) 
\begin{equation} \label{invder2}  \frac{dx}{dy} |_{\zeta=f(\xi)} = \frac{1}{\frac{dy}{dx} |_{\xi}}\end{equation}
with the understanding (in the notation) that:  $ \ y=f(x) \Leftrightarrow x=f^{-1}(y)$. 

Remark: If the conditions of the above theorem are valid for any point $x_{0} \in \Delta$ then we can write (in a somewhat more simplified notation):
\begin{equation} \label{invder3}  \frac{dx}{dy} |_{y_{0}} = \frac{1}{\frac{dy}{dx} |_{x_{0}}} \end{equation}
for any $x_{0} \in \Delta$ and $y_{0}=f(x_{0})$. In that case it is customary to write $$\frac{dx}{dy} = \frac{1}{\frac{dy}{dx}}$$ in $\Delta$. We (again!) have to keep in mind that $ \ y=f(x) \Leftrightarrow x=f^{-1}(y)$.
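$\bullet$ Before the proofs, a quick numerical illustration of \eqref{invder3}. This is a hedged sketch (the sample function $f(x)=x^{3}+x$ and all names are my own choices, assuming SymPy is available):

```python
# Numerically check (f^{-1})'(zeta) = 1/f'(xi) for f(x) = x**3 + x,
# which is continuous and strictly increasing (hence "1-1").
from sympy import symbols, diff, nsolve, lambdify

x = symbols('x')
f = x**3 + x
fprime = lambdify(x, diff(f, x))

xi = 1.3                       # a sample point xi
zeta = float(f.subs(x, xi))    # zeta = f(xi)

finv = lambda yv: float(nsolve(f - yv, x, xi))   # invert f numerically
h = 1e-6
lhs = (finv(zeta + h) - finv(zeta - h)) / (2*h)  # central-difference (f^{-1})'
print(lhs, 1/fprime(xi))       # the two numbers should agree to ~6 digits
```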

Let us now proceed to a couple of proofs of the above result:

(I). A geometrical proof:
   
   The following figure, provides us with a  geometrical interpretation of the previous relation between the derivative of a function and the derivative of its inverse function: 
   It is well known that the graphs $C_{f}$ and $C_{f^{-1}}$ are symmetric with respect to the bisector $y=x$ of the first quadrant. 
   One can easily convince oneself that the above described symmetry implies that 
\begin{equation} \label{geominterpr}\theta+\varphi=\pi/2 \end{equation}
where $\theta \neq 0$ is the angle between the tangent line of $C_{f}$ at $A(\xi,\zeta)$ and the horizontal axis and $\varphi \neq \pi/2$ is the angle between the tangent line of $C_{f^{-1}}$ at $B(\zeta,\xi)$ and the horizontal axis. 
   But since $f'(\xi)=tan\theta$ and $(f^{-1})'(\zeta)=tan\varphi$, it suffices to invoke \eqref{geominterpr} together with the well known trigonometric relation 
$$tan\varphi=tan(\frac{\pi}{2}-\theta)=\frac{1}{tan\theta}$$
to conclude that $(f^{-1})'(\zeta)=\frac{1}{f'(\xi)}$. 

   The proof presented above is based on the geometrical interpretation of the derivative as the slope of the tangent line to the graph of the function. However, we could have proceeded along a completely different road, using the chain rule of differentiation:

(II). A "proof" through the chain rule of differentiation: 
   Under the conditions of the theorem, it is clear that the function $f:\Delta \rightarrow \mathbb{R}$ has an inverse function $f^{-1}:f(\Delta) \rightarrow \Delta \subseteq \mathbb{R}$.
   We can consider either the inverse function (or the initial one) as a composite function in the following sense:
\begin{equation} \label{inverderthrcomp}
x=f^{-1}(y)=f^{-1}(f(x))=(f^{-1} \circ f)(x)
\end{equation}
   Now we can straightforwardly apply the chain rule of differentiating composite functions as follows: We differentiate \eqref{inverderthrcomp} with respect to $x$:
$$
1 = \frac{dx}{dx}|_{x_{0}} = [(f^{-1})(y_{0})]' =  (f^{-1})'(y_{0})\cdot f'(x_{0}) \Rightarrow  (f^{-1})'(y_{0})  = \frac{1}{f'(x_{0}) }
$$
which concludes the proof.
Remarks:
   1. The reader should pay particular attention to the notation at this point: the symbol $[(f^{-1})(y_{0})]' \equiv [f^{-1}(f(x_{0}))]' \equiv \frac{dx}{dx}|_{x_{0}}$ denotes the derivative of $f^{-1} \circ f$ with respect to $x$ -computed at $x_{0}$- while $(f^{-1})'(y_{0})$ denotes the derivative of $x=f^{-1}(y)$ with respect to $y$ -computed at $y_{0}$- and $f'(x_{0})$ the derivative of $y=f(x)$ with respect to $x$ (as usual) computed at $x_{0}$.
   2. Using the Leibniz notation, we could have alternatively written:
$$
1 = \frac{dx}{dx}|_{x_{0}} = \frac{dx}{dy}|_{y_{0}} \cdot \frac{dy}{dx}|_{x_{0}} \Rightarrow \frac{dx}{dy} |_{y_{0}} = \frac{1}{\frac{dy}{dx} |_{x_{0}}}
$$
where $y_{0}=f(x_{0}) \Leftrightarrow x_{0}=f^{-1}(y_{0})$.
3. The discerning reader should notice the following fact in this last "proof": we have only proved \eqref{invder1}, \eqref{invder2}. However we have not shown (in fact we took it for granted) that under the conditions of the theorem the inverse function is actually differentiable or in other words that $(f^{-1})'(y_{0}) = \frac{dx}{dy}|_{y_{0}} $ exists (in the sense that it is a real number). This is the reason for the use of the quotation marks in the word proof.

   Finally, we have to mention that we can supply another proof of the above theorem by straightforward use of the definition of the derivative as the limit of the rate of change. We will discuss this proof in a subsequent post.

Monday, January 20, 2014

Derivative of a unit vector

   In today's post I would like to present a simple but elegant construction: the computation of the derivative of a rotating unit vector i.e. of a vector function $\vec{\varepsilon}: [0, +\infty) \rightarrow \mathbb{R}^{2}$ whose length is equal to one $|\vec{\varepsilon}(t)|=1$ for any $t \in [0, +\infty)$. I will use $t$ to denote the independent variable since it normally stands for time. Thus, at any given instant $t$, the value of the function will be a  unit vector $\vec{\varepsilon}(t)$.
   We will apply the definition, so we need to compute the vector 
\begin{equation} \label{def1}
\frac{d}{dt}\vec{\varepsilon}(t) = \lim_{\Delta t \rightarrow 0} \frac{\vec{\varepsilon}(t + \Delta t) - \vec{\varepsilon}(t)}{\Delta t}
\end{equation}
   First note that since $|\vec{\varepsilon}(t)|^{2} = \vec{\varepsilon}(t) \cdot \vec{\varepsilon}(t) = 1$ by differentiating both sides we get
$$
\begin{array}{c}
\frac{d}{dt} \big( \vec{\varepsilon}(t) \cdot \vec{\varepsilon}(t) \big) = \frac{d \vec{\varepsilon}(t)}{dt} \cdot \vec{\varepsilon}(t) + \vec{\varepsilon}(t) \cdot \frac{d \vec{\varepsilon}(t)}{dt} = \\
    \\
2 \frac{d \vec{\varepsilon}(t)}{dt} \cdot \vec{\varepsilon}(t) = \frac{d}{dt}(1) = 0 \Leftrightarrow \frac{d \vec{\varepsilon}(t)}{dt} \cdot \vec{\varepsilon}(t) = 0
\end{array}
$$
   Thus $\frac{d \vec{\varepsilon}(t)}{dt} \cdot \vec{\varepsilon}(t) = 0$ which is equivalent to the fact that $\vec{\varepsilon}(t)$ and its derivative vector $\frac{d \vec{\varepsilon}(t)}{dt}$ are perpendicular
\begin{equation} \label{def2}
\vec{\varepsilon}(t) \bot \frac{d \vec{\varepsilon}(t)}{dt}
\end{equation}
   The situation can be presented in the following picture:
   $\Delta s$ is the length of the arc spanned by the tip of the unit vector $\vec{\varepsilon}(t)$ while it rotates by an angle of $\Delta \varphi$ during time $\Delta t$. Since $\varphi$ is measured in radians we have $\Delta s = \Delta \varphi  |\vec{\varepsilon}(t)|$ which implies that
$$
\Delta s = \Delta \varphi
$$
 
   Notice that (in the figure) we also have:  $|\vec{\eta}(t)|=1$ and $ \ \vec{\eta}(t) \bot \vec{\varepsilon}(t)$.
   Now, during that time interval $\Delta t$ the vector has changed by
$$
\Delta \vec{\varepsilon} = \vec{\varepsilon}(t + \Delta t) - \vec{\varepsilon}(t)
$$
As $\Delta t \rightarrow 0 $ (the notation $dt \rightarrow 0$ will be used instead from now on) we can notice two things: i). the direction of the vector $d \vec{\varepsilon}  = \vec{\varepsilon}(t + dt) - \vec{\varepsilon}(t)$ "tends" to become perpendicular to $\vec{\varepsilon}(t)$ and thus parallel to the direction specified by $\vec{\eta}(t)$ and ii). the length of the vector $d \vec{\varepsilon}$ "tends" to become equal to the length of the arc $ds$, thus
\begin{equation}  \label{def3}
|d \vec{\varepsilon}| = |\vec{\varepsilon}(t + dt) - \vec{\varepsilon}(t)| = ds = d \varphi
\end{equation}
(where $d \varphi$ is measured in radians).
   Now, combining \eqref{def2} and \eqref{def3} we can write \eqref{def1} as follows
$$
\begin{array}{c}
\frac{d}{dt}\vec{\varepsilon}(t) = \lim_{dt \rightarrow 0} \frac{\vec{\varepsilon}(t + dt) - \vec{\varepsilon}(t)}{dt} = \lim_{dt \rightarrow 0} \frac{d \vec{\varepsilon}}{dt} = \frac{d \varphi}{dt} \vec{\eta}(t)
\end{array}
$$
   Finally, if we define a vector $\vec{\omega}$ with direction perpendicular to the plane defined by $(\vec{\varepsilon}(t), \vec{\eta}(t))$ and with length $|\vec{\omega}|=\frac{d \varphi}{dt}$ then we have
$$
\frac{d}{dt}\vec{\varepsilon}(t) = \vec{\omega} \times \vec{\varepsilon}(t) =  \frac{d \varphi}{dt} \vec{\eta}(t)
$$ 
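$\bullet$ A quick numerical illustration (a hedged sketch; the choice $\varphi(t)=t^{2}$ and all names are mine):

```python
# For eps(t) = (cos(t**2), sin(t**2)) the derivative should be
# phi'(t)*eta(t) with phi(t) = t**2 and eta = (-sin(phi), cos(phi)).
import numpy as np

def eps(t):
    p = t**2                       # the rotation angle phi(t)
    return np.array([np.cos(p), np.sin(p)])

t, h = 0.7, 1e-6
deriv = (eps(t + h) - eps(t - h)) / (2*h)   # numerical d(eps)/dt

p = t**2
eta = np.array([-np.sin(p), np.cos(p)])     # unit vector perpendicular to eps(t)
print(deriv, (2*t)*eta)                     # phi'(t) = 2t; should agree closely
print(np.dot(deriv, eps(t)))                # ~ 0, confirming \eqref{def2}
```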

Friday, January 17, 2014

Question of the week #6

Let $z_{1}=x_{1}+iy_{1}$ and $z_{2}=x_{2}+iy_{2}$ be two points on the complex plane. They determine a line segment. Consider the perpendicular bisector of this line segment. Find the point where this perpendicular bisector cuts the vertical (i.e. the $Oy$) axis. 

Waiting for your thoughts and ideas till next week!


Thursday, January 16, 2014

Question of the week #5 - the answer

Let us now take a detailed look at the solution of last week's question.
For the reader's convenience we repeat the statement of the question:
Question of the week #5:
(a). Use integration by parts to show that:
$$
\int sinx \cdot cosx \cdot e^{-sinx} dx = -e^{-sinx} \cdot \big( 1 + sinx \big) + c
$$
Now consider the following differential equation:
$$
\frac{dy}{dx}-y \cdot cosx = sinx \cdot cosx
$$
   (b). Determine the integration factor and find the general solution $y=f(x)$
   (c). Find the special solution satisfying $f(0)=-2$

Solution: (a). Noticing (by applying the chain rule of differentiation) that $\big( e^{-sinx} \big)' = e^{-sinx}(-sinx)' = - cosx \cdot e^{-sinx}$ we can proceed with a straightforward integration by parts:
$$
\begin{array}{c}
\int sinx \cdot cosx \cdot e^{-sinx} dx = -\int sinx \frac{d\big( e^{-sinx} \big)}{dx} dx = \\
     \\
= -\int sinx \big( e^{-sinx} \big)' dx = -sinx \cdot e^{-sinx} + \int e^{-sinx} \cdot cosx dx = \\
    \\
= -sinx \cdot e^{-sinx} - \int \frac{d\big( e^{-sinx} \big)}{dx} dx = -sinx \cdot e^{-sinx} - \int \big( e^{-sinx} \big)' dx = \\
       \\
= -sinx \cdot e^{-sinx} -  e^{-sinx} + c = - e^{-sinx} \big( 1 + sinx \big) + c
\end{array}
$$
(b). The given DE is $y'-y \cdot cosx = sinx \cdot cosx$. So we compute for the integration factor:

  • $-\int cosx dx = -sinx + d$, where $d \in \mathbb{R}$ is a constant of integration. Since we need only one (actually: any one) of the indefinite integrals -in order to find an integration factor- we can pick $d=0$
  • The integration factor thus reads: $\mu(x) = e^{-sinx}$ 

Multiplying both sides of the DE with the integration factor $\mu$ determined in (b) and using the result of (a) we get:
$$
\begin{array}{c}
e^{-sinx} \cdot y'- e^{-sinx} \cdot y \cdot cosx = e^{-sinx} \cdot sinx \cdot cosx  \Leftrightarrow  \\
     \\
\Leftrightarrow \big( e^{-sinx} \cdot y \big) ' = e^{-sinx} \cdot sinx \cdot cosx   \Leftrightarrow \\
     \\
\Leftrightarrow e^{-sinx} \cdot y  = \int e^{-sinx} \cdot sinx \cdot cosx dx \Leftrightarrow \\
    \\
\Leftrightarrow e^{-sinx} \cdot y = -sinx \cdot e^{-sinx} -  e^{-sinx} + c \Leftrightarrow   \\
     \\
\Leftrightarrow y = -sinx + c \cdot e^{sinx} - 1
\end{array}
$$
Hence, we have determined the general solution of the given DE. It is a family of functions parameterized by $c \in \mathbb{R}$. (c). In order to single out the special solution satisfying $x=0$, $y=-2$ we simply have to substitute these values into the expression of the general solution and solve the resulting expression for $c$:
$$
-2 = -sin0 + c \cdot e^{sin0} - 1 \Leftrightarrow -2 = c - 1 \Leftrightarrow c = -1
$$
thus, the special solution is
$$
y = -sinx - e^{sinx} - 1
$$ 
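$\bullet$ As usual, a hedged SymPy cross-check of (b) and (c) (a sketch, assuming a SymPy version whose dsolve supports initial conditions):

```python
# Cross-check the general and the special solution with dsolve.
from sympy import Function, symbols, Eq, dsolve, sin, cos

x = symbols('x')
y = Function('y')

ode = Eq(y(x).diff(x) - y(x)*cos(x), sin(x)*cos(x))
print(dsolve(ode, y(x)))                  # y(x) = C1*exp(sin(x)) - sin(x) - 1
print(dsolve(ode, y(x), ics={y(0): -2}))  # y(x) = -exp(sin(x)) - sin(x) - 1
```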

Sunday, January 12, 2014

Theoretical Remarks #4

   In Monday's 6/Jan/2014 post we mentioned a proof for the existence of an antiderivative for any function continuous on an interval.
   In today's post, I am going to supply an alternative proof of the same proposition. For the reader's convenience, I am repeating at this point the statement of the proposition:
 Proposition: Let $f$ be a real function, continuous on an interval $\Delta$, and let $a \in \Delta$ be a fixed point. Then the function $F(x)=\int_{a}^{x}f(t)dt$ is an antiderivative function of $f$ in $\Delta$. In other words:
$$
F'(x) = \big( \int_{a}^{x}f(t)dt \big)' = f(x)
$$
for all $x \in \Delta$.
Proof (alternative):
   It is sufficient to show that for any fixed point $x_{0} \in \Delta$ we have $F'(x_{0})=f(x_{0})$. Let $x_{0}, x_{0}+h \in \Delta$ with $h \neq 0$. Then we can compute
$$
\begin{array}{c}
F(x_{0}+h) - F(x_{0}) = \int_{a}^{x_{0}+h}f(t)dt - \int_{a}^{x_{0}}f(t)dt = \\
     \\
\bigg( \int_{a}^{x_{0}}f(t)dt + \int_{x_{0}}^{x_{0}+h}f(t)dt \bigg)- \int_{a}^{x_{0}}f(t)dt =
 \int_{x_{0}}^{x_{0}+h}f(t)dt
\end{array}
$$
and since $h \neq 0$, this implies that
\begin{equation} \label{diff}
\frac{F(x_{0}+h) - F(x_{0})}{h} = \frac{1}{h} \int_{x_{0}}^{x_{0}+h}f(t)dt
\end{equation}
In order to proceed, we will distinguish between two cases:
  • $h > 0$   $\rightsquigarrow$  (I)
  • $h < 0$   $\rightsquigarrow$  (II)
(I). $h > 0$: Since $[x_{0},x_{0}+h] \subseteq \Delta$, $f$ is continuous on $[x_{0},x_{0}+h]$ and the Extreme value theorem applies: there are numbers $c,d \in [x_{0},x_{0}+h]$ such that $f(c)=m$ and $f(d)=M$ are the absolute minimum and absolute maximum values respectively of $f$ in $[x_{0},x_{0}+h]$. Consequently
$$
\begin{array} {c}
mh \leq \int_{x_{0}}^{x_{0}+h}f(t)dt \leq Mh \Leftrightarrow f(c)h \leq \int_{x_{0}}^{x_{0}+h}f(t)dt \leq f(d)h \Leftrightarrow \\
   \\
\Leftrightarrow f(c) \leq \frac{1}{h} \int_{x_{0}}^{x_{0}+h}f(t)dt \leq f(d) \stackrel{\eqref{diff}}{\Leftrightarrow} f(c) \leq \frac{F(x_{0}+h) - F(x_{0})}{h} \leq f(d)
\end{array}
$$
So we have concluded that
\begin{equation} \label{sand1}
 f(c) \leq \frac{F(x_{0}+h) - F(x_{0})}{h} \leq f(d)
\end{equation}
At this point, we have to observe the following: by the application of the extreme value theorem to the continuous function $f$ on the interval $[x_{0},x_{0}+h]$, it follows that both $c$ and $d$ depend in general on the value of $h > 0$. It is easy to see that their values are actually functions of the positive $h$, so we can write $c(h)$ and $d(h)$. Not much needs to be said about these functions; their behaviour may be complicated in general (for example, you can provide an argument to show that $c(h), \ d(h)$ need not even be continuous in general!). However we have:
\begin{equation} \label{concomplim1}
\begin{array}{c}
\lim_{h \rightarrow 0^{+}} c(h) = x_{0} & ,   &  \lim_{h \rightarrow 0^{+}} d(h) = x_{0}
\end{array}
\end{equation}
\eqref{concomplim1} can be proved as a simple application of the $(\varepsilon, \delta)$-definition of the limit. Readers are advised to show that explicitly for practice!
   Given that $f$ is continuous on $\Delta$ and thus on $[x_{0},x_{0}+h]$, \eqref{concomplim1} implies that
\begin{equation} \label{concomplim2}
\begin{array}{c}
\lim_{h \rightarrow 0^{+}} f(c) = \lim_{h \rightarrow 0^{+}} f(c(h)) = f(x_{0}) \\
    \\
\lim_{h \rightarrow 0^{+}} f(d) = \lim_{h \rightarrow 0^{+}} f(d(h)) = f(x_{0})
\end{array}
\end{equation}
Now combining \eqref{sand1} together with \eqref{concomplim2} and applying the squeeze theorem from the right, we get
\begin{equation} \label{from the right}
\lim_{h \rightarrow 0^{+}} \frac{F(x_{0}+h) - F(x_{0})}{h} = f(x_{0})
\end{equation}

(II). $h < 0$:  In this case $[x_{0}+h,x_{0}] \subseteq \Delta$ and we proceed again following exactly the same steps as before, keeping however in mind that now $h < 0$. We leave the intermediate details to the reader. We finally end up with
 \begin{equation} \label{from the left}
\lim_{h \rightarrow 0^{-}} \frac{F(x_{0}+h) - F(x_{0})}{h} = f(x_{0})
\end{equation}
   Combining \eqref{from the right}, \eqref{from the left} we get the result
$$
\lim_{h \rightarrow 0} \frac{F(x_{0}+h) - F(x_{0})}{h} = F'(x_{0}) = f(x_{0})
$$
which finally concludes the proof!

Monday, January 6, 2014

Theoretical Remarks #3

We come in today's post to supply a proof for a well known Calculus proposition. In Friday's 27/Dec/2013 post we mentioned (without proof) the following proposition:
Proposition: Let $f$ be a real function, continuous on an interval $\Delta$, and let $a \in \Delta$ be a fixed point. Then the function $F(x)=\int_{a}^{x}f(t)dt$ is an antiderivative function of $f$ in $\Delta$. In other words:
$$
F'(x) = \big( \int_{a}^{x}f(t)dt \big)' = f(x)
$$
for all $x \in \Delta$.

Proof: 
   It is sufficient to show that for any fixed point $x_{0} \in \Delta$ we have $F'(x_{0})=f(x_{0})$.

   Let us first study the difference quotient $\frac{F(x)-F(x_{0})}{x-x_{0}}$ whose limit as $x \rightarrow x_{0}$, $x \neq x_{0}$, defines the value of $F'(x_{0})$:
$$
\begin{array}{c}
\frac{F(x)-F(x_{0})}{x-x_{0}}= \frac{1}{x-x_{0}}\bigg( \int_{a}^{x}f(t)dt - \int_{a}^{x_{0}}f(t)dt \bigg) =  \\
    \\
= \frac{1}{x-x_{0}}\bigg(  \int_{x_{0}}^{a}f(t)dt + \int_{a}^{x}f(t)dt  \bigg) =  \frac{1}{x-x_{0}} \int_{x_{0}}^{x}f(t)dt
\end{array}
$$
thus
\begin{equation} \label{diff*}
\frac{F(x)-F(x_{0})}{x-x_{0}}=\frac{1}{x-x_{0}} \int_{x_{0}}^{x}f(t)dt
\end{equation}
and since
\begin{equation} \label{fun}
f(x_{0}) = \frac{1}{x-x_{0}}(x-x_{0})f(x_{0}) = \frac{1}{x-x_{0}}\int_{x_{0}}^{x}f(x_{0}) dt
\end{equation}
combining \eqref{diff*} and \eqref{fun}, we readily get the following relation:
\begin{equation} \label{diffun}
\frac{F(x)-F(x_{0})}{x-x_{0}}-f(x_{0}) = \frac{1}{x-x_{0}} \int_{x_{0}}^{x} \big( f(t) - f(x_{0}) \big) dt
\end{equation}
   Since $f$ is continuous at $x_{0} \in \Delta$, for any $\varepsilon > 0$ there is a $\delta > 0$ such that: for any $t \in \Delta$ with $|t-x_{0}| < \delta$ we will have $|f(t)-f(x_{0})| < \varepsilon$.

   Thus, for any $x \in \Delta$ with $0 < |x-x_{0}| < \delta$, using \eqref{diffun} we get:
$$
\begin{array}{c}
\bigg| \frac{F(x)-F(x_{0})}{x-x_{0}}-f(x_{0})  \bigg| = \frac{1}{|x-x_{0}|} \bigg| \int_{x_{0}}^{x} \big( f(t) - f(x_{0}) \big) dt \bigg|  \leq \\
      \\
\leq \frac{1}{|x-x_{0}|} \bigg| \int_{x_{0}}^{x} \big| f(t) - f(x_{0}) \big| dt \bigg| < \frac{1}{|x-x_{0}|} \big| \int_{x_{0}}^{x} \varepsilon dt \big| =  \\
    \\
= \frac{1}{|x-x_{0}|} \varepsilon |x-x_{0}| = \varepsilon
\end{array}
$$
But the above means -according to the $(\varepsilon, \delta)$ definition of the limit- that
$$
F'(x_{0}) = \lim_{x \rightarrow x_{0}} \bigg( \frac{F(x)-F(x_{0})}{x-x_{0}} \bigg) = f(x_{0})
$$
which finally concludes the proof.
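$\bullet$ For a concrete sanity check of the proposition, here is a hedged SymPy sketch (the sample integrand is my own choice, assuming SymPy is available):

```python
# Check F'(x) = f(x) for F(x) = integral of f from a to x, with f continuous.
from sympy import symbols, integrate, diff, exp, simplify

x, t = symbols('x t')
f = exp(-t**2)                                # continuous on all of R
F = integrate(f, (t, 0, x))                   # F(x) = int_0^x f(t) dt

print(simplify(diff(F, x) - f.subs(t, x)))    # expected: 0
```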

Sunday, January 5, 2014

Question of the week #5

This week's post comes from differential equations:
   (a). Use integration by parts to show that:
$$
\int sinx \cdot cosx \cdot e^{-sinx} dx = -e^{-sinx} \cdot \big( 1 + sinx \big) + c
$$
Now consider the following differential equation:
$$
\frac{dy}{dx}-y \cdot cosx = sinx \cdot cosx
$$
   (b). Determine the integration factor and find the general solution $y=f(x)$
   (c). Find the special solution satisfying $f(0)=-2$

Thursday, January 2, 2014

Question of the week #4 - the answer

Question of the week #4: Let $f$ be a continuous real function with $f(x)=e^{\int_{0}^{x}f(t)dt}$  for all $x<1$.  Find the formula of the function $f$.

Solution: First we note that $f(0)=e^{\int_{0}^{0}f(t)dt}=e^{0}=1$. Moreover, we can see that
$$
f(x)>0, \ \  \forall x<1
$$
   For any $x<1$ we have:
$$
\begin{array}{c}
f^{\prime}(x)=\bigg( e^{\int_{0}^{x}f(t)dt} \bigg) ^{\prime}=e^{\int_{0}^{x}f(t)dt} \big( \int_{0}^{x}f(t)dt \big)^{\prime}=f(x) \cdot f(x) \Leftrightarrow \\
      \\
\Leftrightarrow f^{\prime}(x)=f^{2}(x)  \Leftrightarrow \frac{f^{\prime}(x)}{f^{2}(x)} =1 \Leftrightarrow \\
    \\
\Leftrightarrow \bigg( -\frac{1}{f(x)} \bigg)^{\prime}=(x)^{\prime} \Leftrightarrow  -\frac{1}{f(x)}=x+c
\end{array}
$$
where $c \in \mathbb{R}$ is the constant of integration.
But the above relation readily implies (for $x=0$) that $c=-\frac{1}{f(0)}=-1$.
So we finally get:
$$
f(x)=\frac{1}{1-x}, \ \  \forall x<1
$$
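$\bullet$ A final hedged SymPy check of the answer (a sketch, assuming SymPy is available), using the equivalent conditions $f'=f^{2}$, $f(0)=1$ derived above:

```python
# Verify that f(x) = 1/(1 - x) satisfies f' = f**2 with f(0) = 1.
from sympy import symbols, diff, simplify

x = symbols('x')
f = 1/(1 - x)

print(simplify(diff(f, x) - f**2))   # expected: 0
print(f.subs(x, 0))                  # expected: 1, matching f(0) = e**0
```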