3 Differential Equations
3.1 Basic Notions
A differential equation is any sort of equation relating one or more functions and their derivatives. For example,
\[f'(x) = f(x)\]
or
\[\sin(x^2)y + y''x - 3x^2 = \cos(x)\]
or
\[\begin{align*} X' &= 2XY - X^2 Y\\ Y' &= Y^5 + XY^2 \end{align*}\]
The order of a differential equation is the order of the highest derivative involved. The first and third examples are 1st order, and the second one is 2nd order. We won’t consider differential equations with partial derivatives.
One way to think about a differential equation is that it is a rule constraining the growth (velocity, acceleration, and higher derivatives) of a function in terms of its own values. For example, the first equation describes exponential growth (or decay) where a function increases in proportion to its own size.
Usually one function is considered unknown, and we’re interested in understanding the behavior of the function in terms of the differential equation it satisfies. Many functions can satisfy a given differential equation, but in typical circumstances a single function can be picked out by specifying “initial conditions”, values of the unknown function (and some of its derivatives) at a particular point. For example, \[f'(x) = f(x)\] with the initial condition \(f(0) = 1\) uniquely determines the exponential function \(f(x) = e^x\).
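One can check this last claim concretely with a computer algebra system. Here is a minimal sketch using SymPy’s dsolve (the choice of tool is ours, not part of the notes):

```python
from sympy import Eq, Function, dsolve, symbols

x = symbols('x')
f = Function('f')

# The IVP f'(x) = f(x), f(0) = 1 picks out exactly one solution.
sol = dsolve(Eq(f(x).diff(x), f(x)), f(x), ics={f(0): 1})
print(sol)  # Eq(f(x), exp(x))
```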
In the case of a system of equations, like the third example, we can think of it as a vector-valued function, so an initial condition would specify both \(X\) and \(Y\), such as \(X(0) = 2\) and \(Y(0) = 5\).
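Concretely, the vector-valued form of the third example with this initial condition is
\[\mathbf{v}(t) = \begin{pmatrix} X(t) \\ Y(t) \end{pmatrix}, \qquad \mathbf{v}' = \begin{pmatrix} 2XY - X^2 Y \\ Y^5 + XY^2 \end{pmatrix}, \qquad \mathbf{v}(0) = \begin{pmatrix} 2 \\ 5 \end{pmatrix}.\]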
Definition 3.1 An order \(n\) differential equation in an unknown function \(x\) of a variable \(t\) can be written as \[F(x, x', \dots, x^{(n)}, t) = 0.\]
When \(n\) initial conditions \(x(t_0) = x_0,\ x'(t_0) = x'_0,\ \dots,\ x^{(n-1)}(t_0) = x^{(n-1)}_0\) are specified, this is called an initial value problem.
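For instance, the second example above fits this template, with unknown \(y\) and variable \(x\), by moving everything to one side: \[F(y, y', y'', x) = \sin(x^2)\,y + x\,y'' - 3x^2 - \cos(x) = 0.\] Since it is 2nd order, an initial value problem for it would specify \(y(x_0)\) and \(y'(x_0)\).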
You’ve seen many of these before: integrating a function \(f(x)\) by finding an antiderivative means looking for a function \(F(x)\) which satisfies the equation \[F'(x) = f(x).\] The bounds of the integral turn into initial conditions: the integral \(\int_a^x f(t)\,dt\) is the solution satisfying the initial condition \(F(a) = 0\).
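For example, \(F(x) = \int_0^x \cos(t)\,dt = \sin(x)\) is the solution of the initial value problem \[F'(x) = \cos(x), \qquad F(0) = 0.\]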
3.2 Existence-Uniqueness
Not all initial value problems have solutions. From calculus, we know that the initial value problem associated to integrating a continuous function on a closed interval has a solution (a unique one, even). More generally:
Theorem 3.1 Consider a first-order differential equation \[x' = f(x,t)\] with an initial condition \(x(t_0) = x_0\). Consider the rectangle \([t_0 - A, t_0 + A] \times [x_0 - B, x_0 + B]\) centered at the point \((t_0, x_0)\). Suppose that \(f\) is continuous on this rectangle and bounded there by \(M\).
Then this initial value problem has a solution for all \(t\) such that \[|t_0 - t| \leq \min\{A, B/M\}\] (so the graph of the solution stays inside a sub-rectangle of the original rectangle).
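For instance (a worked example of the bound, not from the notes): take \(x' = x^2 + t^2\) with \(x(0) = 0\) and \(A = B = 1\). On the rectangle \([-1,1] \times [-1,1]\) we have \(|x^2 + t^2| \leq 2\), so \(M = 2\) and the theorem guarantees a solution for \(|t| \leq \min\{1, 1/2\} = 1/2\).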
Most IVPs have solutions, but even relatively simple ones need not have a unique solution.
Example 3.1 Consider the differential equation
\[f'(x) = \frac 1 x.\]
Let \(x_0\) be some nonzero number. Every initial value problem \(f(x_0) = y_0\) has infinitely many solutions.
Indeed, suppose \(x_0 > 0\). Then one solution, defined on all of \(\mathbb{R}\setminus\{0\}\), is \[f(x) = \ln|x| - \ln(x_0) + y_0,\] but one can add any constant to the piece of \(f\) to the left of zero and obtain another solution. The same can be done on the right half of the axis (after inserting an absolute value around \(x_0\)) if \(x_0\) is negative.
(Calculus II courses sometimes gloss over this: the antiderivative of \(\frac 1 x\) is not \(\ln|x| + C\); the constant can be specified independently on each half of the \(x\)-axis)
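One can see the phenomenon numerically. Here is a minimal sketch (the function names and sample points are our own choices): both functions below have derivative \(1/x\) and satisfy \(f(1) = 0\), yet they disagree for \(x < 0\).

```python
import numpy as np

def f1(x):
    # One antiderivative of 1/x, with f1(1) = 0.
    return np.log(np.abs(x))

def f2(x):
    # Another: add a constant only on the half-axis left of zero.
    return np.log(np.abs(x)) + np.where(x < 0, 5.0, 0.0)

xs = np.array([-2.0, -0.5, 0.5, 1.0, 2.0])
h = 1e-6
for f in (f1, f2):
    # Central-difference check that f'(x) is about 1/x at each sample point.
    deriv = (f(xs + h) - f(xs - h)) / (2 * h)
    print(np.max(np.abs(deriv - 1 / xs)))  # tiny: both satisfy f' = 1/x
print(f1(1.0), f2(1.0))    # both 0.0: same initial condition
print(f1(-2.0), f2(-2.0))  # differ by 5: genuinely different solutions
```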
In the example, the discontinuity of \(1/x\) at zero plays a major role in allowing the differential equation to have infinitely many solutions.
Theorem 3.2 Continue from Theorem 3.1 and further assume that \(\partial f/\partial x\) is continuous on the rectangle. Then the solution is unique.
A “nice” initial value problem has a unique solution. Most of the IVPs we encounter from applied questions are nice or nearly so – they describe physical phenomena, which we generally expect to unfold in the same manner when set up identically.
The proofs of these results are beyond the scope of our course, but to give you a sense of the ideas at play, we will prove the uniqueness result in the special case of integration.
Theorem 3.3 Consider the differential equation \[y' = f(t).\]
If \(f\) is continuous on \([a,b]\), then any initial value problem \(y(a) = A\) has a unique solution on \([a,b]\).
Proof. Existence follows from the fact that continuous functions are integrable on closed intervals: take \(y(t) = A + \int_a^t f(s)\,ds\).
Suppose that \(y_1\) and \(y_2\) are two solutions to this IVP, meaning that both
\[\begin{align*} y_1' &= f(t), & y_1(a) &= A,\\ y_2' &= f(t), & y_2(a) &= A. \end{align*}\]
We would like to show that \(y_1 = y_2\), or equivalently that \(y_1 - y_2 = 0\). Subtracting the IVPs for \(y_1, y_2\), we see that \(z = y_1 - y_2\) satisfies the IVP
\[z' = 0\ \ \ \ z(a) = 0.\]
If \(z\) is not identically zero, then we can find some \(c \in (a,b]\) such that \(z(c) \neq 0\). Apply the MVT to \(z\) on \([a,c]\): there is some \(\zeta \in (a,c)\) such that \[z'(\zeta) = \frac{z(c) - z(a)}{c-a} = \frac{z(c)}{c-a},\]
using \(z(a) = 0\). The RHS is nonzero by assumption, yet \(z'(\zeta) = 0\) because of the IVP \(z\) satisfies. This is a contradiction, so there is no such \(c\).
One can adapt this argument to show, more generally, that the only solutions to \(y' = 0\) on a closed interval are the constant functions. These are the source of the ubiquitous “\(+C\)” in Calculus II. The “counterexample” with the logarithm is explained by the impossibility of integrating \(1/x\) across an interval containing zero.
3.3 Applications
The Existence-Uniqueness theorems are among the most useful and interesting facts we’ll encounter. They suggest the following general principle:
An initial value problem is a definition (under nice conditions).
Which is to say that “solving” a differential equation by working out an “exact” solution in terms of familiar functions is not necessarily a priority for us. Later in the course, we will develop tools for approximating values of functions from their differential equations, which is usually all one wants from the “solution” (the values of standard functions – \(\sqrt{\ }\), \(\sin\), etc. – are usually decimal approximations anyway!). Instead, we’ll apply these results to show how easy it is to prove facts about functions which are defined by differential equations.
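As a small preview of that idea, here is a minimal forward-Euler sketch (the function name and step count are our own choices, not the methods developed later): it computes values of the function defined by the IVP \(f' = f\), \(f(0) = 1\) directly from the differential equation.

```python
import math

def euler(f, t0, x0, t_end, n=1000):
    """Approximate x(t_end) for the IVP x' = f(t, x), x(t0) = x0,
    by taking n small steps along the tangent-line approximation."""
    h = (t_end - t0) / n
    t, x = t0, x0
    for _ in range(n):
        x += h * f(t, x)
        t += h
    return x

# The IVP x' = x, x(0) = 1 "defines" the exponential: its values can be
# computed from the differential equation alone.
print(euler(lambda t, x: x, 0.0, 1.0, 1.0))  # about 2.7169
print(math.e)                                # 2.71828..., for comparison
```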