Curve Sketching

The figure illustrates a means to sketch a sine curve: identify as many of the following features as possible - asymptotic behaviour, periodic behaviour, vertical asymptotes, the $y$ intercept, the $x$ intercepts, and the local peaks and valleys (extrema). With these, a sketch fills in between the points and lines associated with these values.

After identifying asymptotic behaviours, a curve sketch involves identifying the $y$ intercept, if applicable; the $x$ intercepts, if possible; and the local extrema. From there a sketch fills in between the points. In this example, the periodic function $f(x) = 10\cdot\sin(\pi/2\cdot x)$ is sketched over $[0,4]$.
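
As a quick numeric check of these values (this code is illustrative and not part of the figure), the function can be evaluated at the candidate points:

f(x) = 10 * sin(pi/2 * x)
f.([0, 1, 2, 3, 4])   # y intercept 0; peak 10 at x=1; x intercepts (up to floating point) at 0, 2, 4; valley -10 at x=3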

Though this approach is most useful for hand-sketches, the underlying concepts are important for properly framing graphs made with the computer.

We can easily make a graph of a function over a specified interval. What is not always so easy is to pick an interval that shows off the features of interest. In the section on rational functions there was a discussion about how to draw graphs for rational functions so that horizontal and vertical asymptotes can be seen. These are properties "in the large." In this section, we build on this, but concentrate now on more local properties of a function.

Positive and increasing on an interval

Before beginning, we need some vocabulary:

A function $f$ is positive on an interval $I$ if for any $a$ in $I$ it must be that $f(a) > 0$.

Of course, we define negative in a parallel manner. The intermediate value theorem says a continuous function can not change from positive to negative without crossing $0$. This is not the case for functions with jumps, of course.

A function, $f$, is (strictly) increasing on an interval $I$ if for any $a < b$ in $I$ it must be that $f(a) < f(b)$.

The word strictly refers to the use of the strict inequality $<$, precluding the possibility of the function being flat over part of the interval, something the $\leq$ inequality would allow.

A parallel definition with $a < b$ implying $f(a) > f(b)$ would be used for a strictly decreasing function.
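
Before any formal work, a quick numeric check over sampled points can suggest whether a function is positive or increasing on an interval (a sketch, not a proof; the sample points are an arbitrary choice):

xs = range(0.1, stop=pi/2, length=100)   # sample points in (0, pi/2)
all(sin.(xs) .> 0), issorted(sin.(xs))   # (true, true): suggests sin is positive and increasing here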

We can try to establish these properties for a function algebraically - we'll see both are related to the zeros of some function. However, before proceeding to that, it is usually helpful to get an idea of where the answer is by using exploratory graphs.

This helper function plots the function f, coloring the graph according to whether the second (masking) function g is non-negative (blue) or negative (red).

function plotif(f, g, a, b)
    # assumes a plotting package (e.g., Plots) is loaded
    xs = range(a, stop=b, length=251)
    ys = f.(xs)
    # color blue where g is non-negative, red where g is negative
    cols = [gx < 0 ? :red : :blue for gx in g.(xs)]
    plot(xs, ys, color=cols, linewidth=5, legend=false)
end

Such a function is defined for us in the accompanying package, loaded with

using CalculusWithJulia

To see where a function is positive, we simply pass the function object in for both f and g above. For example, let's look at where $f(x) = \sin(x)$ is positive:

f(x) = sin(x)
plotif(f, f, -2pi, 2pi)

Let's graph with cos in the masking spot and see what happens:

plotif(sin, cos, -2pi, 2pi)

Maybe surprisingly, we see that the increasing parts of the sine curve are now highlighted. Of course, the cosine is the derivative of the sine function; as we discuss now, this is no coincidence.

For the sequel, we will use f' notation to find numeric derivatives, with the notation being defined in the CalculusWithJulia package using the ForwardDiff package.
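
Concretely, the definition is along these lines (a sketch; the actual package definition may differ in details):

using ForwardDiff
Base.adjoint(f::Function) = x -> ForwardDiff.derivative(f, x)   # so f' is a function computing the derivative
sin'(0)   # 1.0, matching cos(0); f'' works by applying ' twice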

Relative extrema

When a function changes from increasing to decreasing, or decreasing to increasing, it will have a peak or a valley. We call such points relative extrema. A definition may be:

The function $f(x)$ has a relative maximum at $x$ if the value $f(x)$ is an absolute maximum for some open interval containing $x$. Similarly, a relative minimum is defined.

If $x$ is a relative maximum, then there is some $a < x < b$ with $f(u) \leq f(x)$ for any $u \in (a,b)$. This does not say there is an absolute maximum over any interval $I$ at $x$, but rather there is some (open) interval for which $f(x)$ is an absolute maximum.

Fermat's theorem implies that any local extremum must occur at a critical point of the function $f(x)$.
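
For example, with the numeric derivative notation just introduced, we can confirm that the obvious relative minimum of a simple (illustrative) function occurs at a zero of its derivative:

q(x) = (x - 1)^2 + 2   # an illustrative function with a relative minimum at x = 1
q'(1)                  # 0.0, so x = 1 is a critical point, as Fermat's theorem requires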

The relationship between the derivative and increasing functions

The derivative, $f'(x)$, computes the slope of the tangent line to the graph of $f(x)$ at the point $(x,f(x))$. If the derivative is positive, the tangent line will have positive slope. Clearly, if we see an increasing function and mentally layer on a tangent line, it will have a positive slope. Intuitively then, increasing functions and positive derivatives are related concepts. But there are some technicalities.

Suppose $f(x)$ has a derivative on $I$. Then

If $f'(x)$ is positive on an interval $I=(a,b)$, then $f(x)$ is strictly increasing on $I$.

Meanwhile,

If a function $f(x)$ is increasing on $I$, then $f'(x) \geq 0$.

The technicality is the equality part: in the second statement, we can only conclude the derivative is non-negative, as we can't guarantee it is positive, even if we consider just strictly increasing functions.

The example of $f(x) = x^3$ shows that a strictly increasing function can have a zero derivative at a point.

The mean value theorem provides the reasoning behind the first statement: on $I$, the slope of any secant line between $d < e$ (both in $I$) is matched by the slope of some tangent line, which by assumption will always be positive. If the secant line slope is written as $(f(e) - f(d))/(e - d)$ with $d < e$, then, as this slope is positive and $e - d > 0$, it is clear that $f(e) - f(d) > 0$, or $f(e) > f(d)$.

The second part follows from the secant-line slope as well. The derivative can be written as a limit of secant-line slopes, each of which is positive for an increasing function. The limit of positive values can only be non-negative, though there is no guarantee that the limit will be positive.
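
A numeric illustration of this (not a proof): $e^x$ is increasing, and every sampled secant-line slope is positive, consistent with its derivative being non-negative:

xs = range(-2, stop=2, length=25)
secant_slopes = [(exp(e) - exp(d)) / (e - d) for d in xs, e in xs if d < e]
all(secant_slopes .> 0)   # true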

So, to visualize where a function is increasing, we can just pass in the derivative as the masking function in our plotif function, as long as we are wary about places with $0$ derivative (flat spots).

For example, here we see where a more complicated function is increasing by passing in its derivative to plotif:

f(x) = sin(pi*x) * (x^3 - 4x^2 + 2)
plotif(f, f', -2, 2)

First derivative test

We know since Fermat that relative maxima and minima occur at critical points. But what is not true is that all critical points correspond to relative maxima and minima. Again, $f(x)=x^3$ provides the example at $x=0$. This is a critical point, but clearly not a relative maximum or minimum - it is just a slight pause for a strictly increasing function.

When will a critical point correspond to a relative maximum or minimum? That question can be answered by considering the first derivative.

The first derivative test: If $c$ is a critical point for $f(x)$ and if $f'(x)$ changes sign at $x=c$, then $f(c)$ will be either a relative maximum or a relative minimum. It will be a relative maximum if the derivative changes sign from $+$ to $-$ and a relative minimum if the derivative changes sign from $-$ to $+$. If $f'(x)$ does not change sign at $c$, then $(c,f(c))$ is not a relative maximum or minimum.

The classification part should be clear: e.g., if the derivative is positive then negative, the function $f$ will increase up to $(c,f(c))$ and then decrease from $(c,f(c))$ - so a local maximum.

Our definition of a critical point assumes $f(c)$ exists, as $c$ is in the domain of $f$. With this assumption, vertical asymptotes are avoided. However, it need not be that $f'(c)$ exists. The absolute value function at $x=0$ provides an example: this is a critical point where the derivative changes sign, even though the derivative is not defined exactly at $x=0$. Regardless, the first derivative test guarantees that $(0,f(0))$ will be a relative minimum.
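
The test is easy to apply numerically. A small helper - hypothetical, not part of CalculusWithJulia, with h an assumed small offset - might check the sign of $f'$ just to the left and right of a critical point:

function classify(f, c; h=1e-3)
    l, r = sign(f'(c - h)), sign(f'(c + h))     # sign of f' to the left and right of c
    l < 0 && r > 0 && return "relative minimum"
    l > 0 && r < 0 && return "relative maximum"
    "no sign change (or h too coarse)"
end

classify(abs, 0)   # "relative minimum", as described above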

Example

Consider the function $f(x) = e^{-\lvert x\rvert} \cos(\pi x)$ over $[-3,3]$:

f(x) = exp(-abs(x)) * cos(pi * x)
plotif(f, f', -3, 3)

We can see the first derivative test in action: at the peaks and valleys - the relative extrema - the color changes, as the function changes from increasing to decreasing or vice versa.

Example

Produce a graph of the function $f(x) = x^4 -13x^3 + 56x^2-92x + 48$.

We identify this as a fourth-degree polynomial with positive leading coefficient. Hence it will eventually look $U$-shaped. If we graph over a too-wide interval, that is all we will see. Rather, we do some work to produce a graph that shows the zeros, peaks, and valleys of $f(x)$. To do so, we need to know the extent of the zeros. We can try some theory, but instead we just guess and, if that fails, will work harder:

f(x) = x^4 - 13x^3 + 56x^2 -92x + 48
rts = find_zeros(f, -10, 10)
4-element Array{Float64,1}:
 0.9999999999999999
 2.000000000000005
 4.000000000000004
 6.000000000000003

As we found $4$ roots, we know by the fundamental theorem of algebra we have them all. This means, our graph need not focus on values much larger than $6$ or much smaller than $1$.

To know where the peaks and valleys are, we look for the critical points:

cps = find_zeros(f', 1, 6)
3-element Array{Float64,1}:
 1.4257862014364733
 3.0704645527012464
 5.253749245862282

Because we have $4$ distinct zeros, the peaks and valleys must appear in an interleaving manner, so a search over $[1,6]$ finds all three critical points and, without further checking, they must correspond to relative extrema.

Finally, we check that if we were to use $[0,7]$ as the domain to plot over, the function does not get so large that it masks the oscillations. This could happen if the $y$ values at the end points are too much larger than the $y$ values at the peaks and valleys, as only so many pixels can be used within a graph. For this we have:

f.([0, cps..., 7])
5-element Array{Float64,1}:
  48.0
  -2.8788980209096167
   6.035382559783841
 -12.949453288874281
  90.0

The values at $0$ and at $7$ are a bit large, and since we know the graph is eventually $U$-shaped, this offers no insight. So we narrow the range a bit for the graph:

plot(f, 0.5, 6.5)

Example

Find all the relative maxima and minima of the function $f(x) = \sin(\pi \cdot x) \cdot (x^3 - 4x^2 + 2)$ over the interval $[-2, 2]$.

We will do so numerically, rather than attempt this algebraically. For this task we first need to gather the critical points. As each of the pieces of $f$ is everywhere differentiable and no quotients are involved, the function $f$ will be everywhere differentiable. As such, only zeros of $f'(x)$ can be critical points. We find these with

f(x) = sin(pi*x) * (x^3 - 4x^2 + 2)
cps = find_zeros(f', -2, 2)
6-element Array{Float64,1}:
 -1.6497065688663188
 -0.8472574303034358
 -0.3264827018362353
  0.35183579595045034
  0.8981933926622453
  1.6165308438251855

We should be careful though, as find_zeros may miss zeros that are not simple or that are too close together. A critical point will correspond to a relative extremum only if the derivative actually crosses the $x$ axis there, so these can not be "pauses." As this is exactly the case we are screening for, we double check that all the critical points are accounted for by graphing the derivative:

plot(f', -2, 2, legend=false)
plot!(zero)
scatter!(cps, 0*cps)

We see the six zeros as stored in cps and note that at each one the derivative clearly crosses the $x$ axis.

From this last graph of the derivative we can also characterize the graph of $f$: the left-most critical point coincides with a relative minimum of $f$, as the derivative changes sign from negative to positive. The critical points then alternate: relative maximum, relative minimum, relative maximum, relative minimum, and finally relative maximum.

Example

Consider the function $f(x) = \sqrt{\lvert x^2 - 1\rvert}$. Find the critical points and characterize them as relative extrema or not.

We will apply the same approach, but first need to get a handle on how wide an interval to consider. The function is a composition of three functions. We should expect that the only critical points will occur where the interior polynomial, $x^2-1$, has values of interest, which is around the interval $(-1, 1)$. So we look over the slightly wider interval $[-2, 2]$:

f(x) = sqrt(abs(x^2 - 1))
cps = find_zeros(f', -2, 2)
3-element Array{Float64,1}:
 -0.9999999999999999
  0.0
  0.9999999999999999

We see the three values $-1$, $0$, $1$ that correspond to the two zeros and the relative minimum of $x^2 - 1$. We could graph things, but instead we characterize these values using a sign chart. A continuous function can only change sign when it crosses $0$, and the derivative here will be continuous, except possibly at the three values above.

We can then pick intermediate values to test for positive or negative values:

pts = union(-2, cps, 2)  # this includes the endpoints (a, b) and the critical points
test_pts = pts[1:end-1] + diff(pts)/2 # midpoints of intervals between pts
[test_pts sign.(f'.(test_pts))]
4×2 Array{Float64,2}:
 -1.5  -1.0
 -0.5   1.0
  0.5  -1.0
  1.5   1.0

Reading this, the derivative is negative on $(-2,-1)$, positive on $(-1,0)$, negative on $(0,1)$, and positive on $(1,2)$. By the first derivative test, $x=-1$ and $x=1$ correspond to relative minima and $x=0$ to a relative maximum.

Just for fun, we define a function to print + or - instead of just using sign:

plus_or_minus(x, tol=1e-12) = x > tol ? "+" : (x < -tol ? "-" : "0")
[test_pts plus_or_minus.(f'.(test_pts))]
4×2 Array{Any,2}:
 -1.5  "-"
 -0.5  "+"
  0.5  "-"
  1.5  "+"

We did this all without graphs. But, let's look at the graph of the derivative:

plot(f', -2, 2)

We see asymptotes at $x=-1$ and $x=1$! These aren't zeros of $f'(x)$, but rather places where $f'(x)$ does not exist. The conclusion is correct - each of $-1$, $0$ and $1$ is a critical point with the identified characterization - but not for the reason that they are all zeros of $f'(x)$.

A graph of $f$ itself shows the two cusps at $x=\pm 1$ (the relative minima) and the relative maximum at $x=0$:

plot(f, -2, 2)

Finally, why does find_zeros find these values that are not zeros of $f'(x)$? It uses the bisection algorithm over bracketing intervals to find zeros, which are guaranteed to exist by the intermediate value theorem. But we see from the graph that $f'(x)$ is not continuous. Still, the algorithm will also converge to values where the function jumps over $0$, which this function clearly does.

Example

Consider the function $f(x) = \sin(x) - x$. Characterize the critical points.

We will work symbolically for this example.

f(x) = sin(x) - x
@vars x
fp = diff(f(x), x)
cps = solve(fp)
\[ \left[ \begin{array}{r}0\\2 \pi\end{array} \right] \]

We get values of $0$ and $2\pi$. Let's look at the derivative at these points:

At $x=0$, the signs to the left and the right are found by

vals = fp.([-1/10, 1/10])
[vals plus_or_minus.(vals)]
2×2 Array{Any,2}:
 -0.00499583472197429  "-"
 -0.00499583472197429  "-"

Both are negative. The derivative does not change sign at $0$, so the critical point is neither a relative minimum nor a relative maximum.

What about at $2\pi$? We do something similar:

vals = fp.(2*pi .+ [-1/10,  1/10])
[vals plus_or_minus.(vals)]
2×2 Array{Any,2}:
 -0.00499583472197418  "-"
 -0.00499583472197418  "-"

Again, both are negative. The function $f(x)$ is simply decreasing near $2\pi$, so again the critical point is neither a relative minimum nor a relative maximum.

A graph verifies this:

plot(f, -3pi, 3pi)

We see that at $0$ and $2\pi$ there are "pauses" as the function decreases. We should also see that this pattern repeats. The critical points found by solve are only those within a certain domain. Any value that satisfies $\cos(x) - 1 = 0$ will be a critical point, and there are infinitely many of these of the form $n \cdot 2\pi$ for $n$ an integer.

As a comment, the solveset function, which is replacing solve, returns the entire collection of zeros:

solveset(fp)
\begin{equation*}\left\{2 n \pi\; |\; n \in \mathbb{Z}\right\}\end{equation*}

Example

Suppose you know $f'(x) = (x-1)\cdot(x-2)\cdot (x-3) = x^3 - 6x^2 + 11x - 6$ and $g'(x) = (x-1)\cdot(x-2)^2\cdot(x-3)^3 = x^6 -14x^5 +80x^4-238x^3+387x^2-324x+108$.

How would the graphs of $f(x)$ and $g(x)$ differ, as they share identical critical points?

The graph of $f(x)$ - a function we do not have a formula for - can have its critical points characterized by the first derivative test. As the derivative changes sign at each, all three critical points correspond to relative extrema. The sign pattern is negative/positive/negative/positive, so from left to right we have a relative minimum, a relative maximum, and then a relative minimum.

For the graph of $g(x)$ we can apply the same analysis. Thinking for a moment, we see as the factor $(x-2)^2$ comes as a power of $2$, the derivative of $g(x)$ will not change sign at $x=2$, so there is no relative extreme value there. However, at $x=3$ the factor has an odd power, so the derivative will change sign at $x=3$. So, as $g'(x)$ is positive for large negative values, there will be a relative maximum at $x=1$ and, as $g'(x)$ is positive for large positive values, a relative minimum at $x=3$.

This behaviour is consistent with $g(x)$ being a $7$th degree polynomial with positive leading coefficient. It is intuitive that since $g'(x)$ is a $6$th degree polynomial, $g(x)$ will be a $7$th degree one, as differentiating a polynomial lowers its degree by one.
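
A quick numeric check of the sign pattern of $g'(x)$ at test points (using the plus_or_minus helper defined earlier; gp is just a name for $g'$) is consistent with this analysis:

gp(x) = (x - 1) * (x - 2)^2 * (x - 3)^3
plus_or_minus.(gp.([0, 1.5, 2.5, 4]))   # "+", "-", "-", "+": sign changes only at x = 1 and x = 3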

Concavity

Consider the function $f(x) = x^2$. Over this function we draw some secant lines for a few pairs of $x$ values:

The graph attempts to illustrate that for this function the secant line between any two points $a$ and $b$ will lie above the graph over $[a,b]$.

This is a special property not shared by all functions. Let $I$ be an open interval.

Concave up: A function $f(x)$ is concave up on $I$ if for any $a < b$ in $I$, the secant line between $a$ and $b$ lies above the graph of $f(x)$ over $[a,b]$.

A similar definition exists for concave down, where the secant lines lie below the graph. Notationally, concave up says $f(a) + (f(b) - f(a))/(b-a) \cdot (x-a) \geq f(x)$ for any $x$ in $[a,b]$. Replacing $\geq$ with $\leq$ defines concave down, and using either $>$ or $<$ adds the prefix "strictly." These definitions lead to a general definition of convex functions. We won't work with these definitions here; rather, we will characterize concavity for functions which have either a first or second derivative.
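
For $f(x) = x^2$ this inequality is easy to check numerically over sampled points (a sanity check for one secant line, not a proof):

a, b = -1, 2                                        # endpoints of the secant line
secant(x) = a^2 + (b^2 - a^2)/(b - a) * (x - a)     # secant line of x^2 between a and b
xs = range(a, stop=b, length=100)
all(secant.(xs) .>= xs .^ 2)                        # true: the secant lies above the graph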

If $f'(x)$ exists and is increasing on $(a,b)$ then $f(x)$ is concave up on $(a,b)$.

A proof of this makes use of the same trick used to establish the mean value theorem from Rolle's theorem. Let $g(x) = f(x) - (f(a) + M \cdot (x-a))$, where $M$ is the slope of the secant line between $a$ and $b$. By construction $g(a) = g(b) = 0$. If $f'(x)$ is increasing, then so is $g'(x) = f'(x) - M$. Concave up means $g(x) \leq 0$ on $[a,b]$. Suppose to the contrary that there is a value $c$ in $[a,b]$ with $g(c) > 0$; we show this can't be. Since $g(a) = 0 < g(c)$, the mean value theorem gives a point $d$ in $(a,c)$ with $g'(d) > 0$; similarly, since $g(c) > 0 = g(b)$, there is a point $e$ in $(c,b)$ with $g'(e) < 0$. So $g'$ would go from a positive value to a negative value, which can't happen, as $g'(x)$ is assumed to be increasing on the interval.

Similarly, if a function has a decreasing derivative on $I$ then it will be concave down on $I$.

The relationship between increasing functions and their derivatives - if $f'(x) > 0$ on $I$, then $f(x)$ is increasing on $I$ - gives this second characterization of concavity when the second derivative exists:

If $f''(x)$ exists and is positive on $I$, then $f(x)$ is concave up on $I$.

This follows, as we can think of $f''(x)$ as just the first derivative of the function $f'(x)$, so the assumption will force $f'(x)$ to exist and be increasing, and hence $f(x)$ to be concave up.

Example

Let's look at the function $x^2 \cdot e^{-x}$ for positive $x$. A quick graph shows the function is concave up, then down, then up in the region plotted:

f(x) = x^2 * exp(-x)
plotif(f, f'', 0, 8)

From the graph, we would expect that the second derivative - which is continuous - would have two zeros on $[0,8]$:

ips = find_zeros(f'', 0, 8)
2-element Array{Float64,1}:
 0.5857864376269049
 3.414213562373095

As well, on the three intervals determined by these zeros we should have the sign pattern $+$, $-$, $+$, as we verify:

ps = [0, 1, 4]
[ps plus_or_minus.(f''.(ps))]
3×2 Array{Any,2}:
 0  "+"
 1  "-"
 4  "+"

Second derivative test

Concave up functions are "opening" up, and often just $U$-shaped. At a relative minimum, the graph will be concave up, and conversely concave down at a relative maximum. This observation becomes:

The second derivative test: If $c$ is a critical point of $f(x)$ with $f''(x)$ existing in a neighborhood of $c$, then $(c,f(c))$ will be a relative maximum if $f''(c) < 0$ and a relative minimum if $f''(c) > 0$.

Why is this? If $f''(x)$ is positive in an interval about $c$, then $f(x)$ is concave up near $x=c$. Concave up means the derivative is increasing, so, as $f'(c) = 0$, the derivative must go from negative to positive at the critical point - a relative minimum. The concave-down case is similar.

The second derivative test is inconclusive when $f''(c)=0$. No general statement is possible then, as there isn't enough information. For example, the function $f(x) = x^3$ has $0$ as a critical point with $f''(0)=0$, and this value does not correspond to a relative maximum or minimum. On the other hand, $f(x)=x^4$ has $0$ as a critical point with $f''(0)=0$, and that critical point is a relative minimum.
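
A quick check makes the point, using the numeric derivative notation from above (h1 and h2 are just names for these two examples):

h1(x) = x^3
h2(x) = x^4
h1''(0), h2''(0)   # both 0.0, yet 0 is not an extremum for h1 but is a relative minimum for h2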

Example

Use the second derivative test to characterize the critical points of $f(x) = x^5 - 2x^4 + x^3$.

f(x) = x^5 - 2x^4 + x^3
cps = find_zeros(f', -3, 3)
3-element Array{Float64,1}:
 0.0
 0.6000000000000001
 1.0

The critical points are identified only up to floating point error; in particular, the one at $x=0$ comes from a repeated factor of $f'(x)$ and can be masked by floating point issues. Here we work around that by using the exact values:

cps = [0.0, 0.6, 1.0]
3-element Array{Float64,1}:
 0.0
 0.6
 1.0

We can check the sign of the second derivative for each critical point:

vals = f''.(cps)
[cps plus_or_minus.(vals)]
3×2 Array{Any,2}:
 0.0  "0"
 0.6  "-"
 1.0  "+"

That $f''(0.6) < 0$ implies that at $x=0.6$, $f(x)$ will have a relative maximum. As $f''(1) > 0$, the second derivative test says that at $x=1$ there will be a relative minimum. That $f''(0) = 0$ says only that there may or may not be a relative maximum or minimum at $x=0$ - the second derivative test does not speak to this situation.

This should be consistent with this graph, where the limits $-0.25$ and $1.25$ are chosen to capture the zero at $0$ and the two relative extrema:

plotif(f, f'', -0.25, 1.25)

From the graph we see that $0$ is not a relative maximum or minimum. We could have seen this numerically by checking the first derivative test and noting there is no sign change:

plus_or_minus.(f'.([-0.1, 0.1]))
2-element Array{String,1}:
 "+"
 "+"

Inflection points

An inflection point is a value where the second derivative of $f$ changes sign. At an inflection point the derivative will change from increasing to decreasing (or vice versa) and the function will change from concave up to down (or vice versa).

We can use the find_zeros function to find inflection points, by passing in the second derivative function. For example, consider the bell-shaped function

\[ ~ f(x) = e^{-x^2/2}. ~ \]

A graph suggests a relative maximum at $x=0$, a horizontal asymptote of $y=0$, and two inflection points:

f(x) = exp(-x^2/2)
plotif(f, f'', -3, 3)

The inflection points can be found directly, if desired, or numerically with:

find_zeros(f'', -3, 3)
2-element Array{Float64,1}:
 -1.0
  1.0

(The find_zeros function may return points which are not inflection points. It primarily returns points where $f''(x)$ changes sign, but may also find points where $f''(x)$ is $0$ yet does not change sign at $x$.)
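
If desired, candidate points can be screened for an actual sign change of $f''$. A small hypothetical helper (with h an assumed small offset) might look like:

function is_inflection(f, c; h=1e-3)
    sign(f''(c - h)) * sign(f''(c + h)) < 0   # true only if f'' changes sign across c
end

is_inflection(f, 1.0)   # true for the bell-shaped f above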

Example

Graph the function

\[ ~ f(x) = \frac{(x-1)\cdot(x-3)^2}{x \cdot (x-2)}. ~ \]

Not much to do here if you are satisfied with a graph that only gives insight into the asymptotes of this rational function:

f(x) = ( (x-1)*(x-3)^2 ) / (x * (x-2) )
plot(f, -10, 10)

We can see the slant asymptote and hints of vertical asymptotes, but, we'd like to see more of the basic features of the graph.

Previously, we have discussed rational functions and their asymptotes. This function has a numerator of degree $3$ and a denominator of degree $2$, so it will have a slant asymptote. As well, the zeros of the denominator, $0$ and $2$, will lead to vertical asymptotes.

To identify how wide a viewing window should be: for a rational function, the asymptotic behaviour takes over once the concavity is done changing and we are past all relative extrema, so we should take an interval that includes all potential inflection points and critical points:

cps = find_zeros(f', -10, 10)
poss_ips = find_zeros(f'', -10, 10)
extrema(union(cps, poss_ips))
(-2.1527576020103956, 3.0)

So a range over $[-5,5]$ should display the key features including the slant asymptote.

Previously we used the trimplot function defined in CalculusWithJulia to avoid the distortion that vertical asymptotes can have:

trimplot(f, -5, 5)

With this graphic, we can now clearly see in the graph the two zeros at $x=1$ and $x=3$, the vertical asymptotes at $x=0$ and $x=2$, and the slant asymptote.

Example

A car travels from a stop for 1 mile in 2 minutes. A graph of its position as a function of time might look like any of these graphs:
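
For concreteness, the formulas below are one illustrative possibility for such position functions (assumptions, not the text's figures): f1 has constant velocity, f2 sits still for the first and last 30 seconds, and f3 changes velocity gradually.

f1(t) = t/2                                           # constant velocity of 1/2 mile per minute
f2(t) = t < 1/2 ? 0.0 : (t > 3/2 ? 1.0 : t - 1/2)     # stopped, then 1 mile per minute, then stopped
f3(t) = (1 - cos(pi*t/2))/2                           # velocity changes gradually; inflection at t = 1
plot([f1, f2, f3], 0, 2, legend=false)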

All three graphs have the same average velocity, which is just $1/2$ mile per minute ($30$ miles an hour). But the instantaneous velocity - which is given by the derivative of the position function - varies.

The graph f1 has constant velocity, so the position is a straight line with slope $v_0$. The graph f2 is similar, though for the first and last 30 seconds the car does not move, so it must move faster during the time it does move. A more realistic graph would be f3. The position increases continuously, as with the others, but the velocity changes more gradually. The initial velocity is less than $v_0$, but it eventually exceeds $v_0$; toward the end the velocity no longer increases. At no point does the velocity change abruptly for f3, the way it does for f2 after a minute and a half.

The rate of change of the velocity is the acceleration. For f1 this is zero; for f2 it is zero as well - when it is defined. However, for f3 we see the increase in velocity is positive in the first minute but negative in the second minute. This fact relates to the concavity of the graph. As acceleration is the derivative of velocity, it is the second derivative of position - the graph we see. Where the acceleration is positive, the position graph will be concave up; where the acceleration is negative, the graph will be concave down. The point $t=1$ is an inflection point, and it would be felt by most riders.

Questions

Question

Consider this graph:

plot(airyai, -5, 0)  # airyai in `SpecialFunctions` loaded with `CalculusWithJulia`

On what intervals (roughly) is the function positive?

Question

Consider this graph:

On what intervals (roughly) is the function negative?

Question

Consider this graph

On what interval(s) is this function increasing?

Question

Consider this graph

On what interval(s) is this function concave up?

Question

Consider this graph

What kind of asymptotes does it appear to have?

Question

If it is known that:

What can be concluded?

Question

Mystery function $f(x)$ has $f'(2) = 0$ and $f''(0) = 2$. What is the most you can say about $x=2$?

Question

Find the smallest critical point of $f(x) = x^3 e^{-x}$.

Question

How many critical points does $f(x) = x^5 - x + 1$ have?

Question

How many inflection points does $f(x) = x^5 - x + 1$ have?

Question

At $c$, $f'(c) = 0$ and $f''(c) = 1 + c^2$. Is $(c,f(c))$ a relative maximum? ($f$ is a "nice" function.)

Question

At $c$, $f'(c) = 0$ and $f''(c) = c^2$. Is $(c,f(c))$ a relative minimum? ($f$ is a "nice" function.)

Question

The graph shows $f'(x)$. Is it possible that $f(x) = e^{-x} \sin(\pi x)$?

Question

The graph shows $f'(x)$. Is it possible that $f(x) = x^4 - 3x^3 - 2x + 4$?

Question

The graph shows $f''(x)$. Is it possible that $f(x) = (1+x)^{-2}$?

Question

This plot shows the graph of $f'(x)$. What is true about the critical points and their characterization?

Question

You know $f''(x) = (x-1)^3$. What do you know about $f(x)$?

Question

While driving we accelerate to get through a light before it turns red. However, at time $t_0$ a car cuts in front of us and we are forced to brake. If $s(t)$ represents position, what is $t_0$:

Question

Two models for population growth are exponential growth, $P(t) = P_0 a^t$, and logistic [growth](https://en.wikipedia.org/wiki/Logistic_function#In_ecology:_modeling_population_growth), $P(t) = K P_0 a^t / (K + P_0(a^t - 1))$. The exponential growth model has growth rate proportional to the current population. The logistic model has growth rate depending on the current population and the available resources (which can limit growth).

Letting $K=50$, $P_0=5$, and $a = e^{1/4}$, a plot over $[0,5]$ shows somewhat similar behaviour:

K, P0, a = 50, 5, exp(1/4)
egrowth(t) = P0 * a^t
lgrowth(t) = K * P0 * a^t / (K + P0*(a^t-1))

plot([egrowth, lgrowth], 0, 5)

Does a plot over $[0,50]$ show qualitatively similar behaviour?

Exponential growth has $P''(t) = P_0 a^t \log(a)^2 > 0$, so has no inflection point. By plotting over a sufficiently wide interval, can you answer: does the logistic growth model have an inflection point?

If yes, find it numerically:

The available resources are quantified by $K$. As $K \rightarrow \infty$ what is the limit of the logistic growth model:

Question

The algorithm used for plotting functions starts with a small initial set of points over the specified interval ($21$ points) and then refines those sub-intervals where the second derivative is determined to be large.

Why are sub-intervals where the second derivative is large different than those where the second derivative is small?

Question

Is there a nice algorithm to identify what domain a function should be plotted over to produce an informative graph? Wilkinson has some suggestions. (Wilkinson is well known to the R community as the specifier of the grammar of graphics.) It is mentioned that "finding an informative domain for a given function depends on at least three features: periodicity, asymptotics, and monotonicity."

Why would periodicity matter?

Why should asymptotics matter?

Monotonicity means increasing or decreasing. This is important for what reason?