Math 3316, Fall 2016
Due Sep. 15, 2016
Project 1 – Taylor Series and Floating-Point Error
The first section of this project will be checked in lab on Wednesday, Sep. 7 – this completion
grade will count for 10% of the overall project.
The final project is due by 5:00 pm on Thursday, September 15, and should be uploaded to
Canvas. Instructions on what should be turned in are included at the end of this document.
Late work will lose points based on the following schedule:
1 minute to 24 hours: 10 points
24 to 48 hours: 20 points
48 to 72 hours: 30 points
72 to 96 hours: 40 points
over 96 hours: no credit
1. Approximation of a Function by Taylor Polynomials:
In a file nest.cpp, write a C++ function with prototype
double nest(Matrix& a, double x);
to implement the Nested Multiplication algorithm (page 8 of the book) to evaluate the degree-n
polynomial

    p = a_0 + a_1 x + a_2 x^2 + ... + a_{n-1} x^{n-1} + a_n x^n.

Here, the n + 1 coefficients a_0, a_1, ..., a_n are entries of the input vector a, and the input x is a
scalar. The value of the polynomial, p, should be returned by the function.
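The nested-multiplication (Horner) loop can be sketched as follows. Since the course's Matrix class is not reproduced in this document, the sketch substitutes std::vector<double> for the coefficient vector (an assumption; the required prototype above takes a Matrix&):

```cpp
#include <vector>

// Nested multiplication (Horner's scheme) for
//   p(x) = a[0] + a[1]*x + ... + a[n]*x^n.
// std::vector<double> stands in for the course's Matrix class here;
// assumes a holds at least one coefficient.
double nest(const std::vector<double>& a, double x) {
    double p = a.back();                          // start with a_n
    for (int i = static_cast<int>(a.size()) - 2; i >= 0; --i)
        p = p * x + a[i];                         // p <- p*x + a_i
    return p;
}
```

Each pass multiplies the partial result by x and adds the next-lower coefficient, so evaluating the degree-n polynomial costs only n multiplications and n additions.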
Let pn(x) be the Taylor polynomial of degree n for f(x) = e^x at 0. Write a C++ main() routine
in the file proj1_a.cpp that uses Linspace and your function nest to compute the following:
• Create the row vector
  z = [-1.0, -0.99, -0.98, ..., -0.01, 0, 0.01, ..., 0.98, 0.99, 1.0].
• Compute the vector p4 as the value of the Taylor polynomial p4(x) for all points x ∈ z.
• Compute the vector p8 as the value of p8(x) for all x ∈ z.
• Compute the vector p12 as the value of p12(x) for all x ∈ z.
• Compute the vector f by evaluating e^x at all x ∈ z.
• Compute the vector err4 as |e^x − p4(x)| for each x ∈ z.
• Compute the vector err8 as |e^x − p8(x)| for each x ∈ z.
• Compute the vector err12 as |e^x − p12(x)| for each x ∈ z.
• Save the vectors z, p4, p8, p12, f, err4, err8 and err12 to unique text files named z.txt,
  p4.txt, p8.txt, p12.txt, f.txt, err4.txt, err8.txt and err12.txt, respectively.
Write a Jupyter notebook named proj1_a.ipynb that will perform the following tasks:
• Load your vectors from the files written in C++ above.
• Plot e^x, p4(x), p8(x) and p12(x) in the same figure window with different colors. Add a
  legend, axis labels and title to the plot.
• Plot |e^x − p4(x)|, |e^x − p8(x)| and |e^x − p12(x)| in another figure window using the matplotlib
  command semilogy, again with a legend, axis labels and a title.
Based on your results, comment on which of p4(1), p8(1) and p12(1) is the best approximation
of e. Manually derive an upper bound on the error for −1 ≤ x ≤ 1 for each of these
approximations, e.g. determine E4 satisfying |e^x − p4(x)| ≤ E4 for all −1 ≤ x ≤ 1. Are your
plots consistent with your derivations?
Explain your results.
2. Errors in a Forward Finite Difference Approximation:
Read the following mathematical derivation. Make sure you understand it.
For a sufficiently smooth function f(x), the forward difference estimate of f'(a) is defined as

    δ+f(a) = (f(a + h) − f(a)) / h,                          (1)

where h is called the increment. From Taylor's theorem, we know that

    f(a + h) = f(a) + f'(a) h + (f''(θ)/2) h^2,              (2)

where θ is between a and a + h. So for very small values of h, θ ≈ a, and hence

    δ+f(a) ≈ f'(a) + (f''(a)/2) h.                           (3)

As the increment h → 0, the value of δ+f(a) → f'(a), a familiar fact from Calculus I.
For the moment, suppose that when computing δ+f(a), the only errors that occur are caused
by rounding the exact values of f(a) and f(a + h) to their double precision (DP) floating-point
versions fl_DP(f(a)) and fl_DP(f(a + h)). Since these result in small errors, we may write

    fl_DP(f(a + h)) = f(a + h) − µ1 f(a + h),    fl_DP(f(a)) = f(a) + µ2 f(a),    (4)

where µ1 and µ2 are relative errors in the rounding. It can be shown that these errors are
bounded by

    |µ1| ≤ ε_DP / 2,    |µ2| ≤ ε_DP / 2,                     (5)

where ε_DP = 2^−52 is the DP floating-point roundoff (approximately 10^−16).

However, there are actually two sources of error in approximating f'(a). In addition to the DP
floating-point approximations of numbers (i.e. (4)), there is also an error in approximating the
derivative, f'(a) ≈ δ+f(a), as shown in equation (3).
Therefore, we wish to estimate the overall error. First let δ+_DP f(a) denote the computed value
of the approximation δ+f(a). Combining (3) and (4), we have

    δ+_DP f(a) = (fl_DP(f(a + h)) − fl_DP(f(a))) / h
               = ((f(a + h) − µ1 f(a + h)) − (f(a) + µ2 f(a))) / h
               = δ+f(a) + (−µ1 f(a + h) − µ2 f(a)) / h
               ≈ f'(a) + (f''(a)/2) h − (µ1 f(a + h) + µ2 f(a)) / h.    (6)
With this estimate, we can now estimate the relative error r in approximating f'(a) by δ+_DP f(a)
as

    r = | (f'(a) − δ+_DP f(a)) / f'(a) |
      ≈ | −(h/2) (f''(a)/f'(a)) + (1/h) (µ1 f(a + h) + µ2 f(a)) / f'(a) |
      ≈ | −(h/2) (f''(a)/f'(a)) + (1/h) (µ1 f(a) + µ2 f(a)) / f'(a) |
      ≤ (h/2) |f''(a)/f'(a)| + (1/h) |f(a)| (|µ1| + |µ2|) / |f'(a)|
      ≤ (h/2) |f''(a)/f'(a)| + (1/h) |f(a)| ε_DP / |f'(a)|,

where the second approximation uses f(a + h) ≈ f(a) for small h, and the final inequality uses
the bounds (5). If we define c1 = |f''(a) / (2 f'(a))| and c2 = |f(a)| ε_DP / |f'(a)|, then we see
that we can estimate an upper bound on the relative error r as

    R = c1 h + c2 / h.                                       (7)
Now you will write C++ and Python codes to test the above mathematical derivation.
Write a C++ main() routine in the file proj1_b.cpp to compute the forward difference estimates
δ+f(3) (see equation (1)) of the derivative f'(3) for the function f(x) = x^−3 with the sequence of
increments h = 2^−n (n = 1, 2, 3, ..., 52). For each of these increment values, compute both the
relative error r in your approximation, and the upper bound on the relative error, R, from the
derivation in (7). Save the arrays n, h, r and R to disk in the files n.txt, h.txt, r.txt and
R.txt, respectively.
In a Jupyter notebook named proj1_b.ipynb, perform the following tasks:
• load the arrays n.txt, h.txt, r.txt and R.txt,
• create a semilogy plot that overlays r versus n with a solid blue line, and R versus n with
  a red dashed line,
• create a different loglog plot that overlays r versus h with a solid blue line, and R versus
  h with a red dashed line,
• for both plots, add a legend, axis labels and title.

Qualitatively describe how r and R behave as the increment h goes from its largest value to
its smallest value (equivalently, as n goes from its smallest value to its largest value). Which
value of n (and correspondingly h) gives the best approximation to the derivative? Based on
your understanding of the mathematical derivation above, give an explanation for the behavior
which you have observed.
What to turn in:
Everything should be turned in on Canvas, in a single \.zip" or \.tgz" file containing all of the
required items listed below.
Turn in all of the requested C++ functions and Jupyter notebooks as separate \.hpp", \.cpp",
and \.ipynb" files. Include a Makefile so that all of your executables can all be built from within
the same directory with only the \make" command, and so that \make clean" will remove all
of the \.txt" data files generated when running your project. Do not include these \.txt" data
files in either this zip file or in your report.
You will also discuss your project in a technical report. I strongly suggest that you write your
report using a Jupyter notebook. In this report you should:
• Use complete sentences and paragraphs.
• Briefly explain the problems that were solved, and the mathematical approaches used on
them.
• Concisely describe your codes, including a discussion on any unique decisions that you
made.
• Discuss all of your computed results. In this portion, you should include your plots.
• Answer all questions posed in this project.
• In your own words, explain why you found the results that you did, justifying them mathematically if possible.
• Include your Makefile, C++ and Python code as attachments inside the report file (or
  inline), for grading.
This report should be in ".pdf" format.