AN ALGORITHM FOR COMPUTING THE RIEMANN ZETA FUNCTION BASED ON AN ANALYSIS OF BACKLUND'S REMAINDER ESTIMATE

By Petra Margarete Menz
B.Sc., The University of Toronto, 1992

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES, MATHEMATICS

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
September 1994
© Petra Margarete Menz, 1994

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Mathematics
The University of British Columbia
Vancouver, Canada

Abstract

The Riemann zeta function, ζ(s) with complex argument s, is a widely used special function in mathematics. This thesis is motivated by the need for a cost-reducing algorithm for the computation of ζ(s) using its Euler-Maclaurin series. The difficulty lies in finding small upper bounds, call them n and k, for the two sums in the Euler-Maclaurin series of ζ(s) which will compute ζ(s) to within any given accuracy ε for any complex argument s, and provide optimal computational cost in the use of the Euler-Maclaurin series. This work is based on Backlund's remainder estimate for the Euler-Maclaurin remainder, since it provides a close enough relationship between n, k, s, and ε. We assume that the cost of computing the Bernoulli numbers, which appear in the series, is fixed, and briefly discuss how this may influence high-precision calculation. Based on our study of the behavior of Backlund's remainder estimate, we define the 'best' pair (n, k), and present a reliable method of computing the best pair. Furthermore, based on a computational analysis, we conjecture that there is a relationship between n and k which does not depend on s. We present two algorithms, one based on our method and the other on the conjecture, and compare their costs of finding n and k, as well as of computing the Euler-Maclaurin series, with an algorithm presented by Cohen and Olivier. We conclude that our algorithm reduces the cost of computing ζ(s) drastically, and that good numerical techniques need to be applied to our method and conjecture for finding n and k in order to keep this computational cost low as well.
Table of Contents

Abstract
List of Figures
Acknowledgement
1  Introduction
2  Euler-Maclaurin Series of ζ(s)
   2.1  Statement and Proof of the E-M Summation Formula
   2.2  E-M Summation Formula Applied to ζ(s)
   2.3  Growth Ratio of the E-M Series for ζ(s)
   2.4  Backlund's Remainder Estimate
   2.5  The E-M Series of ζ(s) in Practice
3  Analysis of Backlund's Remainder Estimate
   3.1  Defining the Best Pair (n, k)
   3.2  Analyzing the Best Pair (n, k)
        3.2.1  Case I: s is Real, α = 1
        3.2.2  Case II: s is Complex, α = 1
        3.2.3  Method for Finding the Best Pair (n, k) for α > 0
        3.2.4  Conjecture 1
   3.3  The Smallest Pair (n, k)
4  Algorithm and Conclusion
   4.1  Presentation and Discussion of Algorithm for Computing ζ(s)
   4.2  Conclusion and Future Work
Bibliography
Appendices
A  Validation of Backlund's Remainder Estimate
B  Bernoulli Polynomials
C  Gamma Function
D  Program

List of Figures

3.1  The graphs of −ln applied to Backlund's remainder estimate as a function of n (= f(n)) for three different values of s and k, where D denotes the number of accurate digits to the base e. The steepest curve has s = 1/2 + 10i, k = 94, the middle curve has s = 3, k = 125, and the curve with the most gradual slope has s = 4 + 6i, k = 28.

3.2  The graphs of −ln applied to Backlund's remainder estimate as a function of k (= g(k)) for three different values of s and n, where D denotes the number of accurate digits to the base e. The curve with the highest maximum has s = 1/2 + 10i, n = 40, the curve with the lowest maximum has s = 4 + 6i, n = 10, and the intermediate curve has s = 3, n = 37.

3.3  H_r(k) (= n + k) for s = 3 and ε_1 = 10^(-50), ε_2 = 10^(-100), and ε_3 = 10^(-200). The respective minima are at (27, 53.8291), (55, 107.272), and (111, 214.346).

3.4  H_r(k) (= n + k) for s = 20 and ε_1 = 10^(-50), ε_2 = 10^(-150), and ε_3 = 10^(-250). The respective minima are at (17, 41.8848), (73, 148.856), and (128, 255.962).

3.5  H_r(k) (= n + k) for s = 50 and ε_1 = 10^(-100), ε_2 = 10^(-200), and ε_3 = 10^(-300). The respective minima are at (18, 57.0659), (73, 148.1641), and (129, 271.227).

3.6  Plots of best n together with best k for s = 3, 20, and 50, respectively, where ε = e^(-D) with 1 ≤ D ≤ 700.

3.7  The graphs of f(ε) and 2g(ε) for s = 3, 20, and 50, respectively, where ε = e^(-D) with 1 ≤ D ≤ 700.

3.8  The graphs of the ratio f(ε)/(2g(ε)) for s = 3, 20, and 50, respectively, where ε = e^(-D) with 1 ≤ D ≤ 700.

3.9  H_c(k) (= n + k) for s = 1/2 + 10i and ε_1 = 10^(-50), ε_2 = 10^(-150), and ε_3 = 10^(-250). The respective minima are at (31, 59.7432), (87, 166.976), and (142, 274.14).

3.10  H_c(k) (= n + k) for s = 5 + 9i and ε_1 = 10^(-50), ε_2 = 10^(-100), and ε_3 = 10^(-200). The respective minima are at (28, 55.997), (56, 109.609), and (111, 216.763).

3.11  Plots of best n together with best k for s = 1/2 + 10i and s = 5 + 9i, respectively, where ε = e^(-D) with 1 ≤ D ≤ 700.

3.12  The graphs of f(ε) and 2g(ε) for s = 1/2 + 10i and s = 5 + 9i, respectively, where ε = e^(-D) with 1 ≤ D ≤ 700.

3.13  The graphs of the ratio f(ε)/(2g(ε)) for s = 1/2 + 10i and s = 5 + 9i, respectively, where ε = e^(-D) with 1 ≤ D ≤ 700.

3.14  Scatter plot of the ratio best n / best k as a function of α. At α = 1, …, 6 the ratio has been computed for the following s and ε values: s = 3, 20, 50, 1/2 + 2i, 1/2 + 10i, 1/2 + 40i, 2 + 3i, 5 + 9i, and 10 + 15i; and ε = e^(-D) with D = 230, 350, 460, 575, and 700. The smooth curve is the graph of t(α).

3.15  Plots of best n together with t(α) · best k for (s, α) = (20, 2), (1/2 + 2i, 3), (3, 4), (5 + 9i, 5), (2 + 3i, 6), and (1/2 + 10i, 7), where ε = e^(-D) with 1 ≤ D ≤ 500.

3.16  The graphs of the ratio (best n + best k)/((t(α)α + 1)k) for (s, α) = (20, 2), (1/2 + 2i, 3), (3, 4), (5 + 9i, 5), (2 + 3i, 6), and (1/2 + 10i, 7), where ε = e^(-D) with 1 ≤ D ≤ 500.

3.17  Scatter plot of the ratio smallest k / best k as a function of α. At α = 1, …, 6 the ratio has been computed for the following s and ε values: s = 3, 20, 50, 1/2 + 2i, 1/2 + 10i, 1/2 + 40i, 2 + 3i, 5 + 9i, and 10 + 15i; and ε = e^(-D) with D = 230, 350, 460, 575, and 700. The smooth curve is the graph of v(α).

A.18  The ratio of the absolute error to Backlund's remainder estimate for k = 40 and 1 ≤ n ≤ 100. The first plot corresponds to s = 3, 20, and 50; the second plot to s = 1/2 + 2i, 1/2 + 10i, and 1/2 + 40i; and the third plot to s = 2 + 3i, 5 + 9i, and 10 + 15i. Note that for the complex values the ratios are very close to each other, so that their graphs overlap.

A.19  The ratio of the absolute error to Backlund's remainder estimate for n = k with 1 ≤ k ≤ 100, and s = 3, 1/2 + 2i, and 2 + 3i, respectively.

Acknowledgement

I want to thank Dr. Bill Casselman, my thesis supervisor, for his acceptance of my interest in the presented problem, and his support and guidance throughout my work on this thesis when it seemed impossible to complete in such a short amount of time. I would like to thank Dr. Richard Froese for his willingness to read this thesis. Most of all, I want to thank John Michael Stockie for his belief in my abilities and his readiness to listen. He has been an endless source of encouragement. I would also like to thank Dr. Ragnar Buchweitz, who put me on the right track, and Kirby Kim, who has been there for me from the very beginning and without whom I would not be where I am today. Finally, I would like to dedicate this thesis to my parents, Gerlinde and Peter Menz, and my sister, Manuela Schmidt, whose love from so far away pulled me through difficult times.

Chapter 1

Introduction

One of the most important special functions in mathematics is the Riemann zeta function, ζ(s), which takes complex argument s. Because of its widespread importance, it is essential to have an accurate and efficient technique for computing the zeta function. One method of approximating ζ(s) is via the Euler-Maclaurin (E-M) summation formula. Given s ∈ C and absolute error tolerance ε > 0, we will present an algorithm for the computation of ζ(s) to within ε, which is based on the E-M series of ζ(s) and a simplified version of Backlund's remainder estimate (BRE) for the E-M remainder.

Our objective is to choose the number of terms in the E-M series which minimizes computational cost. We take into consideration that the terms of the two sums in the E-M series have different computational cost. This is due in part to the appearance of the Bernoulli numbers in one of the sums. Throughout this paper, we assume that there is a fixed cost for obtaining the Bernoulli numbers, since they can always be tabulated. However, a Bernoulli number table takes an excessive amount of storage space for high-precision calculation. Therefore, we employ an estimate of the Bernoulli numbers. Then, by a recursive calculation of the sum involving the Bernoulli numbers, we can assume fixed computational cost for the terms of that sum.
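To illustrate the point about fixed per-term cost, the following Python sketch tabulates exact Bernoulli numbers with the standard recurrence and accumulates the correction sum Σ_{j=1}^{k} T_j(n, s) while carrying the rising factorial s(s+1)⋯(s+2j−2) and the power n^(s+2j−1) along from one term to the next, so that each term after the first costs only a handful of multiplications. The thesis's own programs (Appendix D) are in Mathematica; the function names, the use of exact fractions for the table, and the float arithmetic here are illustrative choices, not the author's code.

```python
from fractions import Fraction
from math import comb, factorial

def bernoulli_table(m):
    """Exact Bernoulli numbers B_0, ..., B_m via the recurrence
    sum_{j=0}^{k} C(k+1, j) * B_j = 0 for k >= 1 (a stand-in for the
    tabulation assumed in the text)."""
    B = [Fraction(0)] * (m + 1)
    B[0] = Fraction(1)
    for k in range(1, m + 1):
        B[k] = -sum(comb(k + 1, j) * B[j] for j in range(k)) / (k + 1)
    return B

def em_correction_sum(s, n, k, B):
    """Sum_{j=1}^{k} T_j(n, s), where
    T_j(n, s) = B_{2j}/(2j)! * s(s+1)...(s+2j-2) / n^(s+2j-1).
    The rising factorial and the power of n are updated incrementally,
    so each extra term has a fixed multiplicative cost."""
    total = 0.0
    rising = s                # s(s+1)...(s+2j-2); equals s for j = 1
    n_power = n ** (s + 1)    # n^(s+2j-1); equals n^(s+1) for j = 1
    for j in range(1, k + 1):
        total += float(B[2 * j]) / factorial(2 * j) * rising / n_power
        rising *= (s + 2 * j - 1) * (s + 2 * j)   # extend to s(s+1)...(s+2j)
        n_power *= n * n                          # advance to n^(s+2j+1)
    return total

# For s = 3, n = 5, k = 2 (the choice made later in Section 2.5) the two
# terms are about 0.0004 and -0.0000053.
print(em_correction_sum(3, 5, 2, bernoulli_table(4)))
```

In a high-precision setting the float arithmetic would be replaced by a multiple-precision type, but the bookkeeping stays the same.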
Besides s and ε, BRE involves two unknowns, which we will call n and k, and which determine the number of terms used in the E-M series of ζ(s). So our goal becomes finding that pair (n, k) which gives the least number of terms needed, i.e. which will minimize the computational cost. Based on a thorough computational analysis of BRE, we conjecture that there is a relationship between the optimal n and k which does not depend on s and ε, but only on the cost of computing terms from the E-M series of ζ(s). This relationship allows us to eliminate n, giving us a simplified version of BRE. Thereby, given s and ε, we make it possible to solve BRE for the single unknown k, and to compute the E-M series of ζ(s) using the least number of terms.

The following is an outline of how the material of this thesis is presented. Chapter 2 covers in detail the proof of the E-M summation formula and an estimate of its remainder; introduces the Riemann zeta function, ζ(s), and some of its properties required for the algorithm; shows the application of the E-M summation formula to ζ(s); derives the growth ratio of consecutive terms in the E-M series of ζ(s); and gives a proof of BRE. At the end of Chapter 2, we also work through practical examples demonstrating the behaviour of the E-M series of ζ(s) and BRE for the real and complex cases. Thereby, we point out the difficulties involved in choosing optimal values for n and k. For those familiar with the E-M series of ζ(s) and BRE, we suggest that Chapter 2 be skipped.

Chapter 3 is an in-depth discussion of the behaviour of BRE. After a thorough investigation of the optimal n and k, we will present a method which allows us to compute the optimal n and k, and give the aforementioned conjecture. Lastly, we will use the growth ratio, together with BRE, to approximate the size of the smallest term in the E-M series for any given s. We conclude Chapter 3 by considering the existence of a relationship between the n and k from the smallest term and the optimal n and k.

In Chapter 4 we present two algorithms for the computation of ζ(s): one which utilizes our method of computing the optimal n and k, and one which is based on our conjecture. Furthermore, we will compare these two algorithms, as well as Cohen and Olivier's recent algorithm [2], and discuss their differences in distributing computational cost.

The Appendices comprise brief discussions of a validation of BRE, the Bernoulli polynomials, and the Gamma function, as well as the Mathematica [6] programs which facilitate the above analyses and generate the graphs displayed throughout this work.

Chapter 2

Euler-Maclaurin Series of ζ(s)

This chapter provides a detailed treatment of the E-M summation formula, its remainder, and its application to the Riemann zeta function. Furthermore, BRE is introduced and its practical use is demonstrated for real and complex cases. Through these examples, it will become evident that it is no easy task to solve for the two unknowns in BRE given s ∈ C and ε > 0, and that we need a reliable method for choosing values for the two unknowns. The E-M summation formula, its proof, and its application to ζ(s) all involve the Bernoulli polynomials and the Gamma function, as well as some of their properties and estimates. The reader can find all relevant information regarding the Bernoulli polynomials and the Gamma function in Appendices B and C, respectively.

2.1  Statement and Proof of the E-M Summation Formula

Proposition 1  Let f be a function defined on the interval [a, b], where a < b and a, b ∈ Z. Suppose that f has continuous derivatives up to order 2k or 2k + 1. Then

$$f(a) + f(a+1) + \cdots + f(b-1) = \int_a^b f(x)\,dx - \frac{1}{2}\bigl(f(b) - f(a)\bigr) + \sum_{j=1}^{k} \frac{\beta_{2j}}{(2j)!}\bigl(f^{(2j-1)}(b) - f^{(2j-1)}(a)\bigr) + R_k,$$
2.1  Statement and Proof of the E-M Summation Formula  Proposition 1 Letf be afunction defined on the interval [a, b], where a < band a, bE Z.  Suppose that  f  has continuous derivatives up to order 2k or 2k + 1. Then  f(a)+f(a+1)+...+f(b-1)  j  =  b  1 f(x)dx -(f(b)-f(a))  + +Rk,  4  (2j)!  (b) 21 (f  -  f(2i1)(a))  Chapter 2. Euler-Maclaurin Series of C(s)  5  where ifa f(2k)(x) L’2k(x)  =  (2.1)  or (2k+1)!  depending on how smooth Proof: Let  f  k+l)(x) ‘2k+1(x) dx, 2 J f(  is, and /-‘m(x) is as defined by equation (B.2).  be a function that satisfies the conditions of Proposition 1 on the interval  [a, b], where a, b Let r  f  dx,  Z and a < b.  Z and consider the integral f’ f(x) ‘L’(x) dx over the unit interval [r, r+1].  Using integration by parts and the periodicity 1 of  f  =  r+1  f(x)(x)dx  f(x)i(x)I1_j  = f(r +1)  r+1  urn b (R) 1  -  R—*r+1 -  1 f(r)(-) -  1 (f(r) + f(r + 1)) m(r) =  j  Bm(r)  ,r+1  f(r) lirn b (R) 1  f  r+1 -  Using the fact that  f’(x)i(x)dx  -  1 = f(r + 1) =  we get:  R*r+  -  r1 f’(x)i(x)  J  f’(x)bi(x)  r  dx  dx  f’(x)i(x) dx.  Bm(O) =  /3m  for any integer r and rn> 1 as  well as property (B.23) of 7 bm(x) for m > 0, we can continue with integration by parts successively and obtain: r+1 j  f(x)b(x)dx  =  1 (f(r) + f(r + 1))  =  1 (f(r) + f(r + 1))  =  1 (f(r) + f(r + 1))  r+1 -  j  -  -  f’(x)i(x) dx  r+1 1 1 f’(x)2(x) +  (f’(r +1)  -  f  r+1  f’(r)) +  1  f”(x)2(x) dx  f  +i (x)dx 2 f”(x)  Chapter 2. Euler-Maclauriri Series of C(s)  =  (f(r) + f(r + 1)) + ()k jr+l  (f(r) +  =  since  m = 13  0 for  rn  the interval [r, r +  f(r  +  6  k  f(k)(x)(x)  + 1))  -  f(i’)(r))  dx  (2j)!  -  (f’)(r +1)  (f(2i1)(r  +1)  -  f(2i1)(r)) -  odd, and Rk is Rk as given in Proposition 1 but restricted to  1]. Next, subdivide the interval [a, b] into its unit subintervals.  Again, using the periodicity 1 of  we can sum the above equation over the unit  subintervals of [a, b] and get:  j  f(x)(x)  dx  =  f(a) + f(a +1) + -  + f(b -1) + f(b)  k -  f(2i1)(a))  -  Rk.  .  Now, keeping in mind that by definition of ‘ (x) we have &x) 1  =  1 for all x, we  can rewrite this equation and Proposition 1 follows.  D  We need to have an estimate for the remainder Rk of Proposition 1. Applying the absolute value to equation (2.1) and using the result of equation (B.22) we get  !  IRk!  =  Iba  k)(x) 2k(x) 2 f(  dx  or (2k+1)!  !  Iba  lb  k+l)(x) 2k+1(x) 2 f(  k)( 2 f(  )  dx  I2k(x)I dx,  or (2k+1)!  ja  f(2k+1)(x)  2k+1(x)l  dx,  Chapter 2. Euler-Maclaurin Series of C(s)  7  2(2k)! ‘çoo cos2irlx j2k L.4=1  a j(2k)( ‘ 1 1 x (2k)! Jb J  )2k 2 (  ,j  (iX  or 1 j(2k+1)( a 1 iXI (2k+1)! Jb J  2(2k+1)! ‘coo sin2wlx )2k+1 L.41 2 ( 12k+1  ‘  2c(2k) ja f(2k)(x)  dX  dx,  or fba f(2k+1)(x)  depending on how smooth  f is.  dx,  This is the standard way of estimating the E-M remain  der. As we will see in Section 2.4 we need to take a similar but modified approach in estimating the E-M remainder of the zeta function. 2.2  E-M Summation Formula Applied to C(s)  Next, we introduce Riemann’s definition of the zeta function and some of its properties necessary for a computational algorithm, but our main concern is to derive the E-M series of C(s). Definition 1 The Riemann Zeta function C(s) is defined by  C(s) for s in the setS  :=  {s  E C  (s)  >  (2.2)  -—,  =  1}.  Proposition 2 C(s) is analytic on S. Proof: Let fr(s)  such that  =  (s) =  for r E I/V. Let  s = o  + it E S. Let M be any positive number  u> M> 1. Then =  e  e =  Chapter 2. 
Euler-Maclaurin Series of C(s) and  converges since M  >  8  1. Hence, by the Weierstrass Test  converges uniformly on {s E C I R(s) for all s E C, it follows that  C(s)  fr(5)  M}. Since further, {fr(5)}i are analytic  is analytic in S.  0  Proposition 3 (Functional Equation) For all s E C\{O, 1}  C(s)  ()  = 2(2K)’ sin  F(1  s)C(1  -  -  s).  (2.3)  Proof: By a change of variables in the definition of F(s) (see Appendix C) we obtain F(s)  =  kS  I  dt t  tse_kt_  Jo s 1 F \21k8 = Jo  I  —  te  -  2 d 1 t  Summing over k, we get  ()  rF  C(s)  j  =  Now, let ‘0(1) = that ‘0(1) =  (  j  =  dl tte 2 dl e) ti—.  (2.4)  By the Poisson summation formula [3,  ‘0(4’).  p.209]  Then ‘/(t) =  Let /(t) =  it follows  and together  with the functional equation for ‘0(1) this implies that b(-) = /i’?/’(t) +  4j_i.  equation (2.4) becomes  irF  () C(s)  =  = = = = =  fb(t)t  f f j  f f  1  00 dt 5 dt 3 b(t)t_+f b(t)t—  1  dt 5  r--  81 d1 b(t)r-s  b(l) (t +  + +  r) i—s  j  dt 8 b(t)t-  f  1°° dl  (t -i)  f 1  +  0  d1 8 r--  (r 1—s  + s  -  (t)(t+t)T_(-+1).  f  dl  d1 3 b(t)t-  So,  Chapter 2. Euler-Maclaurin Series of C(s)  9  The right hand side of the above equation converges for all s symmetrical with respect to s and 1 F  ()  c(s)  —  C\{O, 1}, and it is  s. It follows that (1  =  5)  2  =  =  e  (1  -  s)  (1s) (2)’2 sin  ()  F(1  —  s)C(1  —  s)  by equation (C.26) in Appendix C.  D  Note: Throughout the rest of this paper, we write s  =  a + i t for a, t E JR, unless  otherwise stated. Definition 2 Let n  e  liv. define Ca(s) to be the partial sum of C(s),  Then we can write C(s) as C(s)  =  lim÷C(s). The following proposition uses the  notation of Hutchinson [4] in a slightly modified form for easier cross references with the functions in the programs of Appendix D. Proposition 4 Let n, k E IN. Then for s  C(s)  =  Cm(s)  +  1  +  S  S the Riemann Zeta function satisfies k  1  + T(n,s) +R(n,k,s),  (2.5)  j=1  where Tn ‘  —  —  2j 3 1  (2j)!  s(s + 1)...(s + 2j  1 (2’  —  2)  s+2j1  s+2i_1  2j—2  fi (s + m)  mO  (2.6)  Chapter 2. Euler-Maclaurin Series of c(s)  10  and the remainder term is k 8j  1 R  s(s + 1)...(s +2k (2k)!  —  1)  —  V with n> 1. Let s  f°°  in  s(s+1)...(s+2k) [00 (2k + 1)! in  —  Proof: Let n  —  k(x) 2 b k 2 xs+  d  ?/2k+1(X)  27  1 k 2 xs+ +  C\{1}. Define f(x)  =  -4.  Then  f  is a function  on the interval [0, 1] with continuous derivatives: f(d)(x) =  (_l)d (s(s +1) ...(s + d  Hence, we can apply Proposition 1 to  f  -  1)).  on [0, 1] with derivative order 1 and get  f(1)+f(2)+.+f(n_1)=J f(x)dx 1  1 -(f(n)-f(1)) 2 + jf’(x)i(x)dx. -  This implies (ni(x) 1 11 1 —dx ——-—+——s i dx 2ns 2 ii XS Ji XS n [ni(x) — 1 —1 11 X + j 5 s 2 X3+l (s—1)x’ 1 1 1 1 11 j bi(x) d x. + S—l s 1 2 s 1 8 2n ii Xs+l =  n ,  —  —  —  —  —  In other words, dx Now,  let n  —*  =  oc in equation  C(s)  =  =  =  1  1  1  s—i  1  1 1  —  (2.8), so that  + _ 1 sj1dX  1 1 —+ 2 s— 1  [00  —5I  1  in 1  &j(x)  x 11  n  dx—si  Cn(5)+ i 1 +8j  ii  bi(x) XS  dx  xs+id  (2 8) .  Chapter 2. Euler-Maclaurin Series of C(s)  11  Continuing with integration by parts just as was done in detail in the proof of Proposition 1, we get  C(s)  =  C(s)+  +  1  1  11 2ns  S_1fl8_l  /32  +  +  s(s+1)...(s+2n—2)  /32n  (2n)!  s+ n 2 _l  2 ( x) (2n)! 1  =  +  /32  2t  11  1  +  dx  +  /32n  s(s+1)...(s+2n—2)  (2)’  s+ n 2 _l  s(s+1)...(s+2n—1) x (2n)!  (urn  =  2 + ’(x) (2n + 1)x+ 2  Cm(s)+  +  1 1 1 s—1n  b  (2n)!  s(s + 1) (s + 2n) (2n + 1)! .  which proves Proposition 4.  .  
In  2 n +i(X)  dx)  11 2ns /32n  +  s+2n 2n + 1  +  s(s+1)...(s+2n—2)  f in  (x) 21 Xs+ n 2 +l  dx D  Chapter 2. Euler-Maclaurin Series of C(s)  12  Remark 1: Actually, equation (2.5) makes sense for all s  C with J(s) > —2k, because  the integral for R(n, k, s) converges throughout the half-plane J(s + 2k + 1) > 1. Hence,  C(s)  can be extended by analytic continuation to other values of s. Fur  thermore, equation (2.5) also shows that C(s) can be extended to a meromorphic function of s, since C(s) has as its only singularity a simple pole at s = 1. Remark 2: Now, remember that the functional equation for C(s) is not valid for s equal  to 0 or 1. The E-M series of C(s) (2.5) gives C(0) =  —  and C(1) =  co.  Note: Throughout the rest of this paper, the letters n and k are solely reserved to  represent the bounds for the two series respectively in the E-M summation formula for C(s) as given in equation (2.5). 2.3  Growth Ratio of the E-M Series for  C(s)  The terms in the E-M series of C(s) grow large in a graded way. This implies that if we ask, when do the terms of that series get large, it is not too important if we are off by a few terms. For the estimate of the growth ratio of the E-M series of following three limits:  /k+1 limi k-ooV k  =  1,  =  2 e, and  k 2 (k+l  Limi  k—*oo  lim  k  j  2 (k—i-i)  k—co(2k+i)(2k+2)  —  —  1 22  C(s)  we need the  Chapter 2. Euler-Maclaurin Series of c(s)  13  Now, the ratio of two successive terms can be estimated for large k: Tk+1(n, s) Tk(n, s)  —  =  flL(s + m) 2)!n’ k(2k + 21 2 13 + rn) 2k+211(+2k1)(+2k) J1 27r(k +  l)()212  (k)2k  /k+1 V k  1 (7re)2  1 1 k 2 2 (2k+1)(2k+2 ) )n2(8+  k 2 (k+l  \ k  )  2 (k+1) 2 (2k+1)(2k+2 )n  2 (s + 2k 2rn  )  This implies that the terms in the E-M series (2.5) will start to grow when  I + 2k1 2.4  2Trn.  (2.9)  Backlund’s Remainder Estimate  A more practical estimate of the E-M remainder of (s) than the one previously derived in Section 2.1 is due to Backlund [1]. It states that the magnitude of the remainder in the E-M series (2.5) is at most IR(n,k-1,s)I  times the magnitude of the first term omitted: Sk1ITk(nS)  (2.10)  This shows that the error may get worse as the imaginary part of .s gets larger. The practical effect of this is that in a given band J(s)  b, where b E ]R, we need roughly  E’(s) many terms to calculate ((s). When s is real, the remainder is less than or equal to the first term omitted, i.e. the actual value of C(s) always lies between two consecutive partial sums of the E-M series (2.5).  Chapter 2. Euler-Maclaurin Series of c(s)  14  Now, Backlund’s estimate of the E-M remainder of  c(s)  is derived as follows:  IR(n,k— i,s)! b ( 1 _ ) s(s+i)...(s+2k—2) roo 2 dx (2k 1)! in x 1 — 2 —  s(s+i)...(s+2k—2) (2k 1)! —  s(s+i)...(s+2k—i) (2k)!  <  r  + 1) ...(s+2k—i) (2k)!  s(s+i)...(s+2k—i) (2k)! s(s+i)...(s+2k—1) (2k)!  —  Is+2k—iI  Tk(ri,s){.  [00  fll_s_2k  2k 3 /  n  [00  -  2 2kx9+ 00  s + 2k—i  s(s+1)...(s+2k—1) (2k)! I  (2k)!  —  // 2k 3  Jn  s(s+i)...(s+2k—1)  <  (lim  s(s+i)...(s+2k—1) (2k)!  s(s  (  b—*oo  k(x) 2 b  /32k  -  2k(x) xs+  Jn  k(x) 2 b  —  (/32k  100  2kIin 3 %!  [00  /32k!]  /32k!  dx x+2c  —  /2k(X))  a+2k 2k+1  dx x+2k  1 (u  + 2k  —  k(x) 2 b  dx  I dx)  dx  in  I  in  dx  k(x)! 2 b  (_i)1c+1  —s—2k+1 2k  i)n+2k_1  dx  ‘°°  j  (_i)1 b2k+l(x) k+1 2 x+  dx  )  Chapter 2. Euler-Maclaurin Series of C(s)  15  For a validation of BRE see Appendix A, where a brief computational discussion is given on BRE compared with the absolute error. 
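As a quick numerical sanity check of the estimate just derived (and in the spirit of the validation in Appendix A), the Python sketch below compares the actual truncation error of the series (2.5) against the bound |(s + 2k − 1)/(σ + 2k − 1)| · |T_k(n, s)|, using the mpmath library's built-in zeta as the reference value. The helper names and the choice of test point are illustrative, not the thesis's own code.

```python
import mpmath as mp
mp.mp.dps = 50   # working precision well beyond the target accuracy

def em_value(s, n, k):
    """Right-hand side of (2.5) with the remainder R(n, k, s) dropped."""
    s = mp.mpc(s)
    val = sum(mp.power(j, -s) for j in range(1, n))            # zeta_n(s)
    val += mp.power(n, -s) / 2 + mp.power(n, 1 - s) / (s - 1)
    for j in range(1, k + 1):
        rising = mp.mpf(1)
        for m in range(2 * j - 1):                             # s(s+1)...(s+2j-2)
            rising *= s + m
        val += mp.bernoulli(2 * j) / mp.factorial(2 * j) * rising \
               / mp.power(n, s + 2 * j - 1)
    return val

def backlund_bound(s, n, k):
    """Estimate (2.10): |R(n, k-1, s)| <= |(s+2k-1)/(sigma+2k-1)| * |T_k(n, s)|."""
    s = mp.mpc(s)
    rising = mp.mpf(1)
    for m in range(2 * k - 1):
        rising *= s + m
    T_k = mp.bernoulli(2 * k) / mp.factorial(2 * k) * rising / mp.power(n, s + 2 * k - 1)
    return abs((s + 2 * k - 1) / (mp.re(s) + 2 * k - 1)) * abs(T_k)

s, n, k = mp.mpc(0.5, 10), 10, 3
actual = abs(mp.zeta(s) - em_value(s, n, k))   # true error after k correction terms
bound = backlund_bound(s, n, k + 1)            # first omitted term is T_{k+1}
print(actual, bound)                           # the error should not exceed the bound
```

With these values (s = 1/2 + 10i, n = 10, k = 3) the bound should sit below 10^(-6), consistent with the six-digit accuracy of the example worked out in Section 2.5.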
2.5  C(s)  The E-M Series of  in Practice  We want to demonstrate how the E-M series of  C(s)  (2.5) works in practice. This will  show that choosing n and k requires careful consideration of the calculations involved in the series. We will also find out that there is more than one pair (n, k) that will yield a desired accuracy, and that we have to ask ourselves what the “best” pair (n, k) should be. As suggested by Edwards [3, p.115], we need to compute the first several numbers in the sequence fs(s + 1)  ...  (s + 2j  —  2) and to see how large n must be in order to  make the terms of the series decrease rapidly in size. First, consider the case when s is real. Then Let s  j  =  3, and say, we want to compute  IR(n, k  C(3)  —  1, s)  Tk(n, s) by BRE (2.10).  to within six digits of accuracy. For  1,2,3,4 in the above sequence we get: =  i3.4.5  =  —  =  3.4. 8!  .9  20  =  0.25  =  —0.08  =  0.083  =  —015  Table 2.1 shows many possible values for n and k which give of accuracy, e. g. n  =  3, k  =  4 or n  =  6, k  =  C(3)  2. However, we choose n  to within six digits =  5, k  =  2 since the  Chapter 2. Euler-Maclaurin Series of C(s)  n  k=  1  2  16  4  3  5  =  2  1.3 l0  3.3 10  1.5 l0  1.0 lO  1.0 1O  3  1.1 1O  1.3 i0  2.5 10_6  7.8 lO  3.4 10  4  2.0 l0  1.3 106  1.4 i0  2.5 108  6.1 1O  5  5.3 106  2.1 iO  1.5 108  1.7 i0  2.7 10’°  6  1.8 10—6  5.0 108  2.5 i0  1.9 10’°  2.1 10_il  Table 2.1: Values of BRE for s = 3 as k ranges from 1 to 5 and n ranges from 2 to 6. Oniy two digits of accuracy are given! terms of the series (2.5) decrease more rapidly in size than for the choice n and we do not compute superfluous terms as with the choice n The following is the evaluation of  j 1 5 —3  C(3)  using n  =  5, k  =  1.0000000  =  0.1250000  =  0.0370370  =  0.0156250  =  0.0080000  =  0.080 0000  5_3  6, k  =  3, k  =  4,  2.  2:  0.0040000  T ( 1 5,3)  =  0.0004000  T ( 2 5,3)  =  —0.0000053  C(3)  =  =  =  •.‘  1.2650673  Now, consider the case when s is not real. Let s  =  + lOi, and say, we want to  Chapter 2. Euler-Maclaurin Series of c(s)  compute  ( + ioi)  17  to within six digits of accuracy. Here, si is considerably larger  than in the previous case. This implies that we need a much larger value for n in order to achieve comparable accuracy. However, the terms in (a(s) are much more difficult to compute when s is complex. This is because  nS  =  n(cos(tlnn)  —  isin(tlnn)), so,  computation involves a root, a logarithm and two trigonometric functions. Therefore, our goal is to keep n still as small as possible, but larger than k, because Ca(s) will then converge much faster. Table 2.2 shows many possible values for n and k which give  n  k=  2  3  4  5  6  =  5  1.7 i0  1.6 i0’  2.2 i0  3.8 10—6  8.6 iO’  6  6.4 1O  4.1 iO  3.8 10_6  4.7 io  7.3 10  7  2.7 iO  1.3 10  8.8 iO  7.9 10  9.1 iO  8  1.3 i0  4.8 10—6  2.5 iO  1.7 1O  1.5 10  9  6.9 iO  2.0 10—6  8.1 10—8  4.4 iO  3.1 10_lU  10  3.8 i0  9.0 iO’  3.0 10—8  1.3 iO  7.4 10k’  11  2.3 iO  4.4 10  1.2 10-8  4.4 10-10  2.0 lO  12  1.4 iO  2.3 iO  5.3 iO  1.6 10’°  6.3 10_12  Table 2.2: Values of BRE for s = + lOi as k ranges from 2 to 6 and n ranges from 5 to 12. Only two digits of accuracy are given!  ( + lOi) to within six digits of accuracy, e. g. n  we choose  ii =  10, k  =  than for the choice n choice n  =  12, k  =  =  6, k  =  5 or n  =  12, k  =  3. However,  3 since the terms of the series (2.5) decrease more rapidly in size  =  6, Ic  =  5, and we do not compute superfluous terms as with the  3. 
The following is the evaluation of  C  ( + lOi) using n  10, Ic  =  3:  Chapter 2. Euler-Maclaurin Series of ((s)  Cio  ( + lOi)  1  =  18  1.37020921  0.38646841i  0.27924235  +  0.14756128i  =  -0.08076170  -1-  0.13593214i  -—1Oi 10  —  1 T  (io,  + lOi)  =  -0.02332837  2 T  (io,  + lOi)  =  -0.00045640  3 T  (io,  + lOi)  =  -0.00000996  C(+1oi)  —  -‘  1.54489512  —  —  + —  0.01232751i 0.00004223i 0.00000785i 0.11533689i  The above two examples show that in order to compute <is) to within a certain accuracy, we need some method that tells us how to choose n and k. This method has to be reliable and efficient, since it is too cumbersome to calculate a table of remainder values for each s, and decide by comparison what values of n and k to choose. Chapter 3 discusses the influence of n and k on the behavior of BRE, and explains how we can choose n and k in order to minimize the computations involved. This way we can develop an efficient algorithm for the calculation of  c(s)  based on the E-M series (2.5).  Chapter 3 Analysis of Backlund’s Remainder Estimate  The main objective of this chapter is to find those n and k such that  ij Tk(n, s)  and such that the computational cost is minimized by the least number of terms in the E-M series of C(s). We want to do so in an efficient and accurate manner. To this end, we require an analysis of both the behavior of BRE, and the computational cost, as well as a reliable method of finding the optimal n and k. 3.1  Defining the Best Pair (n, k)  Throughout this section, let s be fixed, and write  =  e_D > 0, where D  e  1R. First, we  analyze the behavior of BRE for fixed k and n, and then we discuss the computational cost involved with the E-M series of  C(s).  This will lead to a definition of that pair (n, k)  we require to solve our problem. Case I: Let k be fixed. Then Backlund’s estimate of R(n, k  C ( 1 s, k) C2(s,k), where C 1  C ( 1 s, k), and C 2  constants depending on k and s. within any accuracy 2 ln n C  —  e_D,  —  1, s) is of the form  C ( 2 s, k) are positive, non-zero  C(s)  to  D. Now, f(n)  :=  This means that if we want to find  we need n to satisfy C 2 in n  1 as a function of n, for inC  E  ]f?+\{o},  concave downward. Figure 3.1 shows the graphs of expected, the general shape of the graph of  f  —  in C 1  is monotone increasing and  f  for different s and k. As  is independent of the choices for  s and k, since a change in C 1 results in a vertical translation of the curve of  f,  Chapter 3. Analysis of Backlund’s Remainder Estimate  20  and a change in C 2 results in an expansion or a contraction of the curve of  f is invertible; so, given any D integer n such that f(n) D. Furthermore,  E  f.  we can always find a positive  D 400 300 200 100 n  Figure 3.1: The graphs of ln applied to Backlund’s remainder estimate as a function of n (= f(n)) for three different values of s and k, where D denotes the number of accurate digits to the base e. The steepest curve has s = + lOi, k = 94, the middle curve has s = 3, k = 125, and the curve with the most gradual slope has s = 4 + 6i, k = 28. —  The above discussion implies that for any fixed k, we can always find an n such that R(n, k  —  1, s)I <  .  However, we want to keep computations at a minimum,  and by randomly choosing k we have no influence on the number of terms used in the E-M series of C(s). In particular, we will not know, if the choice for k minimizes the number of terms used. Case II: Let n be fixed. 
Then every term in Backlund’s estimate of the remainder  R(n, k  —  1, s)j depends on k. In order to show how BRE behaves for fixed .s and  n, let us simplify the estimate by applying estimate (B.24), Stirling’s formula for factorials, and equation (C.25) to it. We can do this, because the asymptotic estimate  is sufficiently accurate for values of k as small as 3. We  obtain R(n,k  —  1,s)  .s + 2k 1 u + 2k—i —  Chapter 3. Analysis of Backlund’s Remainder Estimate  2 s+2k—1 (2K) k 2 a + 2k 1 —  — —  1 C  1 u+2k_i  s+2k—1 a + 2k 1  :=  C(s, n)  >  define g(k) : 2k in 2irn  F(s+2k—1) F(s)  F(s+2k—1)l (2n)2k  —  where C  21  0 is a constant depending only on fixed s and n. Let us —  (in  to find C(s) to within accuracy  + in e_D,  IF( +2k  —  1)1 + in c). So, if we want  we need k to satisfy g(k)  D. Is this possibie?  In order to answer this question, we need to analyze g(k), for k D  k  Figure 3.2: The graphs of in applied to Backlund’s remainder estimate as a function of k (= g(k)) for three different values of s and n, where D denotes the number of accurate digits to the base e. The curve with the highest maximum has s = + lOi, n = 40, the curve with the lowest maximum has s = 4 + 6i, n = 10, and the intermediate curve has 5 = 3, Ti = 37. —  We have that in  increases for increasing k.  g depends largely on the behavior of in function (see Appendix C),  + 2k  —  IF(s+2k—i) .  Therefore, the behavior of By definition of the Gamma  1) grows factorially in k, and (2irn) 21 grows  exponentially in k. So, there must exist a k 0 E 1R such that for k  I( + 2k  —  <(2irn) k , and for k> k 1) 2 0 we have  + 2k  —  1)1  <  0 we have k  > 2 rn) 7 (2 k . j  other words, g will increase to some best number D 0 of accurate digits to the base e and then decrease again, where D 0 depends on k . Finding k 0 0 just means looking for the smallest term in the E-M series of  C(s),  but this term can be found by the  Chapter 3. Analysis of Back]und’s Remainder Estimate  22  growth ratio (2.9). So, we are able to deduce from this growth ratio that k 0 is that number, which satisfies  0 I + 2k  2irn, for our fixed s and n. Figure 3.2 shows  the graphs of g for various values of s and n. As expected, the general shape of g is independent of the choices for s and n, since modifying s or n shifts the maximum of the graph of g according to the growth ratio. Now, we can answer our previous question if, given D E ]R+, we can find a k that will satisfy g(k)  D. For fixed s and n, given any error  k that satisfies the previous inequality if D  =  e_D,  we can only find a  0 is as defined above. In , where D 0 D  other words, for fixed n, we cannot necessarily find a k such that R(n, k—i, s) <e holds. The behavior of Backlund’s remainder estimate for fixed n and for fixed k shows that the best possible n and k, which satisfy R(n, k  —  i, s) <  ,  depend on k. This is because  for any k we can always find an n that satisfies the previous inequality, but not vice versa. There is one further discussion we need to have regarding n and k, before we will define what exactly is meant by “best” pair (n, k): We have to analyze how the choice of n and k influence the cost of computing  C(s)  using the E-M series (2.5).  Essentially, the E-M series of ((s) consists of two sums, namely (a(s) T ( 3 n, .s). Let S,, denote the series Ca(s), and Sk denote the sum  and  =  Tj(n, s). 
Then,  n gives the number of terms to be used from the series S,,, and k gives the number of terms to be used from the series  k• 3  Since our goal is to write an efficient algorithm for  the computation of C(s) using its E-M series, we need to analyze the work involved in computing Sn and Sk. This will determine how to choose the ideal integers n and k. There are two important issues to keep in mind. First of all, if n is too large, we are not making efficient use of the E-M series for  C(s),  because the behavior of  forces the  addition of large and small numbers in the sum S,. This causes problems in machine  Chapter 3. Analysis of Backlund’s Remainder Estimate  23  computation, as valuable digits will be lost. On the other hand, if k is too large, we need many relatively expensive calculations of the terms Tj(n, s) for 1  k. Lehmer [5]  suggests a method in which the real and imaginary parts of Tj(n, s) are computed re cursively. However, this uses a lot of storage space and is again a question of machine economics. It is therefore important to find that pair (n, k) which will make efficient use of the behavior of the E-M series of C(s), and will be economical for numerical computation  of C(s). In machine arithmetic, an addition (subtraction) has negligible cost in comparison to a multiplication (division). Other operations such as a logarithm (exponential) or a table look-up cost a fixed amount. Assume we have a table of Bernoulli numbers. Then there is no cost associated with computing /3,. What is the cost of calculating Sk, or rather, of  )(8+1+232)?  The cost of computing the first term,  is 5  multiplications. Every other term of Sk can be obtained from its previous term by an additional 6 multiplicative operations: (s(s+1O)...(s+2j-2)((s+2j-1)(s+2j)s(s+1)...(s+2j) (2j)!n2i (2j + 1)(2j + 2)n2 (2j + 2)!n2i+2  )  Hence,  Sk  )  —  costs roughly 6k multiplicative operations to compute. There are some ways to  speed up the computation (see Lehmer [5] and Hutchinson [4]) but, ultimately, the cost depends on k. Let us say, therefore, that the cost to compute Sk is pk multiplications, for some p  1. The computation of S, on the other hand, costs roughly qn multiplications,  since each term in the sum costs a fixed number q of multiplications, and there are n  —  1  terms in total. The result of the above discussion is that the total cost of computing  C(s)  using its  E-M series is roughly qn + pk, where q and p are as defined above. Therefore, we need to choose n and k such that qn +pk is minimized. Together with the result of the discussion about the behavior of Backlund’s remainder estimate for fixed n and for fixed k, we can  Chapter 3. Analysis of Backlund’s Remainder Estimate  24  finally give the following definition for the best pair (n, k): Definition 3 Let s E C\{i} and e> 0. For any k  such that R(ri, k—i, s)  e  IIV\{0}, we can find an n  e  ]J\T  e using Backlund’s remainder estimate. The ‘best” pair (n, k)  for the E-M series of(s) such that R(n, k  —  i, s) < e is defined by that positive integer  n, for which qn + pk is the smallest. The positive numbers q and p depend on the cost of computing a single term of S and Sk, respectively, and satisfy p, q  1.  Note: In the best pair (n, k), we will refer to n and k as the “best” n and “best” k, respectively. 3.2  Analyzing the Best Pair (n, k)  Let us look at BRE (2.10). 
We notice that, given s and e, we can solve for n in terms of k, and then we can substitute this expression for n in qn + pk to obtain an expression in k only. Furthermore, minimizing qn + pk is equivalent to minimizing an + k with a and a> 0, since p, q  =  i. Because the procedure of finding the smallest an + k does not  depend on a, we study the simpler case a  =  1 at first, and then look at the general case.  Since BRE simplifies for s E JR, we consider the real and complex cases separately. 3.2.1  Case I: s is Real, a  Throughout this section let s  =  e  i  11?, i. e. s  =  a, and let e > 0. Then Backlund’s remainder  estimate becomes  IR(n,k—i,s)I  Tk(n,s)L  We want to simplify ITk(n, s) as much as possible without significant loss of accuracy. As in Case I of Section 3.1, we apply estimate (B.24), Stirling’s formula for factorials, and  Chapter 3. Analysis of Backlund’s Remainder Estimate  25  equation (C.25) to Backlund’s estimate, and obtain an estimate which holds for k > 3, since the asymptotic estimate  Given  such that  Tk (n  )  ITk(n, s)  =  =  1  is nearly an identity for k  “-‘  2 —  ,  1  )2k s+2k_i 2 (  1)  —  we can solve the above estimate for n and get: 2  F(s +2k F(s)  Note that for now, we must consider n E shortly, we will return to n  F(8 + 2k F(s)  3:  —  1)  ]F+\{o}  )  (3.11)  .  in order to solve this equation, but  1/V. Now, we can substitute the above expression for n in  n + k, and obtain an expression involving only the variable k. Let us use this result and define the following function:  1_2  F  F(s) with k E ff+\{o}, and s and is developed from R(n, k  —  2k—i_‘\s+2k_1  (2ir)21c  )  are parameters. We also require k> 1, since this function  1, s). In order to identify the best k, we need to minimize Hr.  First, let us determine if H,. does in fact have a minimum. Since .s and Now, I’(s + 2k  are fixed, the term that embodies the dependence of H,. on k is —  1) grows factorially in k by the definition of the Gamma function (see  Appendix C), and exists a k 0  )2k 2 (  grows exponentially in k. As a result we can show that there  ii? such that for k < k 0 we have F(s + 2k  we have F(s + 2k  F(+2)-1)  —  , and for k > k 21 1) < (2ir) 0  2 r 7 (2 ) . This implies that for increasing k, H,. first decreases to 1) > k  —  a minimum, and increases monotonically thereafter. Figures 3.3, 3.4, and 3.5 depict H,. for different .s and  .  the choice for s and  The graphs show that the general shape of Hr is independent of  .  Since H,. has a minimum, we are able to find the best pair (n, k). Let kmin denote that k at which Hr has its minimum,  i.  e. kmin is the solution to  dH,.(k) = dk  0. Then the  Chapter 3. Analysis of Backlund’s Remainder Estimate  26  n+k  700 600  500 400 300 200 100 50  100  150  200  25&  = 1050, 2 = 10_b00, and E3 Figure 3.3: Hr(k) (= n + k) for s = 3 and respective minima are at (27, 53.8291), (55, 107.272), and (111,214.346).  =  10200.  The  =  10_250.  The  n+k  700 600 500 400 300 200 100  50  100  150  200  25&  = 10_50, E2 = 10_150, and Figure 3.4: Hr(k) (n + k) for s = 20 and respective minima are at (17, 41.8848), (73,148.856), and (128, 255.962). n+k 700 600 500 400 300 200  50  100  150  200  25&  Figure 3.5: Hr(k) (= n + k) for s = 50 and €1 = 10_b00, E 2 = 10_200, and The respective minima are at (18, 57.0659), (73, 148.1641), and (129, 271.227).  =  10_300.  Chapter 3. 
Analysis of Backlund’s Remainder Estimate  best n, denote it b mjn, is given by Hr(kmin) so let the best pair (n, k) be given by kmin is solving  dH(k)  =  —  27  kmin. However, we require n, k é JZV,  (Fflminl, F1rnin1). The remaining issue in finding  0. Because of the complicated dependence of H on k in equation  (3.12), an analytic expression for the minimum of Hr will be difficult to find (if it is even possible). Hence, we will instead utilize a numerical technique such as Newton’s method; but considering the complicated forms of the derivatives of Hr, the secant method may be more appropriate. Now that we can actually find the best pair (n, k), let us take a closer look at the sum of the best n and the best k for different s and  .  Examining the minima of Figures  3.3, 3.4, and 3.5, we notice a curious phenomenon: The value of n + k is approximately twice the size of the best k; in other words, the best n is almost the same size as the best k. We computed the best n and the best k for different s as  decreases, and depicted  their graphs in Figure 3.6. These graphs show that the best n and the best k are almost identical. If this were true in general, it would allow us to replace n by k in Tk(n,s)I. Then, BRE involves only one variable, namely k. This means that we can solve BRE for k given s and  without having to find the best n and the best k. Therefore, we will  investigate the relationship between the best pair (ri, k) and the k found by taking n  =  k  Let s be fixed. We will compute the values for the best pair (n, k) as a function of  ,  in BRE.  as well as those k which satisfy  IR(k, k  —  1, s)l <  according to BRE, and then compare  the two results. To that effect, we define the following two functions:  where  f()  :=  Hr(kmim)Ir,  (3.13)  g()  :=  kITk(k,s)I<,  (3.14)  > 0 and kmin is that number which solves  0. In other words, f(e) is  the sum of the best n and the best k as a function of e, and g() is the smallest k for  Chapter 3. Analysis of Backlund’s Remainder Estimate  28  150 125 100 75 50 25 D 140 120 100 80 60 40 20 D  Figure 3.6: Plots of best n together with best k for = e_D with 1 <D <700. which R(k, k  —  1, s) <  that we can choose n 2g(E), i. e.  ii  =  s =  3, 20, and 50, respectively, where  using Backlund’s remainder estimate. Since we want to show k in the Backlund’s remainder estimate, we will compare f(E) to  + k to 2k. Figure 3.7 depicts the graphs of  We observe that for decreasing  ,  f()  i. e. higher precision of  2g are nearly identical. This further supports the choice n estimate. In order to get a clearer picture of how We suspect that as  —*  which depicts the ratio  0 we have  and 2g(e) for different  —*  f and  =  C(s),  the curves of  f  s.  and  k in Backlund’s remainder  2g differ, we compute their ratio.  1. This conjecture is supported by Figure 3.8,  for the same values of  s  used in Figure 3.7. Consequently, it  seems reasonable to let n equal to k in BRE, obtaining the same accuracy as if using the best n and the best k. In this way, we are able to reduce the number of variables in our problem from two to one, thereby significantly reducing the amount of computational effort required.  Chapter 3. Analysis of Backlund’s Remainder Estimate  2k-n+k  2k-n*k  3c  300  29  250  25( 20(  200  15(  150  bC  100  SC  5 100  200  300  500  600  100  200  100  200  500  600  300  400  500  600  D  2kn+k 250 200 150 100 50 300  Figure 3.7: The graphs of f() and 2g() for with 1. <D <700.  
s  400  =  3, 20, and 50, respectively, where  =  e_D  =  e_D  n+k/2k  n+k/2k 100  2( 0.98  0.99  0.96 0.99  0.94 0.92  0.97  0.9  0.96  0.88 n+k/2k 00  300  0.9  Figure 3.8: The graphs of the ratio with 1 <D <700.  ----  2g(c)  for  s  =  3, 20, and 50, respectively, where  Chapter 3. Analysis of Backlund’s Remainder Estimate  30  Before we summarize the important result of this section, we will consider the case when s is complex. 3.2.2  Case II:  s  is Complex,  Throughout this section let  s =  1  c =  a + it E C, and let e > 0. Remember, we want to solve  BRE for n, and substitute the resulting expression in n + k in order to get an expression in k only. Again, we need to simplify Tk(n, s) as much as possible without significant I’(s+2k—1)  loss of accuracy. Unlike the real case,  C, so we need to take the absolute value  of this term. For computational reasons, we also need to estimate f’( + 2k because, when trying to minimize  n  —  1). This is  + k, we need to take the derivative of jF(s + 2k  —  1)1  with respect to k, and Mathematica [6] attempts to take the derivative of the absolute value of this term, which does not exist except at zero for complex arguments. However, the derivative of this function with respect to the real variable k does exist everywhere. Therefore, we will apply estimate (B.24), Stirling’s formula for factorials, equation (C.25) as well as Stirling’s formula for the Gamma function (see Appendix C) to Tk(n,s)I. We get: ITk(n, s)  2  I —  c  1  )2k flU+2k_1 2 (  —  Then, given  1  2\/ 21 (2ir)  1 +2ki  If(s) I + 2k  I  /  V  —  2w s + 2k  /  —  1  is+2k—1  \ s+2k—1  e  cr+2k- _sargs 11  I()I  we can solve this estimate of BRE for n. Unlike the real case, we have the  additional term  71 =  .  (  We get for  2  ri:  I(s)I (s)g(s)+1  Is + 2k  —  k 2 (2ire)  }  (3.15)  Chapter 3.  with  n  31  Analysis of Backlund’s Remainder Estimate  1R\{0}. Substituting the above expression for  n  in  n + k,  we can define the  following function:  kE  H(k)  —  2/  F(s)I  s+2k—1I e  —  with k> 0, and  s  and  (s)arg(s)+s_1  (  s + 2k  (s)arg(s)  )  a+2k_4\ 11  k 2 (27re)  2/ F(s)  —  1 c+2k-1  (3.16)  s+2k—1\ k 2 (2re)  )  +2k-1  +k,  are parameters. Now, we need to minimize H in order to identify  the best k. So again, we need to determine if H does in fact have a minimum. Since s and 21 Is+2k1I°  are parameters, the dependence of H on k is embodied by the fraction The term s + 2k  —  l0+21  grows factorially in k, and (27re) 21 grows  exponentially in k. Similar to the real case, this implies that for increasing k, fI first decreases to a minimum, and increases monotonically thereafter. Figures 3.9, and 3.10 depict II for different s and  ,  and show that the general shape of H,. is independent of  the choice for .s and c. n+k 700 600 500 400 300 200 100  50  100  150  200  25&  Figure 3.9: H(k) (= n + k) for s = + lOi and = i0°, = 10150, and The respective minima are at (31, 59.7432), (87, 166.976), and (142, 274.14). As with the real case, H has a minimum and we let the best pair  (fminl, Fkminl), where kmin is the solution to  dH(k) dk  =  0 and min  =  (n,  =  10250.  k) be given by  Hc(kmin)  —  kmin.  Chapter 3. Analysis of Backlund’s Remainder Estimate  32  n+k 700 600 500 400 300 200 100 50  100  150  200  25  Figure 3.10: H(k) (= n + k) for s = 5 + 9i and i = io°, 2 = 10_b00, and c The respective minima are at (28, 55.997), (56,109.609), and (111, 216.763).  
=  10_200  Since equation (3.16) is even more complicated than equation (3.12), we will again use a numerical method to find an analytic expression for the minimum of H. Our task now is to check if the best pair (n, k) behaves the same way as in the real case. The minimum of Figures 3.9, and 3.10 seem to indicate that the best n is again about the same size as the best k. Figure 3.11 depict the graphs of the best n and the best k for different s as  decreases, and show that the graphs in each of these Figures  are almost identical. Therefore, we continue investigating the relationship between the best pair (n, k) and the k found by taking n  =  k in BRE in the same manner as the real  case.  1(  100  200  300  400  500  600  100  Figure 3.11: Plots of best n together with best k for s tively, where = e_D with 1 < D 700.  200  =  300  D  400  + lOi and s  =  5 + 9i, respec  Chapter 3. Analysis of Backlund’s Remainder Estimate  33  Let s be fixed. Similar to equation (3.14) we define the following function g(f) and we let  f be the  :=  12 k 1 :  IjTk(k,3)I6’  same function as defined by equation (3.13) except we replace H by  FIG. Figure 3.12 shows the graphs of  f()  together with 2g() for different s. This way,  we are comparing n + k to 2k, and notice that for decreasing e the curves of are nearly identical. Furthermore, the ratio  ,  and 2g  which is depicted in Figure 3.13 for the  same values of s as in Figure 3.12, supports our previous conjecture that —*  f  1 as  —*  0. Therefore, we can draw the same conclusion as in the real case, namely, that it  seems reasonable to let n equal to k in BRE. 2kn+k  2k-n+k  30(  D  -  100  Figure 3.12: The graphs of f() and 2g(€) for s = where c = e with 1 <D <700.  200  300  400  500  600  D  + lOi and s = 5 + 9i, respectively,  Now, let us finally summarize the results from Sections 3.2.1 and 3.2.2. 3.2.3  Method for Finding the Best Pair (n, k) for c> 0  Our original question was, given s and  C(s)  to within  using the E-M series of  ,  what is the smallest cn + k that will give us  C(s)?  In Sections 3.2.1, and 3.2.2 we discussed  the case a = 1 for real and complex arguments  s,  respectively. As a result of that  discussion, we found a method of finding the smallest n + k, which works equally well for  Chapter 3. Analysis of Backlund’s Remainder Estimate  ni-k! 2k  34  n+k /2k D  0.995  0. 99  0.99  0. 9S  0.985  0. 98  0.98  0.  0.975  0. 97  0.97  Figure 3.13: The graphs of the ratio where = e_D with 1 <D < 700.  for s  =  + lOi and s  =  5 + 9i, respectively,  the real and complex cases, and can be summarized as follows. In essence, we simplified BRE by estimating it further applying estimate (B.24), Stirling’s formula for factorials, equation (C.25), and in the complex case also applying Stirling’s formula for the Gamma function (see Appendix C); solved the new estimate for n given e; and substituted the resulting expression in n + k. Thereby, we obtained an expression in k only, and were able to employ a numerical technique for finding the minimum. In order to answer our origmal question, we generalize this method in the following way. Based on equations (3.11), and (3.15) write  I  n(k)  :=  I ç  I and define H(k)  :=  (2F(s+2k—1)’ (2,r)2 ‘F(s)  s+2k—1  )  /  (  .9  ,  k 2 s2k_1I0+ 2 (27re)  _1 2  )  +2k  E  ia,  1 ‘  ‘  cn(k) + k with c> 0, so that  I  H(k),  L  H(k), s E C,  where Hr and H are as defined in Sections 3.2.1, and 3.2.2, respectively. 
By solving dHa(k)  =  0 using a numerical technique in the same way as before, we obtain the best k,  which in turn gives us the best n by evaluating n(bestk). So, we have demonstrated a practical method of finding the best pair (n, k) for arbitrary a > 0.  Chapter 3. Analysis of Backlund’s Remainder Estimate  3.2.4  35  Conjecture 1  Now that we have a working method for finding the best n and k, let us pick up from the intriguing result of Sections 3.2.1, and 3.2.2 about the best pair (n, k) for a  =  1. For  various real and complex arguments s, we demonstrated via careful analysis and detailed graphing that best n  best k. This led us to substitute k for n in BRE. In order to  verify the validity of this substitution, we looked at the ratio number which gives  best  n+kbest  k,  where k is that  Tk(k, s) < e. We noticed that this ratio tends to 1 as e  and therefore were sufficiently convinced that taking n given s and c will yield the best k as e  —*  =  —+  0,  k in BRE and solving for k  0.  We want to know if this result can be generalized to all a > 0. To this end, we compute the ratio  best k  for different a at a number of s values as e decreases. The scatter  plot of Figure 3.14 shows this ratio. As e decreases, this plot strongly suggests that the ratio converges for each a. This leads us to believe that there is a relationship between best n and best k, and we are prepared to make the following conjecture. Conjecture 1 Lets  4,  e  respectively, for any  solves  C\{1}. Let p,q be the cost of computing the terms Tj(n,s) and  j  J1V. Given e > 0, let (ne, k) be the best pair (n, k) which  ITk(n,s) <e. Then there exists a real constant c, 0 <c < 1, such that limn  =  p(p,q) k,  where (p,q)  =  (1— c)— +c.  Conjecture 1 has the following implications for BRE: It allows us to reduce the number of unknowns in BRE from two to one, so that we get the new E-M remainder estimate R(n,k—1,s)f  s + 2k — 1 u+2k—1  Chapter 3. Analysis of Backlund’s Remainder Estimate  36  n/k=u  1.4 1.2 $  a 0.8  0.6  Figure 3.14: Scatter plot of the ratio as a function of a. At a = 1, 6 the ratio has been computed for the following s and values: s = 3,20,50,+2i,+10i,+40i,2+3i,5+9i, and 10+15i; and e = e_D with D = 230,350,460,575, arid 700. The smooth curve is the graph of t(a) = (1 — + ,  ,.  .  .  ,  .  which depends only on k. Furthermore, solving for k given s and  ,  which can be done  employing a numerical technique, ensures that the solution is the best possible k. Consider the choice c  =  ,  and remember that a  =  .  Let us write t(a)  =  (p, q).  From Figure 3.14 we can see how well the graph of (a) fits onto the scatter plot at the points of convergence for each a. How accurate is this choice of c? Figure 3.15 shows the graphs of best n together with t(a) best k for some arbitrarily chosen values of s and different a; and Figure 3.16 shows the ratio a, and where k is again that number which gives  k  for the previous values of Tk((a)k, s)  plots of Figure 3.16 indicate that the ratio tends to a value near 1 as e value,  ,  for c must be close to the actual value of c.  —*  s  and  e. Since the 0, the chosen  Chapter 3. Analysis of Backlund’s Remainder Estimate  37  80 60 40 20 D  80 60 40  40  20  20  100  200  100  200  D  300  400  100  D  200  200  300  400  best k for (s,c) = (20,2), Figure 3.15: Plots of best n together with ((i + (+2i,3), (3,4), (5+9i,5), (2+3i,6), and (+10i,7), where €.= e_D with 1<D<500. —  Chapter 3. 
Analysis of Backlund’s Remainder Estimate  38  n*k/2k  n+k/2k  A  0 0.95 0.9  300\-  400  \_.j.9.Q—J  400  0.99  0.85 0.8  0.98  0.75 0.97  0.7 0.65 n+k/2k  n.kJ2k 1.03 1.02 1.01 100  200  D 0.99 0 99 0.97 0.96 n+k/2k  n+k/2k  A  1.02  100 0.98  0.96  \?0  300  0  200  \/300  48—’  0  0.99 0.98 0.97  best m+besi k Figure 3.16: The graphs of the ratio a (ii(cs)a+1)k for (s, a) = (20, 2), ( + 2i, 3), (3,4), e with 1 <D 500. (5+ 9i, 5), (2 + 3i, 6), and ( + lOi, 7), where  Chapter 3. Analysis of Backlund’s Remainder Estimate  3.3  39  The Smallest Pair (n, k)  In Section 3.1 we have defined that pair (n, k), which gives us the minimum number of terms to be used in the E-M series (2.5) for the computation of C(s), and we denoted this the best pair. Now, we want to introduce a different pair (n, k), which we shall call “smallest” pair, and discuss a possible relationship between the best and smallest pairs. By the behavior of BRE for fixed k and s from Section 3.1, we know that the terms Tk(n, s) first decrease to some best error, and then grow large again. The turning point  of growth is given by the growth ratio of Section 2.3, which tells us that the size of the smallest term can be established with the relationship s + 2k  Let us find the  27rn.  magnitude of the smallest term by applying the growth ratio to BRE. As before, we also utilize estimate (B.24), Stirling’s formula for factorials, equation (C.25), as well as Stirling’s formula for the Gamma function (see Appendix C). We get s + 2k—i u+2k—i  IR(n,k—1,s)I  s+2k—1 u + 2k 1  F(s+2k—1)j  2  —  Is+2k—iI u +2k  —  —  Is + 2k o + 2k  + 2k o + 2k  =  1  —  +2k u + 2k  ii  —  —  1  1  2  2 )2k_ 2 (  i 2(2ir) 4  —  —  1 F(s)  )2k_ 2 (  1  —  —  2 k 2 (2K)  1  1  F(s)  )2k 2 (  (s)I  / 2ir V s +2k  —  (s+2k—i’ ’’ 21 1 e  1 (s + 2k I(s) e  )  —  1 u+2k_1  y+ 1 k 2 l _targ(s+2k_1)  )  kl 2 (2nNJ+ i I’(s)  1 (c+2k_1)+targ(s+2k_1)  fl+2k1  _arg(s+2k_1) +2k_1  Chapter 3. Analysis of Backlund’s Remainder Estimate  40  Amazingly, this estimate depends only on k, so given s and e we can solve for k empioying a numericai tecnnique, ann n is given 1  •  i  i  s—f2k  i  D  2  accorcung to tne growtn ratio. 1  1  1  Based on this result, let us make the following definition. Definition 4 Let s E C\{1}, and e > 0. The “smallest” pair (n, k) for the E-M series  of((s) such that R(n,k  —  1,s)H  < e is defined by the smallest positive integer k that  satisfies Is+2k—12(2ir) jF(s) + 2k 1 —  1 (+2k_1)+targ(s+2k_1)  <  —  ,  ( 317 )  and then fls+2k 2ir Note: In the smallest pair (n, k), we will refer to n and k as the “smallest” n and  “smallest” k, respectively. We are interested, if there might be a relationship between the best and smallest pairs that will help us to reduce computational cost. A possible relationship does not ease computations any further if we compare solving equation (3.17) with solving equation  FiCii  Tk((p, q)k, s)]  solve numerically than  =  e; however, both of these equations are less demanding to =  k. This is because the derivative  0 from our working method for finding best n and best dH(k)  is more complicated and involves the Polygamma  function. So, just as with Conjecture 1 we can simplify the computations involved in finding best n and best k if their is a relationship between the best and smallest pairs. Therefore, we compute the ratio  k  for different o > 0 at a number of s values as  e decreases. Figure 3.17 depicts this ratio in a scatter plot. 
This plot provides strong evidence that the ratio converges for each α as ε decreases. Just as in Section 3.2.4, this leads us to believe that there is a relationship between smallest k and best k, and so we make the following conjecture.

Figure 3.17: Scatter plot of the ratio (smallest k)/(best k) as a function of α. At α = 1, . . . , 6 the ratio has been computed for the following s and ε values: s = 3, 20, 50, 1/2+2i, 1/2+10i, 1/2+40i, 2+3i, 5+9i, and 10+15i; and ε = e^(−D) with D = 230, 350, 460, 575, and 700. The smooth curve is the graph of ν(α).

Conjecture 2 Let s ∈ C\{1}, and let p, q be the cost of computing the terms T_j(n, s) and j^(−s), respectively, for any j ∈ N. Given ε > 0, let k_b denote the best k, and let k_s denote the smallest k. Then there exists a real function ν dependent on p and q only such that

lim_{ε→0} k_s = ν(p, q) k_b.

Note: Conjectures 1 and 2 also imply a relationship between best n and smallest n: given ε > 0, let n_b denote the best n, and let n_s denote the smallest n. Then

lim_{ε→0} n_s = |s + 2(ν(p, q)/μ(p, q)) n_b| / (2π).

The result of this section has not given us any other insight than that of reducing the computational cost involved in finding best n and best k over the method given in Section 3.2.3. However, Conjecture 2 involves the same amount of work in finding best n and best k as Conjecture 1; and since we justified Conjecture 1 in more detail, we will only employ the result of Conjecture 1 in our algorithm.

Chapter 4

Algorithm and Conclusion

4.1 Presentation and Discussion of Algorithm for Computing ζ(s)

We finally outline an algorithm for the computation of the Riemann zeta function using the E-M series (2.5), which is based on the method described in Section 3.2.3:

1. begin Zeta(s, ε)
   (a) initialize p, q respectively
   (b) if s = 0 then
       i. return −1/2
   (c) if s = 1 then
       i. return ∞
   (d) if Re(s) < 0 then
       i. return 2(2π)^(s−1) sin(πs/2) Γ(1 − s) · Zeta(1 − s, ε)
   (e) else
       i. let α = p/q
       ii. let k be that positive integer closest to the solution of dH_α(k)/dk = 0
       iii. let n = ⌈n(k)⌉
       iv. return Σ_{i=1}^{n−1} i^(−s) + n^(1−s)/(s − 1) + n^(−s)/2 + Σ_{j=1}^{k} T_j(n, s)
   (f) end if
2. end Zeta

Notes: The following is a list of explanations about the variables and functions involved in the above algorithm:

• s ∈ C is the argument for the Riemann zeta function.

• ε > 0 is the accuracy with which we want to compute ζ(s).

• p, q denote the cost of computing T_j(n, s) and j^(−s), respectively.

• The Gamma function, Γ, on line 1.d.i. is built into most of the common computer languages, but if necessary, we can compute Γ(s) using the Stirling series for log Γ(s).

• The functions H_α(k) and n(k) on lines 1.e.ii. and 1.e.iii., respectively, are as defined in Section 3.2.3.

• On line 1.e.ii., any numerical root-finding technique can be employed to solve dH_α(k)/dk = 0 for k.

• The work required to compute the terms T_j(n, s), 1 ≤ j ≤ k, on line 1.e.iv. can be reduced by applying a recursive method such as suggested by Lehmer [5]. We have to take into account that such methods lower the computational cost, and we have to decrease p accordingly.

As the algorithm stands, it can be converted into any suitable programming language and applied for the computation of ζ(s) to within any number of accurate digits for all s ∈ C.
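As one possible rendering in Mathematica, the sketch below follows the outline above, except that step 1.e.ii is replaced by the BRE-based search bestK from the earlier sketch (so it is really closer to the Conjecture 1 variant discussed next), and the accuracy ε is passed unchanged through the reflection step 1.d; the name zetaEM is ours.

(* Sketch only: reuses mu, term and bestK from the earlier sketch, and     *)
(* evaluates the E-M series (2.5) once n and k have been chosen.           *)
zetaEM[s_, d_, a_: 1] := Module[{n, k},
  Which[
   s == 0, -1/2,
   s == 1, ComplexInfinity,
   Re[s] < 0,
   2 (2 Pi)^(s - 1) Sin[Pi s/2] Gamma[1 - s] zetaEM[1 - s, d, a],
   True,
   (k = bestK[s, d, a]; n = Ceiling[mu[a] k];
    N[Sum[i^-s, {i, n - 1}] + n^(1 - s)/(s - 1) + n^-s/2 +
      Sum[term[j, n, s], {j, k}], d])]]

A check such as zetaEM[1/2 + 10 I, 50] - N[Zeta[1/2 + 10 I], 50] should then come out around 10^(-50) or smaller in magnitude.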
Before we discuss the complexity of this algorithm, we show how Conjecture 1 can be employed to yield a slightly different algorithm. Let c be the constant of Conjecture 1. As was shown in Section 3.2.4, c = 1/e is a reasonable choice; so we will use it. The new algorithm is obtained by replacing lines 1.e.i. through 1.e.iii. with the following three lines:

   let μ = (1 − c)(q/p) + c
   let k be the smallest positive integer that satisfies |T_k(⌈μk⌉, s)| ≤ ε
   let n = ⌈μk⌉

We now have two operative algorithms for computing the values of the Riemann zeta function to any precision for any complex argument. Furthermore, these algorithms are optimal in that they have been derived so that the least number of terms is used from the E-M series of ζ(s); i.e., these algorithms achieve minimum cost in the computation of the series. However, these algorithms differ in their computational cost of finding n and k. We discuss their differences, and also draw another recent algorithm into the discussion, which has been developed by Cohen and Olivier [2]. For easy reference, we denote the algorithm based on the method of Section 3.2.3 by A, the algorithm using Conjecture 1 by B, and Cohen and Olivier's algorithm by C.

Cohen and Olivier's objective is to compute the value of the Riemann zeta function for any s ∈ C to within any given accuracy. Their algorithm C utilizes the relationship between n and k as given by the growth ratio, and an estimation of Γ(s + 2k), to find the values of n and k; for details see reference [2].

First, we look at the values of n and k given in Table 4.3 as computed by the three different algorithms A, B, and C, and compare them. Cohen and Olivier did not take into account the different computational cost of terms from the two sums in the E-M series of ζ(s). So, for comparison reasons, we assume that the cost is the same, and therefore take p = q = 1, i.e. α = 1, in the computation of n and k. The subscripts A, B, and C of n and k indicate with what algorithm n and k have been computed. As was shown in Sections 3.2.1 and 3.2.2, it is immediately apparent from Table 4.3 that k_A + n_A ≈ k_B + n_B. More importantly, Table 4.3 also shows that

n_C + k_C ≈ (3/2)(n_A + k_A).

This means that algorithm C uses about 1 1/2 times as many terms from the E-M series for the computation of ζ(s) as algorithms A or B. Further, the values of k_C compared to k_A or k_B are almost twice their size; however, k controls the number of terms used in the sum Σ_{j=1}^{k} T_j(n, s), and it takes more work to compute T_j(n, s) than j^(−s). This means that the cost of computing the E-M series is far less in either algorithm A or B than in algorithm C.

   s         d     k_C    n_C    k_A    n_A    k_B = n_B
   3         50     59     20     27     27     27
   3        200    232     75    111    114    108
   20        50     47     18     17     25     22
   20       250    277     92    128    128    128
   50       100     64     29     18     40     32
   50       300    294    102    129    143    136
   1/2+10i   50     67     22     31     29     30
   1/2+10i  250    298     95    142    133    138
   5+9i      50     62     21     28     28     28
   5+9i     200    235     76    111    106    109

Table 4.3: Values of k and n as computed with algorithms A, B, and C, where s is the complex argument of the zeta function and ε = 10^(−d) is the given accuracy.

The question that needs to be answered next is how the computational cost of finding n and k compares in the three algorithms. As pointed out earlier, algorithm A's main work lies in solving dH_α(k)/dk = 0 for k, for all s ∈ C\{1}.
Regardless of the numerical technique employed, we need at least the first derivative dH_α(k)/dk. However, dH_α(k)/dk already involves the Log, Gamma, and Polygamma functions and is, therefore, quite complex in structure. On the other hand, algorithm B's core work lies in solving |T_k(⌈μk⌉, s)| = ε for k, for all s ∈ C, which only involves the Log and Gamma functions after the usual simplifications. Now, Newton's method converges quadratically, so for a good enough initial guess very few iterations should be required to solve for k with a tolerance of 1, which is all the accuracy we ask for since we take the ceiling of that k. In fact, with the approximations involved, being within one or two of the best k is reasonable. Algorithm C's main work comes from Newton's method applied to a function whose most intricate term is the Arctan; moreover, Cohen and Olivier provide an initial guess. Furthermore, this work is only required for complex s, since for s ∈ R algorithm C solves for n and k by given identities.

To sum up, we have that algorithm C requires very little cost in finding n and k, and algorithm A the most. Algorithm B is less expensive than algorithm A, but somewhat more expensive than algorithm C, in computing the values for n and k. Of course, much depends on the numerical root-finding technique employed, but a numerical analysis is beyond the scope of this paper.

What we have now are two algorithms, namely B and C, with opposite qualities. Algorithm B takes some work in finding n and k, but uses the least computational cost in the E-M series, while algorithm C requires very little work in finding n and k, but uses a large number of terms from the E-M series. This implies that for lower precision calculation it is entirely possible that Cohen and Olivier's algorithm is less expensive than our algorithm, even though they use more terms in the E-M series of ζ(s). However, for high precision calculation, when the cost is dominated by the computation of the E-M series, our algorithm will excel due to the above points.

4.2 Conclusion and Future Work

We have met the objective of this paper in that we presented two algorithms for the computation of the Riemann zeta function, which are based on the E-M series of ζ(s) and minimize the cost of computing the E-M series with respect to BRE. One of the algorithms utilizes a method for computing the optimal values of the upper bounds, n and k, for the two sums in the E-M series. This method is derived and stated in Section 3.2. The other algorithm is based on Conjecture 1 in Section 3.2.4 and requires less computational cost in finding n and k than the previous algorithm.

There are two issues that have been mentioned which deserve further investigation. First, in high precision calculation the Bernoulli numbers need to be dealt with in another way than tabulating them, as storage space becomes expensive. They can be estimated, but this causes a loss in accuracy of n and k. Secondly, for both algorithms an efficient numerical root-finding technique has to be implemented. Neither of these issues has been addressed here, since they are beyond the scope of this paper.

Our main result is the conjectured relationship between n and k. In the case when n = μk, we have not performed an exhaustive analysis of BRE. Based on the graphs of Figure 3.15, we believe that there is a much simpler relationship between k, s, and ε than is given through BRE.
We suggest that a relationship between k and s in the above case merits further investigation. We also want to point out that considering a relationship between the upper bounds of the two sums in a series is worthwhile to look into in the general case. For example, using the identity (C.25) and the Stirling’s series for the Gamma function, we get a series for log F(s) that has two sums. Since the Gamma function is related to the Riemann zeta function, e. g. by the Functional Equation (2.3), the technique employed in this paper should be applicable to the Gamma series as well.  Bibliography  [1] R. Backlund, Uber die Nullstellen der Riemannschen Zetafunktion, Dissertation, Helsingfors, 1916. [2] H.Cohen and M. Olivier, Calcul des valeurs de la fonction zeta de Riemann en multiprécision, C.R. Acad. Sci. Paris, t. 314, Série I, 427-430, 1992 [3] H. M. Edwards, Riemann’s Zeta Function, Academic Press, New York and London, 1974. [4] J. I. Hutchinson, On the Roots of the Riemann Zeta Function, Trans Amer Math Soc, 27:49-60 (1925). [5] D. H. Lehmer, Extended Computation of the Riemann Zeta-Function, Mathematica, 3:102-108 (1956). [6] S. Wolfram, Mathematica: A System for Doing Mathematics by Computer, AddisonWesley Publishing Company, Inc., 1991.  49  Appendix A Validation of Backlund’s Remainder Estimate  Backlund’s estimate of the E-M remainder is derived by leaving out a negative integral (see Section 2.4). We have been unable to evaluate or estimate this integral. However, we need to validate BRE as a good estimate of the actual error to justify its use. Therefore, we compute the ratio  as a function of  n  for fixed large k and different s E C,  where AE denotes the absolute error. Remember that the absolute error is defined as the absolute value of the difference between the actual value and the computed value; in our case we have AE=  k(s)-  Figure A.18 depicts the graphs of the ratio for nine values of s. It is evident from these graphs that in all cases considered, BRE increases the number of accurate digits asked for by about 20% when n  k. This means that  n  and k are slightly larger than  required for a given accuracy. Since the simplification of the remainder introduced by BRE is needed to obtain a relationship between  n  and k, this small increase in cost is  justified. Furthermore, by taking  n  =  k in BRE we obtain an estimate for the error which is  even sharper than in the general case, as is shown in Figure A.19. We used  n  comparing Cohen and Olivier’s [2] algorithm to our algorithm in Section 4.1.  50  =  k when  Appendix A. Validation of Backlund’s Remainder Estimate  51  n  n  error < 100. The first plot Figure A.18: The ratio absolute , for k = 40 and 1 BRE corresponds to s = 3, 20, and 50; the second plot to s = + 2i, +lOi, and +40i; and the third plot to s = 2 + 3i, 5 + 9i, and 10 + 15i. Note that for the complex values the ratios are very close to each other, so that their graphs overlap.  60  100  80  n=k  0.  0.  0.  error Figure A.19: The ratio absolute , for n BRE s = 2 + 3i, respectively.  =  k with 1  <  k  <  100, and s  =  3,  + 2i, and  Appendix B Bernoulli Polynomials  Definition 5 Let Bm(X) denote the m-th Bernoulli polynomial, where x E 1R. Bm(x) is inductively defined by Bo(x) = 1 and for m> 0 by the two conditions (x) 1 B  =  (m + 1)Bm() and  (B.18)  jBm+i(X)dX  =  0.  (B.19)  For example, Bi(x)  =  x—  (x) 2 B  =  x —x+,  (x) 3 B  =  3 x  —  x2  + -x.  
of Bm(X)  Definition 6 Let bm() be the periodic extension of the restriction One of its properties is that it is a continuous function for to x E [0, 1] is continuous and Bm(0) = Bm(1) for  m >  to x  [0,  1].  1, since Bm(X) restricted  1.  m>  We want to express L’m() in terms of its Fourier series. In general, the constant term of the Fourier series of L’m() is zero because of condition (B.19) in the definition of the Bernoulli polynomial. The k-th Fourier coefficient of bi(x), k  J  Bi(x)  dx  =  =  =  j(x  —  7nI 2 e_ —x  2ki  27rki 52  L  0, is given by  dx  1  +  1 (2kj  1 —  f  1  2 e  dx (B.20)  Appendix B. Bernoulli Polynomials  Assume that m > 0. function  f  53  Given the definition of the Fourier series for a continuous  with period 1, i.e.  f(x)  ix, the k-th Fourier coefficient of 2 f(k)e  =  f’  is 2irkif(k). Together with condition (B.18) in the definition of the Bernoulli polynomial and the result of equation (B.20), this implies that the k-th Fourier coefficient of m(X) must be  —  (2)m.  Hence, the Fourier series of L’m(X) is given by 00  E  m.  k=—oo,kO  (2irki’m \ I  ” 2 e  (B 21  for m> 0, or, alternately  =  m_i2(2m). 1) r) 7 (2 m 2 /  °°  k=1  cos2rkx m 2 k (B.22)  /‘2m+1  for m > 0 and 2m  =  (—1) rn_i  2(2m + 1)! m+l 2 (27r)  03  k=1  sin 27rkx m+l 2 k  0. Furthermore, from equation (B.21) it can be easily seen that  bm() has the following property for m> 0: x) = mrn_i(x).  Definition 7 Let  13m  denote the m-th Bernoulli number, which is defined by the constant  term of the m-th Bernoulli polynomial: = Bm(0). Since Brn(0) = 0 for  (B.23)  ni  odd, it follows that for all  rn  2rn+1 3 /  =  0 and  132m  =  m(0). 2 B  Appendix B. Bernoulli Polynomials  54  The following is an estimate of the magnitude of /3m for large m. Even though the first few Bernoulli numbers are quite small in size, their magnitude grows rapidly as can be seen from the following estimate: 132m  =  m(O) 2 B  =  m(O) 2 ?b °°  (“  -‘  “  “  12  1 ‘2m I k=1  1  m 2 k  m 2 (2r)  since lim m—+c  m 2 k  ——  k=1  = lim 2 C( ) = 1. m—*c m  Applying Stirling’s formula for factorials, we finally obtain the estimate /32m1  for large m.  4/  ()  2m  (B.24)  Appendix C Gamma Function  Definition 8 The Gamma function F(s) is defined by  F(s)  =  fts_1e_tdt  fors EC with J(s)>O. Integration by parts shows that F(s + 1) F(s)  =  =  sF(s). More explicitly we have  F(s + n) s(s+1)...(s+n—1)  This shows that F(s) can be extended to all s E C with J(s)  (C.25) >  —n, except at the  negative integers, where it will have simple poles. The following is a short collection of special values and properties of F(s) used in this paper. Special value s F at s  =  : By a change of variables in the definition of F(s) we can calculate  =  F  () 1  00  =  2 dt et  =  Euler’s Formula: For all s E C F(s)  =  urn  n00s(s+l)(s+n)  55  Appendix C. Gamma Function  56  Legendre’s Duplication Formula: For all s E C F  ()  Reflection Formula: For all s  F(s) = 2’F  (5)  (8+1)  C with 0 <J(s) < 1 F(s)F(1 —s) =  sin irs .  Stirling’s Formula: For large s E C with args <ir F(s) Using Legendre’s Duplication Formula and the Reflection Formula we can also get the following relationship:  23F(1_s)=F( ) ) 2 F( ’ =,  28ir  sin()F(1s)F()  (2ir)’2 sin  () sir  F(1  —  s) =  =r(l;S)  F(i=) 1 .  (C.26)  Appendix D Program  The following is the Mathematica code that produces all the graphs and tables in the text. All routines are thoroughly commented.  (* Z(s) denotes the Riemann Zeta function. R(n, k, s) is as defined in Section 2.2. 
The variable a in this program denotes alpha, which is defined in Section 3.3.2. *)  (* Zn[n, s] takes a complex number s and a positive integer n; and returns the partial sum of the Riemann Zeta function. *)  Zn[n_Integer, sj  Block[{i},  :  Sum[i(—s), {i, n—1}]J  I;  (n>O && NumberQ[s]);  (* T[k, n, s] takes a complex number s and positive integers  j  and n; and returns T as defined in  the Euler-Maclaurin series for Z(s) of Section 2.2. *)  57  Appendix D. Program  58  T[k_Integer, n_Integer, s_]  := Block[{a, b,  c, d, i},  a = (2k)’; b = n(s+2k-1); c = BernoulliBE2k]; d = Product[s (c*d)/(a*b)]  +  j, {j, 0, (2 k)  —  2}];  I;  (k>0 && n>0 && NumberQ[s]);  (* BRE[n, k, s] is Backlund’s estimate for the remainder of the EM series for (s).  BRE[n_Integer, k_Integer, s_]  :  Block[{a, b},  a = Abs[(s+2k+1)/(Re[s]+2k+1)]; b = Abs[T[k+1, n, s]]; a*b] I; (n>0 && k>0 && NumberQ[s]);  (* GErrVBRE[s, k, nmax] takes a complex number s, and positive nonzero integers k and nmax; and returns the graph of (absolute error)/(Backlund’ s remainder estimate) as a function of n, where 1 <= n <= nmax.  GErrVBREEs_, k_Integer, nmax_Integer]  :=  Block[{n, ti, t2, t3, t4, tS, t6, t7, t8, t9},  Appendix D. Program  59  ti = Table[Zs[n, k, s], {n, 4, nmax, 4}]; t2 = Zeta[s] — Prepend[tl, ZsE1, k, s]]; t3 = Map[Abs[#] Sc, t2]; t4 = Table[BRE[n, k, s], {n, 4, nmax, 4}]; t5 = Prepend[t4, BREE1, k, s]]; t6 = N[Transpose[{t3, t5}], 200]; t7 = Apply[Divide, t6, 1]; t8 = PrependETable[n, {n, 4, nmax, 4}], 1]; t9 = Transpose[{t8, t7}]; ListPlot [t9, PlotJoined -> True, AxesLabel —>  {Hn,  IIlI}]]  /;  (NumberQ[s] ScSc k>0 ScSc nmax>0);  (* GErrVC[s, kmax] takes a complex number s, and a positive non-zero integer kmax; and returns the graph of (absolute error)/(Backlund’ s remainder estimate) as a function of k, where 1 <  k <  kmax  and n=k. *)  GErrVC[s, kmaxlnteger]  :  Block[{k, tl, t2, t3, t4, t5, t6, tT, t8, t9}, ti = Table[ZsEk, k, s], {k, 4, kmax, 4}]; t2 = Zeta[s] — Prepend[tl, Zs[1, 1, s]]; t3 = Map[Abs[#] Sc, t2];  Appendix D. Program  60  t4  =  Table[BRE[k, k, s], {k, 4, kmax, 4}];  t5  =  Prependtt4, BRE[1, 1, s]];  t6  =  N[Transpose[{t3, t5}], 200];  t7  =  Apply[Divide, t6, 1];  t8  =  Prepend[Table[k, {k, 4, kmax, 4}], 1];  t9  =  Transpose[{t8, t7}];  ListPlot [t9, PlotJoined —> True, AxesLabel —> {“n-i-k”, ““}]] I; (NumberQ[s] && kinax>0);  (* GRn[s, n, kmax] takes a complex number s and positive integers n and kmax, and plots the graph of Backlund’s remainder estimate for IR(n, k, s)  I  for fixed n,  where kmax is the largest value for k. *)  GRn[s_, n_Integer, kmax_Integer:340] b  =  :  Block[{b, t, k},  E; t  =  Table[{k, —Log[b, N[BRE[n, k, s], 300]]},  {k, kmax}]; ListPlot [t, PlotJoined -> True, AxesLabel —> {“k”, “D”}]]  I;  (NumberQ[s] &&c n>0 && kmax>0);  (* GRk[s, k, nmax] takes a complex number s and positive  Appendix D. Program  61  integers k and nmax, and plots the graph of Backlund’s remainder estimate for R(n, k, s) for fixed  k,  where nmax is the largest value for n. *)  GRk[s_, k_Integer, nmax_Integer:70] b  =  :  Block[{b, t, n},  E; t  Table[{n, —Log[b, N[BRE[n, k, s], 300]]},  =  {n, nmax}]; ListPlot [t, PlotJoined —> True, AxesLabel —>  {“n”,  “D”}]] I;  (NumberQ[s] && k>O && nmax>O);  (* LogRSk[s, k] takes a complex number s and a positive integer k; and returns the logarithm to the base b of the size of the smallest term in the E-M sum for Z(s). Default value for b is e. 
*)  LogRSk[s_, k_Integer] b  Block[{b, al, a2, a3, a4, aS, a6},  E;  =  al  =  (1/2) Log[b, Abs[s+2k—1]];  a2  =  —Log[b, Re[s]+2k—1];  a3  =  Log[b, 2];  a4  =  (Re[s]—1) Log[b, 2 Pu;  a5  =  —(Re[s]—1) Logtb, Abs[s]];  a6  =  ImEs]  (ArgEs] —ArgEs+2k—1])—(2k—1);  Appendix D. Program  62  N[al+a2+a3+a4+a5+a6]] I; (NumberQ[s] && k>1);  (* SkGR[s, d] takes a complex number s and an integer d, where d is the number of precision digits required in the calculation of Z(s), and returns that number k for which the estimate for the size of the smallest term in the E—M sum for Z(s) is less than or equal to 1O(—d). *)  SkGR[s_, d_Integer]  Block[{k, b, e},  k  =  2;  b  =  E;  e  =  N[— d Log[b, 10]];  While[LogRSk[s, k] > e && k < 600, k++]; k]  I;  (NumberQ[s] && d>0);  (* GuessuEa] takes a positive number a greater than one; and returns u(a) as chosen in Section 3.2.4. *)  Guessu[a_]  :  N[(1—(1/E))(1/a)+(1/E)] I;  (NumberQ[a] && a>0);  (* HEk, s, d, a] takes a positive real number k, a complex number s, and a positive number d, and  Appendix D. Program  63  returns n+k as defined by the function H in Chapter 3.2.3. *)  s_, d_, a_: 1]  Block[{ci, c2, e},  :  ci  =  2 1Od / (2 PiY(2k);  c2  =  Gainma[s+2k—i]/Gairnua[s];  e  i/(s+2k—i);  =  N[a ((ci c2Ye)  k]] I;  +  (NumberQ[s] && Im[s]0 && NumberQ[d] Sc& d>O Sc& NumberQ[a] && a>O);  s_, d_, a_: 1]  Block[{ci, c2, c3, e},  :  ci  =  2 iOd / (2 PiY(2k);  c2  =  Sqrt[(Re[s]+2k—i)2  c3  =  Abs[Gamma[s+2k-i]/Gamma[s]];  e  =  +  (Im[s]Y2] / (Re[s]+2k—i);  i/(ReCs]+2k—i);  N[a (ci c2 c3Ye  +  k]] I;  (NumberQ[s] && NumberQ[d] && d>O && NumberQ[a] Sc& a>O);  (* HCEk, s, d, a] takes a positive real number k, a complex number s, and a positive number d, and returns n+k as defined by the function H in Chapter 3.2.3. *)  HC[k_, s_, d_, a_ :1]  :  Block[{ci, c2, c3, e},  Appendix D. Program  64  ci = ((Re[s]+2k—i)2+Im[s]’2Y(i/2) / E; c2 = (2 SqrtE2 Pi] iOd) / (Abs [Gainma[s]] E (Im Es] Arg Es])); c3 = ((ReEs]+2k—1)2+ImEs]2)(i/4) / ((ReEs]+2k—i)  *  (2 PiY(2k));  e = i/(Re[s]+2k—i); N[a (ci (c2 c3)e)  +  k]] I;  (NumberQ[s] && NumberQEd] && d>O && NumberQEa] && a>O);  (* BnHEk, s, d, a] takes positive numbers k, and d, and a complex number s; and returns the best n. *)  BnHEk_, s_, d_, a_: 1]  :  (HEk,  5,  d, a]—k)/a I;  (NumberQ[k] && k>=i Sc& NumberQEs] && NumberQEd] && d>O && NumberQEa] && a>O);  (* GBESTnk[s, d, kmin, kmax] takes a complex number s, a positive number d, and positive integers kmin and kmax; and returns the graph of best (n, k) using H as defined in Section 3.2.3, where kmin <  k <= kmax. *)  GBESTnkEs_, d_, kmin_Integer, kmax_Integer, a_ : i]  :  BlockE{t, k, m, t = TableE{k, HE1C, s, d, a]}, {k, kmin, kmax}];  Appendix D. Program  65  m  =  Min[Map[#[[2]] &, NEt]]];  c  =  AppendE PositionEMapE#[E2]]&,  g  =  NEt]],  m][[1]]+kmin—1,m];  ListPlot[t, PlotJoined —> True, PlotRange —> {{kmin, kmax}, {1, 700}}, AxesLabel —> {“k”, “n+k”}];  Show[{Graphics[{AbsoluteThickness[2], Cross Ec]}], g}]]  I;  (NumberQ[s] && NumberQ[d] &8c d>O Sc& kmin>O &&  kmax>kmin && NumberQEa] && a>O);  (* BkdHEs, d, a] takes a complex number s and a positive number d; and returns that k at which H(k, s, d) has a minimum, using its derivative. *)  BkdH[s_, d_, a_: 1] F[y_] k  =  :  :  Block[{F, x, y, k},  D[H[x, s, d, a], xl I. x —> y;  1;  While[F[k] < 0 && k < 400, k++]; k]  I;  (NumberQ[s] && Im[s]O && NumberQEd] && d>0 && NumberQ[a] && a>O);  BkdH[s_, d_, a_: 1] FEy_]  :=  Block[{F, x, y, k},  D[HC[x, s, d, a], x] I. x ->  Appendix D. 
Program  66  k = 1; While EFEk] < 0 && k < 400, k++]; k]  I;  (NumberQ[s] && NumberQ[d] && d>0 && NumberQ[a] && a>0);  (* BkBRE[s, d, a] takes a complex number s and a positive number d; and returns the best k for which  IR(k, k-i, s)I <= i0(-d), using Backlund’s remainder estimate. *)  BkBRE[s_, d_, a_:i]  :  Block[{b, e, u, k},  b = E; e = NE— d LogEb, iOu; u = GuessuEa];  k = 2; While[LogEb, N[BREECeiling[u k], k—i, s], 300]] > e &&  k < 600, k++]; k]  I;  (NumberQEs] && NumberQEd] && d>0 && NumberQEa] && a>0);  (* GBkVBnEs, dmax, a] takes a complex number s and a positive integer dmax; and returns the graph of the best n versus the best k. *)  GBkVBnEs.., dmax_, a..: i]  :  Appendix D. Program  67  BlockE{dm, u, d, ti, t2, t3, t4, t5, t6, t7}, dm u  =  Ceiling[dmax]; Guessu[a];  =  ti  =  Table[{s, d, a}, {d, 1, dm, 1O}];  t2  =  Table[d Log[1O], {d, 1, dm, 1O}];  t3  =  Apply[BkdH, ti, 1];  t4  =  Transpose[{t2, u t3}];  t5  =  Partition[Flatten[Transpose[{t3, tl}], 2], 4];  t6  =  Apply[BnH, t5, 1];  t7 = Transpose[{t2, t6}];  gi  =  ListPlotEJoin[{{O, —1O}, {O, O}}, t4], PlotJoined —> True, AxesLabel —> -(“D”, 11k11}];  g2  =  ListPlot[t7, PlotJoined —> True, AxesLabel —> {“D”,  “n”}];  Show[{gl, g2}, AxesLabel —> {“D”,  ‘“}]]  I;  (NumberQ[s] && NumberQ[dmax] && dmax>O && NumberQ[a] ScSc a>O);  (* Ratiok[s, dmax, a] takes a complex number s and a positive integer dmax; and returns the graph of H(kmin)  =  best n+k Id, the graph of two times  those k for which IR(k, k-i, s)I  <=  1O(-d)  using Backlund’s remainder estimate, and the graph of the ratio of 2k to (best n  +  best k), where  Appendix D. Program  68  the number of accurate digits d runs from 1 to 1O(—dmax) in steps of 5. *)  Ratiok[s., dmax_, a_:1]  :  Block[{dm, c, d, ti, t2, t3, t4, t5, t6, t7, t8, gi, g2}, dm c  Ceiling[dmax];  =  a Guessu[a] + 1;  =  ti  =  Table[{s, d, a}, {d, 1, dm, 5}];  t2  =  Table[d LogElO], -Cd, 1, dm, 5}];  t3  =  Apply[BkBRE, ti, 1];  t4  =  Transpose[{t2, c t3}];  t5  =  Apply[BkdH, tI, 1];  t6  =  Partition[Flatten[Transpose[{t5, tl}]], 4];  t7  =  If[Im[s]O, Apply[H, t6, 1], Apply[HC, t6, 1]];  t8  =  Transpose[{t2, t7}];  t9  =  ApplyEDivide, Transpose[{t7, c t3}], 1];  tlO gi  =  =  TransposeE{t2, t9}]; ListPlot[t4, PlotJoined —> True, AxesLabel —> {“D”, “k”}];  g2  =  ListPlot[Join[{{O, —30}, {O, O}}, t8], PlotJoined -> True, AxesLabel —> {“D”,  ‘an+k}];  Show[{gl, g2}, AxesLabel —> {“D”, ““}]; ListPlot EtlO, PlotJoined -> True,  Appendix D. Program  69  AxesLabel —> {“D”, “}]]  I;  (NumberQ[s] && dmax>O && NumberQ[a] && a>O);  (* Gu[siist, dust, amax, ainc] takes a list of complex s values, a list of positive d values, where d is the number of accurate digits, the maximum number amax of a, and the increase ainc of a from 1 to amax. Gu returns a statistical plot of n/k values based on slist, dust, and a values, together with the function u(a) as chosen in Section 3.2.4. *)  Gu[siist_List, dust_List, amax_, ainc_: 1] BlockE{sl, di, al, i,  j,  :  sdt, at, Hi, sdat, kt, ksdatl,  ksdat2, nt, nktl, nkt2, nkt3, gi, g2}, si  =  Length[slist];  di  =  Length[diist];  al  =  si  sdt  *  di;  Flatten[Table[Transpose[{Table[slisttEi]], {di}],  =  dlist}], {i, si}], 1]; at  =  Tabie[a, {a,  Hi  =  Length[atl;  sdat  =  .5, amax, ainc}];  Tabie[Table[AppendEsdtC[j]], at[[ilJJ, {i, Hl}];  kt  =  Appiy[BkdH, sdat, {2}];  ksdatl  =  Tabie[Tabie[Prepend[sdat  [Li]] [Li]],  {j,  al}],  70  Appendix D. 
Program  kt[[i]] [[JIll, {j, ksdat2 nt  =  =  al}]  ,  {i, Hl}]  Partition[Flatten[ksdatl], 4];  ApplyEBnH, ksdat2, {1}];  nktl  =  Transpose[{nt, Flatten[kt]}];  nkt2  =  Partition[Apply[Divide, nktl, {1}], all;  nkt3  =  Flatten[Table[Transpose[{Table[at[[i]], {al}], nkt2[[i]]}], {i, Hl}], 1];  gi  =  ListPlot[nkt3, AxesLabel —> {“a”, “n/k”}];  g2  =  Plot[Guessu[a], {a,  .5, amax},  AxesLabel —> {“a”, “u”}]; Show[{gl, g2}, AxesLabel —> {“a”, “n/k=u”}]] I; (NumberQ[amax] && amax>=.5 && NumberQ[ainc]);  (* Gv[slist, dust, aiuax, ainc] takes a list of complex s values, a list of positive d values, where d is the number of accurate digits, the maximum number amax of a, and the increase ainc of a from 1 to amax. Gu returns a statistical plot of (smallest n)/(best n).  Gv[slist_List, dlist_List, amax_, ainc_:1] Block[{(*sl, dl, al, i,  j,  :  sdt, at, Hl, tl, t2, sdat,  ri, r2, r3, r4, si, s2, s3*)}, si  =  Length[slist];  dl  =  Length[dlist];  Appendix D. Program  ai  si  =  sdt  *  71  di;  FlattenETabie[Transpose[{Table[siist[[i]], {di}],  =  diist}], {i, si}], 1]; at  =  Tabie[a, {a,  Hi  =  Length[at];  ti t2  .5, amax, ainc}];  Appiy[SkGR, sdt, 1]; =  sdat  Flatten[Tabie[tl, =  {j,  Hi}]];  Tabie[Tabie[Append[sdt[[j]], at[[i]]],  -(j,  ai}],  {i, H1}]; ri  =  Appiy[BkdH, sdat, {2}];  r2  =  TabieETabie[Prepend[sdat[[i]] rl[[i]]  [[j]]]  ,  {j  [Ej]], ai}]  ,  {i, Hi}]  r3  =  Partition[Fiatten[r2], 4];  r4  =  Appiy[BnH, r3, {1}];  si  =  Transpose[{t2, r4}];  s2  =  Partition[Appiy[Divide, si, {1}], ai];  s3  =  FiattenETabie[Transpose[{Tabie[attti]], {al}], s2[[i]]}], {i, Hi}], 1];  ListPiot [s3, AxesLabel  —>  {“a”,  ‘sm n / bst n”}]] I;  (NumberQ[amax] && amax>.5 && NumberQ[ainc]);  
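For orientation, once the listing above has been restored from the scan (several identifiers are corrupted, e.g. Appiy, Tabie and ListPiot for Apply, Table and ListPlot), a short session reproducing one row of Table 4.3 might look as follows; the exact values returned depend on the cut-offs built into BkdH and BkBRE.

(* Hypothetical session, assuming the routines above are typed in cleanly. *)
k = BkdH[3, 50]                   (* best k for s = 3, 50 digits, alpha = 1 *)
n = Ceiling[BnH[k, 3, 50]]        (* the corresponding best n               *)
N[BRE[n, k - 1, 3], 20]           (* Backlund's estimate, expected < 10^-50 *)
N[Zeta[3] - (Zn[n, 3] + n^(1 - 3)/(3 - 1) + n^-3/2 +
     Sum[T[j, n, 3], {j, k}]), 20]   (* actual error of the E-M series      *)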
