ABSTRACT

Title of Dissertation: QUANTUM ALGORITHMS FOR LINEAR AND NONLINEAR DIFFERENTIAL EQUATIONS

Jin-Peng Liu
Doctor of Philosophy, 2022

Dissertation Directed by: Professor Andrew M. Childs, Department of Computer Science

Quantum computers are expected to dramatically outperform classical computers for certain computational problems. Originally developed for simulating quantum physics, quantum algorithms have subsequently been developed to address diverse computational challenges. There has been extensive previous work on linear dynamics and discrete models, including Hamiltonian simulations and systems of linear equations. However, for more complex realistic problems characterized by differential equations, the capability of quantum computing is far from well understood. One fundamental challenge is the substantial difference between the linear dynamics of a system of qubits and real-world systems with continuum and nonlinear behaviors.

My research is concerned with mathematical aspects of quantum computing. In this dissertation, I focus mainly on the design and analysis of quantum algorithms for differential equations. Systems of linear ordinary differential equations (ODEs) and linear elliptic partial differential equations (PDEs) appear ubiquitously in natural and social science, engineering, and medicine. I propose a variety of quantum algorithms based on finite difference methods and spectral methods for producing the quantum encoding of the solutions, with an exponential improvement in the precision over previous quantum algorithms.

Nonlinear differential equations exhibit rich phenomena in many domains but are notoriously difficult to solve. Whereas previous quantum algorithms for general nonlinear equations have been severely limited by the linearity of quantum mechanics, I give the first efficient quantum algorithm for nonlinear differential equations with sufficiently strong dissipation.
I also establish a lower bound, showing that nonlinear differential equations with sufficiently weak dissipation have worst-case complexity exponential in time, giving an almost tight classification of the quantum complexity of simulating nonlinear dynamics. Overall, utilizing advanced linear algebra techniques and nonlinear analysis, I attempt to build a bridge between classical and quantum mechanics, understand and optimize the power of quantum computation, and discover new quantum speedups over classical algorithms with provable guarantees.

QUANTUM ALGORITHMS FOR LINEAR AND NONLINEAR DIFFERENTIAL EQUATIONS

by

Jin-Peng Liu

Dissertation submitted to the Faculty of the Graduate School of the University of Maryland, College Park in partial fulfillment of the requirements for the degree of Doctor of Philosophy 2022

Advisory Committee:
Professor Andrew M. Childs, Chair/Advisor
Professor Alexey V. Gorshkov
Professor Carl A. Miller
Professor Konstantina Trivisa
Professor Xiaodi Wu

© Copyright by Jin-Peng Liu 2022

Acknowledgments

I would like to express my gratitude to my advisor, Andrew M. Childs, for his invaluable guidance, encouragement, and support. Andrew is an incredible researcher as well as an amazing advisor from whom I can constantly learn a lot. Without Andrew, I cannot imagine how I would have gotten through the challenges of my studies. He taught me all I know about becoming an independent researcher, from solving research problems and writing papers to delivering professional presentations. I will never forget the days when we worked on problems, talked about my career path, and had lunch together on occasion. I am always proud of being a member of the Childs group.

I am grateful to Alexey Gorshkov, Carl Miller, Konstantina Trivisa, and Xiaodi Wu for serving as my dissertation committee members. Many thanks to Konstantina for her invaluable advice and warm-hearted assistance throughout my applied mathematics studies and during my post-doctoral applications.
I would like to thank Furong Huang for serving as my preliminary exam committee member and for discussing machine learning topics. The work described in this dissertation is the product of collaborations with many people. I cherish the opportunities to work with Dong An, Di Fang, Herman Kolden, Hari Krovi, Tongyang Li, Noah Linden, Nuno Loureiro, Ashley Montanaro, Aaron Ostrander, Changpeng Shao, Chunhao Wang, Chenyi Zhang, and Ruizhe Zhang. Despite the fact that most of the collaborations were virtual, I benefited greatly from their diverse research backgrounds and areas of expertise. I hope to meet everyone in person in the future.

At the University of Maryland, I had the privilege of learning from many fantastic faculty members, including Gorjan Alagic, John Baras, Alexander Barg, Jacob Bedrossian, James Duncan, Howard Elman, Tom Goldstein, Lise-Marie Imbert-Gérard, Pierre-Emmanuel Jabin, Richard La, Brad Lackey, David Levermore, Yi-Kai Liu, Ricardo Nochetto, and Eitan Tadmor. I appreciate the instructors' willingness to share knowledge and skills with me, as well as to widen my horizons.

At the Joint Center for Quantum Information and Computer Science (QuICS), I had in-depth interactions with many quantum information colleagues: Chen Bai, Aniruddha Bapat, Charles Cao, Shouvanik Chakrabarti, Nai-Hui Chia, Ze-Pei Cian, Abhinav Deshpande, Bill Fefferman, Hong Hao Fu, Andrew Guo, Kaixin Huang, Shih-Han Hung, Jiaqi Leng, Yuxiang Peng, Eddie Schoute, Yuan Su, Minh Tran, Chiao-Hsuan Wang, Daochen Wang, Guoming Wang, Xin Wang, Yidan Wang, Xinyao Wu, Penghui Yao, Qi Zhao, Daiwei Zhu, Guanyu Zhu, and Shaopeng Zhu. We had wonderful days at QuICS. I would also like to thank Andrea Svejda, the QuICS coordinator. Whenever I was looking for assistance, she was always there to offer help. I appreciated the support from the QISE-NET Award, which allowed me to collaborate with Microsoft Quantum researchers Guang Hao Low and Stephen Jordan.
Special thanks to Stephen for his generous assistance during my graduate studies, regardless of whether he worked for the University of Maryland or Microsoft. I enjoyed my internship at the Amazon Web Services (AWS) Center for Quantum Computing in the summer of 2021, and I am grateful to Fernando Brandao, Earl Campbell, Michael Kastoryano, and Nicola Pancotti for their mentorship. Over that summer, I had a lot of fun speaking with AWS scientists Steve Flammia, Sam McArdle, Ash Milstead, and Martin Schuetz, as well as talking with my fellow interns Alexander Delzell, Hsin-Yuan Huang, Noah Shutty, Thomas Bohdanowicz, and Kianna Wan.

I enjoyed many unforgettable visits to Berkeley, Harvard, MIT, Caltech, Chicago, Colorado, and Shenzhen. In the winter of 2019 and the spring of 2020, I appreciated the hospitality of the University of California, Berkeley, the Lawrence Berkeley National Laboratory, and the Simons Institute for the Theory of Computing. I would like to thank Lin Lin and Chao Yang for the numerous discussions about many valuable ideas, as well as the many challenging questions. I greatly appreciated Lin's strong support during my post-doctoral applications. I am also grateful to Umesh Vazirani for organizing the Simons programs and for inviting me to participate. In the winter of 2021, I had the pleasure of visiting the MIT Center for Theoretical Physics and the Plasma Science and Fusion Center in Boston. I was delighted to interact with Soonwon Choi, Isaac Chuang, Edward Farhi, Aram Harrow, and Seth Lloyd. Indeed, my graduate studies relied heavily on Harrow and Lloyd's results. It is an honor for me to collaborate with and compete against MIT folks. In the winter of 2021, I was also thrilled to interview with the Harvard Quantum Initiative, where I had the opportunity to speak with Arthur Jaffe, Mikhail Lukin, and Susanne Yelin. In these years, I also enjoyed attending conferences hosted by Caltech, Chicago, Colorado, and Shenzhen.
I would like to express my gratitude to many individuals for their friendliness and hospitality during my visits. I enjoyed numerous enlightening discussions with many incredible quantum information researchers apart from those mentioned above: Scott Aaronson, Anurag Anshu, Ryan Babbush, Dominic Berry, Adam Bouland, Sergey Bravyi, Paola Cappellaro, Steve Flammia, David Gosset, Stuart Hadfield, Matthew Hastings, Patrick Hayden, Zhengfeng Ji, Robin Kothari, Debbie Liang, Shunlong Luo, Jarrod McClean, Christopher Monroe, Michele Mosca, John Preskill, Yun Shang, Fang Song, Nikitas Stamatopoulos, Nathan Wiebe, Thomas Vidick, Beni Yoshida, Henry Yuen, Bei Zeng, William Zeng, and Shenggeng Zheng. I wish I could be as smart as these amazing people.

I would also like to thank my colleagues and friends in the broader quantum information community: Kaifeng Bu, Chenfeng Cao, Ningping Cao, Lijie Chen, Mo Chen, Yifang Chen, Rui Chao, Andrea Coladangelo, David Ding, Yulong Dong, Xun Gao, András Gilyén, Cupjin Huang, Yichen Huang, Hong-Ye Hu, Jiala Ji, Jiaqing Jiang, Ce Jin, Gushu Li, Jianqiang Li, Yinan Li, Jiahui Liu, Jinguo Liu, Junyu Liu, Mengke Liu, Qipeng Liu, Yunchao Liu, Yupan Liu, Ziwen Liu, Chuhan Lu, Di Luo, Chinmay Nirkhe, Luowen Qian, Yihui Quek, Yixin Shen, Qichen Song, Ewin Tang, Yuanjia Wang, Yadong Wu, Zhujing Xu, Yuxiang Yang, Jiahao Yao, Cong Yu, Zhan Yu, Haimeng Zhang, Jiayu Zhang, Yuxuan Zhang, Zhendong Zhang, Zijian Zhang, Chen Zhao, Chunlu Zhou, Hengyun Zhou, Shangnai Zhou, Sisi Zhou, and Jiamin Zhu. I wish you all the best in your future endeavors.
During my studies at the University of Maryland, I enjoyed illuminating discussions with fellow graduate students at the Applied Mathematics & Statistics, and Scientific Computation (AMSC) Program and the Department of Mathematics: Stephanie Allen, Christopher Dock, Alexis Boleda, Muhammed Elgebali, Luke Evans, Blake Fritz, Siming He, Gareth Johnson, Sophie Kessler, Wenbo Li, Ying Li, Yiran Li, Jiaxing Liang, Yuchen Luo, Jingcheng Lu, Michael Rawson, Tengfei Su, Manyuan Tao, Cem Unsal, Peng Wan, Qiong Wu, Shuo Yang, Anqi Ye, Yiran Zhang, and Yi Zhou. These five years could not have been so enjoyable without you. I would like to thank Jessica Sadler, the AMSC program coordinator, for offering help during my graduate years. I also enjoyed insightful interactions with many fellow graduate students at the Department of Computer Science: Jingling Li, Yanchao Sun, Jiahao Su, and Xuchen You. I would like to thank Jingling in particular for her thoughtfulness and heartwarming encouragement. Thank you for being there, and I will cherish every moment with you. I would especially like to thank Yanwen Zhang for giving me the courage to believe in myself, so that I can complete my Ph.D. and pursue my dream of becoming a successful professor. Yanwen, I will never stop moving forward, as you encouraged. Finally, I would like to thank my parents for their endless love, faith, and support. You are the true heroes.

Table of Contents

Acknowledgements
Table of Contents
List of Tables
List of Figures

Chapter 1: Introduction: quantum scientific computation
  1.1 Notations and terminologies
  1.2 Hamiltonian simulations
  1.3 Quantum linear system algorithms
  1.4 Quantum algorithms for linear differential equations
  1.5 Quantum algorithms for nonlinear differential equations
Chapter 2: High-precision quantum algorithms for linear ordinary differential equations
  2.1 Introduction
  2.2 Spectral method
  2.3 Linear system
  2.4 Solution error
  2.5 Condition number
  2.6 Success probability
  2.7 State preparation
  2.8 Main result
  2.9 Boundary value problems
  2.10 Discussion

Chapter 3: High-precision quantum algorithms for linear elliptic partial differential equations
  3.1 Introduction
  3.2 Linear PDEs
  3.3 Finite difference method
    3.3.1 Linear system
    3.3.2 Condition number
    3.3.3 Error analysis
    3.3.4 FDM algorithm
    3.3.5 Boundary conditions via the method of images
  3.4 Multi-dimensional spectral method
    3.4.1 Quantum shifted Fourier transform and quantum cosine transform
    3.4.2 Linear system
    3.4.3 Condition number
    3.4.4 State preparation
    3.4.5 Main result
  3.5 Discussion and open problems

Chapter 4: Efficient quantum algorithms for dissipative nonlinear differential equations
  4.1 Introduction
  4.2 Quadratic ODEs
  4.3 Quantum Carleman linearization
  4.4 Algorithm analysis
    4.4.1 Solution error
    4.4.2 Condition number
    4.4.3 State preparation
    4.4.4 Measurement success probability
    4.4.5 Proof of Theorem 4.1
  4.5 Lower bound
    4.5.1 Hardness of state discrimination
    4.5.2 State discrimination with nonlinear dynamics
    4.5.3 Proof of Theorem 4.2
  4.6 Applications
  4.7 Discussion

Chapter 5: Conclusion and future work

Bibliography

List of Tables

3.1 Summary of the time complexities of classical and quantum algorithms for $d$-dimensional PDEs with error tolerance $\epsilon$. Portions of the complexity in bold represent the best known dependence on that parameter.
List of Figures

4.1 Integration of the forced viscous Burgers equation using Carleman linearization on a classical computer (source code available at https://github.com/hermankolden/CarlemanBurgers). The viscosity is set so that the Reynolds number $\mathrm{Re} = U_0 L_0/\nu = 20$. The parameters $n_x = 16$ and $n_t = 4000$ are the numbers of spatial and temporal discretization intervals, respectively. The corresponding Carleman convergence parameter is $R = 43.59$. Top: initial condition and solution plotted at a third of the nonlinear time, $T_{\mathrm{nl}}/3 = L_0/(3U_0)$. Bottom: $\ell^2$ norm of the absolute error between the Carleman solutions at various truncation levels $N$ (left), and the convergence of the corresponding time-maximum error (right).

Chapter 1: Introduction: quantum scientific computation

Quantum computing exploits quantum-mechanical phenomena such as superposition and entanglement to perform computation. Quantum computers have the potential to dramatically outperform classical computers at solving diverse computational challenges. Quantum scientific computation is a fast-growing multidisciplinary field that combines classical numerical analysis with advanced quantum technologies to model, analyze, and solve complex problems arising in physics, chemistry, biology, engineering, social sciences, and so on. Originally developed for simulating quantum physics, various quantum algorithms have been proposed to address scientific computational problems by performing linear algebra in Hilbert space. For such problems, quantum algorithms are expected to provide polynomial and even exponential speedups. The basic notations and terminologies of quantum computing are covered in this chapter, followed by quantum algorithms for scientific computation problems, including topics such as Hamiltonian simulations, linear systems, and differential equations.
1.1 Notations and terminologies

To comprehend the remainder of the dissertation, we provide a brief overview of the notations and terminologies of quantum computing. More details are available in standard textbooks, such as Nielsen and Chuang, Quantum Computation and Quantum Information [1], and Watrous, The Theory of Quantum Information [2].

Quantum mechanics can be formulated in terms of linear algebra. Throughout this dissertation, we consider a finite-dimensional complex vector space $\mathbb{C}^N$ equipped with an inner product (the Hilbert space). We usually take $N = 2^n$ for some non-negative integer $n$. We use the Dirac notation $|\psi\rangle$ to represent a quantum state, and $\langle\psi| = |\psi\rangle^{\dagger}$ to represent its Hermitian conjugate. The scalar $\langle\phi|\psi\rangle$ gives the inner product of $|\phi\rangle$ and $|\psi\rangle$. For a quantum state, we always assume $\langle\psi|\psi\rangle = 1$, i.e., $|\psi\rangle$ is a unit complex vector. We also let $\{|j\rangle\}_{j=1}^{N}$ be the standard basis of the space. The $j$-th entry of $|\psi\rangle$ can be written as $\langle j|\psi\rangle$. Given two quantum states $|\psi_1\rangle \in \mathbb{C}^{N_1}$ and $|\psi_2\rangle \in \mathbb{C}^{N_2}$, their tensor product can be written as $|\psi_1\rangle|\psi_2\rangle = |\psi_1\rangle \otimes |\psi_2\rangle \in \mathbb{C}^{N_1 N_2}$, where $\otimes$ denotes the Kronecker product. One quantum bit (one qubit) is a quantum state in $\mathbb{C}^2$, and the tensor product of one-qubit states $|\psi_j\rangle$ forms an $n$-qubit state $|\psi_1\rangle \otimes \cdots \otimes |\psi_n\rangle \in \mathbb{C}^{2^n}$. An $n$-qubit quantum gate $U \in \mathbb{C}^{2^n \times 2^n}$ is a unitary matrix, i.e., $U^{\dagger}U = I$, where $I$ is the identity matrix. It is used to map one $n$-qubit state to another, i.e., $U\colon |\psi\rangle \mapsto |\psi'\rangle$. A sequence of quantum gates as a product forms a (reversible) quantum logic circuit. The universality of two-qubit gates means that every $n$-qubit gate can be written as a composition of a sequence of two-qubit gates. Therefore, we usually count the number of two-qubit gates as the gate complexity of quantum algorithms. For quantum measurement, we consider a quantum observable that corresponds to a Hermitian matrix $M$. It has the spectral decomposition $M = \sum_m \lambda_m P_m$, where $P_m$ is the projection operator onto the eigenspace, i.e.
$P_m^2 = P_m$, associated with the eigenvalue $\lambda_m$. We usually assume $\sum_m P_m = I$. When the quantum state $|\psi\rangle$ is measured by $M$, the outcome is $\lambda_m$ with probability $p_m = \langle\psi|P_m|\psi\rangle$, with $\sum_m p_m = 1$. After the measurement, the post-measurement state is $P_m|\psi\rangle/\sqrt{p_m}$.

We now discuss the notations of norms. For a vector $a = [a_1, a_2, \ldots, a_n] \in \mathbb{R}^n$, we denote the vector $\ell^p$ norm as

    $\|a\|_p := \big(\sum_{k=1}^{n} |a_k|^p\big)^{1/p}$.  (1.1)

For a matrix $A \in \mathbb{R}^{n \times n}$, we denote the operator norm $\|\cdot\|_{p,q}$ induced by the vector $\ell^p$ and $\ell^q$ norms as

    $\|A\|_{p,q} := \sup_{x \neq 0} \frac{\|Ax\|_q}{\|x\|_p}, \qquad \|A\|_p := \|A\|_{p,p}$.  (1.2)

For a continuous scalar function $f(t)\colon [0,T] \to \mathbb{R}$, we denote the $L^\infty$ norm as

    $\|f\|_\infty := \max_{t \in [0,T]} |f(t)|$.  (1.3)

For a continuous scalar function $u(x,t)\colon \Omega \times [0,T] \to \mathbb{R}$, where $\Omega \subseteq \mathbb{R}^d$, for a fixed $t$, the $L^p$ norm of $u(\cdot,t)$ is given by

    $\|u(\cdot,t)\|_{L^p(\Omega)} := \big(\int_\Omega |u(x,t)|^p \, dx\big)^{1/p}$.  (1.4)

In particular, when no subscript is used in the vector, matrix, and function norms, we mean $\|\cdot\| = \|\cdot\|_2$ by default. For real functions $f, g\colon \mathbb{R} \to \mathbb{R}$, we write $f = O(g)$ if there exists $c > 0$ such that $|f(\tau)| \leq c\,|g(\tau)|$ for all $\tau \in \mathbb{R}$. We write $f = \Omega(g)$ if $g = O(f)$, and $f = \Theta(g)$ if both $f = O(g)$ and $g = O(f)$. We use $\widetilde{O}$ to suppress logarithmic factors in asymptotic expressions, i.e., $f = \widetilde{O}(g)$ if $f = O\big(g\,\mathrm{poly}(\log g)\big)$.

1.2 Hamiltonian simulations

Simulating quantum physics is one of the primary applications of quantum computers [3]. The first explicit quantum simulation algorithm was proposed by Lloyd [4] using product formulas, and numerous quantum algorithms for quantum simulations have been developed since then [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27], with various applications ranging from quantum field theory [28, 29] to quantum chemistry [18, 30, 31] and condensed matter physics [32].

We first introduce the quantum oracles. We assume there is a state preparation oracle $O_\psi$ that produces an $N$-dimensional quantum state $|\psi\rangle$, i.e.

    $O_\psi|0\rangle = |\psi\rangle$.
(1.5)

We then assume there is a sparse matrix oracle $O_H$ that computes the locations and values of the nonzero entries of an $N \times N$ sparse Hermitian matrix $H$. In detail, on input $(j, l)$, $O_H$ gives the location of the $l$-th nonzero entry in row $j$, denoted $k$. Then the oracle $O_H$ gives the value $H_{j,k}$, i.e.,

    $O_H(|j\rangle|k\rangle|0\rangle) = |j\rangle|k\rangle|H_{j,k}\rangle$.  (1.6)

The quantum oracle $O_H$ is reversible, and it allows access to different input elements $(j, l)$ in superposition, which is essential for quantum computing. Here we require that the number of nonzero entries of this matrix in every row and column be at most $s$, where $s$ is much smaller than the dimension $N$. We count the number of queries to these oracles as the query complexity. If such oracles can be implemented by one- and two-qubit gates, we usually count the number of these one- and two-qubit gates as the gate complexity of quantum algorithms. We state the Hamiltonian simulation problem as follows.

Problem 1.1. In the Hamiltonian simulation problem, we consider an $N$-dimensional Hamiltonian system

    $i \frac{d}{dt}|\psi(t)\rangle = H(t)|\psi(t)\rangle, \qquad |\psi(0)\rangle = |\psi_{\mathrm{in}}\rangle$.  (1.7)

Given the ability to prepare a quantum state $|\psi(0)\rangle$, a sparse matrix oracle to provide the locations and values of the nonzero entries of a Hamiltonian $H$, and an evolution time $T$, the goal is to produce a quantum state that is $\epsilon$-close to $|\psi(T)\rangle$ in $\ell^2$ norm.

When the Hamiltonian $H$ is time-independent, we have the closed-form solution $|\psi(T)\rangle = e^{-iHT}|\psi(0)\rangle$, for which a computational process attempts to produce an evolution operator approximating $e^{-iHT}$. This is a difficult problem in classical computation, since a classical computer cannot even represent the quantum state efficiently. All classical algorithms require time complexity at least $\Omega(N)$ to explicitly store all the entries of the quantum state. In quantum computation, we intend to design a sequence of quantum gates (a quantum circuit) to process $|\psi(t)\rangle$ from $|\psi(0)\rangle$ to $|\psi(T)\rangle$ while lowering the cost dramatically.
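As a purely classical illustration of the closed-form solution above (a toy two-qubit Hamiltonian of my own choosing, not from the dissertation; `scipy.linalg.expm` stands in for the quantum circuit), one can verify that the evolution operator is unitary and preserves the state norm:

```python
import numpy as np
from scipy.linalg import expm

# Closed-form evolution for a time-independent Hamiltonian:
# |psi(T)> = exp(-iHT)|psi(0)>. H below is a hypothetical toy Hamiltonian.
X = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
H = np.kron(X, X) + 0.5 * np.kron(Z, np.eye(2))
assert np.allclose(H, H.conj().T)                # H is Hermitian

psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0                                    # start in |00>
T = 1.3
U = expm(-1j * H * T)
psiT = U @ psi0

assert np.allclose(U.conj().T @ U, np.eye(4))    # evolution is unitary
assert abs(np.vdot(psiT, psiT) - 1) < 1e-10      # norm is preserved
```

Of course, this classical sketch stores all $N$ amplitudes explicitly, which is precisely the $\Omega(N)$ cost that a quantum circuit avoids.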
In Problem 1.1, we aim to produce the final state $|\psi(T)\rangle$ with query and gate complexity $\mathrm{poly}(\log N)$, an exponential improvement over classical simulations. The first explicit Hamiltonian simulation algorithm, proposed by Lloyd, is based on product formulas [4]. Specifically, if $H$ is a time-independent $k$-local Hamiltonian, i.e., $H = \sum_j H_j$ and each $H_j$ acts on $k = O(1)$ qubits, then the evolution operator $e^{-iHt}$ for a short time $t$ can be well approximated by the Lie-Trotter formula,

    $e^{-iHt} = \prod_j e^{-iH_j t} + O(t^2)$.  (1.8)

Each operator $e^{-iH_j t}$ can be efficiently implemented on a quantum computer. For a long time $t$, we can divide the time interval into $r$ subintervals, in which we simulate each $e^{-iH_j t/r}$, and finally approximate $e^{-iHt}$ by

    $e^{-iHt} = \big(\prod_j e^{-iH_j t/r}\big)^r + O(t^2/r)$.  (1.9)

It takes $r = O(\|H\|^2 t^2/\epsilon)$ to ensure that the error between the exact and approximated normalized states in $\ell^2$ norm is at most $\epsilon$. High-order product formulas are known to give better approximations. Using the $2k$th-order Suzuki formula, Berry, Ahokas, Cleve, and Sanders showed that the number of exponentials required for an approximation with error at most $\epsilon$ is $5^{2k}\|H\|^{1+1/2k} t^{1+1/2k}/\epsilon^{1/2k}$ [33]. A large body of work has substantially developed fast quantum algorithms based on product formulas [33, 34, 35, 36, 37, 38, 39, 40].

Recently, a variety of Hamiltonian simulation algorithms based on post-Trotter methods have emerged. Berry, Childs, Cleve, Kothari, and Somma proposed high-precision Hamiltonian simulations by implementing linear combinations of unitaries (LCU) with complexity $t\,\mathrm{poly}(\log(t/\epsilon))$ [35, 36], an exponential improvement over Trotter-based Hamiltonian simulations with respect to $\epsilon$. Low and Chuang developed Hamiltonian simulations based on quantum signal processing and qubitization [41, 42] with complexity $O\big(t + \frac{\log(1/\epsilon)}{\log\log(1/\epsilon)}\big)$, realizing an optimal tradeoff in terms of $t$ and $\epsilon$.
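The $O(t^2/r)$ error in Eq. (1.9) can be observed numerically. The sketch below (random toy Hamiltonians, my own illustration rather than anything from the dissertation) compares the exact evolution with the first-order Trotter product and checks that doubling $r$ roughly halves the error:

```python
import numpy as np
from scipy.linalg import expm

# First-order Lie-Trotter: approximate exp(-i(H1+H2)t) by
# (exp(-iH1 t/r) exp(-iH2 t/r))^r, with error O(t^2/r) as in Eq. (1.9).
rng = np.random.default_rng(1)

def random_hermitian(n):
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (a + a.conj().T) / 2

H1, H2 = random_hermitian(4), random_hermitian(4)
t = 1.0
exact = expm(-1j * (H1 + H2) * t)

def trotter(r):
    step = expm(-1j * H1 * t / r) @ expm(-1j * H2 * t / r)
    return np.linalg.matrix_power(step, r)

def err(r):
    return np.linalg.norm(trotter(r) - exact, 2)   # spectral-norm error

# Doubling r roughly halves the error, consistent with O(t^2 / r).
assert err(200) < err(100)
assert 1.7 < err(100) / err(200) < 2.3
```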
Note that the Hamiltonian simulation problem can be regarded as a particular initial value problem for a differential equation. A more general problem will be introduced later.

1.3 Quantum linear system algorithms

Quantum computers are expected to outperform classical computers in characterizing the solution of an $N$-dimensional linear system in Hilbert space. Originally proposed by Harrow, Hassidim, and Lloyd [43], advanced quantum linear system algorithms [43, 44, 45, 46, 47, 48, 49, 50, 51] have been well developed to provide a quantum state encoding the solution with complexity $\mathrm{poly}(\log N)$. Such algorithms have been widely applied to address high-dimensional problems governed by linear differential equations [52, 53, 54, 55, 56, 57, 58, 59, 60, 61].

We introduce the oracles for the quantum linear system problem. Given an $N$-dimensional vector $b$, we assume there is a state preparation oracle $O_b$ as defined in (1.5) that produces an $N$-dimensional quantum state $|b\rangle$, where $|b\rangle$ is proportional to the vector $b$. Given an $N \times N$ sparse matrix $A$, we then assume there is a sparse matrix oracle $O_A$ as defined in (1.6). In detail, on input $(j, l)$, $O_A$ gives the location of the $l$-th nonzero entry in row $j$, denoted $k$. Then the oracle $O_A$ gives the value $A_{j,k}$. Here we require that the number of nonzero entries of this matrix in every row and column be at most $s$, where $s$ is much smaller than the dimension $N$. We state the quantum linear system problem as follows.

Problem 1.2. In the quantum linear system problem, we consider an $N$-dimensional linear system

    $Ax = b$.  (1.10)

Given the ability to prepare a quantum state proportional to the vector $b$, and a sparse matrix oracle to provide the locations and values of the nonzero entries of a matrix $A$, the goal is to produce a quantum state that is $\epsilon$-close to the normalized $A^{-1}b$ in $\ell^2$ norm.

It takes at least $\Omega(N)$ for a classical computer (and even a quantum computer) to explicitly write down every entry of an $N$-dimensional vector.
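Before turning to the quantum algorithm, it may help to see the classical linear algebra that a QLSA manipulates. The sketch below (toy matrices of my own choosing, for illustration) shows the target of Problem 1.2, the Hermitian dilation used in Eq. (1.11) when $A$ is not Hermitian, and the eigenbasis inversion that the HHL algorithm (described next) performs coherently:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4

# Target of Problem 1.2: a unit vector proportional to A^{-1} b.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x = np.linalg.solve(A, b)
target = x / np.linalg.norm(x)                 # the l2-normalized solution
assert abs(np.linalg.norm(target) - 1) < 1e-12

# Hermitian dilation, Eq. (1.11): embed a non-Hermitian A in a Hermitian
# block matrix; solving the enlarged system recovers x in the bottom block.
H = np.block([[np.zeros((n, n)), A],
              [A.conj().T, np.zeros((n, n))]])
assert np.allclose(H, H.conj().T)
y = np.linalg.solve(H, np.concatenate([b, np.zeros(n)]))
assert np.allclose(y[:n], 0) and np.allclose(A @ y[n:], b)

# Eigenbasis inversion (what HHL performs coherently): for Hermitian H,
# H^{-1}|b'> = sum_j beta_j lambda_j^{-1} |v_j> with beta_j = <v_j|b'>.
lam, V = np.linalg.eigh(H)
beta = V.conj().T @ np.concatenate([b, np.zeros(n)])
assert np.allclose(V @ (beta / lam), y)
```

The point of a QLSA is that the quantum computer never writes down these $N$ (or $2N$) entries; it only prepares a state whose amplitudes are proportional to them.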
In Problem 1.2, we aim to design a quantum circuit that provides a quantum state encoding the inverse solution $A^{-1}b$ with $\ell^2$ normalization, with query and gate complexity $\mathrm{poly}(\log N)$.

The first quantum linear system algorithm (QLSA), known as the HHL algorithm, was proposed by Harrow, Hassidim, and Lloyd [43]. The algorithm requires that $A$ be Hermitian so that it can be converted into a unitary operator. If $A$ is not Hermitian, the linear system in Problem 1.2 is modified as

    $\begin{pmatrix} 0 & A \\ A^{\dagger} & 0 \end{pmatrix} \begin{pmatrix} 0 \\ x \end{pmatrix} = \begin{pmatrix} b \\ 0 \end{pmatrix}$.  (1.11)

The algorithm requires that a quantum state $|b\rangle$ proportional to the vector $b$ is given. Let $\{\lambda_j, |v_j\rangle\}_{j=1}^{N}$ be the eigenvalues and eigenvectors of $A$, and let $|b\rangle$ be expanded in the eigenbasis of $A$ as $|b\rangle = \sum_j \beta_j |v_j\rangle$. The HHL algorithm queries the unitary operator $e^{iAt}$ with the sparse matrix oracle. Hamiltonian simulation techniques are employed to apply $e^{iAt}$ with a superposition of different times $t$ to $|b\rangle$. The HHL algorithm then estimates the corresponding $\lambda_j$ for each eigencomponent of $|b\rangle$ using quantum phase estimation (QPE) [62] (more details can be found in [1]), yielding the state

    $\sum_j \beta_j |0\rangle |v_j\rangle \mapsto \sum_j \beta_j |\lambda_j\rangle |v_j\rangle$,  (1.12)

where each $\lambda_j$ is stored in the ancilla register. Next, the HHL algorithm performs a controlled rotation to multiply $\lambda_j^{-1}$ onto each eigencomponent,

    $\sum_j \beta_j |\lambda_j\rangle |v_j\rangle \mapsto \sum_j \beta_j \lambda_j^{-1} |\lambda_j\rangle |v_j\rangle$.  (1.13)

Noticing the fact that $A^{-1}|b\rangle = \sum_j \beta_j \lambda_j^{-1} |v_j\rangle$, we have prepared the inverse solution encoded in the quantum state. This algorithm achieves query and gate complexity $\kappa^2\,\mathrm{poly}(\log N)/\epsilon$, where $N$ is the dimension, $\epsilon$ measures the error tolerance between the exact and approximated normalized states in $\ell^2$ norm, and $\kappa$ is the condition number of the matrix $A$.

Subsequent work has been widely developed to improve the performance of this algorithm. Regarding the dependence on $\epsilon$, Childs, Kothari, and Somma exponentially improved the scaling in $\epsilon$ from $\mathrm{poly}(1/\epsilon)$ to $\mathrm{poly}(\log(1/\epsilon))$ by utilizing linear combinations of unitaries (LCU) [46]. The LCU approach approximates the inverse of $A$ by employing a Fourier series or a Chebyshev polynomial, which can be implemented by linear combinations of unitary operations with cost $\mathrm{poly}(\log(1/\epsilon))$. Gilyén, Su, Low, and Wiebe provided an alternative encoding that approximates the inverse of $A$ based on the quantum singular value transformation (QSVT), with the same $\epsilon$ scaling [47].

The dependence on the condition number $\kappa$ has also attracted considerable attention. Although the best-known classical linear system solver based on conjugate gradient descent can produce the explicit $N$ entries of the solution with cost scaling as $\sqrt{\kappa}$, it is known that a quantum algorithm must make at least $\Omega(\kappa)$ queries to encode the solution of Problem 1.2 with complexity $\mathrm{poly}(\log N)$ [43]. Compared to the HHL algorithm, Ambainis adopted variable time amplitude amplification (VTAA) to improve the $\kappa^2$ scaling to linear scaling [44], but this resulted in a worse $1/\epsilon^3$ scaling. Childs, Kothari, and Somma combined VTAA with LCU as introduced above to reach $\kappa\,\mathrm{poly}(\log N, \log(1/\epsilon))$ [46]. Alternatively, quantum adiabatic approaches [45, 48, 49, 51] have recently been investigated to reach the same complexity without using VTAA.

1.4 Quantum algorithms for linear differential equations

Models governed by both ordinary differential equations (ODEs) and partial differential equations (PDEs) arise extensively in natural and social science, medicine, and engineering. Such equations characterize physical and biological systems that exhibit a wide variety of complex phenomena, including turbulence and chaos. By utilizing QLSAs, quantum algorithms offer the prospect of rapidly characterizing the solutions of high-dimensional systems of linear ODEs [52, 53, 54] and PDEs [55, 56, 57, 58, 59, 60, 61].

We introduce the oracles for the quantum linear ODE problem.
Given an $N$-dimensional vector $u(0)$, we assume there is a state preparation oracle $O_0$ as defined in (1.5) that produces an $N$-dimensional quantum state $|u(0)\rangle$, where $|u(0)\rangle$ is proportional to the vector $u(0)$. Given an $N$-dimensional vector $f(t)$ for a specific $t$, we assume there is a state preparation oracle $O_f$ such that

    $O_f(|t\rangle|0\rangle) = |t\rangle|f(t)\rangle$,  (1.14)

which produces an $N$-dimensional quantum state $|f(t)\rangle$ proportional to $f(t)$. Given an $N \times N$ sparse matrix $A(t)$ for a specific $t$, we assume there is a sparse matrix oracle $O_A$ as defined in (1.6). In detail, on input $(t, j, l)$, $O_A$ gives the location of the $l$-th nonzero entry in row $j$, denoted $k$. Then the oracle $O_A$ gives the value $A_{j,k}(t)$, i.e.,

    $O_A(|t\rangle|j\rangle|k\rangle|0\rangle) = |t\rangle|j\rangle|k\rangle|A_{j,k}(t)\rangle$.  (1.15)

Here we require that the number of nonzero entries of this matrix in every row and column be at most $s$ for any time $t$, where $s$ is much smaller than the dimension $N$. We informally state the quantum linear ODE problem as follows.

Problem 1.3. In the quantum linear ODE problem, we are given a system of $d$-dimensional differential equations

    $\frac{du(t)}{dt} = A(t)u(t) + f(t)$.  (1.16)

Given the ability to prepare a quantum state proportional to the initial condition $u(0)$ and the inhomogeneity $f(t)$ for a specific $t$, a sparse matrix oracle to provide the locations and values of the nonzero entries of a matrix $A(t)$ for a specific $t$, and an evolution time $T$, the goal is to produce a quantum state that is $\epsilon$-close to the normalized $u(T)$ in $\ell^2$ norm.

Problem 2.1 gives a formal statement of Problem 1.3. Berry presented the first quantum algorithm for general linear ODEs [52]. This work explicitly considered the time-independent case, assuming that the real parts of the eigenvalues of $A$ are non-positive, whereas in principle, it is natural to extend it to general time-dependent ODEs.
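For intuition about the time-independent case, the solution of Eq. (1.16) with constant, invertible $A$ has the closed form $u(T) = e^{AT}u(0) + A^{-1}(e^{AT} - I)f$, essentially the series that the truncated-Taylor approach of [53] encodes. A sketch with toy values (mine, for illustration only), cross-checked against simple forward-Euler time stepping:

```python
import numpy as np
from scipy.linalg import expm

# Toy stable system du/dt = A u + f (hypothetical values, illustration only).
A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])
f = np.array([1.0, 1.0])
u0 = np.array([2.0, -1.0])
T = 1.0

# Closed form for constant A and f: u(T) = e^{AT} u(0) + A^{-1}(e^{AT} - I) f.
closed = expm(A * T) @ u0 + np.linalg.solve(A, (expm(A * T) - np.eye(2)) @ f)

# Cross-check with forward Euler, which is first-order accurate (error O(h)).
h = 1e-4
u = u0.copy()
for _ in range(int(round(T / h))):
    u = u + h * (A @ u + f)

assert np.linalg.norm(u - closed) < 1e-2
```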
Berry's algorithm discretizes the system of differential equations over small time intervals into a system of linear equations using the Euler method or high-order linear multistep methods [63, 64], to which QLSAs can be applied to produce an approximate encoded solution. For instance, the first-order forward Euler method approximates the time derivative at the point x(t) as

dx(t)/dt = (x(t + h) − x(t))/h + O(h).  (1.17)

A kth-order linear multistep method can reduce the error to O(h^k). This approach achieves complexity poly-logarithmic in the dimension d. However, when solving an equation over the interval [0, T], the number of iterations is T/h = Ω(ε^{−1/k}) for fixed k, giving a total complexity poly(1/ε) even using high-precision QLSAs. Reference [53] improved Berry's result from poly(1/ε) to poly(log(1/ε)) by using a high-precision QLSA based on linear combinations of unitaries [46] to solve a linear system that encodes a truncated Taylor series. However, this approach assumes that A(t) and f(t) are time-independent so that the solution of the ODE can be written as an explicit series, and it is unclear how to generalize the algorithm to time-dependent ODEs. Prior to the work [54], it was an open problem whether there was a quantum algorithm for time-dependent ODEs with complexity poly(log(1/ε)). In Chapter 2, we introduce the quantum spectral method with such an improvement [54]. Our main contribution is to implement a method that uses a global approximation of the solution instead of locally discretizing the ODEs into small time intervals. We do this by developing a quantum version of so-called spectral methods, a technique from classical numerical analysis that represents the components of the solution as linear combinations u(t)_i ≈ Σ_j c_{ij} φ_j(t) of basis functions φ_j(t) expressing the time dependence. Specifically, we implement a Chebyshev pseudospectral method [65, 66] using a high-precision QLSA.
This approach approximates the solution by a truncated Chebyshev series with undetermined coefficients and solves for those coefficients using a linear system that interpolates the differential equations. According to the convergence theory of spectral methods, the solution error decreases exponentially provided the solution is sufficiently smooth [67, 68]. We use the LCU-based QLSA to solve this linear system with high precision [46]. To analyze the algorithm, we upper bound the solution error and condition number of the linear system and lower bound the success probability of the final measurement. Overall, we show that the total complexity of this approach is poly(log(1/ε)) for general time-dependent ODEs. We give a formal statement of the main result in Theorem 2.1.

It is natural to extend from ordinary differential equations to partial differential equations (PDEs) by involving multivariate derivatives. Prominent examples include Maxwell's equations for electromagnetism, Boltzmann's equation and the Fokker-Planck equation in thermodynamics, and Schrödinger's equation in continuum quantum mechanics. For solving PDEs on a digital computer, it is common to consider a system of linear equations that approximates the PDE on a spatial grid, and to produce the solution on those grid points within ℓ² discretization error ε. We introduce the oracles for the quantum linear PDE problem. Given an N-dimensional vector f(x) for a specific x, we assume there is a state preparation oracle O_b as defined in (1.14) that produces an N-dimensional quantum state |f(x)⟩, where |f(x)⟩ is proportional to the vector f(x). Given N × N sparse matrices A_{j1j2}(x), A_j(x), and A_0(x) for a specific x, we then assume there are sparse matrix oracles as defined in (1.15). For instance, A_{j1j2}(x) is modeled by a sparse matrix oracle that, on input (m, l), gives the location of the l-th nonzero entry in row m, denoted n, and gives the value A_{j1j2}(x)_{m,n}.
We informally state the quantum linear PDE problem as follows.

Problem 1.4. In the quantum linear PDE problem, we are given a system of second-order d-dimensional equations

Σ_{j1,j2=1}^{d} A_{j1j2}(x) ∂²u(x)/∂x_{j1}∂x_{j2} + Σ_{j=1}^{d} A_j(x) ∂u(x)/∂x_j + A_0(x)u(x) = f(x).  (1.18)

Given the ability to prepare a quantum state proportional to the inhomogeneity f(x), and sparse matrix oracles to provide the locations and values of nonzero entries of the matrices A_{j1j2}(x), A_j(x), and A_0(x) on a set of interpolation nodes x, the goal is to produce a quantum state |u(x)⟩ that is ε-close to the normalized u(x) on a set of interpolation nodes x in ℓ² norm. Problem 3.1 gives a formal statement of Problem 1.4, for which we consider additional technical assumptions introduced in Chapter 3.

The discretized solution u(x) on a set of interpolation nodes x is a multi-dimensional vector function. If each spatial coordinate has n discrete values, then n^d points are needed to discretize a d-dimensional problem. Simply producing the solution on these grid points takes time Ω(n^d). Compared to classical algorithms, quantum algorithms can produce a quantum state proportional to the solution on the grid, which requires only poly(d, log n) space. There is a large variety of quantum algorithms using different numerical schemes and the QLSA to reach complexity poly(d, 1/ε) [55, 56, 57, 58, 59, 60, 61], because of the additional approximation errors in the numerical schemes. It was unknown how to achieve poly(d, log(1/ε)) prior to the work [59]. In Chapter 3, we introduce two classes of quantum algorithms for linear elliptic PDEs with such an improvement [59]. Our first algorithm is based on a quantum version of the FDM approach: we use a finite-difference approximation to produce a system of linear equations and then solve that system using the QLSA. We analyze our FDM algorithm as applied to Poisson's equation under periodic, Dirichlet, and Neumann boundary conditions.
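As a purely classical point of reference, the FDM pipeline just described can be sketched in a few lines for the one-dimensional Poisson equation. The grid size, test function, and fixed second-order stencil below are illustrative choices, not the adaptive scheme analyzed in Chapter 3:

```python
import numpy as np

# Second-order centered FDM for -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0.
# With f(x) = pi^2 sin(pi x), the exact solution is u(x) = sin(pi x).
n = 100                                   # number of interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)

# Tridiagonal discrete Laplacian: this is the sparse linear system that the
# quantum algorithm would hand to a QLSA instead of solving classically.
L = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
f = np.pi**2 * np.sin(np.pi * x)

u = np.linalg.solve(L, f)                    # classical stand-in for the QLSA step
err = np.max(np.abs(u - np.sin(np.pi * x)))  # O(h^2) discretization error
```

Classically this solve costs poly(n) per spatial dimension, hence roughly n^d work in d dimensions; the quantum algorithms of Chapter 3 instead prepare a state proportional to the solution on the grid.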
Whereas previous FDM approaches [69, 70] considered fixed orders of truncation, we adapt the order of truncation depending on ε, inspired by the classical adaptive FDM [71]. As the order increases, the eigenvalues of the FDM matrix approach the eigenvalues of the continuous Laplacian, allowing for more precise approximations. The main algorithm we present uses the quantum Fourier transform (QFT) and takes advantage of the high-precision LCU-based QLSA [46]. This quantum adaptive FDM approach produces a quantum state that approximates the solution of Poisson's equation with complexity d^{6.5} poly(log d, log(1/ε)).

We also propose a quantum algorithm for more general second-order elliptic PDEs under periodic or non-periodic Dirichlet boundary conditions. This algorithm is based on the quantum spectral method [54] that globally approximates the solution of a PDE by a truncated Fourier or Chebyshev series with undetermined coefficients, and then finds the coefficients by solving a linear system. This system is exponentially large in d, so solving it is infeasible for classical algorithms but feasible in a quantum context. To be able to apply the QLSA efficiently, we show how to make the system sparse using variants of the quantum Fourier transform. Our bound on the condition number of the linear system uses global strict diagonal dominance, and introduces a factor in the complexity that measures the extent to which this condition holds. We give a complexity of d² poly(log(1/ε)) for producing a quantum state approximating the solution of general second-order elliptic PDEs with Dirichlet boundary conditions. Both of these approaches have complexity poly(d, log(1/ε)), providing optimal dependence on ε and an exponential improvement over classical methods with respect to d. We state our main results in Theorem 3.1 and Theorem 3.2.

1.5 Quantum algorithms for nonlinear differential equations

We now turn attention to the nonlinear generalization of Problem 1.3.
We focus here on differential equations with nonlinearities that can be expressed as quadratic polynomials. Note that polynomials of degree higher than two, and even more general nonlinearities, can be reduced to the quadratic case by introducing additional variables [72, 73]. The quadratic case also directly includes many archetypal models, such as the logistic equation in biology, the Lorenz system in atmospheric dynamics, and the Navier-Stokes equations in fluid dynamics.

We introduce the oracles for the quantum quadratic ODE problem. Given an N-dimensional vector u(0), we assume there is a state preparation oracle O₀ as defined in (1.5) that produces an N-dimensional quantum state |u(0)⟩, where |u(0)⟩ is proportional to the vector u(0). Given the sparse matrices F₂, F₁, and F₀(t) for a specific t, we then assume there are sparse matrix oracles as defined in (1.6) and (1.15). For instance, F₂ is modeled by a sparse matrix oracle O_{F₂} that, on input (j, l), gives the location of the l-th nonzero entry in row j, denoted k, and gives the value (F₂)_{j,k}. We informally state the quantum quadratic ODE problem as follows.

Problem 1.5. In the quantum quadratic ODE problem, we are given a system of N-dimensional differential equations

du(t)/dt = F₂ u(t)^{⊗2} + F₁ u(t) + F₀(t).  (1.19)

Given the ability to prepare a quantum state proportional to the initial condition u(0), sparse matrix oracles to provide the locations and values of nonzero entries of the matrices F₂, F₁, and F₀(t) for any specified t, and an evolution time T, the goal is to produce a quantum state that is ε-close to the normalized u(T) in ℓ² norm. Problem 4.1 gives a formal statement of Problem 1.5.

Early work on quantum algorithms for differential equations by Leyton and Osborne already considered the nonlinear case [74]. It gave a quantum algorithm that simulates the nonlinear ODE by storing and maintaining multiple copies of the solution. In each iteration from t →
t + Δt, it consumes multiple copies of |x(t)⟩ to represent the nonlinearity F(x(t)) and obtain one copy of |x(t+Δt)⟩. The complexity of this approach is polynomial in the logarithm of the dimension but exponential in the evolution time, scaling as O(1/ε^T), due to the exponentially increasing resources used to maintain sufficiently many copies of the solution throughout the evolution.

Recently, heuristic quantum algorithms for nonlinear ODEs have been studied. Reference [75] explores a linearization technique known as the Koopman-von Neumann method that might be amenable to the quantum linear system algorithm. In [76], the authors provide a high-level description of how linearization can help solve nonlinear equations on a quantum computer. However, neither paper makes precise statements about concrete implementations or running times of quantum algorithms. The recent preprint [77] also describes a quantum algorithm to solve a nonlinear ODE by linearizing it using a different approach from the one taken here. However, that work also does not make precise statements about concrete implementations or rigorous time complexities, and its authors do not describe how barriers such as those of [78] could be avoided in their approach.

While quantum mechanics is described by linear dynamics, possible nonlinear modifications of the theory have been widely studied. Generically, such modifications enable quickly solving hard computational problems (e.g., solving unstructured search among n items in time poly(log n)), making nonlinear dynamics exponentially difficult to simulate in general [78, 79, 80]. Therefore, constructing efficient quantum algorithms for general classes of nonlinear dynamics has been considered largely out of reach. Prior to the work [81], it was a long-standing open problem whether quantum computing can efficiently characterize nonlinear differential equations.
In Chapter 4, we design and analyze a quantum algorithm that overcomes this limitation using Carleman linearization [73, 82, 83]. This approach embeds polynomial nonlinearities into an infinite-dimensional system of linear ODEs, and then truncates that system to obtain a finite-dimensional linear approximation. We discretize the finite ODE system in time using the forward Euler method and solve the resulting linear equations with the quantum linear system algorithm [46, 84]. We control the approximation error of this approach by combining a novel convergence theorem with a bound on the global error of the Euler method. Furthermore, we upper bound the condition number of the linear system and lower bound the success probability of the final measurement. Subject to the condition R < 1, where the quantity R characterizes the relative strength of the nonlinear and dissipative linear terms, we show that the total complexity of this quantum Carleman linearization algorithm is sT²q poly(log T, log n, log(1/ε))/ε, where s is the sparsity, T is the evolution time, q quantifies the decay of the final solution relative to the initial condition, n is the dimension, and ε is the allowed error (see Theorem 4.1). In the regime R < 1, this is an exponential improvement over [74], which has complexity exponential in T. We state our main algorithmic result in Theorem 4.1.

We also provide a quantum lower bound for the worst-case complexity of simulating strongly nonlinear dynamics, demonstrating that the algorithm's condition R < 1 cannot be significantly improved in general. Following the approach of [78, 79], we construct a protocol for distinguishing two states of a qubit driven by a certain quadratic ODE. Provided R ≥ √2, this procedure distinguishes states with overlap 1 − ε in time poly(log(1/ε)). Since nonorthogonal quantum states are hard to distinguish, this implies a lower bound on the complexity of the quantum ODE problem. We state our main lower bound result in Theorem 4.2.
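To make the linearization step concrete, here is a purely classical toy sketch (a hypothetical scalar instance, not the algorithm of Chapter 4, and with no quantum steps): for du/dt = b·u + a·u², the Carleman variables y_k = u^k obey the linear system dy_k/dt = k·b·y_k + k·a·y_{k+1}, which we truncate at order N and integrate with forward Euler, mirroring the classical skeleton of the algorithm.

```python
import numpy as np

# Carleman linearization of the scalar quadratic ODE du/dt = b*u + a*u^2.
# The variables y_k = u^k satisfy the *linear* system dy_k/dt = k*b*y_k + k*a*y_{k+1};
# truncating at order N gives a finite upper-bidiagonal linear ODE system.
# Parameters chosen so the linear dissipation dominates the nonlinearity.
a, b, u0, T = 0.2, -1.0, 0.5, 1.0
N = 6                                        # Carleman truncation order

A = np.zeros((N, N))
for k in range(1, N + 1):
    A[k - 1, k - 1] = k * b                  # k*b*y_k term
    if k < N:
        A[k - 1, k] = k * a                  # k*a*y_{k+1} term (dropped when k = N)

y = np.array([u0 ** k for k in range(1, N + 1)])
steps = 10_000
h = T / steps
for _ in range(steps):                       # forward Euler on the linear system
    y = y + h * (A @ y)
u_carleman = y[0]

# Closed-form solution of this Bernoulli equation, for comparison.
u_exact = 1.0 / ((1.0 / u0 + a / b) * np.exp(-b * T) - a / b)
err = abs(u_carleman - u_exact)
```

In the quantum algorithm the truncated linear system is of course not time-stepped classically; after Euler discretization it is encoded into one large linear system handed to a QLSA.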
Our quantum algorithm could potentially be applied to study models governed by quadratic ODEs arising in biology and epidemiology as well as in fluid and plasma dynamics. In particular, the celebrated Navier-Stokes equation with linear damping, which describes many physical phenomena, can be treated by our approach provided the Reynolds number is sufficiently small. We also note that while the formal validity of our arguments assumes R < 1, we find in one numerical experiment that our proposed approach remains valid for larger R.

The remainder of this dissertation is outlined as follows. Chapter 2 covers high-precision quantum algorithms for linear ordinary differential equations. Chapter 3 presents high-precision quantum algorithms for linear elliptic partial differential equations. Chapter 4 introduces an efficient quantum algorithm for dissipative nonlinear differential equations. Finally, we conclude the results of the dissertation and discuss future work in Chapter 5.

Chapter 2: High-precision quantum algorithms for linear ordinary differential equations

2.1 Introduction

In this chapter, we focus on systems of first-order linear ordinary differential equations (ODEs).1 As introduced in Problem 1.3, such equations can be written in the form

dx(t)/dt = A(t)x(t) + f(t)  (2.1)

where t ∈ [0, T] for some T > 0, the solution x(t) ∈ C^d is a d-dimensional vector, and the system is determined by a time-dependent matrix A(t) ∈ C^{d×d} and a time-dependent inhomogeneity f(t) ∈ C^d. Provided A(t) and f(t) are continuous functions of t, the initial value problem (i.e., the problem of determining x(t) for a given initial condition x(0)) has a unique solution [85]. Recent work has developed quantum algorithms with the potential to extract information about solutions of systems of differential equations even faster than is possible classically.
This body of work grew from the quantum linear systems algorithm (QLSA) [84], which produces a quantum state proportional to the solution of a sparse system of d linear equations in time poly(log d). We have introduced the quantum linear system algorithm in Chapter 1.

1 This chapter is based on the paper [54].

Berry presented the first efficient quantum algorithm for general linear ODEs [52]. His algorithm represents the system of differential equations as a system of linear equations using a linear multistep method and solves that system using the QLSA. This approach achieves complexity logarithmic in the dimension d and, by using a high-order integrator, close to quadratic in the evolution time T. While this method could in principle be applied to handle time-dependent equations, the analysis of [52] only explicitly considers the time-independent case for simplicity. Since it uses a finite difference approximation, the complexity of Berry's algorithm as a function of the solution error ε is poly(1/ε) [52]. Reference [53] improved this to poly(log(1/ε)) by using a high-precision QLSA based on linear combinations of unitaries [46] to solve a linear system that encodes a truncated Taylor series. However, this approach assumes that A(t) and f(t) are time-independent so that the solution of the ODE can be written as an explicit series, and it is unclear how to generalize the algorithm to time-dependent ODEs.

Most of the aforementioned algorithms use a local approximation: they discretize the differential equations into small time intervals to obtain a system of linear equations or linear differential equations that can be solved by the QLSA or Hamiltonian simulation. For example, the central difference scheme approximates the time derivative at the point x(t) as

dx(t)/dt = (x(t + h) − x(t − h))/(2h) + O(h²).  (2.2)

High-order finite difference or finite element methods can reduce the error to O(h^k), where k ≥ 1 is the order of the approximation.
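The order claims for these stencils are easy to check numerically. The sketch below compares the first-order forward difference (1.17) with the second-order central difference (2.2); the step size and the test function sin(t) at t = 1 are arbitrary illustrative choices:

```python
import math

# Compare the first-order forward difference (1.17) with the second-order
# central difference (2.2) for d/dt sin(t) at an arbitrary test point t = 1.
f, df = math.sin, math.cos
t, h = 1.0, 1e-3

fwd = (f(t + h) - f(t)) / h            # error ~ (h/2)|f''(t)|,   about 4e-4 here
ctr = (f(t + h) - f(t - h)) / (2 * h)  # error ~ (h^2/6)|f'''(t)|, about 9e-8 here

err_fwd = abs(fwd - df(t))
err_ctr = abs(ctr - df(t))
```

Halving h roughly halves the forward-difference error but quarters the central-difference error, which is exactly the O(h) versus O(h²) behavior quoted above.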
However, when solving an equation over the interval [0, T], the number of iterations is T/h = Ω(ε^{−1/k}) for fixed k, giving a total complexity that is poly(1/ε) even using high-precision methods for the QLSA or Hamiltonian simulation. For ODEs with special structure, some prior results already show how to avoid a local approximation and thereby achieve complexity poly(log(1/ε)). When A(t) is anti-Hermitian and f(t) = 0, we can directly apply Hamiltonian simulation [37]; if A and f are time-independent, then [53] uses a Taylor series to achieve complexity poly(log(1/ε)). However, the case of general time-dependent linear ODEs had remained elusive.

In this chapter, we use a nonlocal representation of the solution of a system of differential equations to give a new quantum algorithm with complexity poly(log(1/ε)) even for time-dependent equations. While this is an exponential improvement in the dependence on ε over previous work, it does not necessarily give an exponential runtime improvement in the context of an algorithm with classical output. In general, statistical error will introduce an overhead of poly(1/ε) when attempting to measure an observable with precision ε. However, achieving complexity poly(log(1/ε)) can result in a polynomial improvement in the overall running time. In particular, if an algorithm is used as a subroutine k times, we should ensure error O(1/k) for each subroutine to give an overall algorithm with bounded error. A subroutine with complexity poly(log(1/ε)) can potentially give significant polynomial savings in such a case.

Time-dependent linear differential equations describe a wide variety of systems in science and engineering. Examples include the wave equation and the Stokes equation (i.e., creeping flow) in fluid dynamics [86], the heat equation and the Boltzmann equation in thermodynamics [87, 88], the Poisson equation and Maxwell's equations in electromagnetism [89, 90], and of course Schrödinger's equation in quantum mechanics.
Moreover, some nonlinear differential equations can be studied by linearizing them to produce time-dependent linear equations (e.g., the linearized advection equation in fluid dynamics [91]). We focus our discussion on first-order linear ODEs. Higher-order ODEs can be transformed into first-order ODEs by standard methods. Also, by discretizing space, PDEs with both time and space dependence can be regarded as sparse linear systems of time-dependent ODEs. Thus we focus on an equation of the form (2.1) with initial condition

x(0) = γ  (2.3)

for some specified γ ∈ C^d. We assume that A(t) is s-sparse (i.e., has at most s nonzero entries in any row or column) for any t ∈ [0, T]. Furthermore, we assume that A(t), f(t), and γ are provided by black-box subroutines (which serve as abstractions of efficient computations). In particular, following essentially the same model as in [53] (see also Section 1.1 of [46]), suppose we have an oracle O_{A(t)} that, for any t ∈ [0, T] and any given row or column specified as input, computes the locations and values of the nonzero entries of A(t) in that row or column. We also assume oracles O_x and O_{f(t)} that, for any t ∈ [0, T], prepare normalized states |γ⟩ and |f(t)⟩ proportional to γ and f(t), and that also compute ∥γ∥ and ∥f(t)∥, respectively. Given such a description of the instance, the goal is to produce a quantum state ε-close to |x(T)⟩ (a normalized quantum state proportional to x(T)).

As mentioned above, our main contribution is to implement a method that uses a global approximation of the solution. We do this by developing a quantum version of so-called spectral methods, a technique from classical numerical analysis that (approximately) represents the components of the solution as linear combinations x(t)_i ≈ Σ_j c_{ij} φ_j(t) of basis functions φ_j(t) expressing the time dependence. Specifically, we implement a Chebyshev pseudospectral method [65, 66] using the QLSA.
This approach approximates the solution by a truncated Chebyshev series with undetermined coefficients and solves for those coefficients using a linear system that interpolates the differential equations. According to the convergence theory of spectral methods, the solution error decreases exponentially provided the solution is sufficiently smooth [67, 68]. We use the LCU-based QLSA to solve this linear system with high precision [46]. To analyze the algorithm, we upper bound the solution error and condition number of the linear system and lower bound the success probability of the final measurement. Overall, we show that the total complexity of this approach is poly(log(1/ε)) for general time-dependent ODEs. Informally, we show the following:

Theorem 2.1 (Informal). Consider a linear ODE (2.1) with given initial conditions. Assume A(t) is s-sparse and diagonalizable, and Re(λ_i(t)) ≤ 0 for all eigenvalues λ_i(t) of A(t). Then there exists a quantum algorithm that produces a state ε-close in ℓ² norm to the exact solution, succeeding with probability Ω(1), with query and gate complexity O(s∥A∥T poly(log(s∥A∥T/ε))).

In addition to initial value problems (IVPs), our approach can also address boundary value problems (BVPs). Given an oracle for preparing a state α|x(0)⟩ + β|x(T)⟩ expressing a general boundary condition, the goal of the quantum BVP is to produce a quantum state ε-close to |x(t)⟩ (a normalized state proportional to x(t)) for any desired t ∈ [0, T]. We also give a quantum algorithm for this problem with complexity poly(log(1/ε)), as follows:

Theorem 2.2 (Informal). Consider a linear ODE (2.1) with given boundary conditions. Assume A(t) is s-sparse and diagonalizable, and Re(λ_i(t)) ≤ 0 for all eigenvalues λ_i(t) of A(t). Then there exists a quantum algorithm that produces a state ε-close in ℓ² norm to the exact solution, succeeding with probability Ω(1), with query and gate complexity O(s∥A∥⁴T⁴ poly(log(s∥A∥T/ε))).
We give formal statements of Theorem 2.1 and Theorem 2.2 in Section 2.8 and Section 2.9, respectively. Note that the dependence of the complexity on ∥A∥ and T is worse for BVPs than for IVPs. This is because a rescaling approach that we apply for IVPs (introduced in Section 2.3) cannot be extended to BVPs.

The remainder of this chapter is organized as follows. Section 2.2 introduces the spectral method and Section 2.3 shows how to encode it into a quantum linear system. Then Section 2.4 analyzes the exponential decrease of the solution error, Section 2.5 bounds the condition number of the linear system, Section 2.6 lower bounds the success probability of the final measurement, and Section 2.7 describes how to prepare the initial quantum state. We combine these bounds in Section 2.8 to establish the main result. We then extend the analysis for initial value problems to boundary value problems in Section 2.9. Finally, we conclude in Section 2.10 with a discussion of the results and some open problems.

2.2 Spectral method

Spectral methods provide a way of solving differential equations using global approximations [67, 68]. The main idea of the approach is as follows. First, express an approximate solution as a linear combination of certain basis functions with undetermined coefficients. Second, construct a system of linear equations that such an approximate solution should satisfy. Finally, solve the linear system to determine the coefficients of the linear combination. Spectral methods offer a flexible approach that can be adapted to different settings by careful choice of the basis functions and the linear system. A Fourier series provides an appropriate basis for periodic problems, whereas Chebyshev polynomials can be applied more generally.
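The advantage of a Chebyshev basis for smooth, non-periodic problems can be seen in a small classical experiment. The nodes below are the Chebyshev-Gauss-Lobatto nodes used later in this chapter; the test function exp(t) is an arbitrary smooth choice:

```python
import numpy as np

# Interpolate f(t) = exp(t) at the Chebyshev-Gauss-Lobatto nodes
# t_l = cos(l*pi/n), l = 0..n, and measure the max error on a fine grid.
# For smooth f this error decays exponentially in n.
def cgl_interp_error(n, f=np.exp):
    nodes = np.cos(np.pi * np.arange(n + 1) / n)
    coeffs = np.polyfit(nodes, f(nodes), n)    # degree-n interpolating polynomial
    t = np.linspace(-1.0, 1.0, 1000)
    return np.max(np.abs(np.polyval(coeffs, t) - f(t)))

errs = [cgl_interp_error(n) for n in (4, 8, 12)]
```

Each increase of n by 4 shrinks the error by several orders of magnitude, the behavior formalized for the pseudospectral solution itself in Lemma 2.2.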
The linear system can be specified using Gaussian quadrature (giving a spectral element method or tau method), or one can simply interpolate the differential equations at quadrature nodes (giving a pseudospectral method) [68]. Since general linear ODEs are non-periodic, and interpolation facilitates constructing a straightforward linear system, we develop a quantum algorithm based on the Chebyshev pseudospectral method [65, 66]. In this approach, we consider a truncated Chebyshev approximation x(t) of the exact solution x̂(t), namely

x_i(t) = Σ_{k=0}^{n} c_{i,k} T_k(t),  i ∈ [d]₀ := {0, 1, . . . , d − 1}  (2.4)

for any n ∈ Z⁺. Here T_k(t) = cos(k arccos t) is the Chebyshev polynomial of the first kind. (See [54, Appendix A] for its properties.) The coefficients c_{i,k} ∈ C for all i ∈ [d]₀ and k ∈ [n+1]₀ are determined by demanding that x(t) satisfies the ODE and initial conditions at a set of interpolation nodes {t_l}_{l=0}^{n} (with 1 = t₀ > t₁ > · · · > t_n = −1), where x(t₀) and x(t_n) are the initial and final states, respectively. In other words, we require

dx(t_l)/dt = A(t_l)x(t_l) + f(t_l),  ∀ l ∈ [n+1]₀, t_l ∈ [−1, 1],  (2.5)

and

x_i(t₀) = γ_i,  i ∈ [d]₀.  (2.6)

We choose the domain [−1, 1] in (2.5) because this is the natural domain for Chebyshev polynomials. Correspondingly, in the following section, we rescale the domain of initial value problems to be [−1, 1]. We would like to be able to increase the accuracy of the approximation by increasing n, so that

∥x̂(t) − x(t)∥ → 0 as n → ∞.  (2.7)

There are many possible choices for the interpolation nodes. Here we use the Chebyshev-Gauss-Lobatto quadrature nodes, t_l = cos(lπ/n) for l ∈ [n+1]₀, since these nodes achieve the highest convergence rate among all schemes with the same number of nodes [63, 64]. These nodes also have the convenient property that T_k(t_l) = cos(klπ/n), making it easy to compute the values x_i(t_l). To evaluate the condition (2.5), it is convenient to define coefficients c′_{i,k} for i ∈ [d]₀ and k ∈
[n+1]₀ such that

dx_i(t)/dt = Σ_{k=0}^{n} c′_{i,k} T_k(t).  (2.8)

We can use the differential property of Chebyshev polynomials,

2T_k(t) = T′_{k+1}(t)/(k+1) − T′_{k−1}(t)/(k−1),  (2.9)

to determine the transformation between c_{i,k} and c′_{i,k}. Based on the property of derivatives of Chebyshev polynomials (as detailed in [54, Appendix A]), we have

c′_{i,k} = Σ_{j=0}^{n} [D_n]_{kj} c_{i,j},  i ∈ [d]₀, k ∈ [n+1]₀,  (2.10)

where D_n is the (n+1) × (n+1) upper triangular matrix with nonzero entries

[D_n]_{kj} = 2j/σ_k,  k + j odd, j > k,  (2.11)

where

σ_k := 2 if k = 0, and σ_k := 1 if k ∈ [n] := {1, 2, . . . , n}.  (2.12)

Using this expression in (2.5), (2.10), and (2.11), we obtain the following linear equations:

Σ_{k=0}^{n} T′_k(t_l) c_{i,k} = Σ_{j=0}^{d−1} A_{ij}(t_l) Σ_{k=0}^{n} T_k(t_l) c_{j,k} + f(t_l)_i,  i ∈ [d]₀, l ∈ [n+1]₀.  (2.13)

We also demand that the Chebyshev series satisfies the initial condition x_i(1) = γ_i for all i ∈ [d]₀. This system of linear equations gives a global approximation of the underlying system of differential equations. Instead of locally approximating the ODE at discretized times, these linear equations use the behavior of the differential equations at the n + 1 times {t_l}_{l=0}^{n} to capture their behavior over the entire interval [−1, 1]. Our algorithm solves this linear system using the high-precision QLSA [46]. Given an encoding of the Chebyshev coefficients c_{i,k}, we can obtain the approximate solution x(t) as a suitable linear combination of the c_{i,k}, a computation that can also be captured within a linear system. The resulting approximate solution x(t) is close to the exact solution x̂(t):

Lemma 2.1 (Lemma 19 of [67]). Let x̂(t) ∈ C^{r+1}(−1, 1) be the solution of the differential equations (2.1) and let x(t) satisfy (2.5) and (2.6) for {t_l = cos(lπ/n)}_{l=0}^{n}. Then there is a constant C, independent of n, such that

max_{t∈[−1,1]} ∥x̂(t) − x(t)∥ ≤ C max_{t∈[−1,1]} ∥x̂^{(n+1)}(t)∥ / n^{r−2}.  (2.14)

This shows that the convergence behavior of the spectral method is related to the smoothness of the solution.
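The differentiation matrix defined in (2.11)-(2.12) can be verified directly. The small check below (with illustrative n and test polynomial) confirms that D_n maps the Chebyshev coefficients of T_3 to those of its derivative T_3'(t) = 12t² − 3:

```python
import numpy as np

# Build D_n from (2.11)-(2.12): [D_n]_{kj} = 2j/sigma_k for k + j odd and j > k,
# with sigma_0 = 2 and sigma_k = 1 for k >= 1.
n = 8
D = np.zeros((n + 1, n + 1))
for k in range(n + 1):
    sigma = 2.0 if k == 0 else 1.0
    for j in range(k + 1, n + 1):
        if (k + j) % 2 == 1:
            D[k, j] = 2.0 * j / sigma

# Apply D_n to the coefficient vector of T_3 (c_3 = 1, all others 0).
c = np.zeros(n + 1)
c[3] = 1.0
cp = D @ c        # expect the coefficients of 3*T_0 + 6*T_2

# Evaluate the resulting series via T_k(t) = cos(k arccos t) and compare
# against the exact derivative 12 t^2 - 3.
t = np.linspace(-1.0, 1.0, 50)
series = sum(cp[k] * np.cos(k * np.arccos(t)) for k in range(n + 1))
err = np.max(np.abs(series - (12 * t**2 - 3)))
```

Indeed 3T₀(t) + 6T₂(t) = 3 + 6(2t² − 1) = 12t² − 3, so the residual is at machine precision.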
For a solution in C^{r+1}, the spectral method approximates the solution with n = poly(1/ε). Furthermore, if the solution is smoother, we have an even tighter bound:

Lemma 2.2 (Eq. (1.8.28) of [68]). Let x̂(t) ∈ C^∞(−1, 1) be the solution of the differential equations (2.1) and let x(t) satisfy (2.5) and (2.6) for {t_l = cos(lπ/n)}_{l=0}^{n}. Then

max_{t∈[−1,1]} ∥x̂(t) − x(t)∥ ≤ (2/π) (e/(2n))^n max_{t∈[−1,1]} ∥x̂^{(n+1)}(t)∥.  (2.15)

For simplicity, we replace the value 2/π by the upper bound 1 in the following analysis. This result implies that if the solution is in C^∞, the spectral method approximates the solution to within ε using only n = poly(log(1/ε)) terms in the Chebyshev series. Consequently, this approach gives a quantum algorithm with complexity poly(log(1/ε)).

2.3 Linear system

In this section we construct a linear system that encodes the solution of a system of differential equations via the Chebyshev pseudospectral method introduced in Section 2.2. We consider a system of linear, first-order, time-dependent ordinary differential equations, and focus on the following initial value problem. This is a formal statement of Problem 1.3.

Problem 2.1. In the quantum ODE problem, we are given a system of equations

dx(t)/dt = A(t)x(t) + f(t)  (2.16)

where x(t) ∈ C^d, A(t) ∈ C^{d×d} is s-sparse, and f(t) ∈ C^d for all t ∈ [0, T]. We assume that A_{ij}, f_i ∈ C^∞(0, T) for all i, j ∈ [d]. We are also given an initial condition x(0) = γ ∈ C^d. Given oracles that compute the locations and values of nonzero entries of A(t) for any t,2 and that prepare normalized states |γ⟩ proportional to γ and |f(t)⟩ proportional to f(t) for any t ∈ [0, T], the goal is to output a quantum state |x(T)⟩ that is ε-close to the normalized x(T) in ℓ² norm.

Without loss of generality, we rescale the interval [0, T] onto [−1, 1] by the linear map t ↦ 1 − 2t/T. Under this rescaling, d/dt is replaced by −(T/2) d/dt, so A ↦ −(T/2)A, which can increase the spectral norm.
To reduce the dependence on T, specifically, to give an algorithm with complexity close to linear in T, we divide the interval [0, T] into subintervals [0, Γ₁], [Γ₁, Γ₂], . . . , [Γ_{m−1}, T] with Γ₀ := 0 and Γ_m := T. Each subinterval [Γ_h, Γ_{h+1}] for h ∈ [m]₀ is then rescaled onto [−1, 1] with the linear map K_h : [Γ_h, Γ_{h+1}] → [−1, 1] defined by

K_h : t ↦ 1 − 2(t − Γ_h)/(Γ_{h+1} − Γ_h),  (2.17)

which satisfies K_h(Γ_h) = 1 and K_h(Γ_{h+1}) = −1. To solve the overall initial value problem, we simply solve the differential equations for each successive interval (as encoded into a single system of linear equations). Now let Δ_h := |Γ_{h+1} − Γ_h| and define

A_h(t) := −(Δ_h/2) A(K_h⁻¹(t)),  (2.18)
x_h(t) := x(K_h⁻¹(t)),  (2.19)
f_h(t) := −(Δ_h/2) f(K_h⁻¹(t)).  (2.20)

2 A(t) is modeled by a sparse matrix oracle O_A that, on input (j, l), gives the location of the l-th nonzero entry in row j, denoted k, and gives the value A(t)_{j,k}.

Then, for each h ∈ [m]₀, we have the rescaled differential equations

dx_h(t)/dt = A_h(t)x_h(t) + f_h(t)  (2.21)

for t ∈ [−1, 1] with the initial conditions

x_h(1) = γ if h = 0, and x_h(1) = x_{h−1}(−1) if h ∈ [m].  (2.22)

By taking

Δ_h ≤ 2 / max_{t∈[Γ_h,Γ_{h+1}]} ∥A(t)∥,  (2.23)

where ∥·∥ denotes the spectral norm, we can ensure that ∥A_h(t)∥ ≤ 1 for all t ∈ [−1, 1]. In particular, it suffices to take

Δ := max_{h∈{0,1,...,m−1}} Δ_h ≤ 2 / max_{t∈[0,T]} ∥A(t)∥.  (2.24)

Having rescaled the equations to use the domain [−1, 1], we now apply the Chebyshev pseudospectral method. Following Section 2.2, we substitute the truncated Chebyshev series of x(t) into the differential equations with interpolation nodes {t_l = cos(lπ/n) : l ∈ [n]}, giving the linear system

dx(t_l)/dt = A_h(t_l)x(t_l) + f_h(t_l),  h ∈ [m]₀, l ∈ [n]  (2.25)

with initial condition

x(t₀) = γ.  (2.26)

Note that in the following, terms with l = 0 refer to this initial condition. We now describe a linear system

L|X⟩ = |B⟩  (2.27)

that encodes the Chebyshev pseudospectral approximation and uses it to produce an approximation of the solution at time T. The vector |X⟩ ∈ C^{m+p+1} ⊗ C^d ⊗
C^{n+1} represents the solution in the form

|X⟩ = ∑_{h=0}^{m−1} ∑_{i=0}^{d−1} ∑_{l=0}^{n} c_{i,l}(γ_{h+1})|h, i, l⟩ + ∑_{h=m}^{m+p} ∑_{i=0}^{d−1} ∑_{l=0}^{n} x_i|h, i, l⟩,   (2.28)

where c_{i,l}(γ_{h+1}) are the Chebyshev series coefficients of x(γ_{h+1}) and x_i := x(γ_m)_i is the ith component of the final state x(γ_m). The right-hand-side vector |B⟩ represents the input terms in the form

|B⟩ = ∑_{h=0}^{m−1} |h⟩|B(f_h)⟩,   (2.29)

where

|B(f_0)⟩ = ∑_{i=0}^{d−1} ψ_i|i, 0⟩ + ∑_{i=0}^{d−1} ∑_{l=1}^{n} f_0(cos(lπ/n))_i|i, l⟩,
|B(f_h)⟩ = ∑_{i=0}^{d−1} ∑_{l=1}^{n} f_h(cos(lπ/n))_i|i, l⟩,  h ∈ [m−1].   (2.30)

Here ψ is the initial condition and f_h(cos(lπ/n))_i is the ith component of f_h at the interpolation point t_l = cos(lπ/n).

We decompose the matrix L in the form

L = ∑_{h=0}^{m−1} |h⟩⟨h| ⊗ (L₁ + L₂(A_h)) + ∑_{h=1}^{m} |h⟩⟨h−1| ⊗ L₃ + ∑_{h=m}^{m+p} |h⟩⟨h| ⊗ L₄ + ∑_{h=m+1}^{m+p} |h⟩⟨h−1| ⊗ L₅.   (2.31)

We now describe each of the matrices L_i for i ∈ [5] in turn.

The matrix L₁ is a discrete representation of d/dt, satisfying

|h⟩⟨h| ⊗ L₁|X⟩ = ∑_{i=0}^{d−1} ∑_{k=0}^{n} T_k(t₀)c_{i,k}|h, i, 0⟩ + ∑_{i=0}^{d−1} ∑_{l=1}^{n} ∑_{k,r=0}^{n} T_k(t_l)[D_n]_{kr}c_{i,r}|h, i, l⟩   (2.32)

(recall from (2.8) and (2.10) that D_n encodes the action of the time derivative on a Chebyshev expansion). Thus L₁ has the form

L₁ = ∑_{i=0}^{d−1} ∑_{k=0}^{n} T_k(t₀)|i, 0⟩⟨i, k| + ∑_{i=0}^{d−1} ∑_{l=1}^{n} ∑_{k,r=0}^{n} cos(klπ/n)[D_n]_{kr}|i, l⟩⟨i, r|   (2.33)
   = I_d ⊗ (|0⟩⟨0|P_n + ∑_{l=1}^{n} |l⟩⟨l|P_n D_n),   (2.34)

where the interpolation matrix is a discrete cosine transform matrix:

P_n := ∑_{l,k=0}^{n} cos(klπ/n)|l⟩⟨k|.   (2.35)

The matrix L₂(A_h) discretizes A_h(t), i.e.,

|h⟩⟨h| ⊗ L₂(A_h)|X⟩ = −∑_{i,j=0}^{d−1} ∑_{l=1}^{n} ∑_{k=0}^{n} A_h(t_l)_{ij}T_k(t_l)c_{j,k}|h, i, l⟩.   (2.36)

Thus

L₂(A_h) = −∑_{i,j=0}^{d−1} ∑_{l=1}^{n} ∑_{k=0}^{n} A_h(t_l)_{ij} cos(klπ/n)|i, l⟩⟨j, k|   (2.37)
   = −∑_{l=1}^{n} A_h(t_l) ⊗ |l⟩⟨l|P_n.   (2.38)

Note that if A_h is time-independent, then

L₂(A_h) = −A_h ⊗ P_n.   (2.39)

The matrix L₃ combines the Chebyshev series coefficients c_{i,l} to produce x_i for each i ∈ [d]₀. To express the final state x(−1), L₃ represents the linear combination x_i(−1) = ∑_{k=0}^{n} c_{i,k}T_k(−1) = ∑_{k=0}^{n} (−1)^k c_{i,k}. Thus we take

L₃ = −∑_{i=0}^{d−1} ∑_{k=0}^{n} (−1)^k|i, 0⟩⟨i, k|.   (2.40)

Notice that L₃ has zero rows for l ∈ [n].
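To make the construction above concrete, here is a minimal classical sketch (Python/NumPy) of a single-interval (m = 1), scalar (d = 1), time-independent instance: it assembles the l = 0 initial-condition row and the L₁ + L₂ collocation rows from P_n and D_n, solves the system, and reads off the final value via T_k(−1) = (−1)^k as L₃ does. The explicit differentiation rule used for D_n below is an assumption based on the standard Chebyshev derivative recurrence referenced in (2.8)–(2.10).

```python
import numpy as np

def P_mat(n):
    # Interpolation (discrete cosine transform) matrix P_n with entries
    # cos(k*l*pi/n): maps Chebyshev coefficients to values at the nodes
    # t_l = cos(l*pi/n).
    l = np.arange(n + 1)
    return np.cos(np.outer(l, l) * np.pi / n)

def D_mat(n):
    # Coefficient-space differentiation D_n: if x = sum_k c_k T_k, then the
    # derivative has coefficients c'_k = (2/sigma_k) * sum of r*c_r over
    # r > k with r + k odd, where sigma_0 = 2 and sigma_k = 1 otherwise.
    D = np.zeros((n + 1, n + 1))
    for k in range(n + 1):
        sigma = 2.0 if k == 0 else 1.0
        D[k, k + 1::2] = 2.0 * np.arange(k + 1, n + 1, 2) / sigma
    return D

# Toy instance: dx/dt = a*x on [0, T], x(0) = psi, one interval (m = 1).
a, T, psi, n = -1.0, 1.0, 1.0, 16
Ah = -(T / 2) * a              # rescaled generator A_h = -(Gamma_h/2) A
P, D = P_mat(n), D_mat(n)

# Rows l = 1..n enforce the ODE at the nodes (the L1 + L2 block);
# row l = 0 enforces the initial condition x(1) = psi (i.e. t = 0).
M = P @ D - Ah * P
M[0, :] = P[0, :]
rhs = np.zeros(n + 1)
rhs[0] = psi
c = np.linalg.solve(M, rhs)    # Chebyshev coefficients of the solution

# The final state lives at s = -1, where T_k(-1) = (-1)^k (the role of L3).
xT = np.sum((-1.0) ** np.arange(n + 1) * c)
print(abs(xT - np.exp(a * T)))  # residual is tiny: spectral convergence in n
```

With n = 16 the recovered x(T) agrees with e^{aT} to near machine precision, illustrating the exponential convergence in n that Lemma 2.3 below establishes in general.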
When h = m, L₄ is used to construct x_i from the output of L₃ for l = 0, and to repeat x_i n times for l ∈ [n]. When m+1 ≤ h ≤ m+p, both L₄ and L₅ are used to repeat x_i (n+1)p times for l ∈ [n]. This repetition serves to increase the success probability of the final measurement. In particular, we take

L₄ = −∑_{i=0}^{d−1} ∑_{l=1}^{n} |i, l⟩⟨i, l−1| + ∑_{i=0}^{d−1} ∑_{l=0}^{n} |i, l⟩⟨i, l|   (2.41)

and

L₅ = −∑_{i=0}^{d−1} |i, 0⟩⟨i, n|.   (2.42)

In summary, the linear system is as follows. For each h ∈ [m]₀, (L₁ + L₂(A_h))|X⟩ = |B_h⟩ solves the differential equations over [γ_h, γ_{h+1}], and the coefficients c_{i,l}(γ_{h+1}) are combined by L₃ into the (h+1)st block as initial conditions. When h = m, the final coefficients c_{i,l}(γ_m) are combined by L₃ and L₄ into the final state with coefficients x_i, and this solution is repeated (p+1)(n+1) times by L₄ and L₅.

2.4 Solution error

In this section, we bound how well the solution of the linear system defined above approximates the actual solution of the system of differential equations.

Lemma 2.3. For the linear system L|X⟩ = |B⟩ defined in (2.27), let x be the approximate ODE solution specified by the linear system and let x̂ be the exact ODE solution. Then for n sufficiently large, the error in the solution at time T satisfies

‖x̂(T) − x(T)‖ ≤ m max_{t∈[0,T]} ‖x̂^{(n+1)}(t)‖ · e^{n+1}/(2n)^n.   (2.43)

Proof. First we carefully choose n satisfying

n ≥ (e/2) · log(Ω)/log(log(Ω)),   (2.44)

where

Ω := max_{t∈[0,T]} (‖x̂^{(n+1)}(t)‖/‖ψ‖)(m+1)   (2.45)

to ensure that

max_{t∈[0,T]} (‖x̂^{(n+1)}(t)‖/‖ψ‖)(e/2n)^n ≤ 1/(m+1).   (2.46)

According to the quantum spectral method defined in Section 2.3, we solve

dx/dt = A_h(t)x(t) + f_h(t),  h ∈ [m]₀.   (2.47)

We denote the exact solution by x̂(γ_{h+1}), and we let x(γ_{h+1})_i = ∑_{l=0}^{n} (−1)^l c_{i,l}(γ_{h+1}), where c_{i,l}(γ_{h+1}) is defined in (2.28). Define

δ_{h+1} := ‖x̂(γ_{h+1}) − x(γ_{h+1})‖.   (2.48)

For h = 0, Lemma 2.2 implies

δ₁ = ‖x̂(γ₁) − x(γ₁)‖ ≤ max_{t∈[0,T]} ‖x̂^{(n+1)}(t)‖ (e/2n)^n.   (2.49)

For h ∈
[m], the error in the approximate solution of dx/dt = A_h(t)x(t) + f_h(t) has two contributions: the error from the linear system and the error in the initial condition. We let x̃(γ_{h+1}) denote the solution of the linear system (L₁ + L₂(A_h))|x̃(γ_{h+1})⟩ = |B(f_h)⟩ under the initial condition x̂(γ_h). Then

δ_{h+1} ≤ ‖x̂(γ_{h+1}) − x̃(γ_{h+1})‖ + ‖x̃(γ_{h+1}) − x(γ_{h+1})‖.   (2.50)

The first term can be bounded using Lemma 2.2, giving

‖x̂(γ_{h+1}) − x̃(γ_{h+1})‖ ≤ max_{t∈[0,T]} ‖x̂^{(n+1)}(t)‖ (e/2n)^n.   (2.51)

The second term comes from the initial error δ_h, which is transported through the linear system. Let

E_{h+1} = Ê_{h+1} + ε_{h+1},   (2.52)

where E_{h+1} is the solution of the linear system with input δ_h and Ê_{h+1} is the exact solution of dx/dt = A_{h+1}(t)x(t) + f_{h+1}(t) with initial condition x(γ_h) = δ_h. Then by Lemma 2.2,

‖ε_{h+1}‖ = ‖Ê_{h+1} − E_{h+1}‖ ≤ (δ_h/‖ψ‖) max_{t∈[0,T]} ‖x̂^{(n+1)}(t)‖ (e/2n)^n,   (2.53)

so

‖x̃(γ_{h+1}) − x(γ_{h+1})‖ ≤ δ_h + (δ_h/‖ψ‖) max_{t∈[0,T]} ‖x̂^{(n+1)}(t)‖ (e/2n)^n.   (2.54)

Thus, we have an inequality recurrence for bounding δ_h:

δ_{h+1} ≤ (1 + max_{t∈[0,T]} (‖x̂^{(n+1)}(t)‖/‖ψ‖)(e/2n)^n) δ_h + max_{t∈[0,T]} ‖x̂^{(n+1)}(t)‖ (e/2n)^n.   (2.55)

Now we iterate h from 1 to m. Equation (2.46) implies

max_{t∈[0,T]} (‖x̂^{(n+1)}(t)‖/‖ψ‖)(e/2n)^n ≤ 1/(m+1) ≤ 1/m,   (2.56)

so

(1 + max_{t∈[0,T]} (‖x̂^{(n+1)}(t)‖/‖ψ‖)(e/2n)^n)^{m−1} ≤ (1 + 1/m)^m ≤ e.   (2.57)

Therefore

δ_m ≤ (1 + max_{t∈[0,T]} (‖x̂^{(n+1)}(t)‖/‖ψ‖)(e/2n)^n)^{m−1} δ₁
     + ∑_{h=1}^{m−1} (1 + max_{t∈[0,T]} (‖x̂^{(n+1)}(t)‖/‖ψ‖)(e/2n)^n)^{h−1} max_{t∈[0,T]} ‖x̂^{(n+1)}(t)‖ (e/2n)^n
   ≤ (1 + 1/m)^{m−1} δ₁ + (m−1)(1 + 1/m)^{m−1} max_{t∈[0,T]} ‖x̂^{(n+1)}(t)‖ (e/2n)^n   (2.58)
   ≤ max_{t∈[0,T]} ‖x̂^{(n+1)}(t)‖ e^{n+1}/(2n)^n + (m−1) max_{t∈[0,T]} ‖x̂^{(n+1)}(t)‖ e^{n+1}/(2n)^n
   = m max_{t∈[0,T]} ‖x̂^{(n+1)}(t)‖ e^{n+1}/(2n)^n,

which shows that the solution error decreases exponentially with n. In other words, the linear system approximates the solution with error ε using n = poly(log(1/ε)).

Note that for time-independent differential equations, we can directly estimate ‖x̂^{(n+1)}(t)‖ using x̂^{(n+1)}(t) = A_h^{n+1} x̂(t) + A_h^n f_h.
(2.59)

Writing A_h = V_h Λ_h V_h^{−1} where Λ_h = diag(λ₀, …, λ_{d−1}), we have e^{A_h} = V_h e^{Λ_h} V_h^{−1}. Thus the exact solution of the time-independent equation with initial condition x̂(1) = ψ is

x̂(t) = e^{A_h(1−t)}ψ + (e^{A_h(1−t)} − I)A_h^{−1}f_h   (2.60)
     = V_h e^{Λ_h(1−t)} V_h^{−1}ψ + V_h (e^{Λ_h(1−t)} − I) Λ_h^{−1} V_h^{−1} f_h.

Since Re(λ_i) ≤ 0 for all eigenvalues λ_i of A_h for i ∈ [d]₀, we have ‖e^{Λ_h}‖ ≤ 1. Therefore

‖x̂(t)‖ ≤ κ_V(‖ψ‖ + 2‖f_h‖).   (2.61)

Furthermore, since max_{h,t} ‖A_h(t)‖ ≤ 1, we have

max_{t∈[0,T]} ‖x̂^{(n+1)}(t)‖ ≤ max_{t∈[0,T]} (‖x̂(t)‖ + ‖f_h(t)‖) ≤ κ_V(‖ψ‖ + 3‖f_h‖) ≤ κ_V(‖ψ‖ + 2Γ‖f‖).   (2.62)

Thus the solution error satisfies

‖x̂(T) − x(T)‖ ≤ mκ_V(‖ψ‖ + 2Γ‖f‖) e^{n+1}/(2n)^n.   (2.63)

Note that, although we represent the solution differently, this bound is similar to the corresponding bound in [53, Theorem 6].

2.5 Condition number

We now analyze the condition number of the linear system.

Lemma 2.4. Consider an instance of the quantum ODE problem as defined in Problem 2.1. For all t ∈ [0, T], assume A(t) can be diagonalized as A(t) = V(t)Λ(t)V(t)^{−1} for some Λ(t) = diag(λ₀(t), …, λ_{d−1}(t)), with Re(λ_i(t)) ≤ 0 for all i ∈ [d]₀. Let κ_V := max_{t∈[0,T]} κ_{V(t)} be an upper bound on the condition number of V(t). Then for m, p ∈ Z⁺ and n sufficiently large, the condition number of L in the linear system (2.27) satisfies

κ_L ≤ (πm + p + 2)(n+1)^{3.5}(2κ_V + e‖ψ‖).   (2.64)

Proof. We begin by bounding the norms of some operators that appear in the definition of L. First we consider the l^∞ norm of D_n since this is straightforward to calculate:

‖D_n‖_∞ := max_i ∑_{j=0}^{n} |[D_n]_{ij}| = n(n+2)/2 for n even;  (n+1)²/2 − 2 for n odd.   (2.65)

Thus we have the upper bound

‖D_n‖ ≤ √(n+1) ‖D_n‖_∞ ≤ (n+1)^{2.5}/2.   (2.66)

Next we upper bound the spectral norm of the discrete cosine transform matrix P_n:

‖P_n‖² ≤ max_{0≤l≤n} ∑_{k=0}^{n} cos²(klπ/n) ≤ n+1.   (2.67)

Therefore

‖P_n‖ ≤ √(n+1).   (2.68)

Thus we can upper bound the norm of L₁ as

‖L₁‖ ≤ ‖D_n‖‖P_n‖ ≤ (n+1)³/2.
(2.69)

Next we consider the spectral norm of L₂(A_h) for any h ∈ [m]₀. We have

L₂(A_h) = −∑_{l=1}^{n} A_h(t_l) ⊗ |l⟩⟨l|P_n.   (2.70)

Since the eigenvalues of each A_h(t_l) for l ∈ [n+1]₀ are all eigenvalues of

∑_{l=0}^{n} A_h(t_l) ⊗ |l⟩⟨l|,   (2.71)

we have

‖∑_{l=1}^{n} A_h(t_l) ⊗ |l⟩⟨l|‖ ≤ ‖∑_{l=0}^{n} A_h(t_l) ⊗ |l⟩⟨l|‖ ≤ max_{t∈[−1,1]} ‖A_h(t)‖ ≤ 1   (2.72)

by (2.23). Therefore

‖L₂(A_h)‖ ≤ ‖P_n‖ ≤ √(n+1).   (2.73)

By direct calculation, we have

‖L₃‖ = √(n+1),   (2.74)
‖L₄‖ ≤ 2,   (2.75)
‖L₅‖ = 1.   (2.76)

Thus, for n ≥ 5, we find

‖L‖ ≤ (n+1)³/2 + √(n+1) + √(n+1) + 2 + 1 ≤ (n+1)³.   (2.77)

Next we upper bound ‖L^{−1}‖. By definition,

‖L^{−1}‖ = sup_{‖|B⟩‖≤1} ‖L^{−1}|B⟩‖.   (2.78)

We express |B⟩ as

|B⟩ = ∑_{h=0}^{m+p} ∑_{l=0}^{n} ∑_{i=0}^{d−1} β_{hil}|h, i, l⟩ = ∑_{h=0}^{m+p} ∑_{l=0}^{n} |b_{hl}⟩,   (2.79)

where |b_{hl}⟩ := ∑_{i=0}^{d−1} β_{hil}|h, i, l⟩ satisfies ‖|b_{hl}⟩‖² = ∑_{i=0}^{d−1} |β_{hil}|² ≤ 1. For any fixed h ∈ [m+p+1]₀ and l ∈ [n+1]₀, we first upper bound ‖L^{−1}|b_{hl}⟩‖ and use this to upper bound the norm of L^{−1} applied to linear combinations of such vectors. Recall that the linear system comes from (2.13), which is equivalent to

∑_{k=0}^{n} T_k′(t_r)c_{i,k}(γ_h) = ∑_{j=0}^{d−1} A_h(t_r)_{ij} ∑_{k=0}^{n} T_k(t_r)c_{j,k}(γ_h) + f_h(t_r)_i,  i ∈ [d]₀, r ∈ [n+1]₀.   (2.80)

For fixed h ∈ [m+p+1]₀ and r ∈ [n+1]₀, define vectors x_{hr}, x′_{hr} ∈ C^d with

(x_{hr})_i := ∑_{k=0}^{n} T_k(t_r)c_{i,k}(γ_h),  (x′_{hr})_i := ∑_{k=0}^{n} T_k′(t_r)c_{i,k}(γ_h)   (2.81)

for i ∈ [d]₀. We claim that x_{hr} = x′_{hr} = 0 for any r ≠ l. Combining only the equations from (2.80) with r ≠ l gives the system

x′_{hr} = A_h(t_r)x_{hr}.   (2.82)

Consider a corresponding system of differential equations

dx̂_{hr}(t)/dt = A_h(t_r)x̂(t) + b,   (2.83)

where x̂_{hr}(t) ∈ C^d for all t ∈ [−1, 1]. The solution of this system with b = 0 and initial condition x̂_{hr}(1) = 0 is clearly x̂_{hr}(t) = 0 for all t ∈ [−1, 1]. Then the nth-order truncated Chebyshev approximation of (2.83), which should satisfy the linear system (2.82) by (2.4) and (2.5), is exactly x_{hr}. Using Lemma 2.3 and observing that x̂^{(n+1)}(t) = 0, we have

x_{hr} = x̂_{hr}(t) = 0.   (2.84)

When t = t_l, we let |B⟩ = |b_{hl}⟩
denote the first nonzero vector. Combining only the equations from (2.80) with r = l gives the system

x′_{hl} = A_h(t_l)x_{hl}.   (2.85)

Consider a corresponding system of differential equations

dx̂_{hr}(t)/dt = A_h(t_r)x̂(t) + b,   (2.86)

with ψ = b_{h0}, b = 0 for l = 0; or ψ = 0, b = b_{hl} for l ∈ [n]. Using the diagonalization A_h(t_l) = V(t_l)Λ_h(t_l)V^{−1}(t_l), we have e^{A_h(t_l)} = V(t_l)e^{Λ_h(t_l)}V^{−1}(t_l). Thus the exact solution of the differential equations (2.83) with r = l and initial condition x̂_{hr}(1) = ψ is

x̂_{hr}(t) = e^{A_h(t_l)(1−t)}ψ + (e^{A_h(t_l)(1−t)} − I)A_h(t_l)^{−1}b   (2.87)
         = V(t_l)e^{Λ_h(t_l)(1−t)}V^{−1}(t_l)ψ + V(t_l)(e^{Λ_h(t_l)(1−t)} − I)Λ_h(t_l)^{−1}V^{−1}(t_l)b.

According to equation (2.46) in the proof of Lemma 2.3, we have

x_{hl} = x̂_{hl}(−1) + η_{hl},   (2.88)

where

‖η_{hl}‖ ≤ max_{t∈[0,T]} ‖x̂_{hl}^{(n+1)}(t)‖ e^{n+1}/(2n)^n ≤ e‖ψ‖/(m+1).   (2.89)

Now for h ∈ [m+1]₀, we take x_{hl} to be the initial condition ψ for the next subinterval to obtain x_{(h+1)l}. Using (2.87) and (2.88), starting from ψ = b_{h0}, b = 0 for l = 0, we find

x_{ml} = V(t_l)(∏_{j=1}^{m−h+1} e^{2Λ_h(t_l)})V^{−1}(t_l)ψ + V(t_l)∑_{k=0}^{m−h} (∏_{j=1}^{k} e^{2Λ_h(t_l)})V^{−1}(t_l)η_{(m−k)l}.   (2.90)

Since ‖Λ_h(t_l)‖ ≤ ‖A_h‖ ≤ 1 and Λ_h(t_l) = diag(λ₀, …, λ_{d−1}) with Re(λ_i) ≤ 0 for i ∈ [d]₀, we have ‖e^{2Λ_h(t_l)}‖ ≤ 1. Therefore

‖x_{hl}‖ ≤ ‖x_{ml}‖ ≤ κ_{V(t_l)}‖b_{hl}‖ + (m−h+1)κ_{V(t_l)}‖η_{hl}‖ ≤ κ_{V(t_l)} + e‖ψ‖ ≤ κ_V + e‖ψ‖.   (2.91)

On the other hand, with ψ = 0, b = b_{hl} for l ∈ [n], we have

x_{ml} = V(t_l)(∏_{j=1}^{m−h} e^{2Λ_h(t_l)})((e^{2Λ_h(t_l)} − I)Λ_h(t_l)^{−1})V^{−1}(t_l)b
       + V(t_l)∑_{k=0}^{m−h} (∏_{j=1}^{k} e^{2Λ_h(t_l)})V^{−1}(t_l)η_{(m−k)l},   (2.92)

so

‖x_{hl}‖ ≤ 2κ_{V(t_l)}‖b_{hl}‖ + (m−h+1)κ_{V(t_l)}‖η_{hl}‖ ≤ 2κ_{V(t_l)} + e‖ψ‖ ≤ 2κ_V + e‖ψ‖.   (2.93)

For h ∈ {m, m+1, …, m+p}, according to the definition of L₄ and L₅, we similarly have

‖x_{hl}‖ = ‖x_{ml}‖ ≤ 2κ_V + e‖ψ‖.   (2.94)

According to (2.87), ‖x̂_{hl}(t)‖ is a monotonic function of t ∈ [−1, 1], which implies

‖x̂_{hl}(t)‖² ≤ max{‖x̂_{hl}(−1)‖², ‖x̂_{hl}(1)‖²} ≤ (2κ_V + e‖ψ‖)².   (2.95)

Using the identity

∫_{−1}^{1} dt/√(1−t²) = π,   (2.96)

we have

∫_{−1}^{1} ‖x̂_{hl}(t)‖²/√(1−t²) dt ≤ (2κ_V + e‖ψ‖)² ∫_{−1}^{1} dt/√(1−t²) = π(2κ_V + e‖ψ‖)².   (2.97)
Consider the Chebyshev expansion of x̂_{hl}(t) as in (2.4):

x̂_{hl}(t) = ∑_{i=0}^{d−1} ∑_{l=0}^{∞} c_{i,l}(γ_{h+1})T_l(t).   (2.98)

By the orthogonality of Chebyshev polynomials (as specified in [54, Appendix A]), we have

∫_{−1}^{1} ‖x̂_{hl}(t)‖²/√(1−t²) dt = ∫_{−1}^{1} (∑_{i=0}^{d−1} ∑_{l=0}^{∞} c_{i,l}(γ_{h+1})T_l(t))²/√(1−t²) dt
   = (π/2)(∑_{i=0}^{d−1} ∑_{l=1}^{∞} c²_{i,l}(γ_{h+1}) + 2∑_{i=0}^{d−1} c²_{i,0}(γ_{h+1}))
   ≥ (π/2)(∑_{i=0}^{d−1} ∑_{l=1}^{n} c²_{i,l}(γ_{h+1}) + 2∑_{i=0}^{d−1} c²_{i,0}(γ_{h+1})).   (2.99)

Using (2.97), this gives

∑_{i=0}^{d−1} ∑_{l=0}^{n} c²_{i,l}(γ_{h+1}) ≤ (2/π) ∫_{−1}^{1} ‖x̂_{hl}(t)‖²/√(1−t²) dt ≤ π(2κ_V + e‖ψ‖)².   (2.100)

Now we compute ‖|X⟩‖, summing the contributions from all c_{i,r}(γ_h) and x_{mr}, and notice that c_{i,r} = 0 and x_{mr} = 0 for all r ≠ l, giving

‖|X⟩‖² = ∑_{h=0}^{m−1} ∑_{i=0}^{d−1} c²_{i,l}(γ_{h+1}) + (p+1)(x_{ml})²
   ≤ πm(2κ_V + e‖ψ‖)² + (p+1)(κ_V + e‖ψ‖)² ≤ (πm + p + 1)(2κ_V + e‖ψ‖)².   (2.101)

Finally, considering all h ∈ [m+p+1]₀ and l ∈ [n+1]₀, from (2.79) we have

∑_{h=0}^{m+p} ∑_{l=0}^{n} ‖|b_{hl}⟩‖² ≤ 1,   (2.102)

so

‖L^{−1}‖² = sup_{‖|B⟩‖≤1} ‖L^{−1}|B⟩‖² = sup_{‖|B⟩‖≤1} ∑_{h=0}^{m+p} ∑_{l=0}^{n} ‖L^{−1}|b_{hl}⟩‖²
   ≤ (πm + p + 1)(m + p + 1)(n+1)(2κ_V + e‖ψ‖)² ≤ (πm + p + 1)²(n+1)(2κ_V + e‖ψ‖)²,   (2.103)

and therefore

‖L^{−1}‖ ≤ (πm + p + 1)(n+1)^{0.5}(2κ_V + e‖ψ‖).   (2.104)

Finally, combining (2.77) and (2.104) gives

κ_L = ‖L‖‖L^{−1}‖ ≤ (πm + p + 1)(n+1)^{3.5}(2κ_V + e‖ψ‖)   (2.105)

as claimed.

2.6 Success probability

We now evaluate the success probability of our approach to the quantum ODE problem.

Lemma 2.5. Consider an instance of the quantum ODE problem as defined in Problem 2.1 with the exact solution x̂(t) for t ∈ [0, T], and its corresponding linear system (2.27) with m, p ∈ Z⁺ and n sufficiently large. When applying the QLSA to this system, the probability of measuring a state proportional to |x(T)⟩ = ∑_{i=0}^{d−1} x_i|i⟩ is

P_measure ≥ (p+1)(n+1) / (πmq² + (p+1)(n+1)),   (2.106)

where x_i is defined in (2.28), Γ is defined in (2.24), and

q := max_{t∈[0,T]} ‖x̂(t)‖/‖x(T)‖.   (2.107)

Proof. After solving the linear system (2.27) using the QLSA, we measure the first and third registers of |X⟩ (as defined in (2.28)).
We decompose this state as

|X⟩ = |X_bad⟩ + |X_good⟩,   (2.108)

where

|X_bad⟩ = ∑_{h=0}^{m−1} ∑_{i=0}^{d−1} ∑_{l=0}^{n} c_{i,l}(γ_{h+1})|h, i, l⟩,   (2.109)
|X_good⟩ = ∑_{h=m}^{m+p} ∑_{i=0}^{d−1} ∑_{l=0}^{n} x_i|h, i, l⟩.   (2.110)

When the first register is observed in some h ∈ {m, m+1, …, m+p} (no matter what outcome is seen for the third register), we output the second register, which is then in a normalized state proportional to the final state:

|X_measure⟩ = |x(T)⟩/‖|x(T)⟩‖,   (2.111)

with

|x(T)⟩ = ∑_{i=0}^{d−1} x_i|i⟩ = ∑_{i=0}^{d−1} ∑_{k=0}^{n} c_{i,k}T_k(−1)|i⟩.   (2.112)

Notice that

‖|x(T)⟩‖² = ∑_{i=0}^{d−1} x_i²   (2.113)

and

‖|X_good⟩‖² = (p+1)(n+1) ∑_{i=0}^{d−1} x_i² = (p+1)(n+1)‖|x(T)⟩‖².   (2.114)

Considering the definition of q, the contribution from time interval h under the rescaling (2.17), and the identity (2.96), we have

q²‖x(T)‖² = max_{t∈[0,T]} ‖x̂(t)‖² = (1/π) ∫_{−1}^{1} dθ/√(1−θ²) · max_{t∈[0,T]} ‖x̂(t)‖²
   ≥ (1/π) ∫_{−1}^{1} dθ/√(1−θ²) · max_{t∈[γ_h,γ_{h+1}]} ‖x̂(t)‖²
   = (1/π) ∫_{−1}^{1} dθ/√(1−θ²) · max_{t∈[−1,1]} ‖x̂_h(t)‖²
   ≥ (1/π) ∫_{−1}^{1} ‖x̂_h(t)‖²/√(1−t²) dt,   (2.115)

where x̂_h(t) is the solution of (2.47) with the rescaling in (2.19). By the orthogonality of Chebyshev polynomials (as specified in [54, Appendix A]),

q²‖x(T)‖² ≥ (1/π) ∫_{−1}^{1} ‖x̂_h(t)‖²/√(1−t²) dt = (1/π) ∫_{−1}^{1} (∑_{i=0}^{d−1} ∑_{k=0}^{∞} c_{i,k}(γ_{h+1})T_k(t))²/√(1−t²) dt
   = (1/2)(∑_{i=0}^{d−1} ∑_{k=1}^{∞} c²_{i,k}(γ_{h+1}) + 2∑_{i=0}^{d−1} c²_{i,0}(γ_{h+1})) ≥ (1/π) ∑_{i=0}^{d−1} ∑_{k=0}^{n} c²_{i,k}(γ_{h+1}).   (2.116)

For all h ∈ [m]₀, we have

mq²‖x(T)‖² ≥ (1/π) ∑_{h=0}^{m−1} ∑_{i=0}^{d−1} ∑_{k=0}^{n} c²_{i,k}(γ_{h+1}) = (1/π)‖|X_bad⟩‖²,   (2.117)

and therefore

‖|X_good⟩‖² = (p+1)(n+1)‖x(T)‖² ≥ ((p+1)(n+1)/(πmq²)) ‖|X_bad⟩‖².   (2.118)

Thus we see that the success probability of the measurement satisfies

P_measure ≥ (p+1)(n+1) / (πmq² + (p+1)(n+1))   (2.119)

as claimed.

2.7 State preparation

We now describe a procedure for preparing the vector |B⟩ in the linear system (2.27) (defined in (2.29) and (2.30)) using the given ability to prepare the initial state of the system of differential equations. We also evaluate the complexity of this procedure.

Lemma 2.6.
Consider state preparation oracles acting on a state space with basis vectors |h⟩|i⟩|l⟩ for h ∈ [m]₀, i ∈ [d]₀, l ∈ [n+1]₀, where m, d, n ∈ N, encoding an initial condition ψ ∈ C^d and functions f_h(cos(lπ/n)) ∈ C^d as in (2.30). Specifically, for any h ∈ [m]₀ and l ∈ [n], let O_x be a unitary oracle that maps |0⟩|0⟩|0⟩ to a state proportional to |0⟩|ψ⟩|0⟩ and |h⟩|φ⟩|l⟩ to |h⟩|φ⟩|l⟩ for any |φ⟩ orthogonal to |0⟩; let O_f(h, l) be a unitary that maps |h⟩|0⟩|l⟩ to a state proportional to |h⟩|f_h(cos(lπ/n))⟩|l⟩ and maps |0⟩|φ⟩|0⟩ to |0⟩|φ⟩|0⟩ for any |φ⟩ orthogonal to |0⟩. Suppose ‖ψ‖ and ‖f_h(cos(lπ/n))‖ are known. Then the normalized quantum state

|B⟩ ∝ |0⟩|ψ⟩|0⟩ + ∑_{h=0}^{m−1} ∑_{l=1}^{n} |h⟩|f_h(cos(lπ/n))⟩|l⟩   (2.120)

can be prepared with gate and query complexity O(mn).

Proof. We normalize the components of the state using the coefficients

b₀₀ = ‖ψ‖ / (‖ψ‖² + ∑_{h=0}^{m−1} ∑_{l=1}^{n} ‖f_h(cos(lπ/n))‖²)^{1/2},
b_{hl} = ‖f_h(cos(lπ/n))‖ / (‖ψ‖² + ∑_{h=0}^{m−1} ∑_{l=1}^{n} ‖f_h(cos(lπ/n))‖²)^{1/2},  h ∈ [m]₀, l ∈ [n],   (2.121)

so that

∑_{h=0}^{m−1} ∑_{l=0}^{n} b²_{hl} = 1.   (2.122)

First we perform a unitary transformation mapping

|0⟩|0⟩|0⟩ ↦ b₀₀|0⟩|0⟩|0⟩ + b₀₁|0⟩|0⟩|1⟩ + ⋯ + b_{(m−1)n}|m−1⟩|0⟩|n⟩.   (2.123)

This can be done in time complexity O(mn) by standard techniques [92]. Then we perform O_x and O_f(h, l) for all h ∈ [m]₀, l ∈ [n], giving

|0⟩|ψ⟩|0⟩ + ∑_{h=0}^{m−1} ∑_{l=1}^{n} |h⟩|f_h(cos(lπ/n))⟩|l⟩   (2.124)

(up to normalization) using O(mn) queries.

2.8 Main result

Having analyzed the solution error, condition number, success probability, and state preparation procedure for our approach, we are now ready to establish the main result.

Theorem 2.1. Consider an instance of the quantum ODE problem as defined in Problem 2.1. Assume A(t) can be diagonalized as A(t) = V(t)Λ(t)V(t)^{−1} where Λ(t) = diag(λ₀(t), …, λ_{d−1}(t)) with Re(λ_i(t)) ≤ 0 for each i ∈ [d]₀ and t ∈ [0, T]. Then there exists a quantum algorithm that produces a state x(T)/‖x(T)‖ ε-close to x̂(T)/‖x̂(T)‖
in l² norm, succeeding with probability Ω(1), with a flag indicating success, using

O(κ_V s‖A‖Tq · poly(log(κ_V s‖A‖g′T/εg)))   (2.125)

queries to oracles O_A(h, l) (a sparse matrix oracle for A_h(t_l) as defined in (2.18)) and O_x and O_f(h, l) (as defined in Lemma 2.6). Here ‖A‖ := max_{t∈[0,T]} ‖A(t)‖; κ_V := max_t κ_{V(t)}, where κ_{V(t)} is the condition number of V(t); and

g := ‖x̂(T)‖,  g′ := max_{t∈[0,T]} max_{n∈N} ‖x̂^{(n+1)}(t)‖,  q := max_{t∈[0,T]} ‖x̂(t)‖/‖x(T)‖.   (2.126)

The gate complexity is larger than the query complexity by a factor of poly(log(κ_V ds‖A‖g′T/ε)).

Proof. We first present the algorithm and then analyze its complexity.

Statement of the algorithm. First, we choose m to guarantee

‖A‖T/2m ≤ 1.   (2.127)

Then, as in Section 2.3, we divide the interval [0, T] into small subintervals [0, γ₁], [γ₁, γ₂], …, [γ_{m−1}, T] with γ₀ = 0, γ_m = T, and define

Γ := max_{0≤h≤m−1} {Γ_h},  Γ_h := |γ_{h+1} − γ_h| = T/m.   (2.128)

Each subinterval [γ_h, γ_{h+1}] for h ∈ [m]₀ is mapped onto [−1, 1] with a linear mapping K_h satisfying K_h(γ_h) = 1, K_h(γ_{h+1}) = −1:

K_h : t ↦ 1 − 2(t − γ_h)/(γ_{h+1} − γ_h).   (2.129)

We choose

n = max{⌈(e/2) · log(ω)/log(log(ω))⌉, ⌈log(Ω)/log(log(Ω))⌉},   (2.130)

where

ω := g′em/δ = g′em(1+ε)/(εg)   (2.131)

and

Ω := (g′/‖ψ‖)(m+1).   (2.132)

Since max_{t∈[0,T]} ‖x̂^{(n+1)}(t)‖ ≤ g′, by Lemma 2.3, this choice guarantees

‖x̂(T) − x(T)‖ ≤ m max_{t∈[0,T]} ‖x̂^{(n+1)}(t)‖ e^{n+1}/(2n)^n ≤ δ   (2.133)

and

max_{t∈[0,T]} (‖x̂^{(n+1)}(t)‖/‖ψ‖)(e/2n)^n ≤ 1/(m+1).   (2.134)

Now ‖x̂(T) − x(T)‖ ≤ δ implies

‖x̂(T)/‖x̂(T)‖ − x(T)/‖x(T)‖‖ ≤ δ/min{‖x̂(T)‖, ‖x(T)‖} ≤ δ(1+ε)/g =: ε,   (2.135)

so we can choose such n to ensure that the normalized output state is ε-close to x̂(T)/‖x̂(T)‖. Following Section 2.3, we build the linear system L|X⟩ = |B⟩ (see (2.27)) that encodes the quantum spectral method. By Lemma 2.4, the condition number of this linear system is at most (πm + p + 1)(n+1)^{3.5}(2κ_V + e‖ψ‖). Then we use the QLSA from reference [46] to obtain a normalized state |X⟩
and measure the first and third registers of |X⟩ in the standard basis. If the measurement outcome for the first register belongs to

S = {m, m+1, …, m+p},   (2.136)

we output the state of the second register, which is a normalized state |x(T)⟩/‖|x(T)⟩‖ satisfying (2.135). By Lemma 2.5, the probability of this event happening is at least (p+1)(n+1)/(πmq² + (p+1)(n+1)). To ensure m + p = O(‖A‖T), we can choose

p = O(m) = O(‖A‖T),   (2.137)

so we can achieve success probability Ω(1) with O(q/√n) repetitions of the above procedure.

Analysis of the complexity. The matrix L is an (m+p+1)d(n+1) × (m+p+1)d(n+1) matrix with O(ns) nonzero entries in any row or column. By Lemma 2.4 and our choice of parameters, the condition number of L is O(κ_V(m+p)n^{3.5}). Consequently, by Theorem 5 of [46], the QLSA produces the state |x(T)⟩ with

O(κ_V(m+p)n^{4.5}s · poly(log(κ_V mns/ε))) = O(κ_V s‖A‖T · poly(log(κ_V s‖A‖g′T/εg)))   (2.138)

queries to the oracles O_A(h, l), O_x, and O_f(h, l), and its gate complexity is larger by a factor of poly(log(κ_V mnds/ε)). Using O(q/√n) steps of amplitude amplification to achieve success probability Ω(1), the overall query complexity of our algorithm is

O(κ_V(m+p)n⁴sq · poly(log(κ_V mns/ε))) = O(κ_V s‖A‖Tq · poly(log(κ_V s‖A‖g′T/εg))),   (2.139)

and the gate complexity is larger by a factor of

poly(log(κ_V ds‖A‖g′T/εg))   (2.140)

as claimed.

In general, g′ could be unbounded above as n → ∞. However, we could obtain a useful bound in such a case by solving the implicit equations (2.133) and (2.134). Note that for time-independent differential equations, we can replace g′ by ‖ψ‖ + 2Γ‖f‖ as shown in (2.62). In place of (2.131) and (2.132), we choose

ω := (‖ψ‖ + 2Γ‖f‖)emκ_V/δ = (‖ψ‖ + 2Γ‖f‖)emκ_V(1+ε)/(εg)   (2.141)

and

Ω := (‖ψ‖ + 2Γ‖f‖)(m+1)κ_V/‖ψ‖.   (2.142)

By Lemma 2.3, this choice guarantees

‖x̂(T) − x(T)‖ ≤ max_{t∈[−1,1]} ‖x̂(t) − x(t)‖ ≤ mκ_V(‖ψ‖ + 2Γ‖f‖) e^{n+1}/(2n)^n ≤ δ   (2.143)

and

max_{t∈[0,T]} (‖x̂^{(n+1)}(t)‖/‖ψ‖)(e/2n)^n ≤ (κ_V(‖ψ‖ + 2Γ‖f‖)/‖ψ‖)(e/2n)^n ≤ 1/(m+1).   (2.144)
Thus we have the following:

Corollary 2.1. For time-independent differential equations, under the same assumptions as Theorem 2.1, there exists a quantum algorithm using

O(κ_V s‖A‖Tq · poly(log(κ_V s‖ψ‖‖A‖‖f‖T/εg)))   (2.145)

queries to O_A(h, l), O_x, and O_f(h, l). The gate complexity of this algorithm is larger than its query complexity by a factor of poly(log(κ_V ds‖ψ‖‖A‖‖f‖T/ε)).

The complexity of our algorithm depends on the parameter q defined in (2.126), which characterizes the decay of the final state relative to the initial state. As discussed in Section 8 of [53], it is unlikely that the dependence on q can be significantly improved, since renormalization of the state effectively implements postselection, and an efficient procedure for performing this would have the unlikely consequence BQP = PP.

We also require the real parts of the eigenvalues of A(t) to be non-positive for all t ∈ [0, T] so that the solution cannot grow exponentially. This requirement is essentially the same as in the time-independent case considered in [53] and improves upon the analogous condition in [52] (which requires an additional stability condition). Also as in [53], our algorithm can produce approximate solutions for non-diagonalizable A(t), although the dependence on ε degrades to poly(1/ε). For further discussion of these considerations, see Sections 1 and 8 of [53].

2.9 Boundary value problems

So far we have focused on initial value problems (IVPs). Boundary value problems (BVPs) are another widely studied class of differential equations that appear in many applications, but they can be harder to solve than IVPs. Consider a sparse, linear, time-dependent system of differential equations as in Problem 2.1, but with a constraint on some linear combination of the initial and final states:

Problem 2.2. In the quantum BVP, we are given a system of equations

dx(t)/dt = A(t)x(t) + f(t),   (2.146)

where x(t) ∈ C^d, A(t) ∈ C^{d×d} is s-sparse, and f(t) ∈ C^d for all t ∈
[0, T], and a boundary condition αx(0) + βx(T) = γ with α, β, γ ∈ C^d. Suppose there exists a unique solution x̂ ∈ C^∞(0, T) of this boundary value problem. Given oracles that compute the locations and values of nonzero entries of A(t) for any t, and that prepare quantum states α|x(0)⟩ + β|x(T)⟩ = |γ⟩ and |f(t)⟩ for any t, the goal is to output a quantum state |x(t′)⟩ that is proportional to x(t′) for some specified t′ ∈ [0, T].

As before, we can rescale [0, T] onto [−1, 1] by a linear mapping. However, since we have boundary conditions at t = 0 and t = T, we cannot divide [0, T] into small subintervals. Instead, we directly map [0, T] onto [−1, 1] with a linear map K satisfying K(0) = 1 and K(T) = −1:

K : t ↦ 1 − 2t/T.   (2.147)

Now the new differential equations are

dx/dt = −(T/2)(A(t)x + f(t)).   (2.148)

If we define A_K(t) := −(T/2)A(t) and f_K(t) := −(T/2)f(t), we have

dx/dt = A_K(t)x(t) + f_K(t)   (2.149)

for t ∈ [−1, 1]. Now the boundary condition takes the form

αx(1) + βx(−1) = γ.   (2.150)

Since we only have one solution interval, we need to choose a larger order n of the Chebyshev series to reduce the solution error. In particular, we take

n = (e/2)‖A‖T · max{⌈log(ω)/log(log(ω))⌉, ⌈log(Ω)/log(log(Ω))⌉},   (2.151)

where ω and Ω are the same as in Theorem 2.1. As in Section 2.3, we approximate x(t) by a finite Chebyshev series with interpolation nodes {t_l = cos(lπ/n) : l ∈ [n]} and thereby obtain a linear system

dx(t_l)/dt = A_K(t_l)x(t_l) + f(t_l),  l ∈ [n]   (2.152)

with the boundary condition

αx(t₀) + βx(t_n) = γ.   (2.153)

Observe that the linear equations have the same form as in (2.25). Instead of (2.26), the term with l = 0 encodes the condition (2.153) expanded in a Chebyshev series, namely

α_i ∑_{k=0}^{n} c_{i,k}T_k(t₀) + β_i ∑_{k=0}^{n} c_{i,k}T_k(t_n) = γ_i   (2.154)

for each i ∈ [d]₀. Since T_k(t₀) = 1 and T_k(t_n) = (−1)^k, this can be simplified as

∑_{k=0}^{n} (α_i + (−1)^k β_i)c_{i,k} = γ_i.
(2.155)

If α_i + (−1)^k β_i = 0, the element |i, l⟩⟨i, k| of L₂(A_K) is zero; if α_i + (−1)^k β_i ≠ 0, without loss of generality, the two sides of this equality can be divided by α_i + (−1)^k β_i to guarantee that the terms with l = 0 can be encoded as in (2.26). Now this system can be written in the form of equation (2.27) with m = 1. Here L, |X⟩, and |B⟩ are the same as in (2.31), (2.28), and (2.29), respectively, with m = 1, except for adjustments to L₃ that we now describe.

The matrix L₃ represents the linear combination x_i(t′) = ∑_{k=0}^{n} c_{i,k}T_k(t′). Thus we take

L₃ = −∑_{i=0}^{d−1} ∑_{k=0}^{n} T_k(t′)|i, 0⟩⟨i, k|.   (2.156)

Since |T_k(t′)| ≤ 1, we have

‖L₃‖ ≤ n+1,   (2.157)

and it follows that Lemma 2.3 also holds for boundary value problems. Similarly, Lemma 2.4 still holds with m = 1.

We are now ready to analyze the complexity of the quantum BVP algorithm. The matrix L defined above is a (p+2)d(n+1) × (p+2)d(n+1) matrix with O(ns) nonzero entries in any row or column, with condition number O(κ_V pn^{3.5}). By Lemma 2.5 with p = O(1), O(q/√n) repetitions suffice to ensure success probability Ω(1). By (2.151), n is linear in ‖A‖T and poly-logarithmic in ω and Ω. Therefore, we have the following:

Theorem 2.2. Consider an instance of the quantum BVP as defined in Problem 2.2. Assume A(t) can be diagonalized as A(t) = V(t)Λ(t)V(t)^{−1} where Λ(t) = diag(λ₀(t), …, λ_{d−1}(t)) with Re(λ_i(t)) ≤ 0 for each i ∈ [d]₀ and t ∈ [0, T]. Then there exists a quantum algorithm that produces a state x(t′)/‖x(t′)‖ ε-close to x̂(t′)/‖x̂(t′)‖ in l² norm, succeeding with probability Ω(1), with a flag indicating success, using

O(κ_V s‖A‖⁴T⁴q · poly(log(κ_V s‖A‖g′T/εg)))   (2.158)

queries to O_A(h, l), O_x, and O_f(h, l). Here ‖A‖, κ_V, g, g′, and q are defined as in Theorem 2.1. The gate complexity is larger than the query complexity by a factor of poly(log(κ_V ds‖A‖g′T/ε)).

As for initial value problems, we can simplify this result in the time-independent case.

Corollary 2.2.
For a time-independent boundary value problem, under the same assumptions as Theorem 2.2, there exists a quantum algorithm using

O(κ_V s‖A‖⁴T⁴q · poly(log(κ_V s‖ψ‖‖A‖‖f‖T/εg)))   (2.159)

queries to O_A(h, l), O_x, and O_f(h, l). The gate complexity of this algorithm is larger than its query complexity by a factor of poly(log(κ_V ds‖ψ‖‖A‖‖f‖T/ε)).

2.10 Discussion

In this chapter, we presented a quantum algorithm to solve linear, time-dependent ordinary differential equations. Specifically, we showed how to employ a global approximation based on the spectral method as an alternative to the more straightforward finite difference method. Our algorithm handles time-independent differential equations with almost the same complexity as [53], but unlike that approach, it can also handle time-dependent differential equations. Compared to [52], our algorithm improves the complexity of solving time-dependent linear differential equations from poly(1/ε) to poly(log(1/ε)).

This work raises several natural open problems. First, our algorithm must assume that the solution is smooth. If the solution is in C^r, the solution error is O(1/n^{r−2}) by Lemma 2.1. Can we improve the complexity to poly(log(1/ε)) under such weaker smoothness assumptions? Second, the complexity of our algorithm is logarithmic in the parameter g′ defined in (2.126), which characterizes the amount of fluctuation in the solution. However, the query complexity of Hamiltonian simulation is independent of that parameter [34, 35]. Can we develop quantum algorithms for general differential equations with query complexity independent of g′? Third, our algorithm has nearly optimal dependence on T, scaling as O(T poly(log T)). According to the no-fast-forwarding theorem [33], the complexity must be at least linear in T, and indeed linear complexity is achievable for the case of Hamiltonian simulation [93]. Can we handle general differential equations with complexity linear in T?
Furthermore, can we achieve an optimal tradeoff between T and ε, as shown for Hamiltonian simulation in [42]?

Chapter 3: High-precision quantum algorithms for linear elliptic partial differential equations

3.1 Introduction

In this chapter, we study high-precision quantum algorithms for linear elliptic partial differential equations (PDEs).¹ As introduced earlier in Problem 1.4, the problem we address can be stated as follows: given a linear PDE with boundary conditions and an error parameter ε, output a quantum state that is ε-close to one whose amplitudes are proportional to the solution of the PDE at a set of grid points in the domain of the PDE. We focus on elliptic PDEs, and we assume a technical condition that we call global strict diagonal dominance (defined in (3.8)).

Our first algorithm is based on a quantum version of the FDM approach: we use a finite-difference approximation to produce a system of linear equations and then solve that system using the QLSA. We analyze our FDM algorithm as applied to Poisson's equation (which automatically satisfies global strict diagonal dominance) under periodic, Dirichlet, and Neumann boundary conditions. Whereas previous FDM approaches [69, 70] considered fixed orders of truncation, we adapt the order of truncation depending on ε, inspired by the classical adaptive FDM [71]. As the order increases, the eigenvalues of the FDM matrix approach the eigenvalues of the continuous Laplacian, allowing for more precise approximations. The main algorithm we present uses the quantum Fourier transform (QFT) and takes advantage of the high-precision LCU-based QLSA [46]. We first consider periodic boundary conditions, but by restricting to appropriate subspaces, this approach can also be applied to homogeneous Dirichlet and Neumann boundary conditions.

¹This chapter is based on the paper [59].
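The reason the QFT is useful here is that Fourier modes are exact eigenvectors of any translation-invariant finite-difference Laplacian. The following small check (my own illustration in Python/NumPy, using the classical DFT as a stand-in for the QFT and the standard second-order stencil rather than the adaptive high-order stencils developed in Section 3.3) shows the diagonalization explicitly:

```python
import numpy as np

# Second-order periodic finite-difference Laplacian on N grid points.
N = 32
h = 1.0 / N
L = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1))
L[0, -1] = L[-1, 0] = 1.0   # periodic wrap-around
L /= h**2

# Unitary DFT matrix: the classical analogue of the QFT.
F = np.fft.fft(np.eye(N)) / np.sqrt(N)

# Conjugating the circulant matrix L by F should make it diagonal.
Lambda = F.conj().T @ L @ F
off = Lambda - np.diag(np.diag(Lambda))
print(np.max(np.abs(off)))   # ~ 0 up to roundoff: the DFT diagonalizes L

# The eigenvalues are -(4/h^2) sin^2(pi*k/N), which approach the continuum
# Laplacian eigenvalues -(2*pi*k)^2 for small k as the grid is refined.
k = np.arange(N)
print(np.max(np.abs(np.diag(Lambda).real + 4.0 / h**2 * np.sin(np.pi * k / N)**2)))
```

Higher-order periodic stencils are also circulant, so the same diagonalization applies; increasing the order moves the discrete eigenvalues closer to those of the continuous Laplacian, which is the mechanism behind the adaptive FDM's improved precision.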
We state our result in Theorem 3.1, which (informally) says that this quantum adaptive FDM approach produces a quantum state approximating the solution of Poisson's equation with complexity d^{6.5} poly(log d, log(1/ε)).

We also propose a quantum algorithm for more general second-order elliptic PDEs under periodic or non-periodic Dirichlet boundary conditions. This algorithm is based on quantum spectral methods [54]. The spectral method globally approximates the solution of a PDE by a truncated Fourier or Chebyshev series (which converges exponentially for smooth functions) with undetermined coefficients, and then finds the coefficients by solving a linear system. This system is exponentially large in d, so solving it is infeasible for classical algorithms but feasible in a quantum context. To be able to apply the QLSA efficiently, we show how to make the system sparse using variants of the quantum Fourier transform. Our bound on the condition number of the linear system uses global strict diagonal dominance, and introduces a factor in the complexity that measures the extent to which this condition holds. We state our result in Theorem 3.2, which (informally) gives a complexity of d² poly(log(1/ε)) for producing a quantum state approximating the solution of general second-order elliptic PDEs with Dirichlet boundary conditions.

Both of these approaches have complexity poly(d, log(1/ε)), providing optimal dependence on ε and an exponential improvement over classical methods as a function of the spatial dimension d. Bounding the complexities of these algorithms requires analyzing how d and ε affect the condition numbers of the relevant linear systems (finite difference matrices and matrices relating the spectral coefficients) and accounting for errors in the approximate solution provided by the QLSA. Furthermore, the complexities of both approaches scale logarithmically with high-order derivatives of the solution and the inhomogeneity.
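The exponential convergence underlying the poly(log(1/ε)) scaling is easy to see classically in one dimension. This sketch (my own illustration, not the algorithm of Section 3.4) solves the periodic Poisson problem −u″ = f by dividing Fourier coefficients by k²; for smooth data the error decays exponentially in the number of retained modes:

```python
import numpy as np

def poisson_error(N):
    # Solve -u'' = f on [0, 2*pi) with periodic boundary conditions by
    # dividing Fourier coefficients by k^2, and compare with a known
    # smooth solution u(x) = exp(sin(x)) (defined up to a constant).
    x = 2 * np.pi * np.arange(N) / N
    u_exact = np.exp(np.sin(x))
    u_exact -= u_exact.mean()                 # fix the free additive constant
    # f = -u'' computed analytically for u = exp(sin x):
    f = (np.sin(x) - np.cos(x)**2) * np.exp(np.sin(x))
    fh = np.fft.fft(f)
    k = np.fft.fftfreq(N, d=1.0 / N)          # integer wavenumbers
    uh = np.zeros_like(fh)
    nz = k != 0
    uh[nz] = fh[nz] / k[nz]**2                # -u'' = f  =>  k^2 u_k = f_k
    u = np.real(np.fft.ifft(uh))
    u -= u.mean()
    return np.max(np.abs(u - u_exact))

for N in (8, 16, 32):
    print(N, poisson_error(N))   # error drops sharply each time N doubles
```

Doubling N repeatedly drives the error down to roundoff, whereas a finite-difference method of fixed order converges only polynomially in N; this gap is what the adaptive and spectral approaches exploit.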
The detailed complexity dependence is presented in Theorem 3.1 and Theorem 3.2, and is further discussed in Section 3.5. Table 3.1 compares the performance of our approaches to other classical and quantum algorithms for PDEs. Compared to classical algorithms, quantum algorithms improve the dependence on spatial dimension from exponential to polynomial (with the significant caveat that they produce a different representation of the solution). Compared to previous quantum FDM/FEM/FVM algorithms [57, 69, 70, 94], the quantum adaptive FDM and quantum spectral method improve the error dependence from $\mathrm{poly}(1/\epsilon)$ to $\mathrm{poly}(\log(1/\epsilon))$. Our approaches achieve the best known dependence on the parameter $\epsilon$ for the Poisson equation with homogeneous boundary conditions. Furthermore, our quantum spectral method approach not only achieves the best known dependence on $d$ and $\epsilon$ for elliptic PDEs with inhomogeneous Dirichlet boundary conditions, but also improves the dependence on $d$ for the Poisson equation with inhomogeneous Dirichlet boundary conditions, as compared to previous quantum algorithms.

The remainder of the chapter is structured as follows. Section 3.2 introduces technical details about linear PDEs and formally states the problem we solve. Section 3.3 covers our FDM algorithm for Poisson's equation. Section 3.4 details the spectral algorithm for elliptic PDEs. Finally, Section 3.5 concludes with a brief discussion of the results, their possible applications, and some open problems.

Algorithm                              Equation     Boundary conditions       Complexity
Classical:
  FDM/FEM/FVM                          general      general                   $\mathrm{poly}((1/\epsilon)^d)$
  Adaptive FDM/FEM [71]                general      general                   $\mathrm{poly}((\log(1/\epsilon))^d)$
  Spectral method [67, 68]             general      general                   $\mathrm{poly}((\log(1/\epsilon))^d)$
  Sparse grid FDM/FEM [95, 96]         general      general                   $\mathrm{poly}((1/\epsilon)(\log(1/\epsilon))^d)$
  Sparse grid spectral method [97, 98] elliptic     general                   $\mathrm{poly}(\log(1/\epsilon)(\log\log(1/\epsilon))^d)$
Quantum:
  FEM [57]                             Poisson      homogeneous               $\mathrm{poly}(d, 1/\epsilon)$
  FDM [69]                             Poisson      homogeneous Dirichlet     $d\,\mathrm{poly}(\log d, 1/\epsilon)$
  FDM [70]                             wave         homogeneous               $d^{5/2}\,\mathrm{poly}(1/\epsilon)$
  FVM [94]                             hyperbolic   periodic                  $d\,\mathrm{poly}(1/\epsilon)$
  Adaptive FDM [59]                    Poisson      periodic, homogeneous     $d^{13/2}\,\mathrm{poly}(\log d, \log(1/\epsilon))$
  Spectral method [59]                 Poisson      homogeneous Dirichlet     $d\,\mathrm{poly}(\log d, \log(1/\epsilon))$
  Spectral method [59]                 elliptic     inhomogeneous Dirichlet   $d^2\,\mathrm{poly}(\log(1/\epsilon))$

Table 3.1: Summary of the time complexities of classical and quantum algorithms for d-dimensional PDEs with error tolerance $\epsilon$. Portions of the complexity in bold represent the best known dependence on that parameter.

3.2 Linear PDEs

In this chapter, we focus on systems of linear PDEs. Such equations can be written in the form

    $\mathcal{L}(u(x)) = f(x)$,    (3.1)

where the variable $x = (x_1, \ldots, x_d) \in \mathbb{C}^d$ is a $d$-dimensional vector, the solution $u(x) \in \mathbb{C}$ and the inhomogeneity $f(x) \in \mathbb{C}$ are scalar functions, and $\mathcal{L}$ is a linear differential operator acting on $u(x)$. In general, $\mathcal{L}$ can be written as a linear combination of $u(x)$ and its derivatives. A linear differential operator $\mathcal{L}$ of order $h$ has the form

    $\mathcal{L}(u(x)) = \sum_{\|j\|_1 \le h} A_j(x)\, \frac{\partial^j}{\partial x^j} u(x)$,    (3.2)

where $j = (j_1, \ldots, j_d)$ is a $d$-dimensional non-negative vector with $\|j\|_1 = j_1 + \cdots + j_d \le h$, $A_j(x) \in \mathbb{C}$, and

    $\frac{\partial^j}{\partial x^j} u(x) = \frac{\partial^{j_1}}{\partial x_1^{j_1}} \cdots \frac{\partial^{j_d}}{\partial x_d^{j_d}} u(x)$.    (3.3)

The problem reduces to a system of linear ordinary differential equations (ODEs) when $d = 1$. For $d \ge 2$, we call (3.1) a (multi-dimensional) PDE. For example, systems of first-order linear PDEs can be written in the form

    $\sum_{j=1}^d A_j(x)\, \frac{\partial u(x)}{\partial x_j} + A_0(x)\, u(x) = f(x)$,    (3.4)

where $A_j(x), A_0(x), f(x) \in \mathbb{C}$ for $j \in [d] := \{1, \ldots, d\}$. Similarly, systems of second-order linear PDEs can be expressed in the form

    $\sum_{j_1, j_2 = 1}^d A_{j_1 j_2}(x)\, \frac{\partial^2 u(x)}{\partial x_{j_1} \partial x_{j_2}} + \sum_{j=1}^d A_j(x)\, \frac{\partial u(x)}{\partial x_j} + A_0(x)\, u(x) = f(x)$,    (3.5)

where $A_{j_1 j_2}(x), A_j(x), A_0(x), f(x) \in \mathbb{C}$ for $j_1, j_2, j \in [d]$. A well-known second-order linear PDE is the Poisson equation

    $\Delta u(x) := \sum_{j=1}^d \frac{\partial^2}{\partial x_j^2} u(x) = f(x)$.    (3.6)

A linear PDE of order $h$ is called elliptic if its differential operator (3.2) satisfies

    $\sum_{\|j\|_1 = h} A_j(x)\, \xi^j \ne 0$    (3.7)

for all nonzero $\xi^j = \xi_1^{j_1} \cdots \xi_d^{j_d}$ with $\xi_1, \ldots, \xi_d \in \mathbb{R}$ and all $x$. Note that ellipticity only depends on the highest-order terms. When $h = 2$, the linear PDE (3.5) is called a second-order elliptic PDE if and only if $A_{j_1 j_2}(x)$ is positive-definite or negative-definite for any $x$. In particular, the Poisson equation (3.6) is a second-order elliptic PDE.

We consider a class of elliptic PDEs that also satisfy the condition

    $C := 1 - \sum_{j_1 = 1}^d \frac{1}{|A_{j_1, j_1}(x)|} \sum_{j_2 \in [d] \setminus \{j_1\}} |A_{j_1, j_2}(x)| > 0$    (3.8)

for all $x$. We call this condition global strict diagonal dominance, since it is a strengthening of the standard (strict) diagonal dominance condition

    $1 - \frac{1}{|A_{j_1, j_1}(x)|} \sum_{j_2 \in [d] \setminus \{j_1\}} |A_{j_1, j_2}(x)| > 0, \quad j_1 \in [d]$.    (3.9)

Observe that (3.8) holds for the Poisson equation (3.6) with $C = 1$.

In this chapter, we focus on the following boundary value problem. This is a formal statement of Problem 1.4.

Problem 3.1. In the quantum PDE problem, we are given a system of second-order elliptic equations

    $\mathcal{L}(u(x)) = \sum_{\|j\|_1 = 2} A_j\, \frac{\partial^j}{\partial x^j} u(x) = \sum_{j_1, j_2 = 1}^d A_{j_1 j_2}\, \frac{\partial^2 u(x)}{\partial x_{j_1} \partial x_{j_2}} = f(x)$    (3.10)

satisfying the global strict diagonal dominance condition (3.8), where the variable $x = (x_1, \ldots, x_d) \in D = [-1, 1]^d$ is a $d$-dimensional vector, the inhomogeneity $f(x) \in \mathbb{C}$ is a scalar function of $x$ satisfying $f(x) \in C^\infty$, and the linear coefficients $A_j \in \mathbb{C}$. We are also given boundary conditions $u(x) = \gamma(x)$ for $x \in \partial D$, or $\frac{\partial u(x)}{\partial x_j}\big|_{x_j = \pm 1} = \gamma(x)$ for $x \in \partial D_j$, where $\gamma(x) \in C^\infty$. We assume there exists a weak solution $u^*(x) \in \mathbb{C}$ for the boundary value problem (see Reference [99, Section 6.1.2]). We are given sparse matrix oracles that provide the locations and values of the nonzero entries of the matrices $A_{j_1 j_2}(x)$, $A_j(x)$, and $A_0(x)$ on a set of interpolation nodes $x$ (for instance, $A_{j_1 j_2}(x)$ is modeled by a sparse matrix oracle that, on input $(m, l)$, gives the location of the $l$-th nonzero entry in row $m$, denoted as $n$, and gives the value $A_{j_1 j_2}(x)_{m,n}$), and oracles that prepare normalized states $|\gamma(x)\rangle$ and $|f(x)\rangle$ whose amplitudes are proportional to $\gamma(x)$ and $f(x)$ on a set of interpolation nodes $x$. The goal is to output a quantum state $|u(x)\rangle$
that is $\epsilon$-close to the normalized $u(x)$ on a set of interpolation nodes $x$ in $\ell^2$ norm.

3.3 Finite difference method

We now describe our first approach to quantum algorithms for linear PDEs, based on the finite difference method (FDM). Using this approach, we show the following.

Theorem 3.1. There exists a quantum algorithm that outputs a state $\epsilon$-close to $|u\rangle$ that runs in time

    $\widetilde{O}\!\left( d^{6.5} \log^{4.5}\!\left( \Big\| \tfrac{\mathrm{d}^{2k+1} u}{\mathrm{d}x^{2k+1}} \Big\| / \epsilon \right) \log\!\left[ d^4 \log^3\!\left( \Big\| \tfrac{\mathrm{d}^{2k+1} u}{\mathrm{d}x^{2k+1}} \Big\| / \epsilon \right) / \epsilon \right] \right)$    (3.11)

and makes

    $\widetilde{O}\!\left( d^4 \log^3\!\left( \Big\| \tfrac{\mathrm{d}^{2k+1} u}{\mathrm{d}x^{2k+1}} \Big\| / \epsilon \right) \log\!\left[ d^4 \log^3\!\left( \Big\| \tfrac{\mathrm{d}^{2k+1} u}{\mathrm{d}x^{2k+1}} \Big\| / \epsilon \right) / \epsilon \right] \right)$    (3.12)

queries to the oracle for $f$.

To show this, we first construct a linear system corresponding to the finite difference approximation of Poisson's equation with periodic boundary conditions and bound the error of this high-order FDM in Section 3.3.1 (Lemma 3.1). Then we bound the condition number of this system in Section 3.3.2 (Lemma 3.2 and Lemma 3.3) and bound the error of approximation in Section 3.3.3 (Lemma 3.4). We use these results to give an efficient quantum algorithm in Section 3.3.4, establishing Theorem 3.1. We conclude by discussing how to use the method of images to apply this algorithm for Neumann and Dirichlet boundary conditions in Section 3.3.5.

The FDM approximates the derivative of a function $f$ at a point $x$ in terms of the values of $f$ on a finite set of points near $x$. Generally there are no restrictions on where these points are located relative to $x$, but they are typically taken to be uniformly spaced points with respect to a certain coordinate. This corresponds to discretizing $[-1, 1]^d$ (or $[0, 2\pi)^d$) to a $d$-dimensional rectangular lattice (where we use periodic boundary conditions). For a scalar field, in which $u(x) \in$
$\mathbb{C}$, the canonical elliptic PDE is Poisson's equation (3.6), which we consider solving on $[0, 2\pi)^d$ with periodic boundary conditions. This also implies results for the domain $D = [-1, 1]^d$ under Dirichlet ($u(\partial D) = 0$) and Neumann ($\hat{n} \cdot \nabla u(\partial D) = 0$, where $\hat{n}$ denotes the normal direction to $\partial D$, which for the domain $D = [-1, 1]^d$ is equivalent to $\frac{\partial u}{\partial x_j}\big|_{x_j = \pm 1} = 0$ for $j \in [d]$) boundary conditions.

3.3.1 Linear system

To approximate the second derivatives appearing in Poisson's equation, we apply the central finite difference formula of order $2k$. Taking $x_j = jh$ for a lattice with spacing $h$, this formula gives the approximation

    $f''(0) \approx \frac{1}{h^2} \sum_{j=-k}^{k} r_j f(jh)$    (3.13)

where the coefficients are [6, 100]

    $r_j := \begin{cases} \dfrac{2(-1)^{j+1} (k!)^2}{j^2 (k-j)! (k+j)!} & j \in [k] \\[2pt] -2 \sum_{j=1}^{k} r_j & j = 0 \\[2pt] r_{-j} & j \in -[k]. \end{cases}$    (3.14)

We leave the dependence on $k$ implicit in this notation. The following lemma characterizes the error of this formula.

Lemma 3.1 ([6, Theorem 7]). Let $k \ge 1$ and suppose $f(x) \in C^{2k+1}$ for $x \in \mathbb{R}$. Define the coefficients $r_j$ as in (3.14). Then

    $\frac{\mathrm{d}^2 f(x_0)}{\mathrm{d}x^2} = \frac{1}{h^2} \sum_{j=-k}^{k} r_j f(x_0 + jh) + O\!\left( \Big\| \frac{\mathrm{d}^{2k+1} f}{\mathrm{d}x^{2k+1}} \Big\| (eh/2)^{2k-1} \right)$    (3.15)

where

    $\Big\| \frac{\mathrm{d}^{2k+1} f}{\mathrm{d}x^{2k+1}} \Big\| := \max_{y \in [x_0 - kh,\, x_0 + kh]} \Big| \frac{\mathrm{d}^{2k+1} f}{\mathrm{d}x^{2k+1}}(y) \Big|$.    (3.16)

Since we assume periodic boundary conditions and apply the same FDM formula at each lattice site, the matrices we consider are circulant. Define the $2n \times 2n$ matrix $S$ to have entries $S_{i,j} = \delta_{i,\, j+1 \bmod 2n}$. If we represent the solution $u(x)$ as a vector $u = \sum_{j=1}^{2n} u(\pi j / n)\, e_j$, then we can approximate Poisson's equation using a central difference formula as

    $\frac{1}{h^2} L u = \frac{1}{h^2} \left( r_0 I + \sum_{j=1}^{k} r_j \left( S^j + S^{-j} \right) \right) u = f$    (3.17)

where $f = \sum_{j=1}^{2n} f(\pi j / n)\, e_j$. The solution $u$ corresponds exactly with the quantum state we want to produce, so we do not have to perform any post-processing such as in Reference [70] and other quantum differential equation algorithms.
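As a concrete check on the stencil (3.13)-(3.14), the coefficients $r_j$ are simple to tabulate classically. The following sketch (plain Python, purely illustrative and not part of the quantum algorithm) reproduces the familiar second-order stencil $(1, -2, 1)$ at $k = 1$ and exhibits the high-order accuracy described by Lemma 3.1 on an arbitrary smooth test function.

```python
from math import factorial, sin

def central_coeffs(k):
    """Coefficients r_j of the order-2k central difference formula (3.14)."""
    r = {}
    for j in range(1, k + 1):
        r[j] = 2 * (-1) ** (j + 1) * factorial(k) ** 2 / (
            j ** 2 * factorial(k - j) * factorial(k + j))
        r[-j] = r[j]  # the second-derivative stencil is symmetric
    r[0] = -2 * sum(r[j] for j in range(1, k + 1))
    return r

def second_derivative(f, x0, h, k):
    """Approximate f''(x0) as in (3.13): (1/h^2) * sum_j r_j f(x0 + j h)."""
    r = central_coeffs(k)
    return sum(rj * f(x0 + j * h) for j, rj in r.items()) / h ** 2

# k = 1 recovers the textbook stencil; larger k gives higher-order accuracy.
print(central_coeffs(1))                       # {1: 1.0, -1: 1.0, 0: -2.0}
print(second_derivative(sin, 1.0, 0.1, k=3))   # close to -sin(1)
```

For $k = 2$ this reproduces the standard fourth-order stencil $(-\tfrac{1}{12}, \tfrac{4}{3}, -\tfrac{5}{2}, \tfrac{4}{3}, -\tfrac{1}{12})$.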
The matrix in this linear system is just the finite difference matrix, so it suffices to bound its condition number and approximation error (whereas previous quantum algorithms involved more complicated linear systems).

3.3.2 Condition number

The following lemma characterizes the condition number of a circulant Laplacian on $2n$ points.

Lemma 3.2. For $k < (6/\pi^2)^{1/3} n^{2/3}$, the matrix $L = r_0 I + \sum_{j=1}^{k} r_j (S^j + S^{-j})$ with $r_j$ as in (3.14) has condition number $\kappa(L) = O(n^2)$.

Proof. We first upper bound $\|L\|$ using Gershgorin's circle theorem [101] (a similar argument appears in Reference [6]). Note that

    $|r_j| = \frac{2 (k!)^2}{j^2 (k-j)! (k+j)!} \le \frac{2}{j^2}$    (3.18)

since

    $\frac{(k!)^2}{(k-j)! (k+j)!} = \frac{k (k-1) \cdots (k-j+1)}{(k+j)(k+j-1) \cdots (k+1)} < 1$.    (3.19)

The radii of the Gershgorin discs are

    $\sum_{j=1}^{k} 2 |r_j| \le \sum_{j=1}^{k} \frac{4}{j^2} \le \frac{2\pi^2}{3}$.    (3.20)

The discs are centered at $r_0$, and

    $|r_0| \le 2 \sum_{j=1}^{k} |r_j| \le \frac{2\pi^2}{3}$,    (3.21)

so $\|L\| \le \frac{4\pi^2}{3}$.

To upper bound $\|L^{-1}\|$, we lower bound the (absolute value of the) smallest nonzero eigenvalue of $L$ (since by construction the all-ones vector is a zero eigenvector). Let $\omega := \exp(\pi i / n)$. Since $L$ is circulant, its eigenvalues are

    $\lambda_l = r_0 + \sum_{j=1}^{k} r_j \left( \omega^{lj} + \omega^{-lj} \right)$    (3.22)
    $\quad\; = r_0 + \sum_{j=1}^{k} 2 r_j \cos\!\left( \frac{\pi l j}{n} \right)$    (3.23)
    $\quad\; = r_0 + \sum_{j=1}^{k} 2 r_j \left( 1 - \frac{\pi^2 l^2 j^2}{2 n^2} + \frac{(\pi c_j)^4}{4!\, n^4} \cos\!\left( \frac{\pi c_j}{n} \right) \right)$    (3.24)
    $\quad\; = \sum_{j=1}^{k} 2 r_j \left( - \frac{\pi^2 l^2 j^2}{2 n^2} + \frac{(\pi c_j)^4}{4!\, n^4} \cos\!\left( \frac{\pi c_j}{n} \right) \right)$    (3.25)

where the $c_j \in [0, lj]$ arise from the Taylor remainder theorem, and the last step uses the zero row sum $r_0 + 2 \sum_{j=1}^{k} r_j = 0$. Taking $l = 1$ (which gives the smallest nonzero eigenvalue in magnitude) and using (3.18), we have

    $\left| \lambda_1 + \frac{\pi^2}{n^2} \sum_{j=1}^{k} r_j j^2 \right| \le \frac{\pi^4 k^3}{6 n^4}$.    (3.26)

We now compute the sum

    $\sum_{j=1}^{k} r_j j^2 = \sum_{j=1}^{k} \frac{2 (-1)^{j+1} (k!)^2\, j^2}{j^2 (k+j)! (k-j)!}$    (3.27)
    $\quad\; = 2 (k!)^2 \sum_{j=1}^{k} \frac{(-1)^{j+1}}{(k+j)! (k-j)!}$    (3.28)
    $\quad\; = \frac{2 (k!)^2}{(2k)!} \sum_{j=1}^{k} (-1)^{j+1} \binom{2k}{k+j}$    (3.29)
    $\quad\; = \frac{2 (k!)^2}{(2k)!} \sum_{j=k+1}^{2k} (-1)^{j+k+1} \binom{2k}{j}$    (3.30)
    $\quad\; = \frac{(k!)^2}{(2k)!} (-1)^{k+1} \sum_{j=0,\, j \ne k}^{2k} (-1)^{j} \binom{2k}{j}$    (3.31)
    $\quad\; = \frac{(k!)^2}{(2k)!} (-1)^{k+1} \left( (1-1)^{2k} - (-1)^k \binom{2k}{k} \right)$    (3.32)
    $\quad\; = \frac{(k!)^2}{(2k)!} \binom{2k}{k} = 1,$    (3.33)

where (3.31) uses the symmetry $(-1)^j \binom{2k}{j} = (-1)^{2k-j} \binom{2k}{2k-j}$ to extend the sum over $j > k$ to a sum over all $j \ne k$ at the cost of a factor $\tfrac{1}{2}$. Therefore, we have

    $|\lambda_1| \ge \frac{\pi^2}{n^2} - \frac{\pi^4 k^3}{6 n^4}$.    (3.34)

Finally, we see that

    $\kappa(L) = \|L\| \|L^{-1}\|$    (3.35)
    $\quad\;\; \le \frac{4\pi^2}{3} \left( \frac{\pi^2}{n^2} - \frac{\pi^4 k^3}{6 n^4} \right)^{-1}$    (3.36)
    $\quad\;\; = \frac{4 n^2}{3} \left( 1 - \frac{\pi^2 k^3}{6 n^2} \right)^{-1}$,    (3.37)

which is $O(n^2)$ provided $k < (6/\pi^2)^{1/3} n^{2/3}$.

In $d$ dimensions, a similar analysis holds.

Lemma 3.3. For $k < (6/\pi^2)^{1/3} n^{2/3}$, let $L := r_0 I + \sum_{j=1}^{k} r_j (S^j + S^{-j})$ with $r_j$ as in (3.14). The matrix $\bar{L} := L \otimes I^{\otimes (d-1)} + I \otimes L \otimes I^{\otimes (d-2)} + \cdots + I^{\otimes (d-1)} \otimes L$ has condition number $\kappa(\bar{L}) = O(d n^2)$.

Proof. By the triangle inequality for spectral norms, $\|\bar{L}\| \le d \|L\|$. Since $L$ has zero-sum rows by construction, the all-ones vector lies in its kernel, and thus the smallest nonzero eigenvalue of $\bar{L}$ is the same as that of $L$. Therefore we have

    $\kappa(\bar{L}) \le \frac{4 d n^2}{3} \left( 1 - \frac{\pi^2 k^3}{6 n^2} \right)^{-1}$,    (3.38)

which is $O(d n^2)$ provided $k < (6/\pi^2)^{1/3} n^{2/3}$.

3.3.3 Error analysis

There are two types of error relevant to our analysis: the FDM error and the QLSA error. We assume that we are able to perfectly generate states proportional to $f$. The FDM errors arise from the remainder terms in the finite difference formulas and from inexact approximations of the eigenvalues.

We introduce several states for the purpose of error analysis. Let $|u\rangle$ be the quantum state that is proportional to $u = \sum_{j \in \mathbb{Z}_{2n}^d} u(\pi j / n) \bigotimes_{i=1}^{d} e_{j_i}$ for the exact solution of the differential equation. Let $|\bar{u}\rangle$ be the state output by a QLSA that exactly solves the linear system. Let $|\tilde{u}\rangle$ be the state output by a QLSA with error. Then the total error of approximating $|u\rangle$ by $|\tilde{u}\rangle$ is bounded by

    $\| |u\rangle - |\tilde{u}\rangle \| \le \| |u\rangle - |\bar{u}\rangle \| + \| |\bar{u}\rangle - |\tilde{u}\rangle \|$    (3.39)
    $\quad\quad\quad\quad\;\, = \epsilon_{\mathrm{FDM}} + \epsilon_{\mathrm{QLSA}}$    (3.40)

and without loss of generality we can take $\epsilon_{\mathrm{FDM}}$ and $\epsilon_{\mathrm{QLSA}}$ to be of the same order of magnitude.

Lemma 3.4. Let $u(x)$ be the exact solution of $\left( \sum_{i=1}^{d} \frac{\mathrm{d}^2}{\mathrm{d}x_i^2} \right) u(x) = f(x)$. Let $u \in \mathbb{R}^{(2n)^d}$ encode the exact solution in the sense that $u = \sum_{j \in \mathbb{Z}_{2n}^d} u(\pi j / n) \bigotimes_{i=1}^{d} e_{j_i}$. Let $\bar{u} \in \mathbb{R}^{(2n)^d}$ be the exact solution of the FDM linear system $\frac{1}{h^2} \bar{L} \bar{u} = f$, where $\bar{L}$ is a $d$-dimensional $(2k)$th-order Laplacian as above with $k < (6/\pi^2)^{1/3} n^{2/3}$, and $f = \sum_{j \in \mathbb{Z}_{2n}^d} f(\pi j / n) \bigotimes_{i=1}^{d} e_{j_i}$. Then $\| u - \bar{u} \| \le O\!\left( 2^{d/2} n^{(d/2) - 2k + 1} \big\| \tfrac{\mathrm{d}^{2k+1} u}{\mathrm{d}x^{2k+1}} \big\| (e/2)^{2k} \right)$.

Proof.
The remainder term of the central difference formula is $O\!\left( \big\| \tfrac{\mathrm{d}^{2k+1} u}{\mathrm{d}x^{2k+1}} \big\| h^{2k-1} (e/2)^{2k} \right)$, so

    $\frac{1}{h^2} \bar{L} u = f + O\!\left( \Big\| \frac{\mathrm{d}^{2k+1} u}{\mathrm{d}x^{2k+1}} \Big\| (eh/2)^{2k-1} \zeta \right)$    (3.41)

where $\zeta$ is a $(2n)^d$-dimensional vector whose entries are $O(1)$. This implies

    $\frac{1}{h^2} \bar{L} (u - \bar{u}) = O\!\left( \Big\| \frac{\mathrm{d}^{2k+1} u}{\mathrm{d}x^{2k+1}} \Big\| (eh/2)^{2k-1} \zeta \right)$    (3.42)

and therefore

    $\| u - \bar{u} \| = O\!\left( \Big\| \frac{\mathrm{d}^{2k+1} u}{\mathrm{d}x^{2k+1}} \Big\| (eh/2)^{2k+1} \|\zeta\| \|\bar{L}^{-1}\| \right)$    (3.43)
    $\quad\quad\quad\; = O\!\left( (2n)^{d/2} \Big\| \frac{\mathrm{d}^{2k+1} u}{\mathrm{d}x^{2k+1}} \Big\| (eh/2)^{2k+1} / \lambda_1 \right)$.    (3.44)

By Lemma 3.2 we have $\lambda_1 = \Theta(1/n^2)$, and since $h = \Theta(1/n)$, we have

    $\| u - \bar{u} \| = O\!\left( 2^{d/2} n^{(d/2) - 2k + 1} \Big\| \frac{\mathrm{d}^{2k+1} u}{\mathrm{d}x^{2k+1}} \Big\| (e/2)^{2k} \right)$    (3.45)

as claimed.

3.3.4 FDM algorithm

To apply QLSAs, we must consider the complexity of simulating Hamiltonians that correspond to Laplacian FDM operators. For periodic boundary conditions, the Laplacians are circulant, so they can be diagonalized by the QFT $F$ (or a tensor product of QFTs for the multi-dimensional Laplacian $\bar{L}$), i.e., $D = F^\dagger L F$ is diagonal. In this case the simplest way to simulate $\exp(iLt)$ is to perform the inverse QFT, apply controlled phase rotations to implement $\exp(iDt)$, and perform the QFT. Reference [102] shows how to exactly implement arbitrary diagonal unitaries on $m$ qubits using $O(2^m)$ gates. Since we consider Laplacians on $n$ lattice sites, simulating $\exp(iLt)$ takes $O(n)$ gates, with the dominant contribution coming from the phase rotations (alternatively, the methods of Reference [103] or Reference [104] could also be used). Using this Hamiltonian simulation algorithm in a QLSA for the FDM linear system gives us the following theorem.

We restate Theorem 3.1 as follows.

Theorem 3.1. There exists a quantum algorithm that outputs a state $\epsilon$-close to $|u\rangle$ that runs in time

    $\widetilde{O}\!\left( d^{6.5} \log^{4.5}\!\left( \Big\| \tfrac{\mathrm{d}^{2k+1} u}{\mathrm{d}x^{2k+1}} \Big\| / \epsilon \right) \log\!\left[ d^4 \log^3\!\left( \Big\| \tfrac{\mathrm{d}^{2k+1} u}{\mathrm{d}x^{2k+1}} \Big\| / \epsilon \right) / \epsilon \right] \right)$    (3.11)

and makes

    $\widetilde{O}\!\left( d^4 \log^3\!\left( \Big\| \tfrac{\mathrm{d}^{2k+1} u}{\mathrm{d}x^{2k+1}} \Big\| / \epsilon \right) \log\!\left[ d^4 \log^3\!\left( \Big\| \tfrac{\mathrm{d}^{2k+1} u}{\mathrm{d}x^{2k+1}} \Big\| / \epsilon \right) / \epsilon \right] \right)$    (3.12)

queries to the oracle for $f$.

Proof. We use the Fourier-series-based QLSA from Reference [46].
By Theorem 3 of that work, the QLSA makes $O(\kappa \log(\kappa / \epsilon_{\mathrm{QLSA}}))$ uses of a Hamiltonian simulation algorithm and of the oracle for the inhomogeneity. For Hamiltonian simulation we use $d$ parallel QFTs and phase rotations as described in Reference [102], for a total of $O(d n \kappa \log(\kappa / \epsilon_{\mathrm{QLSA}}))$ gates. The condition number for the $d$-dimensional Laplacian scales as $\kappa = O(d n^2)$. We take $\epsilon_{\mathrm{FDM}}$ and $\epsilon_{\mathrm{QLSA}}$ to be of the same order and just write $\epsilon$. Then the QLSA has time complexity $O(d^2 n^3 \log(d n^2 / \epsilon))$ and query complexity $O(d n^2 \log(d n^2 / \epsilon))$.

The adjustable parameters are the number of lattice sites $n$ and the order $2k$ of the finite difference formula. To keep the error below the target error of $\epsilon$ we require

    $2^{d/2} n^{(d/2) - 2k + 1} \Big\| \frac{\mathrm{d}^{2k+1} u}{\mathrm{d}x^{2k+1}} \Big\| (e/2)^{2k} = O(\epsilon)$,    (3.46)

or equivalently,

    $-\frac{d}{2} + \left( 2k - 1 - \frac{d}{2} \right) \log(n) - 2k \log(e/2) = \Omega\!\left( \log\!\left( \Big\| \frac{\mathrm{d}^{2k+1} u}{\mathrm{d}x^{2k+1}} \Big\| / \epsilon \right) \right)$.    (3.47)

Now we focus on the choice of the adjustable $n$ and $k$ relying on $\epsilon$. This procedure is inspired by the classical adaptive FDM [71], so we call it the adaptive FDM approach. We must have $2k - 1 > d/2$ for the left-hand side of (3.47) to be positive for large $n$. Indeed, we find the best performance by taking $k$ as large as possible subject to the assumption of Lemma 3.2, i.e., $k = c n^{2/3}$ where $c := (6/\pi^2)^{1/3}$. For this choice of $k$ and for $n$ sufficiently large, (3.47) is equivalent to

    $k \log(n) = c n^{2/3} \log(n) = \Theta\!\left( \log\!\left( \Big\| \frac{\mathrm{d}^{2k+1} u}{\mathrm{d}x^{2k+1}} \Big\| / \epsilon \right) \right)$.    (3.48)

To satisfy the condition $2 c n^{2/3} - 1 > d/2$, we must have $n = \Omega(d^{3/2})$. Combining this observation with (3.48), we choose

    $n = \Theta\!\left( d^{3/2} \log^{3/2}\!\left( \Big\| \frac{\mathrm{d}^{2k+1} u}{\mathrm{d}x^{2k+1}} \Big\| / \epsilon \right) \right)$    (3.49)

so that

    $k = c n^{2/3} = \Theta\!\left( d \log\!\left( \Big\| \frac{\mathrm{d}^{2k+1} u}{\mathrm{d}x^{2k+1}} \Big\| / \epsilon \right) \right)$.    (3.50)

The QLSA then has the stated time complexity

    $\widetilde{O}(d^2 n^3 \log(d n^2 / \epsilon)) = O\!\left( d^{6.5} \log^{4.5}\!\left( \Big\| \tfrac{\mathrm{d}^{2k+1} u}{\mathrm{d}x^{2k+1}} \Big\| / \epsilon \right) \log\!\left[ d^4 \log^3\!\left( \Big\| \tfrac{\mathrm{d}^{2k+1} u}{\mathrm{d}x^{2k+1}} \Big\| / \epsilon \right) / \epsilon \right] \right)$,    (3.51)

and makes

    $\widetilde{O}(d n^2 \log(d n^2 / \epsilon)) = O\!\left( d^4 \log^3\!\left( \Big\| \tfrac{\mathrm{d}^{2k+1} u}{\mathrm{d}x^{2k+1}} \Big\| / \epsilon \right) \log\!\left[ d^4 \log^3\!\left( \Big\| \tfrac{\mathrm{d}^{2k+1} u}{\mathrm{d}x^{2k+1}} \Big\| / \epsilon \right) / \epsilon \right] \right)$    (3.52)

queries to the oracle for $f$.
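The $\kappa(L) = O(n^2)$ scaling from Lemma 3.2, which drives the parameter choices above, can be observed directly on small instances. A dense classical sketch (illustrative only; the restricted condition number divides by the smallest nonzero eigenvalue, since the all-ones vector spans the kernel of $L$):

```python
import numpy as np
from math import factorial

def circulant_laplacian(n, k):
    """The 2n x 2n matrix L = r_0 I + sum_{j=1}^k r_j (S^j + S^{-j}) of Lemma 3.2."""
    N = 2 * n
    r = np.zeros(N)
    for j in range(1, k + 1):
        r[j] = 2 * (-1) ** (j + 1) * factorial(k) ** 2 / (
            j ** 2 * factorial(k - j) * factorial(k + j))
        r[N - j] = r[j]  # symmetric stencil wraps around
    r[0] = -2 * r[1:k + 1].sum()
    # circulant matrix: row m is the generator r cyclically shifted by m
    return np.array([np.roll(r, m) for m in range(N)])

def restricted_condition_number(L):
    """||L|| divided by the smallest nonzero |eigenvalue| of L."""
    ev = np.abs(np.linalg.eigvalsh(L))
    return ev.max() / ev[ev > 1e-8].min()

# Doubling n roughly quadruples the condition number, i.e. kappa = O(n^2).
for n in (4, 8, 16):
    print(n, restricted_condition_number(circulant_laplacian(n, k=1)))
```

One can also confirm numerically that $\|L\| \le 4\pi^2/3$ for larger stencil orders $k$, matching the Gershgorin bound in the proof.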
This can be compared to the cost of using the conjugate gradient method to solve the same linear system classically. The sparse conjugate gradient algorithm for an $N \times N$ matrix has time complexity $O(N s \sqrt{\kappa} \log(1/\epsilon))$. For arbitrary dimension $N = \Theta(n^d)$, we have $s = dk = c d n^{2/3}$ and $\kappa = O(d n^2)$, so that the time complexity is $O\!\left( d^{4 + 3d/2} \log(1/\epsilon) \log^{5/2 + 3d/2}\!\left( \big\| \tfrac{\mathrm{d}^{2k+1} u}{\mathrm{d}x^{2k+1}} \big\| / \epsilon \right) \right)$. Alternatively, $d$ fast Fourier transforms could be used, although this will generally take $\Theta(n^d) = \Theta\!\left( d^{3d/2} \log^{3d/2}\!\left( \big\| \tfrac{\mathrm{d}^{2k+1} u}{\mathrm{d}x^{2k+1}} \big\| / \epsilon \right) \right)$ time.

3.3.5 Boundary conditions via the method of images

We can apply the method of images to deal with homogeneous Neumann and Dirichlet boundary conditions using the algorithm for periodic boundary conditions described above. In the method of images, the domain $[-1, 1]$ is extended to include all of $\mathbb{R}$, and the boundary conditions are related to symmetries of the solutions. For a pair of Dirichlet boundary conditions there are two symmetries: the solutions are anti-symmetric about $-1$ (i.e., $f(-x - 1) = -f(x - 1)$) and anti-symmetric about $1$ (i.e., $f(1 + x) = -f(1 - x)$). Continuity and anti-symmetry about $-1$ and $1$ imply $f(-1) = f(1) = 0$, and furthermore that $f(x) = 0$ for all odd $x \in \mathbb{Z}$ and that $f(x + 4) = f(x)$ for all $x \in \mathbb{R}$. For Neumann boundary conditions, the solutions are instead symmetric about $-1$ and $1$, which also implies $f(x + 2) = f(x)$ for all $x \in \mathbb{R}$.

We would like to combine the method of images with the FDM to arrive at finite difference formulas for this special case. In both cases, the method of images implies that the solutions are periodic, so without loss of generality we can consider a lattice on $[0, 2\pi)$ instead of a lattice on $\mathbb{R}$. It is useful to think of this lattice in terms of the cycle graph on $2n$ vertices, i.e., $(V, E) = (\mathbb{Z}_{2n}, \{(i, i+1) \mid i \in \mathbb{Z}_{2n}\})$, which means that the vectors encoding the solution $u(x)$ will lie in $\mathbb{R}^{2n}$. Let each vector $e_j$ correspond to the vertex $j$.
Then we divide $\mathbb{R}^{2n}$ into a symmetric and an anti-symmetric subspace, namely $\mathrm{span}\{e_j + e_{2n+1-j}\}_{j=1}^{n}$ and $\mathrm{span}\{e_j - e_{2n+1-j}\}_{j=1}^{n}$, respectively. Vectors lying in the symmetric subspace correspond to solutions that are symmetric about $0$ and $\pi$, so they obey Neumann boundary conditions at $0$ and $\pi$; similarly, vectors in the anti-symmetric subspace correspond to solutions obeying Dirichlet boundary conditions at $0$ and $\pi$. Restricting to a subspace of vectors reduces the size of the FDM vectors and matrices we consider, and the symmetry of that subspace indicates how to adjust the coefficients.

If the FDM linear system is $L'' u'' = f''$ then $L''$ has entries

    $L''_{i,j} = \begin{cases} r_{|i-j|} \pm r_{i+j-1} & i \le k \\ r_{|i-j|} & k < i \le n - k \\ r_{|i-j|} \pm r_{2n-i-j+1} & i > n - k \end{cases}$    (3.53)

where $+$ ($-$) is chosen for Neumann (Dirichlet) boundary conditions and, due to the truncation order $k$, $r_j = 0$ for any $j > k$. This is similar to how Laplacian coefficients are modified when imposing boundary conditions in discrete variable representations [105].

For the purpose of solving the new linear systems using quantum algorithms, we still treat these cases as obeying periodic boundary conditions. We assume access to an oracle that produces states $|f''\rangle$ proportional to the inhomogeneity $f''(x)$. Then we apply the QLSA for periodic boundary conditions using $|f''\rangle |\pm\rangle$ to encode the inhomogeneity, which will output solutions of the form $|u''\rangle |\pm\rangle$. Here the ancillary state is chosen to be $|+\rangle$ ($|-\rangle$) for Neumann (Dirichlet) boundary conditions.

Typically, the (second-order) graph Laplacian for the path graph with Dirichlet boundary conditions has diagonal entries that are all equal to $2$; however, using the above specification for the entries of $L''$ leads to the $(1,1)$ and $(n,n)$ entries being $3$ while the rest of the diagonal entries are $2$. To reproduce this case, we consider an alternative subspace restriction used in Reference [106] to diagonalize the Dirichlet graph Laplacian.
In this case it is easiest to consider the lattice of a cycle graph on $2n + 2$ vertices, where the vertices $0$ and $n + 1$ are selected as boundary points where the field takes the value $0$. The relevant anti-symmetric subspace is now $\mathrm{span}(\{e_j - e_{2n+2-j}\}_{j=1}^{n})$ (which has no support on $e_0$ and $e_{n+1}$). If we again write the linear system as $L'' u'' = f''$, then the Laplacian has entries

    $L''_{i,j} = \begin{cases} r_{|i-j|} - r_{i+j} & i \le k \\ r_{|i-j|} & k < i \le n - k \\ r_{|i-j|} - r_{2n-i-j+2} & i > n - k. \end{cases}$

We again assume access to an oracle producing states proportional to $f''(x)$; however, we assume that this oracle operates in a Hilbert space with one additional dimension compared to the previous approaches (i.e., whereas previously we considered implementing $U$, here we consider implementing $\begin{pmatrix} U & 0 \\ 0 & 1 \end{pmatrix}$). With this oracle we again prepare the state $|f''\rangle |-\rangle$ and solve Poisson's equation for periodic boundary conditions to output a state $|u''\rangle |-\rangle$ (where $|u''\rangle$ lies in an $(n+1)$-dimensional Hilbert space but has no support on the $(n+1)$st basis state).

3.4 Multi-dimensional spectral method

We now turn our attention to the spectral method for multi-dimensional PDEs. Since interpolation facilitates constructing a straightforward linear system, we develop a quantum algorithm based on the pseudo-spectral method [67, 68, 107] for second-order elliptic equations with global strict diagonal dominance, under various boundary conditions. Using this approach, we show the following.

Theorem 3.2. Consider an instance of the quantum PDE problem as defined in Problem 3.1 with Dirichlet boundary conditions (3.81). Then there exists a quantum algorithm that produces a state in the form of (3.82) whose amplitudes are proportional to $u(x)$ on a set of interpolation nodes $x$ (with respect to the uniform grid nodes for periodic boundary conditions or the Chebyshev-Gauss-Lobatto quadrature nodes for non-periodic boundary conditions, as defined in (3.60)), where $u(x)/\|u(x)\|$ is $\epsilon$-close to $u^*(x)/\|u^*(x)\|$
in $\ell^2$ norm for all nodes $x$, succeeding with probability $\Omega(1)$, with a flag indicating success, using

    $\left( \frac{d \|A\|_1}{C \|A\|_\Sigma} + q \right) d^2\, \mathrm{poly}(\log(g'/g\epsilon))$    (3.54)

queries to oracles as defined in Section 3.4.4. Here $\|A\|_1 := \sum_{\|j\|_1 \le h} |A_j|$, $\|A\|_\Sigma := \sum_{j=1}^{d} |A_{j,j}|$, $C > 0$ is defined in (3.8), and

    $g := \min_{x} \|u^*(x)\|, \qquad g' := \max_{n \in \mathbb{N}} \max_{x} \|u^{*(n+1)}(x)\|$,    (3.55)

    $q := \sqrt{ \frac{ \sum_{\|k\|_\infty \le n} \left( \hat{f}_k^2 + \sum_{j=1}^{d} \left( (A_{j,j} \hat{\gamma}_k^{j+})^2 + (A_{j,j} \hat{\gamma}_k^{j-})^2 \right) \right) }{ \sum_{\|k\|_\infty \le n} \left( \hat{f}_k + \sum_{j=1}^{d} A_{j,j} (\hat{\gamma}_k^{j+} + \hat{\gamma}_k^{j-}) \right)^2 } }$.    (3.56)

The gate complexity is larger than the query complexity by a factor of $\mathrm{poly}(\log(d \|A\|_1 / \epsilon))$.

After introducing the method, we discuss the complexity of the quantum shifted Fourier transform (Lemma 3.5) and the quantum cosine transform (Lemma 3.6) in Section 3.4.1. These transforms are used as subroutines in our algorithm. Then we construct a linear system whose solution encodes the solution of the PDE in Section 3.4.2, analyze its condition number in Section 3.4.3 (Lemma 3.10, established using Lemma 3.7, Lemma 3.8, and Lemma 3.9), and consider the complexity of state preparation in Section 3.4.4 (Lemma 3.11). Finally, we prove our main result (Theorem 3.2) in Section 3.4.5.

In the spectral approach, we approximate the exact solution $u^*(x)$ by a linear combination of basis functions

    $u(x) = \sum_{\|k\|_\infty \le n} c_k \phi_k(x)$    (3.57)

for some $n \in \mathbb{Z}^+$. Here $k = (k_1, \ldots, k_d)$ with $k_j \in [n+1]_0 := \{0, 1, \ldots, n\}$, $c_k \in \mathbb{C}$, and

    $\phi_k(x) = \prod_{j=1}^{d} \phi_{k_j}(x_j), \quad j \in [d]$.    (3.58)

We choose different basis functions for the case of periodic boundary conditions and for the more general case of non-periodic boundary conditions. When the boundary conditions are periodic, the algorithm implementation is more straightforward, and in some cases (e.g., for the Poisson equation), can be faster. Specifically, for any $k_j \in [n+1]_0$ and $x_j \in [-1, 1]$, we take

    $\phi_{k_j}(x_j) = \begin{cases} e^{i (k_j - \lfloor n/2 \rfloor) \pi x_j}, & \text{periodic conditions,} \\ T_{k_j}(x_j) := \cos(k_j \arccos x_j), & \text{non-periodic conditions.} \end{cases}$    (3.59)

Here $T_k$ is the degree-$k$ Chebyshev polynomial of the first kind.
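The exponential convergence for smooth functions that motivates this choice of basis is easy to observe numerically. A small classical sketch using NumPy's Chebyshev utilities (the test function $e^x \cos 3x$ is an arbitrary smooth example, not taken from the text):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

f = lambda x: np.exp(x) * np.cos(3 * x)   # a smooth function on [-1, 1]
x = np.linspace(-1, 1, 1001)

errors = []
for n in (4, 8, 16, 32):
    coef = C.chebinterpolate(f, n)        # degree-n Chebyshev interpolant
    errors.append(np.abs(C.chebval(x, coef) - f(x)).max())
# The max error drops roughly geometrically as n grows.
print(errors)
```

This is the numerical face of the statement below that $n = \mathrm{poly}(\log(1/\epsilon))$ suffices for $C^\infty$ solutions.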
The coefficients $c_k$ are determined by demanding that $u(x)$ satisfies the PDE and boundary conditions at a set of interpolation nodes $\{\chi_l = (\chi_{l_1}, \ldots, \chi_{l_d})\}_{\|l\|_\infty \le n}$ with $l_j \in [n+1]_0$, where

    $\chi_{l_j} = \begin{cases} \dfrac{2 l_j}{n+1} - 1, & \text{periodic conditions,} \\[2pt] \cos \dfrac{\pi l_j}{n}, & \text{non-periodic conditions.} \end{cases}$    (3.60)

Here $\{\frac{2l}{n+1} - 1 : l \in [n+1]_0\}$ are called the uniform grid nodes, and $\{\cos \frac{\pi l}{n} : l \in [n+1]_0\}$ are called the Chebyshev-Gauss-Lobatto quadrature nodes. We require the numerical solution $u(x)$ to satisfy

    $\mathcal{L}(u(\chi_l)) = f(\chi_l), \quad \forall\, l_j \in [n+1]_0,\ j \in [d]$.    (3.61)

We would like to be able to increase the accuracy of the approximation by increasing $n$, so that

    $\|u^*(x) - u(x)\| \to 0 \quad \text{as } n \to \infty$.    (3.62)

The convergence behavior of the spectral method is related to the smoothness of the solution. For a solution in $C^{r+1}$, the spectral method approximates the solution with $n = \mathrm{poly}(1/\epsilon)$. Furthermore, if the solution is in $C^\infty$, the spectral method approximates the solution to within $\epsilon$ using only $n = \mathrm{poly}(\log(1/\epsilon))$ [68].

Since we require $k_j \in [n+1]_0$ for all $j \in [d]$, we have $(n+1)^d$ terms in total. Consequently, a classical pseudo-spectral method solves multi-dimensional PDEs with complexity $\mathrm{poly}(\log^d(1/\epsilon))$. Such classical spectral methods rapidly become infeasible since the number of coefficients $(n+1)^d$ grows exponentially with $d$. Here we develop a quantum algorithm for multi-dimensional PDEs. The algorithm applies techniques from the quantum spectral method for ODEs [54]. However, in the case of PDEs, the linear system to be solved is non-sparse. We address this difficulty using a quantum transform that restores sparsity.

3.4.1 Quantum shifted Fourier transform and quantum cosine transform

The well-known quantum Fourier transform (QFT) can be regarded as an analogue of the discrete Fourier transform (DFT) acting on the amplitudes of a quantum state. The QFT maps the $(n+1)$-dimensional quantum state $v = (v_0, v_1, \ldots, v_n) \in \mathbb{C}^{n+1}$ to the state $\tilde{v} = (\tilde{v}_0, \tilde{v}_1, \ldots, \tilde{v}_n) \in \mathbb{C}^{n+1}$ with
2?ikl)v?l = exp vk, l ? [n+ 1]0. (3.63) n+ 1 n+ 1 k=0 In other words, the QFT is the unitary transform n 1 ? (2?ikl) Fn := ? exp |l??k|. (3.64) n+ 1 n+ 1 k,l=0 Here we also consider the quantum shifted Fourier transform (QSFT), an analogue of the classical shifted discrete Fourier transform, which maps v ? Cn+1 to v? ? Cn+1 with ?n1 (? 2?i(k ? ?n/2?)(l ? (n+ 1)/2))v?l = exp vk, l ? [n+ 1]0. (3.65) n+ 1 n+ 1 k=0 In other words, the QSFT is the unitary transform ?n ( ) F s 1 2?i(k ? ?n/2?)(l ? (n+ 1)/2) n := ? exp |l??k|. (3.66) n+ 1 n+ 1 k,l=0 90 We define the multi-dimensional QSFT by the tensor product, namely s : ? 1 ? ?d (2?i(kj??n/2?)(l ?(n+1)/2))Fn = exp j |l1? . . . |ld??k1| . . . ?kd|, (n+ 1)d n+1?k??,?l???n j=1 (3.67) where k = (k1, . . . , kd) and l = (l1, . . . , ld) are d-dimensional vectors with kj, lj ? [n]0. The QSFT can be efficiently implemented as follows: Lemma 3.5. The QSFT F sn defined by (3.66) can be performed with gate complexity O(log n log log n). More generally, the d-dimensional QSFT F sn defined by (3.67) can be performed with gate complexity O(d log n log log n). Proof. The unitary matrix F sn can be written as the product of three unitary matrices F sn = SnFnRn, (3.68) where ?n ( 2?ik(n+ 1)/2) Rn = exp ? |k??k| (3.69) n+ 1 k=0 and ?n ( ?2?i ?n/2? (l ? (n+ 1)/2) ) Sn = exp |l??l|. (3.70) n+ 1 l=0 It is well known that Fn can be implemented with gate complexity O(log n log log n), and it is straightforward to implement Rn and Sn with gate complexity O(log n). Thus the total complexity is O(log n log log n). 91 We rewrite v in the form ? v = vk|k1? . . . |kd?, (3.71) ?k???n where vk ? C with k = (k1, . . . , kd), and each kj ? [n]0 for j ? [d]. The unitary matrix F sn can be written as the tensor product ?d F sn = F s n. (3.72) j=1 Performing the multi-dimensional QSFT is equivalent to performing the one-dimensional QSFT on each register. Thus, the gate complexity of performing F sn is O(d log n log log n). 
Another efficient quantum transformation is the quantum cosine transform (QCT) [108, 109]. The QCT can be regarded as an analogue of the discrete cosine transform (DCT). The QCT maps $v \in \mathbb{C}^{n+1}$ to $\tilde{v} \in \mathbb{C}^{n+1}$ with

    $\tilde{v}_l = \sqrt{\frac{2}{n}}\, \omega_l \sum_{k=0}^{n} \omega_k \cos\!\left( \frac{k l \pi}{n} \right) v_k, \quad l \in [n+1]_0$,    (3.73)

where

    $\omega_l := \begin{cases} \sqrt{\tfrac{1}{2}} & l = 0, n \\ 1 & l \in [n-1]. \end{cases}$    (3.74)

In other words, the QCT is the orthogonal transform

    $C_n := \sqrt{\frac{2}{n}} \sum_{k,l=0}^{n} \omega_l \omega_k \cos\!\left( \frac{k l \pi}{n} \right) |l\rangle\langle k|$.    (3.75)

Again we define the multi-dimensional QCT by the tensor product, namely

    $\mathbf{C}_n := \left( \sqrt{\frac{2}{n}} \right)^{d} \sum_{\|k\|_\infty, \|l\|_\infty \le n} \prod_{j=1}^{d} \omega_{k_j} \omega_{l_j} \cos\!\left( \frac{k_j l_j \pi}{n} \right) |l_1\rangle \cdots |l_d\rangle \langle k_1| \cdots \langle k_d|$,    (3.76)

where $k = (k_1, \ldots, k_d)$ and $l = (l_1, \ldots, l_d)$ are $d$-dimensional vectors with $k_j, l_j \in [n+1]_0$. The classical DCT on $(n+1)$-dimensional vectors takes $\Theta(n \log n)$ gates, while the QCT on $(n+1)$-dimensional quantum states can be implemented with complexity $\mathrm{poly}(\log n)$. According to Theorem 1 of Reference [108], the gate complexity of performing $C_n$ is $O(\log^2 n)$. We observe that this can be improved as follows.

Lemma 3.6. The quantum cosine transform $C_n$ defined by (3.75) can be performed with gate complexity $O(\log n \log\log n)$. More generally, the multi-dimensional QCT $\mathbf{C}_n$ defined by (3.76) can be performed with gate complexity $O(d \log n \log\log n)$.

Proof. According to the quantum circuit in Figure 2 of Reference [108], $C_n$ can be decomposed into a QFT $F_{n+1}$, a cyclic permutation

    $P_n = \begin{pmatrix} & & & 1 \\ 1 & & & \\ & \ddots & & \\ & & 1 & \end{pmatrix}$,    (3.77)

and additional operations with $O(1)$ cost. The QFT $F_{n+1}$ has gate complexity $O(\log n \log\log n)$. We then consider an alternative way to implement $P_n$ that improves over the approach in [110]. The permutation $P_n$ can be decomposed as

    $P_n = F_n T_n F_n^{-1}$,    (3.78)

where $F_n$ is the Fourier transform (3.64) and $T_n = \sum_{k=0}^{n} e^{-\frac{2\pi i k}{n+1}} |k\rangle\langle k|$ is diagonal. The gate complexities of performing $F_n$ and $T_n$ are $O(\log n \log\log n)$ and $O(\log n)$, respectively. It follows that $C_n$ can be implemented with circuit complexity $O(\log n \log\log n)$.
The matrix $\mathbf{C}_n$ can be written as the tensor product

    $\mathbf{C}_n = \bigotimes_{j=1}^{d} C_n$.    (3.79)

As in Lemma 3.5, performing the multi-dimensional QCT is equivalent to performing a QCT on each register. Thus, the gate complexity of performing $\mathbf{C}_n$ is $O(d \log n \log\log n)$.

3.4.2 Linear system

In this section we introduce the quantum PDE solver for the problem (3.1). We construct a linear system that encodes the solution of (3.1) according to the pseudo-spectral method introduced above, using the QSFT/QCT introduced in Section 3.4.1 to ensure sparsity. We consider a linear PDE problem (Problem 3.1) with periodic boundary conditions

    $u(x + 2v) = u(x) \quad \forall x \in D,\ \forall v \in \mathbb{Z}^d$    (3.80)

or non-periodic Dirichlet boundary conditions

    $u(x) = \gamma(x) \quad \forall x \in \partial D$.    (3.81)

According to the elliptic regularity theorem (Theorem 6 in Section 6.3 of Reference [99]), there exists a unique solution $u^*(x)$ in $C^\infty$ for Problem 3.1.

We now show how to apply the Fourier and Chebyshev pseudo-spectral methods to this problem. Our goal is to obtain the quantum state

    $|u\rangle \propto \sum_{\|k\|_\infty, \|l\|_\infty \le n} c_k \phi_k(\chi_l) |l_1\rangle \cdots |l_d\rangle$,    (3.82)

where $\phi_k(\chi_l)$ is defined by (3.58) using (3.59) for the appropriate boundary conditions (periodic or non-periodic). This state corresponds to a truncated Fourier/Chebyshev approximation and is $\epsilon$-close to the exact solution $u^*(\chi_l)$ with $n = \mathrm{poly}(\log(1/\epsilon))$ [68]. Note that this state encodes the values of the solution at the interpolation nodes (3.60) appropriate to the boundary conditions (the uniform grid nodes in the Fourier approach, for periodic boundary conditions, and the Chebyshev-Gauss-Lobatto quadrature nodes in the Chebyshev approach, for non-periodic boundary conditions).

Instead of developing our algorithm for the standard basis, we aim to produce a state

    $|c\rangle \propto \sum_{\|k\|_\infty \le n} c_k |k_1\rangle \cdots |k_d\rangle$    (3.83)

that is the inverse QSFT/QCT of $|u\rangle$. We then apply the QSFT/QCT to transform back into the interpolation node basis.
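Since $C_n$ of (3.75) is just the orthonormal DCT-I matrix, its defining properties (orthogonality, and the fact that it is symmetric and hence its own inverse, so the inverse QCT is again $C_n$) can be checked in a few lines. A NumPy sketch:

```python
import numpy as np

def qct(n):
    """The (n+1) x (n+1) orthogonal matrix C_n of (3.75)."""
    w = np.ones(n + 1)
    w[0] = w[n] = 1 / np.sqrt(2)          # the weights omega_l of (3.74)
    k = np.arange(n + 1)
    return np.sqrt(2 / n) * np.outer(w, w) * np.cos(np.pi * np.outer(k, k) / n)

C8 = qct(8)
assert np.allclose(C8 @ C8.T, np.eye(9))   # orthogonal
assert np.allclose(C8, C8.T)               # symmetric, hence C_n^{-1} = C_n
```

For $n = 1$ this reduces to the Hadamard matrix $\tfrac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$, the smallest instance of the transform.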
The truncated spectral series of the inhomogeneity f(x) and the boundary conditions γ(x) can be expressed as

f(x) = ∑_{‖k‖_∞≤n} f̂_k φ_k(x) (3.84)

and

γ(x) = ∑_{‖k‖_∞≤n} γ̂_k φ_k(x), (3.85)

respectively. We define quantum states |f⟩ and |γ⟩ by interpolating at the nodes {χ_l} defined by (3.60) as

|f⟩ ∝ ∑_{‖k‖_∞≤n, ‖l‖_∞≤n} φ_k(χ_l) f̂_k |l_1⟩…|l_d⟩ (3.86)

and

|γ⟩ ∝ ∑_{‖k‖_∞≤n, ‖l‖_∞≤n} φ_k(χ_l) γ̂_k |l_1⟩…|l_d⟩, (3.87)

respectively. These are the states that we assume we can produce using oracles. We perform the multi-dimensional inverse QSFT/QCT to obtain the states

|f̂⟩ ∝ ∑_{‖k‖_∞≤n} f̂_k |k_1⟩…|k_d⟩ (3.88)

and

|γ̂⟩ ∝ ∑_{‖k‖_∞≤n} γ̂_k |k_1⟩…|k_d⟩. (3.89)

Having defined these states, we now detail the construction of the linear system. At a high level, we construct two linear systems: one system Ax = f (where x corresponds to (3.83)) describes the differential equation, and another system Bx = g describes the boundary conditions. We combine these into a linear system with the form

Lx = (A + B)x = f + g. (3.90)

Even though we do not impose the two linear systems separately, we show that there exists a unique solution of (3.90) (which is therefore the solution of the simultaneous equations Ax = f and Bx = g), since we show that L has full rank, and indeed we upper bound its condition number in Section 3.4.3. Part of this linear system will correspond to just the differential equation

ℒ(u(χ_l)) = ∑_{‖j‖_1=2} A_j (∂^j/∂x^j) u(χ_l) = f(χ_l), (3.91)

while another part will come from imposing the boundary conditions on ∂D = ∪_{j∈[d]} ∂D_j, where ∂D_j := {x ∈ D | x_j = ±1} is a (d−1)-dimensional subspace. More specifically, the boundary conditions

u(χ_l) = γ(χ_l) ∀χ_l ∈ ∂D (3.92)

can be expressed as conditions on each boundary:

u(x_1, …, x_{j−1}, 1, x_{j+1}, …, x_d) = γ^{j+}, x ∈ ∂D_j, j ∈ [d],
u(x_1, …, x_{j−1}, −1, x_{j+1}, …, x_d) = γ^{j−}, x ∈ ∂D_j, j ∈ [d]. (3.93)
3.4.2.1 Linear system from the differential equation

To evaluate the matrix corresponding to the differential operator from (3.91), it is convenient to define coefficients c_k^{(j)} for ‖k‖_∞ ≤ n such that

(∂^j/∂x^j) u(x) = ∑_{‖k‖_∞≤n} c_k^{(j)} φ_k(x) (3.94)

for some fixed j ∈ N^d (as we explain below, such a decomposition exists for the choices of basis functions in (3.59)). Using this expression, we obtain the following linear equations for c_k^{(j)}:

∑_{‖j‖_1=2} A_j ∑_{‖k‖_∞≤n, ‖l‖_∞≤n} φ_k(χ_l) c_k^{(j)} |l_1⟩…|l_d⟩ = ∑_{‖k‖_∞≤n, ‖l‖_∞≤n} φ_k(χ_l) f̂_k |l_1⟩…|l_d⟩. (3.95)

To determine the transformation between c_k^{(j)} and c_k, we can make use of the differential properties of Fourier and Chebyshev series, namely

(d/dx) e^{ikπx} = ikπ e^{ikπx} (3.96)

and

2T_k(t) = T′_{k+1}(t)/(k+1) − T′_{k−1}(t)/(k−1), (3.97)

respectively. We have

c_k^{(j)} = ∑_{‖r‖_∞≤n} [D_n^{(j)}]_{kr} c_r, ‖k‖_∞ ≤ n, (3.98)

where D_n^{(j)} can be expressed as the tensor product

D_n^{(j)} = D_n^{j_1} ⊗ D_n^{j_2} ⊗ ⋯ ⊗ D_n^{j_d}, (3.99)

with j = (j_1, …, j_d). The matrix D_n for the Fourier basis functions in (3.59) can be written as the (n+1) × (n+1) diagonal matrix with entries

[D_n]_{kk} = i(k − ⌊n/2⌋)π. (3.100)

As detailed in Appendix A of Reference [54], the matrix D_n for the Chebyshev polynomials in (3.59) can be expressed as the (n+1) × (n+1) upper triangular matrix with nonzero entries

[D_n]_{kr} = 2r/σ_k, k + r odd, r > k, (3.101)

where

σ_k := 2 if k = 0, σ_k := 1 if k ∈ [n]. (3.102)

Substituting (3.99) into (3.95), with D_n defined by (3.100) in the periodic case or (3.101) in the non-periodic case, and performing the multi-dimensional inverse QSFT/QCT (for a reason that will be explained in the next section), we obtain the following linear equations for c_r:

∑_{‖j‖_1=2} A_j ∑_{‖k‖_∞≤n, ‖r‖_∞≤n} [D_n^{(j)}]_{kr} c_r |k_1⟩…|k_d⟩ = ∑_{‖k‖_∞≤n} f̂_k |k_1⟩…|k_d⟩. (3.103)

Notice that the matrices (3.100) and (3.101) are not full rank. More specifically, there exists at least one zero row in the matrix of (3.103) when using either (3.100) (k = ⌊n/2⌋) or (3.101) (k = n).
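The Chebyshev differentiation matrix of (3.101)-(3.102) can be spot-checked numerically: differentiating the coefficient vector of T_3(x) = 4x^3 − 3x should give the coefficients of 12x^2 − 3 = 3T_0(x) + 6T_2(x). A minimal sketch:

```python
import numpy as np

def cheb_diff_matrix(n):
    # [D_n]_{kr} = 2r / sigma_k for k + r odd and r > k, with
    # sigma_0 = 2 and sigma_k = 1 otherwise (equations (3.101)-(3.102)).
    sigma = np.ones(n + 1)
    sigma[0] = 2
    D = np.zeros((n + 1, n + 1))
    for k in range(n + 1):
        for r in range(k + 1, n + 1):
            if (k + r) % 2 == 1:
                D[k, r] = 2 * r / sigma[k]
    return D

n = 5
D = cheb_diff_matrix(n)
# T_3(x) = 4x^3 - 3x has derivative 12x^2 - 3 = 3*T_0(x) + 6*T_2(x).
c = np.zeros(n + 1); c[3] = 1.0
expected = np.zeros(n + 1); expected[0] = 3.0; expected[2] = 6.0
assert np.allclose(D @ c, expected)
```

The strictly upper triangular structure also makes the rank deficiency visible: the last row of D is identically zero, which is exactly the defect the boundary conditions repair below.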
To obtain an invertible linear system, we next introduce the boundary conditions.

3.4.2.2 Adding the linear system from the boundary conditions

When we use the form (3.82) of u(x) to write linear equations describing the boundary conditions (3.93), we obtain a non-sparse linear system. Thus, for each x ∈ ∂D_j in (3.93), we perform the (d−1)-dimensional inverse QSFT/QCT on the d−1 registers except the jth register to obtain the linear equations

∑_{‖k‖_∞≤n} c_k |k_1⟩…|k_{j−1}⟩|n⟩|k_{j+1}⟩…|k_d⟩ = ∑_{‖k‖_∞≤n, k_j=n} γ̂_k^{j+} |k_1⟩…|k_d⟩,
∑_{‖k‖_∞≤n} (−1)^{k_j} c_k |k_1⟩…|k_{j−1}⟩|n−1⟩|k_{j+1}⟩…|k_d⟩ = ∑_{‖k‖_∞≤n, k_j=n−1} γ̂_k^{j−} |k_1⟩…|k_d⟩ (3.104)

for all j ∈ [d], where the values of k_j indicate that we place these constraints in the last two rows with respect to the jth coordinate. We combine these equations with (3.103) to obtain the linear system

∑_{‖j‖_1=2} A_j ∑_{‖k‖_∞≤n, ‖r‖_∞≤n} [D̄_n^{(j)}]_{kr} c_r |k_1⟩…|k_d⟩ = ∑_{‖k‖_∞≤n} (f̂_k + ∑_{j=1}^{d} A_{j,j} γ̂_k^{j+} + A_{j,j} γ̂_k^{j−}) |k_1⟩…|k_d⟩, (3.105)

where

D̄_n^{(j)} := D_n^{(j)} + G_n^{(j)}, ‖j‖_1 = 2, ‖j‖_∞ = 2;
D̄_n^{(j)} := D_n^{(j)}, ‖j‖_1 = 2, ‖j‖_∞ = 1, (3.106)

with G_n^{(j)} defined below. In other words, D̄_n^{(j)} = D_n^{(j)} + G_n^{(j)} for each j that has exactly one entry equal to 2 and all other entries 0, whereas D̄_n^{(j)} = D_n^{(j)} for each j that has exactly two entries equal to 1 and all other entries 0. Here G_n^{(j)} can be expressed as the tensor product

G_n^{(j)} = I^{⊗(r−1)} ⊗ G_n ⊗ I^{⊗(d−r)}, (3.107)

where the rth entry of j is 2 and all other entries are 0. For the Fourier case in (3.59) used for periodic boundary conditions, D_n comes from (3.100), and the nonzero entries of G_n are

[G_n]_{⌊n/2⌋,k} = 1, k ∈ [n+1]_0. (3.108)

Alternatively, for the Chebyshev case in (3.59) used for non-periodic boundary conditions, D_n comes from (3.101), and the nonzero entries of G_n are

[G_n]_{n,k} = 1, k ∈ [n+1]_0,
[G_n]_{n−1,k} = (−1)^k, k ∈ [n+1]_0. (3.109)

The system (3.105) has the form of (3.90).
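To see how the boundary rows of (3.109) make the system invertible, the following sketch solves the one-dimensional model problem u″ = −2 on [−1, 1] with u(±1) = 0 in Chebyshev coefficient space; the exact solution is u(x) = 1 − x² = (1/2)T_0(x) − (1/2)T_2(x). This is a classical emulation of the linear system only, not of the quantum algorithm.

```python
import numpy as np

# First-derivative coefficient matrix from (3.101)-(3.102).
def cheb_diff_matrix(n):
    sigma = np.ones(n + 1); sigma[0] = 2
    D = np.zeros((n + 1, n + 1))
    for k in range(n + 1):
        for r in range(k + 1, n + 1):
            if (k + r) % 2 == 1:
                D[k, r] = 2 * r / sigma[k]
    return D

n = 8
D2 = cheb_diff_matrix(n) @ cheb_diff_matrix(n)   # second derivative; rows n-1, n are zero
L = D2.copy()
L[n, :] = 1.0                                     # u(1)  = sum_k c_k       (row of G_n)
L[n - 1, :] = (-1.0) ** np.arange(n + 1)          # u(-1) = sum_k (-1)^k c_k (row of G_n)
rhs = np.zeros(n + 1)
rhs[0] = -2.0                                     # f = -2, i.e. -2 * T_0(x)
c = np.linalg.solve(L, rhs)
# Exact solution u(x) = 1 - x^2 = (1/2) T_0(x) - (1/2) T_2(x).
expected = np.zeros(n + 1); expected[0] = 0.5; expected[2] = -0.5
assert np.allclose(c, expected)
```

Because the two zero rows of the squared differentiation matrix are exactly the rows where the boundary constraints are placed, adding G_n fills the rank defect without disturbing the differential-equation rows.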
For instance, the matrix in (3.90) for Poisson's equation (3.6) is

L_Poisson := D̄_n^{(2,0,…,0)} + D̄_n^{(0,2,…,0)} + ⋯ + D̄_n^{(0,0,…,2)} (3.110)
= ⊕_{j=1}^{d} D̄_n^{(2)} = D̄_n^{(2)} ⊗ I^{⊗(d−1)} + I ⊗ D̄_n^{(2)} ⊗ I^{⊗(d−2)} + ⋯ + I^{⊗(d−1)} ⊗ D̄_n^{(2)}. (3.111)

For periodic boundary conditions, using (3.98), (3.100), and (3.108), the second-order differential matrix D̄_n^{(2)} has nonzero entries

[D̄_n^{(2)}]_{k,k} = −((k − ⌊n/2⌋)π)², k ∈ [n+1]_0∖{⌊n/2⌋},
[D̄_n^{(2)}]_{⌊n/2⌋,k} = 1, k ∈ [n+1]_0. (3.112)

For non-periodic boundary conditions, using (3.98), (3.101), and (3.109), D̄_n^{(2)} has nonzero entries

[D̄_n^{(2)}]_{kr} = ∑_{l=k+1, k+l odd, l+r odd}^{r−1} [D_n]_{kl} [D_n]_{lr} = ∑_{l=k+1, k+l odd, l+r odd}^{r−1} (2l/σ_k)(2r/σ_l) = r(r² − k²)/σ_k, k + r even, r > k + 1,
[D̄_n^{(2)}]_{n,k} = 1, k ∈ [n+1]_0,
[D̄_n^{(2)}]_{n−1,k} = (−1)^k, k ∈ [n+1]_0. (3.113)

We discuss the invertible linear system (3.105) and upper bound its condition number in the following section.

3.4.3 Condition number

We now analyze the condition number of the linear system. We begin with two lemmas bounding the singular values of the matrices (3.112) and (3.113) that appear in the linear system.

Lemma 3.7. Consider the case of periodic boundary conditions. Then for n ≥ 4, the largest and smallest singular values of D̄_n^{(2)} defined in (3.112) satisfy

σ_max(D̄_n^{(2)}) ≤ (2n)^{2.5},
σ_min(D̄_n^{(2)}) ≥ 1/√2. (3.114)

Proof. By direct calculation of the l_∞ norm (i.e., the maximum absolute column sum) of (3.112), for n ≥ 4, we have

‖D̄_n^{(2)}‖_∞ ≤ ((n+1)π/2)² ≤ (2n)². (3.115)

Then the inverse of the matrix (3.112) is given by

[(D̄_n^{(2)})^{−1}]_{k,k} = −1/((k − ⌊n/2⌋)π)², k ∈ [n+1]_0∖{⌊n/2⌋},
[(D̄_n^{(2)})^{−1}]_{⌊n/2⌋,⌊n/2⌋} = 1, [(D̄_n^{(2)})^{−1}]_{⌊n/2⌋,k} = 1/((k − ⌊n/2⌋)π)², k ∈ [n+1]_0∖{⌊n/2⌋}, (3.116)

as can easily be verified by a direct calculation. By direct calculation of the Frobenius norm of (3.116), we have

‖(D̄_n^{(2)})^{−1}‖_F² ≤ 1 + (4/π⁴) ∑_{k=1}^{∞} 1/k⁴ = 1 + (4/π⁴)(π⁴/90) ≤ 2. (3.117)

Thus we have the result in (3.114):

σ_max(D̄_n^{(2)}) ≤ √(n+1) ‖D̄_n^{(2)}‖_∞ ≤ (2n)^{2.5},
σ_min(D̄_n^{(2)}) ≥ 1/‖(D̄_n^{(2)})^{−1}‖_F ≥ 1/√2, (3.118)

as claimed.

Lemma 3.8.
Consider the case of non-periodic boundary conditions. Then the largest and smallest singular values of D̄_n^{(2)} defined in (3.113) satisfy

σ_max(D̄_n^{(2)}) ≤ n⁴,
σ_min(D̄_n^{(2)}) ≥ 1/16. (3.119)

Proof. By direct calculation of the Frobenius norm of (3.113), we have

‖D̄_n^{(2)}‖_F² ≤ n² max_{k,r} (r(r² − k²)/σ_k)² ≤ n² ⋅ n⁶ = n⁸. (3.120)

Next we upper bound ‖(D̄_n^{(2)})^{−1}‖. By definition,

‖(D̄_n^{(2)})^{−1}‖ = sup_{‖b‖≤1} ‖(D̄_n^{(2)})^{−1} b‖. (3.121)

Given any vector b satisfying ‖b‖ ≤ 1, we estimate ‖x‖ defined by the full-rank linear system

D̄_n^{(2)} x = b. (3.122)

Notice that D̄_n^{(2)} is the sum of the upper triangular matrix D_n² and the boundary rows (3.109), so the coordinates x_2, …, x_n are determined only by the coordinates b_0, …, b_{n−2}. So we first focus on the partial system

D̄_n^{(2)} [0, 0, x_2, …, x_n]^T = [b_0, …, b_{n−2}, 0, 0]^T. (3.123)

Given the same b, we also define the vector y by

D_n [0, y_1, …, y_{n−1}, 0]^T = [b_0, …, b_{n−2}, 0, 0]^T, (3.124)

where each coordinate of b can be expressed as

b_k = ∑_{l=1}^{n−1} [D_n]_{kl} y_l = ∑_{l=1, k+l odd, l>k}^{n−1} (2l/σ_k) y_l, k ∈ [n−1]_0. (3.125)

Using this equation with k = l − 1 and k = l + 1, we can express y_l in terms of b_{l−1} and b_{l+1}:

y_l = (1/(2l)) (σ_{l−1} b_{l−1} − b_{l+1}), l ∈ [n−1], (3.126)

where we let b_{n−1} = b_n = 0. Thus we have

∑_{l=1}^{n−1} y_l² = ∑_{l=1}^{n−1} (1/(2l))² (σ_{l−1} b_{l−1} − b_{l+1})²
≤ ∑_{l=1}^{n−1} ((σ_{l−1}² + 1)/(4l²)) (b_{l−1}² + b_{l+1}²)
≤ (5/4)(b_0² + b_2²) + ∑_{l=2}^{n−1} (1/(2l²)) (b_{l−1}² + b_{l+1}²)
≤ 2 ∑_{l=0}^{n−2} b_l². (3.127)

We notice that y also satisfies

[0, y_1, …, y_{n−1}, 0]^T = D_n [0, 0, x_2, …, x_n]^T, (3.128)

where each coordinate of y can be expressed as

y_l = ∑_{r=1}^{n} [D_n]_{lr} x_r = ∑_{r=1, l+r odd, r>l}^{n} (2r/σ_l) x_r, l ∈ [n−1]. (3.129)

Substituting the (r−1)st and the (r+1)st equations of (3.129), we can express x in terms of y:

x_r = (1/(2r)) (σ_{r−1} y_{r−1} − y_{r+1}), r ∈ [n]∖{1}, (3.130)

where we let y_n = y_{n+1} = 0. Similarly, according to (3.130), we also have

∑_{l=2}^{n} x_l² ≤ 2 ∑_{l=1}^{n−1} y_l². (3.131)

Then we calculate x_0² + x_1² based on the last two equations of (3.122), (3.127), and (3.130), giving

x_0² + x_1² = ½ [(x_0 + x_1)² + (x_0 − x_1)²]
= ½ [(b_n − ∑_{l=2}^{n} x_l)² + (b_{n−1} − ∑_{l=2}^{n} (−1)^l x_l)²]
≤ b_n² + b_{n−1}² + (∑_{l=2}^{n} x_l)² + (∑_{l=2}^{n} (−1)^l x_l)²
≤ b_n² + b_{n−1}² + 2 (∑_{l=2}^{n} 1/(4l²)) ∑_{l=2}^{n} (σ_{l−1} y_{l−1} − y_{l+1})²
≤ b_n² + b_{n−1}² + 4 ∑_{l=1}^{n−1} y_l²
≤ b_n² + b_{n−1}² + 8 ∑_{l=0}^{n−2} b_l², (3.132)

where the third-to-last step uses the Cauchy-Schwarz inequality. Thus, based on (3.127), (3.131), and (3.132), the inequality

∑_{l=0}^{n} x_l² = x_0² + x_1² + ∑_{l=2}^{n} x_l²
≤ b_n² + b_{n−1}² + 8 ∑_{l=0}^{n−2} b_l² + 4 ∑_{l=0}^{n−2} b_l²
≤ b_n² + b_{n−1}² + 12 ∑_{l=0}^{n−2} b_l² ≤ 12 (3.133)

holds for any vector b satisfying ‖b‖ ≤ 1. Thus

‖(D̄_n^{(2)})^{−1}‖ = sup_{‖b‖≤1} ‖x‖ ≤ √12 < 16. (3.134)

Altogether, we have

σ_max(D̄_n^{(2)}) ≤ ‖D̄_n^{(2)}‖_F ≤ n⁴,
σ_min(D̄_n^{(2)}) ≥ 1/‖(D̄_n^{(2)})^{−1}‖ ≥ 1/16, (3.135)

as claimed in (3.119).

Using these two lemmas, we first upper bound the condition number of the linear system for Poisson's equation, and then extend the result to general elliptic PDEs. For the case of the Poisson equation, we use the following simple bounds on the extreme singular values of a Kronecker sum.

Lemma 3.9. Let

L = ⊕_{j=1}^{d} M_j = M_1 ⊗ I^{⊗(d−1)} + I ⊗ M_2 ⊗ I^{⊗(d−2)} + ⋯ + I^{⊗(d−1)} ⊗ M_d, (3.136)

where {M_j}_{j=1}^{d} are square matrices. If the largest and smallest singular values of M_j satisfy

σ_max(M_j) ≤ s_j^max,
σ_min(M_j) ≥ s_j^min, (3.137)

respectively, then the condition number of L satisfies

κ_L ≤ (∑_{j=1}^{d} s_j^max) / (∑_{j=1}^{d} s_j^min). (3.138)

Proof. We bound the singular values of the matrix exponential exp(M_j) by

σ_max(exp(M_j)) ≤ e^{s_j^max},
σ_min(exp(M_j)) ≥ e^{s_j^min} (3.139)

using (3.137).
The singular values of the Kronecker product ⊗_{j=1}^{d} exp(M_j) are

σ_{k_1,…,k_d}(⊗_{j=1}^{d} exp(M_j)) = ∏_{j=1}^{d} σ_{k_j}(exp(M_j)), (3.140)

where σ_{k_j}(exp(M_j)) are the singular values of the matrix exp(M_j) for each j ∈ [d], where k_j runs from 1 to the dimension of M_j. Using the property of the Kronecker sum that

exp(L) = exp(⊕_{j=1}^{d} M_j) = ⊗_{j=1}^{d} exp(M_j), (3.141)

we bound the singular values of the matrix exponential of (3.111) by

σ_max(exp(L)) ≤ e^{∑_{j=1}^{d} s_j^max},
σ_min(exp(L)) ≥ e^{∑_{j=1}^{d} s_j^min}. (3.142)

Finally, we bound the singular values of the matrix logarithm of (3.142) by

σ_max(L) ≤ ∑_{j=1}^{d} s_j^max,
σ_min(L) ≥ ∑_{j=1}^{d} s_j^min. (3.143)

Thus the condition number of L satisfies

κ_L ≤ (∑_{j=1}^{d} s_j^max) / (∑_{j=1}^{d} s_j^min) (3.144)

as claimed.

This lemma easily implies a bound on the condition number of the linear system for Poisson's equation:

Corollary 3.1. Consider an instance of the quantum PDE problem as defined in Problem 3.1 for Poisson's equation (3.6) with Dirichlet boundary conditions (3.81). Then for n ≥ 4, the condition number of L_Poisson in the linear system (3.90) satisfies

κ_{L_Poisson} ≤ (2n)⁴. (3.145)

Proof. The matrix in (3.90) for Poisson's equation (3.6) is L_Poisson defined in (3.111). For both the periodic and the non-periodic case, we have

σ_max(D̄_n^{(2)}) ≤ n⁴,
σ_min(D̄_n^{(2)}) ≥ 1/16 (3.146)

by Lemma 3.7 and Lemma 3.8. Let M_j = D̄_n^{(2)} for j ∈ [d] in (3.136), and apply Lemma 3.9 with s_j^max = n⁴ and s_j^min = 1/16 in (3.138). Then the condition number of L_Poisson is bounded by

κ_{L_Poisson} ≤ σ_max(D̄_n^{(2)})/σ_min(D̄_n^{(2)}) ≤ (2n)⁴ (3.147)

as claimed.

We now consider the condition number of the linear system for general elliptic PDEs.

Lemma 3.10. Consider an instance of the quantum PDE problem as defined in Problem 3.1 with Dirichlet boundary conditions (3.81). Then for n ≥ 4, the condition number of L in the linear system (3.90) satisfies

κ_L ≤ (‖A‖_Σ / (C ‖A‖_D)) (2n)⁴, (3.148)

where ‖A‖_Σ := ∑_{‖j‖_1=2} |A_j| = ∑_{j_1,j_2=1}^{d} |A_{j_1,j_2}|, ‖A‖_D := ∑_{j=1}^{d} |A_{j,j}|, and C > 0 is defined in (3.8).
Recall that C quantifies the extent to which the global strict diagonal dominance condition holds.

Proof. According to (3.105), the matrix in (3.90) is

L = ∑_{‖j‖_1=2} A_j D̄_n^{(j)}. (3.149)

We upper bound the spectral norm of the matrix L by

‖L‖ ≤ ∑_{‖j‖_1=2} |A_j| ‖D̄_n^{(j)}‖. (3.150)

For the matrix D̄_n^{(j)} defined by (3.106), Lemma 3.7 (in the periodic case) and Lemma 3.8 (in the non-periodic case) give the inequality

‖D̄_n^{(j)}‖ ≤ n⁴, (3.151)

so we have

‖L‖ ≤ ∑_{‖j‖_1=2} |A_j| n⁴ = ‖A‖_Σ n⁴. (3.152)

Next we lower bound ‖Lψ‖ for any ‖ψ‖ = 1. It is non-trivial to directly compute the singular values of a sum of non-normal matrices. Instead, we write L as a sum of terms L_1 and L_2, where L_1 is a tensor sum similar to (3.111) that can be bounded by Lemma 3.9, and L_2 is a sum of tensor products that are easily bounded. Specifically, we have

L_1 = A_{1,1} D̄_n^{(2)} ⊗ I^{⊗(d−1)} + ⋯ + A_{d,d} I^{⊗(d−1)} ⊗ D̄_n^{(2)},
L_2 = L − L_1. (3.153)

The ellipticity condition (3.7), ∑_{‖j‖_1=2} A_j ξ^j ≠ 0 for all ξ ≠ 0, can only hold if the A_{j,j} for j ∈ [d] are either all positive or all negative; we consider A_{j,j} > 0 without loss of generality, so

‖A‖_D = ∑_{j=1}^{d} |A_{j,j}| = ∑_{j=1}^{d} A_{j,j}. (3.154)

Also, the global strict diagonal dominance condition (3.8) simplifies to

C = 1 − ∑_{j_1=1}^{d} (1/A_{j_1,j_1}) ∑_{j_2∈[d]∖{j_1}} |A_{j_1,j_2}| > 0, (3.155)

where 0 < C ≤ 1. We now upper bound ‖L_2 L_1^{−1}‖ by bounding ‖D̄_n^{(j)} L_1^{−1}‖ for each j = (j_1, …, j_d) that has exactly two entries equal to 1 and all other entries 0. Specifically, consider j_{r_1} = j_{r_2} = 1 for r_1, r_2 ∈ [d], r_1 ≠ r_2, and j_r = 0 for r ∈ [d]∖{r_1, r_2}. We denote

L^{(j)} := I^{⊗(r_1−1)} ⊗ D_n² ⊗ I^{⊗(d−r_1)} + I^{⊗(r_2−1)} ⊗ D_n² ⊗ I^{⊗(d−r_2)}. (3.156)

We first upper bound ‖D̄_n^{(j)}‖ by ½‖L^{(j)}‖. Notice that the matrices D̄_n^{(j)} and L^{(j)} share the same singular vectors. For k ∈ [n+1]_0, we let v_k and σ_k denote the right singular vectors and corresponding singular values of D_n, respectively. Then the right singular vectors of D̄_n^{(j)} and L^{(j)} are v_k := ⊗_{j=1}^{d} v_{k_j}, where k = (k_1, …, k_d) with k_j ∈ [n+1]_0 for j ∈ [d].
For any vector v = ∑_{‖k‖_∞≤n} ψ_k v_k, we have

‖D̄_n^{(j)} v‖² = ∑_{‖k‖_∞≤n} |ψ_k|² ‖D̄_n^{(j)} v_k‖² = ∑_{‖k‖_∞≤n} |ψ_k|² (σ_{k_{r_1}} σ_{k_{r_2}})², (3.157)

‖L^{(j)} v‖² = ∑_{‖k‖_∞≤n} |ψ_k|² ‖L^{(j)} v_k‖² = ∑_{‖k‖_∞≤n} |ψ_k|² (σ_{k_{r_1}}² + σ_{k_{r_2}}²)², (3.158)

which implies ‖D̄_n^{(j)} v‖ ≤ ½‖L^{(j)} v‖ by the inequality of arithmetic and geometric means (also known as the AM-GM inequality). Since this holds for any vector v, we have

‖D̄_n^{(j)} L_1^{−1}‖ ≤ ½ ‖L^{(j)} L_1^{−1}‖. (3.159)

Next we upper bound ‖D_n²‖ by ‖D̄_n^{(2)}‖. For any vector u = [u_0, …, u_n]^T, define two vectors w = [w_0, …, w_n]^T and w̄ = [w̄_0, …, w̄_n]^T such that

D_n² [u_0, …, u_n]^T = [w_0, …, w_n]^T (3.160)

and

D̄_n^{(2)} [u_0, …, u_n]^T = [w̄_0, …, w̄_n]^T. (3.161)

Notice that w_{⌊n/2⌋} = 0 and w̄_k = w_k for k ∈ [n+1]_0∖{⌊n/2⌋} for periodic boundary conditions, and w_{n−1} = w_n = 0 and w̄_k = w_k for k ∈ [n+1]_0∖{n−1, n} for non-periodic boundary conditions. Thus, for any vector v,

‖D_n² v‖² = ∑_{k=0}^{n} w_k² ≤ ∑_{k=0}^{n} w̄_k² = ‖D̄_n^{(2)} v‖². (3.162)

Therefore,

‖L^{(j)} L_1^{−1}‖ ≤ ∑_{s=1}^{2} ‖I^{⊗(r_s−1)} ⊗ D_n² ⊗ I^{⊗(d−r_s)} L_1^{−1}‖ ≤ ∑_{s=1}^{2} ‖I^{⊗(r_s−1)} ⊗ D̄_n^{(2)} ⊗ I^{⊗(d−r_s)} L_1^{−1}‖. (3.163)

We also have

‖D̄_n^{(j)} L_1^{−1}‖ ≤ ½ ∑_{s=1}^{2} ‖I^{⊗(r_s−1)} ⊗ D̄_n^{(2)} ⊗ I^{⊗(d−r_s)} L_1^{−1}‖. (3.164)

We can rewrite I^{⊗(r_s−1)} ⊗ D̄_n^{(2)} ⊗ I^{⊗(d−r_s)} L_1^{−1} in the form

(I^{⊗(r_s−1)} ⊗ D̄_n^{(2)} ⊗ I^{⊗(d−r_s)}) (∑_{h=1}^{d} A_{h,h} I^{⊗(r_h−1)} ⊗ D̄_n^{(2)} ⊗ I^{⊗(d−r_h)})^{−1}. (3.165)

The matrices I^{⊗(r_h−1)} ⊗ D̄_n^{(2)} ⊗ I^{⊗(d−r_h)} share the same singular values and singular vectors, so

‖I^{⊗(r_s−1)} ⊗ D̄_n^{(2)} ⊗ I^{⊗(d−r_s)} L_1^{−1}‖ = max_k σ_{k_{r_s}} / (∑_{h=1}^{d} A_{h,h} σ_{k_h}) < 1/A_{r_s,r_s}, (3.166)

where σ_{k_h} are singular values of I^{⊗(r_h−1)} ⊗ D̄_n^{(2)} ⊗ I^{⊗(d−r_h)} for k_h ∈ [n+1]_0, h ∈ [d]. This implies

‖D̄_n^{(j)} L_1^{−1}‖ ≤ ½ (1/A_{r_1,r_1} + 1/A_{r_2,r_2}). (3.167)

Using (3.155), considering each instance of D̄_n^{(j)} in L_2, we have

‖L_2 L_1^{−1}‖ ≤ ∑_{j_1≠j_2} |A_{j_1,j_2}| ‖D̄_n^{(j)} L_1^{−1}‖ ≤ ∑_{j_1=1}^{d} (1/A_{j_1,j_1}) ∑_{j_2∈[d]∖{j_1}} |A_{j_1,j_2}| ≤ 1 − C. (3.168)

Since L and L_1 are invertible and ‖L_2 L_1^{−1}‖ ≤ 1 − C < 1, applying Lemma 3.9 to ‖L_1^{−1}‖, we have

‖L^{−1}‖ = ‖(L_1 + L_2)^{−1}‖ = ‖L_1^{−1} (I + L_2 L_1^{−1})^{−1}‖ ≤ ‖L_1^{−1}‖ / (1 − ‖L_2 L_1^{−1}‖) ≤ 16/(C ‖A‖_D). (3.169)

Thus we have

κ_L = ‖L‖ ‖L^{−1}‖ ≤ (‖A‖_Σ / (C ‖A‖_D)) (2n)⁴ (3.170)

as claimed.

3.4.4 State preparation

We now describe a state preparation procedure for the vector f + g in the linear system (3.90).

Lemma 3.11. Let O_f be a unitary oracle that maps |0⟩|0⟩ to a state proportional to |0⟩|f⟩, and |ψ⟩|0⟩ to |ψ⟩|0⟩ for any |ψ⟩ orthogonal to |0⟩; let O_x be a unitary oracle that maps |0⟩|0⟩ to |0⟩|0⟩, |j⟩|0⟩ to a state proportional to |j⟩|γ^{j+}⟩ for j ∈ [d], and |j+d⟩|0⟩ to a state proportional to |j+d⟩|γ^{j−}⟩ for j ∈ [d]. Suppose ‖|f⟩‖, ‖|γ^{j+}⟩‖, ‖|γ^{j−}⟩‖, and A_{j,j} for j ∈ [d] are known. Define the parameter

q := √( ∑_{‖k‖_∞≤n} [f̂_k² + ∑_{j=1}^{d} ((A_{j,j} γ̂_k^{j+})² + (A_{j,j} γ̂_k^{j−})²)] / ∑_{‖k‖_∞≤n} |f̂_k + ∑_{j=1}^{d} (A_{j,j} γ̂_k^{j+} + A_{j,j} γ̂_k^{j−})|² ). (3.171)

Then the normalized quantum state

|B⟩ ∝ ∑_{‖k‖_∞≤n} (f̂_k + ∑_{j=1}^{d} (A_{j,j} γ̂_k^{j+} + A_{j,j} γ̂_k^{j−})) |k_1⟩…|k_d⟩, (3.172)

with coefficients defined as in (3.88) and (3.89), can be prepared with gate and query complexity O(qd² log n log log n).

Proof. Starting from the initial state |0⟩|0⟩, we first perform a unitary transformation U satisfying

U|0⟩ = (‖|f⟩‖ |0⟩ + ∑_{j=1}^{d} A_{j,j} ‖|γ^{j+}⟩‖ |j⟩ + ∑_{j=1}^{d} A_{j,j} ‖|γ^{j−}⟩‖ |j+d⟩) / √(‖|f⟩‖² + ∑_{j=1}^{d} A_{j,j}² (‖|γ^{j+}⟩‖² + ‖|γ^{j−}⟩‖²)) (3.173)

on the first register to obtain

(‖|f⟩‖ |0⟩ + A_{1,1} ‖|γ^{1+}⟩‖ |1⟩ + ⋯ + A_{d,d} ‖|γ^{d−}⟩‖ |2d⟩) |0⟩ / √(‖|f⟩‖² + ∑_{j=1}^{d} A_{j,j}² (‖|γ^{j+}⟩‖² + ‖|γ^{j−}⟩‖²)). (3.174)

This can be done in time O(2d+1) by standard techniques [92]. Then we apply O_x and O_f to obtain a state proportional to

|0⟩|f⟩ + A_{1,1} |1⟩|γ^{1+}⟩ + ⋯ + A_{d,d} |2d⟩|γ^{d−}⟩
= ∑_{‖k‖_∞≤n, ‖l‖_∞≤n} φ_k(χ_l) (f̂_k |0⟩ + A_{1,1} γ̂_k^{1+} |1⟩ + ⋯ + A_{d,d} γ̂_k^{d−} |2d⟩) |l_1⟩…|l_d⟩, (3.175)

according to (3.86) and (3.87). We then perform the d-dimensional inverse QSFT (for periodic boundary conditions) or inverse QCT (for non-periodic boundary conditions) on the last d registers, obtaining

∑_{‖k‖_∞≤n} (f̂_k |0⟩ + A_{1,1} γ̂_k^{1+} |1⟩ + ⋯ + A_{d,d} γ̂_k^{d−} |2d⟩) |k_1⟩…|k_d⟩. (3.176)

Finally, observe that if we measure the first register in a basis containing the uniform superposition of |0⟩, |1⟩, …, |2d⟩ (say, the Fourier basis) and obtain the outcome corresponding to the uniform superposition, we produce the state

∑_{‖k‖_∞≤n} (f̂_k + ∑_{j=1}^{d} (A_{j,j} γ̂_k^{j+} + A_{j,j} γ̂_k^{j−})) |k_1⟩…|k_d⟩. (3.177)

Since this outcome occurs with probability 1/q², we can prepare this state with probability close to 1 using O(q) steps of amplitude amplification. According to Lemma 3.5 and Lemma 3.6, the d-dimensional (inverse) QSFT or QCT can be performed with gate complexity O(d log n log log n). Thus the total gate and query complexity is O(qd² log n log log n).

Alternatively, if it is possible to directly prepare the quantum state |B⟩, then we may be able to avoid the factor of q in the complexity of the overall algorithm.

3.4.5 Main result

Having analyzed the condition number and state preparation procedure for our approach, we are now ready to establish the main result, Theorem 3.2, as follows.

Theorem 3.2. Consider an instance of the quantum PDE problem as defined in Problem 3.1 with Dirichlet boundary conditions (3.81). Then there exists a quantum algorithm that produces a state in the form of (3.82) whose amplitudes are proportional to u(x) on a set of interpolation nodes x (with respect to the uniform grid nodes for periodic boundary conditions or the Chebyshev-Gauss-Lobatto quadrature nodes for non-periodic boundary conditions, as defined in (3.60)), where u(x)/‖u(x)‖ is ε-close to u*(x)/‖u*(x)‖ in l² norm for all nodes x, succeeding with probability Ω(1), with a flag indicating success, using

(d‖A‖_Σ/(C‖A‖_D) + qd²) poly(log(g′/(gε))) (3.54)

queries to oracles as defined in Section 3.4.4. Here ‖A‖_Σ := ∑_{‖j‖_1=2} |A_j|, ‖A‖_D := ∑_{j=1}^{d} |A_{j,j}|, C > 0 is defined in (3.8), and

g := min_x ‖u*(x)‖, g′ := max_{n∈ℕ} max_x ‖u*^{(n+1)}(x)‖, (3.55)

q := √( ∑_{‖k‖_∞≤n} [f̂_k² + ∑_{j=1}^{d} ((A_{j,j} γ̂_k^{j+})² + (A_{j,j} γ̂_k^{j−})²)] / ∑_{‖k‖_∞≤n} (f̂_k + ∑_{j=1}^{d} (A_{j,j} γ̂_k^{j+} + A_{j,j} γ̂_k^{j−}))² ). (3.56)

The gate complexity is larger than the query complexity by a factor of poly(log(d‖A‖_Σ/ε)).

Proof. We analyze the complexity of the algorithm presented in Section 3.4.2. First we choose

n := ⌈log ν / log log ν⌉, (3.178)

where

ν = g′(1+ε)/(gε). (3.179)

By Eq. (1.8.28) of Reference [107], this choice guarantees

‖u*(x) − u(x)‖ ≤ max_x ‖u*^{(n+1)}(x)‖ e/(2n)^n = g′ e/(2n)^n ≤ gε/(1+ε) =: δ. (3.180)

Now ‖u*(x) − u(x)‖ ≤ δ implies

‖u*(x)/‖u*(x)‖ − u(x)/‖u(x)‖‖ ≤ δ/min{‖u*(x)‖, ‖u(x)‖} ≤ δ(1+ε)/g = ε, (3.181)

so we can choose n to ensure that the normalized output state is ε-close to u*(x)/‖u*(x)‖. As described in Section 3.4.2, the algorithm uses the high-precision QLSA from Reference [46] and the multi-dimensional QSFT/QCT (and its inverse). According to Lemma 3.5 and Lemma 3.6, the d-dimensional (inverse) QSFT or QCT can be performed with gate complexity O(d log n log log n). According to Lemma 3.11, the query and gate complexity for state preparation is O(qd² log n log log n). For the linear system Lx = f + g in (3.90), the matrix L is an (n+1)^d × (n+1)^d matrix with (n+1) or (n+1)d nonzero entries in any row or column for periodic or non-periodic boundary conditions, respectively. According to Lemma 3.10, the condition number of L is upper bounded by (‖A‖_Σ/(C‖A‖_D))(2n)⁴. Consequently, by Theorem 5 of Reference [46], the QLSA produces a state proportional to x with O((d‖A‖_Σ/(C‖A‖_D))(2n)⁵) queries to the oracles, and its gate complexity is larger by a factor of poly(log(d‖A‖_Σ n)). Using the value of n specified in (3.178), the overall query complexity of our algorithm is

(d‖A‖_Σ/(C‖A‖_D) + qd²) poly(log(g′/(gε))), (3.182)

and the gate complexity is

(d‖A‖_Σ/(C‖A‖_D) poly(log(d‖A‖_Σ/ε)) + qd²) poly(log(g′/(gε))), (3.183)

which is larger by a factor of poly(log(d‖A‖_Σ/ε)), as claimed.
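The singular-value bounds that drive the condition-number analysis above (Lemmas 3.7 and 3.8) are easy to spot-check numerically. This is a classical sketch in which the matrices are built directly from (3.112) and (3.113):

```python
import numpy as np

def periodic_D2(n):
    # (3.112): diagonal -((k - floor(n/2)) pi)^2, with the all-ones
    # boundary row at index floor(n/2).
    m = n // 2
    D = np.diag([-(float(k - m) * np.pi) ** 2 for k in range(n + 1)])
    D[m, :] = 1.0
    return D

def chebyshev_D2(n):
    # (3.113): [D2]_{kr} = r (r^2 - k^2) / sigma_k for k + r even, r > k + 1,
    # with the two boundary rows of (3.109) in rows n - 1 and n.
    sigma = np.ones(n + 1); sigma[0] = 2
    D = np.zeros((n + 1, n + 1))
    for k in range(n - 1):
        for r in range(k + 2, n + 1):
            if (k + r) % 2 == 0:
                D[k, r] = r * (r * r - k * k) / sigma[k]
    D[n, :] = 1.0
    D[n - 1, :] = (-1.0) ** np.arange(n + 1)
    return D

for n in [4, 8, 16, 32]:
    s = np.linalg.svd(periodic_D2(n), compute_uv=False)
    assert s.max() <= (2 * n) ** 2.5 and s.min() >= 1 / np.sqrt(2)   # Lemma 3.7
    s = np.linalg.svd(chebyshev_D2(n), compute_uv=False)
    assert s.max() <= n ** 4 and s.min() >= 1 / 16                   # Lemma 3.8
```

In both cases the bounds hold with considerable slack, consistent with the loose constants used in the proofs.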
Note that we can establish a more efficient algorithm in the special case of the Poisson equation with homogeneous boundary conditions. In this case, ‖A‖_Σ = ‖A‖_D = d and C = 1. Under homogeneous boundary conditions, the complexity of state preparation can be reduced to d poly(log(g′/(gε))), since we can remove 2d applications of the QSFT or QCT for preparing a state depending on the boundary conditions, and since γ = 0 there is no need to postselect on the uniform superposition to incorporate the boundary conditions. In summary, the query complexity for the Poisson equation with homogeneous boundary conditions is

d poly(log(g′/(gε))); (3.184)

again the gate complexity is larger by a factor of poly(log(d‖A‖_Σ/ε)).

3.5 Discussion and open problems

We have presented high-precision quantum algorithms for d-dimensional PDEs using the FDM and spectral methods. These algorithms use high-precision QLSAs to solve Poisson's equation and other second-order elliptic equations. Whereas previous algorithms scaled as poly(d, 1/ε), our algorithms scale as poly(d, log(1/ε)). This work raises several natural open problems. First, for the quantum adaptive FDM, we only deal with Poisson's equation with homogeneous boundary conditions. Can we apply the adaptive FDM to other linear equations or to inhomogeneous boundary conditions? The quantum spectral algorithm applies to second-order elliptic PDEs with Dirichlet boundary conditions. Can we generalize it to other linear PDEs with Neumann or mixed boundary conditions? Also, can we develop algorithms for space- and time-dependent PDEs? These cases are more challenging since the quantum Fourier transform cannot be directly applied to ensure sparsity. Finally, can we improve the dependence on d? Second, the complexity scales logarithmically with high-order derivatives (of the inhomogeneity or solution) for both the adaptive FDM and the spectral method.
In particular, Theorem 3.1 shows that the complexity of the quantum adaptive FDM scales logarithmically with ‖d^{2k+1}u/dx^{2k+1}‖, and Theorem 3.2 shows that the complexity of the quantum spectral method is poly(log g′), where g′ upper bounds ‖u*^{(n+1)}(x)‖ (see (3.55)). Such a logarithmic dependence on high-order derivatives of the solution is typical for classical algorithms, including the classical adaptive FDM (see for example Theorem 7 of Reference [6]) and spectral methods (see for example Eq. (1.8.28) of Reference [107]), both of which have the same logarithmic dependence on ‖d^{2k+1}u/dx^{2k+1}‖ and g′. This logarithmic dependence means that the algorithm is efficient even when faced with a highly oscillatory solution with an exponentially large derivative. However, the query complexity of time-dependent Hamiltonian simulation only depends on the first-order derivatives of the Hamiltonian [34, 35]. Can we develop quantum algorithms for PDEs with query complexity independent of high-order derivatives, and thereby establish an unexpected advantage of quantum algorithms for PDEs? Third, can we use quantum algorithms for PDEs as a subroutine of other quantum algorithms? For example, some PDE algorithms have state preparation steps that require inverting finite difference matrices (such as Reference [70], using certain oracles for the initial conditions); are there other scenarios in which state preparation can be done using the solution of another system of PDEs? Can quantum algorithms for PDEs be applied to other algorithmic tasks, such as optimization? Finally, how should these algorithms be applied? While PDEs have broad applications, much more work remains to understand the extent to which quantum algorithms can be of practical value. Answering this question will require careful consideration of various technical aspects of the algorithms.
In particular: What measurements give useful information about the solutions, and how can those measurements be efficiently implemented? How should the oracles encoding the equations and boundary conditions be implemented in practice? And with these aspects taken into account, what are the resource requirements for quantum computers to solve classically intractable problems related to PDEs?

Chapter 4: Efficient quantum algorithms for dissipative nonlinear differential equations

4.1 Introduction

In this chapter, we study efficient quantum algorithms for dissipative nonlinear differential equations.¹ As introduced earlier in Problem 1.5, we focus here on differential equations with nonlinearities that can be expressed with quadratic polynomials, as described in (4.1). Note that polynomials of degree higher than two, and even more general nonlinearities, can be reduced to the quadratic case by introducing additional variables [72, 73]. The quadratic case also directly includes many archetypal models, such as the logistic equation in biology, the Lorenz system in atmospheric dynamics, and the Navier-Stokes equations in fluid dynamics. As discussed in Chapter 1, quantum algorithms offer the prospect of rapidly characterizing the solutions of high-dimensional systems of linear ODEs [52, 53, 54] and PDEs [55, 56, 57, 58, 59, 60, 61]. Such algorithms can produce a quantum state proportional to the solution of a sparse (or block-encoded) n-dimensional system of linear differential equations in time poly(log n) using the quantum linear system algorithm [84]. Early work on quantum algorithms for differential equations already considered the nonlinear case [74]. It gave a quantum algorithm for ODEs that simulates polynomial nonlinearities by storing multiple copies of the solution.

¹This chapter is based on the paper [81].
The complexity of this approach is polynomial in the logarithm of the dimension but exponential in the evolution time, scaling as O(1/ε^T), due to the exponentially increasing resources used to maintain sufficiently many copies of the solution to represent the nonlinearity throughout the evolution. Recently, heuristic quantum algorithms for nonlinear ODEs have been studied. Reference [75] explores a linearization technique known as the Koopman-von Neumann method that might be amenable to the quantum linear system algorithm. In [76], the authors provide a high-level description of how linearization can help solve nonlinear equations on a quantum computer. However, neither paper makes precise statements about concrete implementations or running times of quantum algorithms. The recent preprint [77] also describes a quantum algorithm to solve a nonlinear ODE by linearizing it using a different approach from the one taken here. However, a proof of correctness of their algorithm, involving a bound on the condition number and the probability of success, is not given. The authors also do not describe how barriers such as those of [78] could be avoided in their approach. While quantum mechanics is described by linear dynamics, possible nonlinear modifications of the theory have been widely studied. Generically, such modifications enable quickly solving hard computational problems (e.g., solving unstructured search among n items in time poly(log n)), making nonlinear dynamics exponentially difficult to simulate in general [78, 79, 80]. Therefore, constructing efficient quantum algorithms for general classes of nonlinear dynamics has been considered largely out of reach. We design and analyze a quantum algorithm that overcomes this limitation using Carleman linearization [73, 82, 83]. This approach embeds polynomial nonlinearities into an infinite-dimensional system of linear ODEs, and then truncates it to obtain a finite-dimensional linear approximation.
The Carleman method has previously been used in the analysis of dynamical systems [111, 112, 113] and the design of control systems [114, 115, 116], but to the best of our knowledge it has not been employed in the context of quantum algorithms. We discretize the finite ODE system in time using the forward Euler method and solve the resulting linear equations with the quantum linear system algorithm [46, 84]. We control the approximation error of this approach by combining a novel convergence theorem with a bound for the global error of the Euler method. Furthermore, we provide an upper bound for the condition number of the linear system and lower bound the success probability of the final measurement. Subject to the condition R < 1, where the quantity R (defined in Problem 4.1 below) characterizes the relative strength of the nonlinear and dissipative linear terms, we show that the total complexity of this quantum Carleman linearization algorithm is sT²q poly(log T, log n, log(1/ε))/ε, where s is the sparsity, T is the evolution time, q quantifies the decay of the final solution relative to the initial condition, n is the dimension, and ε is the allowed error (see Theorem 4.1). In the regime R < 1, this is an exponential improvement over [74], which has complexity exponential in T. Note that the solution cannot decay exponentially in T for the algorithm to be efficient, as captured by the dependence of the complexity on q, a known limitation of quantum ODE algorithms [53]. For homogeneous ODEs with R < 1, the solution necessarily decays exponentially in time (see equation (4.30)), so the algorithm is not asymptotically efficient. However, even for solutions with exponential decay, we still find an improvement over the best previous result O(1/ε^T) [74] for sufficiently small ε. Thus our algorithm might provide an advantage over classical computation for studying evolution for short times.
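The classical core of the algorithm, Carleman linearization followed by forward Euler, can be illustrated on a one-dimensional dissipative quadratic ODE with R = 1/2 < 1. This is a classical sketch of the linear system being solved, not of the quantum algorithm, and the specific coefficients are chosen only for illustration:

```python
import numpy as np

# Dissipative scalar quadratic ODE: du/dt = a1*u + a2*u^2 with a1 = -1, a2 = 1/2,
# u(0) = 1.  Here F1 = -1, F2 = 1/2, F0 = 0, so R = |u_in|*|F2| / |Re a1| = 1/2 < 1.
a1, a2, u0 = -1.0, 0.5, 1.0
R = abs(u0) * abs(a2) / abs(a1)
assert R < 1

# Carleman linearization: y_j = u^j obeys dy_j/dt = j*a1*y_j + j*a2*y_{j+1},
# an infinite linear system; truncate at level N (drop y_{N+1}).
N = 6
A = np.zeros((N, N))
for j in range(1, N + 1):
    A[j - 1, j - 1] = j * a1
    if j < N:
        A[j - 1, j] = j * a2

# Forward Euler on the truncated linear system.
h, T = 1e-3, 1.0
y = np.array([u0 ** j for j in range(1, N + 1)])
for _ in range(int(T / h)):
    y = y + h * (A @ y)

u_exact = 2.0 / (np.exp(T) + 1.0)   # closed-form solution of this Bernoulli ODE
assert abs(y[0] - u_exact) < 0.05
```

The truncated matrix is block bidiagonal (here simply bidiagonal), which is the sparsity structure the quantum linear system algorithm exploits; increasing N shrinks the truncation error when R < 1.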
More significantly, our algorithm can handle inhomogeneous quadratic ODEs, for which it can remain efficient in the long-time limit since the solution can remain asymptotically nonzero (for an explicit example, see the discussion just before the proof of Lemma 4.2) or can decay slowly (i.e., $q$ can be $\mathrm{poly}(T)$). Inhomogeneous equations arise in many applications, including for example the discretization of PDEs with nontrivial boundary conditions. We also provide a quantum lower bound for the worst-case complexity of simulating strongly nonlinear dynamics, showing that the algorithm's condition $R < 1$ cannot be significantly improved in general (Theorem 4.2). Following the approach of [78, 79], we construct a protocol for distinguishing two states of a qubit driven by a certain quadratic ODE. Provided $R \ge \sqrt{2}$, this procedure distinguishes states with overlap $1 - \epsilon$ in time $\mathrm{poly}(\log(1/\epsilon))$. Since nonorthogonal quantum states are hard to distinguish, this implies a lower bound on the complexity of the quantum ODE problem. Our quantum algorithm could potentially be applied to study models governed by quadratic ODEs arising in biology and epidemiology as well as in fluid and plasma dynamics. In particular, the celebrated Navier-Stokes equation with linear damping, which describes many physical phenomena, can be treated by our approach provided the Reynolds number is sufficiently small. We also note that while the formal validity of our arguments assumes $R < 1$, we find in one numerical experiment that our proposed approach remains valid for larger $R$ (see Section 4.6). We emphasize that, as in related quantum algorithms for linear algebra and differential equations, instantiating our approach requires an implicit description of the problem that allows for efficient preparation of the initial state and implementation of the dynamics.
Furthermore, since the output is encoded in a quantum state, readout is restricted to features that can be revealed by efficient quantum measurements. More work remains to understand how these methods might be applied, as we discuss further in Section 4.7.

The remainder of this chapter is structured as follows. Section 4.2 introduces the quantum quadratic ODE problem. Section 4.3 presents the Carleman linearization procedure and describes its performance. Section 4.4 gives a detailed analysis of the quantum Carleman linearization algorithm. Section 4.5 establishes a quantum lower bound for simulating quadratic ODEs. Section 4.6 describes how our approach could be applied to several well-known ODEs and PDEs and presents numerical results for the case of the viscous Burgers equation. Finally, we conclude with a discussion of the results and some possible future directions in Section 4.7.

4.2 Quadratic ODEs

We focus on an initial value problem described by the $n$-dimensional quadratic ODE
$\frac{du}{dt} = F_2 u^{\otimes 2} + F_1 u + F_0(t), \quad u(0) = u_{\mathrm{in}}.$ (4.1)
Here $u = [u_1, \ldots, u_n]^T \in \mathbb{R}^n$, $u^{\otimes 2} = [u_1^2, u_1 u_2, \ldots, u_1 u_n, u_2 u_1, \ldots, u_n u_{n-1}, u_n^2]^T \in \mathbb{R}^{n^2}$, each $u_j = u_j(t)$ is a function of $t$ on the interval $[0, T]$ for $j \in [n] := \{1, \ldots, n\}$, $F_2 \in \mathbb{R}^{n \times n^2}$ and $F_1 \in \mathbb{R}^{n \times n}$ are time-independent matrices, and the inhomogeneity $F_0(t) \in \mathbb{R}^n$ is a $C^1$ continuous function of $t$. We let $\|\cdot\|$ denote the spectral norm. The main computational problem we consider is as follows.

Problem 4.1. In the quantum quadratic ODE problem, we consider an $n$-dimensional quadratic ODE as in (4.1). We assume $F_2$, $F_1$, and $F_0$ are $s$-sparse (i.e., have at most $s$ nonzero entries in each row and column), $F_1$ is diagonalizable, and the eigenvalues $\lambda_j$ of $F_1$ satisfy $\mathrm{Re}(\lambda_n) \le \cdots \le \mathrm{Re}(\lambda_1) < 0$. We parametrize the problem in terms of the quantity
$R := \frac{1}{|\mathrm{Re}(\lambda_1)|}\left(\|u_{\mathrm{in}}\|\,\|F_2\| + \frac{\|F_0\|}{\|u_{\mathrm{in}}\|}\right).$ (4.2)
For some given $T > 0$, we assume the values $\mathrm{Re}(\lambda_1)$, $\|F_2\|$, $\|F_1\|$, $\|F_0(t)\|$ for each $t \in [0, T]$, $\|F_0\| := \max_{t \in [0,T]} \|F_0(t)\|$, and $\|F_0'\| := \max_{t \in [0,T]} \|F_0'(t)\|$ are known, and that we are given oracles $O_{F_2}$, $O_{F_1}$, and $O_{F_0}$ that provide the locations and values of the nonzero entries of $F_2$, $F_1$, and $F_0(t)$ for any specified $t$, respectively, for any desired row or column.² We are also given the value $\|u_{\mathrm{in}}\|$ and an oracle $O_x$ that maps $|00\ldots 0\rangle \in \mathbb{C}^n$ to a quantum state proportional to $u_{\mathrm{in}}$. Our goal is to produce a quantum state $|u(T)\rangle$ that is $\epsilon$-close in $\ell^2$ norm to the normalized $u(T)$ for some given $T > 0$.

When $F_0(t) = 0$ (i.e., the ODE is homogeneous), the quantity $R = \|u_{\mathrm{in}}\|\,\|F_2\|/|\mathrm{Re}(\lambda_1)|$ is qualitatively similar to the Reynolds number, which characterizes the ratio of the (nonlinear) convective forces to the (linear) viscous forces within a fluid [117, 118]. More generally, $R$ quantifies the combined strength of the nonlinearity and the inhomogeneity relative to the dissipation. Note that without loss of generality, given a quadratic ODE satisfying (4.1) with $R < 1$, we can modify it by rescaling $u \to \gamma u$ with a suitable constant $\gamma$ to satisfy
$\|F_2\| + \|F_0\| < |\mathrm{Re}(\lambda_1)|$ (4.3)
and
$\|u_{\mathrm{in}}\| < 1,$ (4.4)
with $R$ left unchanged by the rescaling. We use this rescaling in our algorithm and its analysis. With this rescaling, a small $R$ implies both a small $\|u_{\mathrm{in}}\|\,\|F_2\|$ and a small $\|F_0\|/\|u_{\mathrm{in}}\|$ relative to $|\mathrm{Re}(\lambda_1)|$.

²For instance, $F_1$ is modeled by a sparse matrix oracle $O_{F_1}$ that, on input $(j, l)$, gives the location of the $l$-th nonzero entry in row $j$, denoted $k$, and gives the value $(F_1)_{j,k}$.

4.3 Quantum Carleman linearization

It is challenging to directly simulate quadratic ODEs using quantum computers, and indeed the complexity of the best known quantum algorithm is exponential in the evolution time $T$ [74]. However, for a dissipative nonlinear ODE without a source, any quadratic nonlinear effect is only significant for a finite time because of the dissipation. To exploit this, we can create a linear system that approximates the initial nonlinear evolution within some controllable error.
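To make the roles of $R$ and the rescaling concrete, here is a small numerical sketch (the matrices below are random hypothetical toy data, not an instance from the text): it computes $R$ for a toy quadratic ODE and checks that rescaling $u \to \gamma u$, which maps $F_2 \to F_2/\gamma$ and $F_0 \to \gamma F_0$ while fixing $F_1$, leaves $R$ unchanged and enforces (4.3) and (4.4).

```python
import numpy as np

# Numerical sketch of the parameter R from (4.2) and the rescaling u -> gamma*u.
# All matrices below are hypothetical toy data, chosen only so that R < 1.
rng = np.random.default_rng(1)
n = 4
F1 = np.diag([-2.0, -2.5, -3.0, -3.5])          # Re(lambda_1) = -2
F2 = rng.standard_normal((n, n * n))
F2 *= 0.5 / np.linalg.norm(F2, 2)               # spectral norm 0.5
F0 = rng.standard_normal(n)
F0 *= 0.3 / np.linalg.norm(F0)
u_in = rng.standard_normal(n)
u_in *= 1.5 / np.linalg.norm(u_in)

def R_value(F2, F0, u_in, re_lambda1):
    nu = np.linalg.norm(u_in)
    return (nu * np.linalg.norm(F2, 2) + np.linalg.norm(F0) / nu) / abs(re_lambda1)

R = R_value(F2, F0, u_in, -2.0)

# Rescale u -> gamma*u, i.e. F2 -> F2/gamma, F0 -> gamma*F0, u_in -> gamma*u_in.
gamma = 0.99 / np.linalg.norm(u_in)
F2s, F0s, u_s = F2 / gamma, gamma * F0, gamma * u_in
Rs = R_value(F2s, F0s, u_s, -2.0)

cond_43 = np.linalg.norm(F2s, 2) + np.linalg.norm(F0s) < 2.0   # condition (4.3)
cond_44 = np.linalg.norm(u_s) < 1.0                            # condition (4.4)
print(R, Rs, cond_43, cond_44)
```

Choosing $\gamma$ close to $1/\|u_{\mathrm{in}}\|$ makes (4.3) and (4.4) hold simultaneously whenever $R < 1$.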
After the nonlinear effects are no longer important, the linear system properly captures the almost-linear evolution from then on. We develop such a quantum algorithm using the concept of Carleman linearization [73, 82, 83]. Carleman linearization is a method for converting a finite-dimensional system of nonlinear differential equations into an infinite-dimensional linear one. This is achieved by introducing powers of the variables into the system, allowing it to be written as an infinite sequence of coupled linear differential equations. We then truncate the system to $N$ equations, where the truncation level $N$ depends on the allowed approximation error, giving a finite linear ODE system.

Let us describe the Carleman linearization procedure in more detail. Given a system of quadratic ODEs (4.1), we apply the Carleman procedure to obtain the system of linear ODEs
$\frac{d\hat{y}}{dt} = A(t)\hat{y} + b(t), \quad \hat{y}(0) = \hat{y}_{\mathrm{in}}$ (4.5)
with the tridiagonal block structure
$\frac{d}{dt}\begin{bmatrix} \hat{y}_1 \\ \hat{y}_2 \\ \hat{y}_3 \\ \vdots \\ \hat{y}_{N-1} \\ \hat{y}_N \end{bmatrix} = \begin{bmatrix} A_1^1 & A_2^1 & & & & \\ A_1^2 & A_2^2 & A_3^2 & & & \\ & A_2^3 & A_3^3 & A_4^3 & & \\ & & \ddots & \ddots & \ddots & \\ & & & A_{N-2}^{N-1} & A_{N-1}^{N-1} & A_N^{N-1} \\ & & & & A_{N-1}^N & A_N^N \end{bmatrix}\begin{bmatrix} \hat{y}_1 \\ \hat{y}_2 \\ \hat{y}_3 \\ \vdots \\ \hat{y}_{N-1} \\ \hat{y}_N \end{bmatrix} + \begin{bmatrix} F_0(t) \\ 0 \\ 0 \\ \vdots \\ 0 \\ 0 \end{bmatrix},$ (4.6)
where $\hat{y}_j = u^{\otimes j} \in \mathbb{R}^{n^j}$, $\hat{y}_{\mathrm{in}} = [u_{\mathrm{in}}; u_{\mathrm{in}}^{\otimes 2}; \ldots; u_{\mathrm{in}}^{\otimes N}]$, and $A_{j+1}^j \in \mathbb{R}^{n^j \times n^{j+1}}$, $A_j^j \in \mathbb{R}^{n^j \times n^j}$, $A_{j-1}^j \in \mathbb{R}^{n^j \times n^{j-1}}$ for $j \in [N]$ satisfy
$A_{j+1}^j = F_2 \otimes I^{\otimes(j-1)} + I \otimes F_2 \otimes I^{\otimes(j-2)} + \cdots + I^{\otimes(j-1)} \otimes F_2,$ (4.7)
$A_j^j = F_1 \otimes I^{\otimes(j-1)} + I \otimes F_1 \otimes I^{\otimes(j-2)} + \cdots + I^{\otimes(j-1)} \otimes F_1,$ (4.8)
$A_{j-1}^j = F_0(t) \otimes I^{\otimes(j-1)} + I \otimes F_0(t) \otimes I^{\otimes(j-2)} + \cdots + I^{\otimes(j-1)} \otimes F_0(t).$ (4.9)
Note that $A$ is a $(3Ns)$-sparse matrix. The dimension of (4.5) is
$\Delta := n + n^2 + \cdots + n^N = \frac{n^{N+1} - n}{n - 1} = O(n^N).$ (4.10)

To construct a system of linear equations, we next divide $[0, T]$ into $m = T/h$ time steps and apply the forward Euler method to (4.5), letting
$y^{k+1} = [I + A(kh)h]y^k + b(kh)h,$ (4.11)
where $y^k \in \mathbb{R}^\Delta$ approximates $\hat{y}(kh)$ for each $k \in [m+1]_0 := \{0, 1, \ldots, m\}$, with $y^0 = y_{\mathrm{in}} := \hat{y}(0) = \hat{y}_{\mathrm{in}}$, and letting all $y^k$ be equal for $k \in [m+p+1]_0 \setminus [m+1]_0$, for some sufficiently large integer $p$. (It is unclear whether another discretization could improve performance, as discussed further in Section 4.7.) This gives an $(m+p+1)\Delta \times (m+p+1)\Delta$ linear system
$L|Y\rangle = |B\rangle$ (4.12)
that encodes (4.11) and uses it to produce a numerical solution at time $T$, where
$L = \sum_{k=0}^{m+p}|k\rangle\langle k| \otimes I - \sum_{k=1}^{m}|k\rangle\langle k-1| \otimes [I + A((k-1)h)h] - \sum_{k=m+1}^{m+p}|k\rangle\langle k-1| \otimes I$ (4.13)
and
$|B\rangle = \frac{1}{\sqrt{B_m}}\left(\|y_{\mathrm{in}}\|\,|0\rangle \otimes |y_{\mathrm{in}}\rangle + \sum_{k=1}^{m}\|b((k-1)h)h\|\,|k\rangle \otimes |b((k-1)h)\rangle\right)$ (4.14)
with a normalizing factor $B_m$. Observe that the system (4.12) has the lower triangular structure
$\begin{bmatrix} I & & & & & & \\ -[I+A(0)h] & I & & & & & \\ & \ddots & \ddots & & & & \\ & & -[I+A((m-1)h)h] & I & & & \\ & & & -I & I & & \\ & & & & \ddots & \ddots & \\ & & & & & -I & I \end{bmatrix}\begin{bmatrix} y^0 \\ y^1 \\ \vdots \\ y^m \\ y^{m+1} \\ \vdots \\ y^{m+p} \end{bmatrix} = \begin{bmatrix} y_{\mathrm{in}} \\ b(0)h \\ \vdots \\ b((m-1)h)h \\ 0 \\ \vdots \\ 0 \end{bmatrix}.$ (4.15)
In the above system, the first $n$ components of $y^k$ for $k \in [m+p+1]_0 \setminus [m]_0$ (i.e., $y_1^k$) approximate the exact solution $u(T)$, up to normalization. We apply the high-precision quantum linear system algorithm (QLSA) [46] to (4.12) and postselect on $k$ to produce $y_1^k/\|y_1^k\|$ for some $k \in [m+p+1]_0 \setminus [m]_0$. The resulting error is at most
$\epsilon := \max_{k \in [m+p+1]_0 \setminus [m]_0}\left\|\frac{u(T)}{\|u(T)\|} - \frac{y_1^k}{\|y_1^k\|}\right\|.$ (4.16)
This error includes contributions from both Carleman linearization and the forward Euler method. (The QLSA also introduces error, which we bound separately. Note that we could apply the original QLSA [84] instead of its subsequent improvement [46], but this would slightly complicate the error analysis and might perform worse in practice.)

Now we state our main algorithmic result.

Theorem 4.1 (Quantum Carleman linearization algorithm). Consider an instance of the quantum quadratic ODE problem as defined in Problem 4.1, with its Carleman linearization as defined in (4.5). Assume $R < 1$. Let
$g := \|u(T)\|, \quad q := \frac{\|u_{\mathrm{in}}\|}{\|u(T)\|}.$ (4.17)
There exists a quantum algorithm producing a state that approximates $u(T)/\|u(T)\|$ with error at most $\epsilon \le 1$, succeeding with probability $\Omega(1)$, with a flag indicating success, using at most
$\frac{sT^2 q\left[(\|F_2\| + \|F_1\| + \|F_0\|)^2 + \|F_0'\|\right]}{(1 - \|u_{\mathrm{in}}\|)^2(\|F_2\| + \|F_0\|)g\epsilon}\,\mathrm{poly}\!\left(\log\!\left(\frac{sT\|F_2\|\,\|F_1\|\,\|F_0\|\,\|F_0'\|}{(1 - \|u_{\mathrm{in}}\|)g\epsilon}\right)\Big/\log(1/\|u_{\mathrm{in}}\|)\right)$ (4.18)
queries to the oracles $O_{F_2}$, $O_{F_1}$, $O_{F_0}$, and $O_x$. The gate complexity is larger than the query complexity by a factor of $\mathrm{poly}\!\left(\log\!\left(nsT\|F_2\|\,\|F_1\|\,\|F_0\|\,\|F_0'\|/(1 - \|u_{\mathrm{in}}\|)g\epsilon\right)/\log(1/\|u_{\mathrm{in}}\|)\right)$. Furthermore, if the eigenvalues of $F_1$ are all real, the query complexity is
$\frac{sT^2 q\left[(\|F_2\| + \|F_1\| + \|F_0\|)^2 + \|F_0'\|\right]}{g\epsilon}\,\mathrm{poly}\!\left(\log\!\left(\frac{sT\|F_2\|\,\|F_1\|\,\|F_0\|\,\|F_0'\|}{g\epsilon}\right)\Big/\log(1/\|u_{\mathrm{in}}\|)\right)$ (4.19)
and the gate complexity is larger by a factor of $\mathrm{poly}\!\left(\log\!\left(nsT\|F_2\|\,\|F_1\|\,\|F_0\|\,\|F_0'\|/g\epsilon\right)/\log(1/\|u_{\mathrm{in}}\|)\right)$.

4.4 Algorithm analysis

In this section we establish several lemmas and use them to prove Theorem 4.1.

4.4.1 Solution error

The solution error has three contributions: the error from applying Carleman linearization to (4.1), the error in the time discretization of (4.5) by the forward Euler method, and the error from the QLSA. Since the QLSA produces a solution with error at most $\epsilon$ with complexity $\mathrm{poly}(\log(1/\epsilon))$ [46], we focus on bounding the first two contributions.

4.4.1.1 Error from Carleman linearization

First, we provide an upper bound on the error from Carleman linearization for arbitrary evolution time $T$.
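Before stating the bound, a quick numerical sanity check of this behavior (a hypothetical scalar instance, with a fine forward Euler discretization so that the truncation error dominates): for a dissipative homogeneous example with $R < 1$, the observed error in the first Carleman component shrinks as the truncation order $N$ grows, consistent with an error bound that decays geometrically in $N$.

```python
import numpy as np

# Hypothetical scalar test case du/dt = f2*u^2 + f1*u with R = |u0|*|f2|/|f1| < 1.
f2, f1, u0, T, m = 0.5, -1.0, 0.8, 2.0, 50000
h = T / m
exact = f1 * u0 / ((f1 + f2 * u0) * np.exp(-f1 * T) - f2 * u0)  # closed form

def carleman_error(N):
    """Error of the first component of the order-N truncated Carleman system."""
    A = np.zeros((N, N))
    for j in range(1, N + 1):
        A[j - 1, j - 1] = j * f1          # block A^j_j
        if j < N:
            A[j - 1, j] = j * f2          # block A^j_{j+1}
    y = np.array([u0 ** j for j in range(1, N + 1)])
    for _ in range(m):
        y = y + h * (A @ y)               # forward Euler step
    return abs(y[0] - exact)

errs = [carleman_error(N) for N in (2, 4, 6)]
print(errs)
```

The errors decrease with $N$ roughly like $R^N$ (here $R = 0.4$), up to the small residual error of the Euler discretization.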
To the best of our knowledge, the first and only explicit bound on the error of Carleman linearization appears in [73]. However, that work only considers homogeneous quadratic ODEs; furthermore, to bound the error for arbitrary $T$, it assumes the logarithmic norm of $F_1$ is negative (see Theorems 4.2 and 4.3 of [73]), which is too strong for our case. Instead, we give a novel analysis under milder conditions, providing the first convergence guarantee for general inhomogeneous quadratic ODEs. We begin with a lemma that describes the decay of the solution of (4.1).

Lemma 4.1. Consider an instance of the quadratic ODE (4.1), and assume $R < 1$ as defined in (4.2). Let
$r_\pm := \frac{-\mathrm{Re}(\lambda_1) \pm \sqrt{\mathrm{Re}(\lambda_1)^2 - 4\|F_2\|\,\|F_0\|}}{2\|F_2\|}.$ (4.20)
Then $r_\pm$ are distinct real numbers with $0 \le r_- < r_+$, and the solution $u(t)$ of (4.1) satisfies $\|u(t)\| < \|u_{\mathrm{in}}\| < r_+$ for any $t > 0$.

Proof. Consider the derivative of $\|u(t)\|^2$. We have
$\frac{d\|u\|^2}{dt} = u^\dagger F_2(u \otimes u) + (u \otimes u)^\dagger F_2^\dagger u + u^\dagger(F_1 + F_1^\dagger)u + u^\dagger F_0(t) + F_0(t)^\dagger u \le 2\|F_2\|\,\|u\|^3 + 2\,\mathrm{Re}(\lambda_1)\|u\|^2 + 2\|F_0\|\,\|u\|.$ (4.21)
If $\|u\| \ne 0$, then
$\frac{d\|u\|}{dt} \le \|F_2\|\,\|u\|^2 + \mathrm{Re}(\lambda_1)\|u\| + \|F_0\|.$ (4.22)
Letting $a = \|F_2\| > 0$, $b = \mathrm{Re}(\lambda_1) < 0$, and $c = \|F_0\| > 0$, we consider the one-dimensional quadratic ODE
$\frac{dx}{dt} = ax^2 + bx + c, \quad x(0) = \|u_{\mathrm{in}}\|.$ (4.23)
Since $R < 1$ is equivalent to $-b > a\|u_{\mathrm{in}}\| + c/\|u_{\mathrm{in}}\|$, the discriminant satisfies
$b^2 - 4ac > \left(a\|u_{\mathrm{in}}\| + \frac{c}{\|u_{\mathrm{in}}\|}\right)^2 - 4a\|u_{\mathrm{in}}\|\cdot\frac{c}{\|u_{\mathrm{in}}\|} = \left(a\|u_{\mathrm{in}}\| - \frac{c}{\|u_{\mathrm{in}}\|}\right)^2 \ge 0.$ (4.24)
Thus $r_\pm$ defined in (4.20) are distinct real roots of $ax^2 + bx + c$. Since $r_- + r_+ = -\frac{b}{a} > 0$ and $r_- r_+ = \frac{c}{a} \ge 0$, we have $0 \le r_- < r_+$. We can rewrite the ODE as
$\frac{dx}{dt} = ax^2 + bx + c = a(x - r_-)(x - r_+), \quad x(0) = \|u_{\mathrm{in}}\|.$ (4.25)
Letting $y = x - r_-$, we obtain an associated homogeneous quadratic ODE
$\frac{dy}{dt} = -a(r_+ - r_-)y + ay^2 = ay[y - (r_+ - r_-)], \quad y(0) = \|u_{\mathrm{in}}\| - r_-.$ (4.26)
Since the homogeneous equation has the closed-form solution
$y(t) = \frac{r_+ - r_-}{1 - e^{a(r_+ - r_-)t}\left[1 - (r_+ - r_-)/(\|u_{\mathrm{in}}\| - r_-)\right]},$ (4.27)
the solution of the inhomogeneous equation can be obtained as
$x(t) = r_- + \frac{r_+ - r_-}{1 - e^{a(r_+ - r_-)t}\left[1 - (r_+ - r_-)/(\|u_{\mathrm{in}}\| - r_-)\right]}.$ (4.28)
Therefore we have
$\|u(t)\| \le r_- + \frac{r_+ - r_-}{1 - e^{a(r_+ - r_-)t}\left[1 - (r_+ - r_-)/(\|u_{\mathrm{in}}\| - r_-)\right]}.$ (4.29)
Since $R < 1$ is equivalent to $a\|u_{\mathrm{in}}\| + c/\|u_{\mathrm{in}}\| < -b$, i.e., $a\|u_{\mathrm{in}}\|^2 + b\|u_{\mathrm{in}}\| + c < 0$, the value $\|u_{\mathrm{in}}\|$ is located between the two roots $r_-$ and $r_+$, and thus $1 - (r_+ - r_-)/(\|u_{\mathrm{in}}\| - r_-) < 0$. This implies that the right-hand side of (4.29) decreases from its initial value $\|u_{\mathrm{in}}\|$, so we have $\|u(t)\| < \|u_{\mathrm{in}}\| < r_+$ for any $t > 0$.

We remark that $\lim_{t \to \infty} \frac{d\|u\|}{dt} = 0$ since $\frac{d\|u\|}{dt} < 0$ and $\|u(t)\| \ge 0$, so $u(t)$ approaches a stationary point of the right-hand side of (4.1) (called an attractor in the theory of dynamical systems). Note that for a homogeneous equation (i.e., $\|F_0\| = 0$), this shows that the dissipation inevitably leads to exponential decay. In this case we have $r_- = 0$, so (4.29) gives
$\|u(t)\| \le \frac{\|u_{\mathrm{in}}\| r_+}{e^{a r_+ t}(r_+ - \|u_{\mathrm{in}}\|) + \|u_{\mathrm{in}}\|},$ (4.30)
which decays exponentially in $t$. On the other hand, as mentioned in the introduction, the solution of a dissipative inhomogeneous quadratic ODE can remain asymptotically nonzero. Here we present an example of this. Consider a time-independent uncoupled system with $\frac{du_j}{dt} = f_2 u_j^2 + f_1 u_j + f_0$, $j \in [n]$, with $u_j(0) = x_0 > 0$, $f_2 > 0$, $f_1 < 0$, $f_0 > 0$, and $R < 1$. We see that each $u_j(t)$ decreases from $x_0$ to $x_1 := \frac{-f_1 - \sqrt{f_1^2 - 4 f_2 f_0}}{2 f_2} > 0$, with $0 < x_1 < u_j(t) < x_0$. Hence, the norm of $u(t)$ is bounded as $0 < \sqrt{n}\,x_1 < \|u(t)\| < \sqrt{n}\,x_0$ for any $t > 0$. In general, it is hard to lower bound $\|u(t)\|$, but the above example shows that a nonzero inhomogeneity can prevent the norm of the solution from decreasing to zero.

We now give an upper bound on the error of Carleman linearization.

Lemma 4.2. Consider an instance of the quadratic ODE (4.1), with its corresponding Carleman linearization as defined in (4.5). As in Problem 4.1, assume that the eigenvalues $\lambda_j$ of $F_1$ satisfy $\mathrm{Re}(\lambda_n) \le \cdots \le \mathrm{Re}(\lambda_1) < 0$. Assume that $R$ defined in (4.2) satisfies $R < 1$. Then for any $j \in [N]$, the error $\eta_j(t) := u^{\otimes j}(t) - \hat{y}_j(t)$ satisfies
$\|\eta_j(t)\| \le \|\eta(t)\| \le t N \|F_2\|\,\|u_{\mathrm{in}}\|^{N+1}.$ (4.31)

Proof.
The exact solution $u(t)$ of the original quadratic ODE (4.1) satisfies
$\frac{d}{dt}\begin{bmatrix} u \\ u^{\otimes 2} \\ u^{\otimes 3} \\ \vdots \\ u^{\otimes(N-1)} \\ u^{\otimes N} \\ \vdots \end{bmatrix} = \begin{bmatrix} A_1^1 & A_2^1 & & & & & \\ A_1^2 & A_2^2 & A_3^2 & & & & \\ & A_2^3 & A_3^3 & A_4^3 & & & \\ & & \ddots & \ddots & \ddots & & \\ & & & A_{N-2}^{N-1} & A_{N-1}^{N-1} & A_N^{N-1} & \\ & & & & A_{N-1}^N & A_N^N & A_{N+1}^N \\ & & & & & \ddots & \ddots \end{bmatrix}\begin{bmatrix} u \\ u^{\otimes 2} \\ u^{\otimes 3} \\ \vdots \\ u^{\otimes(N-1)} \\ u^{\otimes N} \\ \vdots \end{bmatrix} + \begin{bmatrix} F_0(t) \\ 0 \\ 0 \\ \vdots \\ 0 \\ 0 \\ \vdots \end{bmatrix}$ (4.32)
and the approximate solution $\hat{y}_j(t)$ satisfies (4.6). Comparing these equations, we see that $\eta := [\eta_1; \eta_2; \ldots; \eta_N]$ satisfies
$\frac{d\eta}{dt} = A(t)\eta + \hat{b}(t), \quad \eta(0) = 0$ (4.33)
with the tridiagonal block structure
$\frac{d}{dt}\begin{bmatrix} \eta_1 \\ \eta_2 \\ \eta_3 \\ \vdots \\ \eta_{N-1} \\ \eta_N \end{bmatrix} = \begin{bmatrix} A_1^1 & A_2^1 & & & & \\ A_1^2 & A_2^2 & A_3^2 & & & \\ & A_2^3 & A_3^3 & A_4^3 & & \\ & & \ddots & \ddots & \ddots & \\ & & & A_{N-2}^{N-1} & A_{N-1}^{N-1} & A_N^{N-1} \\ & & & & A_{N-1}^N & A_N^N \end{bmatrix}\begin{bmatrix} \eta_1 \\ \eta_2 \\ \eta_3 \\ \vdots \\ \eta_{N-1} \\ \eta_N \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 0 \\ A_{N+1}^N u^{\otimes(N+1)} \end{bmatrix}.$ (4.34)
Consider the derivative of $\|\eta(t)\|^2$. We have
$\frac{d\|\eta\|^2}{dt} = \eta^\dagger(A(t) + A^\dagger(t))\eta + \eta^\dagger \hat{b}(t) + \hat{b}(t)^\dagger \eta.$ (4.35)
For $\eta^\dagger(A(t) + A^\dagger(t))\eta$, we bound each term as
$\eta_j^\dagger A_{j+1}^j \eta_{j+1} + \eta_{j+1}^\dagger (A_{j+1}^j)^\dagger \eta_j \le 2j\|F_2\|\,\|\eta_{j+1}\|\,\|\eta_j\|,$
$\eta_j^\dagger\left[A_j^j + (A_j^j)^\dagger\right]\eta_j \le 2j\,\mathrm{Re}(\lambda_1)\|\eta_j\|^2,$ (4.36)
$\eta_j^\dagger A_{j-1}^j \eta_{j-1} + \eta_{j-1}^\dagger (A_{j-1}^j)^\dagger \eta_j \le 2j\|F_0\|\,\|\eta_{j-1}\|\,\|\eta_j\|$
using the definitions (4.7)-(4.9). Define a matrix $G \in \mathbb{R}^{N \times N}$ with nonzero entries $G_{j-1,j} = j\|F_0\|$, $G_{j,j} = j\,\mathrm{Re}(\lambda_1)$, and $G_{j+1,j} = j\|F_2\|$, and let $|\eta| := (\|\eta_1\|, \ldots, \|\eta_N\|)^T$. Then
$\eta^\dagger(A(t) + A^\dagger(t))\eta \le |\eta|^\dagger(G + G^\dagger)|\eta|.$ (4.37)
Since $\|F_2\| + \|F_0\| < |\mathrm{Re}(\lambda_1)|$, the matrix $G$ is strictly diagonally dominant, and thus the eigenvalues of $G$ have negative real part. Thus we have
$|\eta|^\dagger(G + G^\dagger)|\eta| \le 2\,\mathrm{Re}(\lambda_1)\|\eta\|^2.$ (4.38)
For $\eta^\dagger \hat{b}(t) + \hat{b}(t)^\dagger \eta$, we have
$\eta^\dagger \hat{b}(t) + \hat{b}(t)^\dagger \eta \le 2\|\hat{b}\|\,\|\eta\| \le 2\|A_{N+1}^N\|\,\|u^{\otimes(N+1)}\|\,\|\eta\|.$ (4.39)
Since $\|A_{N+1}^N\| \le N\|F_2\|$ and $\|u^{\otimes(N+1)}\| = \|u\|^{N+1} \le \|u_{\mathrm{in}}\|^{N+1}$, we have
$\eta^\dagger \hat{b}(t) + \hat{b}(t)^\dagger \eta \le 2N\|F_2\|\,\|u_{\mathrm{in}}\|^{N+1}\|\eta\|.$ (4.40)
Using (4.37), (4.38), and (4.40) in (4.35), we find
$\frac{d\|\eta\|^2}{dt} \le 2\,\mathrm{Re}(\lambda_1)\|\eta\|^2 + 2N\|F_2\|\,\|u_{\mathrm{in}}\|^{N+1}\|\eta\|,$ (4.41)
so elementary calculus gives
$\frac{d\|\eta\|}{dt} = \frac{1}{2\|\eta\|}\frac{d\|\eta\|^2}{dt} \le \mathrm{Re}(\lambda_1)\|\eta\| + N\|F_2\|\,\|u_{\mathrm{in}}\|^{N+1}.$ (4.42)
Solving the differential inequality as an equation with $\eta(0) = 0$ gives a bound on $\|\eta\|$:
$\|\eta(t)\| \le \int_0^t e^{\mathrm{Re}(\lambda_1)(t-s)} N\|F_2\|\,\|u_{\mathrm{in}}\|^{N+1}\,ds = N\|F_2\|\,\|u_{\mathrm{in}}\|^{N+1}\int_0^t e^{\mathrm{Re}(\lambda_1)(t-s)}\,ds.$ (4.43)
Finally, using
$\int_0^t e^{\mathrm{Re}(\lambda_1)(t-s)}\,ds = \frac{1 - e^{\mathrm{Re}(\lambda_1)t}}{|\mathrm{Re}(\lambda_1)|} \le t$ (4.44)
(where we used the inequality $1 - e^{at} \le -at$ for $a < 0$), (4.43) gives the bound
$\|\eta_j(t)\| \le \|\eta(t)\| \le t N \|F_2\|\,\|u_{\mathrm{in}}\|^{N+1}$ (4.45)
as claimed.

Note that (4.44) can alternatively be bounded as
$\int_0^t e^{\mathrm{Re}(\lambda_1)(t-s)}\,ds = \frac{1 - e^{\mathrm{Re}(\lambda_1)t}}{|\mathrm{Re}(\lambda_1)|} \le \frac{1}{|\mathrm{Re}(\lambda_1)|},$ (4.46)
and thus $\|\eta_j(t)\| \le \|\eta(t)\| \le \frac{1}{|\mathrm{Re}(\lambda_1)|} N\|F_2\|\,\|u_{\mathrm{in}}\|^{N+1}$. We select (4.44) because it avoids introducing the additional parameter $\mathrm{Re}(\lambda_1)$.

We also give an improved analysis that works for homogeneous quadratic ODEs ($F_0(t) = 0$) under milder conditions. This analysis closely follows the proof in [73].

Corollary 4.1. Under the same setting as Lemma 4.2, assume $F_0(t) = 0$ in (4.1). Then for any $j \in [N]$, the error $\eta_j(t) := u^{\otimes j}(t) - \hat{y}_j(t)$ satisfies
$\|\eta_j(t)\| \le \|u_{\mathrm{in}}\|^j R^{N+1-j}.$ (4.47)
For $j = 1$, we have the tighter bound
$\|\eta_1(t)\| \le \|u_{\mathrm{in}}\| R^N\left(1 - e^{\mathrm{Re}(\lambda_1)t}\right)^N.$ (4.48)

Proof. We again consider $\eta$ satisfying (4.33). Since $F_0(t) = 0$, (4.33) reduces to a time-independent ODE with an upper triangular block structure,
$\frac{d\eta_j}{dt} = A_j^j \eta_j + A_{j+1}^j \eta_{j+1}, \quad j \in [N-1],$ (4.49)
and
$\frac{d\eta_N}{dt} = A_N^N \eta_N + A_{N+1}^N u^{\otimes(N+1)}.$ (4.50)
We proceed by backward substitution. Since $\eta_N(0) = 0$, we have
$\eta_N(t) = \int_0^t e^{A_N^N(t-s_0)} A_{N+1}^N u^{\otimes(N+1)}(s_0)\,ds_0.$ (4.51)
For $j \in [N]$, (4.8) gives $\|e^{A_j^j t}\| \le e^{j\,\mathrm{Re}(\lambda_1)t}$ and (4.7) gives $\|A_{j+1}^j\| \le j\|F_2\|$. By Lemma 4.1, $\|u^{\otimes(N+1)}\| = \|u\|^{N+1} \le \|u_{\mathrm{in}}\|^{N+1}$. We can therefore upper bound (4.51) by
$\|\eta_N(t)\| \le \int_0^t \|e^{A_N^N(t-s_0)}\|\,\|A_{N+1}^N u^{\otimes(N+1)}(s_0)\|\,ds_0 \le N\|F_2\|\,\|u_{\mathrm{in}}\|^{N+1}\int_0^t e^{N\,\mathrm{Re}(\lambda_1)(t-s_0)}\,ds_0.$ (4.52)
For $j = N-1$, (4.49) gives
$\frac{d\eta_{N-1}}{dt} = A_{N-1}^{N-1}\eta_{N-1} + A_N^{N-1}\eta_N.$ (4.53)
Again, since $\eta_{N-1}(0) = 0$, we have
$\eta_{N-1}(t) = \int_0^t e^{A_{N-1}^{N-1}(t-s_1)} A_N^{N-1}\eta_N(s_1)\,ds_1,$ (4.54)
which has the upper bound
$\|\eta_{N-1}(t)\| \le \int_0^t \|e^{A_{N-1}^{N-1}(t-s_1)}\|\,\|A_N^{N-1}\|\,\|\eta_N(s_1)\|\,ds_1 \le N(N-1)\|F_2\|^2\|u_{\mathrm{in}}\|^{N+1}\int_0^t e^{(N-1)\mathrm{Re}(\lambda_1)(t-s_1)}\int_0^{s_1} e^{N\,\mathrm{Re}(\lambda_1)(s_1-s_0)}\,ds_0\,ds_1,$ (4.55)
where we used (4.52) in the last step. Iterating this procedure for $j = N-2, \ldots, 1$, we find
$\|\eta_j(t)\| \le \frac{N!}{(j-1)!}\|F_2\|^{N+1-j}\|u_{\mathrm{in}}\|^{N+1}\int_0^t e^{j\,\mathrm{Re}(\lambda_1)(t-s_{N-j})}\int_0^{s_{N-j}} e^{(j+1)\mathrm{Re}(\lambda_1)(s_{N-j}-s_{N-1-j})}\cdots\int_0^{s_2} e^{(N-1)\mathrm{Re}(\lambda_1)(s_2-s_1)}\int_0^{s_1} e^{N\,\mathrm{Re}(\lambda_1)(s_1-s_0)}\,ds_0\cdots ds_{N-j}.$ (4.56)
Finally, using
$\int_0^{s_{k+1}} e^{(N-k)\mathrm{Re}(\lambda_1)(s_{k+1}-s_k)}\,ds_k = \frac{1 - e^{(N-k)\mathrm{Re}(\lambda_1)s_{k+1}}}{(N-k)|\mathrm{Re}(\lambda_1)|} \le \frac{1}{(N-k)|\mathrm{Re}(\lambda_1)|}$ (4.57)
for $k = 0, \ldots, N-j$, (4.56) can be bounded by
$\|\eta_j(t)\| \le \frac{N!}{(j-1)!}\|F_2\|^{N+1-j}\|u_{\mathrm{in}}\|^{N+1}\cdot\frac{(j-1)!}{N!\,|\mathrm{Re}(\lambda_1)|^{N+1-j}} = \|u_{\mathrm{in}}\|^{N+1}\frac{\|F_2\|^{N+1-j}}{|\mathrm{Re}(\lambda_1)|^{N+1-j}} = \|u_{\mathrm{in}}\|^j R^{N+1-j}.$ (4.58)
For $j = 1$, the bound can be further improved. By Lemma 5.2 of [73], if $a \ne 0$,
$\int_0^{s_N}\int_0^{s_{N-1}}\cdots\int_0^{s_2}\int_0^{s_1} e^{a(-Ns_0 + \sum_{k=1}^{N} s_k)}\,ds_0\,ds_1\cdots ds_{N-1} = \frac{(e^{a s_N} - 1)^N}{N!\,a^N}.$ (4.59)
With $s_N = t$ and $a = \mathrm{Re}(\lambda_1)$, we find
$\|\eta_1(t)\| \le N!\,\|F_2\|^N\|u_{\mathrm{in}}\|^{N+1}\int_0^t\cdots\int_0^{s_2}\int_0^{s_1} e^{\mathrm{Re}(\lambda_1)(-Ns_0 + \sum_{k=1}^{N} s_k)}\,ds_0\,ds_1\cdots ds_{N-1} = N!\,\|F_2\|^N\|u_{\mathrm{in}}\|^{N+1}\frac{(e^{\mathrm{Re}(\lambda_1)t} - 1)^N}{N!\,\mathrm{Re}(\lambda_1)^N} = \|u_{\mathrm{in}}\| R^N\left(1 - e^{\mathrm{Re}(\lambda_1)t}\right)^N,$ (4.60)
which is tighter than the $j = 1$ case of (4.58).

While Problem 4.1 makes some strong assumptions about the system of differential equations, they appear to be necessary for our analysis. Specifically, the conditions $\mathrm{Re}(\lambda_1) < 0$ and $R < 1$ are required to ensure arbitrary-time convergence. Since the Euler method for (4.5) is unstable if $\mathrm{Re}(\lambda_1) > 0$ [52, 119], we only consider the case $\mathrm{Re}(\lambda_1) \le 0$. If $\mathrm{Re}(\lambda_1) = 0$, (4.59) reduces to
$\int_0^{s_N}\cdots\int_0^{s_2}\int_0^{s_1} e^{a(-Ns_0 + \sum_{k=1}^{N} s_k)}\,ds_0\,ds_1\cdots ds_{N-1} = \frac{t^N}{N!},$ (4.61)
giving the error bound
$\|\eta_1(t)\| \le \|u_{\mathrm{in}}\|\left(\|u_{\mathrm{in}}\|\,\|F_2\|\,t\right)^N$ (4.62)
instead of (4.60). Then the error bound can be made arbitrarily small for a finite time by increasing $N$, but for $t > 1/\|u_{\mathrm{in}}\|\,\|F_2\|$, the error bound diverges. Furthermore, if $R \ge 1$, Bernoulli's inequality gives
$\|u_{\mathrm{in}}\|\left(1 - Ne^{\mathrm{Re}(\lambda_1)t}\right) \le \|u_{\mathrm{in}}\|\left(1 - Ne^{\mathrm{Re}(\lambda_1)t}\right)R^N \le \|u_{\mathrm{in}}\|\left(1 - e^{\mathrm{Re}(\lambda_1)t}\right)^N R^N,$ (4.63)
where the right-hand side upper bounds $\|\eta_1(t)\|$ as in (4.60). Thus, for the bound on $\|\eta_1(t)\| = \|u(t) - \hat{y}_1(t)\|$ to be smaller than $\|u_{\mathrm{in}}\|$, we require
$N = \Omega\!\left(e^{|\mathrm{Re}(\lambda_1)|t}\right).$ (4.64)
In other words, to apply (4.48) for the Carleman linearization procedure, the truncation order given by Lemma 4.2 must grow exponentially with $t$. In fact, we prove in Section 4.5 that for $R \ge \sqrt{2}$, no quantum algorithm (even one based on a technique other than Carleman linearization) can solve Problem 4.1 efficiently. It remains open to understand the complexity of the problem for $1 \le R < \sqrt{2}$. On the other hand, if $R < 1$, both (4.47) and (4.48) decrease exponentially with $N$, making the truncation efficient. We discuss the specific choice of $N$ in (4.149) below.

4.4.1.2 Error from forward Euler method

Next, we provide an upper bound for the error incurred by approximating (4.5) with the forward Euler method. This problem has been well studied for general ODEs. Given an ODE $\frac{dz}{dt} = f(z)$ on $[0, T]$ with an $L$-Lipschitz continuous function $f$, the global error of the numerical solution is upper bounded in terms of $e^{LT}$, although in most cases this bound overestimates the actual error [120]. To remove the exponential dependence on $T$ in our case, we derive a tighter bound for the time discretization of (4.5) in Lemma 4.3 below. This lemma is potentially useful for other ODEs as well and can be straightforwardly adapted to other problems.

Lemma 4.3. Consider an instance of the quantum quadratic ODE problem as defined in Problem 4.1, with $R < 1$ as defined in (4.2). Choose a time step
$h \le \min\left\{\frac{1}{N\|F_1\|},\ \frac{2\left(|\mathrm{Re}(\lambda_1)| - \|F_2\| - \|F_0\|\right)}{N\left(|\mathrm{Re}(\lambda_1)|^2 - (\|F_2\| + \|F_0\|)^2 + \|F_1\|^2\right)}\right\}$ (4.65)
in general, or
$h \le \frac{1}{N\|F_1\|}$ (4.66)
if the eigenvalues of $F_1$ are all real. Suppose the error from Carleman linearization $\eta(t)$ as defined in Lemma 4.2 is bounded by
$\|\eta(t)\| \le \frac{g}{4},$ (4.67)
where $g$ is defined in (4.17). Then the global error from the forward Euler method (4.11) on the interval $[0, T]$ for (4.5) satisfies
$\|\hat{y}_1(T) - y_1^m\| \le \|\hat{y}(T) - y^m\| \le 3N^{2.5}Th\left[(\|F_2\| + \|F_1\| + \|F_0\|)^2 + \|F_0'\|\right].$ (4.68)

Proof. We define a linear system that locally approximates the initial value problem (4.5) on $[kh, (k+1)h]$ for $k \in [m]_0$ as
$z^k = [I + A((k-1)h)h]\hat{y}((k-1)h) + b((k-1)h)h,$ (4.69)
where $\hat{y}(t)$ is the exact solution of (4.5). For $k \in [m]$, we denote the local truncation error by
$e^k := \|\hat{y}(kh) - z^k\|$ (4.70)
and the global error by
$g^k := \|\hat{y}(kh) - y^k\|,$ (4.71)
where $y^k$ in (4.11) is the numerical solution. Note that $g^m = \|\hat{y}(T) - y^m\|$. For the local truncation error, we Taylor expand $\hat{y}(kh)$ about $(k-1)h$ with a second-order Lagrange remainder, giving
$\hat{y}(kh) = \hat{y}((k-1)h) + \hat{y}'((k-1)h)h + \frac{\hat{y}''(\xi)h^2}{2}$ (4.72)
for some $\xi \in [(k-1)h, kh]$. Since $\hat{y}'((k-1)h) = A((k-1)h)\hat{y}((k-1)h) + b((k-1)h)$ by (4.5), we have
$\hat{y}(kh) = [I + A((k-1)h)h]\hat{y}((k-1)h) + b((k-1)h)h + \frac{\hat{y}''(\xi)h^2}{2} = z^k + \frac{\hat{y}''(\xi)h^2}{2},$ (4.73)
and thus the local error (4.70) can be bounded as
$e^k = \|\hat{y}(kh) - z^k\| = \left\|\frac{\hat{y}''(\xi)h^2}{2}\right\| \le \frac{Mh^2}{2},$ (4.74)
where $M := \max_{t \in [0,T]}\|\hat{y}''(t)\|$. By the triangle inequality, the global error (4.71) can therefore be bounded as
$g^k = \|\hat{y}(kh) - y^k\| \le \|\hat{y}(kh) - z^k\| + \|z^k - y^k\| = e^k + \|z^k - y^k\|.$ (4.75)
Since $y^k$ and $z^k$ are obtained by applying the same linear map to different vectors, we have the upper bound
$\|z^k - y^k\| = \left\|[I + A((k-1)h)h]\left[\hat{y}((k-1)h) - y^{k-1}\right]\right\| \le \|I + A((k-1)h)h\|\,\|\hat{y}((k-1)h) - y^{k-1}\| = \|I + A((k-1)h)h\|\,g^{k-1}.$ (4.76)
To provide an upper bound for $\|I + A(t)h\|$ for all $t \in [0, T]$, we write
$I + A(t)h = H_2 + H_1 + H_0(t),$ (4.77)
where
$H_2 = \sum_{j=1}^{N-1}|j\rangle\langle j+1| \otimes A_{j+1}^j h,$ (4.78)
$H_1 = I + \sum_{j=1}^{N}|j\rangle\langle j| \otimes A_j^j h,$ (4.79)
$H_0(t) = \sum_{j=2}^{N}|j\rangle\langle j-1| \otimes A_{j-1}^j h.$ (4.80)
We provide upper bounds separately for $\|H_2\|$, $\|H_1\|$, and $\|H_0\| := \max_{t \in [0,T]}\|H_0(t)\|$, and use the bound $\max_{t \in [0,T]}\|I + A(t)h\| \le \|H_2\| + \|H_1\| + \|H_0\|$. The eigenvalues of $A_j^j$ consist of all $j$-term sums of the eigenvalues of $F_1$. More precisely, they are $\{\sum_{\ell \in [j]}\lambda_{I_\ell^j}\}_{I^j \in [n]^j}$, where $\{\lambda_\alpha\}_{\alpha \in [n]}$ are the eigenvalues of $F_1$ and $I^j \in [n]^j$ is a $j$-tuple of indices. The eigenvalues of $H_1$ are thus $\{1 + h\sum_{\ell \in [j]}\lambda_{I_\ell^j}\}_{I^j \in [n]^j,\,j \in [N]}$.
Ajj?1h. (4.80) j=2 We provide upper bounds separately for ?H2?, ?H1?, and ?H0? := maxt?[0,T ] ?H0(t)?, and use the bound maxt?[0,T ] ?I + A(t)h? ? ?H2?+ ?H1?+ ?H0?. The eigenvalues of Aj? j consist of all j-term sums of the eigenvalues of F1. More precisely, they are { ??[j] ?Ij}Ij?[n]j where {??}??[n] are the eige?nvalues of F and I j 1 ? ? [n]j is a j-tuple of indices. The eigenvalues of H1 are thus {1+ h ??[j] ?Ij}Ij?[n]j ,j?[N ].? 150 With J := max??[n] | Im(??)|, we have ??? ? ???2 ??? ? ???2 ?? ? ??21 + h ?Ij = 1 + h Re(?Ij) + ?h Im(?Ij)?? ? ? ??[j] ??[j] ??[j] (4.81) ? 1? 2Nh|Re(? 2 2 2 21)|+N h (|Re(?1)| + J ) where we used Nh|Re(?1)| ? 1 by (4.65). Therefore ? ? ? ? ?H1? ? ? = max max ?1 + h ?Ij ? ? 1? 2Nh|Re(?1)|+N2h2(|Re(? )|2 + J21 ). j?[N ] Ij?[n]j ? ??[j] (4.82) We also have ??N??1 ?? ?H2? = ?? |j??j + 1| ? Aj ? jj+1h? ? max ?A? j+1?h ? N?F2?h (4.83)j [N ] j=1 and ??N??1 ?? ?H0? = max ?? |j??j01|?Aj ?j?1h? ? max max ?Aj ?h ? N max ?F0(t)?h ? N?F0?h.t?[0,T ] t? j?1[0,T ] j?[N ] t?[0,T ] j=1 (4.84) Using the bounds (4.82) and (4.83), we aim to select the value of h to ensure max ?I + A(t)h? ? ?H2?+ ?H1?+ ?H0? ? 1. (4.85) t?[0,T ] The assumption (4.65) implies ? 2(|Re(?1)| ? ?F2? ? ?F0?)h (4.86) N(|Re(? )|21 ? (?F2?+ ?F 2 20?) + J ) 151 (note that the denominator is non-zero due to (4.3)). Then we have N2h2? (|Re(?1)| 2 ? ?F ?22 + J2) ? 2Nh(|Re(?1)| ? ?F2? ? ?F0?) =? 1? 2Nh|Re(?1)|+N2h2(|Re(? 2 21)| + J ) ? 1?N(?F2?+ ?F0?)h, (4.87) so maxt?[0,T ] ?I + A(t)h? ? 1 as claimed. The choice (4.65) can be improved if an upper bound on J is known. In particular, if J = 0, (4.86) simplifies to 2 h ? , (4.88) N(|?1|+ ?F2?+ ?F0?) which is satisfied by (4.66) using ?F2?+ ?F0? < |?1| ? ?F1?. Using this in (4.76), we have ?zk ? yk? ? ?I + A((k ? 1)h)h?gk?1 ? gk?1. (4.89) Plugging (4.89) into (4.75) iteratively, we find ?k gk ? gk?1 + ek ? gk?2 + ek?1 + ek ? ? ? ? ? ej, k ? [m+ 1]0. 
(4.90) j=1 Using (4.74), this shows that the global error from the forward Euler method is bounded by ?k 2 ?y?1(kh)? yk1? ? ?y?(kh)? yk? Mkh = gk ? ej ? , (4.91) 2 j=1 152 and when k = m, mh = T , ? 1 ? m? ? m ? Mh 2 MTh y? (T ) y1 g m = . (4.92)2 2 Finally, we remove the dependence on M = max ?y???t?[0,T ] (?)?. Since y???(t) = [A(t)y?(t) + b(t)]? = A(t)y??(t) + A?(t)y?(t) + b?(t) (4.93) = A(t)[A(t)y?(t) + b(t)] + A?(t)y?(t) + b?(t), we have ?y???(t)? = ?A(t)?2?y?(t)?+ ?A(t)??b(t)?+ ?A?(t)??y?(t)?+ ?b?(t)?. (4.94) Maximizing each term for t ? [0, T ], we have ??N??1 ?N ?N max ?A(t)? = ?? |j??j + 1| ? Ajj+1 + |j??j| ? Aj j? j + |j??j ? 1| ? Aj?1t [0,T ] ?? ?? j=1 j=1 j=2 ? N(?F2?+ ?F1?+ ?F0?), ??? ?? (4.95)N max ?A?(t)? = ?? |j??j ? 1| ? (Ajj?1)??? ? N?F ?0(t)? ? N?F ?? 0?, (4.96)t [0,T ] j=2 max ?b(t)? = max ?F0(t)? ? ?F0?, (4.97) t?[0,T ] t?[0,T ] max ?b?(t)? = max ?F ?0(t)? ? ?F ?0?, (4.98) t?[0,T ] t?[0,T ] and using ?u? ? ?uin? < 1, ??j(t)? ? ??(t)? ? g/4 < ?uin?/4 by (4.67), and R < 1, 153 we have ?N ?N ?N ?y?(t)?2 ? ?y? (t)?2 = ?u?jj (t)? ?j(t)?2 ? 2 (?u?j(t)?2 + ?? 2j(t)? ) j=?1 ( j=1 ) j=1N 2 ?N ? 2 ? ?2j ?uin? 1uin + < 2 (1 + )?u 2in? < 4N?uin?2 < 4N 16 16 j=1 j=1 (4.99) for all t ? [0, T ]. Therefore, substituting the bounds (4.95)?(4.99) into (4.94), we find M ? max ?A(t)?2?y?(t)?+ ?A(t)??b(t)?+ ?A?(t)??y?(t)?+ ?b?(t)? t?[0,T ] ? 2N2.5(?F2?+ ?F1?+ ?F 20?) +N(?F2?+ ?F1?+ ?F0?)?F0?+ 2N1.5?F ? ?0?+ ?F0? ? 6N2.5[(?F2?+ ?F1?+ ?F 2 ?0?) + ?F0?]. (4.100) Thus, (4.92) gives ?y?1(kh)? ym? ? 3N2.5kh2[(?F ?+ ?F ?+ ?F ?)21 2 1 0 + ?F ?0?], (4.101) and when k = m, mh = T , ?y?1(T )? ym? ? 3N2.51 Th[(?F2?+ ?F1?+ ?F0?)2 + ?F ?0?] (4.102) as claimed. 4.4.2 Condition number Now we upper bound the condition number of the linear system. 154 Lemma 4.4. Consider an instance of the quantum quadratic ODE problem as defined in Problem 4.1. Apply the forward Euler method (4.11) with time step (4.65) to the Carleman linearization (4.5). 
Then the condition number of the matrix L defined in (4.12) satisfies ? ? 3(m+ p+ 1). (4.103) Proof. We begin by upper bounding ?L?. We write L = L1 + L2 + L3, (4.104) where m?+p L1 = |k??k| ? I, (4.105) k=?0m L2 = ? |k??k ? 1| ? [I + A((k ? 1)h)h], (4.106) k= m?1+p L3 = ? |k??k ? 1| ? I. (4.107) k=m+1 Clearly ?L1? = ?L3? = 1. Furthermore, ?L2? ? maxt?[0,T ] ?I + A(t)h? ? 1 by (4.85), which follows from the choice of time step (4.65). Therefore, ?L? ? ?L1?+ ?L2?+ ?L3? ? 3. (4.108) 155 Next we upper bound ?L?1? = sup ?L?1|B??. (4.109) ?|B???1 We express |B? as m?+p m?+p |B? = ?k|k? = |bk?, (4.110) k=0 k=0 where |bk? := ?k|k? satisfies m?+p ?|bk??2 = ?|B??2 ? 1. (4.111) k=0 Given any |bk? for k ? [m+ p+ 1]0, we define m?+p m?+p |Y k? := L?1|bk? = ?k|l? = |Y kl l ?, (4.112) l=0 l=0 where |Y kl ? := ?kl |l?. We first upper bound ?|Y k?? = ?L?1|bk??, and then use this to upper bound ?L?1|B??. We consider two cases. First, for fixed k ? [m+ 1]0, we directly calculate |Y kl ? for each l ? [m+ p+ 1]0 by the forward Euler method (4.11), giving ???? ???????0, if l ? [k]0;????|bk?, if l = k; |Y kl ? = ????? (4.113)?????l?1? j=k[I + A(jh)h]|b k?, if l ? [m+ 1]0 \ [k + 1] ;? 0????m?1j=k [I + A(jh)h]|bk?, if l ? [m+ p+ 1]0 \ [m+ 1]0. 156 Since maxt?[0,T ] ?I + A(t)h?| ? 1 by (4.85), (4.112) gives m?+p ?m m?+p ?|Y k??2 = ?|Y kl ??2 ? ?|bk??2 + ?|bk??2 l=0 l=k l=m+1 (4.114) ? (m+ p+ 1? k)?|bk??2 ? (m+ p+ 1)?|bk??2. Second, for fixed k ? [m+ p+ 1]0 \ [m+ 1]0, similarly to (4.113), we directly calculate |Y kl ? using (4.11), giving ???? ??0, if l ? [k]0;|Y kl ? = ??? (4.115)|bk?, if l ? [m+ p+ 1]0 \ [k]0. Similarly to (4.114), we have (again using (4.112)) m?+p m?+p ?|Y k??2 = ?|Y k??2 = ?|bk??2 = (m+ p+1? k)?|bk??2l ? (m+ p+1)?|bk??2. l=0 l=k (4.116) Combining (4.114) and (4.116), for any k ? [m+ p+ 1]0, we have ?|Y k??2 = ?L?1|bk??2 ? (m+ p+ 1)?|bk??2. (4.117) By the definition of |Y k? in (4.112), (4.117) gives ?L?1?2 = sup ?L?1|B??2 ? (m+p+1) sup ?L?1|bk??2 ? 
(m+p+1)2, (4.118) ?|B???1 ?|bk???1 157 and therefore ?L?1? ? (m+ p+ 1). (4.119) Finally, combining (4.108) with (4.119), we conclude ? = ?L??L?1? ? 3(m+ p+ 1) (4.120) as claimed. 4.4.3 State preparation We now describe a procedure for preparing the right-hand side |B? of the linear system (4.12), whose entries are composed of |yin? and |b((k ? 1)h)? for k ? [m]. The initial vector yin is a direct sum over spaces of different dimensions, which is cumbersome to prepare. Instead, we prepare an equivalent state that has a convenient tensor product structure. Specifically, we embed yin into a slightly larger space and prepare the normalized version of z = [u ? vN?1;u?2 ? vN?2; . . . ;u?Nin in 0 in 0 in ], (4.121) where v0 is some standard vector (for simplicity, we take v0 = |0?). If uin lives in a vector space of dimension n, then zin lives in a space of dimension NnN while yin lives in a slightly smaller space of dimension ? = n+n2+ ? ? ?+nN = (nN+1?n)/(n?1). Using standard techniques, all the operations we would otherwise apply to yin can be applied instead to zin, with the same effect. 158 Lemma 4.5. Assume we are given the value ?uin?, and let Ox be an oracle that maps |00 . . . 0? ? Cn to a normalized quantum state |uin? proportional to uin. Assume we are also given the values ?F0(t)? for each t ? [0, T ], and let OF0 be an oracle that provides the locations and values of the nonzero entries of F0(t) for any specified t. Then the quantum state |B? defined in (4.14) (with yin replaced by zin) can be prepared using O(N) queries to Ox and O(m) queries to OF0 , with gate complexity larger by a poly(logN, log n) factor. Proof. We first show how to prepare the state ?N | 1zin? = ? ?u jin? |j?|uin??j|0??N?j, (4.122) V j=1 where ?N V := ?u 2jin? . (4.123) j=1 This state can be prepared using N queries to the initial state oracle Ox applied in superposition to the intermediate state ?N |?int? 1 := ? ?u ?j|j? ? |0??Nin . 
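As a sanity check of this embedding (a small numerical sketch with a hypothetical $u_{\mathrm{in}}$, not an instance from the text), the padded vector $z_{\mathrm{in}}$ has squared norm $\sum_{j=1}^{N}\|u_{\mathrm{in}}\|^{2j}$, matching the normalization used in the state preparation below, and it sits in a space of dimension $Nn^N$ rather than $\Delta = (n^{N+1}-n)/(n-1)$:

```python
import numpy as np
from functools import reduce

# Build z_in = [u (x) v0^{N-1}; u^{(x)2} (x) v0^{N-2}; ...; u^{(x)N}] for a
# hypothetical u_in, and check its norm and dimension against (4.121).
rng = np.random.default_rng(3)
n, N = 3, 4
u = rng.standard_normal(n)
u *= 0.9 / np.linalg.norm(u)                  # ||u_in|| < 1
v0 = np.zeros(n); v0[0] = 1.0                 # standard vector v0 = |0>

def kron_power(vecs):
    return reduce(np.kron, vecs)

blocks = [kron_power([u] * j + [v0] * (N - j)) for j in range(1, N + 1)]
z_in = np.concatenate(blocks)                 # N blocks, each of length n^N

V = sum(np.linalg.norm(u) ** (2 * j) for j in range(1, N + 1))
dim_z = z_in.size                             # N * n^N
delta = (n ** (N + 1) - n) // (n - 1)         # dimension of y_in
print(np.isclose(z_in @ z_in, V), dim_z, delta)
```

Since $\|v_0\| = 1$, padding with $v_0$ changes neither the block norms nor the inner products used later, which is why the embedding is harmless.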
(4.124) V j=1 To efficiently prepare |?int?, notice that ?1 k??1 | ? ??uin??int = ? ? u ?j?2in |j0j1 . . . j ?Nk?1? ? |0? , (4.125) V j0,j1,...,jk?1=0 ?=0 where k := log2N (assuming for simplicity that N is a power of 2) and jk?1 . . . j1j0 is 159 the k-bit binary expansion of j ? 1. Observe that ?k?1 ( ?1 ) |? ? = ?1 ? ?u ?j?2 |j ? ? |0??Nint in ? (4.126) V ?=0 ? j?=0 where V? := 1 + ?uin?2 ?+1 . (4.127) ? (Notice that k?1?=0 V? = V/?uin?2.) Each tensor factor in (4.126) is a qubit that can be produced in constant time. Overall, we prepare these k = log2N qubit states and then apply Ox N times. We now discuss how to prepare the state 1 ( ?m )|B? = ? ?zin?|0? ? |zin?+ ?b((k ? 1)h)?|k? ? |b((k ? 1)h)? , (4.128) Bm k=1 in which we replace yin by zin in (4.14), and define ?m B 2m := ?zin? + ?b((k ? 1)h)?2. (4.129) k=1 This state can be prepared using the above procedure for |0? 7? |zin? and m queries to OF0 with t = (k ? 1)h that implement |0? 7? |b((k ? 1)h)? for k ? {1, . . . ,m}, applied in superposition to the intermediate state m | ? ?1 ( ? ) ?int = ?zin?|0? ? |0?+ ?b((k ? 1)h)?|k? ? |0? . (4.130) Bm k=1 160 Here the queries are applied conditionally upon the value in the first register: we prepare |zin? if the first register is |0? and |b((k?1)h)? if the first register is |k? for k ? {1, . . . ,m}. We can prepare |?int? (i.e., perform a unitary transform mapping |0?|0? 7? |?int?) in time complexity O(m) [92] using the known values of ?uin? and ?b((k ? 1)h)?. Overall, we use O(N) queries to Ox and O(m) queries to OF0 to prepare |B?. The gate complexity is larger by a poly(logN, log n) factor. 4.4.4 Measurement success probability After applying the QLSA to (4.12), we perform a measurement to extract a final state of the desired form. We now consider the probability of this measurement succeeding. Lemma 4.6. 
Consider an instance of the quantum quadratic ODE problem defined in Problem 4.1, with the QLSA applied to the linear system (4.12) using the forward Euler method (4.11) with time step (4.65). Suppose the error from Carleman linearization satisfies $\|\eta(t)\| \le \frac{g}{4}$ as in (4.67), and the global error from the forward Euler method as defined in Lemma 4.3 is bounded by
$\|\hat{y}(T) - y^m\| \le \frac{g}{4}$, (4.131)
where $g$ is defined in (4.17). Then the probability of measuring a state $|y_1^k\rangle$ for $k \in [m+p+1]_0 \setminus [m+1]_0$ satisfies
$P_{\mathrm{measure}} \ge \frac{p+1}{9(m+p+1)Nq^2}$, (4.132)
where $q$ is also defined in (4.17).

Proof. The idealized quantum state produced by the QLSA applied to (4.12) has the form
$|Y\rangle = \sum_{k=0}^{m+p} |y^k\rangle |k\rangle = \sum_{k=0}^{m+p} \sum_{j=1}^{N} |y_j^k\rangle |j\rangle |k\rangle$, (4.133)
where the states $|y^k\rangle$ and $|y_j^k\rangle$ for $k \in [m+p+1]_0$ and $j \in [N]$ are subnormalized to ensure $\||Y\rangle\| = 1$. We decompose the state $|Y\rangle$ as
$|Y\rangle = |Y_{\mathrm{bad}}\rangle + |Y_{\mathrm{good}}\rangle$, (4.134)
where
$|Y_{\mathrm{bad}}\rangle := \sum_{k=0}^{m-1} \sum_{j=1}^{N} |y_j^k\rangle |j\rangle |k\rangle + \sum_{k=m}^{m+p} \sum_{j=2}^{N} |y_j^k\rangle |j\rangle |k\rangle$, $\quad |Y_{\mathrm{good}}\rangle := \sum_{k=m}^{m+p} |y_1^k\rangle |1\rangle |k\rangle$. (4.135)
Note that $|y_1^k\rangle = |y_1^m\rangle$ for all $k \in \{m, m+1, \ldots, m+p\}$. We lower bound
$P_{\mathrm{measure}} := \frac{\||Y_{\mathrm{good}}\rangle\|^2}{\||Y\rangle\|^2} = \frac{(p+1)\||y_1^m\rangle\|^2}{\||Y\rangle\|^2}$ (4.136)
by lower bounding the terms of the product
$\frac{\||y_1^m\rangle\|^2}{\||Y\rangle\|^2} = \frac{\||y_1^m\rangle\|^2}{\||y_1^0\rangle\|^2} \cdot \frac{\||y_1^0\rangle\|^2}{\||Y\rangle\|^2}$. (4.137)

First, according to (4.67) and (4.131), the exact solution $u(T)$ and the approximate solution $y_1^m$ defined in (4.12) satisfy
$\|u(T) - y_1^m\| \le \|u(T) - \hat{y}_1(T)\| + \|\hat{y}_1(T) - y_1^m\| \le \|\eta(T)\| + \|\hat{y}(T) - y^m\| \le \frac{g}{2}$. (4.138)
Since $y_1^0 = (y_{\mathrm{in}})_1 = u_{\mathrm{in}}$, using (4.138), we have
$\frac{\||y_1^m\rangle\|}{\||y_1^0\rangle\|} = \frac{\|y_1^m\|}{\|u_{\mathrm{in}}\|} \ge \frac{\|u(T)\| - \|u(T) - y_1^m\|}{\|u_{\mathrm{in}}\|} \ge \frac{g - \|u(T) - y_1^m\|}{\|u_{\mathrm{in}}\|} \ge \frac{g}{2\|u_{\mathrm{in}}\|} = \frac{1}{2q}$. (4.139)

Second, we upper bound $\|y^k\|^2$ by
$\|y^k\|^2 = \|\hat{y}(kh) - [\hat{y}(kh) - y^k]\|^2 \le 2\bigl(\|\hat{y}(kh)\|^2 + \|\hat{y}(kh) - y^k\|^2\bigr)$. (4.140)
Using $\|\hat{y}(t)\|^2 < 4N\|u_{\mathrm{in}}\|^2$ by (4.99), and $\|\hat{y}(kh) - y^k\| \le \|\hat{y}(T) - y^m\| \le g/4 < \|u_{\mathrm{in}}\|/4$ by (4.131), and $R < 1$, we have
$\|y^k\|^2 \le 2\Bigl(4N\|u_{\mathrm{in}}\|^2 + \frac{\|u_{\mathrm{in}}\|^2}{16}\Bigr) < 9N\|u_{\mathrm{in}}\|^2$. (4.141)
Therefore
$\frac{\||y_1^0\rangle\|^2}{\||Y\rangle\|^2} = \frac{\||y_1^0\rangle\|^2}{\sum_{k=0}^{m+p} \||y^k\rangle\|^2} \ge \frac{\|u_{\mathrm{in}}\|^2}{9N(m+p+1)\|u_{\mathrm{in}}\|^2} = \frac{1}{9N(m+p+1)}$.
(4.142)
Finally, using (4.139) and (4.142) in (4.137) and (4.136), we have
$P_{\mathrm{measure}} \ge \frac{p+1}{9(m+p+1)Nq^2}$ (4.143)
as claimed.

Choosing $m = p$, we have $P_{\mathrm{measure}} = \Omega(1/Nq^2)$. Using amplitude amplification, $O(\sqrt{N} q)$ iterations suffice to succeed with constant probability.

4.4.5 Proof of Theorem 4.1

Proof. We first present the quantum Carleman linearization (QCL) algorithm and then analyze its complexity.

The QCL algorithm. We start by rescaling the system to satisfy (4.3) and (4.4). Given a quadratic ODE (4.1) satisfying $R < 1$ (where $R$ is defined in (4.2)), we define a scaling factor $\gamma > 0$, and rescale $u$ to
$\bar{u} := \gamma u$. (4.144)
Replacing $u$ by $\bar{u}$ in (4.1), we have
$\frac{d\bar{u}}{dt} = \frac{1}{\gamma} F_2 \bar{u}^{\otimes 2} + F_1 \bar{u} + \gamma F_0(t)$, $\quad \bar{u}(0) = \bar{u}_{\mathrm{in}} := \gamma u_{\mathrm{in}}$. (4.145)
We let $\bar{F}_2 := \frac{1}{\gamma} F_2$, $\bar{F}_1 := F_1$, and $\bar{F}_0(t) := \gamma F_0(t)$ so that
$\frac{d\bar{u}}{dt} = \bar{F}_2 \bar{u}^{\otimes 2} + \bar{F}_1 \bar{u} + \bar{F}_0(t)$, $\quad \bar{u}(0) = \bar{u}_{\mathrm{in}}$. (4.146)
Note that $R$ is invariant under this rescaling, so $R < 1$ still holds for the rescaled equation. Concretely, we take
$\gamma = \frac{1}{\sqrt{\|u_{\mathrm{in}}\|\, r_+}}$. (4.147)
After rescaling, the new quadratic ODE satisfies $\|\bar{u}_{\mathrm{in}}\|\, \bar{r}_+ = \gamma^2 \|u_{\mathrm{in}}\|\, r_+ = 1$. Since $\|u_{\mathrm{in}}\| < r_+$ by Lemma 4.1, we have $\bar{r}_- < \|\bar{u}_{\mathrm{in}}\| < 1 < \bar{r}_+$, so (4.4) holds. Furthermore, $1$ is located between the two roots $\bar{r}_-$ and $\bar{r}_+$, which implies $\|\bar{F}_2\| \cdot 1^2 - |\mathrm{Re}(\lambda_1)| \cdot 1 + \|\bar{F}_0\| < 0$ as shown in Lemma 4.1, so (4.3) holds for the rescaled problem. Having performed this rescaling, we henceforth assume that (4.3) and (4.4) are satisfied, and we drop the bars.

We then introduce the choice of parameters as follows. Given $g$ and an error bound $\epsilon \le 1$, we define
$\delta := \frac{g\epsilon}{1+\epsilon} \le \frac{g}{2}$. (4.148)
Given $\|u_{\mathrm{in}}\|$, $\|F_2\|$, and $\mathrm{Re}(\lambda_1) < 0$, we choose
$N = \Bigl\lceil \frac{\log(2T\|F_2\|/\delta)}{\log(1/\|u_{\mathrm{in}}\|)} \Bigr\rceil = \Bigl\lceil \frac{\log(2T\|F_2\|/\delta)}{\log(r_+)} \Bigr\rceil$. (4.149)
Since $\|u_{\mathrm{in}}\|/\delta > 1$ by (4.148) and $g < \|u_{\mathrm{in}}\|$, Lemma 4.2 gives
$\|u(T) - \hat{y}_1(T)\| \le \|\eta(T)\| \le TN\|F_2\|\, \|u_{\mathrm{in}}\|^{N+1} = TN\|F_2\| \Bigl(\frac{1}{r_+}\Bigr)^{N+1} \le \frac{\delta}{2}$. (4.150)
Thus, (4.67) holds since $\delta \le g/2$.

Now we discuss the choice of $h$. On the one hand, $h$ must satisfy (4.65) to satisfy the conditions of Lemma 4.3 and Lemma 4.4.
On the other hand, Lemma 4.3 gives the upper bound
$\|\hat{y}_1(T) - y_1^m\| \le 3N^{2.5}Th\bigl[(\|F_2\| + \|F_1\| + \|F_0\|)^2 + \|F_0'\|\bigr] \le \frac{\delta}{4} \le \frac{g\epsilon}{2(1+\epsilon)} = \frac{\delta}{2}$. (4.151)
This also ensures that (4.131) holds since $\delta \le g/2$. Thus, we choose
$h \le \min\Bigl\{ \frac{\delta}{12N^{2.5}T[(\|F_2\| + \|F_1\| + \|F_0\|)^2 + \|F_0'\|]},\ \frac{1}{N\|F_1\|},\ \frac{2(|\mathrm{Re}(\lambda_1)| - \|F_2\| - \|F_0\|)}{N(|\mathrm{Re}(\lambda_1)|^2 - (\|F_2\| + \|F_0\|)^2 + \|F_1\|^2)} \Bigr\}$ (4.152)
to satisfy (4.65) and (4.151).

Footnote 3: In fact, one can show that any $\gamma \in \bigl(\frac{1}{r_+}, \frac{1}{\|u_{\mathrm{in}}\|}\bigr)$ suffices to satisfy (4.3) and (4.4).

Combining (4.150) with (4.151), we have
$\|u(T) - y_1^m\| \le \|u(T) - \hat{y}_1(T)\| + \|\hat{y}_1(T) - y_1^m\| \le \delta$. (4.153)
Thus, (4.138) holds since $\delta \le g/2$. Using
$\Bigl\| \frac{u(T)}{\|u(T)\|} - \frac{y_1^m}{\|y_1^m\|} \Bigr\| \le \frac{\|u(T) - y_1^m\|}{\min\{\|u(T)\|, \|y_1^m\|\}} \le \frac{\|u(T) - y_1^m\|}{g - \|u(T) - y_1^m\|}$ (4.154)
and (4.153), we obtain
$\Bigl\| \frac{u(T)}{\|u(T)\|} - \frac{y_1^m}{\|y_1^m\|} \Bigr\| \le \frac{\delta}{g - \delta} = \epsilon$, (4.155)
i.e., the normalized output state is $\epsilon$-close to $\frac{u(T)}{\|u(T)\|}$.

We follow the procedure in Lemma 4.5 to prepare the initial state $|B\rangle$. We apply the QLSA [46] to the linear system (4.12) with $m = p = \lceil T/h \rceil$, giving a solution $|Y\rangle$. We then perform a measurement to obtain a normalized state $|y_j^k\rangle$ for some $k \in [m+p+1]_0$ and $j \in [N]$. By Lemma 4.6, the probability of obtaining a state $|y_1^k\rangle$ for some $k \in [m+p+1]_0 \setminus [m+1]_0$, giving the normalized vector $y_1^m/\|y_1^m\|$, is
$P_{\mathrm{measure}} \ge \frac{p+1}{9(m+p+1)Nq^2} \ge \frac{1}{18Nq^2}$. (4.156)
By amplitude amplification, we can achieve success probability $\Omega(1)$ with $O(\sqrt{N} q)$ repetitions of the above procedure.

Analysis of the complexity. By Lemma 4.5, the right-hand side $|B\rangle$ in (4.12) can be prepared with $O(N)$ queries to $O_x$ and $O(m)$ queries to $O_{F_0}$, with gate complexity larger by a $\mathrm{poly}(\log N, \log n)$ factor. The matrix $L$ in (4.12) is an $(m+p+1)\Delta \times (m+p+1)\Delta$ matrix with $O(Ns)$ nonzero entries in any row or column. By Lemma 4.4 and our choice of parameters, the condition number of $L$ is at most
$3(m+p+1) = O\Bigl( \frac{N^{2.5}T^2[(\|F_2\| + \|F_1\| + \|F_0\|)^2 + \|F_0'\|]}{\delta} + NT\|F_1\| + \frac{NT[|\mathrm{Re}(\lambda_1)|^2 - (\|F_2\| + \|F_0\|)^2 + \|F_1\|^2]}{2(|\mathrm{Re}(\lambda_1)| - \|F_2\| - \|F_0\|)} \Bigr)$
$= O\Bigl( \frac{N^{2.5}T^2[(\|F_2\| + \|F_1\| + \|F_0\|)^2 + \|F_0'\|]}{\delta} + \frac{NT\|F_1\|^2}{(1 - \|u_{\mathrm{in}}\|)^2 (\|F_2\| + \|F_0\|)} \Bigr) = O\Bigl( \frac{N^{2.5}T^2[(\|F_2\| + \|F_1\| + \|F_0\|)^2 + \|F_0'\|]}{(1 - \|u_{\mathrm{in}}\|)^2 (\|F_2\| + \|F_0\|)\, \delta} \Bigr)$. (4.157)
Here we use $\|F_2\| + \|F_0\| < |\mathrm{Re}(\lambda_1)|$ and
$2(|\mathrm{Re}(\lambda_1)| - \|F_2\| - \|F_0\|) > \Bigl( \|u_{\mathrm{in}}\| + \frac{1}{\|u_{\mathrm{in}}\|} - 2 \Bigr)(\|F_2\| + \|F_0\|) = \frac{(1 - \|u_{\mathrm{in}}\|)^2}{\|u_{\mathrm{in}}\|} (\|F_2\| + \|F_0\|) > (1 - \|u_{\mathrm{in}}\|)^2 (\|F_2\| + \|F_0\|)$. (4.158)
The first inequality above is from the sum of $|\mathrm{Re}(\lambda_1)| > \|F_2\|\, \|u_{\mathrm{in}}\| + \|F_0\|/\|u_{\mathrm{in}}\|$ and $|\mathrm{Re}(\lambda_1)| = \|F_2\|\, r_+ + \|F_0\|/r_+$, where $r_+ = 1/\|u_{\mathrm{in}}\|$.

Consequently, by Theorem 5 of [46], the QLSA produces the state $|Y\rangle$ with
$O\Bigl( \frac{N^{3.5} s T^2 [(\|F_2\| + \|F_1\| + \|F_0\|)^2 + \|F_0'\|]}{(1 - \|u_{\mathrm{in}}\|)^2 (\|F_2\| + \|F_0\|)\, \delta} \,\mathrm{poly}\log \frac{N s T \|F_2\| \|F_1\| \|F_0\| \|F_0'\|}{(1 - \|u_{\mathrm{in}}\|)\delta} \Bigr) = O\Bigl( \frac{s T^2 [(\|F_2\| + \|F_1\| + \|F_0\|)^2 + \|F_0'\|]}{(1 - \|u_{\mathrm{in}}\|)^2 (\|F_2\| + \|F_0\|)\, \delta} \,\mathrm{poly}\Bigl(\log \frac{s T \|F_2\| \|F_1\| \|F_0\| \|F_0'\|}{(1 - \|u_{\mathrm{in}}\|)\delta} \Big/ \log(1/\|u_{\mathrm{in}}\|)\Bigr) \Bigr)$ (4.159)
queries to the oracles $O_{F_2}$, $O_{F_1}$, $O_{F_0}$, and $O_x$. Using $O(\sqrt{N} q)$ steps of amplitude amplification to achieve success probability $\Omega(1)$, the overall query complexity of our algorithm is
$O\Bigl( \frac{N^4 s T^2 q [(\|F_2\| + \|F_1\| + \|F_0\|)^2 + \|F_0'\|]}{(1 - \|u_{\mathrm{in}}\|)^2 (\|F_2\| + \|F_0\|)\, \delta} \,\mathrm{poly}\log \frac{N s T \|F_2\| \|F_1\| \|F_0\| \|F_0'\|}{(1 - \|u_{\mathrm{in}}\|)\delta} \Bigr) = O\Bigl( \frac{s T^2 q [(\|F_2\| + \|F_1\| + \|F_0\|)^2 + \|F_0'\|]}{(1 - \|u_{\mathrm{in}}\|)^2 (\|F_2\| + \|F_0\|)\, \delta} \,\mathrm{poly}\Bigl(\log \frac{s T \|F_2\| \|F_1\| \|F_0\| \|F_0'\|}{(1 - \|u_{\mathrm{in}}\|)\delta} \Big/ \log(1/\|u_{\mathrm{in}}\|)\Bigr) \Bigr)$ (4.160)
and the gate complexity exceeds this by a factor of $\mathrm{poly}\bigl(\log\bigl( n s T \|F_2\| \|F_1\| \|F_0\| \|F_0'\| / ((1 - \|u_{\mathrm{in}}\|)\delta) \bigr) / \log(1/\|u_{\mathrm{in}}\|)\bigr)$.

If the eigenvalues $\lambda_j$ of $F_1$ are all real, by (4.66), the condition number of $L$ is at most
$3(m+p+1) = O\Bigl( \frac{N^{2.5}T^2[(\|F_2\| + \|F_1\| + \|F_0\|)^2 + \|F_0'\|]}{\delta} + NT\|F_1\| \Bigr) = O\Bigl( \frac{N^{2.5}T^2[(\|F_2\| + \|F_1\| + \|F_0\|)^2 + \|F_0'\|]}{\delta} \Bigr)$. (4.161)
Similarly, the QLSA produces the state $|Y\rangle$ with
$O\Bigl( \frac{s T^2 [(\|F_2\| + \|F_1\| + \|F_0\|)^2 + \|F_0'\|]}{\delta} \,\mathrm{poly}\Bigl(\log \frac{s T \|F_2\| \|F_1\| \|F_0\| \|F_0'\|}{\delta} \Big/ \log(1/\|u_{\mathrm{in}}\|)\Bigr) \Bigr)$ (4.162)
queries to the oracles $O_{F_2}$, $O_{F_1}$, $O_{F_0}$, and $O_x$. Using amplitude amplification to achieve success probability $\Omega(1)$, the overall query complexity of the algorithm is
$O\Bigl( \frac{s T^2 q [(\|F_2\| + \|F_1\| + \|F_0\|)^2 + \|F_0'\|]}{\delta} \,\mathrm{poly}\Bigl(\log \frac{s T \|F_2\| \|F_1\| \|F_0\| \|F_0'\|}{\delta} \Big/ \log(1/\|u_{\mathrm{in}}\|)\Bigr) \Bigr)$
(4.163)
and the gate complexity is larger by $\mathrm{poly}\bigl(\log\bigl( n s T \|F_2\| \|F_1\| \|F_0\| \|F_0'\| / \delta \bigr) / \log(1/\|u_{\mathrm{in}}\|)\bigr)$, as claimed.

4.5 Lower bound

In this section, we establish a limitation on the ability of quantum computers to solve the quadratic ODE problem when the nonlinearity is sufficiently strong. We quantify the strength of the nonlinearity in terms of the quantity $R$ defined in (4.2). Whereas there is an efficient quantum algorithm for $R < 1$ (as shown in Theorem 4.1), we show here that the problem is intractable for $R \ge \sqrt{2}$.

Theorem 4.2. Assume $R \ge \sqrt{2}$. Then there is an instance of the quantum quadratic ODE problem defined in Problem 4.1 such that any quantum algorithm for producing a quantum state approximating $u(T)/\|u(T)\|$ with bounded error must have worst-case time complexity exponential in $T$.

We establish this result by showing how the nonlinear dynamics can be used to distinguish nonorthogonal quantum states, a task that requires many copies of the given state. Note that since our algorithm only approximates the quantum state corresponding to the solution, we must lower bound the query complexity of approximating the solution of a quadratic ODE.

4.5.1 Hardness of state discrimination

Previous work on the computational power of nonlinear quantum mechanics shows that the ability to distinguish non-orthogonal states can be applied to solve unstructured search (and other hard computational problems) [78, 79, 80]. Here we show a similar limitation using an information-theoretic argument.

Lemma 4.7. Let $|\psi\rangle$, $|\phi\rangle$ be states of a qubit with $|\langle\psi|\phi\rangle| = 1 - \delta$. Suppose we are either given a black box that prepares $|\psi\rangle$ or a black box that prepares $|\phi\rangle$. Then any bounded-error protocol for determining whether the state is $|\psi\rangle$ or $|\phi\rangle$ must use $\Omega(1/\delta)$ queries.

Proof. Using the black box $k$ times, we prepare states with overlap $(1-\delta)^k$. By the well-known relationship between fidelity and trace distance, these states have trace distance at most $\sqrt{1 - (1-\delta)^{2k}} \le \sqrt{2k\delta}$.
Therefore, by the Helstrom bound (which states that the advantage over random guessing for the best measurement to distinguish two quantum states is given by their trace distance [121]), we need $k = \Omega(1/\delta)$ to distinguish the states with bounded error.

4.5.2 State discrimination with nonlinear dynamics

Lemma 4.7 can be used to establish limitations on the ability of quantum computers to simulate nonlinear dynamics, since nonlinear dynamics can be used to distinguish nonorthogonal states. Whereas previous work considers models of nonlinear quantum dynamics (such as the Weinberg model [79, 80] and the Gross-Pitaevskii equation [78]), here we aim to show the difficulty of efficiently simulating more general nonlinear ODEs, in particular quadratic ODEs with dissipation, using quantum algorithms.

Lemma 4.8. There exists an instance of the quantum quadratic ODE problem as defined in Problem 4.1 with $R \ge \sqrt{2}$, and two states of a qubit with overlap $1 - \delta$ (for $0 < \delta < 1 - 3/\sqrt{10}$) as possible initial conditions, such that the two final states after evolution time $T = O(\log(1/\delta))$ have an overlap no larger than $3/\sqrt{10}$.

Proof. Consider a 2-dimensional system of the form
$\frac{du_1}{dt} = -u_1 + r u_1^2$, $\quad \frac{du_2}{dt} = -u_2 + r u_2^2$, (4.164)
for some $r > 0$, with an initial condition $u(0) = [u_1(0); u_2(0)] = u_{\mathrm{in}}$ satisfying $\|u_{\mathrm{in}}\| = 1$. According to the definition of $R$ in (4.2), we have $R = r$, so henceforth we write this parameter as $R$. The analytic solution of (4.164) is
$u_1(t) = \frac{1}{R - e^t(R - 1/u_1(0))}$, $\quad u_2(t) = \frac{1}{R - e^t(R - 1/u_2(0))}$. (4.165)
When $u_2(0) > 1/R$, $u_2(t)$ is finite within the domain
$0 \le t < t^* := \log\Bigl( \frac{R}{R - 1/u_2(0)} \Bigr)$; (4.166)
when $u_2(0) = 1/R$, we have $u_2(t) = 1/R$ for all $t$; and when $u_2(0) < 1/R$, $u_2(t)$ goes to $0$ as $t \to \infty$. The behavior of $u_1(t)$ depends similarly on $u_1(0)$. Without loss of generality, we assume $u_1(0) \le u_2(0)$. For $u_2(0) \ge u_1(0) > 1/R$, both $u_1(t)$ and $u_2(t)$ are finite within the domain (4.166), which we consider as the domain of $u(t)$.
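The closed-form solution (4.165) and the blow-up time (4.166) are easy to check numerically. The sketch below is not part of the original analysis; it uses NumPy, fixes $R = \sqrt{2}$, and picks illustrative initial values to compare the analytic formula against direct forward Euler integration in the decaying regime, and to confirm the divergence as $t$ approaches $t^*$ in the blow-up regime:

```python
import numpy as np

R = np.sqrt(2.0)

def u_exact(t, u0):
    # closed-form solution (4.165) of du/dt = -u + R u^2
    return 1.0 / (R - np.exp(t) * (R - 1.0 / u0))

def u_euler(t, u0, steps=200000):
    # direct forward Euler integration for comparison
    h = t / steps
    u = u0
    for _ in range(steps):
        u += h * (-u + R * u * u)
    return u

# Below the fixed point 1/R the solution decays toward 0 ...
assert abs(u_exact(3.0, 0.5) - u_euler(3.0, 0.5)) < 1e-3
# ... while above 1/R it blows up at t* = log(R / (R - 1/u0)), eq. (4.166).
u0 = 0.9                                     # 0.9 > 1/R ≈ 0.707
t_star = np.log(R / (R - 1.0 / u0))
assert u_exact(0.99 * t_star, u0) > 10 * u0  # already large near blow-up
```

The same computation with $u_2(0) = 1/R$ leaves the solution constant, matching the case analysis in the proof.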
Now we consider 1-qubit states that provide inputs to (4.164). Given a sufficiently small $\delta > 0$, we first define $\theta \in (0, \pi/4)$ by
$2 \sin^2\frac{\theta}{2} = \delta$. (4.167)
We then construct two 1-qubit states with overlap $1 - \delta$, namely
$|\psi(0)\rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$ (4.168)
and
$|\phi(0)\rangle = \cos\Bigl(\theta + \frac{\pi}{4}\Bigr)|0\rangle + \sin\Bigl(\theta + \frac{\pi}{4}\Bigr)|1\rangle$. (4.169)
Then the overlap between the two initial states is
$\langle\psi(0)|\phi(0)\rangle = \cos\theta = 1 - \delta$. (4.170)
The initial overlap (4.170) is larger than the target overlap $3/\sqrt{10}$ in Lemma 4.8 provided $\delta < 1 - 3/\sqrt{10}$. For simplicity, we denote
$v_0 := \cos\Bigl(\theta + \frac{\pi}{4}\Bigr)$, $\quad w_0 := \sin\Bigl(\theta + \frac{\pi}{4}\Bigr)$, (4.171)
and let $v(t)$ and $w(t)$ denote solutions of (4.164) with initial conditions $v(0) = v_0$ and $w(0) = w_0$, respectively. Since $w_0 > 1/R$, we see that $w(t)$ increases with $t$, satisfying
$\frac{1}{R} \le \frac{1}{\sqrt{2}} < w_0 < w(t)$, (4.172)
and
$v(t) < w(t)$ (4.173)
for any time $0 < t < t^*$, whatever the behavior of $v(t)$.

We now study the outputs of our problem. For the state $|\psi(0)\rangle$, the initial condition for (4.164) is $[1/\sqrt{2}; 1/\sqrt{2}]$. Thus, the output for any $t \ge 0$ is
$|\psi(t)\rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$. (4.174)
For the state $|\phi(0)\rangle$, the initial condition for (4.164) is $[v_0; w_0]$. We now discuss how to select a terminal time $T$ to give a useful output state $|\phi(T)\rangle$. For simplicity, we denote the ratio of $w(t)$ and $v(t)$ by
$K(t) := \frac{w(t)}{v(t)}$. (4.175)
Noticing that $w(t)$ goes to infinity as $t$ approaches $t^*$, while $v(t)$ remains finite within (4.166), there exists a terminal time $T$ such that
$K(T) \ge 2$. (4.176)
The normalized output state at this time $T$ is
$|\phi(T)\rangle = \frac{1}{\sqrt{K(T)^2 + 1}}\bigl(|0\rangle + K(T)|1\rangle\bigr)$. (4.177)
Combining (4.174) with (4.177), the overlap of $|\psi(T)\rangle$ and $|\phi(T)\rangle$ is
$\langle\psi(T)|\phi(T)\rangle = \frac{K(T) + 1}{\sqrt{2K(T)^2 + 2}} \le \frac{3}{\sqrt{10}}$ (4.178)
using (4.176).

Footnote 4: More concretely, we take $v_{\max} = \max\{v_0, v(t^*)\}$, which upper bounds $v(t)$ on the domain $[0, t^*)$, in which $v(t^*)$ is a finite value since $v_0 < w_0$. Then there exists a terminal time $T$ such that $w(T) = 2v_{\max}$, and hence $K(T) = w(T)/v(T) \ge 2$.
Finally, we estimate the evolution time $T$, which is implicitly defined by (4.176). We can upper bound its value by $t^*$. According to (4.166), we have
$T < t^* = \log\Bigl( \frac{R}{R - 1/w_0} \Bigr) \le \log\Bigl( \frac{\sqrt{2}}{\sqrt{2} - 1/w_0} \Bigr)$ (4.179)
since the function $\log(x/(x - c))$ decreases monotonically with $x$ for $x > c > 0$. Using (4.170) to rewrite this expression in terms of $\delta$, we have
$T < t^* \le \log\Bigl( \frac{\sqrt{2}}{\sqrt{2} - 1/\sin(\theta + \frac{\pi}{4})} \Bigr) = \log\Bigl( 1 + \frac{1}{\sqrt{2\delta - \delta^2} - \delta} \Bigr)$, (4.180)
which scales like $\frac{1}{2}\log(1/2\delta)$ as $\delta \to 0$. Therefore $T = O(\log(1/\delta))$ as claimed.

4.5.3 Proof of Theorem 4.2

We now establish our main lower bound result.

Proof. As introduced in the proof of Lemma 4.8, consider the quadratic ODE (4.164); the two initial states of a qubit $|\psi(0)\rangle$ and $|\phi(0)\rangle$ defined in (4.168) and (4.169), respectively; and the terminal time $T$ defined in (4.176).

Suppose we have a quantum algorithm that, given a black box to prepare a state that is either $|\psi(0)\rangle$ or $|\phi(0)\rangle$, can produce quantum states $|\tilde\psi(T)\rangle$ or $|\tilde\phi(T)\rangle$ that are within distance $\epsilon$ of $|\psi(T)\rangle$ and $|\phi(T)\rangle$, respectively. Since by Lemma 4.8, $|\psi(T)\rangle$ and $|\phi(T)\rangle$ have constant overlap, the overlap between $|\tilde\psi(T)\rangle$ and $|\tilde\phi(T)\rangle$ is also constant for sufficiently small $\epsilon$. More precisely, we have
$\langle\psi(T)|\phi(T)\rangle \le \frac{3}{\sqrt{10}}$ (4.181)
by (4.178), which implies
$\bigl\| |\psi(T)\rangle - |\phi(T)\rangle \bigr\| \ge \sqrt{2\Bigl(1 - \frac{3}{\sqrt{10}}\Bigr)} > 0.32$. (4.182)
We also have
$\bigl\| |\psi(T)\rangle - |\tilde\psi(T)\rangle \bigr\| \le \epsilon$, (4.183)
and similarly for $\phi(T)$. These three inequalities give us
$\bigl\| |\tilde\psi(T)\rangle - |\tilde\phi(T)\rangle \bigr\| = \bigl\| (|\psi(T)\rangle - |\phi(T)\rangle) - (|\psi(T)\rangle - |\tilde\psi(T)\rangle) - (|\tilde\phi(T)\rangle - |\phi(T)\rangle) \bigr\| \ge \bigl\| |\psi(T)\rangle - |\phi(T)\rangle \bigr\| - \bigl\| |\psi(T)\rangle - |\tilde\psi(T)\rangle \bigr\| - \bigl\| |\tilde\phi(T)\rangle - |\phi(T)\rangle \bigr\| > 0.32 - 2\epsilon$, (4.184)
which is at least a constant for (say) $\epsilon < 0.15$. Lemma 4.7 therefore shows that preparing the states $|\tilde\psi(T)\rangle$ and $|\tilde\phi(T)\rangle$ requires time $\Omega(1/\delta)$, as these states can be used to distinguish the two possibilities with bounded error. By Lemma 4.8, this time is $2^{\Omega(T)}$.
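The mechanism behind this proof can be traced numerically using the closed-form solution. The sketch below is an illustration rather than part of the original argument; it assumes $\delta = 0.01$ and $R = \sqrt{2}$, evolves the two initial conditions (4.168) and (4.169), scans for a time with $K(T) \ge 2$, and checks that the final overlap (4.178) has dropped to at most $3/\sqrt{10}$:

```python
import numpy as np

R = np.sqrt(2.0)

def sol(t, u0):
    # closed-form solution (4.165) of du/dt = -u + R u^2
    return 1.0 / (R - np.exp(t) * (R - 1.0 / u0))

delta = 0.01                               # assumed small overlap gap
theta = np.arccos(1.0 - delta)             # initial overlap cos(theta) = 1 - delta
v0, w0 = np.cos(theta + np.pi / 4), np.sin(theta + np.pi / 4)
assert abs((v0 + w0) / np.sqrt(2) - (1 - delta)) < 1e-12   # overlap (4.170)

# w blows up at t* (4.166) while v stays finite, so K(t) = w/v grows;
# scan for a terminal time with K(T) >= 2 as in (4.176).
t_star = np.log(R / (R - 1.0 / w0))
T = next(t for t in np.linspace(0.0, 0.999 * t_star, 10000)
         if sol(t, w0) / sol(t, v0) >= 2.0)
K = sol(T, w0) / sol(T, v0)
overlap = (K + 1.0) / np.sqrt(2.0 * (K**2 + 1.0))          # final overlap (4.178)
assert overlap <= 3.0 / np.sqrt(10.0) + 1e-12
```

With these values, $T$ comes out well below $t^* = O(\log(1/\delta))$, consistent with (4.180).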
This shows that we need at least exponential simulation time to approximate the solution of arbitrary quadratic ODEs to within sufficiently small bounded error when $R \ge \sqrt{2}$.

Note that exponential time is achievable since our QCL algorithm can solve the problem by taking $N$ to be exponential in $T$, where $N$ is the truncation level of Carleman linearization. (The algorithm of Leyton and Osborne also solves quadratic differential equations with complexity exponential in $T$, but requires the additional assumptions that the quadratic polynomial is measure-preserving and Lipschitz continuous [74].)

4.6 Applications

Due to the assumptions of our analysis, our quantum Carleman linearization algorithm can only be applied to problems with certain properties. First, there are two requirements to guarantee convergence of the inhomogeneous Carleman linearization: the system must have linear dissipation, manifested by $\mathrm{Re}(\lambda_1) < 0$; and the dissipation must be sufficiently stronger than both the nonlinear and the forcing terms, so that $R < 1$. Dissipation typically leads to an exponentially decaying solution, but for the dependence on $g$ and $q$ in (4.160) to allow an efficient algorithm, the solution cannot exponentially approach zero. However, this issue does not arise if the forcing term $F_0$ resists the exponential decay towards zero, instead causing the solution to decay towards some non-zero (possibly time-dependent) state. The norm of the state that is exponentially approached can itself decay towards zero, but this decay must happen more slowly than exponentially for the algorithm to be efficient.

We now investigate possible applications that satisfy these conditions. First we present an application governed by ordinary differential equations, and then we present

Footnote 5: Also note that the QCL algorithm might provide an advantage over classical computation for homogeneous equations in cases where only evolution for a short time is of interest.
possible applications in partial differential equations.

Several physical systems can be represented in terms of quadratic ODEs. Examples include models of interacting populations of predators and prey [122], dynamics of chemical reactions [123, 124], and the spread of an epidemic [125]. We now give an example of the latter, based on the epidemiological model used in [126] to describe the early spread of the COVID-19 virus.

The so-called SEIR model divides a population of $P$ individuals into four components: susceptible ($P_S$), exposed ($P_E$), infected ($P_I$), and recovered ($P_R$). We denote the rate of transmission from an infected to a susceptible person by $r_{\mathrm{tra}}$, the typical time until an exposed person becomes infectious by the latent time $T_{\mathrm{lat}}$, and the typical time an infectious person can infect others by the infectious time $T_{\mathrm{inf}}$. Furthermore we assume that there is a flux $\Lambda$ of individuals constantly refreshing the population. This flux corresponds to susceptible individuals moving into, and individuals of all components moving out of, the population, in such a way that the total population remains constant. To ensure that there is sufficiently strong linear decay to guarantee Carleman convergence, we also add a vaccination term to the $P_S$ component. We choose an approach similar to that of [127], but denote the vaccination rate, which is approximately equal to the fraction of susceptible individuals vaccinated each day, by $r_{\mathrm{vac}}$. The model is then
$\frac{dP_S}{dt} = -\frac{\Lambda}{P} P_S - r_{\mathrm{vac}} P_S + \Lambda - \frac{r_{\mathrm{tra}}}{P} P_S P_I$, (4.185)
$\frac{dP_E}{dt} = -\frac{\Lambda}{P} P_E - \frac{P_E}{T_{\mathrm{lat}}} + \frac{r_{\mathrm{tra}}}{P} P_S P_I$, (4.186)
$\frac{dP_I}{dt} = -\frac{\Lambda}{P} P_I + \frac{P_E}{T_{\mathrm{lat}}} - \frac{P_I}{T_{\mathrm{inf}}}$, (4.187)
$\frac{dP_R}{dt} = -\frac{\Lambda}{P} P_R + r_{\mathrm{vac}} P_S + \frac{P_I}{T_{\mathrm{inf}}}$. (4.188)
The sum of equations (4.185)-(4.188) shows that $P = P_S + P_E + P_I + P_R$ is a constant. Hence we do not need to include the equation for $P_R$ in our analysis, which is crucial since the $P_R$ component would have introduced positive eigenvalues. The matrices corresponding to (4.1) are then
$F_0 = \begin{bmatrix} \Lambda \\ 0 \\ 0 \end{bmatrix}$, $\quad F_1 = \begin{bmatrix} -\frac{\Lambda}{P} - r_{\mathrm{vac}} & 0 & 0 \\ 0 & -\frac{\Lambda}{P} - \frac{1}{T_{\mathrm{lat}}} & 0 \\ 0 & \frac{1}{T_{\mathrm{lat}}} & -\frac{\Lambda}{P} - \frac{1}{T_{\mathrm{inf}}} \end{bmatrix}$, (4.189)
$F_2 = \begin{bmatrix} 0 & 0 & -\frac{r_{\mathrm{tra}}}{P} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \frac{r_{\mathrm{tra}}}{P} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$. (4.190)
Since $F_1$ is a triangular matrix, its eigenvalues are located on its diagonal, so $\mathrm{Re}(\lambda_1) = -\Lambda/P - \min\{r_{\mathrm{vac}}, 1/T_{\mathrm{lat}}, 1/T_{\mathrm{inf}}\}$. Furthermore we can bound $P/\sqrt{3} \le \|u_{\mathrm{in}}\| \le P$, $\|F_0\| = \Lambda$, and $\|F_2\| = \sqrt{2}\, r_{\mathrm{tra}}/P$, so
$R \le \frac{\sqrt{2}\, r_{\mathrm{tra}} + \sqrt{3}\, \Lambda/P}{\min\{r_{\mathrm{vac}}, 1/T_{\mathrm{lat}}, 1/T_{\mathrm{inf}}\} + \Lambda/P}$. (4.191)
We see that the condition for guaranteed convergence of Carleman linearization is $\sqrt{2}\, r_{\mathrm{tra}} < \min\{r_{\mathrm{vac}}, 1/T_{\mathrm{lat}}, 1/T_{\mathrm{inf}}\} - (\sqrt{3} - 1)\Lambda/P$. Essentially, the Carleman method only converges if the (nonlinear) transmission is sufficiently slower than the (linear) decay.

To assess how restrictive this assumption is, we consider the SEIR parameters used in [126]. Note that they also included separate components for asymptomatic and hospitalized persons, but to simplify the analysis we include both of these components in the $P_I$ component. In their work, they considered a city with approximately $P = 10^7$ inhabitants. In a specific period, they estimated a travel flux (see Footnote 6) of $\Lambda \approx 0$ individuals per day, latent time $T_{\mathrm{lat}} = 5.2$ days, infectious time $T_{\mathrm{inf}} = 2.3$ days, and transmission rate $r_{\mathrm{tra}} \approx 0.13$ days$^{-1}$. We let the initial condition be dominated by the susceptible component so that $\|u_{\mathrm{in}}\| \approx P$, and we assume (see Footnote 7) that $r_{\mathrm{vac}} > 1/T_{\mathrm{lat}} \approx 0.19$ days$^{-1}$. With the stated parameters, a direct calculation gives $R = 0.956$, showing that the assumptions of our algorithm can correspond to some real-world problems that are only moderately nonlinear.

While the example discussed above has only a constant number of variables, this example can be generalized to a high-dimensional system of ODEs that models the early

Footnote 6: Since we require that $u(t)$ does not approach $0$ exponentially, we can assume that the travel flux is some non-zero, but small, value, e.g., $\Lambda = 1$ individual per day.
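The bound on $R$ is a one-line computation. The following sketch reproduces the quoted value $R = 0.956$ with the parameters above; the small flux $\Lambda = 1$ per day and the vaccination rate $r_{\mathrm{vac}} = 0.20$ days$^{-1}$ are assumed values consistent with the stated constraints:

```python
import numpy as np

# SEIR parameters quoted in the text (from [126]); Lam and r_vac are
# assumed values consistent with the stated constraints.
P = 1e7                      # city population
T_lat, T_inf = 5.2, 2.3      # latent and infectious times (days)
r_tra = 0.13                 # transmission rate (1/days)
Lam = 1.0                    # small nonzero travel flux (individuals/day)
r_vac = 0.20                 # vaccination rate, > 1/T_lat ≈ 0.19

# Bound (4.191) on the Carleman convergence parameter
R = ((np.sqrt(2) * r_tra + np.sqrt(3) * Lam / P)
     / (min(r_vac, 1 / T_lat, 1 / T_inf) + Lam / P))
print(round(R, 3))  # prints 0.956
assert R < 1
```

Here the binding term in the minimum is $1/T_{\mathrm{lat}}$, so the result is insensitive to the exact (sufficiently large) choice of $r_{\mathrm{vac}}$.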
Footnote 7: This example arguably corresponds to quite rapid vaccination, and is chosen here such that $R$ remains smaller than one, as required to formally guarantee convergence of the Carleman method. However, as shown in the upcoming example of the Burgers equation, larger values of $R$ might still allow for convergence in practice, suggesting that our algorithm might handle lower values of the vaccination rate.

spread over a large number of cities with interaction, similar to what is done in [128] and [129].

Other examples of high-dimensional ODEs arise from the discretization of certain PDEs. Consider, for example, equations for $u(r, t)$ of the type
$\partial_t u + (u \cdot \nabla)u + \mu u = \nu \nabla^2 u + f$, (4.192)
with the forcing term $f$ being a function of both space and time. This equation can be cast in the form of (4.1) by using standard discretizations of space and time. Equations of the form (4.192) can represent Navier-Stokes-type equations, which are ubiquitous in fluid mechanics [118], and related models such as those studied in [130, 131, 132] to describe the formation of large-scale structure in the universe. Similar equations also appear in models of magnetohydrodynamics (e.g., [133]), or the motion of free particles that stick to each other upon collision [134]. In the inviscid case, $\nu = 0$, the resulting Euler-type equations with linear damping are also of interest, both for modeling micromechanical devices [135] and for their intimate connection with viscous models [136].

As a specific example, consider the one-dimensional forced viscous Burgers equation
$\partial_t u + u\, \partial_x u = \nu\, \partial_x^2 u + f$, (4.193)
which is the one-dimensional case of equation (4.192) with $\mu = 0$. Equation (4.193) is often used as a simple model of convective flow [117]. For concreteness, let the initial condition be $u(x, 0) = U_0 \sin(2\pi x/L_0)$ on the domain $x \in$
$[-L_0/2, L_0/2]$, and use Dirichlet boundary conditions $u(-L_0/2, t) = u(L_0/2, t) = 0$. We force this equation using a localized off-center Gaussian with a sinusoidal time dependence (see Footnote 8), given by
$f(x, t) = U_0^2 \exp\Bigl( -\frac{(x - L_0/4)^2}{2(L_0/32)^2} \Bigr) \cos(2\pi t)$.
To solve this equation using the Carleman method, we discretize the spatial domain into $n_x$ points and use central differences for the derivatives to get
$\partial_t u_i = \nu\, \frac{u_{i+1} - 2u_i + u_{i-1}}{\Delta x^2} - \frac{u_{i+1}^2 - u_{i-1}^2}{4\Delta x} + f_i$ (4.194)
with $\Delta x = L_0/(n_x - 1)$. This equation is of the form (4.1) and can thus generate the Carleman system (4.6). The resulting linear ODE can then be integrated using the forward Euler method, as shown in Figure 4.1. In this example, the viscosity $\nu$ is defined such that the Reynolds number $\mathrm{Re} := U_0 L_0/\nu = 20$, and $n_x = 16$ spatial discretization points were sufficient to resolve the solution. The figure compares the Carleman solution with the solution obtained via direct integration of (4.194) with the forward Euler method (i.e., without Carleman linearization). By inserting the matrices $F_0$, $F_1$, and $F_2$ corresponding to equation (4.194) into the definition of $R$ (4.2), we find that $\mathrm{Re}(\lambda_1)$ is indeed negative as required, given Dirichlet boundary conditions, but the parameters used in this example result in $R \approx 44$.

Figure 4.1: Integration of the forced viscous Burgers equation using Carleman linearization on a classical computer (source code available at https://github.com/hermankolden/CarlemanBurgers). The viscosity is set so that the Reynolds number $\mathrm{Re} = U_0 L_0/\nu = 20$. The parameters $n_x = 16$ and $n_t = 4000$ are the numbers of spatial and temporal discretization intervals, respectively. The corresponding Carleman convergence parameter is $R = 43.59$. Top: initial condition and solution plotted at a third of the nonlinear time $\frac{1}{3}T_{\mathrm{nl}} = \frac{L_0}{3U_0}$. Bottom: $l^2$ norm of the absolute error between the Carleman solutions at various truncation levels $N$ (left), and the convergence of the corresponding time-maximum error (right).
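For reference, the direct (non-Carleman) baseline in Figure 4.1 amounts to a few lines of code. The sketch below is a simplified stand-in for the linked source, not a reproduction of it; it uses the figure's parameters, integrates the semi-discretization (4.194) with forward Euler up to $T_{\mathrm{nl}}/3$, and checks that the solution stays bounded:

```python
import numpy as np

# Direct forward Euler integration of the central-difference
# semi-discretization (4.194) of the forced viscous Burgers equation.
# Parameters follow the figure: Re = U0 L0 / nu = 20, nx = 16; nt = 4000
# steps are used here for the (shorter) illustrative horizon Tnl/3.
U0, L0 = 1.0, 1.0
nu = U0 * L0 / 20.0
nx, nt = 16, 4000
T = L0 / (3.0 * U0)                       # a third of the nonlinear time
dx, dt = L0 / (nx - 1), T / nt
x = np.linspace(-L0 / 2, L0 / 2, nx)

def forcing(t):
    # localized off-center Gaussian with sinusoidal time dependence
    return U0**2 * np.exp(-(x - L0 / 4) ** 2 / (2 * (L0 / 32) ** 2)) * np.cos(2 * np.pi * t)

u = U0 * np.sin(2 * np.pi * x / L0)
u[0] = u[-1] = 0.0                        # Dirichlet boundary conditions
for k in range(nt):
    du = np.zeros_like(u)
    du[1:-1] = (nu * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
                - (u[2:] ** 2 - u[:-2] ** 2) / (4 * dx)
                + forcing(k * dt)[1:-1])
    u = u + dt * du                        # boundaries remain pinned at 0
assert np.max(np.abs(u)) < 10 * U0         # solution stays bounded
```

Substituting the resulting `u` into the Carleman construction (4.6) is what the linked repository does to produce the truncation-error curves in the figure.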
Even though this does not satisfy the requirement $R < 1$ of the QCL algorithm, we see from the absolute error plot in Figure 4.1 that the maximum absolute error over time decreases exponentially as the truncation level $N$ is incremented (in this example, the maximum Carleman truncation level considered is $N = 4$). Surprisingly, this suggests that in this example, the error of the classical Carleman method converges exponentially with $N$, even though $R > 1$.

Footnote 8: Note that this forcing does not satisfy the general conditions for efficient implementation of our algorithm since it is not sparse. However, we expect that the algorithm can still be implemented efficiently for a structured non-sparse forcing term such as in this example.

4.7 Discussion

In this chapter we have presented a quantum Carleman linearization (QCL) algorithm for a class of quadratic nonlinear differential equations. Compared to the previous approach of [74], our algorithm improves the complexity from an exponential dependence on $T$ to a nearly quadratic dependence, under the condition $R < 1$ as defined in (4.2). Qualitatively, this means that the system must be dissipative and that the nonlinear and inhomogeneous effects must be small relative to the linear effects. We have also provided numerical results suggesting the classical Carleman method may work on certain PDEs that do not strictly satisfy the assumption $R < 1$. Furthermore, we established a lower bound showing that for general quadratic differential equations with $R \ge \sqrt{2}$, quantum algorithms must have worst-case complexity exponential in $T$. We also discussed several potential applications arising in biology and fluid and plasma dynamics.

It is natural to ask whether the result of Theorem 4.1 can be achieved with a classical algorithm, i.e., whether the assumption $R < 1$ makes differential equations classically tractable.
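The exponential convergence in $N$ that the figure exhibits is easy to reproduce for a scalar toy problem. The sketch below is an illustration with assumed values $u_0 = 0.5$, $r = 0.8$ (not one of the examples in the text); it builds the truncated Carleman matrix for $du/dt = -u + ru^2$ and compares against the closed-form solution:

```python
import numpy as np

def carleman_solve(u0, r, N, T=1.0, steps=20000):
    # Truncated Carleman system for du/dt = -u + r u^2:
    # the variables y_k = u^k obey dy_k/dt = -k y_k + r k y_{k+1},
    # and truncation sets y_{N+1} to 0.
    A = np.zeros((N, N))
    for k in range(1, N + 1):
        A[k - 1, k - 1] = -k
        if k < N:
            A[k - 1, k] = r * k
    y = np.array([u0**k for k in range(1, N + 1)])
    h = T / steps
    for _ in range(steps):
        y = y + h * (A @ y)    # forward Euler, as in the text
    return y[0]                 # approximation to u(T)

u0, r, T = 0.5, 0.8, 1.0
u_true = 1.0 / (r - np.exp(T) * (r - 1.0 / u0))    # closed-form solution
errs = [abs(carleman_solve(u0, r, N) - u_true) for N in (2, 4, 8)]
assert errs[2] < errs[0] / 10 and errs[2] < 1e-3   # error shrinks rapidly with N
```

For this dissipative scalar instance the truncation error decays roughly geometrically in $N$, mirroring the behavior observed for the Burgers example.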
Clearly a naive integration of the truncated Carleman system (4.6) is not efficient on a classical computer since the system size is $\Theta(n^N)$. But furthermore, it is unlikely that any classical algorithm for this problem can run in time polylogarithmic in $n$. If we consider Problem 4.1 with dissipation that is small compared to the total evolution time, but let the nonlinearity and forcing be even smaller such that $R < 1$, then in the asymptotic limit we have a linear differential equation with no dissipation. Hence any classical algorithm that could solve Problem 4.1 could also solve non-dissipative linear differential equations, which is a BQP-hard problem even when the dynamics are unitary [137]. In other words, an efficient classical algorithm for this problem would imply efficient classical algorithms for any problem that can be solved efficiently by a quantum computer, which is considered unlikely.

Our upper and lower bounds leave a gap in the range $1 \le R < \sqrt{2}$, for which we do not know the complexity of the quantum quadratic ODE problem. We hope that future work will close this gap and determine for which $R$ the problem can be solved efficiently by quantum computers in the worst case. Furthermore, the complexity of our algorithm has nearly quadratic dependence on $T$, namely $T^2\, \mathrm{poly}(\log T)$. It is unknown whether the complexity for quadratic ODEs must be at least linear or quadratic in $T$. Note that sublinear complexity is impossible in general because of the no-fast-forwarding theorem [33]. However, it should be possible to fast-forward the dynamics in special cases, and it would be interesting to understand the extent to which dissipation enables this.

The complexity of our algorithm depends on the parameter $q$ defined in Theorem 4.1, which characterizes the decay of the final solution relative to the initial condition.
This restricts the utility of our result, since we must have a suitable initial condition and terminal time such that the final state is not exponentially smaller than the initial state. However, it is unlikely that such a dependence can be significantly improved, since renormalization of the state can be used to implement postselection, which would imply the unlikely consequence BQP = PP (see Section 8 of [53] for further discussion). As discussed in the introduction, the solution of a homogeneous dissipative equation necessarily decays exponentially in time, so our method is not asymptotically efficient. However, for inhomogeneous equations the final state need not be exponentially smaller than the initial state even in a long-time simulation, suggesting that our algorithm could be especially suitable for models with forcing terms.

It is possible that variations of the Carleman linearization procedure could increase the accuracy of the result. For instance, instead of using just tensor powers of $u$ as auxiliary variables, one could use other nonlinear functions. Several previous papers on Carleman linearization have suggested using multidimensional orthogonal polynomials [73, 138]. They also discuss approximating higher-order terms with lower-order ones in (4.6) instead of simply dropping them, possibly improving accuracy. Such changes would however alter the structure of the resulting linear ODE, which could affect the quantum implementation.

The quantum part of the algorithm might also be improved. In this chapter we limit ourselves to the first-order Euler method to discretize the linearized ODEs in time. This is crucial for the analysis in Lemma 4.3, which states that the global error increases at most linearly with $T$. To establish this result for the Euler method, it suffices to choose the time step (4.65) to ensure $\|I + Ah\| \le 1$, and then estimate the growth of the global error by (4.92).
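The non-expansiveness condition $\|I + Ah\| \le 1$ can be checked directly for a concrete dissipative matrix. The sketch below uses a hypothetical test matrix and a step-size rule of the same flavor as (4.65), not the exact condition from the text, to verify that a small enough $h$ keeps the one-step Euler map non-expansive:

```python
import numpy as np

# A hypothetical dissipative test matrix: a skew-symmetric part (purely
# oscillatory) plus a strong negative diagonal, so Re(lambda) = -2 for
# every eigenvalue.
rng = np.random.default_rng(0)
n = 8
M = rng.standard_normal((n, n))
A = (M - M.T) / 2 - 2.0 * np.eye(n)

def op_norm(B):
    # spectral (operator) norm
    return np.linalg.norm(B, 2)

# A step-size rule of the same flavor as (4.65): h proportional to
# |Re(lambda_1)| / ||A||^2.  For this particular A one can show that
# ||I + hA|| <= 1 holds whenever h <= 2|Re(lambda_1)| / ||A||^2.
h = 0.9 * (2.0 * 2.0) / op_norm(A) ** 2
assert op_norm(np.eye(n) + h * A) <= 1.0   # one-step Euler map is non-expansive
```

Once the one-step map is non-expansive, per-step truncation errors simply add, which is the structure behind the linear-in-$T$ global error bound.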
However, it is unclear how to give a similar bound for higher-order numerical schemes. If this obstacle could be overcome, the error dependence of the complexity might be improved. It is also natural to ask whether our approach can be improved by taking features of particular systems into account. Since the Carleman method has only received limited attention and has generally been used for purposes other than numerical integration, it seems likely that such improvements are possible. In fact, the numerical results discussed in Section 4.6 (see in particular Figure 4.1) suggest that the condition R < 1 is not a strict requirement for the viscous Burgers equation, since we observe convergence even though R ≈ 44. This suggests that some property of equation (4.193) makes it more amenable to Carleman linearization than our current analysis predicts. We leave a detailed investigation of this for future work. A related question is whether our algorithm can efficiently simulate systems exhibiting dynamical chaos. The condition R < 1 might preclude chaos, but we do not have a proof of this. More generally, the presence or absence of chaos might provide a more fine-grained picture of the algorithm's efficiency. When contemplating applications, it should be emphasized that our approach produces a state vector that encodes the solution without specifying how information is to be extracted from that state. Simply producing a state vector is not enough for an end-to-end application since the full quantum state cannot be read out efficiently. In some cases it may be possible to extract useful information by sampling a simple observable, whereas in other cases, more sophisticated postprocessing may be required to infer a desired property of the solution. Our method does not address this issue, but can be considered as a subroutine whose output will be parsed by subsequent quantum algorithms.
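A toy classical emulation of the sampling-based readout mentioned above (the solution vector and observable here are arbitrary stand-ins): measuring the solution state in the computational basis and averaging a diagonal observable over the outcomes estimates its expectation value with statistical error O(1/√samples), without reading out the full state.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for the normalized solution state |x> produced by the solver.
x = rng.standard_normal(64)
x /= np.linalg.norm(x)

# A simple diagonal observable O; its exact expectation is <x|O|x>.
O = np.linspace(-1.0, 1.0, 64)
exact = np.sum(O * x**2)

# Emulate repeated computational-basis measurements: outcome i occurs with
# probability |x_i|^2; average O over the sampled outcomes.
probs = x**2
samples = rng.choice(64, size=200_000, p=probs)
estimate = np.mean(O[samples])  # statistical error shrinks as 1/sqrt(200000)
```

This is why sampling a simple observable can be efficient even though full state readout is not: the cost scales with the target precision, not with the dimension of the state.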
We hope that future work will address this issue and develop end-to-end applications of these methods. Finally, the algorithm presented in this chapter might be extended to solve related mathematical problems on quantum computers. Obvious candidates include initial value problems with time-dependent coefficients and boundary value problems. Carleman methods for such problems are explored in [83], but it is not obvious how to implement those methods in a quantum algorithm. It is also possible that suitable formulations of problems in nonlinear optimization or control could be solvable using related techniques.

Chapter 5: Conclusion and future work

In this dissertation, we developed an understanding of quantum algorithms for differential equations concerning their design, analysis, and applications. In Chapter 2, we presented high-precision quantum algorithms to solve linear, time-dependent, d-dimensional systems of ordinary differential equations. Specifically, we showed how to employ a global approximation based on the spectral method as an alternative to the more straightforward finite difference method. Compared to the previous work [52], our algorithm improves the complexity of solving time-dependent linear differential equations from poly(1/ε) to poly(log(1/ε)). In Chapter 3, we presented high-precision quantum algorithms for d-dimensional partial differential equations. We developed quantum adaptive finite difference methods for Poisson's equation, and generalized the quantum spectral method to general second-order elliptic equations. Whereas previous algorithms scaled as poly(d, 1/ε), our algorithms scale as poly(d, log(1/ε)). In Chapter 4, we presented the first efficient quantum algorithm for a class of nonlinear differential equations, exponentially improving the dependence on the evolution time over previous quantum algorithms.
The key ingredient we adopted and modified is the Carleman linearization, which provides a long-time stable linear approximation to the nonlinear system. We developed quantum Carleman linearization (QCL) for dissipative nonlinear differential equations provided that R < 1. Here the quantity R quantifies the ℓ2 relative strength of the nonlinearity and forcing compared to the linear dissipation. The complexity of this approach is Õ(T^2 q/ε), where T is the evolution time, ε is the allowed error, and q measures the decay of the solution. Moreover, we provided a quantum lower bound showing that simulating general nonlinear dynamics has worst-case time complexity exponential in T given R ≥ √2. Beyond the worst-case complexity analysis, the numerical experiments for the Burgers equation revealed that Carleman linearization could be practicable for certain nonlinear problems even for larger R.

Our work has attracted interest from communities in quantum physics, computer science, applied mathematics, fluid and plasma dynamics, and other fields. Detailed open questions and future directions for each topic can be found in the discussion sections of Chapter 2, Chapter 3, and Chapter 4. In general, it is of great interest to answer the following questions.

From the aspect of theories, can we achieve exponential quantum speedups for more general differential equations with provable guarantees? It is possible that the assumptions on the models can be relaxed. For linear ODEs and PDEs, we wonder if we can develop high-precision quantum algorithms under weaker smoothness assumptions, and if we can relax the ellipticity condition for second-order linear PDEs. For nonlinear differential equations, we wonder whether the weak nonlinearity condition R < 1 can be relaxed. Numerical results in Chapter 4 suggest that R < 1 is not a strict requirement for the viscous Burgers equation, since we observe convergence even though R ≈ 44.
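For concreteness, R can be computed classically for a given quadratic system u' = F2 (u⊗u) + F1 u + F0. The sketch below assumes the form R = (‖u0‖ ‖F2‖ + ‖F0‖/‖u0‖) / |Re λ1| used in [81], with λ1 the rightmost eigenvalue of F1; treat the exact expression, and the helper name, as quoted for illustration rather than derived here.

```python
import numpy as np

def carleman_R(F0, F1, F2, u0):
    """ℓ2 ratio of nonlinearity plus forcing to dissipation (assumed form, cf. [81])."""
    lam1 = np.max(np.linalg.eigvals(F1).real)  # rightmost eigenvalue; dissipation means lam1 < 0
    nu = np.linalg.norm(u0)
    return (nu * np.linalg.norm(F2, 2) + np.linalg.norm(F0) / nu) / abs(lam1)

# Toy strongly dissipative system: the linear decay dominates the quadratic term
# and the forcing, so the resulting R falls inside the regime R < 1.
F1 = np.diag([-2.0, -3.0])
F2 = 0.1 * np.ones((2, 4))
F0 = np.array([0.1, 0.0])
u0 = np.array([1.0, 0.0])
R = carleman_R(F0, F1, F2, u0)
```

Strengthening the dissipation (more negative λ1) or shrinking F2 and F0 drives R down, which is exactly the trade-off the condition R < 1 captures.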
This suggests that some properties of nonlinear systems make them more amenable to Carleman linearization than our current analysis predicts. We also wonder whether the complexity Õ(T^2 q/ε) of quantum algorithms for quadratic ODEs can be improved. It is implausible to significantly reduce the dependence on q, since renormalization of the state can implement postselection, which would imply the unlikely consequence BQP = PP. The error dependence might be improved if the quantum adaptive FDM or the quantum spectral method could be applied to the nonlinear case, although a rigorous bound on the condition number of the resulting quantum linear system would be required. It is unclear whether a linear or even sublinear dependence on T is achievable. While there is a no-fast-forwarding theorem for Hamiltonian simulation, it might be feasible to fast-forward certain initial and boundary value problems of linear and nonlinear differential equations. While previous studies focus on performance in terms of the worst-case error, we may find further improvements when considering the average-case error with random initial inputs of typical differential equations. Finally, it is also likely that the techniques presented in this dissertation could inspire new quantum algorithms for related problems in optimization, control, and machine learning.

From the aspect of applications, can we investigate end-to-end quantum applications for differential equations and related real-world problems? For the preprocessing procedure, we are concerned with how to efficiently prepare the initial states by loading classical data. The ability to prepare such quantum states generally requires the implementation of quantum random access memory (qRAM), which is widely thought to be impractical. But for some particular initial conditions, such as the Gaussian distribution, it is promising to generate the initial states by an efficient routine that avoids qRAM.
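As one plausible such routine, a Grover–Rudolph-style construction needs only the probability-mass splits of an efficiently integrable distribution such as the Gaussian, rather than qRAM access to arbitrary data. The classical sketch below (illustrative, not the dissertation's construction) computes the conditional rotation angles level by level.

```python
import numpy as np

def grover_rudolph_angles(probs):
    """Rotation angles for preparing amplitudes sqrt(probs) over 2^n basis states.

    At each level, a branch's angle is set by how its probability mass splits
    between its two children (Grover-Rudolph scheme); assumes all branch
    masses are positive, which holds for the Gaussian below.
    """
    n = int(np.log2(len(probs)))
    angles = []
    masses = np.asarray(probs, dtype=float)
    for _ in range(n):
        pairs = masses.reshape(-1, 2)
        parents = pairs.sum(axis=1)
        ratio = np.clip(pairs[:, 0] / parents, 0.0, 1.0)
        angles.append(2 * np.arccos(np.sqrt(ratio)))
        masses = parents
    return angles[::-1]  # root-level angle first, finest level last

# Discretized Gaussian on 2^4 grid points.
x = np.linspace(-3, 3, 16)
p = np.exp(-x**2 / 2)
p /= p.sum()
angles = grover_rudolph_angles(p)  # 1 + 2 + 4 + 8 = 15 angles in total
```

The root angle encodes the left/right half split: cos^2(θ/2) equals the probability mass of the first half of the grid.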
For the postprocessing procedure, we are greatly interested in extracting classical observables of practical interest. Typical instances include the scattering cross section from a wave propagation, and the free energy from a physical or biological system. Our algorithms only produce a state vector that encodes the solution, which is not enough for an end-to-end application since the full quantum state cannot be read out efficiently. In some circumstances, it may be possible to extract useful information by sampling a simple observable, whereas in other cases, more sophisticated postprocessing may be required to infer the desired property of the solution. For producing such classical readouts by quantum computers, we can investigate the complexity and determine whether an exponential quantum speedup is possible. In addition, we are interested in the smallest system that could be used to demonstrate proof-of-principle versions of these algorithms. It is of great interest to estimate the implementation cost (e.g., the one- or two-qubit gate counts) for early fault-tolerant quantum computers. We hope that future work will address these issues toward end-to-end applications.

Bibliography

[1] Michael A. Nielsen and Isaac L. Chuang. Quantum computation and quantum information, 2002.
[2] John Watrous. The theory of quantum information. Cambridge University Press, 2018.
[3] Richard P. Feynman. Simulating physics with computers. International Journal of Theoretical Physics, 21(6):467–488, 1982.
[4] Seth Lloyd. Universal quantum simulators. Science, pages 1073–1078, 1996.
[5] Ivan Kassal, Stephen P. Jordan, Peter J. Love, Masoud Mohseni, and Alán Aspuru-Guzik. Polynomial-time quantum algorithm for the simulation of chemical dynamics. Proceedings of the National Academy of Sciences, 105(48):18681–18686, 2008. arXiv:0801.2986.
[6] Ian D. Kivlichan, Nathan Wiebe, Ryan Babbush, and Alán Aspuru-Guzik.
Bounding the costs of quantum simulation of many-body physics in real space. Journal of Physics A: Mathematical and Theoretical, 50(30):305301, 2017.
[7] Borzu Toloui and Peter J. Love. Quantum algorithms for quantum chemistry based on the sparsity of the CI-matrix, 2013. arXiv:1312.2579.
[8] Ryan Babbush, Dominic W. Berry, Yuval R. Sanders, Ian D. Kivlichan, Artur Scherer, Annie Y. Wei, Peter J. Love, and Alán Aspuru-Guzik. Exponentially more precise quantum simulation of fermions in the configuration interaction representation. Quantum Science and Technology, 3(1):015006, 2017.
[9] Ryan Babbush, Dominic W. Berry, Jarrod R. McClean, and Hartmut Neven. Quantum simulation of chemistry with sublinear scaling in basis size. npj Quantum Information, 5(1):1–7, 2019.
[10] Yuan Su, Dominic W. Berry, Nathan Wiebe, Nicholas Rubin, and Ryan Babbush. Fault-tolerant quantum simulations of chemistry in first quantization. PRX Quantum, 2(4):040332, 2021. arXiv:2105.12767.
[11] Alán Aspuru-Guzik, Anthony D. Dutoi, Peter J. Love, and Martin Head-Gordon. Simulated quantum computation of molecular energies. Science, 309(5741):1704–1707, 2005. arXiv:quant-ph/0604193.
[12] James D. Whitfield, Jacob Biamonte, and Alán Aspuru-Guzik. Simulation of electronic structure Hamiltonians using quantum computers. Molecular Physics, 109(5):735–750, 2011. arXiv:1001.3855.
[13] Jacob T. Seeley, Martin J. Richard, and Peter J. Love. The Bravyi-Kitaev transformation for quantum computation of electronic structure. The Journal of Chemical Physics, 137(22):224109, 2012. arXiv:1208.5986.
[14] Dave Wecker, Bela Bauer, Bryan K. Clark, Matthew B. Hastings, and Matthias Troyer. Gate-count estimates for performing quantum chemistry on small quantum computers. Physical Review A, 90(2):022305, 2014. arXiv:1312.1695.
[15] Matthew B. Hastings, Dave Wecker, Bela Bauer, and Matthias Troyer. Improving quantum algorithms for quantum chemistry. Quantum Information & Computation, 15(1-2):1–21, 2015.
arXiv:1403.1539.
[16] David Poulin, Matthew B. Hastings, David Wecker, Nathan Wiebe, Andrew C. Doherty, and Matthias Troyer. The Trotter step size required for accurate quantum simulation of quantum chemistry. Quantum Information & Computation, 15(5-6):361–384, 2015. arXiv:1406.4920.
[17] Jarrod R. McClean, Ryan Babbush, Peter J. Love, and Alán Aspuru-Guzik. Exploiting locality in quantum computation for quantum chemistry. The Journal of Physical Chemistry Letters, 5(24):4368–4380, 2014.
[18] Ryan Babbush, Jarrod McClean, Dave Wecker, Alán Aspuru-Guzik, and Nathan Wiebe. Chemical basis of Trotter-Suzuki errors in quantum chemistry simulation. Physical Review A, 91(2):022311, 2015. arXiv:1410.8159.
[19] Markus Reiher, Nathan Wiebe, Krysta M. Svore, Dave Wecker, and Matthias Troyer. Elucidating reaction mechanisms on quantum computers. Proceedings of the National Academy of Sciences, 114(29):7555–7560, 2017. arXiv:1605.03590.
[20] Ryan Babbush, Dominic W. Berry, Ian D. Kivlichan, Annie Y. Wei, Peter J. Love, and Alán Aspuru-Guzik. Exponentially more precise quantum simulation of fermions in second quantization. New Journal of Physics, 18(3):033032, 2016.
[21] Mario Motta, Erika Ye, Jarrod R. McClean, Zhendong Li, Austin J. Minnich, Ryan Babbush, and Garnet Kin-Lic Chan. Low rank representations for quantum simulation of electronic structure. npj Quantum Information, 7(1):1–7, 2021.
[22] Earl Campbell. Random compiler for fast Hamiltonian simulation. Physical Review Letters, 123(7):070503, 2019. arXiv:1811.08017.
[23] Dominic W. Berry, Craig Gidney, Mario Motta, Jarrod R. McClean, and Ryan Babbush. Qubitization of arbitrary basis quantum chemistry leveraging sparsity and low rank factorization. Quantum, 3:208, 2019. arXiv:1902.02134.
[24] Andrew M. Childs, Yuan Su, Minh C. Tran, Nathan Wiebe, and Shuchen Zhu. Theory of Trotter error with commutator scaling. Physical Review X, 11(1):011020, 2021.
[25] Vera von Burg, Guang Hao Low, Thomas Häner, Damian S.
Steiger, Markus Reiher, Martin Roetteler, and Matthias Troyer. Quantum computing enhanced computational catalysis. Physical Review Research, 3(3):033055, 2021.
[26] Joonho Lee, Dominic Berry, Craig Gidney, William J. Huggins, Jarrod R. McClean, Nathan Wiebe, and Ryan Babbush. Even more efficient quantum computations of chemistry through tensor hypercontraction. PRX Quantum, 2(3):030305, 2021. arXiv:2011.03494.
[27] Yuan Su, Hsin-Yuan Huang, and Earl T. Campbell. Nearly tight Trotterization of interacting electrons. Quantum, 5:495, 2021. arXiv:2012.09194.
[28] Stephen P. Jordan, Keith S. M. Lee, and John Preskill. Quantum algorithms for quantum field theories. Science, 336(6085):1130–1133, 2012.
[29] John Preskill. Simulating quantum field theory with a quantum computer. In The 36th Annual International Symposium on Lattice Field Theory, volume 334, page 024. SISSA Medialab, 2019. arXiv:1811.10085.
[30] Bela Bauer, Sergey Bravyi, Mario Motta, and Garnet Kin-Lic Chan. Quantum algorithms for quantum chemistry and quantum materials science. Chemical Reviews, 120(22):12685–12717, 2020. arXiv:2001.03685.
[31] Yudong Cao, Jonathan Romero, Jonathan P. Olson, Matthias Degroote, Peter D. Johnson, Mária Kieferová, Ian D. Kivlichan, Tim Menke, Borja Peropadre, Nicolas P. D. Sawaya, et al. Quantum chemistry in the age of quantum computing. Chemical Reviews, 119(19):10856–10915, 2019. arXiv:1812.09976.
[32] Ryan Babbush, Nathan Wiebe, Jarrod McClean, James McClain, Hartmut Neven, and Garnet Kin-Lic Chan. Low-depth quantum simulation of materials. Physical Review X, 8(1):011044, 2018.
[33] Dominic W. Berry, Graeme Ahokas, Richard Cleve, and Barry C. Sanders. Efficient quantum algorithms for simulating sparse Hamiltonians. Communications in Mathematical Physics, 270(2):359–371, 2007. arXiv:quant-ph/0508139.
[34] David Poulin, Angie Qarry, Rolando D. Somma, and Frank Verstraete. Quantum simulation of time-dependent Hamiltonians and the convenient illusion of Hilbert space.
Physical Review Letters, 106(17):170501, 2011. arXiv:1102.1360.
[35] Dominic W. Berry, Andrew M. Childs, Richard Cleve, Robin Kothari, and Rolando D. Somma. Exponential improvement in precision for simulating sparse Hamiltonians. Forum of Mathematics, Sigma, 5:e8, 2017. arXiv:1312.1414.
[36] Dominic W. Berry, Andrew M. Childs, Richard Cleve, Robin Kothari, and Rolando D. Somma. Simulating Hamiltonian dynamics with a truncated Taylor series. Physical Review Letters, 114(9):090502, 2015. arXiv:1412.4687.
[37] Dominic W. Berry, Andrew M. Childs, and Robin Kothari. Hamiltonian simulation with nearly optimal dependence on all parameters. In IEEE 56th Annual Symposium on Foundations of Computer Science, pages 792–809. IEEE, 2015. arXiv:1501.01715.
[38] Gilles Brassard, Peter Høyer, Michele Mosca, and Alain Tapp. Quantum amplitude amplification and estimation. Contemporary Mathematics, 305:53–74, 2002. arXiv:quant-ph/0005055.
[39] Dominic W. Berry and Leonardo Novo. Corrected quantum walk for optimal Hamiltonian simulation. arXiv:1606.03443, 2016.
[40] Leonardo Novo and Dominic W. Berry. Improved Hamiltonian simulation via a truncated Taylor series and corrections. arXiv:1611.10033, 2016.
[41] Guang Hao Low and Isaac L. Chuang. Hamiltonian simulation by qubitization. Quantum, 3:163, 2019. arXiv:1610.06546.
[42] Guang Hao Low and Isaac L. Chuang. Optimal Hamiltonian simulation by quantum signal processing. Physical Review Letters, 118(1):010501, 2017. arXiv:1606.02685.
[43] Aram W. Harrow, Avinatan Hassidim, and Seth Lloyd. Quantum algorithm for linear systems of equations. Physical Review Letters, 103(15):150502, 2009. arXiv:0811.3171.
[44] Andris Ambainis. Variable time amplitude amplification and quantum algorithms for linear algebra problems. In 29th Symposium on Theoretical Aspects of Computer Science, volume 14, pages 636–647. LIPIcs, 2012. arXiv:1010.4458.
[45] Dong An and Lin Lin.
Quantum linear system solver based on time-optimal adiabatic quantum computing and quantum approximate optimization algorithm, 2019. arXiv:1909.05500.
[46] Andrew M. Childs, Robin Kothari, and Rolando D. Somma. Quantum algorithm for systems of linear equations with exponentially improved dependence on precision. SIAM Journal on Computing, 46(6):1920–1950, 2017. arXiv:1511.02306.
[47] András Gilyén, Yuan Su, Guang Hao Low, and Nathan Wiebe. Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, pages 193–204, 2019. arXiv:1806.01838.
[48] Lin Lin and Yu Tong. Optimal quantum eigenstate filtering with application to solving quantum linear systems. Quantum, 4:361, 2020. arXiv:1910.14596.
[49] Yiğit Subaşı, Rolando D. Somma, and Davide Orsucci. Quantum algorithms for systems of linear equations inspired by adiabatic quantum computing. Physical Review Letters, 122(6):060504, 2019. arXiv:1805.10549.
[50] Yu Tong, Dong An, Nathan Wiebe, and Lin Lin. Fast inversion, preconditioned quantum linear system solvers, and fast evaluation of matrix functions, 2020. arXiv:2008.13295.
[51] Pedro Costa, Dong An, Yuval R. Sanders, Yuan Su, Ryan Babbush, and Dominic W. Berry. Optimal scaling quantum linear systems solver via discrete adiabatic theorem, 2021. arXiv:2111.08152.
[52] Dominic W. Berry. High-order quantum algorithm for solving linear differential equations. Journal of Physics A: Mathematical and Theoretical, 47(10):105301, 2014. arXiv:1010.2745.
[53] Dominic W. Berry, Andrew M. Childs, Aaron Ostrander, and Guoming Wang. Quantum algorithm for linear differential equations with exponentially improved dependence on precision. Communications in Mathematical Physics, 356(3):1057–1081, 2017. arXiv:1701.03684.
[54] Andrew M. Childs and Jin-Peng Liu. Quantum spectral methods for differential equations. arXiv:1901.00961, 2019.
[55] B.
David Clader, Bryan C. Jacobs, and Chad R. Sprouse. Preconditioned quantum linear system algorithm. Physical Review Letters, 110(25):250504, 2013. arXiv:1301.2340.
[56] Yudong Cao, Anargyros Papageorgiou, Iasonas Petras, Joseph Traub, and Sabre Kais. Quantum algorithm and circuit design solving the Poisson equation. New Journal of Physics, 15(1):013021, 2013. arXiv:1207.2485.
[57] Ashley Montanaro and Sam Pallister. Quantum algorithms and the finite element method. Physical Review A, 93(3):032324, 2016. arXiv:1512.05903.
[58] Pedro C. S. Costa, Stephen Jordan, and Aaron Ostrander. Quantum algorithm for simulating the wave equation. Physical Review A, 99(1):012323, 2019. arXiv:1711.05394.
[59] Andrew M. Childs, Jin-Peng Liu, and Aaron Ostrander. High-precision quantum algorithms for partial differential equations, 2020. arXiv:2002.07868.
[60] Alexander Engel, Graeme Smith, and Scott E. Parker. Quantum algorithm for the Vlasov equation. Physical Review A, 100(6):062315, 2019. arXiv:1907.09418.
[61] Noah Linden, Ashley Montanaro, and Changpeng Shao. Quantum vs. classical algorithms for solving the heat equation. arXiv:2004.06516.
[62] Alexei Y. Kitaev. Quantum measurements and the abelian stabilizer problem, 1995. arXiv:quant-ph/9511026.
[63] David Kahaner, Cleve Moler, and Stephen Nash. Numerical methods and software. Prentice-Hall, Inc., 1989.
[64] Rainer Kress. Numerical Analysis: V. 181. Springer Verlag, 1998.
[65] Esmail Babolian and Mohammad Mahdi Hosseini. A modified spectral method for numerical solution of ordinary differential equations with non-analytic solution. Applied Mathematics and Computation, 132(2-3):341–351, 2002.
[66] Mohammad Mahdi Hosseini. A modified pseudospectral method for numerical solution of ordinary differential equations systems. Applied Mathematics and Computation, 176(2):470–475, 2006.
[67] Călin Ioan Gheorghiu. Spectral methods for differential problems. Casa Cărții de Știință, Cluj-Napoca, 2007.
[68] Jie Shen, Tao Tang, and Li-Lian Wang. Spectral methods: algorithms, analysis and applications, volume 41. Springer Science & Business Media, 2011.
[69] Yudong Cao, Anargyros Papageorgiou, Iasonas Petras, Joseph Traub, and Sabre Kais. Quantum algorithm and circuit design solving the Poisson equation. New Journal of Physics, 15(1):013021, 2013. arXiv:1207.2485.
[70] Pedro C. S. Costa, Stephen Jordan, and Aaron Ostrander. Quantum algorithm for simulating the wave equation. Physical Review A, 99(1):012323, 2019. arXiv:1711.05394.
[71] Ivo Babuška and Manil Suri. The h-p version of the finite element method with quasiuniform meshes. Mathematical Modelling and Numerical Analysis, 21(2):199–238, 1987.
[72] Edward H. Kerner. Universal formats for nonlinear ordinary differential systems. Journal of Mathematical Physics, 22(7):1366–1371, 1981.
[73] Marcelo Forets and Amaury Pouly. Explicit error bounds for Carleman linearization, 2017. arXiv:1711.02552.
[74] Sarah K. Leyton and Tobias J. Osborne. A quantum algorithm to solve nonlinear differential equations. arXiv:0812.4423, 2008.
[75] Ilon Joseph. Koopman-von Neumann approach to quantum simulation of nonlinear classical dynamics, 2020. arXiv:2003.09980.
[76] Ilya Y. Dodin and Edward A. Startsev. On applications of quantum computing to plasma simulations, 2020. arXiv:2005.14369.
[77] Seth Lloyd, Giacomo De Palma, Can Gokler, Bobak Kiani, Zi-Wen Liu, Milad Marvian, Felix Tennie, and Tim Palmer. Quantum algorithm for nonlinear differential equations, 2020. arXiv:2011.06571.
[78] Andrew M. Childs and Joshua Young. Optimal state discrimination and unstructured search in nonlinear quantum mechanics. Physical Review A, 93(2):022314, 2016. arXiv:1507.06334.
[79] Daniel S. Abrams and Seth Lloyd. Nonlinear quantum mechanics implies polynomial-time solution for NP-complete and #P problems. Physical Review Letters, 81(18):3992, 1998. arXiv:quant-ph/9801041.
[80] Scott Aaronson. NP-complete problems and physical reality.
ACM SIGACT News, 36(1):30–52, 2005. arXiv:quant-ph/0502072.
[81] Jin-Peng Liu, Herman Øie Kolden, Hari K. Krovi, Nuno F. Loureiro, Konstantina Trivisa, and Andrew M. Childs. Efficient quantum algorithm for dissipative nonlinear differential equations, 2020. arXiv:2011.03185.
[82] Torsten Carleman. Application de la théorie des équations intégrales linéaires aux systèmes d'équations différentielles non linéaires. Acta Mathematica, 59(1):63–87, 1932.
[83] Krzysztof Kowalski and Willi-Hans Steeb. Nonlinear Dynamical Systems and Carleman Linearization. World Scientific, 1991.
[84] Aram W. Harrow, Avinatan Hassidim, and Seth Lloyd. Quantum algorithm for linear systems of equations. Physical Review Letters, 103(15):150502, 2009. arXiv:0811.3171.
[85] Tom M. Apostol. Calculus, volume II: multi-variable calculus and linear algebra, with applications to differential equations and probability, 1969.
[86] Sangtae Kim and Seppo J. Karrila. Microhydrodynamics: principles and selected applications. Courier Corporation, 2013.
[87] R. K. Michael Thambynayagam. The diffusion handbook: applied solutions for engineers. McGraw Hill Professional, 2011.
[88] Stewart Harris. An introduction to the theory of the Boltzmann equation. Courier Corporation, 2004.
[89] John David Jackson. Classical electrodynamics, 1999.
[90] Bo Thidé. Electromagnetic field theory. Upsilon Books, Uppsala, 2004.
[91] R. Byron Bird. Transport phenomena. Appl. Mech. Rev., 55(1):R1–R4, 2002.
[92] Vivek V. Shende, Stephen S. Bullock, and Igor L. Markov. Synthesis of quantum-logic circuits. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 25(6):1000–1010, 2006. arXiv:quant-ph/0406176.
[93] Andrew M. Childs. On the relationship between continuous- and discrete-time quantum walk. Communications in Mathematical Physics, 294(2):581–603, 2010. arXiv:0810.0312.
[94] François Fillion-Gourdeau and Emmanuel Lorin.
Simple digital quantum algorithm for symmetric first-order linear hyperbolic systems. Numerical Algorithms, 82(3):1009–1045, 2019. arXiv:1705.09361.
[95] Hans-Joachim Bungartz and Michael Griebel. Sparse grids. Acta Numerica, 13:147–269, 2004.
[96] Christoph Zenger. Sparse grids, 1991. https://www5.in.tum.de/pub/zenger91sg.pdf.
[97] Jie Shen and Haijun Yu. Efficient spectral sparse grid methods and applications to high-dimensional elliptic problems. SIAM Journal on Scientific Computing, 32(6):3228–3250, 2010.
[98] Jie Shen and Haijun Yu. Efficient spectral sparse grid methods and applications to high-dimensional elliptic equations II. Unbounded domains. SIAM Journal on Scientific Computing, 34(2):A1141–A1164, 2012.
[99] Lawrence C. Evans. Partial differential equations (2nd ed.). American Mathematical Society, 2010.
[100] Jianping Li. General explicit difference formulas for numerical differentiation. Journal of Computational and Applied Mathematics, 183(1):29–52, 2005.
[101] Roger A. Horn and Charles R. Johnson. Matrix analysis. Cambridge University Press, 2012.
[102] Norbert Schuch and Jens Siewert. Programmable networks for quantum algorithms. Physical Review Letters, 91(2):027902, 2003. arXiv:quant-ph/0303063.
[103] Jonathan Welch, Daniel Greenbaum, Sarah Mostame, and Alán Aspuru-Guzik. Efficient quantum circuits for diagonal unitaries without ancillas. New Journal of Physics, 16(3):033040, 2014. arXiv:1306.3991.
[104] Dominic W. Berry, Andrew M. Childs, Richard Cleve, Robin Kothari, and Rolando D. Somma. Simulating Hamiltonian dynamics with a truncated Taylor series. Physical Review Letters, 114(9):090502, 2015. arXiv:1412.4687.
[105] Daniel T. Colbert and William H. Miller. A novel discrete variable representation for quantum mechanical reactive scattering via the S-matrix Kohn method. The Journal of Chemical Physics, 96(3):1982–1991, 1992.
[106] Daniel Spielman. Rings, paths, and Cayley graphs (course notes), 2014.
http://www.cs.yale.edu/homes/spielman/561/lect05-15.pdf.
[107] Tao Tang. Spectral and high-order methods with applications. Science Press Beijing, 2006.
[108] Andreas Klappenecker and Martin Rotteler. Discrete cosine transforms on quantum computers. In Proceedings of the 2nd International Symposium on Image and Signal Processing and Analysis, pages 464–468, 2001. arXiv:quant-ph/0111038.
[109] Martin Rötteler, Markus Püschel, and Thomas Beth. Fast signal transforms for quantum computers. In Proceedings of the Workshop on Physics and Computer Science, pages 31–43, 1999.
[110] Markus Püschel, Martin Rötteler, and Thomas Beth. Fast quantum Fourier transforms for a class of non-abelian groups. In International Symposium on Applied Algebra, Algebraic Algorithms, and Error-Correcting Codes, pages 148–159, 1999. arXiv:quant-ph/9807064.
[111] Willi-Hans Steeb and F. Wilhelm. Non-linear autonomous systems of differential equations and Carleman linearization procedure. Journal of Mathematical Analysis and Applications, 77(2):601–611, 1980.
[112] Roberto F. S. Andrade. Carleman embedding and Lyapunov exponents. Journal of Mathematical Physics, 23(12):2271–2275, 1982.
[113] Gerasimos Lyberatos and Christos A. Tsiligiannis. A linear algebraic method for analysing Hopf bifurcation of chemical reaction systems. Chemical Engineering Science, 42(5):1242–1244, 1987.
[114] Roger Brockett. The early days of geometric nonlinear control. Automatica, 50(9):2203–2224, 2014.
[115] Andreas Rauh, Johanna Minisini, and Harald Aschemann. Carleman linearization for control and for state and disturbance estimation of nonlinear dynamical processes. IFAC Proceedings Volumes, 42(13):455–460, 2009.
[116] Alfredo Germani, Costanzo Manes, and Pasquale Palumbo. Filtering of differential nonlinear systems via a Carleman approximation approach. In Proceedings of the 44th IEEE Conference on Decision and Control, pages 5917–5922, 2005.
[117] Johannes M. Burgers.
A mathematical model illustrating the theory of turbulence. In Advances in Applied Mechanics, volume 1, pages 171–199. Elsevier, 1948.
[118] Pierre Gilles Lemarié-Rieusset. The Navier-Stokes problem in the 21st century. CRC Press, 2018.
[119] Germund G. Dahlquist. A special stability problem for linear multistep methods. BIT Numerical Mathematics, 3(1):27–43, 1963.
[120] Kendall E. Atkinson. An introduction to numerical analysis. John Wiley & Sons, 2008.
[121] Carl W. Helstrom. Quantum detection and estimation theory. Journal of Statistical Physics, 1:231–252, 1969.
[122] Paul Waltman. Competition models in population biology. SIAM, 1983.
[123] Elliott W. Montroll. On coupled rate equations with quadratic nonlinearities. Proceedings of the National Academy of Sciences, 69(9):2532–2536, 1972.
[124] Alessandro Ceccato, Paolo Nicolini, and Diego Frezzato. Recasting the mass-action rate equations of open chemical reaction networks into a universal quadratic format. Journal of Mathematical Chemistry, 57:1001–1018, 2019.
[125] Fred Brauer and Carlos Castillo-Chavez. Mathematical models in population biology and epidemiology, volume 2. Springer, 2012.
[126] Chaolong Wang, Li Liu, Xingjie Hao, Huan Guo, Qi Wang, Jiao Huang, Na He, Hongjie Yu, Xihong Lin, An Pan, Sheng Wei, and Tangchun Wu. Evolving epidemiology and impact of non-pharmaceutical interventions on the outbreak of coronavirus disease 2019 in Wuhan, China. Journal of the American Medical Association, 323(19):1915–1923, 2020.
[127] Gul Zaman, Yong Han Kang, and Il Hyo Jung. Stability analysis and optimal vaccination of an SIR epidemic model. Biosystems, 93(3):240–249, 2008.
[128] Derdei Bichara, Yun Kang, Carlos Castillo-Chávez, Richard Horan, and Charles Perrings. SIS and SIR epidemic models under virtual dispersal, March 2015.
[129] Rany Qurratu Aini, Deden Aldila, and Kiki Sugeng. Basic reproduction number of a multi-patch SVI model represented as a star graph topology, October 2018.
[130] Lev Kofman, Dmitri Pogosian, and Sergei Shandarin. Structure of the universe in the two-dimensional model of adhesion. Monthly Notices of the Royal Astronomical Society, 242(2):200–208, 1990.
[131] Sergei F. Shandarin and Yakov B. Zeldovich. The large-scale structure of the universe: turbulence, intermittency, structures in a self-gravitating medium. Reviews of Modern Physics, 61(2):185–220, 1989.
[132] Massimo Vergassola, Bérengère Dubrulle, Uriel Frisch, and Alain Noullez. Burgers' equation, devil's staircases and the mass distribution for large-scale structures. Astronomy and Astrophysics, 289:325–356, 1994.
[133] P. A. Davidson. An Introduction to Magnetohydrodynamics. Cambridge Texts in Applied Mathematics. Cambridge University Press, 2001.
[134] Yann Brenier and Emmanuel Grenier. Sticky particles and scalar conservation laws. SIAM Journal on Numerical Analysis, 35(6):2317–2328, 1998.
[135] Min-Hang Bao. Micro mechanical transducers: pressure sensors, accelerometers and gyroscopes. Elsevier, 2000.
[136] Constantine M. Dafermos. Hyperbolic conservation laws in continuum physics, volume 3. Springer, 2005.
[137] Richard P. Feynman. Quantum mechanical computers. Optics News, 11:11–20, 1985.
[138] Richard Bellman and John M. Richardson. On some questions arising in the approximate solution of nonlinear differential equations. Quarterly of Applied Mathematics, 20:333–339, 1963.