ABSTRACT

Title of Dissertation: MODEL ORDER REDUCTION FOR PARAMETER-DEPENDENT PARTIAL DIFFERENTIAL EQUATIONS WITH CONSTRAINTS

Kayla Davie
Doctor of Philosophy, 2023

Dissertation Directed by: Professor Howard Elman
Department of Computer Science

Reduced-order methods have been used to reliably solve discrete parametrized mathematical models and to lessen their associated computational complexity by projecting them onto spaces of reduced order. The reduced-order spaces are spanned by a finite number of full-order solutions, a reduced basis (RB), that, if well chosen, provide a good approximation of the entire solution manifold. Reduced-order methods have been used with various problem classes, including different types of constrained parametrized problems: constrained parametrized partial differential equations (PDEs), where the constraints are built into the PDEs, and parametrized PDE-constrained optimization problems, or PDE-control problems, where the constraints themselves are PDEs. In the deterministic setting, both of these problem classes involve discrete models that are of saddle-point form and can be computationally expensive to solve. It is well known that saddle-point problems must satisfy an inf-sup condition to ensure stability of the solution; thus, solving deterministic variants of these models requires the consideration and satisfaction of an inf-sup stability condition. When these models are subject to parametrization, the solution of a deterministic problem is sought for many parameter values. Reduced-order models for these problem classes are often constructed to mirror the full-order models and are also of saddle-point form. In the established RB methods we study for the problem classes explored in this thesis, the reduced basis is represented as a block-diagonal matrix that produces a saddle-point reduced system and is augmented to satisfy inf-sup stability.
Two well-studied methods of building an augmented RB to ensure inf-sup stability are augmentation by aggregation and augmentation by the supremizer. We present a comparative study of these two common methods of stabilizing reduced-order models and compare the accuracy, efficiency, and computational costs associated with them for solving the parametrized PDE-control problem. We propose a new approach to implementing the RB, the stacked reduced basis, which produces a reduced system that is not of saddle-point form. The stacked reduced basis avoids the need to satisfy the inf-sup condition to ensure stability and, therefore, the need to augment the reduced basis spaces. This results in a reduced basis system of smaller order, which reduces the computational work in the online step. While inf-sup stability is avoided, there are still issues with the stability of the stacked reduced system during RB construction, particularly for the constrained PDE problem class. We show that this can be addressed by penalization, and we present results showing that penalization improves the stability of an established augmented RB method as well. We present numerical results to compare the new approach with two established ways of implementing the RB method (with both commonly accepted choices of augmentation) and demonstrate the efficiency of the proposed approach. We study the efficiency of the stacked reduced basis for both PDE-control problems and constrained PDEs subject to parametrization.
MODEL ORDER REDUCTION FOR PARAMETER-DEPENDENT PARTIAL DIFFERENTIAL EQUATIONS WITH CONSTRAINTS

by

Kayla Davie

Dissertation submitted to the Faculty of the Graduate School of the University of Maryland, College Park in partial fulfillment of the requirements for the degree of Doctor of Philosophy 2023

Advisory Committee:
Professor Howard Elman, Chair/Advisor
Professor Radu Balan
Professor Ramani Duraiswami
Professor Michael Fu
Professor Konstantina Trivisa

© Copyright by Kayla Davie 2023

Acknowledgments

First and foremost, I give thanks to God. While I wish I could say my faith was unwavering, it was not, but His love and protection were. Without Him I am nothing, and it is constant prayer and belief in His will that sustained me throughout this journey.

I would like to thank my amazing advisor, Dr. Howard Elman, for his guidance, patience, thoughtful advice, editorial suggestions and mentorship. I thank him for having high expectations of me and for helping me to develop into a better researcher and technical writer while still being kind. Working under him has been an honor and without him, this would not have been possible. I would like to thank Dr. Konstantina Trivisa for her support, considerate words, and advice during the early stages of my graduate career. I am thankful to my entire committee and am extremely grateful for their participation and feedback. Special thanks to Dr. Sherry Scott, who, without ever meeting me, offered her full support, mentored me, and reminded me that I am worthy.

I owe endless thanks to my family. My loving parents, Vicki C. and James Davie, taught me to chase my dreams no matter how unrealistic they seem to those around me. I thank them for giving me a strong foundation rooted in love and self-worth. I thank my mother for always being there to pray for me and with me, for being one of my very best friends, and for never being more than a phone call away.
I thank my father for having a level of pride in me that was contagious enough to encourage me to have that same pride in myself. I thank my big brother, James Davie, Jr., for reassuring me and believing in me. I am thankful for the gifts of my nephew and niece, Ayden and Reverie, because knowing they are watching made me want to make this dream a reality as an example of what is possible when you trust yourself and God. To my godmother and cheerleader, Gwen Twyman Clarke, my loving sister, Angela Douglas, my nieces, nephews, aunts, uncles and cousins, thank you for the encouraging words, prayers, and endless support.

I thank Michael Bucaoto for being the best partner, friend, and support system I could have ever asked for. I thank you for being my family, my heart, and my home. Thank you for the endless trips back and forth between New York and Maryland, for loving me unconditionally through this journey, and for constantly reminding me that all power belongs to God.

To my incredible friends, thank you for being there for me throughout this journey. The supportive texts, visits, encouragement and advice were needed and appreciated. To one of my closest friends who is more like a sister, Danielle Middlebrooks, thank you for doing this journey with me. Thank you for being my only friend here, for being a role model for me, for believing in me, and for giving me a shoulder to cry on and an ear to vent to. This journey would not have been the same without you and I am thankful that I have a friend for life in you.

To Mother Spelman, thank you for being a place where I could grow and be nurtured by people who believed in me, my abilities and what I had to offer the world. Thank you for teaching little Black girls to chase their dreams, despite what the world thinks of them. I can confidently say that choosing Spelman College was one of the best decisions of my life and I am eternally grateful for the support I received within those gates. To Dr.
Viveka Borum, thank you for mentoring me, introducing me to mathematical research, and showing me that a young Black woman under thirty could earn a Ph.D. in mathematics, which inspired me to want the same for myself.

Lastly, I am thankful for the people I met here who discouraged me and doubted me because those people made me want this dream even more. To God be the glory!

Table of Contents

Acknowledgements
Table of Contents
List of Tables
List of Figures
List of Abbreviations
Chapter 1: Introduction
  1.1 Overview
  1.2 Background
    1.2.1 Reduced-Order Methods
    1.2.2 Constrained Problems in the PDE Setting
    1.2.3 Reduced-Order Modeling for Parametrized Constrained Problems
  1.3 Summary of Main Contributions and Results
Chapter 2: Preliminaries
  2.1 Parametrized PDE-Constrained Optimization Problems
    2.1.1 Definition of Operators and Spaces
    2.1.2 Discrete Forms and Matrix Systems
  2.2 Parametrized Constrained PDEs
    2.2.1 Definition of Operators and Spaces
    2.2.2 Discrete Forms and Matrix Systems
  2.3 Reduced-Order Modeling
    2.3.1 Error Indicator
    2.3.2 Projection Methods
    2.3.3 Standard Block-Diagonal Matrix Representation of the Reduced Basis
  2.4 Accuracy and Efficiency Testing
Chapter 3: A Comparative Study of the Supremizer and Aggregation Methods of Stabilization for Parametrized PDE-Control Problems
  3.1 Stabilization by Supremizer for Parametrized PDE-Control Problems
  3.2 Stabilization by Aggregation for Parametrized PDE-Control Problems
  3.3 Numerical Results
  3.4 Conclusion
Chapter 4: The Stacked Reduced Basis for Parametrized PDE-Control Problems
  4.1 Reduced Basis Performance
    4.1.1 Parametrized Diffusion Control
    4.1.2 Parametrized Convection-Diffusion Control
  4.2 Conclusion
Chapter 5: The Stacked Reduced Basis for Constrained PDEs
  5.1 Supremizer Reduced Basis for the Stokes Equations
  5.2 Stacked Reduced Basis for the Stokes Equations
  5.3 Stable Elements
    5.3.1 Penalization and Error Analysis
    5.3.2 Reduced Basis Performance
  5.4 Unstable Elements
  5.5 Conclusion
Chapter 6: Conclusion
  6.1 Summary of Methodological Advancements
  6.2 Future Work
Bibliography

List of Tables

3.1 Maximum condition number of the reduced system Q^T G(µ)Q in the Galerkin case and (G(µ)Q)^T (G(µ)Q) in the Petrov-Galerkin case over all parameters in the training set and all steps of the greedy algorithm for the parameterized diffusion control problem as nc is refined, where the spatial mesh has (2^nc + 1) × (2^nc + 1) discrete elements, the tolerance is 10^-7, Nmax = 2000 and ND = 3. Empty cells represent lack of convergence to the desired tolerance.

3.2 Comparison of number of snapshots N, number of columns in the reduced basis, and maximum relative error indicator over the verification set for Qsup and Qagg with Galerkin and Petrov-Galerkin formulations for the parameterized diffusion control problem. Here, the verification set had 500 parameters, the domain Ω has ND = 3 subdomains, the spatial mesh has (2^nc + 1) × (2^nc + 1) discrete elements, Nmax = 2000 and the stopping tolerance for the greedy algorithm is 10^-7. Empty cells correspond to cases where the greedy search failed to reach this tolerance.

3.3 Comparison of columns and maximum relative error indicator over the verification set for Qsup and Qagg with Galerkin projection for the parameterized diffusion control problem. Here, the verification set had 500 parameters, the domain Ω has ND = 10 subdomains, the spatial mesh has (2^nc + 1) × (2^nc + 1) discrete elements, Nmax = 2000 and the stopping tolerance for the greedy algorithm is 10^-7.

3.4 Comparison of columns, number of basis vectors N and maximum relative error indicator over the verification set for Qsup and Qagg with Galerkin and Petrov-Galerkin projection for the parameterized convection-diffusion control problem. Here, the verification set had 500 parameters, the spatial mesh has (2^nc + 1) × (2^nc + 1) discrete elements, Nmax = 2000 and the stopping tolerance for the greedy algorithm is 10^-4.

4.1 Maximum condition number of the Galerkin and Petrov-Galerkin reduced systems, Q^T G(µ)Q and (G(µ)Q)^T (G(µ)Q), respectively, for the parametrized diffusion control problem during the offline stage. This is the maximum condition number achieved by any µ ∈ T over all N. The domain Ω has ND = 3 subdomains, the spatial mesh has (2^nc + 1) × (2^nc + 1) discrete elements, Nmax = 2000 and the stopping tolerance for the greedy algorithm is 10^-7. Empty cells correspond to cases where the greedy search failed to reach this tolerance.

4.2 Maximum condition number of the Galerkin and Petrov-Galerkin reduced systems, Q^T G(µ)Q and (G(µ)Q)^T (G(µ)Q), respectively, for the parametrized diffusion control problem during the offline stage. This is the maximum condition number achieved by any µ ∈ T at the last step of the greedy algorithm. The domain Ω has ND = 3 subdomains, the spatial mesh has (2^nc + 1) × (2^nc + 1) discrete elements, Nmax = 2000 and the stopping tolerance for the greedy algorithm is 10^-7. Empty cells correspond to cases where the greedy search failed to reach this tolerance.

4.3 Comparison of the number of columns in the RB matrix, number of snapshots N, and maximum error indicator values over the verification set for Qsup, Qagg, and Qstk with Galerkin projection for the parametrized diffusion control problem. The verification set has 500 parameters, the domain Ω has ND = 3 subdomains, the spatial mesh has (2^nc + 1) × (2^nc + 1) discrete elements, Nmax = 2000 and the stopping tolerance for the greedy algorithm is 10^-7.

4.4 Comparison of the number of columns in the RB matrix, number of snapshots N, and maximum error indicator values over the verification set for Qsup, Qagg, and Qstk with Petrov-Galerkin projection for the parametrized diffusion control problem. The verification set has 500 parameters, the domain Ω has ND = 3 subdomains, the spatial mesh has (2^nc + 1) × (2^nc + 1) discrete elements, Nmax = 2000 and the stopping tolerance for the greedy algorithm is 10^-7.

4.5 Comparison of the number of columns in the RB matrix, number of snapshots N, and maximum error indicator values over the verification set for Qsup, Qagg, and Qstk with Galerkin projection for the parametrized diffusion control problem. The verification set has 500 parameters, the domain Ω has ND = 10 subdomains, the spatial mesh has (2^nc + 1) × (2^nc + 1) discrete elements, Nmax = 2000 and the stopping tolerance for the greedy algorithm is 10^-7.

4.6 Comparison of the number of columns in the RB matrix, number of snapshots N, and maximum error indicator values over the verification set for Qsup, Qagg, and Qstk with Galerkin projection for the parametrized convection-diffusion control problem. The verification set has 500 parameters, the spatial mesh has (2^nc + 1) × (2^nc + 1) discrete elements, Nmax = 2000 and the stopping tolerance for the greedy algorithm is 10^-4.

4.7 Comparison of the number of columns in the RB matrix, number of snapshots N, and maximum error indicator values over the verification set for Qsup, Qagg, and Qstk with Petrov-Galerkin projection for the parametrized convection-diffusion control problem. The verification set has 500 parameters, the spatial mesh has (2^nc + 1) × (2^nc + 1) discrete elements, Nmax = 2000 and the stopping tolerance for the greedy algorithm is 10^-4.

5.1 Comparison of the number of columns in the RB matrix, number of snapshots N, and maximum error indicator values over the verification set for Qsup and Qstk with Galerkin and Petrov-Galerkin projection for the parametrized Stokes control problem with Poiseuille flow. The verification set has 500 parameters, the domain Ω has ND = 3 subdomains, the spatial mesh has (2^nc + 1) × (2^nc + 1) discrete elements, Nmax = 3000 and the stopping tolerance for the greedy algorithm is 10^-4. In all cases, the Stokes problem is not penalized and has a zero matrix in the (2,2) block of G(µ).

5.2 Comparison of the number of columns in the RB matrix, number of snapshots N, and maximum error indicator values over the verification set for Qsup and Qstk with Galerkin and Petrov-Galerkin projection for the parametrized Stokes control problem with Poiseuille flow. The verification set has 500 parameters, the domain Ω has ND = 3 subdomains, the spatial mesh has (2^nc + 1) × (2^nc + 1) discrete elements with distance h = 2^-(nc-1) between them, Nmax = 3000 and the stopping tolerance for the greedy algorithm is 10^-4. In all cases, the Stokes problem has been penalized and ϵ = h^2.

5.3 Comparison of the number of columns in the RB matrix, number of snapshots N, and maximum error indicator values over the verification set for Qstk with Galerkin projection for the parametrized Stokes control problem with Poiseuille flow. The verification set has 500 parameters, the domain Ω has ND = 10 subdomains, the spatial mesh has (2^nc + 1) × (2^nc + 1) discrete elements with distance h = 2^-(nc-1) between them, Nmax = 3000 and the stopping tolerance for the greedy algorithm is 10^-4. In all cases, the Stokes system has been penalized and ϵ = h^2.

5.4 Comparison of the number of columns in the RB matrix, number of snapshots N, and maximum error indicator values over the verification set for Qsup and Qstk with Galerkin and Petrov-Galerkin projection for the parametrized Stokes control problem with L-shaped domain for a model of flow over a backward-facing step. The verification set has 500 parameters, the domain Ω has ND = 3 subdomains, the spatial mesh has discrete elements with distance h = 2^-(nc-1) between them, Nmax = 3000 and the stopping tolerance for the greedy algorithm is 10^-4. For RB systems specified by Galerkin projection, the Stokes problem has been penalized and ϵ = h^2. For RB systems specified by Petrov-Galerkin projection, penalization is not used.

5.5 Comparison of the number of columns in the RB matrix, number of snapshots N, and maximum error indicator values over the verification set for Qsup and Qstk with Galerkin and Petrov-Galerkin projection for the parametrized Stokes control problem with Poiseuille flow. The verification set has 500 parameters, the domain Ω has ND = 3 subdomains, and the spatial mesh has (2^nc + 1) × (2^nc + 1) discrete elements with distance h between them, Nmax = 2000 and the stopping tolerance for the greedy algorithm is 5 × 10^-5. In all cases, stabilized Q1-Q1 elements have been used.

List of Figures

3.1 Subdivision of the spatial domain Ω for the parameterized diffusion control problem when ND = 3. In this case, µ ∈ R^3.

3.2 Maximum relative error indicator over the training set as Qsup is being built with both a Galerkin (G) and Petrov-Galerkin (PG) solve for the parameterized diffusion control problem. The number of basis vectors is N. Here, the stopping tolerance for the greedy algorithm is 10^-7, the spatial discretization has (2^nc + 1) × (2^nc + 1) elements where nc = 4, and the number of subdomains Ωk ⊂ Ω is ND = 3.

3.3 Maximum relative error indicator over the training set as Qagg is being built with both a Galerkin (G) and Petrov-Galerkin (PG) solve for the parameterized diffusion control problem. The number of basis vectors is N. Here, the stopping tolerance for the greedy algorithm is 10^-7, the spatial discretization has (2^nc + 1) × (2^nc + 1) elements where nc = 4, and the number of subdomains Ωk ⊂ Ω is ND = 3.

3.4 Maximum relative error indicator over the training set as Qsup is being built with both a Galerkin (G) and Petrov-Galerkin (PG) solve for the parameterized diffusion control problem. The number of basis vectors is N. Here, the stopping tolerance for the greedy algorithm is 10^-7, the spatial discretization has (2^nc + 1) × (2^nc + 1) elements where nc = 6, and the number of subdomains Ωk ⊂ Ω is ND = 3. The PG reduction fails to converge.

3.5 Maximum relative error indicator over the training set as Qagg is being built with both a Galerkin (G) and Petrov-Galerkin (PG) solve for the parameterized diffusion control problem. The number of basis vectors is N. Here, the stopping tolerance for the greedy algorithm is 10^-7, the spatial discretization has (2^nc + 1) × (2^nc + 1) elements where nc = 6, and the number of subdomains Ωk ⊂ Ω is ND = 3.

3.6 Maximum condition number of the reduced system Q^T G(µ)Q in the Galerkin (G) case and (G(µ)Q)^T (G(µ)Q) in the Petrov-Galerkin (PG) case over all parameters in the training set as Qsup is being built for the parameterized diffusion control problem. The number of basis vectors is N. Here, the stopping tolerance for the greedy algorithm is 10^-7, the spatial discretization has (2^nc + 1) × (2^nc + 1) elements where nc = 6, and the number of subdomains Ωk ⊂ Ω is ND = 3. There is a lack of convergence in the case of PG projection.

3.7 Maximum condition number of the reduced system Q^T G(µ)Q in the Galerkin (G) case and (G(µ)Q)^T (G(µ)Q) in the Petrov-Galerkin (PG) case over all parameters in the training set as Qagg is built for the parameterized diffusion control problem. The number of basis vectors is N. Here, the stopping tolerance for the greedy algorithm is 10^-7, the spatial discretization has (2^nc + 1) × (2^nc + 1) elements where nc = 6, and the number of subdomains Ωk ⊂ Ω is ND = 3.

3.8 Picture of the spatial domain Ω and subdomains Ω1 and Ω2 for the convection-diffusion control problem.

4.1 Maximum error indicator over all parameters in the training set as Qsup, Qagg and Qstk are being built with a Galerkin (G) projection for the parameterized diffusion control problem. The number of basis vectors is N. Here, the stopping tolerance for the greedy algorithm is 10^-7, the spatial discretization has (2^nc + 1) × (2^nc + 1) elements where nc = 5, Nmax = 2000 and the number of subdomains Ωk ⊂ Ω is ND = 3.

4.2 Maximum condition number of the Galerkin (G) reduced system Q^T G(µ)Q over all parameters in the training set as Qsup, Qagg and Qstk are built for the parameterized diffusion control problem. The number of basis vectors is N. Here, the stopping tolerance for the greedy algorithm is 10^-7, the spatial discretization has (2^nc + 1) × (2^nc + 1) elements where nc = 5, Nmax = 2000 and the number of subdomains Ωk ⊂ Ω is ND = 3.

4.3 Maximum error indicator over all parameters in the training set as Qsup, Qagg and Qstk are being built with a Petrov-Galerkin (PG) projection for the parameterized diffusion control problem. The number of basis vectors is N. Here, the stopping tolerance for the greedy algorithm is 10^-7, the spatial discretization has (2^nc + 1) × (2^nc + 1) elements where nc = 5, Nmax = 2000 and the number of subdomains Ωk ⊂ Ω is ND = 3.
4.4 Maximum condition number of the Petrov-Galerkin (PG) reduced system (G(µ)Q)^T (G(µ)Q) over all parameters in the training set as Qsup, Qagg and Qstk are built for the parameterized diffusion control problem. The number of basis vectors is N. Here, the stopping tolerance for the greedy algorithm is 10^-7, the spatial discretization has (2^nc + 1) × (2^nc + 1) elements where nc = 5, Nmax = 2000 and the number of subdomains Ωk ⊂ Ω is ND = 3.

4.5 Maximum error indicator over all parameters in the training set as Qsup, Qagg and Qstk are being built with a Petrov-Galerkin (PG) projection for the parameterized diffusion control problem. The number of basis vectors is N. Here, the stopping tolerance for the greedy algorithm is 10^-7, the spatial discretization has (2^nc + 1) × (2^nc + 1) elements where nc = 7, Nmax = 2000 and the number of subdomains Ωk ⊂ Ω is ND = 3. The greedy algorithm fails to converge for Qsup.

4.6 Maximum condition number of the Petrov-Galerkin (PG) reduced system (G(µ)Q)^T (G(µ)Q) over all parameters in the training set as Qsup, Qagg and Qstk are built for the parameterized diffusion control problem. The number of basis vectors is N. Here, the stopping tolerance for the greedy algorithm is 10^-7, the spatial discretization has (2^nc + 1) × (2^nc + 1) elements where nc = 7, Nmax = 2000 and the number of subdomains Ωk ⊂ Ω is ND = 3.

5.1 Plots of equally distributed streamlines (left) and pressure field (right) of a mixed-order Q2-Q1 finite element solution of the Poiseuille flow benchmark problem when nc = 4.

5.2 Plots of equally distributed streamlines (top) and pressure field (bottom) of a mixed-order Q2-Q1 finite element solution of the benchmark problem for flow over a backward-facing step when nc = 4.

5.3 Maximum error indicator over all parameters in the training set as Qsup is built with Galerkin (G) and Petrov-Galerkin (PG) projection for the parameterized Stokes problem with Poiseuille flow. The number of basis vectors is N. Here, the stopping tolerance for the greedy algorithm is 10^-4, the spatial discretization has (2^nc + 1) × (2^nc + 1) elements where nc = 4, Nmax = 3000 and the number of subdomains Ωk ⊂ Ω is ND = 3.

5.4 Plot of the maximum condition number of the reduced system Q^T G(µ)Q in the Galerkin (G) case and (G(µ)Q)^T (G(µ)Q) in the Petrov-Galerkin (PG) case over all parameters in the training set as Qsup is built for the parameterized Stokes problem with Poiseuille flow. The number of basis vectors is N. Here, the stopping tolerance for the greedy algorithm is 10^-4, the spatial discretization has (2^nc + 1) × (2^nc + 1) elements where nc = 4, Nmax = 3000 and the number of subdomains Ωk ⊂ Ω is ND = 3.

5.5 Maximum error indicator over all parameters in the training set as Qstk is built with Galerkin (G) and Petrov-Galerkin (PG) projection for the parameterized Stokes problem with Poiseuille flow. The number of basis vectors is N. Here, the stopping tolerance for the greedy algorithm is 10^-4, the spatial discretization has (2^nc + 1) × (2^nc + 1) elements where nc = 4, Nmax = 3000 and the number of subdomains Ωk ⊂ Ω is ND = 3.

5.6 Plot of the maximum condition number of the reduced system Q^T G(µ)Q in the Galerkin (G) case and (G(µ)Q)^T (G(µ)Q) in the Petrov-Galerkin (PG) case over all parameters in the training set as Qstk is built for the parameterized Stokes problem with Poiseuille flow. The number of basis vectors is N. Here, the stopping tolerance for the greedy algorithm is 10^-4, the spatial discretization has (2^nc + 1) × (2^nc + 1) elements where nc = 4, Nmax = 3000 and the number of subdomains Ωk ⊂ Ω is ND = 3.

5.7 Maximum error indicator over all parameters in the training set as Qsup and Qstk are built with Galerkin (G) projection for the parameterized Stokes problem with Poiseuille flow. The number of basis vectors is N. Here, the stopping tolerance for the greedy algorithm is 10^-4, the spatial discretization has (2^nc + 1) × (2^nc + 1) elements where nc = 4, Nmax = 3000 and the number of subdomains Ωk ⊂ Ω is ND = 3. The Stokes problem has been penalized and ϵ = h^2.

5.8 Maximum condition number of the Galerkin (G) reduced system Q^T G(µ)Q over all parameters in the training set as Qsup and Qstk are built for the parameterized Stokes problem with Poiseuille flow. The number of basis vectors is N. Here, the stopping tolerance for the greedy algorithm is 10^-4, the spatial discretization has (2^nc + 1) × (2^nc + 1) elements where nc = 4, Nmax = 3000 and the number of subdomains Ωk ⊂ Ω is ND = 3. The Stokes problem has been penalized and ϵ = h^2.

List of Abbreviations

FEM  Finite element method
MOR  Model order reduction
PDE  Partial differential equation
RB   Reduced basis
ROM  Reduced-order model

Chapter 1: Introduction

The focus of this dissertation is on using model order reduction (MOR) techniques to efficiently solve partial differential equations (PDEs) and PDE-constrained optimization problems that are subject to parametrization. We start the introduction with a brief overview of reduced-order modeling, the computational method we study in this research, as well as the problem classes we will consider. We follow this with a discussion of the goals of this dissertation and a brief summary of the main contributions of this thesis. We conclude this section with an outline for the following chapters.
1.1 Overview

Reduced-order modeling is an effective computational method for reducing the computational expense of several types of parametrized mathematical models. When problems are solved subject to parametrization, solutions to a full-order model are desired for many parameter values. Simulations of this type may be carried out to perform sensitivity analysis or to obtain statistical properties of solutions. Often, for parametrized models solved using MOR techniques, solving the full-order model many times is not only computationally expensive but infeasible. This issue may arise in settings where statistical inversion methods are used and simulations are required for thousands of parameter values. Lack of feasibility may also come from the time associated with repeated solves of the full-order system for many parameter values when solutions are desired in settings where time is critical, an issue that emerges in model predictive control problems [16]. Reduced-order models (ROMs) diminish the computational complexity of full-order parametrized models by projecting the large-dimensional full-order systems onto smaller-order spaces spanned by a finite number of full-order solutions, called the reduced basis (RB).

Many real-world phenomena can be represented by constrained models, specifically constrained PDEs and PDE-constrained optimization problems, optimization problems where the constraints themselves are PDEs. Discretization of these problems results in large, complex mathematical models that are computationally expensive to solve and are of saddle-point form. In the deterministic setting, there has been a considerable amount of research on preconditioning techniques for solving constrained models iteratively using known optimization techniques such as iterative Krylov subspace methods [53, 56]. In the parametrized setting, this computationally expensive problem must be solved repeatedly, a major computational task.
Reduced-order methods have been effective at reducing the computational complexity of parametrized models, including parametrized constrained PDE and PDE-control models [23, 48]. In order to guarantee stability of saddle-point systems, satisfaction of an inf-sup stability condition is required [13]. In several standard ways of implementing ROMs for saddle-point problems, the reduced systems are also of saddle-point form, and inf-sup stability of the RB system must be considered in constructing the RB. The necessity for ROMs of constrained problems to be inf-sup stable adds difficulty to developing reduced-order techniques to solve them. Methods have been developed to augment the RB during construction in order to maintain inf-sup stability. We consider two commonly accepted approaches to implementing ROMs for saddle-point problems and ensuring inf-sup stability: augmentation by supremizer and augmentation by aggregation. The supremizer method has been effective at stabilizing RB models for parametrized constrained PDEs like the parametrized Navier-Stokes equations as well as parametrized PDE-control problems like the parametrized Stokes control problem [23, 47]. The aggregation method has only been demonstrated to be effective in the parametrized PDE-control setting [48]. While the two standard approaches of constructing a RB we focus on have been thoroughly studied and proven effective in reduced-order modeling, there is no research comparing the accuracy and efficiency of these methods. The first goal of this thesis is to offer an in-depth comparison of these two standard approaches for constructing ROMs, the supremizer method and the aggregation method, and to determine which method is superior for obtaining approximate solutions to parametrized PDE-control problems. A second goal of this research is to find a new way to build a RB that eliminates the need to augment the RB while still producing a stable system.
Eliminating augmentation will result in an RB of smaller dimension and thus reduce the computational complexity associated with solving the reduced problem. We desire a method that is effective at solving parametrized PDE-control models as well as parametrized constrained PDEs.

1.2 Background

In this section, we begin by introducing the computational methods we study in this thesis, reduced-order methods, and provide insight into how these methods have been effective for addressing different problem classes. We follow this with a discussion of the two problem classes we study in this thesis. The first class of problems is PDE-constrained optimization problems, optimization problems where minimization is done with respect to PDE constraints. In this case, the constraint of the problem is the PDE itself. The second class of problems we study is constrained PDEs. These are special cases of PDE models where the constraint is built into the PDE. We summarize some of the research that has been done to find methods to efficiently solve these problems. To conclude this section, we return to the discussion of reduced-order modeling and explain the additional complexity that arises when reduced-order methods are used to solve problems in the two constrained problem classes we study in this thesis.

1.2.1 Reduced-Order Methods

Reduced-order methods have been used to effectively solve PDEs in various settings. A solution is sought to a general parametrized PDE model of the form L(y; µ) = b(y; µ), where µ ∈ P is a parameter vector and P is the parameter space of possible parameter vectors µ. Consider a discretization of the PDE of order N∗ such that the discrete parametrized PDE has the form

G(µ)y(µ) = b(µ),  (1.1)

where N∗ is large and the PDE is expensive to solve. The discrete PDE (1.1) is considered a “full-order” model.
When a solution to the full-order model is required for many parameter values, this expensive discrete system must be solved many times, which can increase the computational expense dramatically. The idea behind MOR techniques is that the entire solution manifold Y = {y(µ) | µ ∈ P} can be approximated by a subset of the solution manifold, YN ⊂ Y, which is of relatively small dimension, and that y(µ) can be well approximated by yN(µ) ∈ YN for many values of µ [29]. MOR methods also seek to bound the approximation error such that ||y(µ) − yN(µ)|| is bounded above by some desired tolerance [29].

RB methods are one type of reduced-order method [49]. RB methods define the space YN in terms of a finite number of full-order solutions corresponding to carefully chosen parameter values µ, such that YN is a RB approximation space spanned by the set of full-order solutions {y(µj)}, j = 1, ..., N, the RB. The computations involved with RB methods are divided into two steps: an offline step and an online step. In the offline step, a full-order model is solved repeatedly for a specified set of Nmax parameters, called the training set, to obtain a set of N full-order solutions that are used to construct a RB that, if constructed well, accurately represents the entire solution manifold. The RB is represented by the N columns of a RB matrix, Q. The offline step may be expensive but is only done once, and the resulting basis reduces the costs of obtaining approximate solutions in the online step. In the online step, where full-order solutions are desired for many parameters, solutions to the full-order model are approximated by solving reduced-order models that are projected onto the space spanned by the RB constructed in the offline step, YN. The problem can be projected in different ways, including by Galerkin projection and Petrov-Galerkin projection; when the system is specified by Galerkin projection, the residual b − G(µ)yr(µ) is orthogonal to the columns of Q.
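The offline/online split with a Galerkin-projected reduced system can be sketched as follows. This is a toy illustration under the same assumed affine dependence G(µ) = G0 + µG1 as above, with the basis Q built here simply by orthonormalizing a handful of snapshots; names and sizes are illustrative, not the dissertation's construction.

```python
import numpy as np

# Toy affine model G(mu) = G0 + mu*G1 (random stand-ins).
rng = np.random.default_rng(1)
n = 300
A = rng.standard_normal((n, n))
G0 = A @ A.T + n * np.eye(n)
G1 = np.diag(rng.uniform(1.0, 2.0, n))
b = rng.standard_normal(n)

def solve_full(mu):
    return np.linalg.solve(G0 + mu * G1, b)

# Offline: a few full-order snapshots, orthonormalized into the RB matrix Q.
snapshots = np.column_stack([solve_full(mu) for mu in (0.1, 0.4, 0.7, 1.0)])
Q, _ = np.linalg.qr(snapshots)            # the N columns of Q span Y_N

# Online: for a new mu, solve the small system Q^T G(mu) Q y_r = Q^T b.
def solve_reduced(mu):
    G = G0 + mu * G1
    y_r = np.linalg.solve(Q.T @ G @ Q, Q.T @ b)   # N x N solve, N << N*
    return Q @ y_r                                # lift back to full order

y_rb = solve_reduced(0.55)
# Galerkin property: the residual b - G(mu) y_rb is orthogonal to range(Q).
```

Note that the online solve involves only an N × N system, which is the source of the computational savings.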
We will discuss both types of projection in detail in Section 2.3.2. The projected problem is referred to as a reduced-order model. The online step is expected to be inexpensive because N ≪ N∗.

Reduced-order modeling has been used to solve discrete models subject to parametrization in various settings. Reduced-order modeling techniques have been used in reverse time migration for imaging in seismic data processing [5] and in energy performance forecasting simulations for building modeling [31]. Algorithms involving MOR techniques have been developed to solve models of anomalous diffusion involving nonlocal equations [74]. Artificial neural network models that utilize reduced-order modeling methodology are developed to analyze and solve nonlinear wave-propagation prediction models in [63]. In the astronautics field, reduced-order models have been investigated to simulate nonlinear structural dynamic responses [75]. The approach has been shown to be highly effective for solving a variety of PDEs, both linear and nonlinear [37, 54]. We focus on solving linear problems in this thesis.

1.2.2 Constrained Problems in the PDE Setting

Constrained models that involve PDEs can be used to describe numerous physical phenomena. These models have the form

optimize J(u, f)  (1.2)

subject to

c(u, f) = 0,  (1.3)

where functionals J(u, f) are optimized with respect to constraints c(u, f). In PDE-constrained optimization models, or PDE-control problems, cost functionals J(u, f) are optimized with respect to constraints c(u, f) that are PDEs. These problems arise in several fields including fluid and solid mechanics [1, 10]. PDE-constrained optimization problems are used to represent:

• Elastic/inelastic deformation: models the behavior of bodies subject to forces.

• Flow simulations: model fluid or air flow around a body.

• Reactive flow simulations: model the flow of liquids or gases that react with each other.

• Biomedical imaging: models the propagation of radiation through a body.
• Inverse conductivity problem: models direct electrical current injected into some media, where the potential field in or around the media is recorded. For example, in a direct current resistivity survey, currents are injected into the ground and potentials are measured above the ground or in boreholes. This allows the conductivity of the media (or the ground) to be estimated. Here, the diffusion equation is the PDE constraint.

• Double glazing problem: models the temperature in a cavity with recirculating wind. Here, the convection-diffusion equation is the PDE constraint.

The models for these physical phenomena are complex and beyond the scope of this thesis. Troeltzsch provides an analysis of several different PDE-control models in [71]. A significant amount of research has been done in recent years on PDE-control models involving elliptic equations, including [6, 34, 38, 56], because these involve some of the simplest PDE constraints. Less work has been done on more complex PDE-control problems with hyperbolic and parabolic constraints, but there is still a considerable amount of research available on efficiently solving these problems, including [44, 45, 46, 53, 58]. The PDE-control models we study in this thesis are steady-state models; the time-dependent versions of these models can be considered as well, as is done in [36, 51]. The two PDE-control benchmark problems we study in this thesis are a parametrized diffusion control problem and a parametrized convection-diffusion control problem, where the PDE constraints c(u, f) are the diffusion and convection-diffusion equations, respectively.

The second class of problems we study for this thesis is constrained PDEs, PDE models where the constraint is built into the PDE. These PDEs also have the form (1.2)-(1.3), where the constraint (1.3) is a part of the PDE.
Constrained PDEs are used to represent models of the flow of viscous incompressible fluids such as:

• Inviscid free-surface flow: flow of fluid under a free surface, such as the flow of water under air.

• Creeping flow: fluid flow that is dominated by viscosity. Creeping flow can be used for models of blood flow through capillaries and models of the flow of heavy oils. Creeping flow is also called Stokes flow and can be modeled by the Stokes equations.

The constrained PDE benchmark problems we study here are various versions of the parametrized Stokes equations. In the Stokes equations,

−∇²u⃗ + ∇p = 0⃗,  (1.4)
∇ · u⃗ = 0,  (1.5)

a model of incompressible viscous flow that arises in computational fluid dynamics, the first equation is called the momentum equation and the second equation is called the incompressibility constraint. The incompressibility constraint enforces the law of conservation of mass. The variable u⃗ represents the velocity of the fluid while p represents the pressure. The incompressibility constraint is on the velocity field, does not involve the pressure variable, and is built directly into the PDE itself. The Stokes equations can model viscous fluids like oil and blood and are a limiting case of the Navier-Stokes equations. The Navier-Stokes equations involve two additional terms in the momentum equation defined for the Stokes equations above, a convection term and a forcing term. (The Stokes equations can also involve a forcing term.) The Navier-Stokes equations are a more general model than the Stokes equations for modeling incompressible viscous fluids and can be used to model media like air and water. Computational methods for solving both the Stokes equations and the Navier-Stokes equations have been studied in great detail. An in-depth discussion of both of these models, as well as methods to solve discretized versions of them, is available in [24].
Similar to the PDE-control models discussed previously, the Stokes and Navier-Stokes equations have been studied in both the steady-state and time-dependent settings, including in [23, 28] and [64, 68], respectively. For the purposes of this research, we focus solely on the steady-state models of the two types of constrained benchmark problems we study in this thesis: control problems with parametrized PDE constraints and parametrized versions of the Stokes equations.

Discretization of constrained models leads to saddle-point systems, and the solutions to these models are saddle points. A saddle point is a stationary point at which the function increases in some directions and decreases in others, so it is neither a local minimum nor a local maximum. Saddle-point systems occur in the constrained PDEs and PDE-control models discussed above; they arise in many other scenarios as well, including image processing [78], fibre-optic networks [72], linear elasticity [2], and mixed finite element method (FEM) formulations of control problems subject to second-order PDEs [27]. A substantial amount of research has been done on saddle-point problems, and several techniques have been developed for solving these systems [7]. In the deterministic setting, large discrete saddle-point systems have been solved using methods that include Schur complement approximation [52], preconditioned iterative Krylov subspace methods [53, 56, 57], duality-based approaches [66], and multilevel methods [25]. When these constrained problems are solved subject to parametrization, the computationally expensive deterministic problem must be solved many times for various parameter values. Parametrized models have been solved using methods that include collocation [43, 70] and reduced-order modeling [22, 25, 39, 41], the method studied in this thesis.
1.2.3 Reduced-Order Modeling for Parametrized Constrained Problems

When reduced-order methods are used to address the computational expense associated with constrained problems, there is an added level of difficulty in constructing the ROMs. Deterministic versions of discretized constrained models are of saddle-point form and face an issue of lack of stability. Saddle-point systems must satisfy an inf-sup condition in order to be well-posed and solvable. When parametrization is added and ROMs are constructed in a way that preserves the saddle-point form of the system, inf-sup stability must be considered in the design of the reduced-order method. In RB methods for parametrized constrained problems, augmentation of the RB has been shown to be an effective way of preserving inf-sup stability of the ROM [23, 48]. However, augmenting the RB increases the dimension of the RB approximation space and reduces the savings associated with solving the ROM rather than the full-order system.

This dissertation focuses on effectively solving constrained parametrized PDE models and parametrized PDE-constrained optimization problems using ROM techniques. Reduced-order methods have already been shown to be effective at solving problems of these types. In [23, 40, 55, 59, 61, 62], viscous flow models represented by the Stokes and Navier-Stokes equations are solved using reduced-order methods. In [77], reduced-order methods are used to handle models of parametric optimal flow control in coronary artery bypass grafts. In [76], reduced-order modeling techniques for PDE-constrained optimization problems are applied to subsonic aerodynamic shape optimization models. PDE-control models constrained by parametrized elliptic PDEs like the diffusion and convection-diffusion equations are addressed using RB methods in [3, 17, 41, 48, 60]. PDE-control models constrained by parametrized Stokes equations are solved using reduced-order modeling in [18].
1.3 Summary of Main Contributions and Results

In this section, we present a summary of the main contributions and results of this thesis. Our main contribution is a new way to construct a RB for reduced-order models, a stacked RB, which results in a reduced system that is not of saddle-point form. Because the system does not have saddle-point structure, inf-sup stability is not a concern for this model, and the necessity to augment the RB approximation spaces is eliminated. Although this is an advantage of the approach, there are also stability issues associated with it, and we show that these can be addressed using classic ideas of penalization [33]. Numerical results demonstrate that there is still an issue of stability of the offline-stage ROMs produced by the proposed basis for both constrained problem classes, which must be treated in the constrained PDE setting. This stability issue can be handled effectively by penalization, or additional stabilization of the constrained operator, which is discussed further in Section 5.3.1. We demonstrate that the order of the error incurred due to additional stabilization is no worse than the order of the finite element approximation error incurred due to discretization. We prove that at the end of the offline stage, the full proposed RB of dimension N produces ROMs that are nonsingular for all parameter vectors in the training set in both the parametrized constrained PDE and PDE-control settings. We establish the accuracy and efficiency of the proposed method by comparing its behavior for solving parametrized constrained PDEs and parametrized PDE-control problems to the behavior of the two standard approaches of reduced-order modeling we study here, augmentation by supremizer and augmentation by aggregation.
We show that the proposed method results in reduced systems of smaller dimension that are less complex to construct and that produce reduced solutions that approximate solutions of the full-order models well. We assert that the proposed method is the superior method. This research involves three major parts: (1) we present a comparative study of two common methods of stabilizing reduced-order models, through use of the supremizer and through aggregation, and compare the accuracy, efficiency, and computational costs associated with them for solving the parametrized PDE-control problem; (2) we propose a new stacked approach of constructing the RB matrix that does not require augmentation for constrained problems; and (3) we demonstrate the effectiveness of the stacked RB matrix and compare it to the two augmented block-diagonal RB matrices for the benchmark parametrized PDE and PDE-control problems.

We start by presenting a comparison of the two standard approaches for constructing ROMs we study in this thesis, the supremizer method of augmentation and the aggregation method of augmentation, and determine which method is superior for solving two PDE-control benchmark problems, a parametrized diffusion control problem and a parametrized convection-diffusion control problem. In this comparative study, we found that aggregation is the superior method of stabilization: it produces a basis of smaller dimension and yields approximations to the solution that are closer to the direct-solve solution in the sense of the L2-norm. Following the comparative study, we present our new approach to constructing the RB matrix, the stacked RB. Numerical results reveal that for the two parametrized PDE-control benchmark problems studied, the stacked RB produces a stable system and a basis that is as accurate as, or nearly as accurate as, the two other common choices of reduced basis construction examined.
The RB augmented by the method of aggregation requires a minimal number of full-order solutions N to form a complete basis, such that N is considerably smaller than the number of full-order solutions required for both the supremizer and stacked RBs. Regardless, due to the lack of augmentation, the stacked RB produces a basis of smaller dimension. For constrained PDEs like the two parametrized Stokes benchmark problems studied in this thesis, a Poiseuille flow model and a model of flow over a backward-facing step, the effectiveness of the stacked RB in the Galerkin setting is dependent on penalization of the constrained operator. Testing suggests that when penalization is added, the stacked method is more effective than the supremizer method. The stacked RB is of smaller dimension and thus reduces the computational costs associated with building the RB by the supremizer method while producing approximate solutions of comparable accuracy. Penalization is not required in the Petrov-Galerkin setting, which we demonstrate through numerical results, and the stacked RB is superior for similar reasons in this setting as well. The proposed method is more generally applicable than the aggregation method, which has superior qualities in the PDE-control setting, because the stacked RB is effective for both parametrized PDE-control problems and parametrized constrained PDEs and produces systems of smaller dimension than the other choices of RBs in both settings.

The rest of this dissertation is organized as follows. In Chapter 2, we establish some preliminary concepts that will be needed throughout this thesis. The operators and spaces used to examine the continuous problems we study are defined, along with the functions and spaces used to discretize the models. We provide a detailed description of the reduced-order modeling procedure for both problem classes and a greedy algorithm for selecting the RB.
In Chapter 3, we present a comparison of two standard methods of stabilizing ROMs for two benchmark parametrized PDE-control problems, a parametrized diffusion control problem and a parametrized convection-diffusion control problem. In Chapter 4, we introduce our newly proposed method of building a reduced basis, the stacked reduced basis, and demonstrate the nonsingularity of the resulting reduced systems for all parameters in the training set. We extend the comparative study from Chapter 3 to show that our proposed RB is competitive with the two standard RBs compared in that chapter for parametrized PDE-control problems. We continue our discussion of the stacked RB in Chapter 5 and present results to show it is a feasible choice of RB for ROMs for constrained PDEs. We establish the stabilization necessary to make the proposed RB effective in this setting and compare its performance to one of the standard RBs for solving two benchmark problems, parametrized models of the Stokes equations. In Chapter 6, we conclude with a summary of our methodological advancements and provide insight on future work.

Chapter 2: Preliminaries

In this chapter we summarize some of the main ideas and methodologies used in this thesis. We start by defining the operators and spaces needed for both the continuous and discrete deterministic models of the two problem classes we will study. We follow this with an outline of the reduced-order modeling procedure that includes a greedy algorithm for building a RB. Next, we discuss the error indicator used for the greedy algorithm and the two types of projection we implement in solving the reduced system. We conclude this chapter with a discussion of the methods we will use in this thesis to assess the accuracy and efficiency of a RB. The majority of the material in this chapter is adapted from our manuscript under review and manuscript in preparation [19, 20].
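The greedy construction of a RB mentioned above can be sketched as follows. This toy loop uses the true relative projection error over the training set as its selection indicator; a practical greedy algorithm, including the one discussed in this thesis, uses a cheap residual-based error indicator precisely to avoid the full-order solves performed here. The model and all names are illustrative stand-ins.

```python
import numpy as np

# Toy affine model G(mu) = G0 + mu*G1 (random stand-ins, not a PDE).
rng = np.random.default_rng(2)
n = 200
A = rng.standard_normal((n, n))
G0 = A @ A.T + n * np.eye(n)
G1 = np.diag(rng.uniform(1.0, 3.0, n))
b = rng.standard_normal(n)

training_set = np.linspace(0.1, 1.0, 40)   # the N_max training parameters
# Toy only: precompute all full-order solutions so the "indicator" below
# can be the true projection error. A real greedy avoids these solves.
ys = [np.linalg.solve(G0 + mu * G1, b) for mu in training_set]

tol = 1e-8                                  # stopping tolerance
Q = np.zeros((n, 0))                        # empty basis to start

for _ in range(20):                         # cap on greedy iterations
    # Relative projection error of each training solution onto range(Q).
    errs = [np.linalg.norm(y - Q @ (Q.T @ y)) / np.linalg.norm(y) for y in ys]
    worst = int(np.argmax(errs))
    if errs[worst] < tol:                   # all parameters well approximated
        break
    # Enrich the basis with the worst-approximated snapshot; re-orthonormalize.
    Q, _ = np.linalg.qr(np.hstack([Q, ys[worst][:, None]]))
```

The loop terminates when the maximum indicator over the training set falls below the stopping tolerance, leaving Q with N orthonormal columns.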
2.1 Parametrized PDE-Constrained Optimization Problems

We study two benchmark parametrized PDE-control problems, a diffusion control problem and a convection-diffusion control problem. For the main discussion of this problem class, we will focus on the parametrized diffusion control problem with Dirichlet boundary conditions. In the next two sections, we will define the operators, matrix system, and spaces associated with the deterministic diffusion control problem, and review the inf-sup condition for the deterministic problem. Consider the parametrized diffusion control problem,

min_{u,f} (1/2)||u(x, µ) − û(x, µ)||²_{L²(Ω)} + β||f(x, µ)||²_{L²(Ω)}  (2.1)

subject to

−∇ · (σ(x, µ)∇u(x, µ)) = f(x, µ) in Ω × Γ,  (2.2)

such that

u(x, µ) = g(x) on ∂ΩD × Γ,  (2.3)
σ(x, µ) ∂u(x, µ)/∂n = 0 on ∂ΩN × Γ.  (2.4)

Here, Ω ⊂ R^d is the spatial domain, µ is a vector of parameters, and σ(x, µ) is a parameter-dependent diffusion coefficient. For each µ, û(x, µ) is a desired state and we seek a state, u(x, µ), as close to û(x, µ) as possible (in the L²-norm sense), and the control, f(x, µ), can vary to achieve this. The regularization term (the second term in the cost functional) is added to make the problem well-posed [35, 56]. This term involves a regularization parameter, β, based on Tikhonov regularization theory [73], typically set to β ≈ 10⁻². The solution variables are the state and control variables, u(x, µ) and f(x, µ) respectively, where û(x, µ) and g(x) are given. We will consider a two-dimensional spatial domain Ω divided into ND horizontal strips or subdomains, with the kth subdomain denoted Ωk. The diffusion coefficient is taken to be piecewise constant on each subdomain, σ(x, µ)|Ωk = µk, k = 1 : ND, giving a parameter space of dimension ND depending on the ND-dimensional parameter vector µ = [µ1, ..., µND]ᵀ.
2.1.1 Definition of Operators and Spaces

In this section, we introduce the operators and spaces for the parameterized diffusion control problem with fixed µ and present the structure for solving a problem of this type at the continuous level. The goal is to find state and control solutions, u(x, µ) and f(x, µ) respectively, for some desired state û(x, µ) and Dirichlet boundary data g(x). The state space, U, is a Hilbert space for the state function u(x, µ), equipped with inner product and induced norm

(u1, u2)_U = ∫_Ω ∇u1(x, µ) · ∇u2(x, µ) dx,  ||u||_U = (u, u)_U^{1/2}.  (2.5)

The solution space for u(x, µ), H¹_E, has the Dirichlet condition built into its definition, and functions in the test space H¹_{E0} are zero on the Dirichlet part of the boundary, ∂ΩD. The control space, F, for the control function f(x, µ), is another Hilbert space equipped with inner product and induced norm

(f1, f2)_F = ∫_Ω f1(x, µ)f2(x, µ) dx,  ||f||_F = (f, f)_F^{1/2},

i.e., F = L²(Ω). A standard approach to solving constrained optimization problems is to apply first-order conditions for stationarity to the Lagrangian functional

L := (1/2)∫_Ω (u(x, µ) − û(x, µ))² dx + β∫_Ω f(x, µ)² dx + λ(∫_Ω σ(x, µ)∇u(x, µ) · ∇v(x, µ) dx − ∫_Ω f(x, µ)v(x, µ) dx).  (2.6)

This gives rise to an adjoint/Lagrange multiplier function, λ(x, µ), and the adjoint/Lagrange multiplier space, Q. The Lagrange multiplier must satisfy

−∇²λ(x, µ) = −u(x, µ) + û(x, µ)

together with the same boundary conditions imposed on the state u(x, µ), so that λ(x, µ) ∈ H¹_E as well [24]. Thus, the Hilbert space Q is equipped with the inner product and induced norm (2.5), and Q and U are equivalent. It will be convenient to refer to certain product spaces for the purpose of defining operators and establishing stability. These include the product between the control and state spaces, X = F × U, with elements x̄ = (f(x, µ), u(x, µ)) ∈ X, and the product spaces U × Q, F × Q, and X × Q.
For given µ, let a(·, ·; µ) : U × Q → R be the linear elliptic operator

a(z, q; µ) = ∫_Ω σ(x, µ)∇z(x, µ) · ∇q(x, µ) dx.

We assume that the operator a is coercive, i.e., there exists α0 > 0 such that

α(µ) = inf_{z∈U} a(z, z; µ)/||z||²_U ≥ α0 for all µ.

The action of the control f(x, µ) is represented by the operator c(·, ·; µ) : F × Q → R,

c(v, q; µ) = ∫_Ω v(x, µ)q(x, µ) dx.

The weak formulation of (2.1)-(2.4) is to find the minimizers u(·, µ) ∈ H¹_E(Ω), f(·, µ) ∈ F as in (2.1), satisfying (2.3)-(2.4), subject to the weak form of the constraint

∫_Ω σ(x, µ)∇u(x, µ) · ∇v(x, µ) dx = ∫_Ω f(x, µ)v(x, µ) dx  ∀v ∈ H¹_{E0}(Ω).  (2.7)

First-order stationarity of the Lagrangian (2.6) results in a saddle-point system. It is well known that to guarantee the existence, uniqueness, and stability of solutions, saddle-point systems of this form must satisfy an inf-sup condition. Specifically, for the bilinear form B(·, ·; µ) : X × Q → R, given by

B(w̄, q; µ) = a(z, q; µ) − c(v, q; µ),

where w̄ = (v, z)ᵀ ∈ X = F × U, it must hold that there exists β0 > 0 such that

inf_{q∈Q} sup_{w̄∈X} B(w̄, q; µ)/(||w̄||_X ||q||_Q) ≥ β0.  (2.8)

It is shown in [48] that this condition holds for the control problem (2.1)-(2.4).

2.1.2 Discrete Forms and Matrix Systems

For the discrete version of the parametrized control problem, let S^h_0 be a finite-dimensional subspace of H¹_{E0}, and let S^h_E augment S^h_0 with a finite set of basis functions used to impose Dirichlet boundary conditions. The Galerkin finite element method is chosen for discretization of the optimal control problem. The weak formulation of the constraint is then to find u_h(x, µ) ∈ S^h_E such that

∫_Ω σ(x, µ)∇u_h(x, µ) · ∇v_h(x, µ) dx = ∫_Ω f_h(x, µ)v_h(x, µ) dx  ∀v_h ∈ S^h_0 ⊂ H¹_{E0}(Ω).  (2.9)

Here, S^h_E is the solution space and S^h_0 is a vector space containing test functions.
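The step from the Lagrangian (2.6) to the saddle-point system can be spelled out. The following is our own sketch of the three first-order stationarity conditions, stated weakly and consistent with the adjoint equation given above; the test functions w, z, q are generic elements of the indicated spaces.

```latex
% Setting the Gateaux derivatives of the Lagrangian (2.6) with respect to
% f, u, and lambda to zero gives three coupled weak equations:
\begin{aligned}
2\beta\,(f, w)_{L^2(\Omega)} - (\lambda, w)_{L^2(\Omega)} &= 0
  && \forall\, w \in F, && \text{(gradient equation)}\\
(u - \hat{u}, z)_{L^2(\Omega)} + a(z, \lambda; \mu) &= 0
  && \forall\, z \in H^1_{E_0}(\Omega), && \text{(adjoint equation)}\\
a(u, q; \mu) - c(f, q; \mu) &= 0
  && \forall\, q \in Q. && \text{(state equation)}
\end{aligned}
```

Upon discretization, these three equations become the three block rows of the coupled matrix system derived in Section 2.1.2.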
Let the basis for the test functions be denoted {ϕ1, ..., ϕn}, and assume this basis is extended by ϕ_{n+1}, ..., ϕ_{n+∂n}, which ensures that the Dirichlet boundary conditions hold at certain points on ∂ΩD; see [12, pp. 195ff.] for additional discussion of this. The discrete analog of the weak version of the minimization problem (2.1)-(2.4) is as follows:

min_{u_h,f_h} (1/2)||u_h(x, µ) − û_h(x, µ)||²_2 + β||f_h(x, µ)||²_2  (2.10)

subject to

∫_Ω σ(x, µ)∇u_h(x, µ) · ∇v_h(x, µ) dx = ∫_Ω f_h(x, µ)v_h(x, µ) dx  (2.11)

and subject to the boundary conditions for u_h as in (2.3)-(2.4). For discretization, we use equal-order Q1 (piecewise bilinear) finite elements for all three (state, control, and Lagrange multiplier) spaces, which is shown to be div-stable in [48]. In the state space, the finite element approximation u_h(x, µ) of u(x, µ) is represented in terms of basis functions,

u_h(x, µ) = Σ_{j=1}^{n} u_{µ,j}ϕ_j + Σ_{j=n+1}^{n+∂n} u_{µ,j}ϕ_j,

such that u_h(x, µ) is determined uniquely by the coefficient vector u(µ) = u_µ = (u_{µ,1}, ..., u_{µ,n+∂n})ᵀ. The coefficient vector f(µ) = f_µ = (f_{µ,1}, ..., f_{µ,n})ᵀ associated with the approximation f_h(x, µ) of f(x, µ) is defined similarly. The cost functional can be rewritten in matrix notation as

min_{u_µ,f_µ} (1/2)u_µᵀ M u_µ − u_µᵀ b_µ + β f_µᵀ M f_µ,

where b(µ) = b_µ = {∫ û(x, µ)ϕ_i}_{i=1,...,n} and the mass matrix, M, is defined as M = {∫ ϕ_i ϕ_j}_{i,j=1,...,n}. The constraint can be expressed as

K(µ)u_µ = M f_µ + d_µ,  (2.12)

such that K(µ) = [k^µ_{ij}] with

k^µ_{ij} = ∫_Ω σ(x, µ)∇ϕ_j · ∇ϕ_i = Σ_q µ_q ∫_{Ω_q} ∇ϕ_j · ∇ϕ_i.

Note that d(µ) = d_µ contains terms from the boundary values of the discretized u(x, µ), such that d_µ = {d_{µ,i}}_{i=1,...,n} with

d_{µ,i} = ∫_Ω g ϕ_i dΩ − Σ_{j=n+1}^{n+∂n} u_{µ,j} ∫_Ω σ(x, µ)∇ϕ_i · ∇ϕ_j dΩ.

The discrete Lagrangian is then

L := (1/2)u_µᵀ M u_µ − u_µᵀ b_µ + β f_µᵀ M f_µ + λ_µᵀ (K(µ)u_µ − M f_µ − d_µ),

where λ_µ is the vector of Lagrange multipliers associated with the finite element approximation λ_h(x, µ) of λ(x, µ).
Applying first-order conditions for stationarity gives a set of three coupled equations, a block system

[ 2βM    0     −M   ] [ f_µ ]   [ 0   ]
[  0     M    K(µ)ᵀ ] [ u_µ ] = [ b_µ ] .  (2.13)
[ −M    K(µ)   0    ] [ λ_µ ]   [ d_µ ]

The matrix system (2.13) will also be denoted G(µ)y(µ) = b(µ) for convenience. The block 3 × 3 coefficient matrix G(µ) can also be represented in compact form,

G(µ) = [ A     B(µ)ᵀ ]
       [ B(µ)   0    ] ,  (2.14)

where B(µ) = [−M, K(µ)]. The discrete spaces are defined as follows. The state space, U_h, has inner product

⟨u_h(x, µ), v_h(x, µ)⟩_{U_h} = ⟨Ku_µ, v_µ⟩ = v_µᵀ K u_µ

and induced norm ||u_h||_{U_h} = ⟨Ku_µ, u_µ⟩^{1/2}, where K is defined like K(µ) above. The control space, F_h, is equipped with inner product

⟨f_h(x, µ), v_h(x, µ)⟩_{F_h} = ⟨Mf_µ, v_µ⟩ = v_µᵀ M f_µ

and induced norm ||f_h(x, µ)||_{F_h} = ⟨Mf_µ, f_µ⟩^{1/2}. The Lagrange multiplier space, Q_h, is equipped with inner product

⟨q_h(x, µ), p_h(x, µ)⟩_{Q_h} = ⟨Kq_µ, p_µ⟩ = p_µᵀ K q_µ

and induced norm ||λ_h(x, µ)||_{Q_h} = ⟨Kλ_µ, λ_µ⟩^{1/2}. The state-control space, X_h = F_h × U_h, has elements x̄_h with associated coefficient vector x̄(µ) = [f_µ; u_µ]. Because the matrix system (2.13) is of saddle-point form (2.14), satisfaction of the inf-sup condition is required to guarantee stability. The discrete inf-sup condition requires that there exists β0 > 0 such that

min_{q∈Q_h} max_{w̄=(v,z)∈X_h} (⟨K(µ)z, q⟩ − ⟨Mv, q⟩)/(||w̄||_X ||q||_Q) ≥ β0,

as proven in [48]. Recall also [24] that this stability bound is equivalent to the bound on the Rayleigh quotient

⟨B(µ)A⁻¹B(µ)ᵀq, q⟩^{1/2} / ⟨Kq, q⟩^{1/2} ≥ β0.  (2.15)

2.2 Parametrized Constrained PDEs

We will consider the parameterized Stokes equations as an example of a parametrized constrained PDE. We study two parametrized Stokes benchmark problems with different flows: a Poiseuille flow model and a model of flow over a backward-facing step. We will provide the specifications for these benchmark problems in Section 5.3.
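As a sanity check on the structure of (2.13) and (2.15), the following toy script assembles the block system from random symmetric positive definite stand-ins for M and K (no finite element assembly is performed, and the data b, d are random), solves it, and estimates the inf-sup constant from the Rayleigh-quotient bound.

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta = 40, 1e-2
M = np.diag(rng.uniform(0.5, 1.5, n))          # SPD stand-in for the mass matrix
R = rng.standard_normal((n, n))
K = R @ R.T + n * np.eye(n)                    # SPD stand-in for K(mu)
b = rng.standard_normal(n)                     # stand-in for b_mu
d = rng.standard_normal(n)                     # stand-in for d_mu

Z = np.zeros((n, n))
G = np.block([[2 * beta * M, Z, -M],
              [Z, M, K.T],
              [-M, K, Z]])                     # the 3x3 block matrix of (2.13)
rhs = np.concatenate([np.zeros(n), b, d])

f, u, lam = np.split(np.linalg.solve(G, rhs), 3)
# The blocks satisfy the coupled optimality equations, e.g. K u = M f + d.

# Rayleigh-quotient bound (2.15): beta0^2 is the smallest eigenvalue
# of K^{-1} B A^{-1} B^T, with A = blkdiag(2*beta*M, M), B = [-M, K].
A = np.block([[2 * beta * M, Z], [Z, M]])
B = np.hstack([-M, K])
S = B @ np.linalg.solve(A, B.T)
beta0 = np.sqrt(np.min(np.real(np.linalg.eigvals(np.linalg.solve(K, S)))))
```

Since A is SPD and B has full row rank here, the computed beta0 is strictly positive, which is the discrete inf-sup property.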
In the two succeeding sections, we will define the operators and spaces needed to discuss the Stokes equations in both continuous and discrete settings. The aim when solving the Stokes equations is to find $\vec{v}(x,\mu)$ and $p(x,\mu)$ such that
\[
-\nabla\cdot(\sigma(x,\mu)\nabla\vec{v}(x,\mu)) + \nabla p(x,\mu) = \vec{0} \quad \text{in } \Omega\times\Gamma \tag{2.16}
\]
\[
\nabla\cdot\vec{v}(x,\mu) = 0 \quad \text{in } \Omega\times\Gamma \tag{2.17}
\]
a special case of the Navier-Stokes equations and a model of viscous flow, where $\Omega\subseteq\mathbb{R}^2$ is the spatial domain. The probability space $(\Gamma,\mathcal{F},P)$ is equipped with sample space $\Gamma$, $\sigma$-algebra $\mathcal{F}$ and probability measure $P$. The boundary of the spatial domain is $\partial\Omega = \partial\Omega_D\cup\partial\Omega_N$, where Dirichlet and Neumann boundary conditions are defined on $\partial\Omega_D$ and $\partial\Omega_N$, respectively:
\[
\vec{v}(x,\mu) = \vec{w}(x) \quad \text{on } \partial\Omega_D, \tag{2.18}
\]
\[
\frac{\partial\vec{v}(x,\mu)}{\partial n(x)} - \vec{n}(x)\,p(x,\mu) = \vec{s}(x) \quad \text{on } \partial\Omega_N, \tag{2.19}
\]
where $\vec{n}$ is the outward-pointing normal to the boundary. We will consider a spatial domain $\Omega$ divided into $N_D$ horizontal strips or subdomains, with the $k$th subdomain denoted $\Omega_k$. The coefficient $\sigma(x,\mu):\Omega\times\Gamma\to\mathbb{R}$ is a random field and is taken to be piecewise constant on each subdomain, $\sigma(x,\mu)|_{\Omega_k}=\mu_k$, $k=1:N_D$, giving a parameter space of dimension $N_D$ depending on the random vector $\mu=[\mu_1,\ldots,\mu_{N_D}]^T$. The velocity function $\vec{v}(x,\mu)$ is vector-valued and the pressure function $p(x,\mu)$ is scalar-valued. Models similar to the one presented here have been useful in modeling the viscosity in multiphase flows [23, 42, 50, 69].

2.2.1 Definition of Operators and Spaces

For the purpose of this discussion, we focus briefly on the parametrized Stokes equations with strictly Dirichlet boundary conditions and fixed $\mu$. We introduce operators and spaces for solving this problem at the continuous level. The goal is to find the velocity solution $\vec{v}(x,\mu)$, satisfying the Dirichlet boundary condition $g(x)$, together with the pressure solution $p(x,\mu)$.
The velocity space $V$ is a Hilbert space of functions $\vec{v}(x,\mu)$ equipped with inner product and norm
\[
(\vec{v}_1,\vec{v}_2)_V = \int_\Omega \vec{v}_1(x,\mu)\cdot\vec{v}_2(x,\mu) + \nabla\vec{v}_1(x,\mu):\nabla\vec{v}_2(x,\mu)\,dx, \qquad \|\vec{v}\|_V = (\vec{v},\vec{v})_V^{1/2}.
\]
Here, $\nabla\vec{v}:\nabla\vec{u}$ represents the componentwise scalar product $\nabla v_x\cdot\nabla u_x + \nabla v_y\cdot\nabla u_y$. The solution space for $\vec{v}(x,\mu)$, $H^1_E$, has the Dirichlet condition built into its definition, and functions in the test space $H^1_{E_0}$ are zero on the Dirichlet part of the boundary. The pressure space $P$ for the pressure function $p(x,\mu)$ is the Hilbert space $L^2(\Omega)$ equipped with inner product and induced norm
\[
(p_1,p_2)_P = \int_\Omega p_1(x,\mu)\,p_2(x,\mu)\,dx, \qquad \|p\|_P = (p,p)_P^{1/2}.
\]
The weak formulation of (2.16)-(2.18) is to find $\vec{v}(x,\mu)\in V$ and $p(x,\mu)\in P$ such that
\[
\int_\Omega \sigma(x,\mu)\,\nabla\vec{v}(x,\mu):\nabla\vec{u}(x,\mu)\,dx - \int_\Omega p(x,\mu)\,\nabla\cdot\vec{u}(x,\mu)\,dx = \int_{\partial\Omega_N}\vec{s}(x)\cdot\vec{u}(x,\mu) \quad \text{for all } \vec{u}(x,\mu)\in H^1_{E_0} \tag{2.20-2.21}
\]
\[
\int_\Omega q(x,\mu)\,\nabla\cdot\vec{v}(x,\mu) = 0 \quad \text{for all } q(x,\mu)\in L^2(\Omega). \tag{2.22}
\]
These equations are a saddle-point problem that can be identified with saddle-point optimality conditions [13]. To ensure the existence and uniqueness of the pressure solution, the following inf-sup stability condition is necessary: there exists $\beta_0>0$ such that
\[
\inf_{q(x,\mu)\in P}\ \sup_{\vec{u}(x,\mu)\in V} \frac{|\langle q(x,\mu),\nabla\cdot\vec{u}(x,\mu)\rangle|}{\|\vec{u}(x,\mu)\|_V\,\|q(x,\mu)\|_P} \ \ge\ \beta_0. \tag{2.23}
\]
It is established in [11, 12] that this condition holds for (2.16)-(2.19).

2.2.2 Discrete Forms and Matrix Systems

For the discrete version of the parametrized Stokes equations, let $X^h_0$ be a finite-dimensional subspace of $H^1_{E_0}$, let $V_h = X^h_E$ be the augmentation of $X^h_0$ by a finite set of basis functions used to impose the Dirichlet boundary conditions, and similarly let $P_h\subset L^2(\Omega)$.
The Galerkin finite element method gives the discrete analog of the weak version of the parametrized PDE: find $\vec{v}_h(x,\mu)\in V_h$ and $p_h(x,\mu)\in P_h$ such that
\[
\int_\Omega \sigma(x,\mu)\,\nabla\vec{v}_h(x,\mu):\nabla\vec{u}_h(x,\mu)\,dx - \int_\Omega p_h(x,\mu)\,\nabla\cdot\vec{u}_h(x,\mu)\,dx = \int_{\partial\Omega_N}\vec{s}(x)\cdot\vec{u}_h(x,\mu) \quad \text{for all } \vec{u}_h(x,\mu)\in X^h_0 \tag{2.24-2.25}
\]
\[
\int_\Omega q_h(x,\mu)\,\nabla\cdot\vec{v}_h(x,\mu) = 0 \quad \text{for all } q_h(x,\mu)\in P_h. \tag{2.26}
\]
Let the basis for the vector-valued velocity test functions be denoted $\{\vec{\phi}_1,\ldots,\vec{\phi}_{n_v}\}$, and assume this basis is extended by $\vec{\phi}_{n_v+1},\ldots,\vec{\phi}_{n_v+\partial n}$, which ensures that the Dirichlet boundary conditions hold at certain points on $\partial\Omega_D$; see [12, pp. 195ff.] for additional discussion of this. For a matrix representation of (2.24)-(2.26), let
\[
\vec{v}_h(x,\mu) = \sum_{j=1}^{n_v} v_{\mu,j}\vec{\phi}_j + \sum_{j=n_v+1}^{n_v+\partial n} v_{\mu,j}\vec{\phi}_j ,
\]
where $\{\vec{\phi}_j\}_{j=1}^{n_v+\partial n}$ are vector-valued basis functions and $\vec{v}_h(x,\mu)$ is determined uniquely by the coefficient vector $\mathbf{v}(\mu)=\mathbf{v}_\mu=(v_{\mu,1},\ldots,v_{\mu,n_v})^T$. Let $p_h(x,\mu)$, the approximation of $p(x,\mu)$, be represented in terms of scalar-valued basis functions $\{\psi_k\}$, such that $p_h(x,\mu)=\sum_{k=1}^{n_p} p_{\mu,k}\psi_k$. Consequently, (2.24)-(2.26) can be expressed as a matrix system, denoted $G(\mu)\mathbf{y}(\mu)=\mathbf{b}(\mu)$ and defined
\[
\begin{bmatrix} A(\mu) & B^T \\ B & 0 \end{bmatrix}\begin{bmatrix}\mathbf{v}_\mu \\ \mathbf{p}_\mu\end{bmatrix} = \begin{bmatrix}\mathbf{f} \\ \mathbf{g}\end{bmatrix}, \tag{2.27}
\]
where $A(\mu)$ is a parametrized vector-Laplacian matrix, $A(\mu)=[a^\mu_{ij}]$ with $a^\mu_{ij}=\int_\Omega \sigma(x,\mu)\,\nabla\vec{\phi}_i:\nabla\vec{\phi}_j$; $B$ is a discrete divergence operator, $B=[b_{kj}]$ with $b_{kj}=-\int_\Omega \psi_k\,\nabla\cdot\vec{\phi}_j$; $\mathbf{f}=[f_j]$ with $f_j=\int_{\partial\Omega_N}\vec{s}\cdot\vec{\phi}_j - \sum_{i=n_v+1}^{n_v+\partial n} v_{\mu,i}\int_\Omega \nabla\vec{\phi}_j:\nabla\vec{\phi}_i$; $\mathbf{g}=[g_k]$ with $g_k=\sum_{i=n_v+1}^{n_v+\partial n} v_{\mu,i}\int_\Omega \psi_k\,\nabla\cdot\vec{\phi}_i$; and $G(\mu)$ is of saddle-point form. The discrete velocity space $V_h$ is equipped with norm $\langle\vec{v}_h(x,\mu),\vec{v}_h(x,\mu)\rangle_{V_h}^{1/2}=\langle A\mathbf{v}_\mu,\mathbf{v}_\mu\rangle^{1/2}$, and the discrete pressure space $P_h$ has the norm induced by $\langle p_h(x,\mu),p_h(x,\mu)\rangle_{P_h}=\langle M\mathbf{p}_\mu,\mathbf{p}_\mu\rangle$, where $M$ is the pressure mass matrix, $M=[m_{ij}]$ with $m_{ij}=\int_\Omega\psi_i\psi_j$. The discrete inf-sup stability condition requires that there exists $\beta_0>0$ such that
\[
\min_{\mathbf{q}_\mu\in P_h}\ \max_{\mathbf{u}_\mu\in V_h} \frac{|\langle\mathbf{q}_\mu,B\mathbf{u}_\mu\rangle|}{\|\mathbf{u}_\mu\|_{V_h}\,\|\mathbf{q}_\mu\|_{P_h}} \ \ge\ \beta_0. \tag{2.28}
\]
Satisfaction of the discrete inf-sup condition depends on appropriately chosen finite element approximation spaces [24]. We will discuss this further in Chapter 5.

2.3 Reduced-Order Modeling

We wish to solve a large, computationally expensive, parametrized discrete system that can be represented by the matrix system $G(\mu)\mathbf{y}(\mu)=\mathbf{b}(\mu)$. RB methods use a relatively small set of full-order discrete solutions $\mathbf{y}(\mu)$, called "snapshots," computed for various values of the parameter(s) $\mu$, to approximate the solution for other parameter values by projecting onto the space spanned by a subset of these snapshots; it is assumed that the solution manifold can be approximated well by such a subset [40, 48, 61]. Computations are separated into an offline step, in which the reduced basis is constructed, and an online step, in which simulations are done. In the offline step, full solutions $\mathbf{y}(\mu)$ are computed for multiple values of the parameter vector $\mu$, and these solutions are used to generate the reduced basis approximation spaces using a greedy algorithm [8, 14]. Let $\mathcal{T}$ be a set of $N_{\max}$ training parameters, where we assume that $N_{\max}$ is large enough that $\mathcal{T}$ represents a good approximation of the entire parameter space. Starting with a single parameter $\mu^{(1)}$ from $\mathcal{T}$ and its corresponding snapshot, the initial reduced basis approximation space consists of the single snapshot $\mathbf{y}(\mu^{(1)})$. At each step of the greedy algorithm, an approximation $\mathbf{y}_r(\mu)\approx\mathbf{y}(\mu)$ from the reduced space is computed for each $\mu$ in the training set, and for the parameter $\mu$ for which an error indicator $\eta(\mathbf{y}_r(\mu))$ is maximal, the snapshot (full solution) $\mathbf{y}(\mu)$ is added to the reduced basis. This continues until the values of the error indicator for all approximations from the reduced space are less than some prescribed tolerance for all parameters in $\mathcal{T}$.
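The offline loop just described can be sketched in a few lines of numpy. This is a toy illustration, not the thesis implementation: the Galerkin reduced solve and the residual indicator are stand-ins for the full machinery, and the parametrized system $G(\mu)=A_0+\mu A_1$ is invented purely for the example:

```python
import numpy as np

def greedy_rb(G_of, b, training_set, tol, max_snapshots=50):
    """Greedy reduced-basis construction: repeatedly add the snapshot
    whose reduced approximation has the largest relative residual."""
    def indicator(mu, Q):
        # Galerkin reduced solve followed by prolongation y_r = Q @ ytilde
        G = G_of(mu)
        y_r = Q @ np.linalg.solve(Q.T @ G @ Q, Q.T @ b)
        return np.linalg.norm(G @ y_r - b) / np.linalg.norm(b)

    mu0 = training_set[0]
    Q = np.linalg.qr(np.linalg.solve(G_of(mu0), b).reshape(-1, 1))[0]
    for _ in range(max_snapshots):
        etas = [indicator(mu, Q) for mu in training_set]
        i_star = int(np.argmax(etas))
        if etas[i_star] <= tol:
            break                                   # all indicators below tol
        snapshot = np.linalg.solve(G_of(training_set[i_star]), b)
        Q = np.linalg.qr(np.column_stack([Q, snapshot]))[0]  # re-orthogonalize
    return Q

# Invented parametrized system for illustration: G(mu) = A0 + mu*A1.
n = 6
A0 = np.eye(n)
A1 = np.diag(np.arange(1.0, n + 1))
b = np.ones(n)
Q = greedy_rb(lambda mu: A0 + mu * A1, b,
              training_set=np.linspace(0.1, 1.0, 25), tol=1e-8)
```

In this toy problem the solution manifold lives in a six-dimensional space, so the loop terminates after at most six snapshots.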
The resulting reduced basis approximation space consists of $N$ snapshots corresponding to parameters $\mu^{(n)}$, $1\le n\le N$, where $N\le N_{\max}$. A generic statement of the greedy algorithm, for finding a subspace $Y_i$ of $Y_h$, is given in Algorithm 1. At each stage, approximate solutions $\mathbf{y}_r(\mu)$ in the (current) reduced space $Y_{i-1}$ are computed for all parameters $\mu$ in $\mathcal{T}$, together with the error indicator $\eta(\mathbf{y}_r(\mu))$. For the parameter $\mu^{(i)}$ with the largest value of $\eta(\mathbf{y}_r(\mu))$, the full-system solution $\mathbf{y}(\mu^{(i)})$ is computed and the reduced basis is augmented with this solution. The basis for the reduced space will ultimately be represented by the columns of an orthogonal matrix $Q$. In practice, the RB matrix is orthogonalized at each step of Algorithm 1 using Gram-Schmidt or a proper orthogonal decomposition technique. In the online step, simulations are carried out and reduced solutions $\tilde{\mathbf{y}}_r(\mu^{(i)})$ are computed by solving reduced systems specified using one of two methods, Galerkin or Petrov-Galerkin projection, which we discuss in the next section. The reduced solution is used to compute an approximate (full-sized) solution $\mathbf{y}_r(\mu^{(i)})\approx\mathbf{y}(\mu^{(i)})$. With either choice of reduced system, reduced basis approximations are prolonged to approximate full solutions as $Q\tilde{\mathbf{y}}_r(\mu) = \mathbf{y}_r(\mu)$ such that $\mathbf{y}_r(\mu)\approx\mathbf{y}(\mu)$.

Algorithm 1: Greedy sampling algorithm for constructing the reduced basis space $Y_N$
  Randomly sample $N_{\max}$ parameters $\mu$; let $\mathcal{T}=\{\mu^{(k)}\}_{k=1}^{N_{\max}}$.
  Choose $\mu^{(1)}$; solve the full system for $\mathbf{y}(\mu^{(1)})$; set $Y_1=\{\mathbf{y}(\mu^{(1)})\}$, $N=1$, $\eta^*=\infty$.
  while $\eta^* >$ tolerance do
    for $i = 1:N_{\max}$ do
      Solve the reduced system for $\tilde{\mathbf{y}}_r(\mu^{(i)})$; compute $\mathbf{y}_r(\mu^{(i)}) = Q\tilde{\mathbf{y}}_r(\mu^{(i)})$.
      Compute $\eta_i \equiv \eta(\mathbf{y}_r(\mu^{(i)}))$.
    end for
    Find $\eta^* = \max_i \eta_i$ and $\mu^{(*)} = \arg\max_i \eta(\mathbf{y}_r(\mu^{(i)}))$.
    if $\eta^* >$ tolerance then
      Set $N = N+1$; solve the full system for $\mathbf{y}(\mu^{(*)})$; update $Y_N = \mathrm{span}(Y_{N-1}\cup\{\mathbf{y}(\mu^{(*)})\})$.
    end if
  end while

2.3.1 Error Indicator

There has been a considerable amount of research into choosing optimal error indicators for ROMs [32, 48].
For this research, the greedy search is implemented with a relative residual error indicator
\[
\eta(\mathbf{y}_r) = \frac{\|G(\mu)\mathbf{y}_r - \mathbf{b}\|_2}{\|\mathbf{b}\|_2} \tag{2.29}
\]
(where $\mathbf{y}_r$ refers to any element in $Y_h$) and is carried out until $\eta(\mathbf{y}_r(\mu))$ is less than some desired tolerance for all $\mu\in\mathcal{T}$.

2.3.2 Projection Methods

The main benefit of MOR techniques is that accurate solutions to computationally expensive full-order models can be obtained by projecting the models onto spaces of smaller dimension. We consider two ways to specify the reduced system required for Algorithm 1:
\[
\text{Galerkin projection:}\quad Q^TG(\mu)Q\,\tilde{\mathbf{y}}_r(\mu) = Q^T\mathbf{b}(\mu) \tag{2.30}
\]
\[
\text{Petrov-Galerkin projection:}\quad (G(\mu)Q)^T(G(\mu)Q)\,\tilde{\mathbf{y}}_r(\mu) = (G(\mu)Q)^T\mathbf{b}(\mu) \tag{2.31}
\]
If Galerkin projection is chosen, the residual, and consequently the relative error indicator, is orthogonal to the reduced training space at each step of basis construction; however, the residual can be extremely large while still being orthogonal, which is undesirable. The Galerkin model is only guaranteed to be stable if $G(\mu)$ is symmetric positive definite [15]. Define the $G(\mu)^TG(\mu)$-norm as $\|\mathbf{y}\|_{G(\mu)^TG(\mu)} = (\mathbf{y}^TG(\mu)^TG(\mu)\mathbf{y})^{1/2}$. When Petrov-Galerkin projection is chosen, the numerator of the error indicator (2.29) is equivalent to $\|\mathbf{y}(\mu)-\mathbf{y}_r(\mu)\|_{G(\mu)^TG(\mu)}$. The minimum residual can be expressed as the least-squares minimization
\[
\mathbf{y}_r = \arg\min_{\mathbf{y}_r=Q\tilde{\mathbf{y}}_r} \|G(\mu)Q\tilde{\mathbf{y}}_r(\mu)-\mathbf{b}\|_2^2 .
\]
When the gradient of the minimization functional is zero, $(G(\mu)Q)^T(G(\mu)Q)\tilde{\mathbf{y}}_r(\mu) = (G(\mu)Q)^T\mathbf{b}(\mu)$. Thus, the Petrov-Galerkin projection, a Galerkin projection applied to the normal equations, minimizes the state error in the $G(\mu)^TG(\mu)$-norm [15]. In implementation, this is evident in the monotonic decrease of the maximum error indicator values over the training set as the reduced basis grows in the offline stage. ROMs specified using Galerkin projection do not have this feature. It is well known in linear algebra that the condition number of $G(\mu)^TG(\mu)$ is the square of that of $G(\mu)$.
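Both projections, and the condition-number squaring just noted, are easy to exhibit numerically. The sketch below is illustrative only: a random nonsingular `G` and random orthonormal `Q` stand in for the reduced-basis quantities. The Petrov-Galerkin solve is a least-squares problem over the columns of $G(\mu)Q$, so its residual is never larger than the Galerkin residual, while forming $(G(\mu)Q)^T(G(\mu)Q)$ squares the condition number:

```python
import numpy as np

def galerkin(G, b, Q):
    """Galerkin projection (2.30): Q'GQ ytilde = Q'b, prolonged by Q."""
    return Q @ np.linalg.solve(Q.T @ G @ Q, Q.T @ b)

def petrov_galerkin(G, b, Q):
    """Petrov-Galerkin projection (2.31): least squares min ||GQ ytilde - b||_2,
    equivalently the normal equations (GQ)'(GQ) ytilde = (GQ)'b."""
    ytilde, *_ = np.linalg.lstsq(G @ Q, b, rcond=None)
    return Q @ ytilde

rng = np.random.default_rng(0)
n, k = 30, 5
G = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned, nonsymmetric
b = rng.standard_normal(n)
Q = np.linalg.qr(rng.standard_normal((n, k)))[0]  # orthonormal "reduced basis"

res_g  = np.linalg.norm(G @ galerkin(G, b, Q) - b)
res_pg = np.linalg.norm(G @ petrov_galerkin(G, b, Q) - b)
# Normal-equations matrix squares the condition number of GQ:
cond_ratio = np.linalg.cond((G @ Q).T @ (G @ Q)) / np.linalg.cond(G @ Q) ** 2
```

Here `res_pg <= res_g` by construction of the least-squares solve, and `cond_ratio` is 1 up to rounding.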
Thus, the condition number of the Petrov-Galerkin reduced system is consistently and considerably larger than that of the Galerkin system. In our simulations, the maximum condition number of the Petrov-Galerkin reduced system increases monotonically in the offline stage, which is not the case for the condition numbers of the Galerkin reduced systems.

2.3.3 Standard Block-Diagonal Matrix Representation of the Reduced Basis

Several effective approaches to using MOR have been implemented for solving parametrized PDEs and parametrized PDE-control problems. In some of these approaches, matrix approximations to the RB space $\mathbb{R}^N$ are constructed so that the resulting reduced systems mirror the overall form of the full constrained systems they approximate. Implementing block-diagonal RB matrices $Q$ results in reduced systems that are also of saddle-point form. This is the case for the two standard approaches we explore in this thesis. The block-diagonal RB matrix $Q$ for the diffusion control model consists of submatrices representing the state, control, state-control, and adjoint/Lagrange multiplier reduced spaces ($U_N$, $F_N$, $X_N$, and $Q_N$):
\[
Q = \begin{bmatrix} Q_f & & \\ & Q_u & \\ & & Q_\lambda \end{bmatrix}
\quad\text{or}\quad
Q = \begin{bmatrix} Q_{\bar{x}} & \\ & Q_\lambda \end{bmatrix}. \tag{2.32}
\]
Galerkin and Petrov-Galerkin formulations of the reduced model result in matrices that are of saddle-point form. We demonstrate this for the Galerkin formulation, which results in one of the following scenarios:
\[
\begin{bmatrix} Q_f^T & & \\ & Q_u^T & \\ & & Q_\lambda^T \end{bmatrix}
\begin{bmatrix} 2\beta M & 0 & -M \\ 0 & M & K(\mu)^T \\ -M & K(\mu) & 0 \end{bmatrix}
\begin{bmatrix} Q_f & & \\ & Q_u & \\ & & Q_\lambda \end{bmatrix}
=
\begin{bmatrix} Q_f^T(2\beta M)Q_f & 0 & -Q_f^TMQ_\lambda \\ 0 & Q_u^TMQ_u & Q_u^TK(\mu)^TQ_\lambda \\ -Q_\lambda^TMQ_f & Q_\lambda^TK(\mu)Q_u & 0 \end{bmatrix} \tag{2.33}
\]
\[
\begin{bmatrix} Q_{\bar{x}}^T & \\ & Q_\lambda^T \end{bmatrix}
\begin{bmatrix} A & B(\mu)^T \\ B(\mu) & 0 \end{bmatrix}
\begin{bmatrix} Q_{\bar{x}} & \\ & Q_\lambda \end{bmatrix}
=
\begin{bmatrix} Q_{\bar{x}}^TAQ_{\bar{x}} & Q_{\bar{x}}^TB(\mu)^TQ_\lambda \\ Q_\lambda^TB(\mu)Q_{\bar{x}} & 0 \end{bmatrix} \tag{2.34}
\]
For the reduced model of the parametrized Stokes equations represented by matrix system (2.27), the standard block-diagonal RB matrix $Q$ consists of submatrices representing the velocity and pressure reduced spaces ($V_N$ and $P_N$):
\[
Q = \begin{bmatrix} Q_v & \\ & Q_p \end{bmatrix}. \tag{2.35}
\]
Similar to what is seen in the parametrized PDE-control setting, Galerkin and Petrov-Galerkin formulations of reduced models result in matrices that are of saddle-point form. This is shown for the Galerkin formulation below:
\[
\begin{bmatrix} Q_v^T & \\ & Q_p^T \end{bmatrix}
\begin{bmatrix} A & B(\mu)^T \\ B(\mu) & 0 \end{bmatrix}
\begin{bmatrix} Q_v & \\ & Q_p \end{bmatrix}
=
\begin{bmatrix} Q_v^TAQ_v & Q_v^TB(\mu)^TQ_p \\ Q_p^TB(\mu)Q_v & 0 \end{bmatrix}
\]
Until this point, we have only talked generally about the reduced basis space $Y_N$. We now show that the seemingly natural definitions of the reduced spaces lead to problems with inf-sup bounds for the reduced problem; we discuss how to fix this in the next chapter. For the parametrized PDE-control problems we study, we first consider defining the reduced spaces as follows:
\[
\text{state:}\quad U_N = \mathrm{span}\{\mathbf{u}(\mu^{(n)}),\ n=1,\ldots,N\} \tag{2.36}
\]
\[
\text{control:}\quad F_N = \mathrm{span}\{\mathbf{f}(\mu^{(n)}),\ n=1,\ldots,N\} \tag{2.37}
\]
\[
\text{adjoint/Lagrange multiplier:}\quad Q_N = \mathrm{span}\{\boldsymbol{\lambda}(\mu^{(n)}),\ n=1,\ldots,N\} \tag{2.38}
\]
The control-state product space $X_N$ is defined as $X_N = F_N\times U_N$, with $X_N=\mathrm{span}\{\bar{\mathbf{x}}(\mu^{(k)})\}$, where $\bar{\mathbf{x}}(\mu^{(k)})$ satisfies the weak form of the constraint,
\[
\int_\Omega \sigma(x,\mu)\,\nabla u(x,\mu)\cdot\nabla v(x,\mu)\,dx = \int_\Omega f(x,\mu)\,v(x,\mu)\,dx \quad \forall v\in H^1_{E_0}(\Omega),
\]
which is equivalent to the condition $B(\bar{x},q;\mu^{(k)})=0$. Consequently, the numerator in (2.8) is zero for all $\bar{\mathbf{w}}\in X_N$ and the reduced problem is not inf-sup stable. For the parametrized Stokes equations, we similarly define the following spaces:
\[
\text{velocity:}\quad V_N = \mathrm{span}\{\mathbf{v}(\mu^{(n)}),\ n=1,\ldots,N\} \tag{2.39}
\]
\[
\text{pressure:}\quad P_N = \mathrm{span}\{\mathbf{p}(\mu^{(n)}),\ n=1,\ldots,N\} \tag{2.40}
\]
The reduced velocity space $V_N$ is spanned by velocity solutions of the full system. Because velocity solutions to the full system are divergence-free, every member of the reduced velocity space is divergence-free. If the reduced system is inf-sup stable, then for any given $q_h$ in the discrete pressure space there exists $\vec{u}_h$ in the velocity space such that
\[
\frac{|\langle q_h,\nabla\cdot\vec{u}_h\rangle|}{|\vec{u}_h|_1} \ \ge\ C\,\|q_h\|_0
\]
for some constant $C>0$.
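The degeneracy can be seen directly in linear-algebra terms. In the toy numpy sketch below (small invented dimensions; `B` is a random stand-in for the discrete divergence operator), velocity "snapshots" are drawn from the null space of `B`; the reduced coupling block $Q_p^TBQ_v$ then vanishes identically, so the naive reduced saddle-point matrix has a zero off-diagonal block and is singular:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 8                        # toy sizes: 3 pressures, 8 velocities
B = rng.standard_normal((m, n))    # stand-in discrete divergence operator

# "Divergence-free" snapshots: columns drawn from null(B).
null_basis = np.linalg.svd(B)[2][m:].T             # (n, n-m), B @ null_basis ~ 0
V = null_basis @ rng.standard_normal((n - m, 2))   # two snapshots with B v = 0
Qv = np.linalg.qr(V)[0]                            # reduced velocity basis
Qp = np.linalg.qr(rng.standard_normal((m, 2)))[0]  # reduced pressure basis

coupling = Qp.T @ B @ Qv   # reduced off-diagonal block: identically ~0
```

Since the coupling block is zero, no choice of reduced pressure basis can restore the inf-sup bound without enriching the velocity space.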
Any divergence-free $\vec{u}_h$ makes $|\langle q_h,\nabla\cdot\vec{u}_h\rangle|$ equal to zero, which means inf-sup stability cannot hold for any $\vec{u}_h\in V_N$. Enriching the reduced basis spaces has been effective at addressing the issue of inf-sup stability. In the next chapter, we will discuss two enrichment methods, stabilization by supremizer and stabilization by aggregation, and compare them for stabilizing ROMs for parametrized PDE-control problems.

2.4 Accuracy and Efficiency Testing

To compare the accuracy and efficiency of the different methods of constructing the RB, the behaviors of these methods during both the offline and online stage computations were explored. To compare offline stage behavior of the RBs, maximum relative error indicator values were calculated over all $\mu$ in the training set $\mathcal{T}$ at every step of Algorithm 1. The maximum condition numbers of the Galerkin or Petrov-Galerkin reduced systems, $Q^TG(\mu)Q$ or $(G(\mu)Q)^T(G(\mu)Q)$, were calculated over all $\mu$ in $\mathcal{T}$ at every stage of Algorithm 1 as well. For online stage testing, a separate set of parameter vectors not used in the training set was constructed, referred to in this thesis as a testing set or verification set. Relative error indicator values were calculated for all $\mu$ in the verification set to test the accuracy of the RB. To compare online stage behavior of the RBs, the maximum error indicator values over the testing set and the sizes of the RBs were analyzed.
Computations were done on a MacBook Pro with an Intel 2.2 GHz i7 processor and 16 GB RAM, using MATLAB R2022a, or on a Dell Precision 7820 Tower with an Intel Xeon Silver 2.1 GHz processor and 64 GB RAM, using MATLAB R2018b. Finite element matrices for the full models were generated using IFISS 3.6 [65]. Orthogonalization of the RB matrices was done using MATLAB's built-in QR decomposition function, which computes the QR decomposition of a matrix using Gram-Schmidt orthogonalization.
Chapter 3: A Comparative Study of the Supremizer and Aggregation Methods of Stabilization for Parametrized PDE-Control Problems

In this chapter, we explore two standard methods of stabilizing the RB to ensure inf-sup stability of the reduced model: the supremizer method of stabilization [4, 18, 23] and the aggregation method of stabilization [48, 77]. While the supremizer method has been effective at stabilizing ROMs for both parametrized constrained PDEs and parametrized PDE-constrained optimization problems, the aggregation method has only been shown to be effective for the latter problem class. We describe both methods and the procedure for constructing the supremizer RB and the aggregation RB for the parametrized PDE-control problem. We conclude this chapter with a discussion of numerical results for the performance of the supremizer and aggregation RB methods for both parametrized PDE-control benchmark problems. The majority of this chapter has been adapted from our manuscript under review [19].

3.1 Stabilization by Supremizer for Parametrized PDE-Control Problems

Let $Y = [\bar{\mathbf{x}}(\mu^{(1)}),\ldots,\bar{\mathbf{x}}(\mu^{(N)})]$ and $L = [\boldsymbol{\lambda}(\mu^{(1)}),\ldots,\boldsymbol{\lambda}(\mu^{(N)})]$, where $\{\bar{\mathbf{x}}(\mu^{(j)})\}$ and $\{\boldsymbol{\lambda}(\mu^{(j)})\}$ are the basis vectors in $V_N = X_N\times Q_N$ constructed by Algorithm 1. Following [4], we describe two ways to construct supremizers to enrich the space determined by $Y$. The first of these, called an "exact supremizer" in [4], produces what is needed, but not in a practical way. Let $\mu$ be a parameter arising in an online simulation, and let $\mathbf{r}_j$ be the solution of $A\mathbf{r}_j = B(\mu)^T\boldsymbol{\lambda}_j$, $j=1,\ldots,N$. (It can be shown that
\[
\mathbf{r}_j = \arg\max_{\bar{\mathbf{w}}} \frac{\langle B(\mu)\bar{\mathbf{w}},\boldsymbol{\lambda}_j\rangle}{\|\bar{\mathbf{w}}\|_X},
\]
whence the name supremizer.) Let $R = [\mathbf{r}_1,\ldots,\mathbf{r}_N]$, and let the enriched reduced state space be defined as the span of $[Y,R]$. These operators satisfy $AR = B(\mu)^TL$, and any member of the enriched space has the form $\bar{\mathbf{w}} = Y\boldsymbol{\xi} + R\boldsymbol{\omega}$.
Therefore, on the enriched reduced spaces, we seek a lower bound on
\[
\min_{\mathbf{q}\in\mathrm{range}(L)}\ \max_{\bar{\mathbf{w}}\in\mathrm{range}[Y,R]} \frac{\langle B(\mu)\bar{\mathbf{w}},\mathbf{q}\rangle}{\|\bar{\mathbf{w}}\|_X\,\|\mathbf{q}\|_Q}
= \min_{\alpha}\ \max_{\xi,\omega} \frac{\langle B(\mu)[Y\xi+R\omega],L\alpha\rangle}{\langle A[Y\xi+R\omega],[Y\xi+R\omega]\rangle^{1/2}\,\langle KL\alpha,L\alpha\rangle^{1/2}}
\]
where $\mathbf{q}=L\alpha$. But
\[
\max_{\xi,\omega} \frac{\langle B(\mu)[Y\xi+R\omega],L\alpha\rangle}{\langle A[Y\xi+R\omega],[Y\xi+R\omega]\rangle^{1/2}}
\ \ge\ \max_{\omega} \frac{\langle B(\mu)R\omega,L\alpha\rangle}{\langle AR\omega,R\omega\rangle^{1/2}} \quad (\xi=0)
\]
\[
= \max_{\omega} \frac{\langle \omega,R^TB(\mu)^TL\alpha\rangle}{\langle (R^TAR)^{1/2}\omega,(R^TAR)^{1/2}\omega\rangle^{1/2}}
= \max_{\theta} \frac{\langle \theta,(R^TAR)^{-1/2}R^TB(\mu)^TL\alpha\rangle}{\|\theta\|} \quad \text{where } \theta=(R^TAR)^{1/2}\omega
\]
\[
= \|(R^TAR)^{-1/2}R^TB(\mu)^TL\alpha\| = \langle (R^TAR)\alpha,\alpha\rangle^{1/2},
\]
where the last equality follows from the fact that $B(\mu)^TL = AR$. Thus,
\[
\max_{\bar{\mathbf{w}}=Y\xi+R\omega} \frac{\langle B(\mu)\bar{\mathbf{w}},\mathbf{q}\rangle}{\|\bar{\mathbf{w}}\|_X\,\|\mathbf{q}\|_Q}
\ \ge\ \frac{\langle (R^TAR)\alpha,\alpha\rangle^{1/2}}{\langle KL\alpha,L\alpha\rangle^{1/2}}
= \left[\frac{\langle (R^TAR)\alpha,\alpha\rangle^{1/2}}{\langle A^{-1}B(\mu)^TL\alpha,B(\mu)^TL\alpha\rangle^{1/2}}\right]
\left[\frac{\langle A^{-1}B(\mu)^TL\alpha,B(\mu)^TL\alpha\rangle^{1/2}}{\langle KL\alpha,L\alpha\rangle^{1/2}}\right].
\]
The first of these factors is
\[
\frac{\langle (R^TAR)\alpha,\alpha\rangle^{1/2}}{\langle A^{-1}B(\mu)^TL\alpha,B(\mu)^TL\alpha\rangle^{1/2}}
= \frac{\langle (R^TAR)\alpha,\alpha\rangle^{1/2}}{\langle (R^TAR)\alpha,\alpha\rangle^{1/2}} = 1.
\]
From inequality (2.15), the second factor is bounded below by $\beta_0$. Thus, this version of the supremizer produces a div-stable reduced basis.
For RB methods to be practical, the RB should be constructed in the offline step. The method just described does not meet this requirement, as the supremizers depend on the parameter $\mu$ used in the online simulation, and a new enriched RB must be constructed for each new parameter. A practical variant constructs a set of supremizers $\{\mathbf{r}_j\}$ that satisfy $A\mathbf{r}_j = B(\mu^{(j)})^T\boldsymbol{\lambda}_j$, where $\{\mu^{(j)}\}$ is the set of parameters chosen during the greedy search, Algorithm 1. This computation can be done in the offline step. The resulting quantities satisfy $AR = [B(\mu^{(1)})^T\boldsymbol{\lambda}_1,\ldots,B(\mu^{(N)})^T\boldsymbol{\lambda}_N]$, but there is no longer a relation of the form $AR = B^TL$. Consequently, the argument above establishing inf-sup stability of the reduced spaces is not applicable. This approach is described in [4] as constructing approximate supremizers, and it is the method we explore in experiments. Let $Q_{\mathrm{sup}}$ denote the block-diagonal RB matrix with augmentation by supremizer; $Q_{\mathrm{sup}}$ has the form shown on the right side of (2.32).
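Computationally, each (approximate) supremizer is a single linear solve. The numpy sketch below uses dense toy matrices (an SPD `A` built for the example, random `B` and `lam`): it computes $\mathbf{r}=A^{-1}B^T\boldsymbol{\lambda}$ and checks numerically that $\mathbf{r}$ maximizes the quotient $\langle B\bar{\mathbf{w}},\boldsymbol{\lambda}\rangle/\|\bar{\mathbf{w}}\|_A$ over random trial vectors:

```python
import numpy as np

def supremizer(A, B, lam):
    """Supremizer of lam: r = A^{-1} B^T lam, the Riesz representer of
    w -> <B w, lam> in the A-inner product (dense toy version)."""
    return np.linalg.solve(A, B.T @ lam)

def quotient(A, B, lam, w):
    """The quantity the supremizer maximizes: <B w, lam> / ||w||_A."""
    return (lam @ (B @ w)) / np.sqrt(w @ (A @ w))

rng = np.random.default_rng(2)
n, m = 10, 4
C = rng.standard_normal((n, n))
A = C @ C.T + n * np.eye(n)        # symmetric positive definite
B = rng.standard_normal((m, n))
lam = rng.standard_normal(m)
r = supremizer(A, B, lam)
```

The maximum value of the quotient is $\langle BA^{-1}B^T\boldsymbol{\lambda},\boldsymbol{\lambda}\rangle^{1/2}$, attained exactly at the supremizer.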
As the basis is constructed, at each step that a snapshot is added, $Q_\lambda$ is updated with the snapshot $\boldsymbol{\lambda}_\mu$, and $Q_{\bar{x}}$ is updated with the snapshot $\bar{\mathbf{x}}_\mu$ and the supremizer $\mathbf{r}_\mu = A^{-1}B(\mu)^T\boldsymbol{\lambda}_\mu$. Thus, the matrix $Q_\lambda$ from the naive RB spaces is left unaugmented, and $Q_{\bar{x}}$ is augmented so that its range is $\mathrm{span}\{\bar{\mathbf{x}}_{\mu_1},\mathbf{r}_{\mu_1},\ldots,\bar{\mathbf{x}}_{\mu_N},\mathbf{r}_{\mu_N}\}$. For computations, we use a Gram-Schmidt process so that both $Q_{\bar{x}}$ and $Q_\lambda$ have orthonormal columns. The matrix $Q_{\bar{x}}$ has twice as many rows and columns as $Q_\lambda$, and the entire matrix $Q_{\mathrm{sup}}$ has $3N$ columns, where $N$ is the number of snapshots used in Algorithm 1.
Stabilizing in this manner has been considered both for the parametrized optimal control problem and for stabilizing PDEs themselves. In [18], stabilization by supremizer was employed for solving the Stokes control problem. In [23], this method of stabilization is used for stabilizing the Navier-Stokes equations.

3.2 Stabilization by Aggregation for Parametrized PDE-Control Problems

Another method of enriching the spaces to ensure inf-sup stability of the RB approximation spaces is augmentation by aggregation [48]. Because the equivalence of $U$ and $Q$ ensured inf-sup stability at the continuous and discrete levels, both $U_N$ and $Q_N$ are enriched so that they are equivalent; each is updated with both $\mathbf{u}(\mu)$ and $\boldsymbol{\lambda}(\mu)$ at each step in which the RB spaces are updated.
That is, the updated spaces and matrices determined by Algorithm 1 are given by the following rules:
• Let $Z_N = \mathrm{span}\{u_h(\mu^{(n)}),\ \lambda_h(\mu^{(n)}),\ n=1,\ldots,N\}$.
• state: $U_N = Z_N$, such that $Q_u = [\mathbf{u}_{\mu^{(1)}},\boldsymbol{\lambda}_{\mu^{(1)}},\ldots,\mathbf{u}_{\mu^{(N)}},\boldsymbol{\lambda}_{\mu^{(N)}}]$
• control: $F_N = \mathrm{span}\{\mathbf{f}(\mu^{(n)}),\ n=1,\ldots,N\}$, such that $Q_f = [\mathbf{f}_{\mu^{(1)}},\ldots,\mathbf{f}_{\mu^{(N)}}]$
• adjoint/Lagrange multiplier: $Q_N = Z_N$, such that $Q_\lambda = Q_u$
Inf-sup stability is established as follows [48]:
\[
\min_{\mathbf{q}\in Q_N}\ \max_{(\mathbf{v},\mathbf{z})\in F_N\times U_N} \frac{\langle K(\mu)\mathbf{z},\mathbf{q}\rangle - \langle M\mathbf{v},\mathbf{q}\rangle}{(\langle 2\beta M\mathbf{v},\mathbf{v}\rangle + \langle M\mathbf{z},\mathbf{z}\rangle)^{1/2}\,\|\mathbf{q}\|_Q}
\ \ge\ \min_{\mathbf{q}\in Q_N}\ \max_{(\mathbf{0},\mathbf{z})\in F_N\times U_N} \frac{\langle K(\mu)\mathbf{z},\mathbf{q}\rangle}{\langle M\mathbf{z},\mathbf{z}\rangle^{1/2}\,\|\mathbf{q}\|_Q}
\]
\[
\ \ge\ \min_{\mathbf{q}\in Q_N} \frac{\langle K(\mu)\mathbf{q},\mathbf{q}\rangle}{\langle M\mathbf{q},\mathbf{q}\rangle^{1/2}\,\|\mathbf{q}\|_Q}
\ \ge\ \min_{\mathbf{q}}\ c_\Omega\,\frac{\langle K(\mu)\mathbf{q},\mathbf{q}\rangle}{\|\mathbf{q}\|_Q^2}
\ \ge\ c_\Omega\,\tilde{\alpha}_0,
\]
where $c_\Omega$ and $\tilde{\alpha}_0$ come from the Poincaré inequality and the coercivity of $a(\cdot,\cdot;\mu)$, respectively. Note that this argument depends on the assumption that $Q_N = U_N$. The resulting reduced basis is fully constructed in the offline step.
Let $Q_{\mathrm{agg}}$ denote the block-diagonal reduced basis matrix with augmentation by aggregation, which has the structure shown on the left side of (2.32). The RB matrix $Q_f$ from the definition of the naive RB spaces is left unaugmented. The RB matrices $Q_u$ and $Q_\lambda$ are both augmented to ensure $Q_u = Q_\lambda$. It follows that $Q_u$ and $Q_\lambda$ each have twice as many columns as $Q_f$, and the total number of columns in $Q_{\mathrm{agg}}$ is $5N$. For computations, as for the supremizer, we use a Gram-Schmidt process so that $Q_u$, $Q_f$ and $Q_\lambda$ have orthonormal columns.
Stabilization by aggregation, also referred to as integration, has been considered strictly for using reduced-order modeling to solve parametrized optimal control problems. In [41, 48], this method is used for solving parametrized elliptic optimal control problems, specifically parametrized diffusion and convection-diffusion control problems. In [77], aggregation is used along with supremizers for the reduced velocity and pressure spaces when solving optimal flow control pipeline problems.
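In matrix terms, the aggregated basis is cheap to form once the snapshots are available. A minimal numpy sketch under toy assumptions (random snapshot matrices `U`, `F`, `L` with $N$ columns each standing in for the state, control and multiplier snapshots):

```python
import numpy as np

def aggregated_basis(U, F, L):
    """Aggregation enrichment: state and multiplier spaces share one
    orthonormal basis of span{u(mu_n), lambda(mu_n)}, so Qu == Qlam;
    the control basis Qf is left unaugmented."""
    Qz = np.linalg.qr(np.column_stack([U, L]))[0]   # aggregated space Z_N
    Qf = np.linalg.qr(F)[0]
    return Qf, Qz, Qz    # Q_agg = blkdiag(Qf, Qu, Qlam) has 5N columns

rng = np.random.default_rng(3)
n, N = 12, 3
U, F, L = (rng.standard_normal((n, N)) for _ in range(3))
Qf, Qu, Qlam = aggregated_basis(U, F, L)
```

With $N$ snapshots, `Qf` has $N$ columns while `Qu` and `Qlam` have $2N$ each, giving the $5N$ total noted above.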
3.3 Numerical Results

We study two benchmark problems: the parametrized diffusion control problem (2.1)-(2.4) and a parametrized convection-diffusion control problem. For the diffusion problem, the spatial domain is $\Omega=[0,1]^2$, with Dirichlet boundary $\Omega_D := [0,1]\times\{1\}$ and Neumann boundary $\Omega_N := \partial\Omega\setminus\Omega_D$. The desired or target state is uniformly equal to one everywhere, $\hat{u}(x,\mu)=1$, and $g(x)=0$. Thus, the target is inconsistent with the Dirichlet boundary condition: the state $u(x,\mu)$ cannot match the target on the Dirichlet part of the boundary, and the control $f(x,\mu)$ will require more energy to produce a good approximation of the target state.

Figure 3.1: Subdivision of the spatial domain $\Omega$ for the parametrized diffusion control problem when $N_D=3$. In this case, $\mu\in\mathbb{R}^3$.

The domain $\Omega$ is divided into $N_D$ equal-sized horizontal subdomains, where the stratified domain consists of $N_D$ horizontal strips, as shown in Figure 3.1 for $N_D=3$. The diffusion coefficient is piecewise constant on each subdomain, $\sigma(x,\mu)|_{\Omega_k}=\mu_k$, $k=1:N_D$, where the parameters $\mu=[\mu_1,\ldots,\mu_{N_D}]^T$ are taken to be independently and uniformly distributed random variables in $\Gamma := [0.01,1]^{N_D}$. The training set (of size $N_{\max}$) is used in Algorithm 1 to build the snapshots that form the basis. The basis is tested in the online stage using a set of parameters not in the training set $\mathcal{T}$. In all experiments, we used $N_{\max}=2000$ randomly chosen training points, regularization parameter $\beta=10^{-2}$ in (2.1), and a spatial discretization consisting of piecewise bilinear finite elements on a uniform grid with $(2^{n_c}+1)\times(2^{n_c}+1)$ elements.
We begin by examining how training, i.e., Algorithm 1, behaves for the two methods of stabilization, using both Galerkin and Petrov-Galerkin formulations of the reduced problem. Figure 3.2 shows the maximum relative error indicator for the supremizer over the training set $\mathcal{T}$ as $Q_{\mathrm{sup}}$ is constructed, for $n_c=4$ and $N_D=3$. The tolerance for the greedy algorithm is $10^{-7}$.
Figure 3.3 shows the analogous results for aggregation. Note that the Petrov-Galerkin formulation (2.31) corresponds to the normal equations associated with the (residual) error indicator $\eta(\mu)$, so that as the RB grows, the error indicator decreases monotonically. This is not true for the Galerkin formulation. It can be seen from Figure 3.2 that, with the given error indicator, the Petrov-Galerkin method is more effective than the Galerkin method when the supremizer is used for stabilization. In contrast, the behavior of the two reduced models is closer when aggregation is used. However, the behaviors of the two reduction strategies are very similar (and are close to identical for certain values of $N$) as the maximum error indicator values approach the desired tolerance. Similar results are shown in Figures 3.4 and 3.5 for a finer spatial mesh. For the supremizer, the Petrov-Galerkin formulation produces better maximum relative error indicator values for most values of $N$. However, there is one significant difference between Figures 3.2 and 3.4. On the finer mesh (Figure 3.4), the Petrov-Galerkin formulation fails to meet the tolerance (of $10^{-7}$), whereas the Galerkin formulation succeeds. On a still finer mesh ($n_c=7$), the behavior is similar: the Petrov-Galerkin formulation fails to meet the tolerance while the Galerkin formulation succeeds. For aggregation, the Petrov-Galerkin formulation behaves better initially, and the two formulations behave very similarly as the tolerance is reached. Both formulations are shown to be convergent in Figure 3.5; this is also true for a finer mesh ($n_c=7$). The behavior of the supremizer RB for finer meshes raises a question about the robustness of the Petrov-Galerkin formulation. Figures 3.6 and 3.7 show the maximum condition numbers of the reduced matrices $Q^TG(\mu)Q$ (Galerkin) and $(G(\mu)Q)^T(G(\mu)Q)$ (Petrov-Galerkin) over the training set $\mathcal{T}$.
The latter matrices become severely ill-conditioned as the search proceeds, and we attribute the failure to produce a suitable reduced basis to this ill-conditioning. Note that this is a well-known issue in linear algebra: the matrix $A^TA$ associated with the normal equations has condition number equal to the square of the condition number of $A$. The trends seen in Figures 3.6 and 3.7, for

Figure 3.2: Maximum relative error indicator over the training set as $Q_{\mathrm{sup}}$ is built with both a Galerkin (G) and a Petrov-Galerkin (PG) solve for the parametrized diffusion control problem. The number of basis vectors is $N$. Here, the stopping tolerance for the greedy algorithm is $10^{-7}$, the spatial discretization has $(2^{n_c}+1)\times(2^{n_c}+1)$ elements where $n_c=4$, and the number of subdomains $\Omega_k\subset\Omega$ is $N_D=3$.

Figure 3.3: Maximum relative error indicator over the training set as $Q_{\mathrm{agg}}$ is built with both a Galerkin (G) and a Petrov-Galerkin (PG) solve for the parametrized diffusion control problem. The number of basis vectors is $N$. Here, the stopping tolerance for the greedy algorithm is $10^{-7}$, the spatial discretization has $(2^{n_c}+1)\times(2^{n_c}+1)$ elements where $n_c=4$, and the number of subdomains $\Omega_k\subset\Omega$ is $N_D=3$.

Figure 3.4: Maximum relative error indicator over the training set as $Q_{\mathrm{sup}}$ is built with both a Galerkin (G) and a Petrov-Galerkin (PG) solve for the parametrized diffusion control problem. The number of basis vectors is $N$. Here, the stopping tolerance for the greedy algorithm is $10^{-7}$, the spatial discretization has $(2^{n_c}+1)\times(2^{n_c}+1)$ elements where $n_c=6$, and the number of subdomains $\Omega_k\subset\Omega$ is $N_D=3$. The PG reduction fails to converge.

Figure 3.5: Maximum relative error indicator over the training set as $Q_{\mathrm{agg}}$ is built with both a Galerkin (G) and a Petrov-Galerkin (PG) solve for the parametrized diffusion control problem. The number of basis vectors is $N$.
Here, the stopping tolerance for the greedy algorithm is $10^{-7}$, the spatial discretization has $(2^{n_c}+1)\times(2^{n_c}+1)$ elements where $n_c=6$, and the number of subdomains $\Omega_k\subset\Omega$ is $N_D=3$.

Figure 3.6: Maximum condition number of the reduced system $Q^TG(\mu)Q$ in the Galerkin (G) case and $(G(\mu)Q)^T(G(\mu)Q)$ in the Petrov-Galerkin (PG) case over all parameters in the training set as $Q_{\mathrm{sup}}$ is built for the parametrized diffusion control problem. The number of basis vectors is $N$. Here, the stopping tolerance for the greedy algorithm is $10^{-7}$, the spatial discretization has $(2^{n_c}+1)\times(2^{n_c}+1)$ elements where $n_c=6$, and the number of subdomains $\Omega_k\subset\Omega$ is $N_D=3$. There is a lack of convergence in the case of PG projection.

the two stabilization methods, are consistent as the basis size varies. In Table 3.1, results for the maximum condition number of the reduced systems over the training set for all $N$ are given for both formulations for various spatial meshes. The supremizer RB leads to a lack of convergence of the greedy algorithm for the Petrov-Galerkin formulation when $n_c=6$ and $n_c=7$. The Galerkin formulation never fails to converge.
Now we examine the performance of the reduced bases in the online stage. In Table 3.2, the accuracy of the reduced solution, tested over 500 parameters not in the training set $\mathcal{T}$, is shown for a variety of meshes with fixed $N_D=3$ and both Galerkin and Petrov-Galerkin formulations. Similar results are given for $N_D=10$ in Table 3.3, where only results for the Galerkin formulation are shown. We summarize the trends observed in these experiments as follows:
• The number of snapshots, as well as the size of the RB, is smaller for $Q_{\mathrm{agg}}$ than for $Q_{\mathrm{sup}}$.

Figure 3.7: Maximum condition number of the reduced system $Q^TG(\mu)Q$ in the Galerkin (G) case and $(G(\mu)Q)^T(G(\mu)Q)$ in the Petrov-Galerkin (PG) case over all parameters in the training set as $Q_{\mathrm{agg}}$ is built for the parametrized diffusion control problem. The number of basis vectors is $N$.
Here, the stopping tolerance for the greedy algorithm is $10^{-7}$, the spatial discretization has $(2^{n_c}+1)\times(2^{n_c}+1)$ elements where $n_c=6$, and the number of subdomains $\Omega_k\subset\Omega$ is $N_D=3$.

• The Petrov-Galerkin formulation is less robust than the Galerkin formulation, in the sense that the greedy search failed to reach the stopping tolerance for finer spatial meshes with the supremizer RB.
• The aggregation RB is less sensitive to the robustness issues of the Petrov-Galerkin formulation. The greedy algorithm failed to reach the tolerance for the two finer meshes with $Q_{\mathrm{sup}}$; it always converged for $Q_{\mathrm{agg}}$.
• The sizes of the reduced bases tend toward asymptotic limits as the spatial mesh is refined. For example, this limit is approximately $N=25$ snapshots (a basis matrix of order 125) for $Q_{\mathrm{agg}}$ with the Galerkin search.
• The sizes of the reduced bases are larger for the larger number of parameters, $N_D=10$.

          |       Galerkin        |    Petrov-Galerkin
  $n_c$   | $Q_{sup}$   $Q_{agg}$ | $Q_{sup}$   $Q_{agg}$
    3     | 4.7e+04     4.7e+04   | 2.3e+09     2.2e+09
    4     | 2.0e+05     2.0e+05   | 2.2e+10     3.8e+10
    5     | 6.0e+05     5.5e+05   | 2.5e+11     3.0e+11
    6     | 2.0e+06     1.7e+06   |    -        3.0e+12
    7     | 6.1e+06     5.0e+06   |    -        1.9e+12

Table 3.1: Maximum condition number of the reduced system $Q^TG(\mu)Q$ in the Galerkin case and $(G(\mu)Q)^T(G(\mu)Q)$ in the Petrov-Galerkin case over all parameters in the training set and all steps of the greedy algorithm for the parametrized diffusion control problem as $n_c$ is refined, where the spatial mesh has $(2^{n_c}+1)\times(2^{n_c}+1)$ discrete elements, the tolerance is $10^{-7}$, $N_{\max}=2000$ and $N_D=3$. Empty cells represent a lack of convergence to the desired tolerance. (Compare Tables 3.2 and 3.3.)

Remark. For certain tests in the online stage, the relative error indicator values produced over the testing set of parameters were greater than the prescribed tolerance, as seen in Tables 3.2 and 3.3. This can be explained by the size of the training set chosen.
Increasing the number of training parameters to 3000 resulted in maximum relative error indicator values below the tolerance in all cases. For the second benchmark problem, we consider a variant of the Graetz convection-diffusion