ABSTRACT

Title of dissertation: THEORETICAL METHODS IN THE NON-EQUILIBRIUM QUANTUM MECHANICS OF MANY BODIES

Andrew Robertson, Doctor of Philosophy, 2011

Dissertation directed by: Dr. Victor Galitski, Department of Physics

A toolbox of theoretical methods pertinent to the study of non-equilibrium many-body quantum mechanics is presented with an eye to specific applications in cold-atom systems and solids. We discuss the generalization from unitary quantum mechanics to the non-unitary framework of open quantum systems. Theoretical techniques include the Keldysh closed-time-path integral and its associated correlation functions, the quantum kinetic equation, and numerical integration of equations of motion both unitary and non-unitary. We explore how the relaxation of the assumption of equilibrium yields a whole new array of sometimes counterintuitive effects. We treat such examples as the non-equilibrium enhancement of BCS superfluidity by driving, bistability and coherent population transfer in Feshbach-coupled fermions, and the dynamic stimulation of quantum coherence in bosons confined to a lattice. These systems are considered with an eye to enhancing some useful quantum properties and making them available in wider parameter regimes.

THEORETICAL METHODS IN THE NON-EQUILIBRIUM QUANTUM THEORY OF MANY BODIES

by Andrew B. Robertson

Dissertation submitted to the Faculty of the Graduate School of the University of Maryland, College Park in partial fulfillment of the requirements for the degree of Doctor of Philosophy, 2011

Advisory Committee:
Professor Victor Galitski, Chair/Advisor
Professor Charles Clark
Professor Theodore Einstein
Professor Christopher Jarzynski
Professor Victor Yakovenko

© Copyright by Andrew Robertson 2011

Dedication

This work is dedicated to my grandfather, Frank J. Kopec, whose curiosity about the world still inspires me.

Acknowledgments

If it takes a village to raise a child, it takes at least that much to train a scientist.
To that end, I would like to thank the people who have made my graduate career so fulfilling, both professionally and personally.

First and foremost, I would like to thank my graduate advisor, Dr. Victor Galitski. I have a deep respect for his profound intuition for physical systems, and I have attempted to repay his faith in my abilities with thorough analyses of the problems he has assigned to me. The generosity, guidance, and patience he has shown me throughout my time at the University of Maryland have served as an example of good management that I will not soon forget.

I would also like to thank my undergraduate advisor, Dr. Hong Ling, at Rowan University. Dr. Ling first introduced me to the work of a research physicist. I admire his inquisitiveness and his ability to unify seemingly disparate areas of physics. The continuing collaboration between Dr. Ling and myself has borne much investigative fruit, including a sizable portion of the work described in this thesis. It has been, and continues to be, one of the luckiest fortunes of my career to work with him.

The work of any scientist would be impossible without the generosity of the organizations and agencies that fund scientific research. For my own part, I would like to thank the American Society for Engineering Education (ASEE) for awarding me the National Defense Science and Engineering Graduate (NDSEG) Fellowship. It went a long way toward allowing me to focus solely on research and learning without having to worry excessively about day-to-day expenses. The work in this thesis has also been supported by DARPA, the US-ARO, and the NSF. I have been personally funded by the Joint Quantum Institute - Physics Frontiers Center (JQI-PFC) and the University of Maryland (UMD). I owe great debts of gratitude to all of these organizations.

I would also like to thank my family and my future wife. I will never be able to repay the time, money, and work put into me by Frank Kopec, Barbara Robertson, or William Robertson.
Pop-Pop, Mom, and Dad, thank you so much for everything. I know it was not easy, but you gave of yourselves until there was nothing left to give. You are the most generous people I have ever met, and the accomplishments of my career are equally yours. To David and Katy, we have all been successful in our own ways. I am very proud of both of you, and I really enjoy the different ways that you have of looking at things. Lastly, to Adi, you have been my rock from day to day. I am humbled by your discipline and willingness to push on in the face of adversity. Together, we can meet any challenge. I truly thank you all.

Table of Contents

List of Figures
List of Abbreviations
1 Introduction
  1.1 Motivation
  1.2 Overview of Thesis
    1.2.1 Unitary Dynamics
    1.2.2 Systems at Equilibrium
    1.2.3 Systems Out of Equilibrium
    1.2.4 Dark States, Coherent Population Transfer, and Bistability in Feshbach Coupled Fermions
    1.2.5 Non-Equilibrium Enhancement of BCS Pairing
    1.2.6 Dynamic Stimulation of Phase Coherence in Lattice Bosons
    1.2.7 Conclusions and Future Work
2 Unitary Dynamics
  2.1 Overview
  2.2 Unitarity
  2.3 States in Unitary Many-Body Quantum Mechanics
  2.4 Observables
  2.5 Equations of Motion
    2.5.1 Heisenberg Equation of Motion
    2.5.2 Equations of Motion in the Interaction Picture
  2.6 Periodic Hamiltonians
3 Systems at Equilibrium
  3.1 Overview
  3.2 Closed and Open Systems
  3.3 Equilibrium
  3.4 Particle Statistics at Equilibrium
    3.4.1 The Gibbs Distribution
    3.4.2 Bosons and Fermions
4 Systems Out of Equilibrium
  4.1 Overview
  4.2 Observables out of Equilibrium
  4.3 Non-Equilibrium Correlation Functions
  4.4 Equations of Motion
5 Dark States, Coherent Population Transfer, and Bistability in Feshbach Coupled Fermions
  5.1 Dark States and Coherent Population Transfer in Feshbach Coupled Fermions
    5.1.1 Overview
    5.1.2 Mean-Field Hamiltonian
    5.1.3 Stationary Dark State Solution
    5.1.4 Dynamic Simulations and Stability
    5.1.5 Summary
  5.2 Matter-Wave Bistability in Feshbach Coupled Fermions
    5.2.1 Overview
    5.2.2 Bosonic Model
      5.2.2.1 Quantum Optical Analogy
      5.2.2.2 Bistability
    5.2.3 Fermionic Model
      5.2.3.1 Quantum Optical Analog
      5.2.3.2 Bistability
      5.2.3.3 Dynamics
    5.2.4 Conclusion
6 Non-Equilibrium Enhancement of BCS Pairing
  6.1 Overview
  6.2 Model
    6.2.1 Nonequilibrium Enhancement
    6.2.2 Kinetic Equation
  6.3 Numerical Results
    6.3.1 Superfluid at Equilibrium
    6.3.2 Normal at Equilibrium
  6.4 Summary
7 Dynamic Stimulation of Phase Coherence in Lattice Bosons
  7.1 Overview
  7.2 Superfluidity at Equilibrium
  7.3 Dynamic Enhancement of Superfluidity
  7.4 Non-Equilibrium Phase Boundary
  7.5 Summary
8 Conclusions and Future Work
A.1 Estimation of Mean Collision Time Between Harmonically Trapped Bosons and Fermions
A.2 Equilibrium Phase Boundary of the Bose-Hubbard Model within Keldysh Framework
  A.2.1 Model
  A.2.2 Expansion of Keldysh Correlation Functions
  A.2.3 Lang-Firsov Transformation
  A.2.4 Zeroth Order Keldysh Green's Function
  A.2.5 Equilibrium Phase Boundary
    A.2.5.1 Phase Boundary at Equilibrium without Dissipation
    A.2.5.2 Phase Boundary at Equilibrium with Dissipation
  A.2.6 Phonon Green's Functions
  A.2.7 Frequency Transforms of Undriven Functions $\hat{g}^0_{ij}(t,t')$
    A.2.7.1 The Functions Q and P
A.3 Nonequilibrium Phase Boundary of the Bose-Hubbard Model with Periodic Driving
  A.3.1 Homogeneous Perturbation
  A.3.2 Heterogeneous Non-Equilibrium Perturbation
  A.3.3 Correlation Functions in Wigner Coordinates
    A.3.3.1 Rewriting in Floquet Form
  A.3.4 Wigner Transformation of $g(t,t')$
  A.3.5 Dyson Equation in Wigner Form
  A.3.6 The Exact Non-equilibrium Single-Site Correlation Function
  A.3.7 Lattice-Momentum Transformation of the Nonequilibrium Dyson Equation
Bibliography

List of Figures

5.1 Simulated Dynamics of the Coupled Fermion System
5.2 Frequency Analysis of Collective Excitations of the Coupled Fermion System
5.3 Bistable Stationary Solutions in Bosonic Model
5.4 Mapping of Coupled Fermion Problem to the Dicke Model
5.5 Free Energy Density as a Function of the Detuning and $|c|^2$
5.6 Molecular Population as a Function of Detuning
5.7 Dynamics of Atom-Molecule Conversion
6.1 Bragg Potential: Formation of a Moving Lattice by Interfering Lasers
6.2 Non-Equilibrium Quasi-particle Distribution Functions for a System that is Superfluid at Equilibrium
6.3 Non-Equilibrium Enhanced Order Parameter as a Function of Temperature for a System that is Superfluid at Equilibrium
6.4 Non-Equilibrium Quasi-particle Distribution for a System that is Normal at Equilibrium
6.5 Non-Equilibrium Enhanced Gap as a Function of Temperature for a System that is Normal at Equilibrium
7.1 Phase Diagram of the Bose-Hubbard Model at Equilibrium
7.2 Density Plots of Divergence of the Correlation Function in $J/U$ vs. $\mu/U$ Space
7.3 Examples of Non-Equilibrium Superfluid Phase Boundary from Different Regions of Driving Parameter Space

List of Abbreviations

ASEE American Society for Engineering Education
BCS Bardeen-Cooper-Schrieffer
BEC Bose-Einstein Condensate
BHM Bose-Hubbard Model
CV Column Vectorization
DARPA Defense Advanced Research Projects Agency
EQ Equilibrium
JQI-PFC Joint Quantum Institute - Physics Frontier Center
MI Mott Insulator
NDSEG National Defense Science and Engineering Graduate Fellowship
NEQ Non-Equilibrium
NSF National Science Foundation
PAT Photon-Assisted Tunneling
Rb Rubidium
Li Lithium
SF Superfluid
STIRAP Stimulated Raman Adiabatic Passage
UMD University of Maryland
US-ARO United States Army Research Office

Chapter 1 Introduction

1.1 Motivation

From classes in introductory thermodynamics up through graduate-level statistical mechanics courses, it is often taken for granted that an astronomical number of degrees of freedom can be described using only a few variables.
Many students are surprised to find that a gas of atoms, in general, does not have a temperature (Well then how hot is it when you touch it?). Such massive simplifications of a system's complexity are made possible by the assumption that the system is at thermal equilibrium, and despite the obvious and continued usefulness of this assumption, it is incredibly restrictive. Experiments are possible, especially in the field of optically confined atoms, that can access the instantaneous dynamics of systems with large numbers of degrees of freedom. These experiments must be described by theory that does not depend on the assumption of equilibrium. Furthermore, from the perspective of applications, the relaxation of this assumption allows for a whole new array of possible effects and engineering possibilities in systems that are much less interesting at equilibrium.

Non-equilibrium (NEQ) theory and the theory of open systems attempt to describe these interesting and potentially useful physical situations. The price we pay to do this is the mathematical intricacy associated with describing large numbers of entities individually. Having to give up such cherished concepts as pressure, temperature, and even entropy in favor of seemingly more abstract quantities can be disconcerting at first. The first of the main goals of this work is to represent these ideas in an intuitive language so that the theory can be used to describe observable phenomena without getting tangled in all the formal rigor. Our second goal is to showcase the usefulness of non-equilibrium theory by using it to predict a few new and useful effects found in condensed matter and cold atom applications. It is our hope that by intuitive explanation followed by a few illustrative examples, we will help to dispel the reputation that non-equilibrium quantum statistical mechanics has for opacity and help to keep pace with the fast-moving experimental work in this area.
By no means will this work be comprehensive, nor will it survey all the techniques available to the theorist interested in non-equilibrium phenomena. The generality of the theory of open systems makes the field extremely vast, and even a cursory survey is impractical here. Rather, we shall explain a few techniques that we have found useful in our work, with an eye toward physical understanding. Along the way, we shall direct the interested reader toward work that will provide more extensive and in-depth understanding.

1.2 Overview of Thesis

While the theory of open systems can be quite intuitive, it does require a certain generalization of perspective. We believe that it can best be understood by building upon the concepts that it has in common with equilibrium theory and emphasizing how and where equilibrium is a specific case of our treatment.

1.2.1 Unitary Dynamics

Before we burden ourselves with the complexity of non-equilibrium theory, we shall review what we know of unitary many-body theory. We shall discuss the Schrödinger, Heisenberg, and interaction pictures of unitary dynamics. We shall cover observables such as correlation functions and the basic kinetic theory necessary to describe systems that do not couple to a bath of any kind. The Floquet theory of periodically driven systems will be discussed, and we shall cover the master equation approach to unitary quantum mechanics involving a density matrix.

1.2.2 Systems at Equilibrium

In order to understand what the assumption of equilibrium really does and how it can be used, we shall review the basic framework of classical and quantum statistical mechanics, paying special attention to the assumptions that define equilibrium. The definitions of closed and open systems will be reviewed with a few demonstrative and somewhat counterintuitive examples.
We shall generalize the techniques familiar from unitary quantum mechanics so that they can be used to describe systems at equilibrium with an energy or particle bath. We shall clearly delineate how the assumption of equilibrium allows for massive simplifications of our treatments.

1.2.3 Systems Out of Equilibrium

Here the assumptions mentioned in the previous chapters are relaxed, and we derive a few approaches to dealing with systems away from equilibrium. The Keldysh closed time-path integral and its associated correlation functions are derived. Kinetic equations for the density operator are derived and related to the quantum Boltzmann equation. The assumptions under which a system can be partitioned into "system" and "reservoir" components are discussed, as is the theoretical framework of the coarse-graining procedure.

1.2.4 Dark States, Coherent Population Transfer, and Bistability in Feshbach Coupled Fermions

As an application of our theory of unitary many-body quantum mechanics, we derive the equations of motion for a system of fermions that are coupled by a Feshbach resonance to an excited molecular state. This excited state is photocoupled to a ground molecular state. We also consider the case where there is only a Feshbach resonance between the free fermionic states and a single molecular state. In this situation, the molecular state is not photoassociated with any other molecular state.

In the coupled case, we find that there are stationary states with no population in the excited state. These correspond to coherent superpositions of unpaired fermions and ground molecular states. Furthermore, these "dark states" can be adiabatically tuned into each other, resulting in coherent population transfer from free fermions to bosonic molecules. Quenching the system yields stable oscillations dominated by two incommensurate frequencies.

In the uncoupled case, we find a useful analogue to a system familiar from laser theory.
Non-linear fermion interaction terms allow for population transfer between unpaired and paired states with two possible frequencies. This bistability reminds us of the multiple possible operating frequencies of a laser. We show that the bistability leads to hysteresis effects when the strength of the interactions is swept between the paired and unpaired sides. We also show that this sweeping cannot be done adiabatically. This fact necessitates a non-equilibrium treatment.

1.2.5 Non-Equilibrium Enhancement of BCS Pairing

As a simple, intuitive application of our kinetic theory, we describe a system of optically confined fermions. These fermions can interact, leading to pairing and BCS superfluidity. We show that by pushing the system out of equilibrium with driving lasers, we can enhance the superfluid transition. That is, the superfluid order parameter and transition temperature can be increased. This phenomenon is familiar from Eliashberg's work [33] on driven superconductors. However, we will show that the added tunability of the driving parameters afforded in cold-atom systems as compared to the solid state allows us to actually drive the system from the normal phase to the superfluid phase. This is in stark contrast to solid superconductors, where the superconducting phase could only be enhanced if the system was superconducting at equilibrium. Our approach employs a Boltzmann-like kinetic equation in a local-density approximation to describe the steady-state non-thermal energy distribution of quasiparticles. The order parameter can then be calculated by putting this distribution into the self-consistency equation.

1.2.6 Dynamic Stimulation of Phase Coherence in Lattice Bosons

As an application of our Keldysh closed-time-path framework, we shall describe a non-equilibrium phase transition in a system of interacting bosons on an optical lattice driven by interfering lasers.
Lattice bosons at equilibrium are described by the Bose-Hubbard model, and this model admits a phase transition between an insulator-like state and a phase-coherent superfluid state. We blend Floquet and Keldysh theory to expand the non-equilibrium site-to-site correlation function of the driven system in small tunneling. We show that by tuning the driving energy to be of the order of the insulating gap, particles can be made to tunnel prolifically and condense, producing phase coherence in regions of parameter space where it was unavailable before. Divergences of the correlation function correspond to the non-equilibrium phase boundary between phase coherent and incoherent states.

1.2.7 Conclusions and Future Work

Here we discuss the conclusions to be drawn from our study of non-equilibrium many-body theory. We briefly discuss some future work that can be built upon the aforementioned studies.

Chapter 2 Unitary Dynamics

2.1 Overview

Unitary quantum mechanics is the first type of quantum dynamics that a student is likely to meet. In unitary dynamics, the system is governed by a total Hamiltonian such that the energy is a conserved quantity. The time-dependence of the system is determined through the Schrödinger equation, which can be written in different "pictures" that emphasize different features of the time-evolution. These pictures allow the researcher to isolate the effects of selected couplings in a system's Hamiltonian. These pictures, as well as the formalism necessary to describe quantum states, both pure and mixed, will generalize to non-unitary systems that have contact with a reservoir.

2.2 Unitarity

Unitary dynamics refers to the dynamics of quantum systems governed by the Schrödinger equation,

$\frac{d}{dt}|\psi\rangle = -\frac{i}{\hbar}\hat{H}|\psi\rangle$   (2.1)

Of course all quantum mechanics is governed by the Schrödinger equation, but only when describing a system in its entirety. More often than not, one only cares about some subset of the total degrees of freedom.
For instance, if we wanted to describe an electron on a large lattice of ions, we might look for an equation that described only the motion of the electron and included the motion of the ions only as an averaged effect on the electron. The entire system of ions and the electron will be described by the Schrödinger equation, but the equation of motion for only the electron will not be the Schrödinger equation. We have demonstrated a coarse-graining procedure [12, 76] wherein we average over some of a system's degrees of freedom to yield equations of motion for the remaining degrees of freedom that are interesting. Systems are unitary if no such coarse-graining has been done.

To say that an operator $\hat{U}$ in quantum mechanics is unitary is simply the mathematical statement that its Hermitian adjoint is also its inverse:

$\hat{U}^\dagger = \hat{U}^{-1}$   (2.2)

However, this deceptively simple statement has far-reaching consequences when we enforce the requirement that a system's time-evolution is unitary. All systems governed by the Schrödinger equation have unitary time-evolution. That is, there exists an operator for every (possibly time-dependent) Hamiltonian that evolves the state $|\psi\rangle$ in time:

$|\psi(t)\rangle = \hat{U}(t)|\psi(0)\rangle$   (2.3)

Putting this expression into the Schrödinger equation exacts the following constraint on the time-evolution operator $\hat{U}$,

$\frac{d}{dt}\hat{U}(t) = -\frac{i}{\hbar}\hat{H}(t)\hat{U}(t)$   (2.4)

with $\hat{U}(0)$ equal to the identity operator. These two properties imply rather simply that $\hat{U}$ is unitary. It is obvious that $\hat{U}^\dagger(0)\hat{U}(0) = \hat{1}$ because $\hat{U}(0) = \hat{1}$. It is also easy to check that

$\frac{d}{dt}\left[\hat{U}^\dagger(t)\hat{U}(t)\right] = 0$   (2.5)

ensuring that $\hat{U}$ remains unitary for all times $t$.

Unitarity of $\hat{U}$ in quantum systems refers to probability-conserving processes (i.e. evolution such that $\langle\psi(t)|\psi(t)\rangle = 1$). It is also implicitly related to the concept of reversibility, which requires that $\hat{U}$ have an inverse to begin with. A system is said to be reversible when it is realistic for the dynamics to be run backwards.
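As a quick numerical illustration (mine, not part of the original text), the unitarity claims of Eqs. (2.2)-(2.5) can be verified directly for a time-independent Hamiltonian, where the propagator is $\hat{U}(t) = e^{-i\hat{H}t/\hbar}$: the product $\hat{U}^\dagger\hat{U}$ stays equal to the identity and the norm $\langle\psi(t)|\psi(t)\rangle$ is conserved. A minimal sketch with $\hbar = 1$ and a random Hermitian matrix standing in for $\hat{H}$:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Random Hermitian Hamiltonian on a 4-dimensional Hilbert space (hbar = 1).
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2

# Time-evolution operator U(t) = exp(-i H t) for time-independent H.
t = 1.7
U = expm(-1j * H * t)

# Unitarity, Eq. (2.2): U† U = 1 to machine precision.
unitarity_error = np.linalg.norm(U.conj().T @ U - np.eye(4))

# Probability conservation: <psi(t)|psi(t)> = <psi(0)|psi(0)> = 1.
psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
psi0 /= np.linalg.norm(psi0)
psi_t = U @ psi0
norm_t = np.vdot(psi_t, psi_t).real

print(unitarity_error, norm_t)
```

The same check fails for a non-Hermitian generator, which is one way to see why coarse-grained (open-system) dynamics are generically non-unitary.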
It is easy to stop a swinging pendulum and get it to swing in the opposite direction. It is difficult to perfectly reform a broken glass. We say that the pendulum system is reversible while the glass is not [64]. The entropy of the glass has been irreversibly increased. However, once reversibility has been established, the unitarity statement $\hat{U}^\dagger(t) = \hat{U}^{-1}(t)$ requires

$\langle\psi(t)|\psi(t)\rangle = \langle\psi(0)|\hat{U}^\dagger(t)\hat{U}(t)|\psi(0)\rangle = \langle\psi(0)|\psi(0)\rangle$   (2.6)

Thus, $\langle\psi(t)|\psi(t)\rangle = \langle\psi(0)|\psi(0)\rangle$, and the total probability is conserved. We may interpret this mathematical statement as the assertion that no eigenstates of the static Hamiltonian have any creation or decay channels. Particle number is conserved, and the system is said to be closed. Of course, energy can still flow between the system and an energy reservoir. If we further required that no energy transfer was possible, the system would be said to be isolated. The focus of this chapter will be to describe closed and isolated systems.

2.3 States in Unitary Many-Body Quantum Mechanics

The concept of a state in quantum mechanics can be simultaneously elusive and concrete. We will bypass many of the more epistemological questions by assuming that the fundamental elements of our system (particles, quasiparticles, rotors, etc.) are instantaneously in states $|\psi\rangle$ that can be projected onto different bases spanned by the eigenvectors of physical operators such as position, momentum, and so on. Once the quantum states of the fundamental elements have been defined, the state of an ensemble of these elements is determined by the distribution of the ensemble elements over their possible quantum states. For example, the state of a gas of atoms is determined by the distribution of individual atoms over states defined by such observables as position, momentum, and spin state. Classically, this leads to classical kinetics with its distribution functions.
At the quantum level, quantum interference, uncertainty, and particle statistics arising from indistinguishability must also be included. This leads to quantum kinetics and the density matrix formalism.

Let us operationally define the quantities of use to us. As mentioned, we assume that the fundamental elements of our theory are always in some states. Let us first consider one single such element and let its state be $|\psi\rangle$ in the Schrödinger picture. That is to say that $|\psi\rangle$ is a function of $t$, and it satisfies the Schrödinger equation, Eq. (2.1). At any time, $|\psi\rangle$ can be projected into any basis that spans the Hilbert space of possible states. The state can be written in terms of a basis of eigenstates (indexed by $j$) of some operator $\hat{A}$ as

$|\psi\rangle = \sum_j c_j |j\rangle$   (2.7)

where the coefficients $c_j$ are complex and normalized to unity ($\sum_j |c_j|^2 = 1$). This normalization is due to the fact that the probability of the fundamental element being observed in state $|j\rangle$ is equal to $|c_j|^2$. A state that can be put into the form of Eq. (2.7) is known as a pure state [81]. The coefficients $c_j$ are complex, and saying that an element is in a pure state is equivalent to saying that we know the phases of these numbers rather than just their moduli $|c_j|$. In a pure state, we understand how the states interfere with each other.

Contrary to this, we could envision a particle to be in a state of which we have incomplete information. For instance, we may have complete knowledge of the probabilities that the particle is found in different eigenstates of some observable (we know the $|c_i|^2$) but have no knowledge of the phases among these states (we are ignorant of the actual values of $c_i$). This is known as a mixed state. The use of the concept of a mixed state is not as common in single-particle mechanics as it is when many particles are considered.
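The distinction between knowing the moduli $|c_j|$ and knowing the full complex amplitudes $c_j$ can be made concrete numerically (an illustration of mine, not from the thesis): two states with identical measurement probabilities but different relative phases give different expectation values for a phase-sensitive observable.

```python
import numpy as np

# Two pure states with identical probabilities |c_j|^2 = 1/2 in the
# computational basis, but opposite relative phase between amplitudes.
psi_plus = np.array([1.0, 1.0]) / np.sqrt(2)
psi_minus = np.array([1.0, -1.0]) / np.sqrt(2)

probs_plus = np.abs(psi_plus) ** 2    # [0.5, 0.5]
probs_minus = np.abs(psi_minus) ** 2  # [0.5, 0.5] -- indistinguishable so far

# A phase-sensitive observable (the Pauli x matrix) tells them apart.
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
exp_plus = np.vdot(psi_plus, sx @ psi_plus).real    # +1
exp_minus = np.vdot(psi_minus, sx @ psi_minus).real  # -1

print(probs_plus, probs_minus, exp_plus, exp_minus)
```

A mixed state with the same populations but no phase information would instead give a σx expectation value of zero, intermediate between the two pure-state answers.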
When a system has many degrees of freedom, some kind of coarse graining is usually necessary so that a theory can be derived that applies only to the part of the system that is of interest rather than to the whole system in its unwieldy entirety. This coarse-graining amounts to willing ignorance of the total state of the system in favor of a probability distribution of its elements over a set of observables or eigenstates. Take an ideal gas at equilibrium as an example. Rather than define the precise microstate, including the number of bosons in each free-particle state, we define the average occupation of these states through the partition function. In reality, the total system is instantaneously in a well-defined quantum state in the Fock basis, but in the interest of mathematical and conceptual tractability, we simply say that it has a "thermal probability" of being observed in any number of these Fock states that have a common macroscopic feature such as energy (i.e. a particle has a probability proportional to $e^{-\beta\varepsilon}$ of being in some state of energy $\varepsilon$). We say that the gas is in a mixed quantum state. We have denoted the probabilities of particles being in certain single-particle states, but we have no idea of the instantaneous relative phases between them. We have no interference information, but we can still achieve useful results through the concept of a mixed state.

There is a common framework that will enable us to describe both pure and mixed states using the same mathematical quantities. We shall employ the very powerful density matrix formalism, which is useful both in single- and many-particle systems. The density matrix is fundamentally just a projection operator. Just as the operator $|x\rangle\langle x|$ will yield the contribution (or "projection") of a state in the $|x\rangle$ direction, so the density operator $\hat{\rho} = |\psi\rangle\langle\psi|$ projects onto the instantaneous state $|\psi\rangle$ of the system.
This form for the density matrix, however, will only be suitable for pure states (states that can be written as a single vector $|\psi\rangle$). A simple generalization is required to include mixed states (states with probability $p_i$ of being in a pure quantum state $|\psi_i\rangle$). To that end, we define the density operator in the following way:

$\hat{\rho} = \sum_i p_i |\psi_i\rangle\langle\psi_i|$   (2.8)

This form will incorporate both pure and mixed states. A pure state arises in the situation when all the probabilities $p_i$ but one vanish. In that case, the density matrix reduces to

$\hat{\rho} = |\psi_i\rangle\langle\psi_i|$   (2.9)

for some $i$. Because this quantum state must be normalized, $\langle\psi_i|\psi_i\rangle = 1$, we note that $\hat{\rho}^2 = \hat{\rho}$. Equivalently, we may say that a density matrix (simply the matrix elements of the density operator) describes a pure state when the trace of its square is 1:

$\mathrm{Tr}\{\hat{\rho}^2\} = 1$   (2.10)

Because this formalism is built on the quantum state of the total system, it will describe systems in single-particle states or many-particle quantum states equally well.

2.4 Observables

No theory can be said to be truly scientific until it describes something that can be observed. In single-particle quantum mechanics, the observables are described by Hermitian operators [81]. We recall that a state can be decomposed into linear combinations of the eigenstates of such operators. We further recall that a measurement of the observable $\hat{A}$ will yield one of the eigenvalues of that operator with a probability given by the state's coefficient of that eigenvector. That is, if $\hat{A}$ defines a basis such that

$\hat{A}|a_i\rangle = a_i|a_i\rangle$   (2.11)

and we choose to express the state of our system in this basis, $|\psi\rangle = \sum_i c_i |a_i\rangle$, then a measurement of the observable $A$ on our system will yield $a_i$ with probability $|c_i|^2$. Even though we are dealing with a pure state which can be written in terms of a single ket $|\psi\rangle$, the indeterminacy inherent to quantum mechanics means that the best prediction we can make of the outcome of a measurement will be a probability distribution.
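The purity criterion of Eq. (2.10) is easy to check numerically (a sketch of mine, not from the thesis): a projector onto a single ket has $\mathrm{Tr}\{\hat{\rho}^2\} = 1$, while an equal-weight incoherent mixture of two orthogonal states gives $\mathrm{Tr}\{\hat{\rho}^2\} = 1/2$.

```python
import numpy as np

# Pure state |+> = (|0> + |1>)/sqrt(2) and its projector rho = |+><+|.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho_pure = np.outer(plus, plus.conj())

# Equal-weight incoherent mixture of |0> and |1> (no phase information).
rho_mixed = 0.5 * np.outer([1.0, 0.0], [1.0, 0.0]) \
          + 0.5 * np.outer([0.0, 1.0], [0.0, 1.0])

purity_pure = np.trace(rho_pure @ rho_pure).real    # = 1, a pure state
purity_mixed = np.trace(rho_mixed @ rho_mixed).real  # = 1/2, a mixed state

print(purity_pure, purity_mixed)
```

Note that both density matrices have the same diagonal (the same measurement probabilities in the computational basis); the off-diagonal elements of `rho_pure` carry the phase information that the mixture lacks.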
In classical statistical mechanics the situation is similar, but different in a fundamental way. There, we are dealing with huge numbers of particles, and the indeterminacy of a measurement comes from the fact that it is impossible to know the dynamics of so large a number of particles. It is much more feasible to calculate the probability that a certain number of particles will exhibit a given property. For instance, it is impossible to know the energy of every particle in a confined ideal gas even though they all have well-defined energies. It is comparatively simple to determine the probability that a given number of those particles have an energy greater than a set amount. We can calculate the probability distribution for the value of a measurement of the energy of any of these particles. This probabilistic nature of the treatment does not come from any indeterminacy inherent to the laws of physics under consideration. It is merely due to the fact that we can never know the full state of a system of so many particles.

Measurements in quantum statistical mechanics suffer from both of these kinds of indeterminacy. When we wish to treat large numbers of quantum particles, measurements will be statistical averages of quantum averages. That is, we shall have to take averages over mixed states. Happily, the density matrix formalism has already been shown to be adept at describing mixed states. We would like the measurement of some observable on a mixed state to return the following average value,

\langle \hat A \rangle = \sum_i p_i \langle \hat A \rangle_i   (2.12)

where this expression is a sum over quantum averages $\langle\hat A\rangle_i$ of the observable in states indexed by $i$. These states are generally not eigenstates of $\hat A$. The average is weighted by $p_i$, the incoherent probability that the system is found in pure state $i$. The density matrix formalism is particularly valuable in that it reproduces this form for the observable immediately by taking a trace.
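This claim is simple to verify numerically. Below is a minimal sketch (not from the text; the observable, states, and weights are arbitrary toy choices) confirming that $\mathrm{Tr}\{\hat\rho\hat A\}$ equals the weighted average of quantum averages in Eq. (2.12).

```python
import numpy as np

rng = np.random.default_rng(0)

M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = M + M.conj().T                       # a random Hermitian observable

def ket(v):
    """Normalize a vector into a valid quantum state."""
    v = np.asarray(v, dtype=complex)
    return v / np.linalg.norm(v)

psi1 = ket(rng.normal(size=3))
psi2 = ket(rng.normal(size=3))
p = [0.3, 0.7]                           # incoherent weights, sum to 1

rho = p[0] * np.outer(psi1, psi1.conj()) + p[1] * np.outer(psi2, psi2.conj())

lhs = np.trace(rho @ A).real                               # Tr{rho A}
rhs = p[0] * (psi1.conj() @ A @ psi1).real \
    + p[1] * (psi2.conj() @ A @ psi2).real                 # sum_i p_i <A>_i
print(abs(lhs - rhs) < 1e-12)  # True
```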
Because the trace operation is invariant under changes of basis, let us take the trace in the basis in which the density matrix is defined.

\mathrm{Tr}\{\hat\rho \hat A\} = \sum_i \sum_j \langle\psi_i|\hat\rho|\psi_j\rangle \langle\psi_j|\hat A|\psi_i\rangle = \sum_k p_k \langle\hat A\rangle_k = \langle\hat A\rangle   (2.13)

This equation is equally valid in and out of equilibrium, and it will form the connection between the somewhat abstract mathematical treatment of non-equilibrium quantum mechanics and experimental observation.

While it is very useful to know the probability distribution of observables such as those described in the previous paragraphs, there is more statistical information available that will be useful in understanding experiments. Correlations are also of great use as they are accessible by experiments in the linear response regime. This can be made clear in the following way from reference [4]. Consider an experiment where we probe a system with an external field. That is, we add a term $\hat H_F$ to the Hamiltonian of the system where

\hat H_F = \int dx\, F(x,t)\, \hat X(x)   (2.14)

The generalized force $F(x,t)$ couples to some position-dependent operator $\hat X(x)$ such as density or position. We expect that the addition of this force will change the expectation value of the operator to which it couples. What we find is that we can relate the time-dependent change, $\delta X(x,t)$, in the expectation value of $\hat X$ to the strength of the force to first order as

\delta X(x,t) = \int dx' \int dt'\, \chi(x,t;x',t')\, F(x',t')   (2.15)

where we may think of $\chi(x,t;x',t')$ as a generalized susceptibility or response function. This response function, appropriately scaled, is simply the correlation function of the observable $\hat X$ written in the interaction picture with respect to the perturbation $\hat H_F$.

\chi(x,t;x',t') = \langle \hat X(x,t)\, \hat X(x',t') \rangle   (2.16)

This correlation function depends on properties of the system without the external perturbation $F(x,t)$.
That is, if we can access the susceptibility by probing, we will be able to use this function to describe or verify properties of the unperturbed system. Therein lies a main value of correlators.

Correlation functions yield valuable information about a system's order, phase transitions, energy spectrum, lifetimes, and distribution of excitations. As explored by [4] in the following, a special case is given by the non-interacting single-particle Green function $\langle T \hat a_\alpha(t) \hat a^\dagger_\alpha(t')\rangle$, where $|\alpha\rangle$ represents an eigenbasis of the single-particle Hamiltonian. It is not difficult to show that the Fourier transform of averages of this form can be written in a simple way,

G_\alpha(z) = \frac{1}{z - \varepsilon_\alpha}   (2.17)

where $\varepsilon_\alpha$ is the energy of $|\alpha\rangle$ and we have analytically continued the function from real frequency space into the complex plane of $z$. Clearly, the poles of this function can be used to give us information about the single-particle spectrum. We employ precisely this strategy in Chapter 7 to verify that the energy gap for mobile excitations has closed. We do this because functions are often easier to evaluate in complex time, and causality guarantees that these correlators are analytic through the Kramers-Kronig relations [51]. Thus, we may evaluate these functions anywhere we like in the complex plane and then analytically continue to the real axis (to study real-time dynamics) or the imaginary axis (to study thermodynamics) [4, 63]. From this function, it is possible to obtain the single-particle density of states, the spectrum, and the effective lifetime of state $|\alpha\rangle$. Moreover, in the presence of many-body interactions, this function generalizes with the addition of a complex self-energy term. Confining ourselves to real frequencies $\omega^+ = \omega + i0$ with a small imaginary part enforcing causality, we have
G_\alpha(\omega) \to \frac{1}{\omega^+ - \varepsilon_\alpha - \Sigma(\omega^+)}   (2.18)

If we decompose this self-energy into real and imaginary parts, $\Sigma = \Sigma_R + i\Sigma_I$ (and also make the unrealistically simplifying assumption that $\Sigma$ is constant in $\omega$), then we can interpret the real effects of the interactions as follows. Let us look at the retarded response function in real time,

G^R_\alpha(t) = \int \frac{d\omega}{2\pi}\, e^{-i\omega t}\, G^R_\alpha(\omega) \propto e^{-it(\varepsilon_\alpha + \Sigma_R) + t\Sigma_I}   (2.19)

From here we see that the inclusion of many-body interactions renormalizes the energies of the single-particle states by $\Sigma_R$, while information regarding the instability of single-particle states in the presence of interactions is encoded in a decay time $1/\tau = 2\Sigma_I$. The factor of 2 results from the fact that the squared modulus $|G^R|^2 \propto e^{2t\Sigma_I}$ is interpreted as a probability density in a given state at time $t$.

The off-diagonal correlators $G_{\alpha,\alpha'}(t,t') = \langle T \hat a_\alpha(t) \hat a^\dagger_{\alpha'}(t')\rangle$ also yield valuable information about the system. If $G_{\alpha,\alpha}$ is interpreted as the amplitude to remain in state $|\alpha\rangle$, then $G_{\alpha,\alpha'}$ is the amplitude for the transition $|\alpha\rangle \to |\alpha'\rangle$. Choosing to find correlators in the position basis, for instance, $G_{x,x'}(t,t')$ can be understood as the amplitude for a particle to tunnel from point $x$ to $x'$ over time $|t - t'|$. When this tunneling amplitude is large over some distance scale $\xi$, we expect domain formation as particles are very mobile over distances defined by this scale. In the opposite limit, when this amplitude decays quickly over $\xi$, we say that there are no correlations (no order) on the scale of $\xi$. Particles will not be able to tunnel long-range, and the behaviors of the system at two points separated by a distance of order $\xi$ are independent. The different results in these two limits correspond to the two phases of a phase transition. It is this thought process that allows us to define a non-equilibrium phase transition in the driven Bose-Hubbard model in Chapter 7 even when such concepts as temperature and free energy do not apply.
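The lifetime interpretation of the self-energy can be checked directly. In the sketch below ($\hbar = 1$; the energy and self-energy are arbitrary toy values, not from the text), Fourier-transforming $G^R(\omega) = 1/(\omega - \varepsilon - \Sigma)$ with a constant complex $\Sigma$ numerically reproduces an oscillation at the shifted energy $\varepsilon + \Sigma_R$ with an envelope decaying as $e^{\Sigma_I t}$ (here $\Sigma_I < 0$).

```python
import numpy as np

eps, Sigma = 1.0, 0.3 - 0.2j              # toy level and constant self-energy
w = np.linspace(-200.0, 200.0, 400001)    # wide grid to capture the tails
dw = w[1] - w[0]
GR_w = 1.0 / (w - eps - Sigma)

residuals = []
for t in (1.0, 3.0):
    # numerical Fourier transform of Eq.-(2.19) type
    GR_t = np.sum(np.exp(-1j * w * t) * GR_w) * dw / (2 * np.pi)
    # contour integration gives the exact result for t > 0: a renormalized
    # phase and an exponentially decaying envelope
    exact = -1j * np.exp(-1j * (eps + Sigma.real) * t + Sigma.imag * t)
    residuals.append(abs(GR_t - exact))

print(residuals)  # both small; dominated by the truncated frequency window
```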
2.5 Equations of Motion

As mentioned, unitary methods are useful when the system in question follows the Schrödinger equation, Eq. (2.1). However, the same information can be encapsulated in different "pictures" that can be used to simplify or emphasize certain elements of a treatment. Specifically, we may redefine the states onto which we project our system such that they rotate with chosen terms in the Hamiltonian. When we do this, those terms in the Hamiltonian drop out of the equations of motion for our operators and states. This is especially useful when one part of the Hamiltonian is diagonalizable while another part is difficult to deal with. These time-dependent rotations are nothing more than unitary evolution operators corresponding to different terms of the Hamiltonian.

Let us begin by explicitly defining the time-evolution operator to which we alluded in the first section of this chapter. We recall that this is the operator that satisfies Eq. (2.4) under the constraint that $\hat U(0)$ is the identity. For a general time-dependent, non-commuting Hamiltonian, the time-evolution operator has the form

\hat U(t) = 1 + \sum_{n=1}^{\infty} \left(\frac{-i}{\hbar}\right)^n \int_0^t dt_1 \int_0^{t_1} dt_2 \cdots \int_0^{t_{n-1}} dt_n\, \hat H(t_1) \hat H(t_2) \cdots \hat H(t_n)   (2.20)

The right-hand side of equation (2.20) is called the Dyson series, and it has a shorthand notation of the following form,

\hat U(t) = T \exp\left\{ \frac{-i}{\hbar} \int_0^t dt'\, \hat H(t') \right\}   (2.21)

where $T$ represents the time-ordering symbol. This symbol will become important when we consider driven systems because it will order arguments along a time-contour in the complex plane rather than on the real line. Additionally, there are other ways to group the terms that appear in this series (e.g., the Linked Cluster Expansion [63]), but the general concept of writing the full time evolution due to some complicated Hamiltonian, which appears in an exponential, as a sum over simpler polynomial terms will appear again and again.
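The structure of the Dyson series can be checked numerically. In the minimal sketch below ($\hbar = 1$; the 2x2 driven Hamiltonian is an arbitrary toy choice, not from the text), the time-ordered exponential of Eq. (2.21), built as a product of short-time propagators, matches the series of Eq. (2.20) truncated at second order, up to corrections of order $t^3$.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(t):
    return sx + np.cos(5 * t) * sz    # [H(t), H(t')] != 0: ordering matters

def expmh(A, dt):
    """exp(-i*A*dt) for Hermitian A via eigendecomposition."""
    lam, V = np.linalg.eigh(A)
    return (V * np.exp(-1j * lam * dt)) @ V.conj().T

t_final, n = 0.1, 300
dt = t_final / n
ts = (np.arange(n) + 0.5) * dt

# (a) time-ordered product, latest times acting on the left
U = np.eye(2, dtype=complex)
for t in ts:
    U = expmh(H(t), dt) @ U

# (b) Dyson series through second order; the inner sum enforces t1 > t2
first = sum(H(t) for t in ts) * dt
second = np.zeros((2, 2), dtype=complex)
for i, t1 in enumerate(ts):
    for t2 in ts[:i]:
        second += H(t1) @ H(t2) * dt * dt
U_dyson = np.eye(2) - 1j * first - second

err = np.abs(U - U_dyson).max()
print(err)  # of order t_final^3
```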
We may, alternatively, choose to express the state of our system in the form of a density matrix. As mentioned in section 2.3, instead of writing the state of our system as a ket $|\psi\rangle$, we may elect to write it as an operator $\hat\rho = |\psi\rangle\langle\psi|$ or its corresponding matrix elements. While we are focusing on unitary motion presently, the equation of motion for this operator can be generalized to non-unitary and non-equilibrium dynamics [25]. Differentiating the density operator and noting that the Hermitian adjoint of the Schrödinger equation is $-i\hbar \frac{\partial}{\partial t}\langle\psi| = \langle\psi|\hat H$, we get

i\hbar \frac{\partial}{\partial t}\hat\rho = \left(i\hbar \frac{\partial}{\partial t}|\psi\rangle\right)\langle\psi| + |\psi\rangle\left(i\hbar \frac{\partial}{\partial t}\langle\psi|\right) = \hat H|\psi\rangle\langle\psi| - |\psi\rangle\langle\psi|\hat H = [\hat H, \hat\rho]   (2.22)

This is the famous von Neumann "master" equation for the density matrix [81, 25]. As mentioned, it can be generalized to describe the non-unitary dynamics associated with the coupling to a reservoir by adding terms to the right side. These extra bath terms, called Lindblad operators, will allow us to derive a quantum master equation capable of dealing with driving, dissipation, and noise. The Wigner-Keldysh kinetic equation [53, 54], the Feynman-Vernon influence functional [36], and the classical Boltzmann equation [82, 76] can all be derived from the master equation in Lindblad form.

Armed with equations of motion for the density matrix and an expression for the time-evolution operator, we may project the state of a system onto a basis of states that "moves with" chosen terms in the Hamiltonian. We may choose to rotate with the entire Hamiltonian (Heisenberg picture) or just a piece of it (interaction picture). Let us consider the relative advantages of each.

2.5.1 Heisenberg Equation of Motion

For reasons that will become clear, we may choose to transform to a basis wherein the states that describe our system are completely stationary. However, the Schrödinger equation still encodes dynamic information, and so whatever new equation of motion we derive will do the same.
The only difference is that the new equation encodes the information in the time dependence of the operators rather than the states. This is not such an unfamiliar concept. In classical mechanics, in the kinematics of a massive body for instance, a state is represented by a time-dependent trajectory in which the position and momentum of the object are changing instantaneously. In quantum mechanics, position and momentum (as well as every other observable) are represented by operators. It is thus natural to expect that there is a representation wherein these are the time-dependent quantities while the states are constant. Such a representation is called the Heisenberg picture, and for an undriven Hamiltonian it is described in Ref. [81] as follows. It redefines operators $\hat A_S$ in the familiar Schrödinger picture in terms of the evolution operator as

\hat A_H(t) = \hat U^\dagger(t)\, \hat A_S\, \hat U(t)   (2.23)

To determine a desired matrix element or expectation value, we need only take an inner product of this operator between the states in the Schrödinger picture at $t = 0$. The time dependence of this matrix element is carried entirely by the time dependence of the operator $\hat A_H(t)$. The equation of motion for this operator is simply

\frac{d\hat A_H}{dt} = \frac{\partial \hat U^\dagger}{\partial t} \hat A_S \hat U + \hat U^\dagger \hat A_S \frac{\partial \hat U}{\partial t}   (2.24)
= -\frac{1}{i\hbar} \hat U^\dagger \hat H \hat U\, \hat U^\dagger \hat A_S \hat U + \frac{1}{i\hbar} \hat U^\dagger \hat A_S \hat U\, \hat U^\dagger \hat H \hat U   (2.25)
= \frac{1}{i\hbar} [\hat A_H, \hat U^\dagger \hat H \hat U]   (2.26)

Because we postulated that the Hamiltonian in our system is stationary (as many many-body Hamiltonians used in common applications are), we know that it commutes with the time-evolution operator. As such, we may write the standard form of the Heisenberg equation of motion.

\frac{d\hat A_H}{dt} = \frac{1}{i\hbar} [\hat A_H, \hat H]   (2.27)

This equation is useful because of the ease with which one may interpret it classically.
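As a quick numerical sanity check ($\hbar = 1$; the Hamiltonian, observable, and state below are arbitrary toy choices, not from the text), the Heisenberg operator of Eq. (2.23), taken between the $t = 0$ states, reproduces the Schrödinger-picture expectation value at time $t$.

```python
import numpy as np

H = np.array([[1.0, 0.4], [0.4, -1.0]], dtype=complex)
A = np.array([[0, 1], [1, 0]], dtype=complex)         # observable (sigma_x)
psi0 = np.array([1, 0], dtype=complex)

lam, V = np.linalg.eigh(H)
def U(t):
    """Time-evolution operator exp(-i*H*t) via eigendecomposition."""
    return (V * np.exp(-1j * lam * t)) @ V.conj().T

t = 2.7
psit = U(t) @ psi0                                    # Schrodinger picture
schrod = (psit.conj() @ A @ psit).real
A_H = U(t).conj().T @ A @ U(t)                        # Eq. (2.23)
heis = (psi0.conj() @ A_H @ psi0).real
diff = abs(schrod - heis)
print(diff < 1e-12)  # True: the two pictures agree
```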
Because of this fact, it is also possible to phenomenologically add terms (such as decay channels) to Heisenberg's equations of motion despite the fact that such terms lead to non-unitary dynamics and are not derivable from Schrödinger's equation or any other treatment involving only Hamiltonian dynamics.

2.5.2 Equations of Motion in the Interaction Picture

We saw in the previous section that states could be transformed in a time-dependent way that cancels out their own time dependence. The time dependence required by the Hamiltonian was put instead onto the operators that denote observables by transforming to a basis of states that "rotate with" the Hamiltonian. Just as a person on a train watching a car going at the same velocity sees the car as a stationary object, so quantum states can be made to appear stationary by making them rotate with the Hamiltonian.

One could, however, envision a transformation of the states that did not make them rotate with the entire Hamiltonian, but rather with a selected portion of it. That is, we could transform into a basis that is stationary with respect to a simple part of the Hamiltonian but is still time-dependent due to the action of some other part of the Hamiltonian. In this case, the effects due to the other part could be isolated, and we could deal with them without matters being complicated by the dynamics due to the simple part of the Hamiltonian. This is the thought behind the interaction picture of quantum mechanics.

We shall begin with a Hamiltonian that we assume can be split into pieces as follows.

\hat H = \hat H_0 + \hat V   (2.28)

Our goal will be to choose a basis of states that rotate with $\hat H_0$. The equation of motion governing the dynamics of these states will depend only on $\hat V$.
Every such new state $|\psi_I\rangle$ can be given in terms of the state $|\psi_S\rangle$ that solves the total Schrödinger equation as

|\psi_I(t)\rangle = e^{i\hat H_0 t/\hbar} |\psi_S\rangle   (2.29)

Exactly as expected, the "interaction basis" is simply the Schrödinger basis with a time-dependent prefactor that rotates the interaction states with $\hat H_0$. Let us now remember what happened in the Heisenberg picture. We rotated the states with respect to the full Hamiltonian and forced all of their time dependence onto observables. Now we are rotating with respect to only a portion of the Hamiltonian, so the observables will take on the time dependence of only that portion. To that end, we define operators $\hat A_I(t)$ in the interaction picture in terms of the stationary operators $\hat A_S$ in the Schrödinger picture as follows.

\hat A_I(t) = e^{i\hat H_0 t/\hbar}\, \hat A_S\, e^{-i\hat H_0 t/\hbar}   (2.30)

There is no need to distinguish between $\hat H_0$ in the interaction and Schrödinger pictures because any operator or exponential of that operator commutes with itself. However, the interaction Hamiltonian $\hat V$ may not commute with $\hat H_0$. As such, we have

\hat V_I(t) = e^{i\hat H_0 t/\hbar}\, \hat V\, e^{-i\hat H_0 t/\hbar}   (2.31)

In terms of this operator, the dynamics of both operators and states can be written. Differentiating equation (2.29), we see that the dynamics of the states are encapsulated in the following familiar-looking equation.

i\hbar \frac{d}{dt}|\psi_I(t)\rangle = \hat V_I(t)\, |\psi_I\rangle   (2.32)

Of course, in the interaction picture, the operators have time dependence too. The equation of motion governing this dependence is again familiar-looking from our treatment of the Heisenberg picture. Operators in the interaction picture must follow

i\hbar \frac{d}{dt}\hat A_I(t) = [\hat A_I(t), \hat H_0]   (2.33)

In the same way as was done for the Schrödinger picture, we may use equations (2.32) and (2.33) to define time-evolution operators and density matrices in the interaction picture. Let us first look at the density operator $\hat\rho(t) = |\psi_S(t)\rangle\langle\psi_S(t)|$.
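Before doing so, the interaction-picture construction is easy to verify numerically. In the minimal sketch below ($\hbar = 1$; $\hat H_0$, $\hat V$, and the initial state are arbitrary toy choices, not from the text), stepping Eq. (2.32) with small Euler steps reproduces the interaction-picture state $e^{i\hat H_0 t}e^{-i(\hat H_0+\hat V)t}|\psi(0)\rangle$ obtained from the exact evolution under the full Hamiltonian.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H0, V = 0.8 * sz, 0.3 * sx

def expmh(A, t):
    """exp(-i*A*t) for Hermitian A via eigendecomposition."""
    lam, W = np.linalg.eigh(A)
    return (W * np.exp(-1j * lam * t)) @ W.conj().T

psi_I = np.array([1, 0], dtype=complex)               # |psi(0)>
T, n = 3.0, 8000
dt = T / n
for k in range(n):
    t = (k + 0.5) * dt
    V_I = expmh(H0, -t) @ V @ expmh(H0, t)            # Eq. (2.31)
    psi_I = psi_I - 1j * dt * (V_I @ psi_I)           # Eq. (2.32), Euler step

psi_exact = expmh(H0, -T) @ expmh(H0 + V, T) @ np.array([1, 0], dtype=complex)
err = np.abs(psi_I - psi_exact).max()
print(err)  # small, and shrinks further as the step dt is reduced
```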
Despite the fact that this operator is defined in the Schrödinger picture, it is actually a time-dependent object because the state of the system changes in this picture. We may define the density operator in the interaction picture naturally to be

\hat\rho_I(t) = e^{i\hat H_0 t/\hbar}\, \hat\rho(t)\, e^{-i\hat H_0 t/\hbar}   (2.34)

Differentiating equation (2.34) yields the expected time dependence for the density operator.

i\hbar \frac{d}{dt}\hat\rho_I(t) = [\hat V_I(t), \hat\rho_I(t)]   (2.35)

We took the time to write this equation because it forms the basis of quantum kinetics. When we consider systems in contact with reservoirs, we shall show that we can often expand the right side of Eq. (2.35) in weak coupling and average over the bath degrees of freedom to arrive at a non-unitary equation of motion for the dynamics of the system alone.

Alternatively, we might choose to describe the state of our system in terms of a ket $|\psi\rangle$. In this case, we may make use of the time-evolution operator in the interaction picture. Just as in section 2.2, we may define this operator to be the operator that solves

\frac{d}{dt}\hat U_I(t) = \frac{-i}{\hbar}\, \hat V_I(t)\, \hat U_I(t)   (2.36)

such that $\hat U_I(0)$ is simply the identity operator. As we expect from equation (2.20), the operator satisfying this initial value problem can be written as

\hat U_I(t) = 1 + \sum_{n=1}^{\infty} \left(\frac{-i}{\hbar}\right)^n \int_0^t dt_1 \int_0^{t_1} dt_2 \cdots \int_0^{t_{n-1}} dt_n\, \hat V_I(t_1) \hat V_I(t_2) \cdots \hat V_I(t_n)   (2.37)

This equation will form the basis of our expansion of correlation functions in the Bose-Hubbard model, Chapter 7. It will be generalized to include non-equilibrium situations by changing the integrations along real time to contours in complex time.

2.6 Periodic Hamiltonians

The Schrödinger equation is equally valid for time-dependent Hamiltonians. In such cases the dynamics will be unitary despite the fact that the system is being driven. The problem amounts to finding a solution to Eq.
(2.1),

\frac{d}{dt}|\psi\rangle = \frac{-i}{\hbar}\, \hat H\, |\psi\rangle   (2.38)

This is a problem in dynamics, but in the special case of periodic Hamiltonians, we are afforded a great simplification by the discrete time symmetry. We shall follow the work in [16] and [96] to rewrite the difficult dynamic problem of a time-dependent Hamiltonian as a simple eigenvector problem using Floquet theory.

Let us begin with some unspecified system with a time-dependent Hamiltonian $\hat H(t)$ which is periodic with period $\tau$ such that

\hat H(t) = \hat H(t + \tau)   (2.39)

for all $t$. In a way that is analogous to the Bloch theorem for spatially periodic Hamiltonians, the Floquet theorem states that there exists a complete basis of solutions $\{|\psi_\alpha(t)\rangle\}$ such that

|\psi_\alpha(t)\rangle = e^{-i\epsilon_\alpha t/\hbar}\, |u_\alpha(t)\rangle   (2.40)

where $|u_\alpha(t)\rangle = |u_\alpha(t+\tau)\rangle$ shares the periodicity of the Hamiltonian. The phase $\epsilon_\alpha$ is called the quasi-energy because it shares some properties with the eigenvalues of a static Hamiltonian. There is at least one very important difference, however. While it is true that real energies manifest themselves in the dynamics of eigenstates of static Hamiltonians as phase prefactors, energies also have a significance with respect to transitions. The absolute value of energy must be conserved in transitions rather than simply its value modulo $\hbar\Omega$. This is in contrast to the behavior of quasi-energies, which only have a distinct meaning modulo $\hbar\Omega$. This topological difference between energy and quasi-energy is a source of great recent interest [55] in what are called Floquet topological insulators [59].

Because $|u_\alpha(t)\rangle$ has period $\tau$, only a discrete set of harmonics is necessary to transform the function to frequency space. That is, we may define the Fourier transform of $|u_\alpha(t)\rangle$ as

|u_\alpha(t)\rangle = \sum_n e^{-in\Omega t}\, |u_\alpha^n\rangle   (2.41)

where $n$ is an integer and $\Omega = 2\pi/\tau$. It is this discreteness of the set of frequencies required for the Fourier transform that is going to allow us to represent the dynamics of the system under the time-dependent Hamiltonian as matrix multiplication. If
If 27 we now take the Fourier inner product of the Hamiltonian: ^Hmn = 1 Z =2 =2 dtei(m n) t ^H (t) (2.42) then we can write the Schr odinger equation as a static linear algebra problem X n ^Hmnjun i= (? +m )jum i (2.43) The system we wish to consider has a time-dependent Hamiltonian. This means that the energy of the system is not a conserved quantity. However, we see in Eq. (2.43) that it is conserved up to integer multiples of . These multiples correspond to the absorption or emission of quanta of energy from or into the driving eld. The net e ect of all this is that we are able to trade the dynamic problem of Eq. (2.38) for the static eigenvector problem in Eq. (2.43). Because we are solving a simpler problem, we should expect that observables will take on simpler forms also. To determine averages of single particle operators, we can simply evaluate the operators in the Floquet basis fj (t)ig. The calculation of correlation functions h^ay (t) ^a 0 (t0)iis slightly more complicated because there are two times, t and t0, to account for and the correlator will depend on both of them separately (rather than just on the di erence t t0 as is the case without driving). However, the situation is once again salvaged by the fact that the dependence on T = t + t0 is nontrivial only up to a period of 2 . Consider an arbitrary function G(t;t0) that satis es the condition G(t;t0) = G(t+ ;t0 + ). We may rewrite the function in center-of-mass coordinates T = t+t0 2 and = t t 0 as G t = T + 2 ;t 0 = T 2 . A simple inspection shows that 28 G(T; ) = G(T + ; ). Therefore, we know that we may write a Fourier transform as G(T; ) = 12 X N Z 1 1 d! e i! eiN TG(!;N) (2.44) where the component G(!;N) is given by G(!;N) = Z 1 1 d Z 2 = 0 dTei! e iN TG(T; ) (2.45) This representation is known as the Wigner representation. If we make one - nal transformation, we will be able to write the Dyson expansions of dynamically changing correlators as simple matrix equations. 
To that end, we define the Floquet representation [16, 96] to be

G_{mn}(\omega) = G\!\left(\omega + \frac{(m+n)\,\Omega}{2},\; m-n\right)   (2.46)

where the right-hand side of Eq. (2.46) is nothing more than the Wigner representation given in Eq. (2.45). By writing the Dyson equation for the driven Bose-Hubbard model in this form, we are able in Chapter 7 to find the correlation functions to infinite order in the driving strength.

Chapter 3

Systems at Equilibrium

3.1 Overview

The assumption that a system is in equilibrium is a very powerful one. It allows for a massive reduction in the mathematical complexity necessary for describing a system. It is also a very restrictive assertion, depending on requirements that are never fully satisfied. In this chapter, we shall outline precisely where these assumptions come into play and how they are used to make seemingly complex problems more tractable.

3.2 Closed and Open Systems

There are many definitions of what equilibrium actually is. These can range from the convolutedly abstract and mathematical to the hands-dirty and operational. It is defined in [4] as the simultaneous satisfaction of two assumptions. The first is that the equilibrium system is characterized by a unique set of extensive and intensive variables which do not change in time. The second is that after isolation of the system from its environment, all the variables remain unchanged. This definition is a little more of the hands-dirty variety. It has already partitioned the total system of interest into a "system" and an "environment" [58], and it has skipped most of the hard work by assuming that the system is characterized by time-independent extensive and intensive variables. We know immediately that such a definition cannot be the whole story because it is precisely these thermodynamic variables that we needed the assumption of equilibrium in order to define. Furthermore, it is certainly possible for an isolated gas to be in equilibrium with itself.
In this case, there is no easily identifiable "environment", so it seems that the definition of equilibrium must not explicitly depend on such a construct. The second assumption is made so as not to mislabel stationary non-equilibrium situations as thermal equilibrium. The example given in [4] is of an electronic conductor subject to a strong time-independent voltage bias. The particle energy distribution function will be time-independent. Thermodynamic variables could be defined that are stationary, but when the voltage is turned off, the particle distribution will relax back to the Gibbs distribution. The thermodynamic variables will also change. The second assumption defines such situations as non-equilibrium somewhat arbitrarily. Such a situation could equally be considered equilibrium if we never wanted to consider the voltage turning off. In just the same way, an ideal gas contained in a volume can be considered non-equilibrium if some change is made to the volume in the far future. The point of such considerations is to realize that whether or not a system is in equilibrium depends critically on what is labeled the "system", what is labeled the "environment", and when the total system is observed.

With this in mind, let us discuss how the partitioning of a total system occurs. To do this, we shall have to consider what can be exchanged between multitudes of particles or elementary excitations. Consider a gas in a container. In this very abstract example, consider that the container is made of an infinitely rigid material that does not accept any energy due to collisions with the particles of the gas. This gas can be considered an isolated system, a system that does not exchange energy or matter with any external environment [29]. From the detailed level of abstraction necessary even to give an example of an isolated system, one should get the feeling that they are not very common in real life. Indeed, no system is completely isolated.
The real question is "When is it justifiable to model a real system as if it were isolated?" This is a question of time scales. Consider a gas of N atoms of uranium or some other fissile nuclear material. We wish to perform a statistical analysis of the gas to arrive at average values for observables such as momentum and position and the like. If we confine ourselves to time scales that are short compared to the half-life of the atoms, there will be no trouble. The particle number N shall remain constant, and the system can be modeled as if it were isolated. However, if we wish to consider the system at times that are long compared to this half-life (as an operational definition of equilibrium in terms of the infinite-future state of a system would require), then N → 0 as the uranium decays. The system must be modeled as fully open, with a "system" of uranium atoms exchanging both matter and energy with an "environment" of lighter elements and photons. Fundamentally, it is when the system is observed and described that determines whether it is open or isolated. It is the goal of non-equilibrium statistical mechanics to describe these open systems.

There are other ways in which a system might interact with an environment. It might exchange energy without exchanging matter. Such a system is referred to as "closed", and an example might be that of a swimming pool [29]. A swimmer may feel hot or cold depending on the temperature of the water, but he will not dissolve like a sugar cube. The swimmer may be modeled as a closed system exchanging energy with a thermal environment (the pool). Again, the question of whether a system is closed or isolated depends on the time scales over which energy exchange occurs. A thermos of coffee might be modeled as isolated if we only consider times that are short compared to the time it takes to cool, thereby releasing its thermal energy into the surrounding air.
Closed systems describing energy dissipation into a thermal reservoir are general enough to describe a host of interesting current phenomena. Indeed, chapters 7 and 6 deal with physical systems of optically confined atoms that can be well approximated as closed. There are other ways to model systems' interactions with environments. One could envision a system designed such that matter but not energy is exchanged with an environment by requiring that particles are removed or added only when their energies are zero. There are no names for these other types of models as they are not very common or particularly useful. These three types (open, closed, and isolated) [29] will provide the necessary framework to define equilibrium. This is the topic of the next section.

3.3 Equilibrium

Now that we have an understanding of the way in which systems may interact with one another, we are in a position to define the assumptions of equilibrium more clearly. We shall begin by defining what we mean for a system to be at equilibrium with itself. Using this definition, we will be able to define concepts such as entropy and temperature. Then we shall say that two systems (both at equilibrium with themselves) are at equilibrium with each other when they are in thermal contact at the same temperature.

Let us begin by considering a system K. The system is a set of microscopic degrees of freedom that can be in quantum states. If we know the individual quantum state of every degree of freedom in the system, we say that we know the microstate of the system. As an example, if K represented an isolated gas and we knew the quantum state of every particle in that gas, then the set $\{\lambda_i\}$, where $\lambda_i$ is a collective index of quantum numbers that defines the state of particle $i$, represents a microstate of K.
The system will always instantaneously be in some microstate, but due to the large number of microscopic degrees of freedom it will be impossible even to write down the microstate, much less describe the dynamics that govern how K will change from one microstate to another. Furthermore, it would be fairly useless even if the task could be completed. Even classically, if one were given the position and momentum of every particle in a gas of $10^{25}$ particles, one would be hard-pressed to give an intuitive picture of what would happen when the gas was squeezed. Physicists are macroscopic beings, and so they care mostly about macroscopic observables. As such, there is a lot of useless microscopic information that can be coarse-grained in order to determine a useful macroscopic property. Equilibrium, being a thermodynamic property, applies only to macroscopic systems, and its assumption performs this coarse-graining. Furthermore, the requirement that the length and energy scales of K are much bigger than those of its microscopic degrees of freedom will be used frequently and subtly to help us derive useful values for observables from the assumption of equilibrium. To that end, we shall define equilibrium of K with itself as the satisfaction of two conditions on the system. They are as follows.

1. The system K is isolated.

2. All accessible microstates of K have the same probability.

The first supposition requires that the system under consideration is not being driven, nor is its energy being dissipated into a larger environment [70]. If we wish to provide an analysis of a system at equilibrium with an environment, then K must be taken to include both the system and the reservoir such that the total system+reservoir is isolated. Assumption 1 allows us to define a total energy E for K that we will later assume to be much bigger than the energy of the microscopic degrees of freedom in K.
A true purist might argue that this assumption is unnecessary in that it is subsumed in assumption 2 when we say "accessible," which often means (at least) that the energy of the microstate is equal to the total energy of the system. In any case, both of these statements must be true, and we will use the first of them to get something useful, the energy probability distribution, from the assumption of equilibrium.

The second assumption is where the coarse-graining happens. We ignore all the complex and necessarily chaotic dynamics [70] of the system (it is chaotic dynamics that leads to ergodicity, which is the basis of assumption 2; non-chaotic or "integrable" systems need never equilibrate) in favor of a probability distribution for $K$ to be in any of the microstates available to it (availability being determined by conservation of particles, conservation of energy, and so on). Furthermore, we declare somewhat arbitrarily that this probability distribution is completely flat. This was a fairly controversial assumption when it was first propounded, but it can be shown to be true by employing non-equilibrium kinetic equations in the infinite-future limit under some (some would say equally dubious) conditions. This is not our concern. We need not concern ourselves with the conditions under which the assumption of equilibrium is rigorously satisfied, because our aim is to deal with systems where this assumption obviously fails. In the next section, we shall show how these two assumptions massively reduce the computational complexity of the problem and allow us to calculate macroscopic observables almost magically.

3.4 Particle Statistics at Equilibrium

3.4.1 The Gibbs Distribution

In this section, we shall use the assumption of equilibrium to determine the energy distribution of the excitations at equilibrium.
It should be no surprise that this is possible, because we have essentially done all the real work by assuming the flat phase-space density discussed in the previous section. The real surprise is that the assumptions of equilibrium yield results that are as useful in the real world as they are. We shall paraphrase Landau's arguments [58], using the two assumptions made in the previous section, to calculate the distribution of energy in some small subsystem of $K$. Let us begin by partitioning $K$ into a system $S$ (a set of degrees of freedom that are of interest to us) and an environment $R$. We assume that $S$ is "small" (it has much smaller length and energy scales than those of $K$), and we are not interested in the microscopic state of $R$. The objective is to find the probability $p_n$ that the whole system $K$ is in a state such that $S$ is in a well-defined quantum microstate with energy $\varepsilon_n$. That is, we are looking for the probability that the total system is in some unknown state (unknown because we don't want to know the state of the reservoir) but with the system in a microstate that is known. The assumptions of equilibrium will make this an easy task to complete. Starting with assumption 2 from section 3.3, we know that every microstate of the total system $K$ is equally probable. That means that if we want to find the probability that $S$ is in a microstate defined by energy $\varepsilon_n$, we need only count (and appropriately normalize) the number of states of $K$ such that this is the case. Thus, we have

$p_n \propto \Gamma(\varepsilon_n)$    (3.1)

where $\Gamma(\varepsilon_n)$, called the statistical weight, is the number of microstates of the total system such that $S$ has energy $\varepsilon_n$. If we observe $S$ to have energy $\varepsilon_n$, then we know that the total system is in one of these states, but we are ignorant of which one. This is what necessitated a probability distribution in the first place.
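The counting behind $p_n \propto \Gamma(\varepsilon_n)$ can be carried out explicitly for a toy composite. A minimal sketch, assuming $S$ is a single two-level particle and $R$ is eight two-level particles with unit level spacing (all numbers here are illustrative):

```python
from itertools import product

# Fixed total energy of the isolated composite K = S + R.
E_total = 4
reservoir_microstates = list(product((0, 1), repeat=8))

def stat_weight(eps_n):
    """Gamma(eps_n): reservoir microstates carrying the remainder E - eps_n."""
    return sum(1 for r in reservoir_microstates if sum(r) == E_total - eps_n)

# S has two levels, eps = 0 and eps = 1; p_n follows from pure counting.
weights = {eps: stat_weight(eps) for eps in (0, 1)}
norm = sum(weights.values())
p = {eps: w / norm for eps, w in weights.items()}
```

The lower system energy is more probable simply because it leaves more energy to the reservoir, which then has more microstates available.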
Of course, since $K$ is a macroscopic system, the number of its possible microstates (even if we count only those for which $S$ has energy $\varepsilon_n$) is unimaginably huge. We shall make the problem more tractable by representing this huge number as an exponential:

$p_n = \frac{1}{Z} e^{\ln\Gamma(\varepsilon_n)} = \frac{1}{Z} e^{S(\varepsilon_n)}$    (3.2)

where we have cleaned up the notation by introducing the function $S(\varepsilon_n) = \ln\Gamma(\varepsilon_n)$. This is the entropy: the logarithm of the number of microstates of $K$ such that $S$ has energy $\varepsilon_n$. Naturally, it is a function of $\varepsilon_n$. However, the first assumption of equilibrium dictates that $K$ is an isolated system, and as such it has a fixed total energy $E$. Because we know that the system energy $\varepsilon_n$ and the reservoir energy must add up to the total energy $E$, we can equally well write $\varepsilon_n = E - (E - \varepsilon_n)$ and let the entropy be a function of the reservoir energy $E - \varepsilon_n$:

$S(\varepsilon_n) \to S(E - \varepsilon_n)$    (3.3)

Finally, because we constructed $S$ such that it has much smaller energies than $K$, we know that $E \gg \varepsilon_n$. Thus, we are justified in expanding the entropy in the exponential of Eq. (3.2). We have

$p_n \propto e^{S(E) - \varepsilon_n \left(\partial S/\partial x\right)_{E}}$    (3.4)

Both $S(E)$ and the derivative of the entropy function $S(x)$ at $x = E$ that appear in Eq. (3.4) are constants with respect to $\varepsilon_n$. We shall identify the derivative with a constant, the inverse temperature (in units where the Boltzmann constant $k_B = 1$):

$\beta = \frac{1}{T} = \left(\frac{\partial S}{\partial x}\right)_{x=E}$    (3.5)

Of course, we shall also have to normalize the probability. Defining a partition function $Z = \sum_n e^{-\varepsilon_n/T}$, we are left with the Gibbs distribution:

$p_n = \frac{1}{Z} e^{-\beta\varepsilon_n}$    (3.6)

It is truly amazing that from these simple assumptions and abstractions it is possible to describe the states of extremely complex systems seemingly without doing any work. We never gave any details of what $K$, $S$, and $R$ actually are. In particular, we never said whether the microscopic states of $S$ were many-body states or single-particle states.
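A numerical sketch of the Gibbs distribution, Eq. (3.6), for an assumed three-level spectrum (units with $k_B = 1$; the level energies and temperature are illustrative):

```python
import math

def gibbs(energies, T):
    """Gibbs distribution p_n = exp(-eps_n / T) / Z, with k_B = 1."""
    boltzmann = [math.exp(-eps / T) for eps in energies]
    Z = sum(boltzmann)
    return [b / Z for b in boltzmann]

# Illustrative subsystem S with three evenly spaced levels.
levels = [0.0, 1.0, 2.0]
p = gibbs(levels, T=1.0)   # normalized; lower levels are more probable
```

The ratio of successive probabilities reproduces the Boltzmann factor $e^{-\Delta\varepsilon/T}$.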
In the next section, we shall consider what happens when they are many-body states of particles that have quantum statistical requirements due to particle-exchange symmetry.

3.4.2 Bosons and Fermions

Following the ideas in [82], let us consider a situation wherein the total system $K$ is a collection of non-interacting quantum particles. We will assume the system to be at equilibrium with itself with a temperature $T$ and a volume $V$. For the body of interest $S$, we shall choose a small region of the total volume. If we look at this small volume, particles will be entering and leaving all the time. The total number of particles inside this smaller volume will fluctuate due to interactions with the rest of the larger system $K$. However, the number of particles should not fluctuate very far from some average density for the system. We can model this type of fluctuation with a chemical potential $\mu$. That is, we shall phenomenologically describe fluctuations by allowing the number of particles $N$ to change but associating an energy cost $-\mu N$ with the presence of $N$ particles. Within the system, these $N$ particles will be distributed among quantum states $i$ with energies $\varepsilon_i$. If there are $n_i$ particles in quantum state $i$, then the total energy contribution from $i$ is simply $n_i\varepsilon_i$. The total energy of the microstate of the system $S$ will simply be the sum of these energies over all the states $i$. Thus, using the Gibbs distribution, formula (3.6), the probability that the system $S$ has $N$ particles distributed with $n_i$ particles in each state $i$ is given by

$p(N, \{n_i\}) \propto e^{\beta\mu N} \exp\Big(-\beta\sum_i n_i\varepsilon_i\Big)$    (3.7)

with the condition that $\sum_i n_i = N$. Of course, it is possible for the particles (at least for now) to be distributed into quantum states in any way such that this condition is satisfied. Thus, if we want to find the partition function for $S$ (usually more useful than the probability distribution), we shall have to sum over all these possible configurations as well as all the possible numbers of particles in $S$.
The partition function can hence be written as a sum of expression (3.7) over $N$ and the configurations $\{n_i\}$:

$Z = \sum_{N=0}^{\infty} \sum_{\{n_i\}} \exp\Big[-\beta\sum_i(\varepsilon_i - \mu)n_i\Big]$    (3.8)

where for each value of $N$ we sum over all possible placements of the $N$ particles into the quantum states $i$. The only condition is that the occupation numbers of the quantum states must add up to the number of particles in the total system, $\sum_i n_i = N$ (and therefore $\mu\sum_i n_i = \mu N$, which we used in the derivation of (3.8)). We will now make a very useful observation. We know that all the occupations $n_i$ must add up to $N$, but we are summing over every possible value of $N$. Thus we may as well dispense with the index $N$ and simply sum over the $n_i$ without any restrictions. If we do this, the sums in the expression for our partition function, Eq. (3.8), reduce to

$Z = \sum_{n_1, n_2, \ldots} \exp[-\beta(\varepsilon_1 - \mu)n_1 - \beta(\varepsilon_2 - \mu)n_2 - \ldots]$    (3.9)

but this sum factorizes! The occupation numbers are all treated on the same footing (we can write $n$ instead of $n_i$), and the partition function can easily be written as

$Z = \prod_i \Big\{\sum_n e^{-\beta(\varepsilon_i - \mu)n}\Big\}$    (3.10)

Now we shall have to take particle-exchange symmetry into account. We know that, quantum mechanically (for our purposes), particles can be divided into bosons and fermions. Bosons have the property that many particles can occupy the same quantum state, while fermions allow only a single fermion per state. In terms of our partition function, this means that the sum over $n$ in Eq. (3.10) runs from $n = 0$ to $n = \infty$ for bosons, while it includes only $n = 0, 1$ for fermions. Let us consider bosons first. In this case, the sum in Eq. (3.10) is a geometric series, which converges only if $e^{-\beta(\varepsilon_i - \mu)} < 1$. Therefore, we know that the chemical potential must lie below every single-particle energy (negative, if the lowest level is taken to be at zero) for our analysis to be valid.
If this is the case, we have for the bosonic partition function

$Z_B = \prod_i \big[1 - e^{-\beta(\varepsilon_i - \mu)}\big]^{-1}$    (3.11)

Using the fact that the average occupation number can be determined through the formula $\langle n_i\rangle = -\frac{1}{\beta}\frac{\partial}{\partial\varepsilon_i}\ln Z$, we find that the distribution of bosonic occupations among the quantum states $i$ is given by

$\langle n_i\rangle = \frac{1}{e^{\beta(\varepsilon_i - \mu)} - 1}$    (3.12)

The assumption that $K$ is at equilibrium has allowed us to determine the particle statistics even with the complication of exchange symmetry in the many-body wavefunctions. Let us now consider the case where we are dealing with fermions. No two fermions can be in the same quantum state, so $n$ can be equal only to 0 or 1 in Eq. (3.10). Our partition function again has a simple form:

$Z_F = \prod_i \big[1 + e^{-\beta(\varepsilon_i - \mu)}\big]$    (3.13)

and again we may use $\langle n_i\rangle = -\frac{1}{\beta}\frac{\partial}{\partial\varepsilon_i}\ln Z$ to find the fermionic occupation statistics:

$\langle n_i\rangle = \frac{1}{e^{\beta(\varepsilon_i - \mu)} + 1}$    (3.14)

From these distribution functions, it is possible to find the equilibrium averages of many observables of great practical importance. We have made a lot of headway with just a few very strong assumptions. Even apart from the restrictive assumption that our system is in equilibrium, we have made other simplifications that are physically unsound in certain limits. For instance, our sum over $N$ to infinity in Eq. (3.8) contradicts the assertion that $S$ is very small compared to $K$, upon which the legitimacy of the Gibbs distribution depends. This is generally not a problem, because the probability of high-$N$ states will be very low (exponentially suppressed in the energy of those states). However, this is not the case when the phenomenon of Bose-Einstein condensation is considered. As $T \to 0$, the bosonic population in the ground state ($\varepsilon_0 = 0$) diverges. This is interpreted as a macroscopic population of a single quantum state. Such interpretations yield useful predictions, but we will have to be more exact if we want a quantitative understanding of the condensate fraction.
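The distributions (3.12) and (3.14) are simple to evaluate numerically. A minimal sketch with illustrative parameters (units with $k_B = 1$; the energies, chemical potential, and temperatures are assumptions for the sake of the example):

```python
import math

def bose(eps, mu, T):
    """Bose-Einstein occupation, Eq. (3.12); requires mu < eps."""
    return 1.0 / (math.exp((eps - mu) / T) - 1.0)

def fermi(eps, mu, T):
    """Fermi-Dirac occupation, Eq. (3.14)."""
    return 1.0 / (math.exp((eps - mu) / T) + 1.0)

# Illustrative level at eps = 1 with mu = 0 and T = 1.
n_B = bose(1.0, 0.0, 1.0)
n_F = fermi(1.0, 0.0, 1.0)

# Low-temperature fermions approach a step function at mu.
n_below = fermi(-1.0, 0.0, 0.01)   # state below mu: filled
n_above = fermi(+1.0, 0.0, 0.01)   # state above mu: empty
```

The fermionic occupation never exceeds one, while the bosonic occupation grows without bound as $\mu$ approaches $\varepsilon_i$, the mathematical seed of Bose-Einstein condensation.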
In other areas, these equilibrium distribution functions have been found especially useful in the theory of solids at low temperatures. Even when interactions are included, they retain some validity as jumping-off points from which to perform perturbative expansions in weak couplings. Furthermore, it is often possible to reformulate an interacting problem in terms of elementary collective excitations that are non-interacting and carry either bosonic or fermionic statistics. In the next chapter, we will discuss how to generalize beyond these distributions to find observables of interest in systems with both driving and dissipation.

Chapter 4

Systems Out of Equilibrium

4.1 Overview

We will now review the basic techniques available to the theorist trying to describe systems that are far from equilibrium. As mentioned in previous chapters, the field is simply too big to be given a comprehensive treatment here. Rather, we shall outline the theory behind a few techniques that we have found to be useful in our studies of non-equilibrium optical and condensed matter systems. For more comprehensive surveys of techniques, one could look to references [19] or [4].

A non-equilibrium system does not have well-defined thermodynamic properties. Entropy, temperature, free energy, and the like are all undefined for a general system. This is unfortunate in one respect, because we seem to have lost the ability to describe large numbers of degrees of freedom with a few variables. Consequently, we can expect non-equilibrium treatments to be far more complicated mathematically than their equilibrium counterparts. In another respect, however, the loss of these equilibrium quantities can be something of a blessing, in that it forces us to move away from abstract thermodynamic concepts available only in unrealistic limits and to restrict ourselves to more concrete observables such as ensemble averages and correlation functions.
As with many theoretical methods, non-equilibrium theory offers a tradeoff between the conceptual simplicity with which we describe the quantities of interest and the mathematical difficulty inherent in describing the dynamics of those quantities. Here, we shall discuss the basic theoretical constructions that we have made use of in our work.

4.2 Observables out of Equilibrium

In non-equilibrium systems, we shall still want information about observables that are familiar from systems at equilibrium. From an intuitive perspective, they fall into two basic categories: ensemble averages of single-particle operators (such as particle kinetic energy, position, or momentum) and correlations (ensemble averages of products of such operators). These are the quantities that are directly measured in experiments. At equilibrium, thermodynamic quantities like temperature can then be inferred from knowledge of these concrete observables. Away from equilibrium, the concrete observables themselves are all we have, and they will have to be enough. Happily, we have already seen the basic formalism necessary to describe these non-equilibrium observables. Our work has made heavy use of two elementary prescriptions. The first is given in terms of the density matrix $\hat\rho$, a formalism that describes instantaneous ensemble averages through the relation given in Eq. (2.13):

$\langle\hat A\rangle = \mathrm{Tr}\{\hat\rho\,\hat A\}$    (4.1)

When the system is not at equilibrium, $\hat\rho$ will generally be a function of time (though we do consider the especially useful case of stationary non-equilibrium solutions in particular systems in later chapters). Consequently, the ensemble averages will also be time-dependent through the traces taken over $\hat\rho$. When we considered only unitary dynamics, as in chapter 2, the dynamics were governed by the von Neumann equation, Eq. (2.22). However, we know that the inclusion of an environment with which the system has not equilibrated will make the dynamics non-unitary.
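Before moving to non-unitary dynamics, the prescription of Eq. (4.1) with a time-dependent $\hat\rho$ can be sketched for a two-level system evolving unitarily under an illustrative Rabi Hamiltonian $\hat H = (\Omega/2)\,\sigma_x$ (with $\hbar = 1$; the Hamiltonian, initial state, and sample times are assumptions for the sake of the example):

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

def expectation(rho, A):
    """Ensemble average <A> = Tr(rho A), Eq. (4.1)."""
    return trace(mat_mul(rho, A))

# H = (Omega/2) sigma_x drives Rabi oscillations, so the density matrix
# rho(t) = U(t) rho(0) U(t)^dagger is explicitly time dependent.
Omega = 2.0
sigma_z = [[1, 0], [0, -1]]

def U(t):
    c, s = math.cos(Omega * t / 2), math.sin(Omega * t / 2)
    return [[c, -1j * s], [-1j * s, c]]

def rho(t):
    rho0 = [[1, 0], [0, 0]]          # start in the upper state
    return mat_mul(mat_mul(U(t), rho0), dagger(U(t)))

# The ensemble average <sigma_z>(t) = cos(Omega t) oscillates in time.
sz = [expectation(rho(t), sigma_z) for t in (0.0, math.pi / 2, math.pi)]
```

The trace of $\hat\rho(t)$ stays equal to one under the unitary evolution, while the ensemble average swings between $\pm 1$ at the Rabi frequency.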
We shall have to introduce the quantum master equation in Lindblad form [40]. This will be one of the focuses of section 4.4.

The second prescription for the solution of non-equilibrium problems is expressed in terms of correlation functions. We make great use of this method in chapter 7 to describe a non-equilibrium phase transition in lattice bosons. Correlations are also experimentally observable in the statistical properties of distributions of single-particle operators. Furthermore, if there is a separation of energy scales among the terms contributing to the total Hamiltonian, then the correlation functions will have well-understood expansions in terms of Feynman-like diagrams even far from equilibrium. Indeed, the formulation of such a Dyson-like expansion for a non-equilibrium correlation function is the central mathematical result of chapter 7. Far from equilibrium, our understanding of these functions will have to be generalized from what we know at equilibrium or in unitary dynamics. For instance, non-equilibrium functions $\hat G_{x,x'}(t,t') = \langle\hat a_x^\dagger(t)\,\hat a_{x'}(t')\rangle$ will depend on $t$ and $t'$ separately, rather than only on the difference $t - t'$ as is the case at equilibrium. (We recall from chapter 2 that this property can also be exhibited in unitary dynamics; in special cases such as periodic driving it can be dealt with rather simply.) Furthermore, the brackets $\langle\cdot\rangle$ will denote averages over time-dependent density matrices rather than stationary thermal ensembles. These considerations and others are the focus of the next section.

4.3 Non-Equilibrium Correlation Functions

In earlier chapters, we have discussed the various uses of averages of products of operators such as the following:

$G_{\alpha,\alpha'}(t,t') = -i\langle T\,\hat a_\alpha^\dagger(t)\,\hat a_{\alpha'}(t')\rangle$    (4.2)

where the brackets $\langle\cdot\rangle$ denote a quantum average over the ground state of the full Hamiltonian (for $T = 0$) and $T$ is the time-ordering symbol.
At $T = 0$, we can formally take this average and write the correlation function in terms of a scattering matrix without ever knowing the ground state of the full Hamiltonian. It will be the focus of this section to generalize this concept to non-equilibrium systems and to exhibit equilibrium as a special case. We begin by recalling the prescription at equilibrium with a bath at zero temperature. Following [63], we write the total Hamiltonian as $\hat H = \hat H_0 + \hat V$, where $\hat H_0$ is a "bare" Hamiltonian for which we know the ground state $|\Psi_0\rangle$. We assume that we cannot diagonalize $\hat V$, so we wish to include its effect as a perturbation. Being at equilibrium, all of these Hamiltonians are assumed static. We shall assume that in the infinite past, $\hat V = 0$. As time progresses, $\hat V$ is adiabatically turned on, so that the system is instantaneously in the ground state of $\hat H_0 + \hat V$ at all times. Then, in the infinite future, the perturbation is adiabatically turned off again. Assuming the perturbation to be fully turned on at $t = 0$, the net effect of all this is that we may write the instantaneous ground state $|\Psi\rangle$ over which the average in Eq. (4.2) is taken as

$|\Psi\rangle = \hat S(0, -\infty)|\Psi_0\rangle$    (4.3)

$\langle\Psi| = \langle\Psi_0|\hat S(-\infty, 0)$    (4.4)

in the interaction picture with respect to $\hat V$. The operator $\hat S(t, t') = \hat U(t)\hat U^\dagger(t')$ is the scattering operator. While the time-evolution operator is defined such that $|\Psi(t)\rangle = \hat U(t)|\Psi(0)\rangle$, the scattering operator is more general in that $|\Psi(t)\rangle = \hat S(t, t')|\Psi(t')\rangle$. This operator can be written as a time-ordered exponential:

$\hat S(t, t') = T\exp\Big[-i\int_{t'}^{t} dt_1\,\hat V(t_1)\Big]$    (4.5)

Because we have adiabatically turned the perturbation on and off, the adiabatic theorem says that the ground state at $t = +\infty$ is equal to the ground state at $t = -\infty$ up to a phase. That is,

$\langle\Psi_0|\hat S(\infty, -\infty)|\Psi_0\rangle = e^{iL}$    (4.6)

This is the crucial assumption that we will relax when we generalize this formalism out of equilibrium. Far from equilibrium, the perturbation cannot be assumed to be adiabatically turned on.
There is no reason to believe that the system will remain in its ground state in the infinite future. We will deal with this issue soon. Using equation (4.6), we can rewrite $\langle\Psi|$ in terms of the ground state in the infinite future rather than the infinite past, so long as we divide out the extra phase:

$\langle\Psi| = \frac{\langle\Psi_0|\hat S(\infty, 0)}{\langle\Psi_0|\hat S(\infty, -\infty)|\Psi_0\rangle} = e^{-iL}\langle\Psi_0|\hat S(\infty, 0)$    (4.7)

The end result of all these mathematical gymnastics is that we may now write the correlation function in the interaction picture as a time-ordered product of the particle operators and the scattering matrix $\hat S(\infty, -\infty)$, averaged over the known ground state $|\Psi_0\rangle$ of the bare Hamiltonian:

$G_{\alpha,\alpha'}(t,t') = -i\,\frac{\langle T\,\hat a_\alpha^\dagger(t)\,\hat a_{\alpha'}(t')\,\hat S(\infty,-\infty)\rangle_0}{\langle\Psi_0|T\,\hat S(\infty,-\infty)|\Psi_0\rangle}$    (4.8)

In this form, Eq. (4.8) is not particularly useful, but when combined with Eq. (4.5) and Eq. (2.20) it generates an expansion series for $G$ in the small Hamiltonian $\hat V$. To generalize equation (4.8) to non-equilibrium situations, we must go back to the assumption that requires equilibrium. The adiabaticity assumption, namely that we turned the perturbation $\hat V$ on very slowly compared to the system's time-scales, allowed us to write the ground state in the infinite future in terms of the ground state in the infinite past and a phase in Eq. (4.7). Relaxing this assumption, we must abandon Eq. (4.7) and return to Eq. (4.4), writing it in a slightly different way using the transitivity of the operator $\hat S$:

$\langle\Psi| = \langle\Psi_0|\hat S(-\infty, 0) = \langle\Psi_0|\hat S(-\infty, \infty)\,\hat S(\infty, 0)$    (4.9)

Rather than writing the infinite-future state as the infinite-past state with a phase prefactor, we simply write the future state as the past state time-evolved to $t = \infty$. At equilibrium, this forward-backward evolution will reproduce the results mentioned already in this section. However, this method is equally valid away from equilibrium. Substituting Eq.
(4.9) into our expression for the correlator, we find that we may write the correlation function as

$G_{\alpha,\alpha'}(t,t') = -i\langle T_C\,\hat a_\alpha^\dagger(t)\,\hat a_{\alpha'}(t')\,\hat S(-\infty,-\infty)\rangle_0$    (4.10)

where $C$ is a contour in complex-time space that goes from $t = -\infty$ to $\infty$ slightly above the real $t$ axis and then from $t = \infty$ back to $-\infty$ slightly below the real axis. The forward-backward evolution is encompassed in a single scattering operator written in terms of this Keldysh contour $C$ [53, 54]. The backward evolution is interpreted to have occurred after the forward evolution; the contour-ordering symbol $T_C$ orders times along the contour taking this into account. The scattering matrix now has the form (for the remainder of our work, $\hbar = 1$ unless specifically indicated)

$\hat S(-\infty, -\infty) = T_C\exp\Big[-i\int_C dt_1\,\hat V(t_1)\Big]$    (4.11)

where this operator can still be expanded to yield a non-equilibrium Dyson equation for $G$. However, because the time arguments can lie on different legs of the contour, we will be forced to reorganize our Dyson equation into matrix form. As indicated earlier, the backward evolution is interpreted to happen after the forward evolution. This means that if $t$ is on the forward leg while $t'$ is on the backward leg, then the contour-ordering symbol will evaluate the correlator in Eq. (4.10) as

$G_{\alpha,\alpha'}(t,t') = G^{>}_{\alpha,\alpha'}(t,t') = -i\langle\hat a_{\alpha'}(t')\,\hat a_\alpha^\dagger(t)\,\hat S(-\infty,-\infty)\rangle_0$    (4.12)

regardless of the actual values of $t$ and $t'$. If both time arguments are on the forward leg, then the contour-ordering symbol reduces to the time-ordering we have already met. There are four possible ways that $t$ and $t'$ can be placed on the two legs of the contour $C$; thus, there are four possible Green's functions that the contour-ordered correlator can attain. Our goal is to expand the total correlator in Eq. (4.10) in terms of bare correlators $g_{\alpha,\alpha'}(t,t') = -i\langle T_C\,\hat a_\alpha^\dagger(t)\,\hat a_{\alpha'}(t')\rangle_0$. This expansion is best accomplished through the four possible bare correlators corresponding to the four orderings by $T_C$. They are presented here (for bosons).
$g^{>}_{\alpha,\alpha'}(t,t') = -i\langle\hat a_{\alpha'}(t')\,\hat a_\alpha^\dagger(t)\rangle_0$    (4.13)

$g^{<}_{\alpha,\alpha'}(t,t') = -i\langle\hat a_\alpha^\dagger(t)\,\hat a_{\alpha'}(t')\rangle_0$    (4.14)

$g^{T}_{\alpha,\alpha'}(t,t') = \theta(t-t')\,g^{>}_{\alpha,\alpha'}(t,t') + \theta(t'-t)\,g^{<}_{\alpha,\alpha'}(t,t')$    (4.15)

$g^{\tilde T}_{\alpha,\alpha'}(t,t') = \theta(t'-t)\,g^{>}_{\alpha,\alpha'}(t,t') + \theta(t-t')\,g^{<}_{\alpha,\alpha'}(t,t')$    (4.16)

Let us see what an expansion of one of these four total correlators would look like. Using the expansion of the scattering matrix in Eq. (4.10), we have for the lesser correlator

$G^{<}_{\alpha,\alpha'}(t,t') = -i\sum_n \frac{(-i)^n}{n!}\int_C dt_1\ldots dt_n\,\langle T_C\,\hat V(t_1)\ldots\hat V(t_n)\,\hat a_\alpha^\dagger(t)\,\hat a_{\alpha'}(t')\rangle_0$    (4.17)

To illustrate how we may represent this series very simply in terms of matrix multiplication, let us take an example interaction from [63]. In the Schrödinger picture, we posit an interaction of the form

$\hat V = \sum_{\beta\gamma} M_{\beta\gamma}\,\hat a_\beta^\dagger\,\hat a_\gamma$    (4.18)

Equation (4.17) then becomes

$G^{<}_{\alpha,\alpha'}(t,t') = g^{<}_{\alpha,\alpha'}(t,t') + \sum_{\beta\gamma} M_{\beta\gamma}\int_C dt_1\,\langle T_C\,\hat a_\alpha^\dagger(t)\,\hat a_\gamma(t_1)\rangle_0\,\langle T_C\,\hat a_\beta^\dagger(t_1)\,\hat a_{\alpha'}(t')\rangle_0 + \ldots$    (4.19)

Remembering that the integral over $t_1$ runs over the contour $C$, we know that the contour-ordering symbols will order the operators in the second term on the right-hand side depending on which leg $t_1$ is on. There are two possibilities. The first is that $t_1$ is on the forward leg, in which case $\langle T_C\,\hat a_\alpha^\dagger(t)\,\hat a_\gamma(t_1)\rangle_0 \to g^{<}$ and $\langle T_C\,\hat a_\beta^\dagger(t_1)\,\hat a_{\alpha'}(t')\rangle_0 \to g^{T}$. The second is that $t_1$ is on the backward leg, in which case $\langle T_C\,\hat a_\alpha^\dagger(t)\,\hat a_\gamma(t_1)\rangle_0 \to g^{\tilde T}$ and $\langle T_C\,\hat a_\beta^\dagger(t_1)\,\hat a_{\alpha'}(t')\rangle_0 \to g^{<}$. Because $t_1$ ranges over both legs (the contour runs from $-\infty$ to $\infty$ and back), we can combine these two possibilities in a single line as

$G^{<}_{\alpha,\alpha'}(t,t') = g^{<}_{\alpha,\alpha'}(t,t') + \sum_{\beta\gamma} M_{\beta\gamma}\int_{-\infty}^{\infty} dt_1\,\big[g^{<}_{\alpha\gamma}(t,t_1)\,g^{T}_{\beta\alpha'}(t_1,t') - g^{\tilde T}_{\alpha\gamma}(t,t_1)\,g^{<}_{\beta\alpha'}(t_1,t')\big] + \ldots$    (4.20)

Here is where the great simplification afforded by matrix multiplication becomes apparent. If we define the matrix

$\hat G_{\alpha\alpha'}(t,t') = \begin{pmatrix} G^{T}_{\alpha\alpha'}(t,t') & G^{>}_{\alpha\alpha'}(t,t') \\ G^{<}_{\alpha\alpha'}(t,t') & G^{\tilde T}_{\alpha\alpha'}(t,t') \end{pmatrix}$    (4.21)

then we immediately see that the information in Eq. (4.20) is encompassed in the equation

$\hat G_{\alpha\alpha'}(t,t') = \hat g_{\alpha\alpha'}(t,t') + \sum_{\beta\gamma} M_{\beta\gamma}\int_{-\infty}^{\infty} dt_1\,\hat g_{\alpha\gamma}(t,t_1)\,\hat g_{\beta\alpha'}(t_1,t') + \ldots$    (4.22)

where $\hat g$ is the matrix definition analogous to Eq.
(4.21) for the functions $g_{\alpha\alpha'}(t,t')$. It can be easily (if somewhat tediously) verified that Eq. (4.22) is correct for all the elements of $\hat G$. Indeed, it is also obvious how to extend the series to higher orders to obtain the full non-equilibrium Dyson equation in terms of these matrix correlation functions. Relation (4.22) holds even when the brackets $\langle\ldots\rangle$ indicate a thermal average, or any average over a mixed state, rather than simply the quantum average over the ground state. It is this method that we use in chapter 7 to determine the non-equilibrium correlations in the driven Bose-Hubbard model.

4.4 Equations of Motion

Having considered the types of observables of interest in non-equilibrium theory, we shall have to discuss the equations of motion that govern them. The density operator $\hat\rho(t)$ is a dynamically changing measure of the population of particles in each quantum state. When we discussed unitary dynamics, we noted that the density operator obeys the von Neumann [81] equation of motion, Eq. (2.22):

$i\hbar\frac{\partial}{\partial t}\hat\rho = [\hat H, \hat\rho]$    (4.23)

How will this change when we wish to include the effects of an external environment with which the system has not equilibrated? We find the answer by enlarging our perspective to include the environment. Let us define the Hamiltonian to have three parts:

$\hat H = \hat H_S + \hat H_R + \hat V$    (4.24)

Here we have combined the environment and the system of interest into a larger "total" system. The Hamiltonian that affects only the system is designated by $\hat H_S$. The Hamiltonian $\hat H_R$ governs the dynamics of the reservoir degrees of freedom only. The coupling between the system and the reservoir is given by $\hat V$, which we assume to be weak. This assumption is necessary almost by construction: by assuming that there is a way to meaningfully partition the total system into a system and a reservoir, we are implicitly assuming that the coupling between these two partitions is weak enough that the dynamics of the two can be separated (at least conceptually).
It would not make much sense to make an arbitrary distinction between particles in a confined gas when the interaction between particles in the two imaginary classes is as strong as the interaction among particles within the classes themselves. The density matrix describing the state of the system+reservoir will follow the von Neumann equation with the total Hamiltonian given in Eq. (4.24); the dynamics of this total system are unitary. However, remember that we are not interested in the dynamics of the total system. Only $S$ is of interest to us. We will want to coarse-grain over all the unnecessary information about the reservoir to get an equation of motion for $S$ alone. This equation will not be unitary [76, 12]. Following [25] with this objective in mind, we shall define the reduced density matrix $\hat\rho(t)$ for the system to be

$\hat\rho(t) = \mathrm{Tr}_R\{\hat\rho_{\mathrm{tot}}(t)\}$    (4.25)

where $\hat\rho_{\mathrm{tot}}$ denotes the density matrix of the combined system and reservoir. Then we shall transform into the interaction picture with respect to the coupling:

$\tilde\rho(t) = e^{i(\hat H_S + \hat H_R)t/\hbar}\,\hat\rho_{\mathrm{tot}}(t)\,e^{-i(\hat H_S + \hat H_R)t/\hbar}$    (4.26)

$\tilde V(t) = e^{i(\hat H_S + \hat H_R)t/\hbar}\,\hat V\,e^{-i(\hat H_S + \hat H_R)t/\hbar}$    (4.27)

As expected, in this picture the von Neumann equation retains only the dynamical dependence on the coupling term:

$\frac{\partial}{\partial t}\tilde\rho(t) = \frac{1}{i\hbar}[\tilde V(t), \tilde\rho(t)]$    (4.28)

The interaction picture is particularly useful because of our assumption that the coupling is weak. Because we have assumed that $\hat V$ is small compared to the other energy scales of the problem, the time evolution due to equation (4.28) will be slow, free of the rapid evolution due to the other terms in the Hamiltonian. We can integrate both sides of Eq. (4.28) in time to get

$\tilde\rho(t + \Delta t) - \tilde\rho(t) = \frac{1}{i\hbar}\int_t^{t+\Delta t} dt'\,[\tilde V(t'), \tilde\rho(t')]$    (4.29)

We can iterate this equation by substituting for $\tilde\rho(t')$ the right-hand side of Eq. (4.29) itself and using Eq. (4.28) again. This can be done to any desired order in $\tilde V$, but we will include only the second-order term here.
$\tilde\rho(t+\Delta t) = \tilde\rho(t) + \frac{1}{i\hbar}\int_t^{t+\Delta t} dt'\,[\tilde V(t'), \tilde\rho(t)] + \Big(\frac{1}{i\hbar}\Big)^2\int_t^{t+\Delta t} dt'\int_t^{t'} dt''\,[\tilde V(t'), [\tilde V(t''), \tilde\rho(t'')]]$    (4.30)

As stated in [25], equation (4.30) is exact, but in order to get anything useful from it we shall now have to assume certain properties of the reservoir. We first assert that the reservoir is large enough that the coupling to the system produces no change in the reservoir:

$\tilde\rho_R(t) = \mathrm{Tr}_S\{\tilde\rho(t)\} = \rho_R$    (4.31)

where $\rho_R$ is not a function of time. Secondly, we shall assume that the reservoir is in a stationary state; its density operator commutes with its Hamiltonian:

$[\rho_R, \hat H_R] = 0$    (4.32)

Thus $\rho_R$ is diagonal in the energy basis, and we can consider it to be a mixed state of energy eigenkets. Lastly, we shall make a strong assumption about the coupling. We shall assume that $\hat V$ is a product of a system observable $\hat A$, consisting of operators that act only on system states, and a reservoir observable $\hat R$. By construction, these operators commute with each other, and we will have

$\hat V = \hat A\,\hat R$    (4.33)

where the generalization to a sum of such products is easily accommodated. We also assume, with no loss of generality, that the average value of the bath operator vanishes. Writing the trace in the interaction picture yields

$\mathrm{Tr}\{\rho_R\,\tilde R(t)\} = 0$    (4.34)

Finally, we shall make the Markovian assumption [25, 4]. Looking at the reservoir correlation function $g(t,t') = \langle\tilde R(t)\tilde R(t')\rangle$, it is easy to prove that this function depends only on the difference $\tau = t - t'$, by virtue of the fact that the bath is in a stationary state ($[\rho_R, \hat H_R] = 0$). We have

$g(t,t') = \mathrm{Tr}_R\{\rho_R\,\tilde R(t)\,\tilde R(t')\}$    (4.35)

$= \mathrm{Tr}_R\{\rho_R\,e^{i\hat H_R t/\hbar}\,\hat R\,e^{-i\hat H_R(t-t')/\hbar}\,\hat R\,e^{-i\hat H_R t'/\hbar}\}$    (4.36)

$= \mathrm{Tr}_R\{\rho_R\,\tilde R(\tau)\,\tilde R(0)\} = g(\tau)$    (4.37)

We expected this from knowing that reservoirs at equilibrium (though we have not actually assumed our reservoir to be at equilibrium) have correlators that depend only on the difference between their time arguments.
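The stationarity argument of Eqs. (4.35)-(4.37) can be checked numerically. A minimal sketch with an assumed three-level bath (diagonal $\hat H_R$, a diagonal stationary $\rho_R$, and an illustrative real symmetric coupling operator $\hat R$; $\hbar = 1$, and all numbers are chosen only for the example):

```python
import cmath

# Toy stationary bath: H_R = diag(omega), rho_R = diag(p) (so that
# [rho_R, H_R] = 0), and a Hermitian coupling operator R. In the
# interaction picture, R_jk(t) = R_jk * exp(i (omega_j - omega_k) t).
omega = [0.0, 0.7, 1.9]
p = [0.5, 0.3, 0.2]
R = [[0.0, 1.0, 0.5],
     [1.0, 0.0, 2.0],
     [0.5, 2.0, 0.0]]

def g(t, tp):
    """Bath correlator g(t, t') = Tr{rho_R R(t) R(t')}."""
    total = 0j
    for j in range(3):
        for k in range(3):
            phase = cmath.exp(1j * (omega[j] - omega[k]) * (t - tp))
            total += p[j] * R[j][k] * R[k][j] * phase
    return total

# Stationarity: two time pairs with equal tau = t - t' give the same value.
g_a = g(5.0, 3.0)
g_b = g(12.5, 10.5)
```

With only three levels $g(\tau)$ remains quasi-periodic; a dense spectrum of many frequencies would make the phases interfere destructively at large $\tau$, which is the decay underlying the Markovian assumption discussed next.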
The Markovian assumption consists of the assertion that the bath has a very dense ensemble of energy levels. When $\tau$ gets large enough, the exponentials in the Fourier expansion of $g(\tau)$ destructively interfere, because the spread of frequencies in the expansion is very dense; thus $g(\tau) \to 0$ for times much bigger than some bath correlation time $\tau_c$. The Markovian assumption is that $\tau_c$ vanishes when compared to the relevant time-scales of the problem.

Under these assumptions, our goal is to use the exact equation (4.30) to determine a coarse-grained equation of motion for the system density operator $\tilde\rho_S(t)$, in which the degrees of freedom of the reservoir have been averaged over. Let us begin by writing our total density matrix as a sum of two parts:

$\tilde\rho(t) = \mathrm{Tr}_R\{\tilde\rho(t)\}\otimes\mathrm{Tr}_S\{\tilde\rho(t)\} + \tilde\rho_{\mathrm{correl}}(t)$    (4.38)

We have written the total density matrix as a product of the two reduced matrices plus some unknown contribution having to do with correlations between the system and the bath. We shall see that the Markovian assumption allows us to neglect the contribution from $\tilde\rho_{\mathrm{correl}}$. Let us return to Eq. (4.30). We shall assume that $\Delta t$ is small compared to the evolution time of the system $S$. If this is the case (meaning that $\hat V$ is sufficiently weak), then we may replace $\tilde\rho(t'')$ with $\tilde\rho(t)$. We have ignored the evolution of the system due to third-order and higher terms in $\hat V$, arguing that a substantial change in the system due to these terms takes a much longer time than we wish to consider. Now we can trace both sides of this equation over the bath degrees of freedom. Because the second term in Eq. (4.30) vanishes by Eq. (4.34), we have

$\frac{\Delta\tilde\rho_S}{\Delta t} = -\frac{1}{\hbar^2}\frac{1}{\Delta t}\int_t^{t+\Delta t} dt'\int_t^{t'} dt''\,\mathrm{Tr}_R\Big[\tilde V(t'), \big[\tilde V(t''), \tilde\rho_S(t)\otimes\rho_R + \tilde\rho_{\mathrm{correl}}(t)\big]\Big]$    (4.39)

where we have divided both sides by $\Delta t$ to form an equation reminiscent of a time derivative.
An order-of-magnitude estimate of the term due to the system-bath correlations (which must accrue from interaction during all times before t), including a contribution from the linear term in Eq. (2.22) (which cannot be assumed to vanish in this case), is given in [25] as

(Δρ̃/Δt)_correl ∼ (1/ħ²)(1/Δt) ∫_{−∞}^{t} dt″ ∫_t^{t+Δt} dt′ ⟨Ṽ(t″) Ṽ(t′)⟩_R.   (4.40)

Remembering that the bath correlators vanish for t′ − t″ ≫ τ_c, an order-of-magnitude estimate of this quantity is

(1/ħ²)(1/Δt) v² τ_c² = (1/T_S)(τ_c/Δt),   (4.41)

where v is the energy characterizing the strength of V̂ and 1/T_S ∼ v²τ_c/ħ². Thus, in the limit τ_c ≪ Δt ≪ T_S, where τ_c/Δt → 0, we may write the coarse-grained equation of motion for our system density matrix ρ̃ in the interaction picture with respect to V̂ as

Δρ̃/Δt = −(1/ħ²)(1/Δt) ∫_t^{t+Δt} dt′ ∫_t^{t′} dt″ Tr_R [Ṽ(t′), [Ṽ(t″), ρ̃(t) ⊗ ρ_R]].   (4.42)

This equation is not yet written in its most useful form; specifically, the right-hand side is not very intuitive. We would like to write it in a form that takes advantage of the bath correlation functions, which we know to depend only on τ = t′ − t″. To that end, let us change variables of integration so that the time integrations can be written as

∫_t^{t+Δt} dt′ ∫_t^{t′} dt″ = ∫_0^{Δt} dτ ∫_{t+τ}^{t+Δt} dt′.   (4.43)

Recalling that the bath correlators vanish for τ ≫ τ_c, we can extend the upper limit of the τ integration to infinity and the lower limit of the t′ integration to t, and the error will be negligible. Finally, we may expand the commutator in Eq. (4.42), yielding

Δρ̃/Δt = −(1/ħ²) ∫_0^∞ dτ (1/Δt) ∫_t^{t+Δt} dt′ { g(τ) [Ã(t′)Ã(t′−τ)ρ̃(t) − Ã(t′−τ)ρ̃(t)Ã(t′)] + g(−τ) [ρ̃(t)Ã(t′−τ)Ã(t′) − Ã(t′)ρ̃(t)Ã(t′−τ)] }   (4.44)

where the operators Ã are just the interaction-picture representations of the system operator in Eq. (4.33). Equation (4.44) is the quantum master equation [40, 4, 38]. It is an operator equation, so it can be cast in whatever basis we choose (most often the energy basis of eigenkets of the system Hamiltonian). The products of operators appearing in Eq.
(4.44) can be organized into what are called Lindblad operators [40]. Often (but not always) they can be identified as noise and dissipation terms. A much simpler Boltzmann-like equation is used in chapter 6 to model the energy distribution of quasiparticle excitations in optically confined fermions.

Chapter 5

Dark States, Coherent Population Transfer, and Bistability in Feshbach Coupled Fermions

5.1 Dark States and Coherent Population Transfer in Feshbach Coupled Fermions

5.1.1 Overview

As a demonstration of the unitary techniques available to model driven many-body systems, this chapter is concerned with the efficient association of Feshbach-coupled fermions into molecules. Large portions of the chapter are quoted from the author's publication, reference [78].

Association of ultracold atom pairs into diatomic molecules, via Feshbach resonance [93] or photoassociation [92], has made it possible to create coherent superpositions between atomic and molecular species at the macroscopic level. This ability is the key to applications that employ the principle of the double-pulse Ramsey interferometer [72] for observing coherent population oscillations between atoms and molecules [30, 56, 62]. A particular kind of state, the atom-molecule dark state, has been theoretically proposed [61, 60] and experimentally observed [99], in which population is trapped in a superposition between atom pairs and deeply bound molecules in the electronic ground state. Destructive interference leads to the vanishing population of the excited molecular level. Such a state is the generalization of the usual atomic dark state that lies at the heart of many exciting applications, including electromagnetically induced transparency, slow light propagation, and precision spectroscopy [85]. So far, the macroscopic atom-molecule dark state has only been studied in bosonic systems.
The purpose of this chapter is to show that, under proper conditions, an atom-molecule dark state also exists in fermionic systems, but with quite distinct properties compared with its bosonic counterpart.

To be specific, we consider a homogeneous atom-molecule system in which an excited molecular level |m⟩ is coupled both to a ground molecular level |g⟩ (bound-bound coupling) by a coherent laser field, and to two free atomic states of equal population, labeled |↑⟩ and |↓⟩ (bound-free coupling), via, for example, a photoassociation laser field. At zero temperature, bosonic molecules all condense into the zero-momentum state, whereas fermionic atoms occupy many momentum modes by virtue of the Pauli principle and are thus described by momentum continua of different internal states. This difference has two important ramifications.

The first is related to the formation of the dark state. As is known, two necessary ingredients for creating a macroscopic atom-molecule dark state are the coherence between its components and the generalized two-photon resonance which, unlike in the linear atomic model, becomes explicitly dependent on the atomic momentum. For bosons at zero temperature, since they all occupy the same zero-momentum mode, properly tuning the laser frequencies can make all the bosons satisfy the two-photon resonance simultaneously. For fermions, however, because of the existence of the Fermi sea, the same technique can only render a limited number of atoms with the "right" momentum to satisfy the two-photon resonance. Hence a macroscopic dark state involving all the particles in the system does not seem to be possible for fermions. This difficulty can be circumvented when the attractive interaction between atoms of opposite spins results in a fermionic superfluid state that can be regarded as a condensate of atomic Cooper pairs.
As we shall show below, such a fermionic superfluid, together with the ground molecular condensate, can form a macroscopic dark state under the two-photon resonance condition.

The second ramification of the momentum continuum is related to the collective excitations of the dark state. The excitation spectrum of the fermionic system is far more difficult to analyze than its bosonic counterpart. The zero-temperature spectrum of the bosonic system is discrete [60]. In contrast, the spectrum of the fermionic system is made up of both a discrete and a continuous part, and hence can be regarded as the nonlinear analog of the Fano-Anderson type of models in linear atomic and condensed-matter systems [35]. As we demonstrate later, this analogy significantly simplifies our understanding of the excitation spectrum while at the same time enabling us to gain profound insight into the dynamical properties of the fermionic dark state.

5.1.2 Mean-Field Hamiltonian

Let us begin with the mean-field Hamiltonian [28] written in the frame rotating at the laser frequency:

Ĥ = Σ_{k,σ} ε_k â†_{kσ} â_{kσ} + δ₀ b̂†_m b̂_m + (δ₀ + ε₀) b̂†_g b̂_g − Σ_k γ_k (Δ â†_{k↑} â†_{−k↓} + h.c.) + (Ω₀/2)(b̂†_m b̂_g + h.c.) + (1/√V) Σ_k g γ_k (b̂_m â†_{k↑} â†_{−k↓} + h.c.),   (5.1)

where â_{kσ} is the annihilation operator for an atom of spin σ (= ↑ or ↓) with momentum ħk and kinetic energy ε_k = ħ²k²/2m, and b̂_{m,g} is the annihilation operator for a bosonic molecule in state |m⟩ or |g⟩. We have neglected the Hartree mean-field potential, as it is usually weak for typical parameters. Here V is the system volume; ε₀ and Ω₀ (δ₀ and g) are respectively the detuning and coupling strength of the bound-bound (bound-free) transition; γ_k = exp[−k²/(2K_c²)] is the regularization function providing the momentum cutoff; and Δ = (U/V) Σ_k γ_k ⟨â_{−k↓} â_{k↑}⟩ is the gap parameter. The collisional interaction potential between atoms of opposite spins and the atom-molecule coupling are given by U(k, k′) = U γ_k γ_{k′} and g(k) = g γ_k, respectively, where U and g are momentum independent. Evidently, Eq.
(5.1) preserves the total atom number N = 2(⟨b̂†_m b̂_m⟩ + ⟨b̂†_g b̂_g⟩) + 2 Σ_k ⟨â†_{k↑} â_{k↑}⟩. The dynamics of the system is governed by the Heisenberg equations of motion for the operators. Replacing the Bose operators b̂_{m,g} by the related c-numbers c_{m,g} = ⟨b̂_{m,g}⟩/√V, and the Fermi operators â_{kσ}(t) by u_k(t) and v_k(t) through the Bogoliubov transformation

( â_{k↑}(t) )   ( u*_k(t)   v_k(t) ) ( α̂_{k↑}   )
( â†_{−k↓}(t) ) = ( −v*_k(t)  u_k(t) ) ( α̂†_{−k↓} )   (5.2)

with |u_k(t)|² + |v_k(t)|² = 1 and α̂_{kσ} being the Fermi quasiparticle operators, we obtain the following equations:

iħ dc_m/dt = δ₀ c_m + (Ω₀/2) c_g + g Δ/U,   (5.3a)
iħ dc_g/dt = (δ₀ + ε₀) c_g + (Ω₀/2) c_m,   (5.3b)
iħ du_k/dt = −ε_k u_k + γ_k (g* c*_m − Δ*) v_k,   (5.3c)
iħ dv_k/dt = ε_k v_k + γ_k (g c_m − Δ) u_k,   (5.3d)
Δ(t) = (U/V) Σ_k γ_k u*_k(t) v_k(t),   (5.3e)

where we have assumed that the state of the system is the quasiparticle vacuum annihilated by α̂_{kσ}. The stationary solutions to Eqs. (5.3) have the form c_{m,g}(t) = c^s_{m,g} e^{−2iμt/ħ}, Δ(t) = Δ_s e^{−2iμt/ħ}, u_k(t) = u^s_k e^{iE_k t/ħ} e^{iμt/ħ}, v_k(t) = v^s_k e^{iE_k t/ħ} e^{−iμt/ħ}, where quantities with the superscript s are time-independent. Inserting this stationary ansatz into Eqs.
(5.3) and searching for solutions with c^s_m = 0, we find that such a dark-state solution indeed exists as long as the generalized two-photon resonance condition

δ₀ + ε₀ = 2μ   (5.4)

Figure 5.1: (a) |c^s_g|²/n, μ, and Δ_s as functions of Ω₀. The dark-state solution at Ω₀ = 4.1 E_F is indicated by the vertical line, where |c^s_g|²/n = 0.067, μ = 0.87 E_F, and Δ_s = 0.17 E_F. (b)-(d) The ground population dynamics, where (b) δ₀ = −4.32 E_F, (c) δ₀ = −2.88 E_F, and (d) δ₀ = 0.00 E_F. The insets are the Fourier spectra of the corresponding population dynamics after t = t_s = 126.65 ħ/E_F.
We have used the following parameters: U₀ = 28.39 E_F/k_F³, g₀ = 15.68 E_F/k_F^{3/2}, n = 0.034 k_F³, and K_c = 14.4 k_F, where E_F and k_F are the Fermi energy and momentum, respectively.

is satisfied. Such a solution is given by |u^s_k|² = 1 − |v^s_k|² = (E_k + ε_k − μ)/(2E_k), with E_k = √[(ε_k − μ)² + |Δ_s|² γ_k²], where μ, Δ_s, and c^s_g are determined from the following equations, representing, respectively, (a) the destructive-interference condition leading to vanishing population in |m⟩,

(Ω₀/2) c^s_g = −(g/U) Δ_s,   (5.5)

(b) the gap equation,

1/U = (1/2π²) ∫_0^∞ [γ_k²/(2E_k)] k² dk,   (5.6)

and (c) the conservation of particle number,

n = N/V = 2|c^s_g|² + (1/2π²) ∫_0^∞ [1 − (ε_k − μ)/E_k] k² dk.   (5.7)

Equation (5.5) in particular demonstrates the coherent nature of the dark state: for a normal atomic Fermi gas (Δ_s = 0), which does not possess phase coherence, such a state is impossible, as Eq. (5.5) would imply vanishing population in the molecular level |g⟩ (c^s_g = 0).

5.1.3 Stationary Dark State Solution

An example of the dark-state solution obtained by solving Eqs. (5.5)-(5.7) self-consistently is shown in Fig. 5.1(a). To remove the ultraviolet divergence in the gap equation (5.6), we have followed the standard renormalization procedure and replaced U by ΓU₀, where U₀ is the physical two-body atomic collisional strength, Γ = 1/(1 + U₀ U_c^{−1}), and U_c^{−1} = mK_c/(4π^{3/2}ħ²) [57]. Further, by replacing g with Γg₀ while keeping the rest of the parameters unchanged, we can easily show that our results become independent of K_c. Figure 5.1(a) displays the ground molecular population of the dark state, |c^s_g|², the corresponding chemical potential μ, and the gap parameter Δ_s as functions of the bound-bound coupling strength Ω₀. In the limit Ω₀/(g₀√n) → ∞, we have |c^s_g|² → 0 and all the population is in a pure BCS atomic state; in the opposite limit Ω₀/(g₀√n) → 0, |c^s_g|² → 0.5 and all the population is in the ground molecular state.
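A minimal self-consistent solver for the dark-state equations (5.5)-(5.7) can be sketched as follows. The parameters are toy values (not those of Fig. 5.1), the bare U with the Gaussian cutoff is used in place of the renormalized ΓU₀, and units ħ = 2m = k_F = 1 (so ε_k = k² and E_F = 1) are assumptions of the sketch:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

# Units hbar = 2m = k_F = 1: eps_k = k^2, E_F = 1, n = k_F^3/(3 pi^2).
# Toy couplings (assumed): attractive U, bound-free g, bound-bound Omega0.
U, g, Omega0, Kc = 3.0, 1.0, 2.0, 14.4
n = 1.0 / (3.0 * np.pi**2)

gamma2 = lambda k: np.exp(-k**2 / Kc**2)                     # gamma_k^2
Ek = lambda k, mu, D: np.sqrt((k**2 - mu)**2 + D**2 * gamma2(k))

def residuals(x):
    mu, D = x
    # Gap equation (5.6): 1/U = (1/2pi^2) Int gamma_k^2 k^2 / (2 E_k) dk
    gap = quad(lambda k: gamma2(k) * k**2 / (2 * Ek(k, mu, D)),
               0, 100)[0] / (2 * np.pi**2) - 1.0 / U
    # Interference condition (5.5) fixes the ground-molecule amplitude:
    cg2 = (2 * g * D / (U * Omega0))**2
    # Number equation (5.7):
    atoms = quad(lambda k: (1 - (k**2 - mu) / Ek(k, mu, D)) * k**2,
                 0, 100)[0] / (2 * np.pi**2)
    return [gap, 2 * cg2 + atoms - n]

mu, D = fsolve(residuals, x0=[0.3, 0.3])
D = abs(D)                                # the sign of the gap is a gauge choice
cg2 = (2 * g * D / (U * Omega0))**2
print(f"mu = {mu:.3f} E_F,  Delta_s = {D:.3f} E_F,  2|cg|^2/n = {2*cg2/n:.3f}")
```

The same structure — a root search over (μ, Δ_s) with the molecular amplitude slaved to Δ_s through the interference condition — underlies the curves of Fig. 5.1(a); only the renormalized couplings differ.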
Thus, in principle, we can adiabatically convert the BCS atom pairs into a ground molecular BEC, or vice versa, by controlling the ratio Ω₀/(g₀√n) in the spirit of stimulated Raman adiabatic passage (STIRAP) [11]. Our use of STIRAP here, however, is for preparing a superposition, which is a prerequisite for demonstrating coherent oscillations in fermionic systems [101, 7, 9, 91]. Starting from t = 0 with a pure atomic BCS state at a relatively large Ω₀, we adiabatically decrease Ω₀ to 4.1 E_F at t = t_s [indicated in Fig. 5.1(b)-(d)] while maintaining the two-photon resonance condition (5.4) through a proper chirping of the laser frequency [60]. At t = t_s a dark state, indicated by the vertical lines in Fig. 5.1, has formed, with about 14% of the atoms converted to ground molecules. Next, immediately after t = t_s, we suddenly change Ω₀ from 4.1 to 4.6 E_F and keep it fixed at later times, while fixing all other parameters at their respective values at t_s. The dynamical response of the system is illustrated in Fig. 5.1(b)-(d), which displays the ground molecular population as a function of time, obtained by solving Eqs. (5.3).

5.1.4 Dynamic Simulations and Stability

From the dynamical simulation we see that the system follows the dark-state solution up to t = t_s, after which the sudden change of Ω₀ induces oscillations in the population. Note that although the dark-state solution does not depend explicitly on the detunings δ₀ and ε₀, which must satisfy Eq. (5.4), the population dynamics for t > t_s does depend on their specific values. Several conclusions can be drawn from Fig. 5.1(b)-(d). First, the atom-molecule dark state is robust: after the sudden "shake" at t_s, the system oscillates around its steady state. Second, the population oscillation occurs between the ground molecular state |g⟩ and the atomic state, while the excited molecular population (not shown in the figures) remains negligible.
Third, the oscillations are dominated by two frequencies whose values depend on the detunings, as indicated by the corresponding Fourier spectra shown in the insets.

To better understand these oscillations and gain insight into the dark states, we calculate the collective mode frequencies by linearizing Eqs. (5.3) around the dark-state solution. This procedure leads to a transcendental equation for the collective mode frequency ω, f(ω) = 0, where f(ω) is the 2×2 determinant

f(ω) = | A₊(ω)   B(ω)  |
       | B(ω)    A₋(ω) |   (5.8)

with

A_±(ω) = 1/U_eff(±ω) − ∫_0^∞ (dk k²/2π²) γ_k² [E_k² + (ε_k − μ)² ± ω(ε_k − μ)] / [E_k (ω² − 4E_k²)],
B(ω) = −∫_0^∞ (dk k²/2π²) γ_k² (γ_k Δ_s)² / [E_k (ω² − 4E_k²)],

where the vertical delimiters indicate the determinant of the matrix and U_eff(ω) = U + ωg² [ω(ω + 2δ₀) − |Ω₀|²/4]^{−1}. Here the integrals in the diagonal elements are automatically renormalized, since ΓU_eff(ω) scales as U⁰_eff(ω), where U⁰_eff(ω) = U₀ + ωg₀²/[ω(ω + 2δ₀′) − |Ω₀|²/4] with δ₀′ = δ₀ + g₀²/U_c.

Before examining f(ω) in detail, we first make a remark. As we have mentioned, our dark state reduces to a pure BCS state in the limit Ω₀/(g₀√n) → ∞. In this case U_eff → U₀, which is independent of ω. As is known [97], the collective excitation spectrum of a BCS state contains a continuous part and a discrete mode lying just below the continuum threshold at 2Δ_s. Due to the coupling between the discrete (molecular) states and the continuum (atomic) states, the problem at hand bears much resemblance to the energy diagonalization of Fano-Anderson-type Hamiltonians in linear atomic and condensed-matter systems [35]. In analogy with those problems, such discrete-continuum coupling may lead to drastic modifications of both parts of the excitation spectrum. Mathematically, this coupling gives rise to the ω dependence of U_eff and introduces extra poles in f(ω). We now examine the spectrum by finding the roots of Eq. (5.8). Since f(ω) is an even function of ω, we concentrate only on the positive-frequency branch.
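Numerically, the discrete roots of a transcendental equation of this type are conveniently located by bracketing sign changes between adjacent poles. The sketch below does this for a toy stand-in with the same pole structure as Eq. (5.8) — a single scalar resonance sum rather than the full determinant, with all weights and parameters assumed:

```python
import numpy as np
from scipy.optimize import brentq

# Toy stand-in for f(omega): same pole structure (omega = 2 E_k plus the
# molecular pole hidden in U_eff), but a scalar sum instead of the 2x2
# determinant. All numbers are illustrative assumptions.
E = np.linspace(1.0, 2.0, 40)          # discretized quasiparticle energies
cw = np.full(E.size, 0.05)             # coupling weights (assumed)
U0, g0, delta0, Omega0 = 3.0, 1.0, 0.5, 2.0

def f(w):
    Ueff = U0 + w * g0**2 / (w * (w + 2 * delta0) - Omega0**2 / 4)
    return 1.0 / Ueff - np.sum(cw / (w**2 - 4 * E**2))

# One root is trapped between each pair of adjacent poles at w = 2 E_k,
# since f runs from -inf to +inf across each interval:
roots = [brentq(f, 2 * a + 1e-9, 2 * b - 1e-9)
         for a, b in zip(E[:-1], E[1:])
         if f(2 * a + 1e-9) * f(2 * b - 1e-9) < 0]
print(len(roots))   # 39: one root per interval for these parameters
```

In the continuum limit these trapped roots merge into the continuous part of the spectrum, while intervals hosting two roots (as discussed below for Fig. 5.2) signal an extra discrete mode split off by the discrete-continuum coupling.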
The function f(ω) is plotted in Fig. 5.2. The left panel [Fig. 5.2(a)-(c)] shows the low-frequency part. Here, just as in the pure BCS model, one isolated mode lies not far below the continuum threshold. As the free-bound detuning becomes more negative, this mode decreases and shifts further away from the continuum. In the right panel [Fig. 5.2(d)-(f)], we show the high-frequency part. Here, the vertical lines are the poles determined by ω = 2E_k at discrete momenta. Typically, a single root is trapped between two adjacent poles; these roots form a continuum in the continuous-k limit. This pattern of root distribution is, however, broken in the region indicated by the arrow, where two roots exist between two adjacent poles. In the continuous-k limit, one of the two roots joins the continuum while the other becomes part of the discrete spectrum. The two discrete modes (one shown in the left and the other in the right panel) are the ones that determine the dynamical population oscillations shown in Fig. 5.1(b)-(d), while the contribution from the continuous part of the spectrum may, due to destructive interference, lead to a power-law decay of the oscillations on a longer time scale [97, 101].

5.1.5 Summary

In summary, we have shown that it is possible to construct a macroscopic atom-molecule dark state in a fermionic superfluid. The superfluidity of the fermionic atoms is a necessary ingredient for such a state; therefore, characteristics of the dark state may serve as a diagnostic tool for Fermi superfluids. Via direct dynamical simulation, we have shown that the dark state is quite robust. By perturbing the state, we are able to generate coherent oscillations reminiscent of the oscillating current across Josephson junctions. A remarkable feature here is that the population oscillation occurs between the ground molecules and the BCS atom pairs, while the excited molecular population remains highly suppressed.
This has the obvious advantage of preserving the atom-molecule coherence for a time much longer than the excited molecular lifetime. Thus, this technique has the potential to increase the sensitivity in interference-based high-precision measurements. In particular, the low frequency mode is directly related to the gap parameter s, and measurement 70 s56s46s51s48 s56s46s51s53 s56s46s52s48 s56s46s52s53 s56s46s53s48 s45s48s46s48s48s48s49s48 s45s48s46s48s48s48s48s53 s48s46s48s48s48s48s48 s48s46s48s48s48s48s53 s48s46s48s48s48s49s48 s45s48s46s48s48s49s48 s45s48s46s48s48s48s53 s48s46s48s48s48s48 s48s46s48s48s48s53 s48s46s48s48s49s48 s55s46s48s53 s55s46s49s48 s55s46s49s53 s55s46s50s48 s55s46s50s53 s45s48s46s48s48s48s49s48 s45s48s46s48s48s48s48s53 s48s46s48s48s48s48s48 s48s46s48s48s48s48s53 s48s46s48s48s48s49s48 s48s46s51s48 s48s46s51s49 s48s46s51s50 s48s46s51s51 s48s46s51s52 s48s46s51s53 s48s46s51s54 s45s48s46s48s48s49s48 s45s48s46s48s48s48s53 s48s46s48s48s48s48 s48s46s48s48s48s53 s48s46s48s48s49s48 s52s46s56s48 s52s46s56s50 s52s46s56s52 s52s46s56s54 s52s46s56s56 s52s46s57s48 s52s46s57s50 s52s46s57s52 s45s48s46s48s48s48s49s48 s45s48s46s48s48s48s48s53 s48s46s48s48s48s48s48 s48s46s48s48s48s48s53 s48s46s48s48s48s49s48 s45s48s46s48s48s49s48 s45s48s46s48s48s48s53 s48s46s48s48s48s48 s48s46s48s48s48s53 s48s46s48s48s49s48 s40s100s41 s102s40 s41 s40s98s41 s48 s61s32s45s50s46s56s56s32s69 s70 s102s40 s41 s40s101s41 s102s40 s41 s40s99s41 s48 s61s32s48s46s48s32s69 s70 s102s40 s41 s32s47s32s69 s70 s40s102s41 s102s40 s41 s32s47s32s69 s70 s48 s61s32s45s52s46s51s50s32s69 s70 s102s40 Figure 5.2: The sections of f (!) containing the low frequency root (left column) and the high frequency root (right column). (a) and (d) are for 0 = 4:32 EF, (b) and (e) are for 0 = 2:88 EF, and (c) and (f) are for 0 = 0:00 EF. Other parameters are the same as in Fig. 5.1. 71 of this frequency will allow us to gain insight into the atom-atom and atom-molecule interactions as they will strongly a ect s. 
5.2 Matter-Wave Bistability in Feshbach Coupled Fermions

5.2.1 Overview

Here we study matter-wave bistability in coupled atom-molecule quantum gases, in which heteronuclear molecules are created via an interspecies Feshbach resonance involving either two-species Bose or two-species Fermi atoms at zero temperature. We show that the resonant two-channel Bose model is equivalent to nondegenerate parametric down-conversion in quantum optics, while the corresponding Fermi model can be mapped to a quantum optics model describing a single-mode laser field interacting with an ensemble of inhomogeneously broadened two-level atoms. Using these analogies, and the fact that both models are subject to a Kerr nonlinearity due to two-body s-wave collisions, we show that under proper conditions the population in the molecular state in both models can be made to change with the Feshbach detuning in a bistable fashion. Large portions of the chapter are quoted from the author's publication, reference [52].

The ability to cool and trap neutral atoms down to the quantum degenerate regime has created a host of new and exciting problems that are increasingly interdisciplinary, bridging in particular atomic, molecular, and optical physics and condensed matter physics. The rich knowledge and experience accumulated over the past several decades in these fields have dramatically accelerated the progress of ultracold atomic physics. An example that serves to illustrate how these interdisciplinary fields learn and benefit from each other is the phenomenon of atomic pairing, where a bosonic molecule is coupled to two bosonic or fermionic constituent atoms via Feshbach resonance or photoassociation. So far this is the only viable approach to creating ultracold molecules. It is also an ideal test ground for studying coupled atom-molecule condensates and the BCS-BEC crossover [23].
The latter is thought to underlie the mechanism of high-temperature superconductors and is extensively studied in the realm of condensed matter physics. In addition, the coupled atom-molecule systems have deep quantum optical analogies [50, 94]: bosonic molecules coupled to bosonic atoms (which we will refer to as the bosonic model) are the matter-wave analog of parametric coupling of photons, which has important applications in generating nonclassical light fields and, more recently, in quantum information science; the system of bosonic molecules coupled to fermionic atoms (which we will refer to as the fermionic model) can be mapped to the Dicke model, in which a light field interacts with an ensemble of two-level atoms, a model of fundamental importance in the field of quantum optics.

In this work, we further explore these quantum optical analogies of the atom-molecule system and focus on the important effects of binary collisional interactions between atoms, which were largely ignored in previous studies [50, 94]. We show that the atom-atom interaction introduces extra nonlinear terms which, under certain conditions, give rise to matter-wave bistability in both the bosonic and fermionic models. Hence we may establish a connection between coupled atom-molecule quantum gases and the nonlinear bistable systems [41] that were extensively studied in the 1980s in the context of nonlinear optics, owing both to their fundamental interest and to their many practical applications in fast optical switches, optical memory, laser pulse shaping, etc.

5.2.2 Bosonic model

In what we call the bosonic model, a molecule associated with annihilation operator â_m is coupled to two non-identical atoms labeled |↑⟩ and |↓⟩, with corresponding annihilation operators â_↑ and â_↓, respectively.
Here we consider two types of atoms in order to make direct comparisons with the fermionic model treated in the next section, for which only unlike fermionic atoms can pair with each other to form a bosonic molecule. Furthermore, in this work we consider only the zero-temperature homogeneous case, so that all the bosons are condensed into zero center-of-mass momentum states. The second-quantized Hamiltonian reads

Ĥ = ε â†_m â_m + g (â†_m â_↑ â_↓ + h.c.) + Σ_{i,j} χ_{ij} â†_i â†_j â_j â_i,   (5.9)

where the detuning ε represents the energy difference between the molecular and atomic levels, which can be tuned by an external field, g is the atom-molecule coupling strength, and χ_{ij} = χ_{ji} is the s-wave collisional strength between modes i and j. This system has been studied in Ref. [103]. For completeness, and for better comparison with the fermionic model, we briefly state the main results relevant to the focus of this work — matter-wave bistability — and direct readers to Ref. [103] for more details.

For our purpose, we take the standard mean-field approximation and replace the operators â_j with c-numbers a_j = √(N_j) e^{iφ_j}. The mean-field Hamiltonian takes the form

H = 2λ(y² − y) + 2δ̄y + (1 − 2y)√(2y) cos φ,   (5.10)

where N ≡ N_↑ + N_↓ + 2N_m is a constant of the motion representing the total number of atoms, and we have assumed that the numbers of atoms in states |↑⟩ and |↓⟩ are equal, i.e., N_↑ = N_↓. The quantities

y = 0.5 [1 − (N_↑ + N_↓)/N] = N_m/N,   φ = φ_↑ + φ_↓ − φ_m,   (5.11)

are a pair of conjugate variables representing the molecular population and the phase mismatch, respectively. The remaining quantities are defined as

G = g√(2N),
λ = N (χ_↑↑ + χ_↓↓ + χ_mm + 2χ_↑↓ − 2χ_m↑ − 2χ_m↓)/G,
δ̄ = [ε + χ_↑↑ + χ_↓↓ + (N − 1)χ_mm − Nχ_m↑ − Nχ_m↓]/G.

We will focus on the stationary states with φ = π, which have lower energies than those with φ = 0.

5.2.2.1 Quantum Optical Analogy

It is quite clear from the form of the second-quantized Hamiltonian in Eq.
(5.9) that without the collisional terms our model reduces to the trilinear Hamiltonian describing nondegenerate parametric down-conversion in quantum optics [31, 65]. In this analogy, the molecular mode plays the role of the pump photon, while the two atomic modes are the signal and idler photons, respectively. The collisional terms correspond to the Kerr-type cubic nonlinearity that arises in the optical system when the light fields propagate in a nonlinear medium [89].

5.2.2.2 Bistability

In the absence of collisions or Kerr nonlinearity (i.e., λ = 0), the system does not exhibit bistability. This can be seen by studying the properties of the mean-field Hamiltonian H in Eq. (5.10), which simplifies (taking φ = π) to

H = 2δ̄y − (1 − 2y)√(2y).   (5.12)

The stationary state corresponds to the solution of

∂H/∂y = 2δ̄ + 3√(2y) − 1/√(2y) = 0.   (5.13)

For a given detuning δ̄, the stationary state is unique:

y₀(δ̄) = 0.5 for δ̄ ≤ −1;  y₀(δ̄) = (1/18)(√(δ̄² + 3) − δ̄)² for δ̄ > −1.   (5.14)

For λ ≠ 0, using Eq. (5.10), the stationarity condition is

∂H/∂y = 2δ̄′ + 3√(2y) − 1/√(2y) = 0,   (5.15)

where we have defined

δ̄′ = δ̄ + λ(2y − 1).   (5.16)

Note that Eqs. (5.13) and (5.15) have the same form. In other words, we can express the effect of collisions as a nonlinear phase shift for the molecules that modifies the detuning δ̄. Consequently, the stationary solution for λ ≠ 0 has the same form as Eq. (5.14), but with δ̄ replaced by δ̄′, which makes y₀ an implicit function of the detuning δ̄. To find the explicit dependence of y₀ on δ̄, we can use the graphical method illustrated in Fig. 5.3. For the example given there, we obtain three stationary states. Further analysis shows that the middle solution is dynamically unstable and the other two are stable solutions [103].

Figure 5.3: For given δ̄ and λ, the thick solid line represents y₀(δ̄′) and the thin dashed straight line represents Eq. (5.16). The intersections are the stationary solutions. Here we take 1/(2λ) = −0.1 and δ̄ = 0.4.
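The intersection counting behind the graphical construction is easy to reproduce numerically. The sketch below uses toy values λ = −5 and δ̄ = −2.5, chosen to land in the bistable window (these are assumptions of the sketch, not the figure's parameters), and contrasts them with a λ that violates the bistability condition:

```python
import numpy as np
from scipy.optimize import brentq

def y0(d):
    # Eq. (5.14), with the effective detuning delta_bar' as argument.
    return 0.5 if d <= -1 else (np.sqrt(d**2 + 3) - d)**2 / 18

def G(y, lam, dbar):
    # Self-consistency: y must equal y0 evaluated at Eq. (5.16).
    return y0(dbar + lam * (2 * y - 1)) - y

def stationary_states(lam, dbar, ngrid=2000):
    ys = np.linspace(0.0, 0.5, ngrid)
    vals = [G(y, lam, dbar) for y in ys]
    roots = [brentq(G, a, b, args=(lam, dbar))
             for a, b, fa, fb in zip(ys[:-1], ys[1:], vals[:-1], vals[1:])
             if fa * fb < 0]
    if abs(vals[-1]) < 1e-12:          # all-molecule branch y = 1/2
        roots.append(0.5)
    return roots

print(len(stationary_states(-5.0, -2.5)))   # 3 stationary states: bistable
print(len(stationary_states(-0.5, -2.5)))   # 1: condition lambda < -1 violated
```

Scanning δ̄ at fixed λ < −1 in the same way traces out the upper and lower stable branches and the hysteresis window of the bistable response.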
Such behavior is typical of bistable systems [41]. The graphical construction of Fig. 5.3 also shows that, in order to have multiple stationary solutions, the slope of the straight line (given by 1/(2λ)) must be negative and cannot be too steep. More specifically, the slope of the straight line has to be larger than the slope of the curve at δ̄′ = −1, which equals −1/2; this leads to the condition

λ < −1   (5.17)

for the system to exhibit bistability.

5.2.3 Fermionic model

In the fermionic model, we denote by â_{kσ} the annihilation operator for an atom with spin σ (= ↑, ↓), momentum ħk, and energy ε_k = ħ²k²/(2m), and, as before, by â_m the annihilation operator for a molecule in state |m⟩ with zero momentum. The second-quantized Hamiltonian reads

Ĥ = Σ_{k,σ} ε_k â†_{kσ} â_{kσ} + (U/V) Σ_{k,k′,q} â†_{k↑} â†_{−k+q↓} â_{−k′+q↓} â_{k′↑} + ε â†_m â_m + (g/√V) Σ_k (â†_m â_{−k↓} â_{k↑} + h.c.),   (5.18)

where V is the quantization volume. Hamiltonian (5.18) has the form of the two-channel model of the BCS-BEC crossover in which only the condensed molecular part is retained [24]. Following the Hartree-Fock-Bogoliubov mean-field approach [28], we divide the two-body collision into a part related to the BCS gap potential,

Δ = Up,  where p = Σ_k ⟨â_{−k↓} â_{k↑}⟩/V,   (5.19)

and a part related to the Hartree potential,

V_h = U Σ_{k,σ} ⟨â†_{kσ} â_{kσ}⟩/(2V).   (5.20)

Again assuming equal populations in the |↑⟩ and |↓⟩ atomic states, i.e., ⟨â†_{k↑}â_{k↑}⟩ = ⟨â†_{k↓}â_{k↓}⟩, we may express the Hamiltonian as

Ĥ = Σ_{k,σ} (ε_k + V_h) â†_{kσ} â_{kσ} + ε â†_m â_m + Σ_k [(Up + g â_m/√V) â†_{k↑} â†_{−k↓} + h.c.].   (5.21)

Defining N̂ = 2 â†_m â_m + Σ_{k,σ} â†_{kσ} â_{kσ} as the number operator, which is a constant of the motion, we may rewrite the term proportional to V_h in (5.21) as

Σ_{k,σ} V_h â†_{kσ} â_{kσ} = V_h (N̂ − 2b̂†b̂) = V_h N̂ − (Un − 2U⟨b̂†b̂⟩/V) b̂†b̂,   (5.22)

where b̂ ≡ â_m and n = ⟨N̂⟩/V is the constant total atom number density. In our derivation, V_h arises from the two-body atom-atom collision. In general, additional terms representing atom-molecule and molecule-molecule collisions are also present.
These additional terms will modify the coefficient U in the definition of V_h [Eq. (5.20)], which is the counterpart of λ in the bosonic model, but the general form of Eq. (5.22) will remain valid. In the following, we will refer to this term as the collisional term. Through Eq. (5.22), we have expressed the effect of the two-body collisions as a nonlinear energy shift of the molecules (along with a constant energy bias V_h⟨N̂⟩), in complete analogy with the bosonic model. We remark that in the usual one-channel model of mean-field BCS theory, valid when the molecular population is negligible, the collisional term represents merely an unimportant constant energy shift.

As usual, â_{kσ}(t) and â_m(t) obey the Heisenberg equations of motion generated by Hamiltonian (5.21). Replacing the Bose operator â_m by the related c-number c = ⟨â_m⟩/√V, and the Fermi operators â_{kσ}(t) by the familiar u_k(t) and v_k(t) parameters through the Bogoliubov transformation â_{k↑} = u*_k α̂_{k↑} + v_k α̂†_{−k↓} and â†_{−k↓} = −v*_k α̂_{k↑} + u_k α̂†_{−k↓}, where α̂_{kσ} are the Fermi quasiparticle operators, we arrive at the following set of mean-field equations of motion:

iħ ċ = ε̃ c + g p,   (5.23a)
iħ u̇_k = −ε_k u_k + Δ̃* v_k,   (5.23b)
iħ v̇_k = ε_k v_k + Δ̃ u_k,   (5.23c)

where p = Σ_k u*_k v_k/V, Δ̃ = gc + Up, and

ε̃ = ε − Un + 2U|c|²   (5.24)

is the effective detuning, which contains a Kerr nonlinear term 2U|c|² whose origin can be traced to the two-body collisional shift. This set of equations describes the dynamics at zero temperature, where the state of the system can be described as the quasiparticle vacuum.

5.2.3.1 Quantum Optical Analog

In several previous studies in which the collisional term was neglected, it was pointed out that the fermionic model can be mapped to the Dicke model of quantum optics [86, 50], as shown schematically in Fig. 5.4 (see below for details). In fact, this model was recently shown to display collective dynamics similar to photon echo and soliton-like oscillations in transient collective coherent optics [8].
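A quick numerical check that Eqs. (5.23b) and (5.23c) preserve the Bogoliubov normalization |u_k|² + |v_k|² = 1 for any drive — here a single mode with an arbitrary, assumed pairing field Δ̃(t) standing in for gc + Up, with ħ = 1:

```python
import numpy as np
from scipy.integrate import solve_ivp

eps_k = 1.3                                     # toy single-mode energy
Dt = lambda t: 0.4 * np.exp(-2j * t) + 0.1      # stand-in for g c + U p

def rhs(t, y):
    u, v = y
    # Eqs. (5.23b), (5.23c) with hbar = 1:
    return [-1j * (-eps_k * u + np.conj(Dt(t)) * v),
            -1j * (eps_k * v + Dt(t) * u)]

sol = solve_ivp(rhs, (0.0, 20.0), [1.0 + 0j, 0.0 + 0j],
                rtol=1e-10, atol=1e-12, dense_output=False)
norm = np.abs(sol.y[0])**2 + np.abs(sol.y[1])**2
print(np.max(np.abs(norm - 1)))   # numerically zero: the evolution is unitary
```

This invariant is what permits the pseudospin (Bloch-vector) reparametrization introduced next, since (|u|² − |v|²)² + |2u*v|² = (|u|² + |v|²)² = 1.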
Such a connection can be traced to Anderson's spin analogy [6] for the BCS problem.

Figure 5.4: Mapping of the two-channel resonant Fermi superfluid model to the Dicke model. The bosonic molecules and the fermionic atoms in the former are mapped to the cavity laser field and an ensemble of two-level atoms in the latter, respectively. See text for details.

To show what the quantum optical analog of the collisional term is, let us rewrite Eqs. (5.23) in a form more familiar in cavity optics. To this end, we first introduce a set of new variables,

P_k = 2u*_k v_k,  D_k = |u_k|² − |v_k|²,  E_L = −2iΔ̃,

and recast Eqs. (5.23b) and (5.23c) into

ħ Ṗ_k = −2iε_k P_k + E_L D_k,   (5.25a)
ħ Ḋ_k = −(E_L* P_k + E_L P_k*)/2.   (5.25b)

Interpreting P_k and D_k as the microscopic polarization and population inversion, respectively, Eqs. (5.25) become the optical Bloch equations describing the interaction between a local electromagnetic field E_L and a fictitious two-level atom with transition energy 2ε_k [3]. This analogy is consistent with the fact that there exists a one-to-one mapping between pairs of fermion operators and Pauli matrices when the BCS pairing mechanism is taken into account [6].

In this optical analogy, the local electric field E_L = E + E_i contains two contributions, because Δ̃ = gc + Up. The first of these (E = −2igc) is equivalent to an average macroscopic field whose dynamics is described by Eq. (5.23a), which can now be interpreted as the Maxwell equation for the cavity field E, with cavity detuning ε̃, driven by the macroscopic polarization density p = Σ_k P_k/(2V) of an inhomogeneously broadened medium [see Fig. 5.4]. The second part, E_i = −2iUp, may be regarded as the internal field at the atom due to the collective dipole polarization of the nearby two-level atoms in the ensemble. As such, E_L = E + E_i here bears
Note that had the collisional term been neglected (i.e., U = 0), there would have been no internal field contribution, nor would there have been a Kerr nonlinearity in the equation for the bosonic mode. For $U \neq 0$, both of these terms are present. Under such circumstances, Eqs. (5.23a) and (5.25) represent the generalized optical Bloch equations in which the Lorentz-Lorenz relation is explicitly incorporated [15], and hence they can lead to interesting nonlinear phenomena just as they do in optical systems.

5.2.3.2 Bistability

Having established this analogy, we now look for the steady-state solution of Eqs. (5.23a) and (5.25). As is well known, the operating frequency of a laser field is not known a priori; it is established through so-called mode pulling, the dynamical competition between atomic and cavity resonances. A similar argument holds for the molecular field c. For this reason, we adopt the following steady-state ansatz:

$c \to c\,e^{-2i\mu t/\hbar}, \quad P_k \to P_k\,e^{-2i\mu t/\hbar}, \quad D_k \to D_k,$

where the same symbols are used for both dynamical and steady-state variables for notational simplicity. The molecular chemical potential, $2\mu$, is just the corresponding lasing frequency in the cavity optics model. From the steady-state equations obtained by inserting this stationary ansatz into Eqs. (5.23a) and (5.25), we easily find that (a) there always exists a trivial solution, or "non-lasing" state, with $\tilde{\Delta}_e = 0$, or equivalently $c = 0$, which corresponds to the non-superfluid normal Fermi sea; and (b) a non-trivial solution with its $\mu$, $\tilde{\Delta}_e$, and $c$ determined self-consistently from the gap equation

$\frac{1}{U - g^2/(\tilde{\delta} - 2\mu)} = \frac{1}{2V}\sum_k \frac{1}{E_k}$ (5.26)

with $E_k = \sqrt{(\varepsilon_k - \mu)^2 + \tilde{\Delta}_e^2}$, the number equation

$2|c|^2 + \frac{1}{V}\sum_k\left(1 - \frac{\varepsilon_k - \mu}{E_k}\right) = n,$ (5.27)

and an auxiliary relation

$|g\tilde{\Delta}_e| = \left|c\,(\tilde{\delta} - 2\mu)\left[U - g^2/(\tilde{\delta} - 2\mu)\right]\right|.$ (5.28)

The integral in the gap equation (5.26) is known to be ultraviolet divergent under the assumption of a contact interaction.
To eliminate this problem, we renormalize the interaction strengths U and g, as well as the detuning $\delta$ in (5.26), while U in the collisional term is replaced by the background interaction strength $U_0$ [57, 47]. Note that there exists, in single-mode inhomogeneously-broadened laser theory [1], a similar set of steady-state integral equations, which, because lasers are open systems, are obtained under different considerations. For example, the requirement that the cavity loss balance the saturated gain leads to the "gap" equation, whose primary role is to limit the laser intensity, while the phase-matching condition translates into the "number" equation, whose main responsibility is to assign the amount of mode pulling of the laser field relative to the cavity resonance.

An alternative way to derive Eqs. (5.26)-(5.28) is from the energy density. The zero-temperature energy density $f(\tilde{\Delta}_e, c, \mu) \equiv \langle\hat{H}\rangle/V$ can be calculated using

Figure 5.5: Free energy density f as a function of $\tilde{\Delta}_e$ and $|c|^2$ at $\delta = 0.02$ (a) and $\delta = 0.2$ (b). Extremum points are indicated by 'x' (minimum) and '+' (saddle point). f, $\tilde{\Delta}_e$, and $\delta$ are all in units of $E_F = (3\pi^2 n)^{2/3}/(2m)$, the Fermi energy of the non-interacting system. In all the examples shown in this paper, the physical parameters corresponding to $g_0$ and $U_0$ are $1.2\,E_F/k_F^{3/2}$ and $60.7\,E_F/k_F^3$, respectively.

Figure 5.6: Molecular population $|c|^2$ as a function of detuning. The vertical line in (b) indicates the critical point of a first-order phase transition. The collisional term is neglected in (a) and included in (b).

Hamiltonian (5.21) and the Bogoliubov transformation as [47]

$f = \frac{1}{V}\sum_k(\varepsilon_k - \mu - E_k) - \frac{(\tilde{\Delta}_e - gc)^2}{U} + (\tilde{\delta} - 2\mu)|c|^2 + \mu n.$ (5.29)

The extremum conditions $\partial f/\partial\tilde{\Delta}_e = \partial f/\partial c = 0$ lead to Eqs. (5.26) and (5.28), respectively, while the condition $\partial f/\partial\mu = 0$ results in the number equation (5.27). Figure 5.5 illustrates the energy density in the $|c|^2$-$\tilde{\Delta}_e$ plane for different detunings $\delta$.
For any given pair $(c, \tilde{\Delta}_e)$, $\mu$ is calculated self-consistently using the number equation (5.27). Typically, f has only one extremum, which is a minimum, as shown in Fig. 5.5(a). However, in the regime $\delta \in (-0.08, 0.13)E_F$, f possesses three extrema: two of them are local minima and the third is a saddle point. An example with $\delta = 0.02$ is shown in Fig. 5.5(b).

To gain more insight into the bistable behavior, we may carry out an analysis analogous to that of Sec. 5.2.2.2. In the absence of the collisional term, the steady-state molecular population $|c|^2$ is a smooth, monotonically decreasing function of $\delta$, and the system does not exhibit bistability: as $\delta$ increases, molecules decompose into atoms. This is shown in Fig. 5.6(a). When the collisional term is included, the relevant equations of motion maintain the same forms if we replace $\delta$ with

$\delta' = \delta + 2U_0|c|^2.$ (5.30)

Hence the solution $|c|^2$ as a function of $\delta'$ is represented by the same curve as in Fig. 5.6(a). To find $|c|^2$ as a function of $\delta$, we need to find the intersections between this curve and the straight line representing Eq. (5.30). In direct analogy to the graphical method of Fig. 5.3, for $U_0$ sufficiently large and negative these two curves have three intersections, and the system exhibits bistability. One example is shown in Fig. 5.6(b). The vertical line in Fig. 5.6(b) indicates the critical point of a first-order phase transition: across this line, the ground state jumps from the upper branch to the lower one. For the parameters used, this occurs at $\delta_c = 0.01E_F$.

To check the stability of these steady states, we have solved the dynamical equations (5.23) using slightly perturbed steady-state solutions as initial conditions. From the dynamical evolution of the system one can see that, just as in the bosonic model, the states in the upper and lower branches are dynamically stable: when slightly perturbed, they exhibit damped oscillations around their equilibrium values.
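The graphical construction described above can be mimicked with a toy model. In the sketch below, which is illustrative only (the smooth step $n_c(\delta')$ standing in for the curve of Fig. 5.6(a), its width, and the values of $\delta$ and $U_0$ are all invented for the demonstration), the steady-state population solves $|c|^2 = n_c(\delta + 2U_0|c|^2)$. Counting sign changes of the residual shows one solution for $U_0 = 0$ and three for a sufficiently large negative $U_0$, the hallmark of bistability.

```python
import numpy as np

def n_c(dp):
    """Toy monotonically decreasing steady-state curve |c|^2(delta')."""
    return 0.5 * (1.0 - np.tanh(dp / 0.2))

def count_solutions(delta, U0):
    """Count roots of x = n_c(delta + 2*U0*x) by sign changes on a grid."""
    grid = np.linspace(0.0, 1.0, 2001)
    F = grid - n_c(delta + 2.0 * U0 * grid)     # residual of the fixed point
    return int(np.sum(np.sign(F[1:]) != np.sign(F[:-1])))

print(count_solutions(0.9, 0.0))    # no collisional shift: a single solution
print(count_solutions(0.9, -1.0))   # strong negative U0: three solutions
```

The middle of the three intersections corresponds to the unstable branch; the outer two are the stable upper and lower branches discussed in the text.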
These oscillations can be further understood from the excitation spectrum of the corresponding steady state, which can be obtained using a linear stability analysis, also the standard tool for studying laser instabilities [1, 67]. The spectrum is found to contain a discrete part, which determines the oscillation frequencies, and a continuous part, which contributes to the damping of these oscillations on a much longer time scale [78]. By contrast, the states in the middle branch are unstable, as small perturbations lead to large departures.

5.2.3.3 Dynamics

The bistability has important ramifications for atom-molecule conversion dynamics. When the collisional term is unimportant and negligible, one can easily create bosonic molecules from fermionic atoms by adiabatically sweeping the Feshbach detuning across the resonance. As long as the sweeping speed is sufficiently slow, the molecular population will follow the steady-state curve, as shown in Fig. 5.6(a). By contrast, when the bistability induced by the collisional term occurs, the adiabaticity condition necessarily breaks down. Fig. 5.7 displays the dynamical evolution of the bosonic population when the detuning is swept starting from either a large positive or a large negative value. We can see that the steady-state curve can be followed up to the points where the stable states of the upper and lower branches and the unstable states of the middle branch join each other (indicated by $\delta_1$ and $\delta_2$ in Fig. 5.7), where the population suddenly jumps between the two stable branches. Note that the critical detuning $\delta_c$ for the first-order phase transition, indicated by the vertical line in Fig. 5.6(b), lies between $\delta_1$ and $\delta_2$. The dynamical population curve thus exhibits hysteresis in the vicinity of the first-order phase transition. In this way, by tuning the detuning in the vicinity of $\delta_1$ or $\delta_2$, an atom-molecule switch can be realized. Similar behavior is also found in the bosonic model.
Figure 5.7: Dynamics of atom-molecule conversion as illustrated by the molecular population $|c|^2$ when the detuning is slowly swept. Curve (a) is obtained by sweeping from positive to negative values, while curve (b) is obtained by sweeping in the opposite direction. The dotted line is the steady-state molecular population, the same as in Fig. 5.6(b).

5.2.4 Conclusion

In conclusion, we have studied matter-wave bistability in coupled atom-molecule quantum gases in both the bosonic and the fermionic models. These two cases can be mapped to two very different quantum optical models: parametric downconversion in the former and the generalized Dicke model in the latter. Nevertheless, one important common feature of both cases is that bistability can be induced by collisional interactions, which give rise to a Kerr nonlinearity. We hope that our work will motivate experimental efforts toward demonstrating the matter-wave bistability we predict here.

Chapter 6

Non-Equilibrium Enhancement of BCS Pairing

6.1 Overview

In this chapter, we employ a semiclassical kinetic equation to demonstrate the enhancement of BCS pairing by non-equilibrium driving. Large portions of this chapter are quoted from the author's manuscript, [77]. The difficulty of observing quantum coherent phases in cold gases highlights the need to overcome low transition temperatures. In addition to Bose-Einstein condensation, there has also been recent interest in the generation of Fermi superfluids through the BCS pairing mechanism [74, 42, 84]. This phenomenon is more difficult to observe due to prohibitively low transition temperatures [46, 90], though the problem may be partially surmounted by the use of Feshbach resonances [74, 42, 84, 104, 48]. Non-equilibrium effects can also be used to control and effectively cool such systems. However, this is, by comparison, an unexplored area of research.
Since the 1960s, it has been known that superconductivity can be stimulated by radiation in microbridges [100]. In 1970, Eliashberg explained this effect as an amplification of the gap parameter by means of a stationary nonequilibrium shift of the quasiparticle spectrum to higher energies brought on by the radiation [33, 34]. Over the next decade, his theory found experimental acceptance through the enhancement of critical currents and temperatures in Josephson junctions [95] and thin films [69]. At the same time, other nonequilibrium stimulation methods were developed [22], with more recent reports of enhancements of the superconducting critical temperature by up to several times its equilibrium value [14, 45]. With the present interest in the application of the BCS model of superconductivity to trapped atomic Fermi gases [21, 45, 17], nonequilibrium effects represent an attractive way to magnify the quantum properties of these types of superfluids.

As in superconductors, the BCS order parameter $\Delta_0$ for cold fermionic gases obeys a self-consistency equation [80, 48, 88]

$\Delta_0 = \lambda\Delta_0\sum_k\left[\frac{1 - 2n_k}{2\sqrt{\xi_k^2 + \Delta_0^2}} - \frac{1}{2\xi_k}\right],$ (6.1)

where $\xi_k = \varepsilon_k - \varepsilon_F$ is the quasiparticle dispersion centered on the Fermi energy $\varepsilon_F$, $\Delta_0$ is the BCS gap, and $n_k$ is the quasiparticle distribution function. The constant $\lambda$ has the form $\lambda = 4\pi\hbar^2 a_{\uparrow\downarrow}/m_F$, where $a_{\uparrow\downarrow}$ is the negative s-wave scattering length for collisions between hyperfine states and $m_F$ is the mass. At equilibrium, $n_k^{(FD)}$ is the Fermi-Dirac distribution function, and the only way to increase $\Delta_0$ is either to increase the interaction strength or to lower the temperature. However, there exists a wide class of stationary nonequilibrium distributions, $n_k$, such that Eq. (6.1) is still valid and has solutions with enhanced order parameters. Indeed, if a quasistationary non-equilibrium distribution is created that is different from the canonical Fermi-Dirac function, $\delta n_k = n_k - n_k^{(FD)}$, then according to the weak-coupling BCS Eq.
(6.1), it effectively renormalizes the pairing interaction and transition temperature as follows:

$\frac{1}{\lambda_{\rm eff}} = \frac{1}{\lambda} + \sum_k\frac{\delta n_k}{E_k}\frac{1}{\nu(\varepsilon_F)} \quad {\rm and} \quad T_c^{\rm (eff)} = T_c^{(0)}e^{\phi},$ (6.2)

where here and below $E_k = \sqrt{\xi_k^2 + \Delta_0^2}$, $\nu(\varepsilon_F)$ is the density of states at the Fermi level, $T_c^{(0)} \sim \varepsilon_F\exp\{-1/[\nu(\varepsilon_F)|\lambda|]\}$ is the weak-coupling BCS transition temperature in equilibrium, and we have also introduced the dimensionless parameter $\phi = -\sum_k \delta n_k/[\nu(\varepsilon_F)E_k]$. For many non-equilibrium distributions, $\phi > 0$, and this yields an effective enhancement of $T_c$ and/or $\Delta$. We note that even though our theory below and that of Eliashberg are limited to small deviations from equilibrium, with $|\phi| \ll 1$, this does not imply a limitation in experiment, where this parameter can be large. For such large deviations from equilibrium, the weak-coupling BCS approach and Eqs. (6.1) and (6.2) may not be quantitatively applicable, but the tendency to enhance pairing may remain. Therefore the proposed underlying mechanism may lead to substantial enhancement of fermion pairing and superfluidity. We also emphasize that cold atom systems offer more control in creating and manipulating non-equilibrium many-body quantum states than is available in solids. Specifically, we will show that while it was impossible to drive a metal from the normal to the superconducting phase by irradiation, it is indeed possible to drive the equivalent transition in a cold gas by utilizing this additional control.

In this paper, we propose a theory of nonequilibrium stimulation of fermion pairing by considering the effect of Bragg pulses [13, 75], shown schematically in Fig. 6.1, on a harmonically trapped gas of fermions in the Thomas-Fermi approximation [18]. The heating induced by the external perturbation is dumped into an isothermal bath of trapped bosons via collisions, though this is not necessary in general. The pairing enhancement is calculated for a typical mixture of 87Rb and 6Li. It depends on the state of the gas at equilibrium.
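Equations (6.1) and (6.2) can be illustrated with a minimal numerical model. The sketch below solves the gap equation in the standard weak-coupling cutoff form (a textbook stand-in for the regularized contact-interaction form above; the coupling, cutoff, temperature, and the Gaussian shape of the model depletion $\delta n$ are all invented for the demonstration). Moving quasiparticle weight from the gap edge to higher energies gives $\phi > 0$ and a larger gap at the same temperature.

```python
import numpy as np

lam_nu, wc, T = 0.3, 10.0, 0.05   # dimensionless coupling nu*lambda, cutoff, temperature

xi = np.linspace(1e-6, wc, 100001)
dxi = np.diff(xi)

def trap(f):
    """Trapezoid-rule integral of samples f over the xi grid."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * dxi))

def gap_mismatch(Delta, phi=0.0):
    """BCS condition: integral of tanh(E/2T)/E minus the phi-shifted 1/(nu lambda)."""
    E = np.sqrt(xi**2 + Delta**2)
    return trap(np.tanh(E / (2 * T)) / E) - (1.0 / lam_nu - phi)

def solve_gap(phi=0.0):
    lo, hi = 1e-8, 5.0                 # mismatch decreases monotonically with Delta
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if gap_mismatch(mid, phi) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

Delta0 = solve_gap()

# Model redistribution: occupation removed at the gap edge, re-deposited at higher
# energy; phi = -sum_k dn_k / [nu(eF) E_k] evaluated as an energy integral.
E0 = np.sqrt(xi**2 + Delta0**2)
dn = (-0.01 * np.exp(-((E0 - Delta0) / 0.05)**2)
      + 0.01 * np.exp(-((E0 - Delta0 - 0.5) / 0.05)**2))
phi = -trap(dn / E0)

Delta_neq = solve_gap(phi)
print(Delta0, phi, Delta_neq)   # phi > 0, and the driven gap exceeds Delta0
```

The depletion at small $E_k$ dominates $\phi$ because of the $1/E_k$ weight, which is exactly why redistributing quasiparticles away from the gap edge strengthens pairing.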
Figure 6.1: Bragg potential. A moving lattice with wave vector $\mathbf{q} = \mathbf{k}_2 - \mathbf{k}_1$ can be formed in the region of the Bose-Fermi mixture through the interference of two lasers with differing wave vectors and frequencies. By adjusting the parameters of this non-equilibrium perturbation, one can achieve states with enhanced superfluidity.

In the superfluid phase, Eliashberg's requirement that the frequency of the perturbation be less than twice the equilibrium gap ($\hbar\omega < 2\Delta_0$) ensures that the pulse does not effectively heat the system by producing more quasiparticles with energies $\varepsilon \sim \Delta_0$. Though this requirement cannot be satisfied in the normal phase, where $\Delta_0 = 0$, the independent tunability of both the momentum and the energy of the Bragg pulse allows us to protect the system from effective heating through energy conservation. This avenue, which was not available in the context of superconductors, effectively provides a means to "sharpen" the Fermi step (or even create a discontinuity at a different momentum), thereby enhancing fermion pairing.

6.2 Model

In our problem, we assume optically trapped bosons in thermal contact with fermions that occupy two hyperfine states $|\uparrow\rangle$ and $|\downarrow\rangle$. This system has a Hamiltonian of the form $\hat{H} = \hat{H}_0 + \hat{H}_I$, where the noninteracting part $\hat{H}_0$ is given by

$\hat{H}_0 = \int d^3r\sum_p\hat{\psi}_p^\dagger(\mathbf{r})\left[-\frac{1}{2m_p}\partial_{\mathbf{r}}^2 + V_p(\mathbf{r})\right]\hat{\psi}_p(\mathbf{r}),$

where for brevity we have introduced the subscript $p = B, F\!\uparrow, F\!\downarrow$, which labels bosons ($p = B$) and fermions in the two hyperfine states, "up" ($p = F\!\uparrow$) or "down" ($p = F\!\downarrow$); $m_{F\uparrow} = m_{F\downarrow} \equiv m_F$ is the fermion mass and $m_B$ is the mass of the bosons. We assume that fermions in either hyperfine state feel the same trapping potential. Thus, $V_p(\mathbf{r})$ is given by $V_{F,B}(\mathbf{r}) = \frac{1}{2}m_{F,B}\Omega_{F,B}^2 r^2$, where the subscript F (B) refers to fermions (bosons). There is also an interaction Hamiltonian $\hat{H}_I$, which has the form

$\hat{H}_I = \frac{1}{2}\int d^3r\sum_{p_1,p_2}g_{p_1 p_2}\,\hat{\psi}_{p_1}^\dagger(\mathbf{r})\hat{\psi}_{p_1}(\mathbf{r})\hat{\psi}_{p_2}^\dagger(\mathbf{r})\hat{\psi}_{p_2}(\mathbf{r}),$

with $g_{p_1 p_2}$ being the strength of s-wave collisions between the particles labelled by $p_1, p_2 \in \{B, F\!\uparrow, F\!\downarrow\}$.
While Pauli exclusion requires that $g_{F\uparrow F\uparrow} = g_{F\downarrow F\downarrow} = 0$, an attractive coupling $g_{F\uparrow F\downarrow} \equiv g_{F\downarrow F\uparrow} < 0$ will lead to BCS pairing. A nonzero interaction $g_{FB}$ between bosons and fermions is required for thermalization between the two populations. We need put no other restrictions on $g_{p_1 p_2}$, but our desired effect will be easier to observe experimentally under some further constraints. For instance, requiring that $g_{FB} < 0$ will raise the BCS condensation temperature [44], while a larger $g_{BB} > 0$ facilitates thermalization between bosons and fermions.

To proceed further we use the gap equation, Eq. (6.1), where $n_k = n_k^{(FD)}$ at equilibrium. In the Thomas-Fermi approximation, the transition temperature, $T_c^{(0)}$, is given by [48]

$k_B T_c^{(0)} \approx \frac{8\varepsilon_F}{\pi}e^{\gamma - 2}\exp\left(-\frac{\pi}{2k_F|a_{\uparrow\downarrow}|}\right),$ (6.3)

where $k_F$ is the Fermi wave vector, $\gamma \approx 0.577$ is Euler's constant, and the scattering length $a_{\uparrow\downarrow}$ is a simple combination of the coupling strengths $g_{p_1 p_2}$.

6.2.1 Nonequilibrium Enhancement

It is possible to create distributions that lead to a larger order parameter and effective condensation temperature by weakly perturbing the trapped fermions. Specifically, we effect a Bragg pulse (Fig. 6.1) by illuminating the fermions with two lasers, both of which are far detuned from any fermionic transition. In what follows, we assume the lasers to be even further detuned from any bosonic transition, such that we may ignore the effect of the Bragg pulse on the bosons. The interaction of the fermions with these lasers is described by the addition of a term

$\hat{H}_{bg} = \int d^3r\sum_{p_f = F\uparrow,F\downarrow}\hat{\psi}_{p_f}^\dagger(\mathbf{r})\left[\hbar\Omega_{bg}\cos(\mathbf{q}\cdot\mathbf{r} - \omega t)\right]\hat{\psi}_{p_f}(\mathbf{r})$

to the Hamiltonian, where $\mathbf{q}$ and $\omega$ represent the differences in wavevector and frequency between the two lasers [13]. Now, following Schmid [83] and the general argument in the introduction [see Eq. (6.2)], we introduce the function $\delta n_k$, which describes the departure from equilibrium.
While Eliashberg assumed in [34] that the impurity concentration in metals was high enough that momentum relaxation happened at a much faster rate than energy relaxation, we shall not make this assumption. As such, $\delta n_k$ need not be isotropic, although this requirement is easily included in our model. The corresponding term, $\phi = -\sum_k\delta n_k/[\nu(\varepsilon_F)E_k]$ from Eq. (6.2), is added to the right side of the gap equation (6.1) and leads to a new solution, $\Delta > \Delta_0$, for the order parameter at the same temperature (note that in the non-equilibrium situation the notion of a temperature of the Fermi system is undefined; here, by temperature we mean the original temperature and that of the Bose bath). For $T_c^{(0)} - T \ll T_c^{(0)}$, Eq. (6.1) can be cast in the form of a Ginzburg-Landau equation, which, including the nonequilibrium term, becomes

$\ln\frac{T}{T_c^{(0)}} - \phi + b\left(\frac{|\Delta|}{T_c^{(0)}}\right)^2 = 0,$ (6.4)

where we assume that the coefficient of the cubic term in Eq. (6.4) is only weakly affected by the perturbation and use its standard BCS value [$\zeta(z)$ is the Riemann zeta function] $b = 7\zeta(3)/(8\pi^2) \approx 0.107$ (see also Ref. [10]). Eq. (6.4) nominally leads to an exponential enhancement of the effective critical temperature, $T_c^{\rm eff} = e^{\phi}T_c^{(0)}$, if $\phi$ is positive.

6.2.2 Kinetic Equation

To calculate this enhancement for the Bose-Fermi mixture, we shall balance the Boltzmann equation for $n_k$ in the spirit of Eliashberg, including both a collisional contribution and one from Bragg scattering:

$\dot{n}_k = I_{\rm coll}[n_k] + I_{\rm Bragg}[n_k].$ (6.5)

For small departures from equilibrium, we can linearize the collision integral in $\delta n_k$ and use the $1/\tau$ approximation, $I_{\rm coll}[\delta n] = -\delta n/\tau_0$, where $\tau_0$ is the quasiparticle lifetime. In our system, this lifetime will be dominated by the inelastic collision time between bosons and fermions.
We may estimate this time as in [5, 20] by means of a $1/\tau_0 = n\sigma\bar{v}$ approximation, where n is the boson density at the center of the trap, $\sigma$ is the constant low-temperature cross-section for boson-fermion scattering, and $\bar{v}$ is the average relative velocity associated with collisions between bosons and fermions.

Note that there exist other contributions to the collision integral, in particular those coming from fermion-fermion collisions. Our model assumes point-like interactions between fermions. Such interactions can be separated into interactions in the reduced BCS channel, which involve particles with opposite momenta that eventually form Cooper pairs, and other types of scattering events, which give rise to Fermi-liquid renormalizations on the high-temperature side and superconducting fluctuations on the BCS side. Note that dropping the latter terms would lead to an integrable (Richardson) model that does not have any thermalization processes, and therefore the collision integral for its quasiparticles must vanish. In the thermodynamic limit this model is described perfectly well by BCS mean-field theory, so we can say that the pairing part of the fermion interactions is already incorporated in our theory. Of course, including fermion-boson collisions and the second type of fermion interaction processes breaks integrability and leads to two types of effects. First, such interactions lead to Fermi-liquid renormalizations of the effective mass and the quasiparticle Z-factor. However, these effects are not germane to the physics of interest, and we may assume that all relevant corrections are already included and treat our system as one consisting of Fermi-liquid quasiparticles. Second, there is of course a dissipative part coming from interactions, such as those due to bosons (already included in $\tau_0$) and quasiparticle scatterings and decay processes due to non-BCS fermion-fermion collisions.
Similarly to the work of Eliashberg, we will assume that the latter contribution to the collision integral is less significant than the $1/\tau_0$ due to the bosons. Fermi-liquid quasiparticles are exactly defined precisely on the Fermi sphere, but they have a finite lifetime due to decay processes elsewhere. By not including the lifetime $\tau_k$ of a quasiparticle at momentum k in our linearization of the Boltzmann equation, we have implicitly assumed that $\tau_k \gg \tau_0$. Because $1/\tau_k \propto (\pi k_B T)^2 + (\varepsilon_k - \varepsilon_F)^2$ in a Fermi liquid [71], there will always be an energy region where this assumption is indeed true at low temperatures. The most important contribution to the integral in the expression for $\phi$ comes from states for which $\varepsilon_k$ is within $k_B T$ of $\varepsilon_F$. As such, if we require that $T \ll T_F$ and $\hbar\omega \ll \varepsilon_F$, then our linearization of the Boltzmann equation with respect to $\tau_0$ will be legitimate for the calculation of an enhancement of superfluidity. Again, we stress that the aforementioned requirements are necessary only for the quantitative accuracy of our model. As with Eliashberg's enhancement of superconductivity, we expect our effect to be observable far outside the constrained parameter space that is necessary for the strict validity of our simple model, which provides a proof of principle for using nonequilibrium perturbations to enhance fermion pairing in cold atom systems.

With these caveats in mind, we shall tune the frequency of our Bragg pulse such that $\omega\tau_0 \gg 1$. This will ensure that any non-stationary part of the distribution function is small [34]. Equivalently, we may think of this requirement as the statement that the Bragg pulse pumps the system out of equilibrium much faster than the system relaxes. We note here that the aforementioned assumption that $\delta n$ is small also implies that $\phi \ll 1$, thereby diminishing the desired effect, $T_c^{\rm eff} = (1 + \phi)T_c^{(0)} \gtrsim T_c^{(0)}$.
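The role of $\phi$ in Eq. (6.4) is easy to exhibit directly. The short sketch below, with $T_c^{(0)}$ set to 1 and a posited $\phi$ (not one computed from the kinetic equation), solves the Ginzburg-Landau relation for $|\Delta|$ and confirms that a positive $\phi$ sustains a gap up to $T_c^{\rm eff} = e^{\phi}T_c^{(0)}$:

```python
import math

b = 7 * 1.2020569031595943 / (8 * math.pi**2)   # b = 7 zeta(3) / (8 pi^2) ~ 0.107

def gap_GL(T, Tc0, phi):
    """Solve ln(T/Tc0) - phi + b (|Delta|/Tc0)^2 = 0 for |Delta| (0 if no solution)."""
    rhs = phi - math.log(T / Tc0)
    return Tc0 * math.sqrt(rhs / b) if rhs > 0 else 0.0

Tc0, phi = 1.0, 0.05
Tc_eff = Tc0 * math.exp(phi)            # the e^phi enhancement of Eq. (6.2)

print(gap_GL(0.99, Tc0, 0.0))           # equilibrium gap just below Tc0
print(gap_GL(1.02, Tc0, 0.0))           # 0: no equilibrium gap above Tc0
print(gap_GL(1.02, Tc0, phi))           # > 0: a driven gap survives above Tc0
```

The gap computed this way vanishes exactly where $\ln(T/T_c^{(0)}) = \phi$, i.e. at $T = e^{\phi}T_c^{(0)}$, which is the statement $T_c^{\rm eff} = e^{\phi}T_c^{(0)}$ in the text.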
Again, this approximation greatly simplifies our theoretical problem by allowing us to expand the Boltzmann equation, but it is only a mathematical convenience that represents no impediment to an experimentalist looking for striking enhancements of $T_c^{(0)}$. Eq. (6.5) can now be solved for $\delta n$ to yield

$\delta n_k = \tau_0\,I_{\rm Bragg}\left[n_k^{(FD)}\right]\left(1 - e^{-t/\tau_0}\right),$ (6.6)

which shows that a stationary nonequilibrium state is formed on a characteristic time scale $\tau_0$. The Bragg part of Eq. (6.5), $I_{\rm Bragg}[n]$, now depends only on $n_k^{(FD)}$ and can be computed with Fermi's golden rule. When the wavelength of the Bragg pulse is much larger than the de Broglie wavelength of the fermions and the reciprocal frequency of the pulse is much smaller than the relaxation time ($\lambda_F|\mathbf{q}| \ll 1$ and $\omega\tau_0 \gg 1$), Fermi's golden rule yields

$I_{\rm Bragg}\left[n_k^{(FD)}\right] = \frac{2\pi}{\hbar}\Omega_{bg}^2\left\{n_{k-q}^{(FD)}\left(1 - n_k^{(FD)}\right)\delta(\varepsilon_k - \varepsilon_{k-q} - \hbar\omega) - n_k^{(FD)}\left(1 - n_{k+q}^{(FD)}\right)\delta(\varepsilon_{k+q} - \varepsilon_k - \hbar\omega)\right\}.$ (6.7)

The determination of $I_{\rm Bragg}[n_k^{(FD)}]$ allows us to find $\phi$ and ultimately $T_c^{\rm (eff)}$. The optimal parameters depend on whether the fermionic gas is in the superfluid or the normal phase at the time the Bragg pulse is applied. In the former case, the energy gap in the quasiparticle density of states prevents a Bragg pulse with $\hbar\omega < 2\Delta_0$ from producing new quasiparticles that would hinder superfluidity. In the normal phase, however, the energy conservation requirement $\varepsilon_{k+q} - \varepsilon_k = \hbar\omega$ and the independent control of $\mathbf{q}$ and $\omega$ allow us to engineer a pulse that ensures that only "thermal" quasiparticles with energies $\varepsilon > \varepsilon_F$ are pushed to even higher energies, while the fermions below $\varepsilon_F$ are not affected.

Thus far, we have assumed that heat is dissipated from the fermions into the bosons through collisions. Because Bose-Einstein condensation inhibits collisions with fermions and severely reduces thermalization between the two populations, our simple analysis depends on the bosons being at a constant temperature T greater than their Bose-Einstein condensation temperature $T_{BEC} \approx 0.94\,\hbar\Omega_B N_B^{1/3}$ [2].
Condensation can be prevented at temperatures close to the BCS transition temperature by having $\Omega_B \ll \Omega_F$ and $m_B \gg m_F$. Treating the bosons classically, we expect their temperature to increase no faster than $dT/dt = \hbar\Omega_{bg}\omega/C$, the energy pumping rate due to the Bragg pulse for a specific heat C. For a harmonically confined classical gas, we use a specific heat given by $C = 3k_B N_B$. If $t_{\rm Bragg}$ is the time over which the Bragg pulse is turned on, then so long as $t_{\rm Bragg}\,dT/dt$ is much less than the temperature of the bosons, we may consider the bosonic population to be an isothermal bath. This may be accomplished by having a large number of bosons at low density. One may be able to avoid the assumption of a bosonic population altogether when driving the transition from normal to superfluid at temperatures above $T_c^{(0)}$ by allowing energetic particles to leave the trap, as in evaporative cooling. We shall show later that this is possible because the Bragg pulse may be tuned to couple only to particles with energies above a threshold depending on $\omega$ and $\mathbf{q}$. For concreteness, however, we shall keep the bosonic population throughout the following section.

6.3 Numerical Results

As an example, we calculate the nonequilibrium enhancement of $T_c$ for a trapped mixture of 87Rb and 6Li under the aforementioned assumptions with $\delta n \ll 1$. We assume a cloud of $10^5$ lithium atoms and $10^7$ rubidium atoms in traps of frequencies $\Omega_F = 200\,\Omega_B = 3$ kHz, respectively. We use scattering lengths $a_{BB} = 109\,a_0$, $a_{FF} = -2160\,a_0$, and $a_{FB} = 100\,a_0$ [44], where $a_0$ is the Bohr radius, and a typical collision time $\tau_0 \approx 136$ ns is estimated via the $1/\tau_0 = n\sigma\bar{v}$ approximation given above and in section A.1. With these parameters, the equilibrium BCS and BEC condensation temperatures are $T_c^{(0)} \approx 0.15\,T_F = 291$ nK and $T_{BEC} \approx 23.2$ nK. The quasiparticle lifetime $\tau_k$ for the normal fluid is estimated as $\tau_k \approx 3\,\mu$s for $|\varepsilon_k - \varepsilon_F| \approx \hbar\omega$ via the methods in Refs. [71, 87].
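The numbers quoted above can be reproduced from the formulas in the text. The sketch below uses the stated trap and atom-number values; the heating-rate line simply evaluates $dT/dt = \hbar\Omega_{bg}\omega/C$ with illustrative pulse parameters (the 10 ms pulse duration and the value of $\omega$ are invented for the demonstration). It checks that $T_{BEC} \approx 0.94\,\hbar\Omega_B N_B^{1/3}/k_B$ comes out near 23 nK for $\Omega_B = 15\ {\rm s}^{-1}$ and $N_B = 10^7$.

```python
import math

hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J / K

Omega_F = 3000.0                 # fermion trap frequency (s^-1): Omega_F = 200 Omega_B
Omega_B = Omega_F / 200.0        # boson trap frequency, 15 s^-1
N_B = 1e7                        # number of bosons

T_BEC = 0.94 * hbar * Omega_B * N_B**(1.0 / 3.0) / kB
print(T_BEC * 1e9)               # ~23 nK, matching the value quoted in the text

# Isothermal-bath check: dT/dt = hbar * Omega_bg * omega / C with C = 3 kB N_B.
Omega_bg = 70 * Omega_F          # Bragg Rabi frequency used in Sec. 6.3.1
omega = 2 * math.pi * 1e4        # illustrative Bragg frequency difference (s^-1)
C = 3 * kB * N_B
heating = hbar * Omega_bg * omega / C    # K per second
t_bragg = 1e-2                           # a 10 ms pulse, for illustration
print(heating * t_bragg / T_BEC)         # << 1: the bosons remain an isothermal bath
```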
6.3.1 Superfluid at Equilibrium

Let us assume that the system is initially in the superfluid phase at equilibrium at a low enough temperature, with $\hbar\omega = 0.15\,\varepsilon_F$. Using Eq. (6.6), we may calculate the stationary distribution function (Fig. 6.2) for Bragg parameters $\Omega_{bg} = 70\,\Omega_F$ and $|\mathbf{q}| = 0.1\,k_F$. Some comments on the form of $\delta n_k$ are necessary.

Figure 6.2: (a) The first-order approximation to the quasiparticle occupation as a function of $E = \sqrt{\xi^2 + \Delta_0^2}$ for parameters $\Omega_{bg} = 70\,\Omega_F$, $|\mathbf{q}| = 0.1\,k_F$, and $\hbar\omega = 0.15\,\varepsilon_F$. The unphysical singularities at $E = \Delta_0$ and $E = \Delta_0 + \hbar\omega$ are not included in the calculation of $\phi$. See text for details. The equilibrium values for this system are $T = 0.14\,T_F$ and $\Delta_0 = 0.08\,\varepsilon_F$. (b) The exact distribution function, schematically drawn for the same parameters, with $\Delta_0$ and $\hbar\omega$ in units of $\varepsilon_F$. The thick dashed lines represent the occupation at equilibrium.

As shown in Fig. 6.2(a), the first-order approximation to $\delta n_k$ has unphysical singularities at $E = \Delta_0$ and $E = \Delta_0 + \hbar\omega$. These are due to first-order transitions to $E = \Delta_0 + \hbar\omega$ from $E = \Delta_0$, where the quasiparticle density of states diverges for a superfluid at equilibrium. The exact distribution function, schematically drawn in Fig. 6.2(b), has no infinities. Higher orders in the expansion of the Boltzmann equation are necessary to curtail the singularities at $\varepsilon = \Delta_0 + n\hbar\omega$ ($n = 0, 1, 2, \ldots$). However, so long as these singularities are localized on energy intervals that are much smaller than $\hbar\omega$, the approximate $\delta n_k$ calculated from Eq. (6.6) will be suitable for the calculation of both the enhanced order parameter, via Eqs. (6.1) and (6.2), and the value of $\phi$ in the Ginzburg-Landau equation (Eq. 6.4) [34]. As expected from the analogy to Eliashberg's work, our singularities have energy widths of about $2\pi\tau_0\Omega_{bg}^2/[\Omega_F(6N_F)^{1/3}]$. Hence, we may consider the inequality $\omega\,\Omega_F(6N_F)^{1/3}/[2\pi\tau_0\Omega_{bg}^2] \gg 1$ a further requirement for the validity of our linearization of the Boltzmann equation in the calculation of $\phi$ for the superfluid case. We also note that, because $\hbar\omega < 2\Delta_0$, no new quasiparticles are excited from the lower branch by pair breaking. Hence, the quasiparticle number is conserved in this first-order approximation ($\sum_k\delta n_k = 0$): the quasiparticles are simply redistributed from the gap edge to higher energies.

Substituting $\delta n_k$ into Eq. (6.2), we find at $T \approx 0.13\,T_F$ an increase in $\Delta$ of about 10%. So long as $T < T_c^{(0)}$, the relative enhancement increases with temperature, because there are more particles to redistribute and the pulse does not have enough energy to break Cooper pairs. Unlike in the normal fluid, where $\Delta_0 = 0$ at equilibrium, the enhancement depends on the initial value of the gap. In Fig. 6.3, we plot the temperature dependence of the enhanced nonequilibrium gap for a slightly stronger pulse given by $\Omega_{bg} = 110\,\Omega_F$, $\hbar\omega = 0.15\,\varepsilon_F$, and $|\mathbf{q}| = 0.1\,k_F$. Note that after the pulse is applied at equilibrium, the temperature of the bath may be increased above $T_c^{(0)}$ while maintaining a nonzero gap. This temperature dependence is exactly what would be expected from the analogous plot in Ref. [34]. The new BCS transition temperature $T_{BCS}$ is given by the maximum of the nonequilibrium plot, where the inequality $\hbar\omega < 2\Delta_0$ is saturated. If the temperature is further increased, $\Delta$ discontinuously vanishes again. For these parameters we calculate a small 3% increase of the transition temperature, as expected from our requirement that nonlinear effects (even gap-enhancing effects) be ignored in the Boltzmann equation. The temperatures we have considered are sufficiently close to the equilibrium transition temperature that the approximation $T_{BCS} \approx T_{BCS}^{(0)}(1 + \phi)$ is justified. For small $\Delta_0$, a crude order-of-magnitude estimate of $\phi$ may be given by $\phi \sim (2\pi/\hbar)\tau_0\Omega_{bg}^2(\Delta_0/\varepsilon_F)\,n_0(\Delta_0)\,n_0(\Delta_0 + \hbar\omega)$.
6.3.2 Normal at Equilibrium

For contrast, let us now assume that we start in the normal phase at equilibrium. The initial distribution function is simply the Fermi-Dirac function for the noninteracting quasiparticles of Fermi-liquid theory. We have $\Delta_0 = 0$, so Eliashberg's requirement that $\hbar\omega < 2\Delta_0$ cannot be satisfied. However, we can guarantee that particles are not excited from below the Fermi level by choosing $\omega$ and $\mathbf{q}$ such that the constraint $4\varepsilon_q\varepsilon_F \leq (\hbar\omega - \varepsilon_q)^2$ is enforced. Because of this requirement, momentum and energy conservation cannot be simultaneously achieved for particles with energies less than $\varepsilon_F$.

Figure 6.3: The order parameter as a function of temperature, as determined by the nonequilibrium gap equation for parameters $\Omega_{bg} = 110\,\Omega_F$, $|\mathbf{q}| = 0.1\,k_F$, and $\hbar\omega = 0.15\,\varepsilon_F$. The black dashed line is the equilibrium dependence, while the red dashed line gives the nonequilibrium transition temperature $T_{BCS} > T_{BCS}^{(0)}$. We have constrained $\Delta_0 > \hbar\omega/2$ to avoid pair-breaking.

As such, only particles outside the Fermi sphere can undergo transitions. The lower equilibrium occupation number and the higher density of states at high energies ensure that quasiparticles just outside the Fermi sphere are excited to higher energies. Although in our system we have particle conservation just as in the superfluid case, we comment here that if these excited particles are allowed to leave the trap, then we have effectively cooled the fermions by sharpening the Fermi step, and the boson bath is unnecessary. Substituting $\delta n_k$ into Eq. (6.2), we see that the depression of the population at the Fermi level shown in Fig. 6.4 allows for a nonzero $\Delta$ above $T_{BCS}^{(0)}$. As the temperature is increased, the nonequilibrium gap enhancement is overpowered by thermal smearing of the distribution function: there are more quasiparticles at energies near $\varepsilon_F$, where they most strongly hinder superfluidity.
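The kinematic protection in the normal phase can be verified directly. With a free dispersion $\varepsilon_k = k^2$ (energies in units of $\varepsilon_F$, momenta in units of $k_F$), the resonance condition $\varepsilon_{k+q} - \varepsilon_k = \hbar\omega$ fixes the momentum component along $\mathbf{q}$, so the smallest energy of any state that can absorb the pulse is $[(\hbar\omega - \varepsilon_q)/(2\sqrt{\varepsilon_q})]^2$. The sketch below (with illustrative values of q and two trial frequencies) checks that when $4\varepsilon_q\varepsilon_F \leq (\hbar\omega - \varepsilon_q)^2$ this threshold sits at or above $\varepsilon_F$, leaving the Fermi sea untouched.

```python
q = 0.1                       # Bragg momentum in units of k_F
eps_q = q**2                  # recoil energy in units of eps_F
eps_F = 1.0

def min_energy(omega):
    """Lowest single-particle energy able to satisfy eps_{k+q} - eps_k = omega."""
    k_par = (omega - eps_q) / (2.0 * q)   # required momentum component along q
    return k_par**2

# A pulse obeying the constraint 4 eps_q eps_F <= (omega - eps_q)^2:
omega_safe = eps_q + 2.0 * q * 1.1        # slightly above the threshold frequency
satisfies = 4 * eps_q * eps_F <= (omega_safe - eps_q)**2

# A lower-frequency pulse violating the constraint reaches below the Fermi surface:
omega_bad = eps_q + 2.0 * q * 0.5

print(satisfies, min_energy(omega_safe), min_energy(omega_bad))
# the safe pulse couples only above eps_F; the weak pulse reaches below it
```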
This contrasts with the superfluid situation, wherein the enhancement increases with temperature so long as $T < T_c^{(0)}$.

$g^{>,0}_{ij}(t,t') = -i\left\langle a_j(t')\,a^\dagger_i(t)\right\rangle_0 = -\frac{i\,\delta_{ij}}{Z(0)}\sum_{n=0}^{\infty} e^{-\beta\epsilon_n}(n+1)\,e^{i(\epsilon_{n+1}-\epsilon_n)(t-t')} \quad (33)$

$g^{T,0}_{ij}(t,t') = \theta(t-t')\,g^{<,0}_{ij}(t,t') + \theta(t'-t)\,g^{>,0}_{ij}(t,t') \quad (34)$

$g^{\tilde T,0}_{ij}(t,t') = \theta(t'-t)\,g^{<,0}_{ij}(t,t') + \theta(t-t')\,g^{>,0}_{ij}(t,t') \quad (35)$

where $Z(0)$ is the single-site partition function for the charging Hamiltonian and $\epsilon_n = \frac{U_0}{2}n(n-1) - \mu_0 n$. Similarly, the bath correlation functions $F_{ij}(t,t')$ can be written in terms of four analogous functions, though they are somewhat more tedious to calculate. Because these functions will always multiply one of the system Green's functions, which vanish when $i \neq j$, we will only need the same-site functions. They are rigorously derived using the method in Ref. [63] in section A.2.6 and are reproduced here:

$F^{>}_{ii}(t,t') = \left\langle X_i(t')\,X^\dagger_i(t)\right\rangle_B = e^{-\Phi(t'-t)} \quad (37)$

$F^{T}_{ii}(t,t') = e^{-\Phi(|t-t'|)} \quad (38)$

$F^{\tilde T}_{ii}(t,t') = e^{-\Phi(-|t-t'|)} \quad (39)$

where the argument of the exponential is given in terms of the bosonic occupation $N_\lambda = \left(e^{\beta\varepsilon_\lambda}-1\right)^{-1}$ as

$\Phi(x) = \sum_\lambda \left(\frac{g_\lambda}{\varepsilon_\lambda}\right)^2\left[(N_\lambda+1)\left(1-e^{-i\varepsilon_\lambda x}\right) + N_\lambda\left(1-e^{i\varepsilon_\lambda x}\right)\right] = \sum_\lambda \left(\frac{g_\lambda}{\varepsilon_\lambda}\right)^2\left[\left(1-e^{-i\varepsilon_\lambda x}\right) + N_\lambda\left|1-e^{i\varepsilon_\lambda x}\right|^2\right] \quad (40)$

We are now in a position to write the explicit form of eq. (21) in matrix form. Noting that $\int_C dt_1 = \int_{-\infty}^{\infty}dt_1^{+} - \int_{-\infty}^{\infty}dt_1^{-}$ and letting

$\hat G_{ij}(t,t') = \begin{pmatrix} G^{T}_{ij}(t,t') & G^{>}_{ij}(t,t') \\ G^{<}_{ij}(t,t') & G^{\tilde T}_{ij}(t,t') \end{pmatrix} \quad (41)$

$\hat g^{0}_{ij}(t,t') = \begin{pmatrix} g^{T,0}_{ij}(t,t') & g^{>,0}_{ij}(t,t') \\ g^{<,0}_{ij}(t,t') & g^{\tilde T,0}_{ij}(t,t') \end{pmatrix} \quad (42)$

with the bath-dressed entries

$\hat g^{0}_{ij}(t,t') = \begin{pmatrix} g^{T,0}_{ij}(t,t')\,F^{T}_{ij}(t,t') & g^{>,0}_{ij}(t,t')\,F^{>}_{ij}(t,t') \\ g^{<,0}_{ij}(t,t')\,F^{<}_{ij}(t,t') & g^{\tilde T,0}_{ij}(t,t')\,F^{\tilde T}_{ij}(t,t') \end{pmatrix} \quad (43)$

The frequency transforms of these dressed single-site functions are

$g^{<,0}_{ij}(\omega) = -\frac{i\,\delta_{ij}}{Z(0)}\sum_{n=0}^{\infty} e^{-\beta\epsilon_n}\,n\,H\!\left(\omega+\epsilon_n-\epsilon_{n-1}\right)$

$g^{>,0}_{ij}(\omega) = -\frac{i\,\delta_{ij}}{Z(0)}\sum_{n=0}^{\infty} e^{-\beta\epsilon_n}\,(n+1)\,K\!\left(\omega+\epsilon_{n+1}-\epsilon_n\right)$

$g^{T,0}_{ij}(\omega) = -\frac{i\,\delta_{ij}}{Z(0)}\sum_{n=0}^{\infty} e^{-\beta\epsilon_n}\left[n\,Q\!\left(\omega+\epsilon_n-\epsilon_{n-1}\right) + (n+1)\,Q\!\left(-\omega-\epsilon_{n+1}+\epsilon_n\right)\right]$

$g^{\tilde T,0}_{ij}(\omega) = -\frac{i\,\delta_{ij}}{Z(0)}\sum_{n=0}^{\infty} e^{-\beta\epsilon_n}\left[n\,P\!\left(-\omega-\epsilon_n+\epsilon_{n-1}\right) + (n+1)\,P\!\left(\omega+\epsilon_{n+1}-\epsilon_n\right)\right]$

where

$H(x) = \frac{\pi}{\varepsilon\,\Gamma(\alpha)}\,e^{-|x|/\varepsilon}\left(\frac{|x|}{\varepsilon}\right)^{\alpha-1}\left(1+\operatorname{sign}x\right) \quad (45)$

$K(x) = \frac{\pi}{\varepsilon\,\Gamma(\alpha)}\,e^{-|x|/\varepsilon}\left(\frac{|x|}{\varepsilon}\right)^{\alpha-1}\left(1-\operatorname{sign}x\right) \quad (46)$

$Q(x) = \frac{i}{x} - \frac{i\alpha}{x}R\!\left(\frac{x}{\varepsilon}\right) - \frac{\alpha}{x}G\!\left(\frac{x}{\varepsilon}\right) \quad (47)$

$P(x) = \frac{i}{x} - \frac{i\alpha}{x}R\!\left(\frac{x}{\varepsilon}\right) + \frac{\alpha}{x}G\!\left(\frac{x}{\varepsilon}\right) \quad (48)$

and finally

$G(x) = \tfrac{1}{4}e^{-x}\left[-1+e^{2x}-2i\,\Gamma(0,-x)-2i\ln(-x)\right] + \tfrac{1}{4}e^{-x}\left[2i\ln(-ix)+2ie^{2x}\left(\Gamma(0,x)-\ln(-ix)+\ln(x)\right)\right] \quad (49)$

$R(x) = e^{x}E_1(x)$
$\;+\;e^{-x}\left[E_1(-x)+\ln(-x)\right] + e^{-x}\cosh(x)\left[-2\ln(-ix)+\ln(x)\right] + \sinh(x)\left[-i\pi+\ln(x)\right] \quad (50)$

where $E_1$ denotes the exponential integral.

A.2.5.1 Phase Boundary at Equilibrium without Dissipation

As a sanity check, let us first consider the system with no dissipation. Setting $g_\lambda = 0$, we trivially get $F^T$, $F^{\tilde T}$, $F^<$, and $F^>$ all equal to unity. Using the identity $A(t-t_1)B(t_1-t') = A(t-t'-x)B(x)$ along with the convolution theorem, we may write eq. (44) in frequency space as

$\hat G_{ij}(\omega) = \hat g^{0}_{ij}(\omega) - \sum_{i_1 i_1'} J_{i_1 i_1'}\,\hat g^{0}_{i i_1'}(\omega)\,\hat g^{0}_{i_1 j}(\omega) + \cdots \quad (51)$

where the frequency transforms of these correlators take a fairly simple form because the effect of the bath is not included:

$g^{T,0}_{ij}(\omega) = \frac{\delta_{ij}}{Z(0)}\sum_{n=0}^{\infty} e^{-\beta\epsilon_n}\left[\frac{n+1}{\epsilon_{n+1}-\epsilon_n+\omega+i0} - \frac{n}{\epsilon_n-\epsilon_{n-1}+\omega-i0}\right] \quad (52)$

and the lesser (greater) Fourier transforms are

$g^{>,0}_{ii_1'}(\omega) = -\frac{2\pi i\,\delta_{ii_1'}}{Z(0)}\sum_{n=0}^{\infty} e^{-\beta\epsilon_n}\,(n+1)\,\delta\!\left(\epsilon_{n+1}-\epsilon_n+\omega\right) \quad (53)$

$g^{<,0}_{i_1 j}(\omega) = -\frac{2\pi i\,\delta_{i_1 j}}{Z(0)}\sum_{n=0}^{\infty} e^{-\beta\epsilon_n}\,n\,\delta\!\left(\epsilon_n-\epsilon_{n-1}+\omega\right) \quad (54)$

The transform of the anti-time-ordered function is not included, as it will not play a part in our analysis at equilibrium. Because we are interested in long-time correlations, we shall let $\omega \to 0$. At the same time, we note that our system has a discrete spatial symmetry. We take advantage of this symmetry by transforming Eq. (51) to momentum space, leaving

$\hat G(\omega\to 0, q) = \hat g^{0}(0) - J(q)\,\hat g^{0}(0)\,\hat g^{0}(0) + \cdots \quad (55)$

where $J(q) = 2J\sum_{\nu=1,2,3}\cos(q_\nu)$ is the lattice dispersion. A simple inspection of eq. (55) shows that we may rewrite the equation more illustratively for small $J$ as

$\hat G(0,q) = \hat g^{0}(0)\left[1 - J(q)\,\hat G(0)\right] = \hat g^{0}(0)\left[1 + J(q)\,\hat g^{0}(0)\right]^{-1} \quad (56)$

with $-J(q)$ identified as the first-order approximation to the self-energy. Let us take the Keldysh component of the matrix equation, Eq. (56).
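The resummation leading to Eq. (56) is an ordinary geometric series. A minimal numerical sketch, using an arbitrary stand-in value for a component of $\hat g^0(0)$ and an illustrative hopping, confirms that the partial sums of the series (55) converge to $g^0[1+J(q)g^0]^{-1}$ whenever $|J(q)\,g^0| < 1$:

```python
import numpy as np

J = 0.05                                   # hopping (illustrative)
q = np.array([0.3, 1.1, 2.0])              # quasimomentum components, 3 dimensions
Jq = 2 * J * np.cos(q).sum()               # lattice dispersion J(q)
g0 = 0.8 - 0.3j                            # stand-in for a component of g^0(0)

# Partial sums of the Dyson series  g0 - g0*Jq*g0 + g0*Jq*g0*Jq*g0 - ...
series = sum(g0 * (-Jq * g0)**k for k in range(60))
closed = g0 / (1 + Jq * g0)
print(abs(series - closed))                # ~ 0
```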
Ignoring terms higher than first order in $J$ for computational purposes only (they do not yield significant differences when the phase boundary is numerically computed), we have

$G^{K}(0,q) = \frac{g^{K,0}(0)}{1 + \frac{1}{2}J(q)\left[g^{A,0}(0)+g^{R,0}(0)\right]} = \frac{g^{K,0}(0)}{1 + J(q)\left[g^{T,0}(0) - \frac{1}{2}g^{>,0}(0) - \frac{1}{2}g^{<,0}(0)\right]} \quad (57)$

The equilibrium phase boundary is the manifold in parameter space where long-time, long-range correlations diverge, $G^{T}(\omega\to 0, q\to 0)\to\infty$; that is, where the real part of the denominator of eq. (57) goes to zero. This equation can be written for $z$ dimensions as

$1 = \mathrm{Re}\left\{J(0)\left[g^{T,0}(0) - \tfrac{1}{2}g^{>,0}(0) - \tfrac{1}{2}g^{<,0}(0)\right]\right\} = \frac{2Jz}{Z(0)}\sum_{n=0}^{\infty} e^{-\beta\epsilon_n}\left[\frac{n+1}{\epsilon_{n+1}-\epsilon_n} - \frac{n}{\epsilon_n-\epsilon_{n-1}}\right] \quad (58)$

which is indeed the equation for the phase boundary given in the literature at finite and zero temperature. We have thus shown that we can reproduce the equilibrium phase boundary when dissipation is not considered.

A.2.5.2 Phase Boundary at Equilibrium with Dissipation

To replicate the results of Ref. [27], we shall include dissipation but let the temperature go to zero. Letting $T\to 0$ so that $N_\lambda\to 0$, we trivially reproduce their result for the bath correlation function,

$F^{T}_{ii}(t,t') = e^{-\Phi(|t-t'|)} = \exp\left[-\sum_\lambda\left(\frac{g_\lambda}{\varepsilon_\lambda}\right)^2\left(1-e^{-i\varepsilon_\lambda|t-t'|}\right)\right] \quad (59)$

as well as their renormalization of $U$ and $\mu$, which leads to a phase boundary slightly different from the $T=0$ boundary calculated without dissipation. Our treatment can also access the finite-temperature regime. While we are aware of no other work that treats both dissipation and finite temperature at the same time, we admit that this is computationally taxing. It is much easier to assume that the bath temperature is non-negligible compared to the energy scales of the system (the lattice) while remaining very small compared to the energy scales of the bath. Under this assumption, we can approximate the bath correlation functions by their $T\to 0$ counterparts while still retaining the important effect of the bath's finite temperature on the system.
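Because Eq. (58) is linear in $J$, the boundary $J_c(\mu, T)$ can be evaluated directly from the single-site spectrum. At low temperature and $0 < \mu < U$, only the $n=1$ ground state contributes, and with $\epsilon_{n+1}-\epsilon_n = Un-\mu$ the sum reduces to the familiar mean-field lobe $1 = 2Jz\left[\frac{n+1}{Un-\mu} + \frac{n}{\mu-U(n-1)}\right]$. A sketch with illustrative parameters ($z = 3$ counted as dimensions, as in the text):

```python
import numpy as np

U, z = 1.0, 3                     # interaction; number of dimensions

def Jc(mu, beta, nmax=40):
    """Boundary from 1 = (2Jz/Z) sum_n e^{-beta eps_n} [ (n+1)/(eps_{n+1}-eps_n)
    - n/(eps_n - eps_{n-1}) ],  with eps_n = U n(n-1)/2 - mu n."""
    n = np.arange(nmax + 1)
    eps = 0.5 * U * n * (n - 1) - mu * n
    w = np.exp(-beta * (eps - eps.min()))          # stabilized Boltzmann weights
    Z = w.sum()
    diff = eps[1:] - eps[:-1]                      # eps_{n+1} - eps_n, n = 0..nmax-1
    up = (n[:-1] + 1) / diff                       # (n+1)/(eps_{n+1}-eps_n)
    dn = np.zeros_like(eps)
    dn[1:] = n[1:] / diff                          # n/(eps_n - eps_{n-1}), n >= 1
    S = (w[:-1] * up).sum() - (w * dn).sum()
    return Z / (2 * z * S)

mu = 0.4
Jc_T0_exact = 1.0 / (2 * z * (2/(U - mu) + 1/mu))  # T = 0 lobe with n = 1
print(Jc(mu, beta=30.0), Jc_T0_exact)              # should agree at low T
```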
The temperature of the bath will influence the system correlation functions through the initial condition that the bath and the system were at equilibrium in the infinite past.

A.2.6 Phonon Green's Functions

We now wish to find the bath correlators, i.e., the functions $\langle X^\dagger_i(t)X_j(t')\rangle_{\rm bath}$ and $\langle X_j(t')X^\dagger_i(t)\rangle_{\rm bath}$ from equations (30) and (43), using the method described by Mahan [63]. They will be referred to as "phonon correlators" because the system/bath interaction has a phononic form in Eq. (15). Furthermore, these functions will always multiply $G^{0}_{ij}(t,t') \propto \delta_{ij}$; as such, we need only consider the case $i=j$. We begin with the following, repeated from earlier:

$X_j(t') = e^{iH_{\rm bath}t'}X_j e^{-iH_{\rm bath}t'} = e^{iH_{\rm bath}t'}\exp\left[\sum_\lambda \lambda_\lambda\left(b^\dagger_{j\lambda}-b_{j\lambda}\right)\right]e^{-iH_{\rm bath}t'} = \exp\left[\sum_\lambda \lambda_\lambda\left(b^\dagger_{j\lambda}e^{i\varepsilon_\lambda t'}-b_{j\lambda}e^{-i\varepsilon_\lambda t'}\right)\right] \quad (60)$

$X^\dagger_i(t) = e^{iH_{\rm bath}t}X^\dagger_i e^{-iH_{\rm bath}t} = \exp\left[-\sum_\lambda \lambda_\lambda\left(b^\dagger_{i\lambda}e^{i\varepsilon_\lambda t}-b_{i\lambda}e^{-i\varepsilon_\lambda t}\right)\right] \quad (61)$

with $\lambda_\lambda = g_\lambda/\varepsilon_\lambda$. Since our goal is to take a thermal average of the product of these operators over bath states,

$\left\langle X_j(t')X^\dagger_i(t)\right\rangle_{\rm bath} = \frac{1}{Z_{\rm bath}}\,\mathrm{Tr}\left[e^{-\beta\sum_\lambda \varepsilon_\lambda \hat n_\lambda}\,X_j(t')X^\dagger_i(t)\right] \quad (62)$

with $\hat n_\lambda$ the bosonic occupation number of bath mode $\lambda$ and $Z_{\rm bath}$ the bath partition function. After substituting the definitions (60) and (61) into equation (62), we shall make use of the theorem that, for any operators $A$ and $B$ that both commute with $C=[A,B]$, we have $e^{A+B}=e^{A}e^{B}e^{-\frac{1}{2}[A,B]}$. Thus, suppressing the mode index, we have

$X_j(t') = \exp\left[\lambda\left(b^\dagger_j e^{i\varepsilon t'}-b_j e^{-i\varepsilon t'}\right)\right] = \exp\left[\lambda b^\dagger_j e^{i\varepsilon t'}\right]\exp\left[-\lambda b_j e^{-i\varepsilon t'}\right]\exp\left[-\lambda^2/2\right] \quad (63)$

$X^\dagger_i(t) = \exp\left[-\lambda\left(b^\dagger_i e^{i\varepsilon t}-b_i e^{-i\varepsilon t}\right)\right] = \exp\left[-\lambda b^\dagger_i e^{i\varepsilon t}\right]\exp\left[\lambda b_i e^{-i\varepsilon t}\right]\exp\left[-\lambda^2/2\right] \quad (64)$

So that we may easily take averages over the thermal state, we want all the destruction operators on the right. We may write

$\exp\left[-\lambda b_j e^{-i\varepsilon t'}\right]\exp\left[-\lambda b^\dagger_i e^{i\varepsilon t}\right] = \exp\left[-\lambda b^\dagger_i e^{i\varepsilon t}\right]\exp\left[\lambda b^\dagger_i e^{i\varepsilon t}\right]\exp\left[-\lambda b_j e^{-i\varepsilon t'}\right]\exp\left[-\lambda b^\dagger_i e^{i\varepsilon t}\right] = \exp\left[-\lambda b^\dagger_i e^{i\varepsilon t}\right]\exp\left[-\lambda\left(e^{\lambda b^\dagger_i e^{i\varepsilon t}}\,b_j\,e^{-\lambda b^\dagger_i e^{i\varepsilon t}}\right)e^{-i\varepsilon t'}\right] \quad (65)$

but we know from the Baker-Hausdorff lemma that, for $i=j$, the conjugated operator in the last line of Eq.
(65) can be written as

$e^{\lambda b^\dagger_i e^{i\varepsilon t}}\,b_j\,e^{-\lambda b^\dagger_i e^{i\varepsilon t}} = b_j + \left[\lambda b^\dagger_i e^{i\varepsilon t},\,b_j\right] = b_j - \lambda e^{i\varepsilon t} \quad (66)$

This allows us to write expressions like the first line of Eq. (65) in normal order:

$\exp\left[-\lambda b_j e^{-i\varepsilon t'}\right]\exp\left[-\lambda b^\dagger_i e^{i\varepsilon t}\right] = \exp\left[-\lambda b^\dagger_i e^{i\varepsilon t}\right]\exp\left[-\lambda\left(b_j-\lambda e^{i\varepsilon t}\right)e^{-i\varepsilon t'}\right] = \exp\left[\lambda^{2}e^{i\varepsilon(t-t')}\right]\exp\left[-\lambda b^\dagger_i e^{i\varepsilon t}\right]\exp\left[-\lambda b_j e^{-i\varepsilon t'}\right] \quad (67)$

With the product $X_j(t')X^\dagger_i(t)$ now written in normal order, the operators $\hat b^\dagger_i$ and $\hat b_j$ simply become complex numbers inside the bath average. We may write an explicit form for the phonon correlator using the following identity (equation 4.240 in Ref. [63]):

$\left(1-e^{-\beta\varepsilon_\lambda}\right)\sum_{n_\lambda} e^{-\beta\varepsilon_\lambda n_\lambda}\left\langle n_\lambda\right|\exp\left[-u^{*}b^\dagger_i\right]\exp\left[u\,b_i\right]\left|n_\lambda\right\rangle = e^{-|u|^{2}N_\lambda} \quad (68)$

where $N_\lambda = \left(e^{\beta\varepsilon_\lambda}-1\right)^{-1}$ and we can assign $u = \lambda\left(e^{-i\varepsilon_\lambda t}-e^{-i\varepsilon_\lambda t'}\right)$. Thus, the phonon correlator can be written as

$\left\langle X_j(t')X^\dagger_i(t)\right\rangle_{\rm bath} = \prod_\lambda \exp\left\{-\lambda_\lambda^{2}\left[\left(1-e^{-i\varepsilon_\lambda(t'-t)}\right) + \left|1-e^{i\varepsilon_\lambda(t'-t)}\right|^{2}N_\lambda\right]\right\} \quad (69)$

Defining the phase associated with each bath mode as

$\Phi_\lambda(t,t') = \lambda_\lambda^{2}\left[\left(1-e^{-i\varepsilon_\lambda(t'-t)}\right) + \left|1-e^{i\varepsilon_\lambda(t'-t)}\right|^{2}N_\lambda\right] = \lambda_\lambda^{2}\left[(N_\lambda+1)\left(1-e^{-i\varepsilon_\lambda(t'-t)}\right) + N_\lambda\left(1-e^{i\varepsilon_\lambda(t'-t)}\right)\right] \quad (70)$

we may write the full phonon correlator quite simply as

$\left\langle X_j(t')X^\dagger_i(t)\right\rangle_{\rm bath} = e^{-\sum_\lambda \Phi_\lambda(t,t')} = e^{-\Phi(t,t')} \quad (71)$

Employing the same procedure, we find the other function to be

$\left\langle X^\dagger_i(t)X_j(t')\right\rangle_{\rm bath} = e^{-\Phi(t',t)} \quad (72)$

The time-ordered and anti-time-ordered functions can now be built from the greater and lesser functions in equations (71) and (72). We then approximate the bath modes as infinitely dense in energy space, $\sum_\lambda \to \int d\varepsilon\,\rho(\varepsilon)$. This is all that is necessary for our treatment if one is willing to perform the frequency transforms numerically. In the interest of numerical and analytical tractability, we went further and let the temperature be small enough that $N_\lambda$ is negligible over most of the relevant energy range. This assumption allows us to approximate the bath correlators by the $T\to 0$ functional form appearing in Refs. [27] and [63].
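The closed form (71) can be checked against brute-force operator algebra for a single bath mode in a truncated Fock space. The sketch below (illustrative parameters; `scipy.linalg.expm` for the operator exponentials) compares $\mathrm{Tr}[\rho\,X(\tau)X^\dagger(0)]$ with the analytic $e^{-\Phi(\tau)}$:

```python
import numpy as np
from scipy.linalg import expm

dim, lam, epsl, beta, tau = 60, 0.2, 1.0, 1.0, 0.7   # illustrative values

# Truncated boson operators for a single bath mode
b = np.diag(np.sqrt(np.arange(1, dim)), k=1)
bd = b.conj().T
H = epsl * (bd @ b)

rho = expm(-beta * H)
rho /= np.trace(rho)                                  # thermal bath state

# Heisenberg-picture X(t) = exp[lam (b^dag e^{i eps t} - b e^{-i eps t})]
X = lambda t: expm(lam * (bd * np.exp(1j*epsl*t) - b * np.exp(-1j*epsl*t)))

numeric = np.trace(rho @ X(tau) @ X(0).conj().T)

N = 1.0 / (np.exp(beta * epsl) - 1.0)
Phi = lam**2 * ((N + 1)*(1 - np.exp(-1j*epsl*tau)) + N*(1 - np.exp(1j*epsl*tau)))
analytic = np.exp(-Phi)
print(abs(numeric - analytic))   # ~ 0 up to Fock-space truncation error
```

The agreement is limited only by the Fock-space cutoff, which is harmless here because both the thermal occupation and the displacement $\lambda$ are small.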
Noting that $\Phi(t,t')$ is a function only of the difference $\tau = t-t'$, because the bath is assumed to be at equilibrium with itself, we find

$e^{-\Phi(\tau)} \to \left(1+i\varepsilon\tau\right)^{-\alpha} \quad (73)$

A.2.7 Frequency Transforms of Undriven Functions $\hat g^{0}_{ij}(t,t')$

We wish to find frequency transforms of the functions in eq. (43). For convenience they are reproduced here:

$g^{<,0}_{ij}(t,t') = -\frac{i\,\delta_{ij}}{Z(0)}\sum_{n=0}^{\infty} e^{-\beta\epsilon_n}\,n\,\exp\left[i\left(\epsilon_n-\epsilon_{n-1}\right)(t-t')\right]e^{-\Phi(t-t')}$

$g^{>,0}_{ij}(t,t') = -\frac{i\,\delta_{ij}}{Z(0)}\sum_{n=0}^{\infty} e^{-\beta\epsilon_n}\,(n+1)\,\exp\left[i\left(\epsilon_{n+1}-\epsilon_n\right)(t-t')\right]e^{-\Phi(t'-t)}$

$g^{T,0}_{ij}(t,t') = \theta(t-t')\,g^{<,0}_{ij}(t,t') + \theta(t'-t)\,g^{>,0}_{ij}(t,t')$

$g^{\tilde T,0}_{ij}(t,t') = \theta(t'-t)\,g^{<,0}_{ij}(t,t') + \theta(t-t')\,g^{>,0}_{ij}(t,t')$

where the bath correlators supply the factors $e^{-\Phi(\pm(t-t'))}$, with $\Phi(x)$ given in terms of the boson occupation $N_\lambda = \left(e^{\beta\varepsilon_\lambda}-1\right)^{-1}$ as

$\Phi(x) = \sum_\lambda\left(\frac{g_\lambda}{\varepsilon_\lambda}\right)^2\left[(N_\lambda+1)\left(1-e^{-i\varepsilon_\lambda x}\right)+N_\lambda\left(1-e^{i\varepsilon_\lambda x}\right)\right] \quad (74)$

$\;= \sum_\lambda\left(\frac{g_\lambda}{\varepsilon_\lambda}\right)^2\left[\left(1-e^{-i\varepsilon_\lambda x}\right)+N_\lambda\left|1-e^{i\varepsilon_\lambda x}\right|^{2}\right] \quad (75)$

Now we shall make a computationally useful assumption: the temperature of the bath is of an order comparable to the energy scales of the system but small compared to the energy scales of the environment. This allows us to approximate the bath correlators by their $T\to 0$ (for the bosonic bath) counterparts. We have done this to simplify the analytical form of these bath correlators and make them computationally manageable with our computing resources, but it does not represent a conceptual difficulty for the model. If we required the greater accuracy afforded by not making this approximation, we could merely perform the necessary Fourier transforms of these correlation functions numerically rather than looking for asymptotic approximations. As it is, however, we shall approximate the bath correlators in the zero-temperature limit:

$e^{-\Phi(x)} \to \left(1+i\varepsilon x\right)^{-\alpha} \quad (76)$

We may then begin taking Fourier transforms. The greater and lesser functions are the simplest. Letting $x = t-t'$, we have

$g^{<,0}_{ij}(\omega) = -\frac{i\,\delta_{ij}}{Z(0)}\sum_{n=0}^{\infty} e^{-\beta\epsilon_n}\,n\int dx\,e^{i\omega x}e^{i(\epsilon_n-\epsilon_{n-1})x}e^{-\Phi(x)} \quad (77)$

$\;= -\frac{i\,\delta_{ij}}{Z(0)}\sum_{n=0}^{\infty} e^{-\beta\epsilon_n}\,n\,H\!\left(\omega+\epsilon_n-\epsilon_{n-1}\right) \quad (78)$

$g^{>,0}_{ij}(\omega)$
$= -\frac{i\,\delta_{ij}}{Z(0)}\sum_{n=0}^{\infty} e^{-\beta\epsilon_n}\,(n+1)\int dx\,e^{i\omega x}e^{i(\epsilon_{n+1}-\epsilon_n)x}e^{-\Phi(-x)} \quad (79)$

$= -\frac{i\,\delta_{ij}}{Z(0)}\sum_{n=0}^{\infty} e^{-\beta\epsilon_n}\,(n+1)\,K\!\left(\omega+\epsilon_{n+1}-\epsilon_n\right) \quad (80)$

where the integrals $H$ and $K$ are transforms of the bath correlators. They have the following exact expressions:

$H(\omega) = \int_{-\infty}^{\infty}dx\,e^{i\omega x}\left(1+i\varepsilon x\right)^{-\alpha} = \frac{\pi}{\varepsilon\,\Gamma(\alpha)}\,e^{-|\omega|/\varepsilon}\left(\frac{|\omega|}{\varepsilon}\right)^{\alpha-1}\left(1+\operatorname{sign}\omega\right) \quad (81)$

$K(\omega) = \int_{-\infty}^{\infty}dx\,e^{i\omega x}\left(1-i\varepsilon x\right)^{-\alpha} = \frac{\pi}{\varepsilon\,\Gamma(\alpha)}\,e^{-|\omega|/\varepsilon}\left(\frac{|\omega|}{\varepsilon}\right)^{\alpha-1}\left(1-\operatorname{sign}\omega\right) \quad (82)$

The time-ordered and anti-time-ordered functions are slightly more involved:

$g^{T,0}_{ij}(\omega) = -\frac{i\,\delta_{ij}}{Z(0)}\sum_{n=0}^{\infty} e^{-\beta\epsilon_n}\left[n\int_{-\infty}^{\infty}dx\,e^{i\omega x}\theta(x)\,e^{i(\epsilon_n-\epsilon_{n-1})x}e^{-\Phi(x)} + (n+1)\int_{-\infty}^{\infty}dx\,e^{i\omega x}\theta(-x)\,e^{i(\epsilon_{n+1}-\epsilon_n)x}e^{-\Phi(-x)}\right] = -\frac{i\,\delta_{ij}}{Z(0)}\sum_{n=0}^{\infty} e^{-\beta\epsilon_n}\left[n\,Q\!\left(\omega+\epsilon_n-\epsilon_{n-1}\right) + (n+1)\,Q\!\left(-\omega-(\epsilon_{n+1}-\epsilon_n)\right)\right] \quad (83)$

$g^{\tilde T,0}_{ij}(\omega) = -\frac{i\,\delta_{ij}}{Z(0)}\sum_{n=0}^{\infty} e^{-\beta\epsilon_n}\left[n\int_{-\infty}^{\infty}dx\,e^{i\omega x}\theta(-x)\,e^{i(\epsilon_n-\epsilon_{n-1})x}e^{-\Phi(x)} + (n+1)\int_{-\infty}^{\infty}dx\,e^{i\omega x}\theta(x)\,e^{i(\epsilon_{n+1}-\epsilon_n)x}e^{-\Phi(-x)}\right] = -\frac{i\,\delta_{ij}}{Z(0)}\sum_{n=0}^{\infty} e^{-\beta\epsilon_n}\left[n\,P\!\left(-\omega-\epsilon_n+\epsilon_{n-1}\right) + (n+1)\,P\!\left(\omega+\epsilon_{n+1}-\epsilon_n\right)\right] \quad (84)$

An added simplification is afforded by our consideration of a specific type of dissipation. Our dissipation mechanism is the interaction of lattice bosons with low-energy collective modes of a larger condensate that is not affected by the lattice. In such systems, the dissipation strength $\alpha$ modeled in this way tends to be much smaller than unity, while the energy spread of the interaction $\varepsilon/U$ can be made of order unity. We are therefore justified in approximating the functions $Q$ and $P$ in the $\alpha\ll 1$ limit. This is the subject of the next section.

A.2.7.1 The Functions Q and P

We will now approximate the functions appearing in the expressions (83) and (84) for the time-ordered and anti-time-ordered undriven correlation functions. First we write the complex function $(1\pm i\varepsilon x)^{-\alpha}$ as the product of a modulus and a phase in terms of the unitless variable $\tilde x = \varepsilon x$:

$Q(\omega) = \int_0^\infty dx\,e^{i\omega x}\left(1+i\varepsilon x\right)^{-\alpha} = \frac{1}{\varepsilon}\int_0^\infty d\tilde x\,e^{i\omega\tilde x/\varepsilon}\left(1+\tilde x^2\right)^{-\alpha/2}e^{-i\alpha\tan^{-1}(\tilde x)} \quad (85)$

$P(\omega) = \int_0^\infty dx\,e^{i\omega x}\left(1-i\varepsilon x\right)^{-\alpha} = \frac{1}{\varepsilon}\int_0^\infty d\tilde x\,e^{i\omega\tilde x/\varepsilon}\left(1+\tilde x^2\right)^{-\alpha/2}e^{+i\alpha\tan^{-1}(\tilde x)} \quad (86)$

Now we shall use the aforementioned assumption regarding the weakness of our dissipation. Under the assumption that $\alpha\ll 1$, we may make the following approximation.
$\left(1+\tilde x^2\right)^{-\alpha/2}e^{-i\alpha\tan^{-1}(\tilde x)} = 1 - \frac{\alpha}{2}\ln\left(1+\tilde x^2\right) - i\alpha\tan^{-1}(\tilde x) + O\!\left(\alpha^2\right) \quad (87)$

Inserting this approximation into our expressions for $P$ and $Q$, we note that these two functions can each be written as a sum of three integrals:

$Q(\omega) = \frac{1}{\varepsilon}\left[\int_0^\infty d\tilde x\,e^{i\omega\tilde x/\varepsilon} - \frac{\alpha}{2}\int_0^\infty d\tilde x\,e^{i\omega\tilde x/\varepsilon}\ln\left(1+\tilde x^2\right) - i\alpha\int_0^\infty d\tilde x\,e^{i\omega\tilde x/\varepsilon}\tan^{-1}(\tilde x)\right] \quad (88)$

$P(\omega) = \frac{1}{\varepsilon}\left[\int_0^\infty d\tilde x\,e^{i\omega\tilde x/\varepsilon} - \frac{\alpha}{2}\int_0^\infty d\tilde x\,e^{i\omega\tilde x/\varepsilon}\ln\left(1+\tilde x^2\right) + i\alpha\int_0^\infty d\tilde x\,e^{i\omega\tilde x/\varepsilon}\tan^{-1}(\tilde x)\right] \quad (89)$

All of these integrals can be done by allowing $\omega$ to have a small positive imaginary part $i\eta$. The first integral in each line is simple in the limit $\eta\to 0$:

$\frac{1}{\varepsilon}\int_0^\infty d\tilde x\,e^{i(\omega+i\eta)\tilde x/\varepsilon} = \frac{1}{\varepsilon}\left.\frac{e^{i(\omega+i\eta)\tilde x/\varepsilon}}{i(\omega+i\eta)/\varepsilon}\right|_0^\infty = \frac{i}{\omega} \quad (90)$

The next integral, again the same in both Eq. (88) and Eq. (89), is first done by parts, with the boundary term going to zero, and then by trigonometric substitution:

$\int_0^\infty d\tilde x\,e^{i\omega\tilde x/\varepsilon}\ln\left(1+\tilde x^2\right) = \left.\frac{e^{i(\omega+i\eta)\tilde x/\varepsilon}}{i(\omega+i\eta)/\varepsilon}\ln\left(1+\tilde x^2\right)\right|_0^\infty - \int_0^\infty d\tilde x\,\frac{e^{i(\omega+i\eta)\tilde x/\varepsilon}}{i(\omega+i\eta)/\varepsilon}\,\frac{2\tilde x}{1+\tilde x^2} = \frac{2i\varepsilon}{\omega+i\eta}\int_0^{\pi/2}d\theta\,e^{i(\omega+i\eta)\tan(\theta)/\varepsilon}\tan\theta \quad (91)$

so that the full second term of (88) and (89) becomes $-\frac{i\alpha}{\omega}R\!\left(\frac{\omega}{\varepsilon}\right)$, where $R(x)$ has a somewhat unilluminating (but computationally manageable) form:

$R(x) = e^{x}E_1(x) + e^{-x}\left[E_1(-x)+\ln(-x)\right] + e^{-x}\cosh(x)\left[-2\ln(-ix)+\ln(x)\right] + \sinh(x)\left[-i\pi+\ln(x)\right] \quad (92)$

with $E_1$ the exponential integral. Finally, the last integral in Eq. (88) and Eq. (89) is done in the same manner: after integration by parts (the boundary term again vanishes) and the substitution $\tilde x = \tan\theta$,

$\mp\frac{i\alpha}{\varepsilon}\int_0^\infty d\tilde x\,e^{i\omega\tilde x/\varepsilon}\tan^{-1}(\tilde x) = \mp\frac{i\alpha}{\varepsilon}\,\frac{i\varepsilon}{\omega+i\eta}\int_0^{\pi/2}d\theta\,e^{i(\omega+i\eta)\tan(\theta)/\varepsilon} = \mp\frac{\alpha}{\omega}\,G\!\left(\frac{\omega}{\varepsilon}\right) \quad (93)$

where $G(x)$ has the form

$G(x) = \tfrac{1}{4}e^{-x}\left[-1+e^{2x}-2i\,\Gamma(0,-x)-2i\ln(-x)\right] + \tfrac{1}{4}e^{-x}\left[2i\ln(-ix)+2ie^{2x}\left(\Gamma(0,x)-\ln(-ix)+\ln(x)\right)\right] \quad (94)$

Thus, we are left with

$Q(\omega) = \frac{i}{\omega} - \frac{i\alpha}{\omega}R\!\left(\frac{\omega}{\varepsilon}\right) - \frac{\alpha}{\omega}G\!\left(\frac{\omega}{\varepsilon}\right) \quad (95)$

$P(\omega) = \frac{i}{\omega} - \frac{i\alpha}{\omega}R\!\left(\frac{\omega}{\varepsilon}\right) + \frac{\alpha}{\omega}G\!\left(\frac{\omega}{\varepsilon}\right) \quad (96)$

with $R$ and $G$ defined above.

A.3 Nonequilibrium Phase Boundary of the Bose-Hubbard Model with Periodic Driving

A.3.1 Homogeneous Perturbation

The analysis above can recreate all the equilibrium results within the Keldysh framework. Now we wish to generalize to nonequilibrium systems.
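Before proceeding, two ingredients of the preceding derivation admit quick numerical sanity checks: the small-$\alpha$ linearization (87) is accurate to $O(\alpha^2)$, and the closed form (81) for $H(\omega)$ is normalized so that its inverse transform at $x=0$ returns $(1+i\varepsilon\cdot 0)^{-\alpha}=1$. A sketch with illustrative values of $\alpha$ and $\varepsilon$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# --- Check the small-alpha linearization (87) ---
alpha = 0.01
x = np.linspace(0.0, 5.0, 1001)
exact = (1 + x**2)**(-alpha/2) * np.exp(-1j * alpha * np.arctan(x))
linear = 1 - (alpha/2)*np.log(1 + x**2) - 1j*alpha*np.arctan(x)
err = np.abs(exact - linear).max()
print(err)   # O(alpha^2): exact = e^z, linear = 1 + z, z = -alpha*(log/2 + i*arctan)

# --- Check the closed form of H, Eq. (81) ---
a, eps = 0.3, 1.0
def H(w):
    return (np.pi / (eps * gamma(a)) * np.exp(-abs(w)/eps)
            * (abs(w)/eps)**(a - 1) * (1 + np.sign(w)))

assert H(-2.0) == 0.0            # H is one-sided; K(w) = H(-w) is its mirror

# (1/2pi) int H dw must equal 1.  Substituting w = eps*u**(1/a) removes the
# w**(a-1) endpoint singularity and reduces the check to a Gamma-function identity:
norm, _ = quad(lambda u: np.exp(-u**(1.0/a)) / gamma(a + 1.0), 0, np.inf)
print(norm)                      # ~ 1
```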
Specifically, we would like to include the effect of a perturbation of the form

$H_V(t) = V\sum_i \hat n_i \cos(\omega_p t) \quad (97)$

We shall find that this is insufficient to drive our system out of equilibrium in any meaningful way, as it does not change the probabilities of the site occupation numbers; it merely changes the energies of those occupation states at every site in the same time-dependent fashion. In this sense, the perturbation can be "gauged away." Specifically, we may write the Hamiltonian as

$H(t) = H_0 + H_J + H_V(t) \quad (98)$

where $H_0$ includes the energy relaxation due to the bosonic bath. We know from our earlier analysis that we can expand the Green's function for this Hamiltonian in a Dyson series in the small parameter $J/U$. That means we may write the total Green's function in terms of nonequilibrium correlation functions that treat the Hamiltonian $H_V(t)$ to all orders. However, when we find these nonequilibrium functions explicitly, we shall note that the nonequilibrium character manifests only as an unimportant global phase factor, because $H_V(t)$ is constant in space. Take, for example, the lesser function with respect to $H_0 + H_V(t)$.