ABSTRACT

Title of dissertation: CONTROLLING MOLECULAR-SCALE MOTION: EXACT PREDICTIONS FOR DRIVEN STOCHASTIC SYSTEMS

Jordan Michael Abel Horowitz, Doctor of Philosophy, 2010

Dissertation directed by: Professor Christopher Jarzynski, Department of Chemistry and Biochemistry and Institute for Physical Science and Technology

Despite inherent randomness and thermal fluctuations, controllable molecular devices or molecular machines are currently being synthesized around the world. Many of these molecular complexes are non-autonomous in that they are manipulated by external stimuli. As these devices become more sophisticated, the need for a theoretical framework to describe them becomes more important. Many non-autonomous molecular machines are modeled as stochastic pumps: stochastic systems that are driven by time-dependent perturbations. A number of exact theoretical predictions have been made recently describing how stochastic pumps respond to arbitrary driving. This work investigates one such prediction, the current decomposition formula, and its consequences.

The current decomposition formula describes how stochastic systems respond to non-adiabatic time-dependent perturbations. This formula is derived for discrete stochastic pumps modeled as continuous-time Markov chains, as well as continuous stochastic pumps described as one-dimensional diffusions.

In addition, a number of interesting consequences following from the current decomposition formula are reported. For stochastic pumps driven adiabatically (slowly), the response can be given a purely geometric interpretation. The geometric nature of adiabatic pumping is then exploited to develop a method for controlling non-autonomous molecular machines.
As a second consequence of the current decomposition formula, a no-pumping theorem is proved which provides conditions under which stochastic pumps with detailed balance exhibit no net directed motion in response to non-adiabatic cyclic driving. This no-pumping theorem provides an explanation of experimental observations made on 2- and 3-catenanes.

CONTROLLING MOLECULAR-SCALE MOTION: EXACT PREDICTIONS FOR DRIVEN STOCHASTIC SYSTEMS

by Jordan Michael Abel Horowitz

Dissertation submitted to the Faculty of the Graduate School of the University of Maryland, College Park in partial fulfillment of the requirements for the degree of Doctor of Philosophy, 2010

Advisory Committee:
Professor Christopher Jarzynski, Chair
Professor Millard Alexander
Professor Michael E. Fisher
Professor Bei-Lok Hu
Professor J. Robert Dorfman

© Copyright by Jordan Michael Abel Horowitz 2010

Acknowledgements

There are a number of mentors whom I have been fortunate to know and whose guidance was invaluable. Many thanks to my high school chemistry teacher Edwin Van Dam for his selfless mentorship and for making science fun. I would also like to thank Robert Dorfman for his kind help and for his written recommendations. I am especially grateful to my advisor Christopher Jarzynski for his patient guidance and thoughtful advice. He has shown me how to be a scientist and has taught me the importance of clear and precise thinking. I am also very appreciative of Saar Rahav, whose collaboration was integral to the successful completion of this project. My gratitude extends to Suriyanarayanan Vaikuntanathan and Andy Ballard for their support, thoughtful discussion, and friendship. Thank you Christopher Bertrand, Jonah Kanner, and Ryan Behunin; your friendship throughout graduate school has been very important to me. To Scott Loeb, Josh Terebelo, David Navarre, and Meggan Weyand, thank you for being there. I am profoundly grateful to my family for their love and support.
Thank you Mom and Dad; without you none of this would have been possible.

Contents

Introduction

1 Setup for Discrete Stochastic Pumps
  1.1 Mathematical Framework
    1.1.1 Characterization of the Frozen Dynamics
    1.1.2 Algebraic Properties of the Transition Rate Matrix and the Generalized Inverse
  1.2 Detailed Balance

2 Current Decomposition Formula
  2.1 Derivation 1: Generalized Inverse
    2.1.1 Detailed balance and V̂_ij
    2.1.2 Explicit Expression for V̂_ij
  2.2 Derivation 2: Cramer's Rule
  2.3 Relationship between Derivations

3 Adiabatic Pumping and Geometric Phases
  3.1 Geometric Adiabatic Pumping
    3.1.1 Geometric Adiabatic Pumping with Detailed Balance
  3.2 Quantization and Topological Adiabatic Pumping
  3.3 Geometric Structure of the Adiabatic Integrated Current

4 No-Pumping Theorem
  4.1 No-Pumping Theorem for Discrete Stochastic Pumps with Detailed Balance
  4.2 Illustration
  4.3 Alternative Derivation of the No-Pumping Theorem
  4.4 No-Pumping Theorem as a Consequence of the Pump-Restriction Theorem

5 Adiabatic Control Theory
  5.1 Controlling Stochastic Pumps with Cyclic Adiabatic Protocols
    5.1.1 Implications of Probability Conservation in Cyclic Processes
    5.1.2 Control Method
  5.2 Constraints on Control

6 Continuous Stochastic Pumps
  6.1 Mathematical Framework
    6.1.1 Detailed Balance for Diffusion Processes
  6.2 Current Decomposition Formula
  6.3 Adiabatic Pumping
  6.4 No-Pumping Theorem for Diffusions with Detailed Balance
  6.5 Rectification of Current Requires Broken Symmetry

Conclusion

A Specifying the Orientation of a Plane
B Connection between Discrete and Continuous No-Pumping Theorems

List of Figures

1 Depiction of a nanocar
2 Illustration of a rotaxane
1.1 Graph of the state space of a four-state discrete stochastic pump
1.2 Illustration of a probability distribution decomposition for a three-state discrete stochastic pump
1.3 Examples of cycles
3.1 Illustration of the geometric formula for the integrated current for adiabatic cyclic pumping
3.2 Illustration of the connection holonomy
4.1 Illustration of a 3-catenane
4.2 Graph for a model of a 2-catenane
4.3 Energy landscape for a model of a 2-catenane
4.4 Integrated current for a non-adiabatic cycle
4.5 Graph of a four-state stochastic pump illustrating the pump-restriction theorem
5.1 Graph of the state space of a four-state discrete stochastic pump
5.2 Graph illustrating the method for identifying a graph's chords
5.3 Example of a fundamental set of cycles
5.4 Illustration of the control strategy utilizing infinitesimal adiabatic protocols
5.5 Illustration of the orientation bi-vector
6.1 Symmetric ratchet potential

List of Symbols

This list describes each symbol used in this dissertation and references the location (and equation) of its first appearance.

⊗ tensor product - Sec. 3.1
∧ wedge product - Sec. 3.1, Eq. 3.12
| | Euclidean norm - Sec. 5.2
0^T = (0, ..., 0) zero vector - Sec. 1.1.2, Eq. 1.19
1^T = (1, ..., 1) one vector - Sec. 1.1.2, Eq. 1.19
A connection one-form - Sec. 3.3
A, D, P, Q, U, Y matrix - Sec. 1.1.2
A^+ generalized inverse of the matrix A - Sec. 1.1.2, Eq. 1.20
A_λ(x) drift coefficient - Sec. 6.1, Eq. 6.1
a, b, c, r_n, r^+_n, v, w, ?w_n N-dimensional vector - Sec. 1.1.2
~α, ~β, ~γ L-dimensional vector - Sec. 1.1, Sec. 3.1
B(~λ), B_ij barrier energy matrix - Sec. 1.1.1, Eq. 1.15
B_λ(x) diffusion coefficient - Sec. 6.1, Eq. 6.1
C cycle - Sec. 1.2
C_c cycle associated to chord c - Sec. 5.1.1
C number of cycles (or chords) - Sec. 5.1.1, Eq. 5.2
D two-dimensional plane - Sec. 3.1
D_d two-dimensional plane enclosed by ~λ_d(t) - Sec. 5.1.2
d exterior derivative - Sec. 3.1
E_C fundamental set - Sec. 5.1.1
E number of edges of a graph - Sec. 1.1
E_i(~λ) state energy - Sec. 1.1.1, Eq. 1.11
E(x) continuous analog of E_i - Appendix B
?e_1, ..., ?e_N, ?e′ linear equation - Sec. 2.2
e_i basis vector - Appendix A
F(~λ) free energy - Sec. 1.1.1
f(~λ) function of ~λ - Sec. 3.1
f(x) function of x - Sec. 6.1.1
f_j weighted time integral of p - Sec. 4.3, Eq. 4.27
f^eq_j weighted time integral of p^eq - Sec. 4.3, Eq. 4.23
Ĝ_λ(x) pseudoinverse of L̂_λ(x) - Sec. 6.2, Eq. 6.18
G graph - Sec. 1.1
G_0, G_1 subgraphs of G - Sec. 4.4
g^l_jk average number of visits - Sec. 2.1.2, Eq. 2.24
g_λ(x, x′) modified Green's function - Sec. 6.2, Eq. 6.19
H_ij differential form for adiabatic pumping - Sec. 3.1, Eq. 3.11
~H_ij vector for adiabatic pumping - Sec. 3.1, Eq. 3.6
I identity matrix - Sec. 1.1.2
Ĵ_ij, J^k_ij current operator - Sec. 1.1, Eq. 1.6
Ĵ_λ(x) current operator - Sec. 6.1, Eq. 6.3
J_ij(t) current - Sec. 1.1, Eq. 1.4
J^s_ij(~λ) stationary current - Sec. 1.1.1, Eq. 1.9
J^ex_ij(t) excess current - Chap. 2, Eq. 2.2
J(x, t) current - Sec. 6.1, Eq. 6.3
J^ex(x, t) excess current - Sec. 6.2, Eq. 6.17
J^s_λ stationary current - Sec. 6.1
k_B Boltzmann's constant - Sec. 1.1.1
L̂_λ(x) Fokker-Planck operator - Sec. 6.1, Eq. 6.1
L̂†_λ(x) adjoint Fokker-Planck operator - Sec. 6.1.1, Eq. 6.13
L number of external parameters - Sec. 1.1; interval length - Sec. 6.1
M total space - Sec. 3.3
N number of configurations (or states) - Sec. 1.1
N(A) null space of the matrix A - Sec. 1.1.2
n, n_mn orientation bi-vector - Sec. 5.2, Eq. 5.7
Ô_ij, O^k_ij linear operator - Sec. 1.1, Eq. 1.7
P plane - Appendix A, Eq. A.1
P_ji path from state i to state j - Sec. 1.2
P(z′, t′ | z, t) transition probability for a Markov process - Sec. 1.2, Eq. 1.36
P^s(z) stationary probability density - Sec. 1.2, Eq. 1.36
P(x, t) probability density - Sec. 6.1, Eq. 6.1
P^s_λ(x) stationary probability density - Sec. 6.1
p(t), p_i probability distribution - Sec. 1.1, Eq. 1.1
ṗ time derivative of the probability distribution p - Sec. 1.1, Eq. 1.1
ṗ′ modified time derivative of p - Sec. 2.2, Eq. 2.27
p^s(~λ), p^s_i stationary probability distribution - Sec. 1.1.1, Eq. 1.8
p^eq(~λ), p^eq_i equilibrium probability distribution - Sec. 1.1.1, Eq. 1.11
q_ij branching fraction - Sec. 2.2, Eq. 4.22
R(~λ), R_ij transition rate matrix - Sec. 1.1, Eq. 1.1
R^+ generalized inverse of the transition rate matrix R - Sec. 1.1.2
R′ modified transition rate matrix R - Sec. 2.2, Eq. 2.28
R′_m modified R′ - Sec. 2.2, Eq. 2.29
R(A) range of the matrix A - Sec. 1.1.2
R real numbers - Sec. 3.3
r′ determinant of R′ - Sec. 2.2, Eq. 2.30
r′_lm cofactor of R′_lm - Sec. 2.2
S vector space - Appendix A
T temperature - Sec. 1.1.1
V̂_ij(~λ), V^k_ij current response kernel - Chap. 2, Eq. 2.1
V_λ(x, x′) current response kernel - Sec. 6.2, Eq. 6.16
v_ij(~λ) function of ~λ - Chap. 2
v_γ tangent vector to γ(s) - Sec. 3.3
W(C) weight of cycle C - Sec. 1.2
W_ij(~λ) barrier energy - Sec. 1.1.1, Eq. 1.13
W(x) continuous analog of W_ij - Appendix B
x position - Sec. 6.1
x_k local coordinates - Sec. 5.2
Z(~λ) partition function - Sec. 1.1.1
z, z′ microscopic configuration - Sec. 1.1.1
z, z_ij bi-vector - Sec. 5.2
z_P bi-vector associated to the plane P - Appendix A, Eq. A.2
α, δ scalar - Sec. 1.1.2, Sec. 6.5
α^+ generalized inverse of the scalar α - Sec. 1.1.2, Eq. 1.21
γ(s) path through parameter space - Sec. 3.3
Δp N-dimensional vector - Sec. 1.1.2
δ_ij Kronecker delta - Sec. 1.1
δ(x − x′) Dirac delta function - Sec. 6.1.1
δ^{stmn}_{ijkl} multi-dimensional Kronecker delta - Appendix A, Eq. A.5
∂D boundary of D - Sec. 3.1
∂_{x_k} derivative along the coordinate direction x_k - Sec. 5.2
ε area of D - Sec. 5.1.2
θ_j(~λ) escape rate - Sec. 4.3, Eq. 4.21
θ(x − y) Heaviside step function - Sec. 6.2
Λ parameter space - Sec. 3.3
~λ = (λ_1, ..., λ_L) L-dimensional vector of external parameters - Sec. 1.1
~λ_t, ~λ(t) external parameter protocol - Sec. 1.1
~λ_d(t) external parameter protocol for control method - Sec. 5.1.2
ν, ω differential forms - Sec. 3.1
Ξ(x) ratio of drift to diffusion coefficient - Sec. 6.4, Eq. 6.39
Π diagonal matrix - Sec. 1.1.1, Eq. 1.16
π projection mapping - Sec. 3.3
π_λ(x) splitting probability - Sec. 6.2, Eq. 6.26
ρ^l_jk average waiting time - Sec. 2.1.2, Eq. 2.14
Σ_ij differential form for adiabatic pumping - Sec. 3.1, Eq. 3.8
~Σ_ij(~λ) vector for adiabatic pumping - Sec. 3.1, Eq. 3.4
~Σ_λ(x) vector for adiabatic pumping - Sec. 6.3, Eq. 6.36
σ^k_ij product of elements and cofactors of R′ - Sec. 2.2, Eq. 2.32
τ_lk mean first exit time - Sec. 2.1.2, Eq. 2.15
τ_λ(x) conditional mean first exit time - Sec. 6.2, Eq. 6.27
Φ_ij(t) integrated current - Sec. 1.1, Eq. 1.5
Φ^s_ij(t) stationary integrated current - Chap. 2, Eq. 2.5
Φ^ex_ij(t) excess integrated current - Chap. 2, Eq. 2.4
Φ(x, t) integrated current - Sec. 6.1, Eq. 6.4
Φ^s(t) stationary integrated current - Sec. 6.3
Φ^ex(x, t) excess integrated current - Sec. 6.3, Eq. 6.35
φ_i(~λ) potential - Sec. 1.1.1, Eq. 1.12
φ_λ(x) potential - Sec. 6.1, Eq. 6.5
χ path in G - Sec. 3.2
Ψ excess integrated current space - Sec. 3.3
ψ_λ(x) auxiliary function - Sec. 6.1, Eq. 6.5

Introduction

The last fifty years has seen a rapid miniaturization of technology. Ever smaller artificial machines are executing ever more complex functions, yet the smallest man-made machines remain considerably larger than the molecular machines found in nature. Inside each living cell an astounding diversity of tasks are performed by molecular complexes only a few nanometers in size [1, 2]. Inspired by the effectiveness of biological molecular machines and motivated by the desire to master molecular behavior, chemists have been synthesizing molecular structures with interconnected movable mechanical components, whose mechanical motions can be manipulated [3, 4, 5, 6]. Although these structures are rudimentary in comparison with their biological counterparts, the prospect of someday developing artificial molecular machines that rival the ingenuity of nature is exciting.

Generally speaking, a machine is a mechanical apparatus composed of interconnected mechanical parts which converts energy into useful mechanical work. A molecular machine is a molecular device in which some stimulus triggers mechanical motion resulting in the performance of a useful task [3].

The microscopic world in which molecular machines operate is far removed from our everyday experience: viscous forces dominate inertial ones and mechanical motions are swamped by violent thermal fluctuations [7, 8]. Hence, the intuition gleaned from engineering macroscopic machines is often misleading when applied to molecular machines; their molecular nature is not fully captured by traditional macroscopic descriptions. Specialized theories and principles are required.

Molecular machines can be divided into two classes, autonomous and non-autonomous, depending on the method used to power them. Autonomous molecular machines are fueled by a steady supply of energy (e.g. chemical reactions or sunlight); by contrast, non-autonomous molecular machines are driven by the variation of macroscopic external parameters, such as electromagnetic fields, temperature, and chemical potentials.

Biological molecular machines operating inside living cells are typically autonomous. One example is the enzyme F0F1-ATPase (colloquially called the world's smallest wind-up toy [9]), which synthesizes adenosine triphosphate (ATP) [2]. By harnessing the electrochemical energy stored in a transmembrane pH gradient, F0F1-ATPase rotates, generating a torque that synthesizes ATP. The energy stored in ATP can then be used to power other molecular machines. For instance, the motor proteins kinesin and dynein consume ATP in order to transport cargo (e.g. vesicles) throughout a cell [1]. Kinesin and dynein each have a pair of motor domains, which look and act very much like feet. ATP hydrolysis propels these motor domains into a stepping motion, allowing kinesin and dynein to "walk" along a cellular scaffolding (microtubules) while pulling a load. Another interesting example is the transmembrane enzyme sodium-calcium exchanger (NCX).
Through a series of conformational changes, NCX channels the free energy stored in sodium's transmembrane electrochemical potential gradient into the directed transport of calcium ions out of the cell, against their concentration gradient [10, 11]. Each of the above examples (F0F1-ATPase, kinesin, dynein, and NCX) is an autonomous molecular machine; each accomplishes its task through a series of mechanical motions powered by a constant energy source: pH imbalance, ATP hydrolysis, or an ion concentration gradient.

Some biological molecular machines are able to couple to external stimuli, allowing them to act as non-autonomous molecular machines. Inside a cell, the transmembrane enzyme Na,K-ATPase is autonomous: it utilizes the energy in ATP to pump sodium and potassium ions across the cell membrane [2]. Nevertheless, in vitro experiments have demonstrated that an external oscillating electric field can also induce the Na,K-ATPase to pump ions [12, 13], opening the possibility for the Na,K-ATPase to operate non-autonomously.

Although the theoretical study of biological autonomous molecular machines has a thirty-year history [14], the past decade has been especially active [15]. Interest has grown due to recent experimental advances that allow one to monitor and control the motion of individual molecular machines (e.g. the experiment by Noji et al. [16] and the review of Kolomeisky and Fisher [15]). These experiments reveal new microscopic details that have prompted the development of accurate microscopic models [15, 17, 18]. We now understand that molecular machines operate by exploiting mechanochemistry, the coupling of (bio)chemical processes to mechanical motion [14, 15, 19].

In contrast to the autonomous machines of nature, artificial molecular devices generally are non-autonomous.
Controllable artificial molecular complexes have varied structures [3, 4, 5, 6], with imaginative names like DNA tweezers [20], molecular gears [21], and molecular rotors [22]. Nanocars, pictured in Fig. 1, provide a typical example [23, 24].

Figure 1: Depiction of a nanocar (foreground) and a trimer molecule (background). The nanocar is formed from an I-shaped molecular complex with four attached fullerene balls. Due to the geometry of the nanocar, motion is only possible perpendicular to the molecular axles, as depicted by the arrow. The trimer molecule is similar to the nanocar except its molecular structure forbids translational motion, allowing only a pivoting (or rotational) motion. Reprinted with permission from Shirai et al. [23].

As the name suggests, a nanocar is a microscopic interpretation of an automobile. A nanocar's chassis is formed from an I-shaped molecular frame attached to four fullerene balls, which act as wheels. Due to the chassis's geometry, only motion perpendicular to the axles is feasible. Undirected motion can be stimulated by thermal energy, or directed motion can
by stabilizing or destabilizing particular con- figurations) leaving only the desired motions. Configurational changes then occur due to thermal noise: the fluctuations ?do the work? for us [25]. One example is a rotaxane [26], which is composed of a ring molecule threaded onto a molecular axle, see Fig. 2. By al- Figure 2: Illustration of a rotaxane, which is a molecular complex formed by threading a molecular ring onto a molecular axle. The molecular ring sits at one of two binding sites, pictured here as rectangles. By altering the chemical environment surrounding the rotaxane, the ring can be compelled to jump from one binding site to the other. tering the pH and by electrochemically oxidizing the rotaxane, the ring can be made to shuttle back and forth along the axle on command. Other examples include a bi-pedal DNA walker that can progressively walk along a DNA track [27]. The walker is powered by the sequential addition (and removal) of various strands of ?DNA fuel?. The preceding molecular complexes are not machines; they do not perform mechanical work. They are, none the less, molecular structures in which large-scale molecular motion is stimulated by an external perturbation. Developing autonomous artificial molecular complexes is another active area of re- 4 search. An autonomous polymerization motor powered by DNA hybridization has been successfully operated [28]. In addition, an autonomous DNA walker has been synthe- sized whose motion is mediated by endonuclease and ligase, which are ATP-powered en- zymes [29]. Given the diversity of non-autonomous molecular complexes, both artificial and natu- ral, there is a growing interest in constructing a general theoretical framework. The broad goal is to comprehend how molecular-scale systems respond to time-dependent perturba- tions and then to apply that understanding to the development of controllable molecular complexes. 
One specific aim is to devise robust methods or strategies for manipulating molecular systems using external stimuli, even though the dynamical evolution of molecular systems is random and erratic.

The operation of a non-autonomous molecular machine depends upon the controlled manipulation of the molecular machine's mechanical components; for example, the controlled switching of the molecular ring in a rotaxane (see Fig. 2). However, thermal fluctuations cause the molecular motion to be random. Consequently, a fruitful theoretical description of non-autonomous molecular machines is to model them as systems which make random transitions between different mesoscopic configurations (or states). The effects of external perturbations are captured by making the dynamics depend on a set of externally controlled parameters ~λ. The objective in operating a molecular machine is then to induce controlled transitions between different configurations by varying these external parameters with time, ~λ(t). Such models are called stochastic pumps [30, 31, 32, 33] since they are stochastic processes in which directed motion or flow is "pumped" in response to the time-dependent variation of the external parameters. Stochastic pumps whose states are discrete are called discrete stochastic pumps, and will be described using continuous-time Markov chains. For continuous stochastic pumps, by contrast, the states are labeled by a continuous variable, and the evolution will be modeled as a diffusion process.

In addition to the non-autonomous molecular machines discussed above, many Brownian ratchets can be considered stochastic pumps. (Excellent reviews of Brownian ratchets can be found in the articles by Astumian and Hänggi [8], Reimann [34], Hänggi and Marchesoni [35], and Astumian [36].) Brownian ratchets are a family of models designed to investigate transport phenomena in driven, nonequilibrium systems.
Additionally, they are required to be spatially periodic with unbiased driving. Interestingly, some Brownian ratchets can act as true non-autonomous machines, capable of performing a useful task: for example, the separation of microscopic particles based on their size, as proposed by Faucheux and Libchaber [37].

Many studies of stochastic pumps focus on models of specific biological molecular machines or Brownian ratchets [38, 39, 40, 41, 42]. Model-independent predictions have been restricted to limiting cases, such as adiabatic driving, where the external parameters are driven slowly [30, 43, 44, 45, 46], or weak oscillatory perturbations [47, 48, 49, 50]. However, Rahav, Horowitz, and Jarzynski [31]; Horowitz and Jarzynski [33]; Chernyak and Sinitsyn [51, 52]; and Maes, Netočný, and Thomas [53] have recently derived a number of exact theoretical results that apply to generic stochastic pumps irrespective of the strength and speed of the driving. In this dissertation, I discuss one of these exact theoretical results developed by Rahav, Horowitz, and Jarzynski [31] as well as Horowitz and Jarzynski [33], the current decomposition formula, and investigate its consequences.

The current decomposition formula is a new theoretical prediction which describes the response of a stochastic pump to arbitrary time-dependent stimuli. This formula is a decomposition of the flow generated in a stochastic pump into the sum of two contributions: a stationary term that vanishes if the underlying dynamics are at equilibrium (that is, satisfy detailed balance), and an excess or "pumped" term produced by the time-dependent variation of the external parameters. The current decomposition formula leads to two interesting theoretical predictions. The first consequence is that for adiabatic driving the net flows produced are given by a geometric formula.
The second consequence is a new no-pumping theorem, a set of conditions under which zero net current is generated during a cyclic variation of the external parameters.

The discussion of stochastic pumps begins in Chap. 1 with a review of the basic mathematical tools required to describe discrete stochastic pumps. Then in Chap. 2, the current decomposition formula is derived for discrete stochastic pumps using two complementary methods. Consequences of the current decomposition formula for discrete stochastic pumps are presented in Chaps. 3 and 4: Chap. 3 contains a discussion of adiabatic geometric pumping, and Chap. 4 proves and illustrates the no-pumping theorem. Building on the intuition gained from studying the current decomposition formula, I investigate a control method for discrete stochastic pumps in Chap. 5 based on the geometric formula for adiabatic pumping. The current decomposition formula and its consequences for continuous stochastic pumps are taken up in Chap. 6, where the content of Chaps. 1-4 is reinterpreted in the context of continuous systems.

Chapter 1
Setup for Discrete Stochastic Pumps

This chapter serves to review the basic mathematics of stochastic pumps and to fix notation. Section 1.1 introduces continuous-time Markov chains on a finite graph as a mathematical model describing the random nature of transitions among the states of a discrete stochastic pump. Following the introduction of the basic mathematical tools, Sec. 1.2 reviews detailed balance in Markovian stochastic processes, which plays an important role in later analyses.

1.1 Mathematical Framework

A discrete stochastic pump is modeled as a continuous-time Markov chain making random transitions among N microscopic states or configurations, with state-to-state transition rates that depend on the current configuration and not on the past configurations.
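A continuous-time Markov chain of this kind can be sampled directly with the standard Gillespie algorithm: wait an exponentially distributed time set by the total escape rate of the current state, then jump to a neighbor with probability proportional to its rate. The following minimal sketch illustrates this; the four-state ring graph with unit rates is an illustrative assumption, not a model taken from the text:

```python
import numpy as np

def gillespie(R, i0, t_max, rng):
    """Simulate a continuous-time Markov chain with rate matrix R.

    R[i, j] (i != j) is the rate to jump from state j to state i; the
    next jump depends only on the current state (Markov property).
    """
    t, i = 0.0, i0
    path = [(t, i)]
    while True:
        rates = R[:, i].copy()
        rates[i] = 0.0                     # off-diagonal rates out of state i
        total = rates.sum()                # |R_ii|: total escape rate from i
        t += rng.exponential(1.0 / total)  # exponentially distributed waiting time
        if t > t_max:
            break
        i = int(rng.choice(len(rates), p=rates / total))  # choose next state
        path.append((t, i))
    return path

# Illustrative four-state ring with uniform unit rates
N = 4
R = np.zeros((N, N))
for j in range(N):
    for i in ((j + 1) % N, (j - 1) % N):
        R[i, j] = 1.0
np.fill_diagonal(R, -R.sum(axis=0))        # each column sums to zero (Eq. 1.2)

path = gillespie(R, i0=0, t_max=100.0, rng=np.random.default_rng(0))
```

Counting the net number of jumps across a chosen edge in `path` gives a stochastic estimate of the integrated current introduced below.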
Each microscopic state, for example, could be a distinct configuration of a molecular complex, the quantities of chemically reacting molecules, or the location of a mesoscopic particle. A convenient way to visualize the states and the possible transitions among them is with a graph G [54]; see Fig. 1.1. Each vertex of G, labeled i = 1, ..., N, represents one of the states of the system. The E edges that connect pairs of vertices represent possible state-to-state transitions, and the graph is assumed to be connected: there is a path along the graph's edges that connects any pair of states.

Figure 1.1: Graph representing the states and possible transitions for a four-state stochastic pump. The vertices, pictured as black dots, represent microscopic states or configurations of the system. The edges, pictured as straight black lines, represent allowed transitions between pairs of connected states.

The probability distribution p_i(t) to observe the system in state i at time t evolves according to the master equation [55]

ṗ(t) = R(~λ) p(t),    (1.1)

where p = (p_1, ..., p_N)^T and ṗ = dp/dt. The vector ~λ = (λ_1, ..., λ_L) denotes a set of L external parameters, which enter the dynamics through the transition rate matrix R(~λ). The off-diagonal elements R_ij ≥ 0 of this matrix are the probability rates to jump from state j to state i, and the diagonal elements are determined to ensure probability conservation,

R_jj = −∑_{i≠j} R_ij.    (1.2)

Thus, the quantity |R_jj| represents the total probability rate to exit state j. For simplicity, let us assume that R_ij = 0 if and only if R_ji = 0, implying that if two vertices i and j are connected by an edge, then transitions are possible in both directions.

If the parameters ~λ are held fixed, then the dynamics represented by Eq. 1.1 are autonomous. However, we are interested in the non-autonomous case in which the system is "pumped"
by varying the external parameters with time according to a specific protocol ~λ_t [or ~λ(t)] from ~λ_0 = ~α at time t = 0 to ~λ_t = ~β at time t.

Since probability is conserved it is natural to cast Eq. 1.1 as a continuity equation

ṗ_i = ∑_j J_ij,    (1.3)

where the instantaneous current

J_ij(t) = R_ij p_j − R_ji p_i    (1.4)

is the average number of transitions from state j to state i per unit time. Since we are interested in the net number of transitions between any pair of states, we will investigate the integrated current

Φ_ij(t) = ∫_0^t dt′ J_ij(t′),    (1.5)

which represents the net flow of probability from j to i during the time interval 0 < t′ < t. Physically, directed motion in a molecular machine corresponds to a nonzero value of the integrated current.

As we see in Eq. 1.4, the current J_ij is a linear function of the components of p. It is convenient to formalize this relationship by introducing the current operator Ĵ_ij: rewriting Eq. 1.3 we have

J_ij = ∑_k (R_ij δ_jk − R_ji δ_ik) p_k = ∑_k J^k_ij p_k = Ĵ_ij p,    (1.6)

where δ_ij is the Kronecker delta. In my notation, an italicized capital letter with a hat, such as Ĵ_ij, denotes a linear operator which maps an N-dimensional vector (such as p or ṗ) to the current flowing between two states. The action of any such linear operator Ô_ij on an
The assumptions that the graph G is connected and that R_{ij} = 0 if and only if R_{ji} = 0 guarantee that for each fixed \vec{\lambda} there is a unique stationary distribution p^s(\vec{\lambda}) satisfying [55, 56]

    R\, p^s = 0,    (1.8)

with stationary currents

    J^s_{ij}(\vec{\lambda}) = \hat{J}_{ij}\, p^s = R_{ij}\, p^s_j - R_{ji}\, p^s_i.    (1.9)

Thus, if the external parameters \vec{\lambda} are held fixed, the system is guaranteed to relax, in the infinite-time limit, to the unique distribution p^s(\vec{\lambda}).

If the additional condition

    R_{ij}\, p^s_j = R_{ji}\, p^s_i    (1.10)

is satisfied for all i, j, I will say that the frozen dynamics satisfy detailed balance. If Eq. 1.10 is not satisfied, I will say that the frozen dynamics violate detailed balance or that detailed balance is broken. Section 1.2 elaborates on the definition and interpretation of detailed balance. Here, I mention that when detailed balance is satisfied the stationary distribution can be identified with the canonical equilibrium distribution p^s = p^{eq} given by

    p^{eq}_i(\vec{\lambda}) = \frac{e^{-E_i(\vec{\lambda})}}{Z(\vec{\lambda})} = e^{F(\vec{\lambda}) - E_i(\vec{\lambda})},    (1.11)

where the E_i(\vec{\lambda}) are state energies, Z(\vec{\lambda}) = \sum_i e^{-E_i(\vec{\lambda})} is the partition function, F(\vec{\lambda}) = -\ln Z(\vec{\lambda}) is the free energy, and k_B T = 1 sets the units of energy.

The transition rate matrix R(\vec{\lambda}) completely specifies the frozen dynamics. Moreover, the transition rate matrix together with the stationary distribution p^s(\vec{\lambda}) can be used to define an alternative pair of quantities that characterize the frozen dynamics equally well and that will prove useful in a subsequent discussion of the no-pumping theorem in Chap. 4. These quantities are the potential \phi_i(\vec{\lambda}), defined in terms of the stationary distribution,

    \phi_i = -\ln p^s_i,    (1.12)

and the barrier energies W_{ij}(\vec{\lambda}), defined through the equation

    R_{ij} = e^{-(W_{ij} - \phi_j)}.    (1.13)

This nomenclature is motivated by the observation that if the frozen dynamics satisfy detailed balance, then the barrier energies are symmetric, W_{ij} = W_{ji} (see Sec. 1.2), allowing us to interpret R_{ij} in Eq.
1.13 as the rate for a thermally activated transition out of an energy well of depth \phi_j over an energy barrier of height W_{ij} (at temperature k_B T = 1). Although this interpretation breaks down if the frozen dynamics violate detailed balance (W_{ij} \neq W_{ji}), I will use the terms ``potential'' and ``energy barriers'' for the quantities in Eqs. 1.12 and 1.13, regardless of whether or not Eq. 1.10 is satisfied. When detailed balance is satisfied, the potential is related to the state energies in Eq. 1.11 as

    \phi_i = E_i - F.    (1.14)

Equation 1.13 may be cast in the matrix form

    R = B\, P^{-1},    (1.15)

by introducing the diagonal matrix

    P = diag(p^s_1, ..., p^s_N)    (1.16)

together with the barrier energy matrix B with off-diagonal elements

    B_{ij} = e^{-W_{ij}}    (1.17)

and diagonal elements

    B_{jj} = -\sum_{i \neq j} B_{ij}.    (1.18)

A consequence of Eq. 1.18 is that the rank of B, the number of linearly independent columns (or rows), is N - 1.

1.1.2 Algebraic Properties of the Transition Rate Matrix and the Generalized Inverse

Transition rate matrices, such as R, are generally non-symmetric, finite matrices, with the special property that the elements of each column sum to zero, see Eq. 1.2. In this section some of the linear-algebraic properties of R are reviewed, since they play a fundamental role in the subsequent investigations.

Conservation of probability (Eq. 1.2) can be written as

    1^T R = 0^T,    (1.19)

which means that the vector 1^T = (1, ..., 1) is a left eigenvector of R with eigenvalue zero; the corresponding right eigenvector is the stationary distribution p^s (Eq. 1.8), and it is unique since G is connected. The uniqueness of p^s together with Eq. 1.8 indicates that the null space of R, denoted N(R) and consisting of those vectors p for which R p = 0, is the one-dimensional vector space spanned by p^s.

Because R has a zero eigenvalue, it is not invertible (det R = 0). However, R does possess a non-unique generalized inverse, which we denote by R^+.
For any finite matrix A, there exists a non-unique generalized inverse A^+ with the defining property [57]

    A A^+ A = A.    (1.20)

The generalized inverse is the natural generalization of the inverse matrix to singular and non-square matrices (for an invertible matrix U, the ordinary inverse U^{-1} is a generalized inverse).

A few examples serve to illustrate the generalized inverse. For any scalar a, the generalized inverse is

    a^+ = r if a = 0, and a^+ = a^{-1} otherwise,    (1.21)

where r is any number. Another example is the generalized inverse of a diagonalizable matrix Y with eigenvalues y_1, ..., y_N. A diagonalizable matrix can be written as Y = P D Q, where P and Q are invertible and

    D = diag(y_1, ..., y_N)    (1.22)

is diagonal. Then the generalized inverse is

    Y^+ = (P D Q)^+ = Q^{-1}\, diag(y^+_1, ..., y^+_N)\, P^{-1},    (1.23)

which can be checked by substitution into Eq. 1.20.

For our purposes the central property of the generalized inverse is that the product A^+ A is a projection operator. Namely, if we let R(A^+ A) denote the range of A^+ A (the vector space of vectors v for which there exists a vector w such that v = A^+ A w), then A^+ A projects onto R(A^+ A) along the null space of A, N(A) (Theorem 12 of Ref. [57]). Referring to Ref. [57] for the details, the consequence of this property is that any vector a can be decomposed uniquely,

    a = b + c,    (1.24)

into a pair of vectors b \in R(A^+ A) and c \in N(A), and A^+ A projects onto the part of a that is not in the null space of A,

    A^+ A\, a = b.    (1.25)

In other words, the direct sum of R(A^+ A) and N(A) equals the entire vector space.

We now restrict our attention to transition rate matrices with a unique stationary distribution. As noted following Eq. 1.19, the uniqueness of the stationary distribution means that the null space of such transition rate matrices is one-dimensional.
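The defining and projection properties above are easy to check numerically. A minimal sketch assuming numpy, using the Moore-Penrose pseudoinverse (one member of the family of generalized inverses) and a hypothetical rank-deficient matrix:

```python
import numpy as np

# A singular 3x3 matrix (third row = first + second, so rank 2).
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0],
              [1.0, 3.0, 4.0]])
Aplus = np.linalg.pinv(A)            # Moore-Penrose pseudoinverse

defining = np.allclose(A @ Aplus @ A, A)     # Eq. 1.20: A A+ A = A
P = Aplus @ A
projector = np.allclose(P @ P, P)            # A+ A is a projection operator

# Unique decomposition a = b + c (Eqs. 1.24-1.25): b in R(A+A), c in N(A).
a = np.array([1.0, 1.0, 1.0])
b = P @ a                                    # component outside the null space
c = a - b                                    # component inside the null space
in_null = np.allclose(A @ c, 0)
```

Note that `np.linalg.pinv` picks out one particular generalized inverse; for the transition rate matrices considered below, a different member of the family (the one with w = 1 in Eq. 1.26) will be more convenient.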
As a result, the above-mentioned projection property of the generalized inverse can be expressed as

    R^+ R = I - p^s w^T,    (1.26)

where I is the identity matrix, w is an N-dimensional vector that satisfies w^T p^s = 1, and p^s w^T is an N x N matrix that projects onto the null space of R. A quick check reveals that any R^+ that satisfies Eq. 1.26 is a generalized inverse of R, since it also satisfies Eq. 1.20. Equation 1.26 implies that the vector w is perpendicular to the (N-1)-dimensional subspace R(R^+ R). Thus, for any R^+, we can find a w such that Eq. 1.26 is true; namely, we choose w to be the vector perpendicular to R(R^+ R) that satisfies w^T p^s = 1. On the other hand, given a vector w, there exists an R^+ for which Eq. 1.26 is true, as I now show. Let

    r_n^T = (r_{n,1}, ..., r_{n,N}) = (R_{n1}, ..., R_{nN})    (1.27)

and

    (r^+_n)^T = (r^+_{n,1}, ..., r^+_{n,N}) = (R^+_{n1}, ..., R^+_{nN})    (1.28)

denote the n'th rows of R and R^+, respectively, and denote the n'th row of the matrix I - p^s w^T (Eq. 1.26) as

    \tilde{w}_n^T = (0, ..., 0, 1, 0, ..., 0) - p^s_n (w_1, ..., w_N),    (1.29)

where the 1 on the right-hand side of the above equation is in the n'th position. Then Eq. 1.26 can be written as

    (r^+_n)^T R = r^+_{n,1}\, r_1^T + \cdots + r^+_{n,N}\, r_N^T = \tilde{w}_n^T,  n = 1, ..., N.    (1.30)

For each fixed n, Eq. 1.30 is a set of N equations, of which N - 1 are linearly independent, for the N components of r^+_n. Equation 1.30 makes evident that a non-trivial solution exists if \tilde{w}_n^T is in the row space of R, the vector space spanned by the rows of R. Since \tilde{w}_n^T p^s = 0, \tilde{w}_n^T is perpendicular to the null space of R and must therefore be in the row space of R, since the row space is the orthogonal complement of the null space [58]. Consequently, we conclude that there exists a non-trivial solution to Eq. 1.30 for each n, and hence a non-trivial R^+ satisfying Eq. 1.26 for any w.

Using our freedom to choose w in Eq.
1.26, we now fix w = 1 in order to single out a family of generalized inverses of R that satisfy

    R^+ R = I - p^s 1^T,    (1.31)

where p^s 1^T is an N x N matrix that projects onto the null space of R. In particular, the operation of R^+ R on a normalized probability distribution p is

    R^+ R\, p = p - p^s.    (1.32)

From here on, R^+ will be used to denote only those generalized inverses of R that satisfy Eq. 1.31.

The significance of Eqs. 1.31 and 1.32 can be appreciated by noting that a normalized probability distribution p can be written as

    p = p^s + \Delta p,    (1.33)

where the components of \Delta p must sum to zero (1^T \Delta p = 0) to preserve normalization, see Fig. 1.2. The two terms on the right side of Eq. 1.33 belong respectively to the null space of R and to the space of vectors perpendicular to 1^T, which is the range of R^+ R for the choice of R^+ in Eq. 1.31. For the decomposition in Eq. 1.33 we have

    p^s 1^T p = p^s    (1.34)
    R^+ R\, p = \Delta p,    (1.35)

in accordance with the preceding discussion.

Figure 1.2: For a three-state system, the shaded triangle represents all non-negative, normalized probability distributions (\sum_i p_i = 1). The vector 1 = (1,1,1)^T, not shown, is normal to the plane that contains this triangle. A probability distribution p can be decomposed as the sum of the stationary distribution p^s and a vector \Delta p that resides in this plane. In this picture, the null space of R is the line that contains the vector p^s, and the range of R^+ R is the plane parallel to the shaded region.

1.2 Detailed Balance

As we saw in Sec. 1.1.1, when the frozen dynamics satisfy detailed balance there are important consequences. Unfortunately, the use of the term detailed balance can be confusing, since it has been used by Hill [14], Astumian [59], Mahan [60], and Parrondo and Cisneros [61] to describe situations different from those considered by other authors such as Zia and Schmittmann [62]. For clarity, we now discuss the definition of detailed balance most appropriate for stationary Markovian stochastic processes:
stochastic processes for which the statistical properties do not change when shifted in time, such as the frozen dynamics. We then specialize the discussion to Markov chains and discrete stochastic pumps.

Detailed balance is the statement that for systems in their stationary state, any sequence of events is as likely as the time-reversed sequence of events [63]. For stationary Markov processes, detailed balance can be formulated in terms of the transition probabilities (Eq. 1.36 below), since the transition probabilities determine their dynamics [55, 63]. Let z label a microscopic configuration of a system; z could be discrete or continuous. Then detailed balance implies that the transition probability P(z', t' | z, t) for the system to be found in configuration z' at time t', given that it was in configuration z at an earlier time t, must satisfy the symmetry [63, 64]

    P(z', t' | z, t)\, P^s(z) = P(z, t' | z', t)\, P^s(z'),    (1.36)

where P^s(z) is the probability for the system to be found in configuration z in the stationary state. As we will see below, Eq. 1.36 implies that the stationary currents are everywhere zero and that the stationary distribution can be written in the familiar Boltzmann form. As such, when detailed balance is satisfied, one can associate the stationary distribution with an equilibrium distribution. Whether such a system relaxes to equilibrium or a non-equilibrium stationary state depends on the microscopic details of how the system exchanges heat with its environment. What we can say is that when detailed balance is satisfied there exists a system with the same transition rates that will relax to an equilibrium state.

Specifically for a Markov chain, detailed balance can be framed in terms of the transition rates R_{ij}, as in Eq. 1.10. The transition rates are defined as the derivatives of the transition probabilities [55, 65]

    R_{ij} = \lim_{\Delta t \downarrow 0} \frac{P(i, t + \Delta t | j, t) - P(i, t | j, t)}{\Delta t} = \lim_{\Delta t \downarrow 0} \frac{P(i, t + \Delta t | j, t) - \delta_{ij}}{\Delta t},    (1.37)

where P(i, t | j, t) = \delta_{ij}.
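Both statements, the symmetry of the finite-time transition probabilities (Eq. 1.36) and their short-time limit (Eq. 1.37), can be checked numerically. A sketch assuming numpy, for a hypothetical 3-state chain whose rates are built from symmetric barriers and state energies so that detailed balance holds and p^s is the Boltzmann distribution of Eq. 1.11:

```python
import numpy as np

# Hypothetical state energies E_i and symmetric barriers W_ij (k_B T = 1).
E = np.array([0.0, 1.0, 0.5])
W = np.array([[0.0, 2.0, 1.5],
              [2.0, 0.0, 2.5],
              [1.5, 2.5, 0.0]])
R = np.exp(-(W - E[None, :]))        # R_ij ~ exp(-(W_ij - E_j)), cf. Eq. 1.13
np.fill_diagonal(R, 0.0)
R -= np.diag(R.sum(axis=0))          # Eq. 1.2
ps = np.exp(-E) / np.exp(-E).sum()   # Boltzmann distribution, Eq. 1.11

def expm(M, terms=40):
    """Matrix exponential by truncated Taylor series (adequate for small norms)."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

# Finite-time transition probabilities P(i, t+dt | j, t) = [exp(R dt)]_ij.
P = expm(R * 0.1)
db_symmetry = np.allclose(P * ps[None, :], (P * ps[None, :]).T)   # Eq. 1.36

# Recovering the rates from the short-time limit, Eq. 1.37.
dt = 1e-6
R_est = (expm(R * dt) - np.eye(3)) / dt
```

Because R D is symmetric for D = diag(p^s), every power of R times D is symmetric as well, so the symmetry of Eq. 1.36 holds at any finite time, not just infinitesimally.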
Substituting t' = t + \Delta t in Eq. 1.36, then taking the limit \Delta t \downarrow 0 while using Eq. 1.37, reduces Eq. 1.36 to [55]

    R_{ij}\, p^s_j = R_{ji}\, p^s_i,    (1.38)

where we have reintroduced the notation specific to Markov chains [P^s(i) = p^s_i]. (For simplicity, we assume that there are no magnetic fields and that the system's inertia can be ignored; consequently, the configurations are time-reversal invariant. Detailed balance can be extended to include these cases; see Refs. [55, 63] for a detailed discussion.) Thus, detailed balance as defined at the beginning of this section (above Eq. 1.36) is equivalent to the definition of detailed balance in Eq. 1.10 presented in Sec. 1.1.1.

Equation 1.38 (Eq. 1.10) immediately implies that in the stationary state there are no currents anywhere in the system, J^s_{ij} = R_{ij} p^s_j - R_{ji} p^s_i = 0 (Eq. 1.9). Additionally, Eq. 1.38 (Eq. 1.10) implies that the barrier energies W_{ij} are symmetric, W_{ij} = W_{ji}, which follows by substituting the definitions of the potential (Eq. 1.12) and the barrier energies (Eq. 1.13) into Eq. 1.10 to find

    e^{-(W_{ij} - \phi_j)}\, e^{-\phi_j} = e^{-(W_{ji} - \phi_i)}\, e^{-\phi_i}    (1.39)
    e^{-W_{ij}} = e^{-W_{ji}}.    (1.40)

Unfortunately, Eq. 1.38 (Eq. 1.36) gives the impression that detailed balance depends on the stationary distribution, when in fact detailed balance can be deduced solely from knowledge of R without recourse to the stationary distribution, a statement that is made explicit by the Kolmogorov condition (Eq. 1.41 below) [62]. Before stating the Kolmogorov condition, we must first introduce the notion of cycles in graphs. A cycle is a directed sequence of vertices with the same initial and terminal point, C = i \to j \to k \to \cdots \to n \to i [54, 56]. For example, Fig. 1.3 depicts three cycles of the graph in Fig. 1.1.

Figure 1.3: Three cycles of the graph in Fig. 1.1; from left to right they are 1\to2\to3\to4\to1, 1\to2\to3\to1, and 1\to3\to4\to1.

Each cycle can be assigned a weight given by the product of transition rates along the edges of the cycle, W(C) = R_{in} \cdots R_{kj} R_{ji}. Similarly, for the reverse cycle C^{rev} = i \to n \to
\cdots \to k \to j \to i, we have W(C^{rev}) = R_{ni} \cdots R_{jk} R_{ij}. Then a necessary and sufficient condition for the dynamics to satisfy detailed balance (Eq. 1.10) is that [62]

    \frac{W(C)}{W(C^{rev})} = \frac{R_{in} \cdots R_{kj} R_{ji}}{R_{ni} \cdots R_{jk} R_{ij}} = 1,    (1.41)

for all cycles. Equation 1.41, known as the Kolmogorov condition, depends only on the transition rates and makes no mention of the stationary distribution. Knowledge solely of the transition rates is sufficient to determine whether a system satisfies detailed balance. Interestingly, the breaking of detailed balance is connected to the deviation of the ratios in Eq. 1.41 from one. One can show that this deviation is related to entropy production, another signature of nonequilibrium behavior [56, 66, 67, 68, 69, 70, 71, 72, 73].

Equation 1.41 can also be used to define unique (up to an additive constant) state energies E_i such that p^s_i = p^{eq}_i \propto e^{-E_i}, which is the familiar Boltzmann weight for the equilibrium distribution [62]. These state energies are constructed as follows: the state energy of one particular state, say state i, is singled out and fixed as a reference energy E_i. To calculate the state energy E_j for any other state j, we choose any path along the graph's edges that connects i to j, P_{ji} = i \to k \to l \to \cdots \to n \to j. Then E_j is determined by the following product of ratios of transition rates along the path P_{ji},

    E_j - E_i = -\ln \frac{R_{jn} \cdots R_{lk} R_{ki}}{R_{nj} \cdots R_{kl} R_{ik}}.    (1.42)

Equation 1.42 does not depend on the choice of P_{ji}, due to the path independence of the products of ratios of transition rates implied by Eq. 1.41 [62].

Chapter 2
Current Decomposition Formula

Currents in stochastic pumps are produced by two distinct mechanisms [74]. First, when detailed balance is broken the stationary distribution supports non-zero currents, J^s_{ij} \neq 0, as discussed in Sec. 1.1.1. Second, if the system is driven away from the stationary distribution by varying \vec{\lambda} with time, additional currents arise due to the resulting flow of probability among the states of the stochastic pump.
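The first mechanism, and the Kolmogorov criterion just discussed, can be illustrated numerically. A sketch assuming numpy; the 3-state ring and its rates are hypothetical, with all clockwise rates twice the counterclockwise ones so that the cycle-weight ratio of Eq. 1.41 differs from one:

```python
import numpy as np

def rate_matrix(k):
    """3-state ring generator built from the six one-way rates in dict k,
    keyed (i, j) for the jump j -> i."""
    R = np.zeros((3, 3))
    for (i, j), r in k.items():
        R[i, j] = r
    R -= np.diag(R.sum(axis=0))              # Eq. 1.2
    return R

def stationary(R):
    vals, vecs = np.linalg.eig(R)
    p = np.real(vecs[:, np.argmin(np.abs(vals))])
    return p / p.sum()

def cycle_ratio(k):
    """Kolmogorov ratio W(C)/W(C_rev) for the cycle 0 -> 1 -> 2 -> 0 (Eq. 1.41)."""
    fwd = k[(1, 0)] * k[(2, 1)] * k[(0, 2)]
    rev = k[(2, 0)] * k[(0, 1)] * k[(1, 2)]
    return fwd / rev

# Broken detailed balance: ratio != 1, so the stationary state carries current.
k = {(1, 0): 2.0, (2, 1): 2.0, (0, 2): 2.0,
     (0, 1): 1.0, (1, 2): 1.0, (2, 0): 1.0}
R = rate_matrix(k)
ps = stationary(R)
Js = R * ps[None, :] - R.T * ps[:, None]     # stationary currents, Eq. 1.9
ratio = cycle_ratio(k)
```

For this symmetric ring the stationary distribution is uniform, yet a nonzero current circulates: a stationary state that is not an equilibrium state.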
The current decomposition formula is an explicit decomposition of the current into these two contributions:

    J_{ij} = J^s_{ij} + \hat{V}_{ij}\, \dot{p},    (2.1)

where expressions for the linear operator \hat{V}_{ij} are given in Eqs. 2.8, 2.23, 2.25, and 2.36 below. This exact result gives the net current as the sum of a baseline stationary contribution J^s_{ij} and an excess or ``pumped'' contribution

    J^{ex}_{ij} = \hat{V}_{ij}\, \dot{p} = \sum_k V^k_{ij}\, \dot{p}_k    (2.2)

due to the redistribution of probability across the graph that accompanies a variation of the external parameters. The baseline stationary current J^s_{ij}(\vec{\lambda}) can be identified experimentally by measuring the currents flowing through the system after having allowed the system to relax to the stationary distribution at fixed \vec{\lambda}. When the external parameters are varied with time, additional currents beyond the stationary current will flow, and Eq. 2.2 gives an explicit expression for these currents. Substituting Eqs. 2.1 and 2.2 into Eq. 1.5 leads to an analogous decomposition of the integrated current

    \Phi_{ij}(\tau) = \Phi^s_{ij}(\tau) + \Phi^{ex}_{ij}(\tau),    (2.3)

where the excess integrated current

    \Phi^{ex}_{ij}(\tau) = \int_0^\tau dt\, \hat{V}_{ij}\, \dot{p}    (2.4)

is the net current produced in excess of the time-integrated baseline, stationary flow

    \Phi^s_{ij}(\tau) = \int_0^\tau dt\, J^s_{ij}(t).    (2.5)

The linear operator \hat{V}_{ij} is defined as any linear operator that satisfies Eq. 2.1. Its role is to describe how local changes in the probability distribution generate flows of probability or current. In particular, the component V^k_{ij} governs how much probability flows from state j to state i in response to varying the probability distribution at state k. Any formula for \hat{V}_{ij} is not unique, since probability conservation implies that Eq. 2.1 (Eq. 2.2) is unaffected by the replacement \hat{V}_{ij} \to \hat{V}_{ij} + v_{ij}(\vec{\lambda}) 1^T, where 1^T = (1, ..., 1) and v_{ij}(\vec{\lambda}) is an arbitrary function of \vec{\lambda}, for all i, j.

Equation 2.1 is not a shortcut for calculating the current J_{ij}, since the solution of the master equation (Eq.
1.1) is still required to determine \dot{p}. Although Eq. 2.1 is a formal expression, it lends itself to further theoretical analysis. The following chapters (Chaps. 3 and 4) discuss some of these implications in detail. For now, we concern ourselves with deriving formulas for \hat{V}_{ij} which satisfy Eq. 2.1. Section 2.1 contains a derivation of Eq. 2.1 due to Horowitz and Jarzynski [75], which exploits properties of the generalized inverse (Sec. 1.1.2). Although the generalized-inverse derivation is somewhat formal, the resulting expression for \hat{V}_{ij} (Eq. 2.8 below) is simple to analyze. Section 2.2 re-derives Eq. 2.1 using Cramer's rule, a standard result in linear algebra, as originally presented by Rahav, Horowitz, and Jarzynski in Ref. [31]. This derivation is more cumbersome, but does provide an analytic expression for \hat{V}_{ij} in terms of the elements of R (Eq. 2.36 below).

2.1 Derivation 1: Generalized Inverse

In order to derive Eq. 2.1, we first solve for p in terms of \dot{p} (Eq. 1.1), then combine that result with Eq. 1.4 to determine the current J_{ij}. To this end, let us take the following atypical view of the master equation: for fixed t, let us interpret Eq. 1.1 as a non-homogeneous matrix equation for p with matrix R and source term \dot{p}. Ordinarily, we solve matrix equations by finding the inverse matrix. However, the transition rate matrix is not invertible (see Sec. 1.1.2). Therefore, we instead use the generalized inverse R^+ of R. We apply R^+ to both sides of Eq. 1.1, then use the generalized-inverse property in Eq. 1.32 to obtain

    p = p^s + R^+ \dot{p}.    (2.6)

Next, we apply the current operator (Eq. 1.6) to both sides of this equation. This gives us

    J_{ij} = J^s_{ij} + \hat{J}_{ij} R^+ \dot{p}.    (2.7)

Comparing with Eq. 2.1 we see that

    \hat{V}_{ij} = \hat{J}_{ij} R^+,    (2.8)

or equivalently

    V^k_{ij} = \sum_l J^l_{ij} R^+_{lk},    (2.9)

is a general formula for \hat{V}_{ij}, but it is not a unique formula. A formally identical expression for \hat{V}_{ij} has been used by Chernyak and Sinitsyn to prove Eq. 2 of Ref.
[76], which they utilized to investigate the properties of stochastic pumps in the low-temperature limit (see Sec. 3.2 for a brief discussion). We can develop a linear equation for any \hat{V}_{ij} that satisfies Eq. 2.8 by applying \hat{J}_{ij} (Eq. 1.6) to Eq. 1.31 and then substituting in Eq. 2.8,

    \hat{V}_{ij} R = \hat{J}_{ij} - J^s_{ij} 1^T.    (2.10)

2.1.1 Detailed balance and \hat{V}_{ij}

An important property of \hat{V}_{ij}, which will play a prominent role in Chap. 4, is that when detailed balance is satisfied \hat{V}_{ij} depends only on the barrier energies {W_{ij}} and not on the values of the potentials {\phi_i}, as we now show. When detailed balance is satisfied, J^s_{ij} = 0 and Eq. 2.10 becomes

    \hat{V}_{ij} R = \hat{J}_{ij}.    (2.11)

After substituting in the decomposition R = B P^{-1} (Eq. 1.15) and the definition of \hat{J}_{ij} (Eq. 1.6), we find that Eq. 2.11 can be written, after a short manipulation, as

    \sum_k V^k_{ij} B_{kn} = B_{ij}\delta_{jn} - B_{ji}\delta_{in},  n = 1, ..., N.    (2.12)

For fixed (i, j), Eq. 2.12 is a collection of N equations for the N components of the vector \hat{V}_{ij} = (V^1_{ij}, ..., V^N_{ij}). Of these N equations only N - 1 are linearly independent (the rank of B is N - 1, see Eq. 1.18), but they all depend exclusively on the components of the barrier energy matrix B. The solution to Eq. 2.12 is not unique, yet we are free to choose a solution that depends only on the barrier energy matrix, \hat{V}_{ij} = \hat{V}_{ij}(B). In particular, we can use the non-uniqueness of \hat{V}_{ij} to set the N'th component to zero, \hat{V}_{ij} \to \hat{V}_{ij} - V^N_{ij} 1^T (see the paragraph following Eq. 2.4); the other N - 1 components of \hat{V}_{ij} are then uniquely determined by the elements of B using Eq. 2.12. Thus, we can construct a \hat{V}_{ij} that is only a function of the barrier energies and does not depend on the potentials by appropriately choosing a solution to Eq. 2.12.

2.1.2 Explicit Expression for \hat{V}_{ij}

Equation 2.8 is a non-unique formal expression for \hat{V}_{ij} in terms of the generalized inverse R^+ of the transition rate matrix. In this section, I develop explicit expressions for \hat{V}_{ij} (Eqs.
2.23 and 2.25 below) by choosing particular R^+'s from the family of generalized inverses that satisfy Eq. 1.31, reprinted here for convenience:

    R^+ R = I - p^s 1^T.    (2.13)

The method is to treat Eq. 2.13 as a linear equation for R^+ and then to find a family of solutions (Eq. 2.16 below) in terms of functions that characterize the frozen dynamics. We will then substitute that family of solutions into Eq. 2.8 to discover a collection of explicit expressions for \hat{V}_{ij}.

The family of solutions will be written in terms of two functions that characterize the frozen dynamics. The first is \rho^l_{jk}, the average time the system spends in state j before reaching state l, conditioned on initially being in state k. The state l is sometimes called the taboo state [77]. It can be shown that \rho^l_{jk} is the solution of the linear equation [77]

    \sum_{k \neq l} \rho^l_{mk} R_{kn} = -\delta_{mn},  m, n \neq l.    (2.14)

Notice that \rho^l_{jk} is (minus) the inverse of R on the subspace excluding state l. The second quantity is the mean first exit time \tau^l_k, the average time to reach state l conditioned on the system initially being in state k, which is the solution of the linear equation [77]

    \sum_{k \neq l} \tau^l_k R_{kn} = -1,  n \neq l.    (2.15)

I now claim that

    R^+_{mk} = -\rho^l_{mk} + p^s_m \tau^l_k    (2.16)

is a family of solutions to Eq. 2.13, parameterized by the taboo state l = 1, ..., N. For each fixed l, Eq. 2.16 is an explicit formula for R^+. To verify that Eq. 2.16 solves Eq. 2.13, we substitute Eq. 2.16 into Eq. 2.13 to find

    \sum_k R^+_{mk} R_{kn} = \sum_{k \neq l} (-\rho^l_{mk} + p^s_m \tau^l_k) R_{kn}    (2.17)
                           = \delta_{mn} - p^s_m,    (2.18)

where it is assumed that n \neq l, and we have used the defining equations of \rho^l_{jk} (Eq. 2.14) and \tau^l_k (Eq. 2.15). The case where n = l is slightly more complicated, and requires the use of the identity

    R_{kl} = -\frac{1}{p^s_l} \sum_{r \neq l} R_{kr} p^s_r,    (2.19)

which is a slight rearrangement of R p^s = 0 (Eq. 1.8). Setting n = l in Eq. 2.17, and then substituting Eq. 2.19 followed by Eqs.
2.14 and 2.15, we find

    \sum_k R^+_{mk} R_{kl} = -\frac{1}{p^s_l} \sum_{r \neq l} (\delta_{mr} - p^s_m)\, p^s_r.    (2.20)

Summing on r then gives

    \sum_k R^+_{mk} R_{kl} = -\frac{1}{p^s_l} \left[ p^s_m (1 - \delta_{ml}) - p^s_m (1 - p^s_l) \right]    (2.21)
                           = \delta_{ml} - p^s_m.    (2.22)

Equations 2.18 and 2.22 confirm that the expression for R^+ in Eq. 2.16 is a solution of Eq. 2.13.

We now replace R^+ in Eq. 2.8 with Eq. 2.16 to find the family of expressions

    V^k_{ij} = -R_{ij}\rho^l_{jk} + R_{ji}\rho^l_{ik} + J^s_{ij}\tau^l_k,    (2.23)

parameterized by the taboo state l = 1, ..., N. For each fixed l, Eq. 2.23 is an explicit formula for \hat{V}_{ij}. Interestingly, the quantity R_{ij}\rho^l_{jk} (R_{ji}\rho^l_{ik}) appearing in Eq. 2.23 is the expected number of transitions from j to i (i to j) prior to reaching state l, given the initial state k.

Note that \hat{V}_{ij} can be given an alternative form by introducing g^l_{jk}, the average number of visits to state j having never reached state l, conditioned on the system initially being in state k. Multiplying g^l_{jk} by the average time spent in state j per visit, 1/|R_{jj}|, gives the average total time spent in state j under the taboo state l, conditioned on starting in state k:

    \rho^l_{jk} = \frac{g^l_{jk}}{|R_{jj}|}.    (2.24)

Substituting the above expression for \rho^l_{jk} into Eq. 2.23 gives

    V^k_{ij} = -q_{ij} g^l_{jk} + q_{ji} g^l_{ik} + J^s_{ij}\tau^l_k,    (2.25)

where the branching fraction

    q_{ij} = \frac{R_{ij}}{|R_{jj}|}    (2.26)

is the conditional probability for the system to transition to state i, given that it has just left state j.

2.2 Derivation 2: Cramer's Rule

This alternative derivation of Eq. 2.1 proceeds in a similar manner. We again solve for p in terms of \dot{p} (Eq. 1.1), then combine that result with Eq. 1.4 to determine the current. However, instead of using the generalized inverse to solve for p, we apply more standard linear-algebraic techniques.

Equation 1.1 is a set of linear equations, which we label \tilde{e}_1, ..., \tilde{e}_N. Since det R = 0, these are linearly dependent (one of them is redundant), and R cannot be inverted to solve for p in terms of \dot{p}. Specifically, for a given \dot{p}, if p satisfies Eq. 1.1 then so does p(a) = p + a p^s, for any value of a.
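This degeneracy, and its removal by the normalization condition (Eqs. 2.27-2.29), can be sketched numerically. The example below assumes numpy; the 3-state rate matrix, with broken detailed balance, is hypothetical:

```python
import numpy as np

# Hypothetical 3-state R (columns sum to zero, detailed balance broken).
R = np.array([[-3.0,  1.0,  2.0],
              [ 2.0, -2.0,  1.0],
              [ 1.0,  1.0, -3.0]])

vals, vecs = np.linalg.eig(R)
ps = np.real(vecs[:, np.argmin(np.abs(vals))])
ps /= ps.sum()                             # stationary distribution, Eq. 1.8

# Any right-hand side p-dot must have components summing to zero (Eq. 1.3).
pdot = np.array([0.05, -0.03, -0.02])

# Degeneracy: adding a multiple of ps to p leaves R p unchanged.
# Removal: replace the last row of R by (1, ..., 1) and the last entry of
# p-dot by 1, then solve the now non-singular system (Eqs. 2.27-2.29).
Rp = R.copy();     Rp[-1, :] = 1.0
pdotp = pdot.copy(); pdotp[-1] = 1.0
p = np.linalg.solve(Rp, pdotp)             # unique normalized solution

degenerate = np.allclose(R @ (p + 0.7 * ps), R @ p)
```

Because the columns of R sum to zero and the components of p-dot sum to zero, the discarded N'th master equation is satisfied automatically by the solution of the reduced system.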
We remove this degeneracy by imposing the normalization condition 1^T p = 1, which we label \tilde{e}_0: replacing the N'th equation \tilde{e}_N in Eq. 1.1 by \tilde{e}_0 gives a set of linearly independent equations

    \dot{p}' = R' p,    (2.27)

where \dot{p}' \equiv (\dot{p}_1, ..., \dot{p}_{N-1}, 1)^T, and R' is obtained by substituting the vector 1^T for the N'th row of R,

    R' = \begin{pmatrix} R_{11} & \cdots & R_{1N} \\ \vdots & & \vdots \\ R_{N-1,1} & \cdots & R_{N-1,N} \\ 1 & \cdots & 1 \end{pmatrix}.    (2.28)

[The choice to substitute the vector 1^T into the N'th row of R is not unique, merely convenient: recall that \hat{V}_{ij} is not unique (see below Eq. 2.4).] Since det R' \neq 0, we solve for p using Cramer's rule [58]:

    p_m = \frac{\det R'_m}{\det R'},    (2.29)

where R'_m is obtained from R' by replacing the m'th column by \dot{p}'. We can rewrite Eq. 2.29 in terms of

    r' = \det R'    (2.30)

and the cofactor r'_{lm} of R'_{lm}, that is, (-1)^{l+m} times the determinant of the matrix obtained by deleting row l and column m of R' [58], by expanding \det R'_m along the m'th column:

    p_m = \frac{1}{r'} \sum_{l=1}^N r'_{lm}\, \dot{p}'_l.    (2.31)

After substituting Eq. 2.31 into Eq. 1.4 for the current, with m = i, j, and defining

    s^k_{ij}(\vec{\lambda}) = R_{ij} \frac{r'_{kj}}{r'},    (2.32)

we get

    J_{ij} = \sum_{k=1}^N (s^k_{ij} - s^k_{ji})\, \dot{p}'_k    (2.33)
           = s^N_{ij} - s^N_{ji} + \sum_{k=1}^{N-1} (s^k_{ij} - s^k_{ji})\, \dot{p}'_k,    (2.34)

where Eq. 2.34 follows from \dot{p}'_N = 1. Comparing Eq. 2.34 with Eq. 2.1 and recognizing that J_{ij} = J^s_{ij} when p = p^s (i.e. when \dot{p} = 0), we obtain

    J^s_{ij} = s^N_{ij} - s^N_{ji}    (2.35)

and

    V^k_{ij} = (1 - \delta_{kN})(s^k_{ij} - s^k_{ji}),    (2.36)

which gives \hat{V}_{ij} in terms of the elements of R.

In deriving Eq. 2.36, we have recovered an expression for J^s_{ij} (Eq. 2.35) previously derived using graph theory by Hill [14] and later reviewed by Schnakenberg [56] and by Zia and Schmittmann [62]. In fact, the standard expression for the stationary current, J^s_{ij} = R_{ij} p^s_j - R_{ji} p^s_i (Eq. 1.9), can be recovered from Eq. 2.35 after a simple manipulation using Eq. 2.32 for s^N_{ij} and a common expression for the stationary distribution in terms of the cofactors of R' [62],

    p^s_l = \frac{r'_{Nl}}{r'}.    (2.37)

2.3 Relationship between Derivations

In the preceding sections (Secs.
2.1 and 2.2), two methods were used to derive expressions for \hat{V}_{ij} (Eqs. 2.8, 2.23, and 2.36). In this section, I show that Eq. 2.36 is a special case of Eq. 2.23. The first step is to show that the cofactors of R' are related to the quantities \rho^N_{jk} (Eq. 2.14) and \tau^N_k (Eq. 2.15) as

    \frac{r'_{kj}}{r'} = -\rho^N_{jk} + p^s_j \tau^N_k,    (2.38)

where the taboo state has been fixed to state N. Equation 2.38 follows from the observation that the right- and left-hand sides satisfy the same linear equation, as I now demonstrate. From Eqs. 2.14 and 2.15, we have

    \sum_{k \neq N} (-\rho^N_{jk} + p^s_j \tau^N_k) R_{kn} = \delta_{jn} - p^s_j.    (2.39)

To show that r'_{kj}/r' satisfies the same equation, we recognize that R_{kn} = R'_{kn} for all k \neq N (Eq. 2.28), which gives

    \sum_{k \neq N} \frac{r'_{kj}}{r'} R_{kn} = \sum_{k \neq N} \frac{r'_{kj}}{r'} R'_{kn}.    (2.40)

Next we add and subtract (r'_{Nj}/r')\, R'_{Nn} on the right-hand side,

    \sum_{k \neq N} \frac{r'_{kj}}{r'} R_{kn} = \sum_{k=1}^N \frac{r'_{kj}}{r'} R'_{kn} - \frac{r'_{Nj}}{r'} R'_{Nn}    (2.41)
                                             = \delta_{jn} - p^s_j,    (2.42)

where the second line follows after substituting in Eq. 2.37, R'_{Nn} = 1 (Eq. 2.28), and [58]

    (R'^{-1})_{jk} = \frac{r'_{kj}}{r'}.    (2.43)

Equation 2.38 now follows from Eqs. 2.39 and 2.42, since these equations have a unique solution: R is invertible on the (N-1)-dimensional subspace excluding state N.

To demonstrate that Eq. 2.36 is a special case of Eq. 2.23, we substitute Eq. 2.32 into Eq. 2.36,

    V^k_{ij} = (1 - \delta_{kN}) \left( R_{ij} \frac{r'_{kj}}{r'} - R_{ji} \frac{r'_{ki}}{r'} \right).    (2.44)

We then substitute in Eq. 2.38,

    V^k_{ij} = R_{ij} (-\rho^N_{jk} + p^s_j \tau^N_k) - R_{ji} (-\rho^N_{ik} + p^s_i \tau^N_k),    (2.45)

where we have dropped the factor (1 - \delta_{kN}) by recognizing that \rho^N_{jN} = \tau^N_N = 0. A slight rearrangement of the above equation, combined with Eq. 1.9 for J^s_{ij}, leads to Eq. 2.23 with l = N.

Chapter 3
Adiabatic Pumping and Geometric Phases

This chapter discusses one consequence of the current decomposition formula (Eq. 2.1). I show that when the external parameters are varied very slowly, i.e. adiabatically, the excess integrated current is given by a geometric formula: the excess integrated current \Phi^{ex}_{ij}(\tau) is determined solely by the path the external parameters take through parameter space.
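This path dependence can be probed numerically. The sketch below, assuming numpy, drives a hypothetical 3-state system with detailed balance (so J^s = 0 and the full integrated current is the excess current) around a closed loop in a two-parameter space, one driven state energy and one driven barrier. It compares a direct, slow integration of the master equation against the geometric line integral of V-hat with respect to p^s, using the particular generalized inverse R^+ = (R - p^s 1^T)^{-1}, which satisfies Eq. 1.31; all rates and protocol parameters are illustrative choices.

```python
import numpy as np

def make_R(theta):
    """Rate matrix and Boltzmann p^s at protocol angle theta (k_B T = 1)."""
    E = np.array([0.0, 1.0 + 0.8 * np.cos(theta), 0.5])
    W = np.array([[0.0, 2.0, 2.5],
                  [2.0, 0.0, 3.0],
                  [2.5, 3.0, 0.0]])
    W[0, 1] = W[1, 0] = 2.0 + 0.8 * np.sin(theta)   # one driven barrier
    R = np.exp(-(W - E[None, :]))                   # cf. Eq. 1.13
    np.fill_diagonal(R, 0.0)
    R -= np.diag(R.sum(axis=0))
    return R, np.exp(-E) / np.exp(-E).sum()

def V10(theta):
    """Row of V-hat for the edge 0 -> 1, via V = J-hat R+ (Eq. 2.8)."""
    R, ps = make_R(theta)
    Rplus = np.linalg.inv(R - np.outer(ps, np.ones(3)))
    Jop = np.array([R[1, 0], -R[0, 1], 0.0])        # current operator row
    return Jop @ Rplus

# (a) geometric evaluation: a discretized line integral of V-hat . d(p^s)
M, geo = 4000, 0.0
for m in range(M):
    th0, th1 = 2 * np.pi * m / M, 2 * np.pi * (m + 1) / M
    geo += V10(0.5 * (th0 + th1)) @ (make_R(th1)[1] - make_R(th0)[1])

# (b) direct integration of the master equation over one slow period T
T, steps = 400.0, 80000
dt = T / steps
p = make_R(0.0)[1]
direct = 0.0
for n in range(steps):
    R, _ = make_R(2 * np.pi * (n + 0.5) / steps)
    direct += (R[1, 0] * p[0] - R[0, 1] * p[1]) * dt   # J_10(t), Eq. 1.4
    p = p + dt * (R @ p)
```

For slow enough driving the two numbers agree up to corrections of order (relaxation time)/T; note that if the barrier were held fixed, V-hat would be constant around the loop and the line integral would vanish, a preview of the no-pumping theorem of Chap. 4.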
For discrete stochastic pumps, geometric formulas for the integrated current in the adiabatic limit have been noted by a number of authors. Astumian derived a geometric formula for the adiabatic integrated current in specific models of molecular machines and ion pumps [46, 48]. Sinitsyn and Nemenman have shown that all cumulants of the integrated current have a geometric contribution in the adiabatic limit by studying models of the Michaelis-Menten enzymatic reaction, reversible ratchets, and the SIS epidemiological model, using the stochastic path-integral representation of the moment generating function for fluxes in mesoscopic systems [30, 45]. Subsequently, Ohkubo showed that the predictions of Sinitsyn and Nemenman apply equally well to any Markovian system described by a master equation, by analyzing the Michaelis-Menten enzymatic reaction [78]. Ohkubo has also studied theoretically the non-adiabatic geometric phase in a periodically driven Michaelis-Menten reaction [50]. Sinitsyn's review on geometric phases in dissipative and stochastic systems includes these results and others, such as geometric phases in systems with limit cycles; non-adiabatic and non-cyclic driving; the relationship between linear response and geometric phases; as well as a brief discussion of the role of geometric phases in the analysis of molecular motors [32].

In Sec. 3.1, I present a geometric formula for the excess integrated current in the adiabatic limit (Eq. 3.3 below), originally derived by Rahav, Horowitz, and Jarzynski [31]. This geometric formula complements previous work by providing an analytic expression applicable to any discrete stochastic pump and by extending previous results to systems with broken detailed balance. Then, to complement our discussion of geometric effects in stochastic pumps, I review work by Chernyak and Sinitsyn in Sec.
3.2, which demonstrates that when the external parameters are driven adiabatically and the barrier energies are much larger than the thermal energy k_B T, the integrated current is topological and may become quantized [52]. The mathematical foundations of the adiabatic geometric formula are treated in Sec. 3.3.

3.1 Geometric Adiabatic Pumping

In the adiabatic limit the external parameters \vec{\lambda} are varied much more slowly than any characteristic relaxation time. In this limit, the system remains near the stationary distribution along the entire process, p(t) \approx p^s(\vec{\lambda}_t). This suggests that in the adiabatic limit we may substitute

    \dot{p}(t) \approx \dot{\vec{\lambda}}_t \cdot \vec{\nabla} p^s(\vec{\lambda}_t),    (3.1)

where \vec{\nabla} = (\partial/\partial\lambda_1, ..., \partial/\partial\lambda_L), into Eq. 2.4 to find

    \Phi^{ex}_{ij}(\tau) = \int_0^\tau dt\, \hat{V}_{ij}(\vec{\lambda}_t) \left[ \dot{\vec{\lambda}}_t \cdot \vec{\nabla} p^s(\vec{\lambda}_t) \right].    (3.2)

Since each term in the above equation is only a function of \vec{\lambda}, we may write it as a line integral through parameter space along the path \vec{\lambda}_t from \vec{\lambda}_0 = \vec{a} to \vec{\lambda}_\tau = \vec{b}:

    \Phi^{ex}_{ij} = \int \vec{S}_{ij}(\vec{\lambda}) \cdot d\vec{\lambda},    (3.3)

where

    \vec{S}_{ij}(\vec{\lambda}) = \hat{V}_{ij}\, \vec{\nabla} p^s(\vec{\lambda}) = \sum_k V^k_{ij}\, \vec{\nabla} p^s_k.    (3.4)

Equation 3.3 is geometric: time no longer explicitly appears, and the excess integrated current is expressed as a path integral from \vec{a} to \vec{b} in parameter space.

When the external parameters are varied adiabatically through a cycle (\vec{\lambda}_0 = \vec{\lambda}_\tau), they trace out a closed curve in parameter space. This curve bounds a two-dimensional surface D in parameter space. In this case, we can use Stokes' theorem [79] to write the line integral in Eq. 3.3 as a surface integral. When there are only three external parameters (L = 3), we have

    \Phi^{ex}_{ij} = \oint_{\partial D} \vec{S}_{ij}(\vec{\lambda}) \cdot d\vec{\lambda} = \int_D \vec{H}_{ij} \cdot d\vec{S},    (3.5)

where \partial D is the boundary of D, d\vec{S} is the differential surface element, and

    \vec{H}_{ij} = \vec{\nabla} \times \vec{S}_{ij} = \vec{\nabla} \hat{V}_{ij} \times \vec{\nabla} p^s    (3.6)

is the curl of \vec{S}_{ij}. Figure 3.1 is a visual representation of Eq. 3.5.

When L \neq 3 we must use the language of differential forms to apply Stokes' theorem. A differential r-form, or simply an r-form, is a totally anti-symmetric tensor that maps r vectors to a real number.
For cyclic pumping, the line integral in $L$-dimensional parameter space in Eq. 3.3 can be written as an integral over a one-form as

$\mathcal{F}^{ex}_{ij} = \int_{\partial D} \mathcal{S}_{ij}$, (3.7)

where

$\mathcal{S}_{ij} = \bar{V}_{ij} \cdot dp^s$ (3.8)

is the one-form associated to $\vec{S}_{ij}$, and $d$ is the exterior derivative [79].

Figure 3.1: Illustration of the geometric formula for the adiabatic integrated current for cyclic pumping (Eq. 3.5) in a three-dimensional parameter space, $(\lambda_1, \lambda_2, \lambda_3)$. The integrated current is given as the flux of the vector field $\vec{H}_{ij}$ through the surface $D$, pictured as a shaded ellipse. The boundary of $D$, $\partial D$, is the path the parameter protocol $\vec{\lambda}_t$ traces out in parameter space.

For a differentiable function of $\vec{\lambda}$, $f(\vec{\lambda})$, the differential form $df$ is the differential of $f$ and maps a vector in parameter space $\vec{g} = (g_1, \ldots, g_L)$ to the real number

$df(\vec{g}) = \sum_l g_l\, (\partial f / \partial \lambda_l)$, (3.9)

the derivative of $f$ along $\vec{g}$. In particular, Eq. 3.8 becomes $\mathcal{S}_{ij}(\vec{g}) = \sum_l g_l\, \bar{V}_{ij} \cdot (\partial p^s / \partial \lambda_l)$. We now apply Stokes' theorem to Eq. 3.7 [79]:

$\mathcal{F}^{ex}_{ij} = \int_D \mathcal{H}_{ij}$, (3.10)

where

$\mathcal{H}_{ij} = d\mathcal{S}_{ij} = d\bar{V}_{ij} \wedge dp^s$ (3.11)

is the two-form analogous to $\vec{H}_{ij} = \vec{\nabla} \times \vec{S}_{ij}$ in three-dimensional space, and $\wedge$ is the wedge product: for two differential forms $\omega$ and $\nu$, the wedge product is the totally anti-symmetric tensor product [79]

$\omega \wedge \nu = \omega \otimes \nu - \nu \otimes \omega$, (3.12)

where $\otimes$ is the tensor product.

3.1.1 Geometric Adiabatic Pumping with Detailed Balance

When detailed balance is satisfied, the geometric formula for adiabatic pumping takes a simple form. Detailed balance implies that the stationary currents are all zero, so that in the adiabatic limit the entire integrated current $\mathcal{F}_{ij}$ is given by the geometric formula in Eq. 3.3. We also observe that the stationary distribution is the canonical equilibrium distribution with state energies $E_i$ (Eq. 1.11), and can be determined a priori without solving $Rp^s = 0$. Recall also that $\bar{V}_{ij}$ is only a function of the barrier energies (Sec. 2.1).
In this situation, a convenient choice of external parameters for theoretical analysis is the set of barrier energies $\{W_{ij}\}$ and state energies $\{E_i\}$. With this choice of external parameters, we can use Eqs. 3.10 and 3.11 to write the integrated current in the adiabatic limit with detailed balance for cyclic pumping as

$\mathcal{F}_{ij} = \int_D \sum_{k \leftrightarrow l,\, n} \left( \frac{\partial \bar{V}_{ij}}{\partial W_{kl}} \cdot \frac{\partial p^{eq}}{\partial E_n} \right) dW_{kl} \wedge dE_n$, (3.13)

where the sum runs over all edges $k \leftrightarrow l$ and all vertices $n = 1, \ldots, N$ of $G$, and $D$ is the region of parameter space enclosed by the parameter protocol.

3.2 Quantization and Topological Adiabatic Pumping

When a stochastic pump with detailed balance is driven adiabatically and cyclically at low temperature, the integrated current may become quantized: the integrated current takes only integer values. Chernyak and Sinitsyn have studied theoretically this quantization of the integrated current in discrete stochastic pumps with detailed balance [52]. Their predictions agree with the experimental observations made by Leigh et al. [80]. In this section, we briefly review the work of Chernyak and Sinitsyn as it complements the discussion of geometric adiabatic pumping.

Quantization of the integrated current occurs when the protocol varying the external parameters is such that degeneracies of the barrier energies never occur simultaneously with degeneracies of the state energies. If simultaneous degeneracies do occur, then fractional quantization is possible: the integrated current takes only fractional values.

Recall that for systems that satisfy detailed balance, the stationary currents vanish identically, and the barrier energies are symmetric, implying that there is a one-to-one correspondence between barrier energies and edges; namely, for each edge $i \leftrightarrow j$ there is only one barrier energy, $W_{ij} = W_{ji}$.
The physical basis for quantization can be understood by realizing that in the low-temperature limit the barrier energies are much larger than the thermal energy $k_BT$, in which case noise plays a negligible role. For an adiabatic protocol, we can divide the protocol into segments where only the barrier energies vary or only the state energies vary, by choosing these segments to be of sufficiently short duration. In the segments where only the barrier energies vary, no current whatsoever is produced (see Sec. 4.1 below). In the segments where only the state energies vary, quantized current is generated, by the following argument. In the low-temperature limit, our classical system must be found with probability nearly one in the lowest energy state, say state $m$. Now, let us adiabatically raise the energy of state $m$ and lower that of state $n$, so that $n$ becomes the new lowest energy state. Over the course of this process, the system will evolve from state $m$ to state $n$. Assuming there are no degenerate barrier energies during this process, the system transitions from $m$ to $n$ along the unique path (sequence of edges) $c$ that connects $m$ to $n$ and whose highest barrier energy is the lowest among all connecting paths (see Ref. [52] for details). As the system transitions from $m$ to $n$, it passes over every edge of $c$ once, increasing the integrated current by one on each edge of $c$. The integrated current on every other edge remains unchanged. Thus, in the low-temperature limit, the integrated current can only change by integer amounts.

Chernyak and Sinitsyn also observed that in the low-temperature limit the vector fields $\vec{H}_{ij}$ (Eq. 3.6) collapse onto singularities in parameter space, called "flux tubes," where there are simultaneous barrier energy degeneracies and state energy degeneracies. The flux generated by $\vec{H}_{ij}$ in the surface integral of Eq. 3.5 is concentrated on these singularities.
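The unique low-temperature pathway invoked above, the path from $m$ to $n$ whose largest barrier is as small as possible, is a minimax (bottleneck) path. As an illustrative aside (not from the dissertation), it can be found with a Dijkstra-like search in which a path is scored by its highest barrier rather than its total length; the four-state graph and barrier values below are hypothetical:

```python
import heapq

def minimax_path(barriers, start, end):
    """Bottleneck (minimax) path: among all paths start -> end, pick the one
    whose largest barrier energy is smallest, via a Dijkstra-like search."""
    adj = {}
    for (u, v), w in barriers.items():
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    best = {start: float('-inf')}          # best known bottleneck to each vertex
    prev = {}
    heap = [(float('-inf'), start)]
    while heap:
        b, u = heapq.heappop(heap)
        if u == end:
            break
        if b > best.get(u, float('inf')):  # stale heap entry
            continue
        for v, w in adj[u]:
            nb = max(b, w)                 # bottleneck of the extended path
            if nb < best.get(v, float('inf')):
                best[v], prev[v] = nb, u
                heapq.heappush(heap, (nb, v))
    path = [end]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], best[end]
```

For example, with barriers $W_{12} = 1.0$, $W_{23} = 2.0$, $W_{34} = 1.5$, $W_{14} = 3.0$, the search from state 1 to state 4 returns the path 1, 2, 3, 4 with bottleneck 2.0, avoiding the direct but higher 1-4 barrier; this is the edge sequence over which each integrated current would increase by one in the low-temperature argument.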
As a result, integrated current can only be produced when the protocol encircles at least one singularity. If no singularity punctures the surface enclosed by the curve traced out by the protocol, then no integrated current is produced. In addition, the path around the singularity is irrelevant: all curves that can be continuously deformed into each other without crossing a singularity produce the same integrated current. For these reasons, the adiabatic integrated current in the low-temperature limit is a topological effect.

3.3 Geometric Structure of the Adiabatic Integrated Current

The geometric formula for adiabatic pumping (Eq. 3.3) has a rich mathematical structure: Eq. 3.3 can be understood as the holonomy of a connection over a trivial principal fiber bundle, as briefly mentioned by Chernyak and Sinitsyn in the Appendix of Ref. [76]. In this section, I elucidate this geometric structure in greater detail, because it is the natural mathematical language for geometric effects. This section is not needed to understand the following chapters and may be skipped.

For notational simplicity, we focus on the integrated current along only one edge, from $l$ to $k$. All of the following conclusions can be generalized by identifying the typical fiber (see below) as the vector space of all excess integrated currents. Additionally, this section relies heavily on the mathematics of principal fiber bundles and their connections, which can be found in the texts of Nakahara [79] or Bohm et al. [81].

A fiber bundle is a space that locally looks like a Cartesian product of two spaces, but globally could be twisted. To construct a fiber bundle, we need three spaces: the base space, the typical fiber, and the total space. Here, we identify the base space with the parameter space $\vec{\lambda} \in L$, the fiber with the space of excess integrated currents $\mathcal{F}^{ex}_{kl} \in Y$, and the total space with their Cartesian product $M = L \times Y$.
Notice that $Y = \mathbb{R}$ is an abelian group, making this fiber bundle a principal bundle, and that $M$ is a simple product, making this fiber bundle a trivial bundle. Finally, a fiber bundle is equipped with a projection mapping $\pi : M \to L$ from the total space $M$ to the base space $L$.

Physically, currents are produced by the variation of the external parameters. To translate this notion into the language of fiber bundles, we must introduce a connection. Roughly speaking, a connection associates small changes in the base space to small changes along the fiber. This allows us to "lift" a curve from parameter space to a curve in the total space. The distance the lifted curve moves up the fiber is the holonomy of the connection, and tells us how much excess integrated current is produced; see Fig. 3.2. A connection is specified by a connection one-form

$A = d\mathcal{F}^{ex}_{kl} - \vec{S}_{kl}(\vec{\lambda}) \cdot d\vec{\lambda}$. (3.14)

(To be precise, $A$ is a Lie-algebra-valued one-form. The Lie algebra in this case is $\mathbb{R}$, since the structure group of this fiber bundle is $Y = \mathbb{R}$.) The basis one-forms, $d\mathcal{F}^{ex}_{kl}$ and $d\lambda_m$, are defined such that $d\mathcal{F}^{ex}_{kl}$ maps the coordinate vector $\partial_{\mathcal{F}^{ex}_{kl}}$ along $\mathcal{F}^{ex}_{kl}$ to one, and $d\lambda_m$ maps the coordinate vector $\partial_{\lambda_m}$ along $\lambda_m$ to one.

The connection one-form allows us to define a unique (horizontal) lift $\gamma(s) \in M$ of the curve $\vec{\lambda}(s) \in L$. $\gamma(s)$ is specified by two conditions. The first,

$\pi[\gamma(s)] = \vec{\lambda}(s)$, (3.15)

guarantees that the lifted curve $\gamma(s)$ projects down onto the correct curve $\vec{\lambda}(s)$ in parameter space. The second condition singles out a unique lifted curve using the connection one-form $A$:

$A(v_\gamma) = 0$, (3.16)

where $v_\gamma$ is the tangent vector along $\gamma(s)$.

Figure 3.2: Illustration of the connection holonomy for a two-dimensional parameter space $(\lambda_1, \lambda_2) \in L$. The dotted line is a closed curve through parameter space $\vec{\lambda}(s)$, and the solid line is its horizontal lift $\gamma(s)$. The holonomy is represented by the vertical arrow, which indicates both the distance $\gamma(s)$ moves up the fiber and the amount of excess integrated current $\mathcal{F}^{ex}_{kl}$ produced.
Vectors that satisfy Eq. 3.16 are called horizontal vectors; hence, $\gamma(s)$ is called a horizontal lift.

To recover Eq. 3.3 as a holonomy of the connection in Eq. 3.14, we write the lifted curve in local coordinates, $\gamma(s) = (\mathcal{F}^{ex}_{kl}(s), \vec{\lambda}(s))$, where $\mathcal{F}^{ex}_{kl}(s)$ is the coordinate along the fiber and $\vec{\lambda}(s)$ is the prescribed cyclic adiabatic protocol in parameter space. Notice that Eq. 3.15 is trivially satisfied. $\mathcal{F}^{ex}_{kl}(s)$ is now determined by inserting the local expression for the tangent vector to $\gamma(s)$, $v_\gamma = \dot{\mathcal{F}}^{ex}_{kl}(s)\, \partial_{\mathcal{F}^{ex}_{kl}} + \dot{\vec{\lambda}}(s) \cdot \partial_{\vec{\lambda}}$, into Eq. 3.16:

$A(v_\gamma) = \left[ d\mathcal{F}^{ex}_{kl} - \vec{S}_{kl}(\vec{\lambda}) \cdot d\vec{\lambda} \right] \left( \dot{\mathcal{F}}^{ex}_{kl}(s)\, \partial_{\mathcal{F}^{ex}_{kl}} + \dot{\vec{\lambda}}(s) \cdot \partial_{\vec{\lambda}} \right) = 0$. (3.17)

This is a differential equation for $\mathcal{F}^{ex}_{kl}(s)$,

$\frac{d}{ds} \mathcal{F}^{ex}_{kl}(s) = \vec{S}_{kl}[\vec{\lambda}(s)] \cdot \frac{d}{ds} \vec{\lambda}(s)$, (3.18)

whose solution is the geometric formula for the adiabatic excess integrated current in Eq. 3.3.

Chapter 4 No-Pumping Theorem

Another consequence of the current decomposition formula (Eq. 2.1) is a no-pumping theorem developed collaboratively by Rahav, Horowitz, and Jarzynski [31]. The no-pumping theorem provides a set of conditions under which no integrated current is produced during a non-adiabatic cyclic process. The no-pumping theorem is derived in Sec. 4.1 and illustrated in Sec. 4.2 with simple models inspired by experiments performed by Leigh et al. [80]. To provide more insight, I include two alternative derivations. One derivation, based on properties of the time-integrated master equation and due to Maes, Netočný, and Thomas, is presented in Sec. 4.3 [53]; in Sec. 4.4 the no-pumping theorem is seen as a consequence of the pump-restriction theorem due to Chernyak and Sinitsyn [51].

4.1 No-Pumping Theorem for Discrete Stochastic Pumps with Detailed Balance

I now use the current decomposition formula in Eq. 2.1 (Eq. 2.7) to prove a no-pumping theorem for discrete stochastic pumps with detailed balance. That is, I will present a set of conditions under which no integrated current is generated during a cyclic process.
A cyclic process is one for which the probability distribution at the beginning of the process is equal to that at the end of the process, $p(0) = p(\tau)$. This can be accomplished in a variety of ways. For example, one may repeatedly cycle the external parameters with period $\tau$ from the distant past, so that by time $t = 0$ the system has settled into a periodic steady state, $p(t) = p(t + \tau)$ [82]. Alternatively, we can begin with the system in the stationary distribution at time $t = 0$. The external parameters are then varied arbitrarily fast in a cycle through parameter space over a time period $T$. At time $t = T$ the external parameters are frozen and the system is allowed to relax back to the original stationary distribution. From time $t = 0$ to $t = \infty$, the probability distribution will have made a full cycle, having returned to the original stationary distribution at the end of the process.

When the frozen dynamics satisfy detailed balance, $\mathcal{F}^s_{ij} = 0$, and we find from Eqs. 2.3 and 2.4 that the integrated current is

$\mathcal{F}_{ij}(\tau) = \int_0^\tau dt\, \bar{V}_{ij} \cdot \dot{p}$. (4.1)

I now argue that in order to generate integrated current over the course of a cyclic process, both the potentials $\{\phi_i(\vec{\lambda})\}$ and the barrier energies $\{W_{ij}(\vec{\lambda})\}$ must be varied with time. The first of these conditions is easily understood: if the potential is held fixed, then the system remains in the initial equilibrium distribution $p^{eq}(\vec{\lambda}_0 = \vec{a})$ for all times, and no currents whatsoever are produced, $J_{ij} = 0$. The second condition is a consequence of the fact that $\bar{V}_{ij}$ is only a function of the barrier energies when detailed balance is satisfied (Sec. 2.1.1). Imagine a cyclic process in which we vary the potentials but fix the barrier energies. Then from Eq. 4.1 we have

$\mathcal{F}_{ij}(\tau) = \bar{V}_{ij}(B) \cdot \int_0^\tau dt\, \dot{p} = 0$, (4.2)

since it is a cyclic process [$p(0) = p(\tau)$]. The no-pumping theorem can alternatively be framed using the state energies $\{E_i\}$ in place of the potentials $\{\phi_i\}$, due to their correspondence in Eq. 1.14.
Using arguments similar to those of the previous paragraph, we conclude that in order to generate nonzero integrated current over the course of a cyclic process, both the state energies and the barrier energies must be varied with time.

4.2 Illustration

I now illustrate the no-pumping theorem by studying models motivated by an experiment on 2- and 3-catenanes by Leigh et al. [80]. These models highlight the necessity of varying both the barrier energies and potentials during a cyclic process. I also use these models to interpret the experimental observation that directed motion is possible in 3-catenanes, but not in 2-catenanes.

A catenane is a molecular complex formed by threading a number of small ring molecules onto a larger ring molecule; see the illustration in Fig. 4.1. A 2-catenane is composed of one small ring and one large ring, while a 3-catenane is composed of two small rings and one large ring. (An $n$-catenane has $n$ rings.)

Figure 4.1: Illustration of a 3-catenane composed of one large ring and two small mobile rings. Each of the two small rings makes thermally activated transitions between the binding sites labeled 1, 2, and 3. The three mesoscopic configurations of the catenane, labeled A, B, and C, are specified by the locations of both small rings. (Reprinted with permission from Astumian [46].)

The small rings are constrained through hydrogen bonding to one of the three binding sites (labeled 1, 2, and 3 in Fig. 4.1), but are free to make thermally activated transitions between them. The binding affinity is determined by steric interactions and the number of hydrogen bonds between the small rings and the large ring. Laser irradiation and chemical stimuli are used to change the relative affinities of the binding sites through photoisomerization, which alters the number of hydrogen bonds. By decreasing the binding affinity of each binding site in sequence, $1 \to 2 \to 3 \to 1$, one attempts to induce directed rotation of the small rings.
We first investigate a model inspired by a 2-catenane. The graph for the model is depicted in Fig. 4.2, and the energy landscape is depicted schematically in Fig. 4.3. Each vertex of the graph in Fig. 4.2 corresponds to a location of the small ring on the larger ring. In this case, integrated current corresponds to directed rotation of the small ring about the larger ring. Transitions between states are thermally activated with transition rates $R_{ij} = \kappa \exp[-\beta(W_{ij} - E_j)]$ (cf. Eqs. 1.13 and 1.14), and we will take $\kappa, \beta = 1$ to set the units of time and energy. $R$ satisfies detailed balance, but by varying the well depths and barrier energies we can induce non-zero currents.

Figure 4.2: Graph representing the states and possible transitions for a discrete stochastic pump model of a 2-catenane.

Figure 4.3: A model stochastic pump satisfying detailed balance. The particle makes thermally activated transitions among three states with energies $E_j(\vec{\lambda})$, over barriers with energies $W_{ij}(\vec{\lambda})$. These are varied with time to induce currents.

Evaluating Eq. 2.36, and defining

$y_1 = \exp(-W_{12} - W_{31})$, (4.3)
$y_2 = \exp(-W_{12} - W_{23})$, (4.4)
$y_3 = \exp(-W_{31} - W_{23})$, (4.5)

and

$K = \sum_j y_j$, (4.6)

we obtain for $(i, j) = (2, 1)$

$\bar{V}_{21} = K^{-1}(-y_1 - y_2,\ 0,\ -y_1) \to K^{-1}(-y_2,\ y_1,\ 0)$, (4.7)

where in the last step we have used the freedom $\bar{V}_{21} \to \bar{V}_{21} + (y_1/K)\mathbf{1}^T$.

Notice that when $\vec{\lambda}$ is varied adiabatically around a closed path, the pumped current is given by Eq. 3.3, with

$\vec{S}_{21} = (-y_2\, \vec{\nabla} p^{eq}_1 + y_1\, \vec{\nabla} p^{eq}_2)/K$. (4.8)

If the barrier energies are held fixed during this process, then $y_1$, $y_2$, and $K$ are constant, and

$\mathcal{F}_{21} = \frac{1}{K} \oint \vec{\nabla}\left( -y_2\, p^{eq}_1 + y_1\, p^{eq}_2 \right) \cdot d\vec{\lambda} = 0$; (4.9)

the integrand is a total differential, and the net current pumped over one cycle is zero, as predicted by Astumian [46].

Now let us analyze cyclic but non-adiabatic variation of the state energies and barrier energies. We first consider a process during which the barriers are held fixed.
Specifically, we take $(W_{12}, W_{23}, W_{31}) = (-0.3, 0.5, 0)$, and

$E_j(t) = -2 + \cos\left[ 2\pi \left( \frac{t}{T} + \frac{j-1}{3} \right) \right]$ (4.10)

for $0 < t < T = 10$. Thus the state energies $E_j(t)$ undergo one cycle of pumping, with phases staggered by $2\pi/3$ in a piston-like sequence. Outside this time interval all parameters are fixed, so the system ultimately relaxes to its initial equilibrium state. The solid line in Fig. 4.4 shows the integrated current $\mathcal{F}_{21}(t) = \int_0^t dt'\, J_{21}(t')$, obtained by numerical integration of Eqs. 1.1 and 1.4. We see that probability sloshes back and forth on the link between states 1 and 2: initially there is a gentle flow from 1 to 2 ($d\mathcal{F}_{21}/dt > 0$ for $t \lesssim 2$), then an interval of stronger current in the opposite direction, followed by another reversal shortly before $t = 7.5$. The eventual decay of $\mathcal{F}_{21}$ to zero indicates a net cancellation of these flows, as predicted by the no-pumping theorem.

Next consider a process during which both the state energies and barrier energies are varied with time: the $E_j$'s are again driven according to Eq. 4.10, but now each barrier moves in synchrony with the well to its immediate right in Fig. 4.3; e.g., as $E_1$ goes down and then up, so does $W_{31}$, so that their difference remains fixed at $W_{31} - E_1 = 2 = W_{12} - E_2 = W_{23} - E_3$:

$W_{31} = \cos\left[ 2\pi \frac{t}{T} \right]$, (4.11)
$W_{12} = \cos\left[ 2\pi \left( \frac{t}{T} + \frac{1}{3} \right) \right]$, (4.12)
$W_{23} = \cos\left[ 2\pi \left( \frac{t}{T} + \frac{2}{3} \right) \right]$. (4.13)

The integrated current $\mathcal{F}_{21}(t)$ is shown by the dashed line in Fig. 4.4; the asymptotic value $\mathcal{F}_{21} \simeq 0.1$ reveals a net transfer of probability from state 1 to state 2 over the cycle. Note that in both cases non-vanishing currents persist for some time after $t = T$, reflecting the decay to equilibrium that occurs after the parameters stop being varied.

Figure 4.4: The integrated current $\mathcal{F}_{21}$ for non-adiabatic cycles with fixed barriers (solid line) or varying barriers (dashed).

In their experiment with 2-catenanes, Leigh et al. [80] varied only the binding affinities of the small ring to the larger ring, which in our model correspond to the state energies $\{E_i\}$.
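The two driving scenarios compared in Fig. 4.4 can be reproduced by direct numerical integration of the master equation $\dot{p} = Rp$ with the rates and protocols above. The sketch below is not the original code; it uses simple Euler stepping, with step size and final time chosen for convenience, and returns the accumulated $\mathcal{F}_{21}$:

```python
import numpy as np

def rate_matrix(t, T=10.0, vary_barriers=False):
    """R[i, j] = exp(-(W_ij - E_j)): rate for the jump j -> i on the 3-state
    ring, with kappa = beta = 1.  State energies follow Eq. 4.10; barriers are
    either fixed at (W12, W23, W31) = (-0.3, 0.5, 0) or driven per Eqs.
    4.11-4.13.  After t = T the protocol is frozen (cos is 2pi-periodic, so
    the frozen values equal the t = 0 values)."""
    s = 2.0 * np.pi * min(t, T) / T
    E = -2.0 + np.cos(s + 2.0 * np.pi * np.arange(3) / 3.0)
    if vary_barriers:
        W12, W23, W31 = np.cos(s + 2*np.pi/3), np.cos(s + 4*np.pi/3), np.cos(s)
    else:
        W12, W23, W31 = -0.3, 0.5, 0.0
    W = np.array([[np.inf, W12, W31],
                  [W12, np.inf, W23],
                  [W31, W23, np.inf]])    # inf on the diagonal -> zero rate
    R = np.exp(-(W - E[None, :]))
    R -= np.diag(R.sum(axis=0))           # each column sums to zero
    return R

def integrated_current(vary_barriers, t_final=80.0, dt=5e-4):
    """Euler-integrate dp/dt = R p from equilibrium and accumulate
    F21 = int_0^t J21 dt, with J21 = R21 p1 - R12 p2."""
    E0 = -2.0 + np.cos(2.0 * np.pi * np.arange(3) / 3.0)
    p = np.exp(-E0)
    p /= p.sum()                          # initial equilibrium distribution
    F21, t = 0.0, 0.0
    for _ in range(int(round(t_final / dt))):
        R = rate_matrix(t, vary_barriers=vary_barriers)
        F21 += (R[1, 0] * p[0] - R[0, 1] * p[1]) * dt
        p += dt * (R @ p)
        t += dt
    return F21
```

With fixed barriers the accumulated current relaxes back toward zero, as the no-pumping theorem requires, while the run with co-varying barriers returns a nonzero value, consistent with Fig. 4.4 (the precise sign depends on the current-direction convention).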
Since the barrier energies $\{W_{ij}\}$ remain fixed, the no-pumping theorem predicts no directed motion, as was observed in the experiment. This observation was previously understood theoretically by both Leigh et al. [80] and Astumian [46] from different perspectives.

Leigh et al. did induce directed motion in the 3-catenane by varying only the single-ring binding affinities, $E_1$, $E_2$, and $E_3$ in the model. This observation does not violate the no-pumping theorem, as I now explain. To analyze the 3-catenane, let us introduce for each mesoscopic configuration A, B, and C (see Fig. 4.1) the multi-ring state energies $E_A$, $E_B$, and $E_C$, and the corresponding barrier energies $W_{AB}$, $W_{BC}$, and $W_{CA}$, which are symmetric due to detailed balance (Sec. 1.2). In terms of the single-ring state energies and barrier energies, the 3-catenane's multi-ring state energies and barrier energies are [76]

$E_A = E_1 + E_3$, (4.14)
$E_B = E_2 + E_3$, (4.15)
$E_C = E_1 + E_2$, (4.16)
$W_{AB} = W_{12} + E_3$, (4.17)
$W_{CA} = W_{23} + E_1$, (4.18)
$W_{BC} = W_{13} + E_2$. (4.19)

From the preceding equations, we see that by varying just the single-ring state energies, both the multi-ring state energies and multi-ring barrier energies change with time. For example, if $E_1$ is time-dependent, then so are $E_A$ and $W_{CA}$. Thus, the no-pumping theorem permits nonzero integrated current, as was observed experimentally.

4.3 Alternative Derivation of the No-Pumping Theorem

Shortly after the original publication of the no-pumping theorem in Ref. [31], Maes, Netočný, and Thomas proposed an alternative derivation [53]. Using the time-integrated master equation, they showed that varying the barrier energies is a requirement for pumping. Their derivation generalizes naturally to semi-Markov processes (or continuous-time random walks) and to diffusion processes. Besides being quite simple, this alternative derivation lends new insight into the no-pumping theorem.
With this in mind, I present a version of their proof adapted for cyclic processes. To show that during a cyclic process no integrated current is generated if the barrier energies are held fixed, let us begin by introducing an alternative decomposition of the transition rate matrix,

$R_{ij} = q_{ij} q_j$. (4.20)

Here, the escape rate from state $j$,

$q_j = e^{\phi_j} \sum_{i \neq j} e^{-W_{ij}} = |R_{jj}|$, (4.21)

is the inverse of the average time spent in state $j$ per visit, and the branching fraction

$q_{ij} = \frac{e^{-W_{ij}}}{\sum_{k \neq j} e^{-W_{kj}}} = \frac{R_{ij}}{|R_{jj}|}$ (4.22)

is the conditional probability for the system to jump to state $i$, conditioned on having left state $j$. Notice that $\sum_{i \neq j} q_{ij} = 1$, implying that the $q_{ij}$ may be interpreted as transition probabilities for a discrete-time Markov chain embedded in the continuous-time Markov chain. Moreover, the symmetry of the barrier energies $W_{ij}$ due to detailed balance (Sec. 1.2) implies that the quantity

$f^{eq}_j = |R_{jj}|\, p^{eq}_j$ (4.23)

satisfies the equation

$q_{ij} f^{eq}_j - q_{ji} f^{eq}_i = 0$ (4.24)

for all $i$ and $j$.

Next, we consider a cyclic process such that the initial and final probability distributions are the same, $p(0) = p(\tau)$. Over the course of this cyclic process the potentials $\phi_j(\vec{\lambda}_t)$ are varied and the barrier energies $W_{ij}$ are fixed; consequently, the escape rates are time-dependent through the external parameters, $q_j(\vec{\lambda}_t)$ (Eq. 4.21), while the branching fractions $q_{ij}$ are constant (Eq. 4.22). We now integrate, from time $t = 0$ to $\tau$, the master equation (Eq. 1.1) and the definition of the current (Eqs. 1.4 and 1.5), to find

$\sum_{j \neq i} \left( q_{ij} f_j - q_{ji} f_i \right) = 0$ (4.25)

and

$\mathcal{F}_{ij} = q_{ij} f_j - q_{ji} f_i$, (4.26)

where

$f_j = \int_0^\tau ds\, q_j(\vec{\lambda}_s)\, p_j(s)$. (4.27)

Equation 4.26 means that the integrated current is determined by the solutions $f_j$ to Eq. 4.25. The $f_j$ are determined by recognizing that Eq. 4.25 is the equation for the stationary distribution of a Markov chain with transition probabilities $q_{ij}$, though the $f_j$ are not normalized. Thus, from Eq. 4.24, any solution to Eq.
4.25 must be proportional to $f^{eq}_j$ (Eq. 4.23): $f_j \propto f^{eq}_j$. Consequently, $\mathcal{F}_{ij} = 0$ (Eq. 4.26) for any solution of Eq. 4.25.

This derivation lends some insight into the no-pumping theorem. No matter how the escape rates change, as long as the branching fractions are fixed, no integrated current is produced. Roughly speaking, the branching fractions determine how current spreads out among the edges of the graph. Over the course of a cyclic process, current merely sloshes back and forth over the various links; the relative amount flowing over any link is fixed, since the branching fractions are constant. The result is that the excess integrated current nets to zero over the course of a cycle.

4.4 No-Pumping Theorem as a Consequence of the Pump-Restriction Theorem

The no-pumping theorem can also be seen as a consequence of the more general pump-restriction theorem due to Chernyak and Sinitsyn [51]. The pump-restriction theorem is a statement about how the topology of the state space of a discrete stochastic pump with detailed balance affects the integrated current during a non-adiabatic cyclic process. In this section, I briefly state the main conclusions of the pump-restriction theorem and illustrate it with a simple example. Then I argue that the pump-restriction theorem implies the no-pumping theorem derived above.

To discuss the pump-restriction theorem, I must first introduce some notation. Consider a discrete stochastic pump with detailed balance whose barrier energies and state energies are being varied with time using a cyclic protocol. Take for example the four-state discrete stochastic pump whose state space is pictured in Fig. 4.5.

Figure 4.5: Graph of a four-state stochastic pump used to illustrate the pump-restriction theorem. Edges pictured with dotted lines are those edges whose barrier energies are varied over the course of a cyclic process and constitute the edges of the subgraph $G_0$. Edges pictured with solid lines are edges whose barrier energies are held fixed and constitute the edges of the subgraph $G_1$.

For this stochastic pump, imagine an external parameter protocol in which at least one potential is varied and the barrier energies along edges $1 \leftrightarrow 4$ and $4 \leftrightarrow 3$ are the only barrier energies changing with time. Let us denote by $G_0$ the collection of edges whose barrier energies are varied; edges $1 \leftrightarrow 4$ and $4 \leftrightarrow 3$ for our example in Fig. 4.5. Let $G_1$ denote the collection of edges whose barrier energies are held fixed.

In essence, the pump-restriction theorem states that those edges in $G_0$ whose barrier energies are changing "drive" the integrated currents in $G_1$. That is to say, using only probability conservation, the integrated currents along edges in $G_1$ can be calculated from the integrated currents along the edges of $G_0$ and the values of the fixed barrier energies in $G_1$. For our example, knowing the integrated current pumped along edges $1 \leftrightarrow 4$ and $4 \leftrightarrow 3$ and the values of the fixed energy barriers on the remaining edges, one can determine the integrated current along any edge.

Not all the integrated currents in $G_0$ are independent: probability conservation implies that only a subset of the integrated currents on $G_0$ determines all the integrated currents on $G_0$. This subset depends on the topology of $G$, and it can be identified in the following manner: sequentially remove edges in $G$ that are also in $G_0$ until the removal of a further edge would disconnect $G$. The integrated currents along the removed edges are the required subset. Given knowledge of this subset of integrated currents, all other integrated currents throughout $G$ may be determined. (If no edges can be removed without disconnecting $G$, then the integrated current along every edge is zero.) For example, the removal of both the $1 \leftrightarrow 4$ and $4 \leftrightarrow 3$ edges of the graph in Fig. 4.5 disconnects the graph.
Thus, knowledge of the integrated current along only one of these edges is sufficient to determine all other integrated currents. Take for example $\mathcal{F}_{14}$; the integrated current $\mathcal{F}_{43}$ can then be determined using probability conservation at vertex 4: $\mathcal{F}_{14} = \mathcal{F}_{43}$.

The no-pumping theorem follows as a simple consequence of the pump-restriction theorem. If no barrier energies are varied, there are no integrated currents in $G_0$ to drive the system. The result is that there are no integrated currents anywhere.

Chapter 5 Adiabatic Control Theory

A key requirement in the design of artificial non-autonomous molecular machines is the ability to consistently control the average molecular-level motion. Unfortunately, identifying in advance external parameter protocols that result in the desired molecular behavior is a difficult task. To address this problem, we will investigate methods for controlling discrete stochastic pumps. Specifically, we will be interested in devising cyclic protocols that produce an arbitrary set of integrated currents $\{\mathcal{F}_{ij}\}$ along the edges of the graph $G$, subject to probability conservation. Consider, for example, a four-state stochastic pump with detailed balance whose state space is shown in Fig. 1.1, reproduced here as Fig. 5.1; by varying the barrier energies $\{W_{ij}\}$ and state energies $\{E_i\}$ with a cyclic protocol, we wish to generate a particular collection of integrated currents, $\{\mathcal{F}_{12}, \mathcal{F}_{23}, \mathcal{F}_{34}, \mathcal{F}_{14}, \mathcal{F}_{13}\}$. Not every collection of integrated currents is possible; as we will see (Eq. 5.1), probability conservation demands that $\mathcal{F}_{12} = \mathcal{F}_{23}$, among other restrictions.

Figure 5.1: Graph of the state space of a four-state discrete stochastic pump used to illustrate our adiabatic control strategy.

One method for controlling a molecular machine is to operate in the low-temperature limit using adiabatic protocols; such protocols are useful since they are topological in nature and robust against perturbations [52, 76].
Here, we investigate another method, proposed by Horowitz and Jarzynski [75], for controlling discrete stochastic pumps using infinitesimal adiabatic cyclic protocols; it allows one to specify the integrated current along each edge of the graph, subject to the constraints of probability conservation.

5.1 Controlling Stochastic Pumps with Cyclic Adiabatic Protocols

In this section, I propose a novel method for controlling discrete stochastic pumps. Since practical molecular machines will typically operate in a cyclic fashion, I will focus on stochastic pumps driven by cyclic processes. I will also assume for simplicity that detailed balance is satisfied. Before describing the control method in Sec. 5.1.2, I first discuss in Sec. 5.1.1 how probability conservation in cyclic processes restricts the possible collections of integrated currents that can be pumped.

5.1.1 Implications of Probability Conservation in Cyclic Processes

The integrated currents $\{\mathcal{F}_{ij}\}$ produced during a cyclic process are not arbitrary. Probability conservation imposes certain relationships between them. For example, in Fig. 5.1, probability conservation (Eq. 5.1 below) demands that $\mathcal{F}_{12} = \mathcal{F}_{23}$, $\mathcal{F}_{13} + \mathcal{F}_{23} = \mathcal{F}_{34}$, etc. In this section, I explain how probability conservation affects the integrated current and describe a method for identifying an independent set of integrated currents.

For cyclic processes [$p(0) = p(\tau)$], the integral of the continuity equation (Eq. 1.3) over one period (from time $t = 0$ to $\tau$) is

$\sum_{i=1}^N \mathcal{F}_{ij} = 0, \qquad j = 1, \ldots, N$, (5.1)

where I have substituted in the definition of the integrated current (Eq. 1.5). Equation 5.1 is a set of constraints on the integrated currents $\{\mathcal{F}_{ij}\}$ imposed by probability conservation. Only the values of an independent subset of $\{\mathcal{F}_{ij}\}$ are required to deduce the integrated current along any edge using Eq. 5.1.
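Identifying such an independent subset amounts to a spanning-tree construction on the graph $G$ (Schnakenberg's procedure, described in the next paragraph). The sketch below is an illustrative implementation, not the dissertation's code: it grows a spanning tree, returns the left-over edges (chords), and recovers the cycle each chord closes. Since the chord set is not unique, the particular chords returned depend on the traversal order:

```python
from collections import deque

def chords_and_cycles(vertices, edges):
    """Spanning-tree construction: grow a tree by BFS; every edge left over is
    a chord, and re-adding a chord closes one fundamental cycle."""
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    root = vertices[0]
    parent = {root: None}
    tree, queue = set(), deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                tree.add(frozenset((u, v)))
                queue.append(v)
    chords = [e for e in edges if frozenset(e) not in tree]

    def tree_path(u, v):
        # walk each endpoint up to the root, then splice at the first common vertex
        up_u, x = [], u
        while x is not None:
            up_u.append(x); x = parent[x]
        up_v, x = [], v
        while x is not None:
            up_v.append(x); x = parent[x]
        meet = next(x for x in up_u if x in up_v)
        return up_u[:up_u.index(meet) + 1] + up_v[:up_v.index(meet)][::-1]

    return chords, {e: tree_path(*e) for e in chords}
```

Applied to the graph of Fig. 5.1 (vertices 1 through 4; edges 1-2, 2-3, 3-4, 1-4, 1-3, listed in that order), this BFS happens to keep edges 1-2, 1-4, and 1-3 in the tree and returns chords 2-3 and 3-4; the dissertation's choice of chords 1-2 and 1-3 is an equally valid alternative. Either way the chord count is $E - N + 1 = 2$.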
This (non-unique) subset of independent integrated currents depends strongly on the topology of the underlying state space G and can be iden- tified as follows, using the method outlined by Schnakenberg in Ref. [56]. We sequentially remove edges of G until the removal of an additional edge disconnects the graph. Each removed edge is called a chord and the collection of chords is labeled EC. The integrated currents along these chords of the original graph G form the desired (non-unique) collec- tion of independent integrated currents. Carrying out this procedure for the graph in Fig. 5.1, we can remove edges 1$2 and 1$3 to arrive at the graph in Fig. 5.2; the removal of a third edge would disconnect the graph. All the integrated currents in G can then be Figure 5.2: The graph in Fig. 5.1 after the removal of edges 1$2 and 1$3. This graph is constructed in order to identify a non-unique collection of chords for the graph in Fig. 5.1. The values of the integrated currents along chords 1$2 and 1$3 are sufficient to calculate the integrated current along any edge of the graph in Fig. 5.1 57 deduced using Eq. 5.1 and the values of the integrated currents F12 and F13. The choice of chords 1$2 and 1$3 is not unique, another possible selection could have been 1$4 and 2$3. Each chord c can be uniquely associated to a cycle in G, Cc (recall that a cycle is a directed sequence of vertices of a graph with common initial and terminal points), and the integrated current along c, Fc, can be equated with the integrated current flowing around the cycle Cc. The cycle Cc is identified as the unique cycle (apart from direction) that is closed by the re-addition of chord c. For example, inserting edge 1$2 into Fig. 5.2 closes the cycle 1!2!3!4!1, pictured in Fig. 5.3. The collection of cycles associated Figure 5.3: An example fundamental set of cycles for the graph in Fig. 5.1. The cycle on the left, 1!2!3!4!1, is obtained by reinserting the chord 1$2 into the graph in Fig. 5.2. 
The cycle on the right, $1\to2\to3\to1$, is obtained by reinserting the chord $1\leftrightarrow 3$ into the graph in Fig. 5.2. The cycle directions have arbitrarily been chosen to be clockwise.

to the chords in $E_C$ are called a fundamental set. For the graph in Fig. 5.1, an example fundamental set is depicted in Fig. 5.3. For a graph with $E$ edges and $N$ vertices, there are
\[ C = E - N + 1 \tag{5.2} \]
cycles in a fundamental set [56], a consequence of Euler's formula [83]. Consequently, there are only $C$ independent integrated currents, namely those integrated currents produced along the chords of the $C$ cycles. Thus, the question of control reduces to fixing the integrated currents flowing about the cycles of a fundamental set.

5.1.2 Control Method

To fix the integrated currents flowing around the cycles of a fundamental set, I propose the following method, which utilizes a family of adiabatic protocols $\vec\lambda_c(t)$, $c = 1,\dots,C$, one for each chord of a fundamental set, that trace out infinitesimal loops in parameter space. Adiabatic protocols are advantageous, since adiabatic integrated currents are geometric, allowing us to use geometry to visualize how different protocols generate integrated currents. I focus on adiabatic protocols that trace out infinitesimal loops, because for such protocols the integrated current depends only on the location and orientation of the loop in parameter space.

Let us fix an arbitrary point $\vec\lambda_0$ in parameter space and only consider infinitesimal adiabatic protocols about this point. Now, consider the protocol $\vec\lambda_d(t)$ associated to the chord $d \in E_C$. The protocol $\vec\lambda_d(t)$ traces out an infinitesimal loop in parameter space (about $\vec\lambda_0$) that bounds a two-dimensional surface $\mathcal{D}_d$ of infinitesimal area $\varepsilon$. From Eq. 3.10, the integrated current produced along chord $c \in E_C$ during the cyclic process generated by the adiabatic protocol $\vec\lambda_d(t)$ is
\[ F_c = \int_{\mathcal{D}_d} H_c, \tag{5.3} \]
where $H_c$ is the differential two-form associated to the chord $c$. As I demonstrate in the following section (Eq.
5.9 below), the integral in Eq. 5.3 is determined by the orientation of $\mathcal{D}_d$. Now, the key step is to choose the orientation of $\mathcal{D}_d$ so that the integrated current (Eq. 5.3) along chord $d$ is nonzero, $F_d \neq 0$, while the integrated current along every other chord is zero, $F_c = 0$ for $c \neq d$. An illustration of this setup is depicted in Fig. 5.4 for a three-dimensional parameter space. By repeatedly performing $\vec\lambda_d(t)$, any value of integrated current can be generated along $d$. Then any collection of integrated currents $\{F_{ij}\}$ can be generated by repeatedly performing a particular sequence of the adiabatic protocols corresponding to different chords.

Figure 5.4: Illustration of the control strategy utilizing infinitesimal adiabatic protocols. The shaded ellipse is the two-dimensional surface $\mathcal{D}_d$ bounded by the infinitesimal adiabatic protocol $\vec\lambda_d(t)$. The integrated current along chord $c$, $F_c$, generated during the protocol $\vec\lambda_d(t)$ is given by the flux of the vector field $\vec{H}_c$ through $\mathcal{D}_d$ (Eq. 5.3). Similarly, the flux of the vector field $\vec{H}_d$ through $\mathcal{D}_d$ equals the integrated current along chord $d$, $F_d$. The control method is implemented by choosing the orientation of $\mathcal{D}_d$ to be perpendicular to $\vec{H}_c$, as shown here, in order that $F_c = 0$ for $c \neq d$.

5.2 Constraints on Control

The implementation of the control method outlined in the previous section (Sec. 5.1.2) may not always be achievable. One possible reason for its failure is that we control too few external parameters. In this section, I argue why a minimum number of external parameters is needed and derive a relationship between the minimum number of required external parameters and the topology of the state space (Eq. 5.4 below).

In order for the control method outlined in Sec. 5.1.2 to be achievable, the number of external parameters that are manipulated, $L$, must satisfy
\[ L \geq \left\lceil \frac{E - N}{2} \right\rceil + 2. \tag{5.4} \]
For our example in Figs. 5.1 and 5.3, $E = 5$ and $N = 4$ (there are $C = 2$ cycles); thus, from Eq.
5.4 at least 3 external parameters must be varied. This is a necessary condition, but it is not sufficient. Clearly, if we violate the conditions of the no-pumping theorem, no integrated currents will be produced no matter how many external parameters are varied.

To prove Eq. 5.4, I first show that the integrated current for a cyclic adiabatic infinitesimal protocol is governed by the orientation of the loop traced out in parameter space. I then argue that the dimension of parameter space, $L$, must satisfy Eq. 5.4 in order to have enough degrees of freedom to choose the orientation of the loop in such a way as to force all integrated currents along chords to zero, save for one.

Consider again the protocol $\vec\lambda_d(t)$ associated to the chord $d$, which bounds the two-dimensional surface $\mathcal{D}_d$ with infinitesimal area $\varepsilon$. From Eq. 3.10 (Eq. 5.3), the integrated current along chord $c$ generated during the adiabatic protocol $\vec\lambda_d(t)$ is
\[ F_c = \int_{\mathcal{D}_d} H_c = \int_{\mathcal{D}_d} H_c^{mn}\, d\lambda_m \wedge d\lambda_n, \tag{5.5} \]
where the external parameters $\lambda_m$, $m = 1,\dots,L$, play the role of coordinates for the $L$-dimensional parameter space, $H_c^{mn}$ are the components of $H_c$ in these coordinates, and the Einstein summation convention is being used. To evaluate Eq. 5.5, let us introduce local coordinates $x_k = x_k(\lambda_1,\dots,\lambda_L)$, $k = 1,2$, on $\mathcal{D}_d$. In terms of these local coordinates, Eq. 5.5 may be written as
\[ F_c = \int_{\mathcal{D}_d} H_c^{mn}\, n_{mn}\, dS, \tag{5.6} \]
where $n_{mn}$ are the components of the orientation bi-vector on $\mathcal{D}_d$,
\[ n = \frac{\partial_{x_1} \wedge \partial_{x_2}}{\left|\partial_{x_1} \wedge \partial_{x_2}\right|}, \tag{5.7} \]
which is an algebraic expression for the orientation of the plane $\mathcal{D}_d$ illustrated in Fig. 5.5, and

Figure 5.5: Illustration of the orientation bi-vector $n$ (Eq. 5.7) of an infinitesimal loop in an $L = 3$ dimensional parameter space, with components $n_{23}$, $n_{31}$, and $n_{12}$.
Pictured is the unique three-vector $\vec n = (n_{23}, n_{31}, n_{12}) = (\cos\phi\sin\theta, \sin\phi\sin\theta, \cos\theta)$ associated to the orientation bi-vector $n$, where the polar angle $\theta$ and the azimuthal angle $\phi$ are the two numbers required to specify the orientation of a plane in a three-dimensional space (see Appendix A).

\[ dS = \left| \frac{\partial\lambda_m}{\partial x_1}\frac{\partial\lambda_n}{\partial x_2} - \frac{\partial\lambda_m}{\partial x_2}\frac{\partial\lambda_n}{\partial x_1} \right| dx_1 \wedge dx_2 \tag{5.8} \]
is the surface area element. Here, $\partial_{x_k}$, $k = 1,2$, are the coordinate basis vectors along $x_k$, and $|\zeta| = \sqrt{\zeta_{mn}\zeta^{mn}}$ is the Euclidean norm of the bi-vector $\zeta$. Since the area of $\mathcal{D}_d$ is infinitesimally small, we can approximate $\int_{\mathcal{D}_d} dS \approx \varepsilon$ in Eq. 5.6 to conclude that
\[ F_c \approx \varepsilon\, H_c^{mn}\, n_{mn}. \tag{5.9} \]
Thus, by appropriately choosing $n$ (Eq. 5.7) we can fix the orientation of $\mathcal{D}_d$ so that all the integrated currents along the chords are zero except along chord $d$. This imposes the $C-1$ conditions
\[ F_c \approx \varepsilon\, H_c^{mn}\, n_{mn} = 0, \qquad c \neq d, \tag{5.10} \]
on the components $n_{mn}$ of the orientation bi-vector.

In Appendix A, I show that the orientation bi-vector associated to a plane in an $L$-dimensional space is uniquely determined by $2(L-2)$ numbers; see Fig. 5.5. If the dimension of parameter space is not large enough, the $2(L-2)$ undetermined components of $n$ may not be sufficient to satisfy all the $C-1$ conditions in Eq. 5.10. To guarantee that we can satisfy all the $C-1$ constraints in Eq. 5.10, we need at least
\[ L \geq \left\lceil \frac{C-1}{2} \right\rceil + 2 \tag{5.11} \]
external parameters. Equation 5.4 then follows after substituting in Eq. 5.2.

Chapter 6
Continuous Stochastic Pumps

The previous chapters have discussed the current decomposition formula and some of its consequences for discrete stochastic pumps. In this chapter, I investigate the implications of the current decomposition formula for continuous stochastic pumps. For simplicity, I focus on systems that evolve diffusively in one dimension. Although this does not encompass all continuous stochastic pumps, it does include the large class of Brownian ratchets [8, 34, 35, 36].
Since the results for continuous stochastic pumps are formally very similar to the discrete case, I have collected all the results into one chapter. I first review the mathematics of diffusion processes in Sec. 6.1. Then in Sec. 6.2, the discrete current decomposition formula obtained in Chap. 2 is extended to the case of one-dimensional continuous stochastic pumps. This formula is then used in Sec. 6.3 to show that adiabatic pumping in continuous pumps is also geometric. A no-pumping theorem for continuous stochastic pumps is then proved and investigated in Sec. 6.4. Finally, the current decomposition formula is shown in Sec. 6.5 to provide a simple mathematical argument for the widely known fact that the rectification of current requires broken spatial symmetry. The contents of Secs. 6.2-6.5 were originally derived by Horowitz with the guidance of Jarzynski and are published in Ref. [33].

6.1 Mathematical Framework

Let us consider a system that can be modeled as a one-dimensional diffusion process [63] on the circle. The state of the process at time $t$ is denoted by $x(t)$ and takes values in the range $x \in [0,L]$, with the end points identified. This class of diffusion processes encompasses variables that are intrinsically periodic, such as the dihedral angle of a chemical bond, as well as extended reaction coordinates evolving in a periodic potential [34]. The probability density $P(x,t)$ to observe state $x$ at time $t$ evolves according to the Fokker-Planck equation [64]
\[ \frac{\partial}{\partial t}P(x,t) = \left[ -\frac{\partial}{\partial x}A_{\vec\lambda}(x) + \frac{\partial^2}{\partial x^2}B_{\vec\lambda}(x) \right] P(x,t) \equiv \hat{\mathcal{L}}_{\vec\lambda}(x)\, P(x,t), \tag{6.1} \]
with periodic boundary conditions. The dynamics depend on the external parameters $\vec\lambda$ through the drift coefficient $A_{\vec\lambda}(x)$ and diffusion coefficient $B_{\vec\lambda}(x)$, which are assumed to be periodic in $x$. To make the notation concise, the time-dependence of $\vec\lambda$ will be left implicit throughout this chapter. Equation 6.1 is naturally expressed as a continuity equation
\[ \frac{\partial}{\partial t}P(x,t) + \frac{\partial}{\partial x}J(x,t) = 0, \tag{6.2} \]
where the instantaneous current
\[ J(x,t) = \left[ A_{\vec\lambda}(x) - \frac{\partial}{\partial x}B_{\vec\lambda}(x) \right] P(x,t) \equiv \hat{\mathcal{J}}_{\vec\lambda}(x)\, P(x,t) \tag{6.3} \]
is the rate of flow of probability, in the positive direction, past a fixed observation point $x$ at time $t$. As in earlier chapters, we also focus on the integrated current
\[ F(x,t) = \int_0^t dt'\, J(x,t'), \tag{6.4} \]
which measures the net flow of probability past the point $x$ during the time interval $0 < t' < t$. For every fixed $\vec\lambda$, there exists a unique stationary distribution $P^s_{\vec\lambda}(x)$, satisfying $\hat{\mathcal{L}}_{\vec\lambda} P^s_{\vec\lambda} = 0$, with stationary current $J^s_{\vec\lambda} = \hat{\mathcal{J}}_{\vec\lambda}(x) P^s_{\vec\lambda}(x)$. From Eq. 6.2 it follows that $J^s$ does not depend on $x$: in the stationary state, the same current flows past every observation point $x$.

To analyze such diffusion processes, it is convenient to define two auxiliary functions $\psi_{\vec\lambda}(x)$ and $\varphi_{\vec\lambda}(x)$, which are not necessarily periodic in $x$:
\[ \varphi_{\vec\lambda}(x) = \ln B_{\vec\lambda}(x) - \int_0^x dy\, \frac{A_{\vec\lambda}(y)}{B_{\vec\lambda}(y)} \equiv \ln B_{\vec\lambda}(x) + \psi_{\vec\lambda}(x). \tag{6.5} \]
The function $\varphi_{\vec\lambda}(x)$ is called the potential, in view of the role this function plays when the dynamics satisfy detailed balance (see Sec. 6.1.1 below). Observe that $A_{\vec\lambda}(x)$ and $B_{\vec\lambda}(x)$ can be reconstructed from $\varphi_{\vec\lambda}(x)$ and $\psi_{\vec\lambda}(x)$, that is, Eq. 6.5 can be inverted:
\[ A_{\vec\lambda}(x) = -\psi'_{\vec\lambda}(x)\, e^{\varphi_{\vec\lambda}(x) - \psi_{\vec\lambda}(x)}, \qquad B_{\vec\lambda}(x) = e^{\varphi_{\vec\lambda}(x) - \psi_{\vec\lambda}(x)}, \tag{6.6} \]
where $\psi'_{\vec\lambda}(x) = (\partial/\partial x)\psi_{\vec\lambda}(x)$. In other words, the diffusion process is characterized equally well by the auxiliary functions as by the drift and diffusion coefficients. A convenient alternative expression for the Fokker-Planck operator $\hat{\mathcal{L}}_{\vec\lambda}$ in terms of $\varphi_{\vec\lambda}(x)$ and $\psi_{\vec\lambda}(x)$ is
\[ \hat{\mathcal{L}}_{\vec\lambda}(x) = \frac{\partial}{\partial x}\, e^{-\psi_{\vec\lambda}(x)}\, \frac{\partial}{\partial x}\, e^{\varphi_{\vec\lambda}(x)}. \tag{6.7} \]

6.1.1 Detailed Balance for Diffusion Processes

As with discrete stochastic processes, there are important consequences when the frozen dynamics satisfy detailed balance. With this in mind, I now discuss detailed balance in the context of stationary diffusion processes, such as the frozen dynamics of a continuous stochastic pump. Recall that detailed balance requires the transition probabilities to have the symmetry (cf. Eq.
1.10):
\[ P(x',t'|x,t)\, P^s(x) = P(x,t'|x',t)\, P^s(x'). \tag{6.8} \]
For a stationary diffusion process, detailed balance can be expressed in terms of the Fokker-Planck operator $\hat{\mathcal{L}}$. Following Gardiner [63], let us set $t' = t + \Delta t$ in Eq. 6.8 and let $\Delta t \to 0$. Recalling that the Fokker-Planck equation (Eq. 6.1) implies that
\[ P(x', t+\Delta t\,|\,x, t) \approx \left[ 1 + \Delta t\, \hat{\mathcal{L}}(x') \right] \delta(x'-x) \tag{6.9} \]
for small time steps $\Delta t$, where $\delta(x'-x)$ is the Dirac delta function, allows us to write Eq. 6.8 as
\[ \hat{\mathcal{L}}(x')\, \delta(x'-x)\, P^s(x) = \hat{\mathcal{L}}(x)\, \delta(x'-x)\, P^s(x'). \tag{6.10} \]
The above equation restricts the functional form of the Fokker-Planck operator. To make this explicit, let us multiply Eq. 6.10 by a sufficiently well-behaved function $f(x')$ and then integrate over all $x'$:
\[ \int dx'\, f(x')\, \hat{\mathcal{L}}(x')\, \delta(x'-x)\, P^s(x) = \int dx'\, f(x')\, \hat{\mathcal{L}}(x)\, \delta(x'-x)\, P^s(x') \tag{6.11} \]
\[ P^s(x) \int dx'\, \delta(x-x')\, \hat{\mathcal{L}}^\dagger(x') f(x') = \hat{\mathcal{L}}(x) \int dx'\, \delta(x-x')\, P^s(x') f(x'), \tag{6.12} \]
where in the second line we have integrated by parts on the left-hand side and introduced the formal adjoint of the Fokker-Planck operator,
\[ \hat{\mathcal{L}}^\dagger(x) = A(x)\frac{\partial}{\partial x} + B(x)\frac{\partial^2}{\partial x^2}, \tag{6.13} \]
which is analogous to the transpose of a matrix. Integrating over $x'$ in Eq. 6.12 gives
\[ P^s(x)\, \hat{\mathcal{L}}^\dagger(x) f(x) = \hat{\mathcal{L}}(x) \left[ P^s(x) f(x) \right]. \tag{6.14} \]
The above equation is an operator equation for $\hat{\mathcal{L}}$ which must be true for all (well-behaved) functions. This is the continuous analogue of Eq. 1.10.

Equation 6.14 implies that the stationary distribution can be expressed as $P^s(x) \propto e^{-\varphi(x)}$, where the potential $\varphi(x)$ (Eq. 6.5) can be interpreted as an energy, as I now show. Inserting the alternative expression for $\hat{\mathcal{L}}$ in Eq. 6.7 into Eq. 6.14 leads, after some straightforward calculus, to
\[ \frac{\partial f(x)}{\partial x}\, e^{-\psi(x)}\, \frac{\partial}{\partial x}\left[ e^{\varphi(x)} P^s(x) \right] = 0. \tag{6.15} \]
Since this must be true for any function $f(x)$, we must have $P^s(x) \propto e^{-\varphi(x)}$. The continuity of $P^s(x)$ implies that $\varphi(0) = \varphi(L)$, which combined with Eq. 6.5 implies that $\int_0^L dy\, A(y)/B(y) = 0$. Thus, the ratio $A(x)/B(x)$ behaves like a conservative force, allowing us to use its integral $\int_0^x dy\, A(y)/B(y)$ to define a consistent energy function, namely the potential $\varphi(x)$.
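The identification $P^s \propto e^{-\varphi}$ can be checked numerically. The sketch below uses an assumed, illustrative potential $V(x)$ with drift $A = -V'$ and constant diffusion $B = D$ on a ring of circumference 1, so that $\varphi = V/D$ up to an additive constant; the current $J = (A - B\,\partial_x)P$ of Eq. 6.3, evaluated on $P \propto e^{-\varphi}$, should then vanish to within discretization error:

```python
import math

# Check numerically that P(x) ∝ exp(-φ(x)) is stationary under detailed
# balance. Assumed model (illustrative only): A(x) = -V'(x), B = D on a
# ring of circumference 1, so that φ(x) = V(x)/D up to a constant.

D = 0.5
def V(x): return math.cos(2 * math.pi * x)
def Vp(x): return -2 * math.pi * math.sin(2 * math.pi * x)   # V'(x)

n = 4000
h = 1.0 / n
xs = [i * h for i in range(n)]
P = [math.exp(-V(x) / D) for x in xs]
Z = sum(P) * h                      # normalization constant
P = [p / Z for p in P]

Jmax = 0.0
for i, x in enumerate(xs):
    dP = (P[(i + 1) % n] - P[(i - 1) % n]) / (2 * h)   # periodic derivative
    J = -Vp(x) * P[i] - D * dP                          # Eq. 6.3 with B = D
    Jmax = max(Jmax, abs(J))

assert Jmax < 1e-3   # zero up to O(h²) finite-difference error
```

The same construction fails when $\oint A/B\, dx \neq 0$, in which case no single-valued potential $\varphi$ exists on the circle.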
Thus, when detailed balance is satisfied (Eq. 6.14), we will identify the stationary density with the equilibrium density, $P^s(x) = P^{\mathrm{eq}}(x) \propto e^{-\varphi(x)}$. Detailed balance for continuous diffusion processes can also be characterized by a condition similar to the Kolmogorov condition (Eq. 1.41) for discrete Markov processes, as noted by Qian [84].

6.2 Current Decomposition Formula

I now derive a current decomposition formula for continuous stochastic pumps analogous to the one for discrete stochastic pumps in Eq. 2.1. As will be proved, the current can be decomposed into two contributions according to
\[ J(x,t) = J^s_{\vec\lambda} + \int_0^L dx'\, V_{\vec\lambda}(x,x')\, \dot{P}(x',t), \tag{6.16} \]
where an analytic expression for the integral kernel $V_{\vec\lambda}(x,x')$ is given in Eq. 6.25 below. This exact result gives the net current as the sum of a baseline stationary contribution $J^s_{\vec\lambda}$ and an excess or "pumped" contribution
\[ J^{\mathrm{ex}}(x,t) = \int_0^L dx'\, V_{\vec\lambda}(x,x')\, \dot{P}(x',t), \tag{6.17} \]
associated with the variation of external parameters. Again, the stationary current $J^s_{\vec\lambda}$ represents the underlying current that would flow if the parameters were held fixed, and can be identified by measuring the current after allowing the system to relax to the stationary state with external parameters fixed to $\vec\lambda$. The excess current $J^{\mathrm{ex}}(x,t)$ represents the additional flow of current beyond $J^s_{\vec\lambda}$, which is induced by the variation of the external parameters.

To derive Eq. 6.16, we first solve for $P(x,t)$ in terms of $\dot{P}(x,t)$ (Eq. 6.21 below), and then combine that result with Eq. 6.3 to determine $J(x,t)$. To this end, let us take the following atypical view of the Fokker-Planck equation: for fixed $t$, let us interpret Eq. 6.1 as an operator equation for $P(x,t)$ with operator $\hat{\mathcal{L}}_{\vec\lambda}(x)$ and source term $\dot{P}(x,t)$. Ordinarily, we would solve an operator equation by finding the inverse operator. However, since our Fokker-Planck operator has a null eigenvector, $\hat{\mathcal{L}}_{\vec\lambda} P^s_{\vec\lambda} = 0$, it is not invertible.
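The same obstruction, and its resolution, can be seen in a finite-dimensional sketch. For an irreducible rate matrix $L$ (columns summing to zero) with stationary vector $p_s$, one can solve $L g_j = e_j - p_s$ column by column under the gauge choice $\mathbf{1}^{\mathsf T} g_j = 0$; the resulting matrix $G$ then satisfies $G L = I - p_s \mathbf{1}^{\mathsf T}$, a matrix analogue of the projection property of Eq. 6.20 below. The 3-state rate matrix used here is an arbitrary illustrative choice, not taken from the text:

```python
# Finite-dimensional analogue of the pseudoinverse: solve L g_j = e_j - ps
# (consistent, since both sides are annihilated by the left null vector 1^T)
# with the gauge 1^T g_j = 0, and verify the projection G L = I - ps 1^T.

def solve(M, b):
    """Gauss-Jordan elimination with partial pivoting (no libraries)."""
    n = len(b)
    T = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(T[r][c]))
        T[c], T[p] = T[p], T[c]
        for r in range(n):
            if r != c:
                f = T[r][c] / T[c][c]
                T[r] = [a - f * bb for a, bb in zip(T[r], T[c])]
    return [T[i][n] / T[i][i] for i in range(n)]

# Illustrative irreducible rate matrix (each column sums to zero).
L = [[-3.0, 1.0, 2.0],
     [2.0, -2.0, 1.0],
     [1.0, 1.0, -3.0]]

# Stationary vector: L ps = 0 with normalization 1^T ps = 1
# (the dropped third row of L is implied by the first two).
M = [row[:] for row in L]
M[2] = [1.0, 1.0, 1.0]
ps = solve(M, [0.0, 0.0, 1.0])

cols = []
for j in range(3):
    Mj = [row[:] for row in L]
    Mj[2] = [1.0, 1.0, 1.0]                 # gauge: components of g_j sum to 0
    rhs = [(1.0 if i == j else 0.0) - ps[i] for i in range(3)]
    rhs[2] = 0.0
    cols.append(solve(Mj, rhs))

G = [[cols[j][i] for j in range(3)] for i in range(3)]
GL = [[sum(G[i][k] * L[k][j] for k in range(3)) for j in range(3)]
      for i in range(3)]
proj = [[(1.0 if i == j else 0.0) - ps[i] for j in range(3)]
        for i in range(3)]
assert all(abs(GL[i][j] - proj[i][j]) < 1e-9
           for i in range(3) for j in range(3))
```

In words: $G$ inverts $L$ on the subspace of zero-sum vectors and annihilates nothing else of interest, just as $\hat{\mathcal{G}}$ does for $\hat{\mathcal{L}}$.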
Therefore, we instead introduce the integral operator
\[ \hat{\mathcal{G}}_{\vec\lambda}(x) = \int dx'\, g_{\vec\lambda}(x,x'), \tag{6.18} \]
which is the pseudoinverse of $\hat{\mathcal{L}}_{\vec\lambda}(x)$ (see below for a brief definition of pseudoinverse in this context). Here, the integral kernel $g_{\vec\lambda}(x,x')$ is the modified Green's function for $\hat{\mathcal{L}}_{\vec\lambda}(x)$ [85], defined as the solution of the boundary value problem
\[ \hat{\mathcal{L}}^\dagger_{\vec\lambda}(x')\, g_{\vec\lambda}(x,x') = \delta(x'-x) - P^s_{\vec\lambda}(x), \qquad g_{\vec\lambda}(x,x')\Big|_{x'=0}^{x'=L} = 0, \qquad \frac{\partial g_{\vec\lambda}(x,x')}{\partial x'}\Bigg|_{x'=0}^{x'=L} = 0, \tag{6.19} \]
where $\hat{\mathcal{L}}^\dagger_{\vec\lambda}(x)$ is defined in Eq. 6.13. The term $P^s_{\vec\lambda}(x)$ in Eq. 6.19 accounts for the fact that $\hat{\mathcal{L}}_{\vec\lambda}(x)$ is not invertible; without this term the boundary value problem has no solution [85]. Since Eq. 6.19 is unaffected by the replacement $g_{\vec\lambda}(x,x') \to g_{\vec\lambda}(x,x') + f(x)$, the solution of Eq. 6.19 is not unique. This is the source of the non-uniqueness of $V_{\vec\lambda}(x,x')$ mentioned above.

As mentioned, $\hat{\mathcal{G}}_{\vec\lambda}$ is the pseudoinverse of $\hat{\mathcal{L}}_{\vec\lambda}$. That is, in place of the usual inverse property ($\hat{\mathcal{G}}_{\vec\lambda}\hat{\mathcal{L}}_{\vec\lambda} = \hat{I}$), $\hat{\mathcal{G}}_{\vec\lambda}$ satisfies the inverse-like property
\[ \hat{\mathcal{G}}_{\vec\lambda}\hat{\mathcal{L}}_{\vec\lambda} P(x,t) \equiv \int_0^L dx'\, g_{\vec\lambda}(x,x')\, \hat{\mathcal{L}}_{\vec\lambda}(x') P(x',t) = P(x,t) - P^s_{\vec\lambda}(x), \tag{6.20} \]
where we have twice integrated by parts and exploited Eq. 6.19. We see that $\hat{\mathcal{G}}_{\vec\lambda}\hat{\mathcal{L}}_{\vec\lambda}$ projects onto a complement of the null space of $\hat{\mathcal{L}}_{\vec\lambda}$ [86]. Simply put, $\hat{\mathcal{G}}_{\vec\lambda}$ acts as an inverse on the subspace where $\hat{\mathcal{L}}_{\vec\lambda}$ is invertible.

We now apply $\hat{\mathcal{G}}_{\vec\lambda}$ to both sides of Eq. 6.1, then use the pseudoinverse property (Eq. 6.20) to obtain
\[ P(x,t) = P^s_{\vec\lambda}(x) + \int_0^L dx'\, g_{\vec\lambda}(x,x')\, \dot{P}(x',t). \tag{6.21} \]
Next we apply the current operator (Eq. 6.3) to both sides of this equation. This gives us
\[ J(x,t) = J^s_{\vec\lambda} + \int_0^L dx'\, \hat{\mathcal{J}}_{\vec\lambda}(x)\, g_{\vec\lambda}(x,x')\, \dot{P}(x',t). \tag{6.22} \]
Comparing with Eq. 6.16, we see that
\[ V_{\vec\lambda}(x,x') = \hat{\mathcal{J}}_{\vec\lambda}(x)\, g_{\vec\lambda}(x,x'). \tag{6.23} \]
Finally, we apply $\hat{\mathcal{J}}_{\vec\lambda}(x)$ to Eq.
6.19 to arrive at
\[ \hat{\mathcal{L}}^\dagger_{\vec\lambda}(x')\, V_{\vec\lambda}(x,x') = \hat{\mathcal{J}}_{\vec\lambda}(x)\, \delta(x'-x) - J^s_{\vec\lambda}, \qquad V_{\vec\lambda}(x,x')\Big|_{x'=0}^{x'=L} = 0, \qquad \frac{\partial V_{\vec\lambda}(x,x')}{\partial x'}\Bigg|_{x'=0}^{x'=L} = 0. \tag{6.24} \]
This boundary value problem is solved (as shown in the following paragraph) to obtain
\[ V_{\vec\lambda}(x,x') = \left[ 1 + J^s_{\vec\lambda}\,\tau_{\vec\lambda}(L) \right] \pi_{\vec\lambda}(x') + \theta(x'-x) + J^s_{\vec\lambda}\,\tau_{\vec\lambda}(x'), \tag{6.25} \]
where $\theta(x'-x)$ is the Heaviside step function; $\varphi_{\vec\lambda}$ and $\psi_{\vec\lambda}$ are given in Eq. 6.5; and we have introduced the splitting probability [55]
\[ \pi_{\vec\lambda}(x) = \frac{\int_x^L dy\, e^{\psi_{\vec\lambda}(y)}}{\int_0^L dy\, e^{\psi_{\vec\lambda}(y)}}, \tag{6.26} \]
and the conditional mean first exit time [55]
\[ \tau_{\vec\lambda}(x) = \int_0^x dy \int_y^L dz\, e^{\psi_{\vec\lambda}(y) - \varphi_{\vec\lambda}(z)}. \tag{6.27} \]
The splitting probability and the conditional mean first exit time have the following interpretations [63]: with $\vec\lambda$ fixed, if the system evolves from $x'$ until it first exits the domain $[0,L]$, then $\pi_{\vec\lambda}(x')$ is the probability that this exit will occur at $x = 0$, rather than $x = L$; and $\tau_{\vec\lambda}(x')$ is the average time for the system to make this first exit through $x = 0$. Roughly speaking, the splitting probability measures the relative likelihood for the process to go clockwise versus counterclockwise around the circle.

I now show that Eq. 6.25 solves Eq. 6.24. The solution follows by combining the homogeneous solutions with the inhomogeneous solutions and then applying the boundary conditions. One homogeneous solution is the splitting probability, which is also the solution to the boundary value problem [55]
\[ \hat{\mathcal{L}}^\dagger_{\vec\lambda}(x')\, \pi_{\vec\lambda}(x') = 0, \qquad \pi_{\vec\lambda}(0) = 1, \qquad \pi_{\vec\lambda}(L) = 0, \tag{6.28} \]
as can be checked by substitution of Eq. 6.26 into Eq. 6.28. The other homogeneous solution is any arbitrary function of $x$ alone, say $f(x)$. The two contributions to the inhomogeneous solution, $J^s_{\vec\lambda}\tau_{\vec\lambda}(x')$ and $\theta(x'-x)$, are obtained by noting that the conditional mean first exit time is the solution to the boundary value problem [55]
\[ \hat{\mathcal{L}}^\dagger_{\vec\lambda}(x')\, \tau_{\vec\lambda}(x') = -1, \qquad \tau_{\vec\lambda}(0) = 0, \qquad \frac{\partial \tau_{\vec\lambda}(x')}{\partial x'}\Bigg|_{x'=L} = 0, \tag{6.29} \]
and that
\[ \hat{\mathcal{L}}^\dagger_{\vec\lambda}(x')\, \theta(x'-x) = \left[ A_{\vec\lambda}(x')\frac{\partial}{\partial x'} + B_{\vec\lambda}(x')\frac{\partial^2}{\partial x'^2} \right] \theta(x'-x) \tag{6.30} \]
\[ = \left[ A_{\vec\lambda}(x') + B_{\vec\lambda}(x')\frac{\partial}{\partial x'} \right] \delta(x'-x) \tag{6.31} \]
\[ = \left[ A_{\vec\lambda}(x) - \frac{\partial}{\partial x}B_{\vec\lambda}(x) \right] \delta(x'-x) \tag{6.32} \]
\[ = \hat{\mathcal{J}}_{\vec\lambda}(x)\, \delta(x'-x). \tag{6.33} \]
Thus, the most general solution to Eq. 6.24 is
\[ V_{\vec\lambda}(x,x') = C\, \pi_{\vec\lambda}(x') + f(x) + \theta(x'-x) + J^s_{\vec\lambda}\,\tau_{\vec\lambda}(x'), \tag{6.34} \]
where $C$ is an arbitrary constant. The value of $C$ is fixed by satisfying the first boundary condition in Eq. 6.24. The second boundary condition is then automatically satisfied due to the structure of Eq. 6.24. Finally, we arrive at the solution in Eq. 6.25 by setting $f(x) = 0$, which we are free to do since the solution to Eq. 6.24 is not unique.

6.3 Adiabatic Pumping

The next step is to show that the current decomposition formula (Eq. 6.16), as in the discrete case, implies that the excess integrated current is geometric in the adiabatic limit. The excess integrated current $F^{\mathrm{ex}}(x,t)$ is the net current pumped across the point $x$, in excess of the time-integrated, baseline stationary flow, $F^s(t) = \int dt'\, J^s_{\vec\lambda(t')}$. From Eqs. 6.17 and 6.4, we find
\[ F^{\mathrm{ex}}(x,t) = \int_0^t dt' \int_0^L dx'\, V_{\vec\lambda}(x,x')\, \dot{P}(x',t'). \tag{6.35} \]
If $\vec\lambda$ is varied very slowly from $\vec a$ to $\vec b$, the system remains near the stationary density, allowing us to substitute $P(x,t) \approx P^s_{\vec\lambda(t)}(x)$ into Eq. 6.35:
\[ F^{\mathrm{ex}}(x) = \int \vec{S}_{\vec\lambda}(x) \cdot d\vec\lambda, \tag{6.36} \]
where $\vec{S}_{\vec\lambda}(x) = \int dx'\, V_{\vec\lambda}(x,x')\, \vec\nabla_{\vec\lambda} P^s_{\vec\lambda}(x')$. This result is geometric: the excess integrated current depends only on the path taken from $\vec a$ to $\vec b$ in parameter space. If the drift and diffusion coefficients take the special form $A_{\vec\lambda}(x) = -(\partial/\partial x)V_{\vec\lambda}(x)$ and $B_{\vec\lambda}(x) = D$, Eq. 6.36 reduces to a result obtained by Parrondo for reversible ratchets [43]. Additionally, Shi and Niu have shown that the integrated current for adiabatic continuous one-dimensional stochastic pumps operating in the low-temperature limit is quantized [87].

6.4 No-Pumping Theorem for Diffusions with Detailed Balance

Within the general model analyzed above, let us now restrict ourselves to the case that detailed balance holds for all $\vec\lambda$, hence $J^s_{\vec\lambda} = 0$.
Imagine that the parameters are varied at an arbitrary rate from $t = 0$ to $\tau$ such that the process is cyclic, $P(x,0) = P(x,\tau)$. It is then natural to consider the integrated current over one cycle, which for a cyclic process with detailed balance ($J^s_{\vec\lambda} = 0$) is
\[ F = \int_0^\tau dt \int_0^L dx'\, \pi_{\vec\lambda}(x')\, \dot{P}(x',t); \tag{6.37} \]
see Eqs. 6.4, 6.16, and 6.25. The value of $F$ represents the net circulation of probability during one cycle. If the probability merely sloshes back and forth without any accumulation of current, then $F = 0$; however, if $F > 0$ ($F < 0$) then there is a nonzero flow of probability in the counterclockwise (clockwise) direction.

We now argue that to obtain $F \neq 0$, both the potential $\varphi_{\vec\lambda}(x)$ and the splitting probability $\pi_{\vec\lambda}(x)$ must be varied during the process. The first of these conditions is easy to understand: if the potential remains fixed during the process, i.e. $\varphi_{\vec\lambda(t)}(x) = \varphi_{\vec a}(x)$, then the system simply remains in the initial equilibrium state, $P(x,t) \propto \exp[-\varphi_{\vec a}(x)]$, producing no currents whatsoever; this is the "no-go theorem" of Reimann in Ref. [34], Section 6.4.1. To see that the splitting probability must also be varied to produce integrated current, suppose we fix $\pi_{\vec\lambda(t)}(x) = \pi_{\vec a}(x)$ but vary $\varphi_{\vec\lambda(t)}(x)$. Then, since the process is cyclic, we get
\[ F = \int_0^L dx'\, \pi_{\vec a}(x') \int_0^\tau dt\, \dot{P}(x',t) = 0. \tag{6.38} \]
We can construct a heuristic interpretation of this result by recalling that $\pi_{\vec\lambda}(x)$ measures the likelihood to generate clockwise rather than counterclockwise flow, as discussed following Eq. 6.26. The integrand $\pi_{\vec\lambda}(x')\, \dot{P}(x',t)$ appearing in Eq. 6.37 then represents, roughly, the contribution to the clockwise current induced by the redistribution of probability that occurs at location $x'$ and time $t$, and the integrated current $F$ is a sum of such contributions. For a cyclic process, any probability that leaves the location $x'$ must eventually return; thus, if $\pi_{\vec\lambda}(x')$ remains constant, the clockwise and counterclockwise contributions ultimately cancel one another ($F = 0$).
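Eqs. 6.5 and 6.26 show that $\pi$ depends on the drift and diffusion coefficients only through $\psi(y) = -\int_0^y A/B$, i.e., only through the ratio $A/B$. A small numerical sketch, with assumed, purely illustrative coefficients, confirms that multiplying $A$ and $B$ by a common factor leaves the splitting probability, and hence this cancellation mechanism, untouched:

```python
import math

# The splitting probability of Eq. 6.26 computed on a grid:
# π(x) = ∫_x^L e^{ψ(y)} dy / ∫_0^L e^{ψ(y)} dy, ψ(y) = -∫_0^y A/B.
# Scaling A and B by a common (here spatially varying) factor g(x)
# leaves A/B, hence ψ, hence π unchanged.

def make_pi(A, B, n=2000, L=1.0):
    h = L / n
    psi = [0.0]
    for i in range(n):                       # trapezoid rule for -∫ A/B
        x0, x1 = i * h, (i + 1) * h
        psi.append(psi[-1] - 0.5 * h * (A(x0) / B(x0) + A(x1) / B(x1)))
    w = [math.exp(p) for p in psi]
    tail = [0.0] * (n + 1)                   # tail[k] = ∫_{x_k}^L e^ψ dy
    for k in range(n - 1, -1, -1):
        tail[k] = tail[k + 1] + 0.5 * h * (w[k] + w[k + 1])
    return [t / tail[0] for t in tail]

A = lambda x: math.sin(2 * math.pi * x) - 0.3          # assumed drift
B = lambda x: 1.0 + 0.5 * math.cos(2 * math.pi * x)    # assumed diffusion
g = lambda x: 2.0 + math.sin(4 * math.pi * x)          # common factor

pi1 = make_pi(A, B)
pi2 = make_pi(lambda x: g(x) * A(x), lambda x: g(x) * B(x))
assert max(abs(a - b) for a, b in zip(pi1, pi2)) < 1e-12
assert abs(pi1[0] - 1.0) < 1e-12 and pi1[-1] == 0.0    # Eq. 6.28 boundaries
```

The boundary values $\pi(0) = 1$ and $\pi(L) = 0$ reproduce the conditions of Eq. 6.28.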
If the splitting probability varies with time, then there is no reason to expect such cancellation.

This no-pumping theorem provides a concrete mathematical criterion for the generation of zero integrated current. Actually realizing the independent variation of both the potential and the splitting probability in any particular system will depend greatly on the system's properties and may not be feasible; yet, in systems where one may locally vary the drift and diffusion coefficients independently, one may also vary the potential and splitting probability independently, as can be seen from Eqs. 6.5 and 6.26. From Eqs. 6.5 and 6.26, we see that $\pi_{\vec\lambda(t)}(x)$ [or equivalently $\psi_{\vec\lambda(t)}(x)$] remains constant when the ratio of the drift to the diffusion coefficient does not depend on $\vec\lambda$:
\[ \frac{A_{\vec\lambda}(x)}{B_{\vec\lambda}(x)} = X(x). \tag{6.39} \]
Thus our no-pumping theorem states that $F = 0$ if either (i) the potential is held fixed, or (ii) the drift and diffusion coefficients are related by Eq. 6.39. The connection between the no-pumping condition in Eq. 6.39 and the discrete no-pumping theorem (Chap. 4) is discussed briefly in Appendix B.

6.5 Rectification of Current Requires Broken Symmetry

Finally, I show that the current decomposition formula (Eq. 6.16) reproduces the known fact that the rectification of current in a periodically driven Brownian ratchet requires broken spatial symmetry [34, 35]. Specifically, we will show that when the driving protocol is time-periodic and the drift and diffusion coefficients have specific spatial symmetries, the integrated current over one period of driving is zero. We will say that a periodic function $f(x)$ is symmetric or has even symmetry if $f(d+x) = f(d-x)$, or has odd symmetry if $f(d+x) = -f(d-x)$, for some fixed value $d$. Without loss of generality we set $d = 0$, since by a suitable coordinate shift $d$ can take any value. Figure 6.1 depicts a symmetric ratchet potential.
Figure 6.1: Symmetric ratchet potential $V(x) = V(-x)$ plotted in dimensionless units ($x/L$ versus $V(x)/V_0$). The corresponding drift and diffusion coefficients are $A(x) = -V'(x)$ and $B(x) = D$.

As a generalization of the symmetric ratchet potential, we now consider the situation where the drift coefficient has odd symmetry, $A_{\vec\lambda}(x) = -A_{\vec\lambda}(-x)$, and the diffusion coefficient has even symmetry, $B_{\vec\lambda}(x) = B_{\vec\lambda}(-x)$, for every $\vec\lambda$. These assumptions imply that the system satisfies detailed balance and that both $\psi_{\vec\lambda}(x)$ and the periodic steady state $P(x,t) = P(x,t+\tau)$ are symmetric. Equations 6.26 and 6.37 then give
\[ F = \int_0^\tau dt \int_0^L dx \int_0^L dy\, \theta(y-x)\, \frac{e^{\psi_{\vec\lambda}(y)}}{\int_0^L dy'\, e^{\psi_{\vec\lambda}(y')}}\, \dot{P}(x,t). \tag{6.40} \]
Changing variables $y \to L-y$ and $x \to L-x$, and exploiting the symmetry and periodicity of $\psi_{\vec\lambda}(y)$ and $P(x,t)$, we find
\[ F = \int_0^\tau dt \int_0^L dx \int_0^L dy\, \theta(x-y)\, \frac{e^{\psi_{\vec\lambda}(y)}}{\int_0^L dy'\, e^{\psi_{\vec\lambda}(y')}}\, \dot{P}(x,t). \tag{6.41} \]
If we now use the identity $\theta(x-y) = 1 - \theta(y-x)$ and invoke conservation of normalization, $\int_0^L dx\, \dot{P}(x,t) = 0$, Eq. 6.41 becomes $F = -F$. The anticipated conclusion $F = 0$ is then obvious.

Conclusion

The control of molecular-scale motion is complicated by the stochastic nature of the microscopic environment. Nevertheless, controllable molecular complexes are continually being developed. As their sophistication grows, so will their utility. In light of their promise, there is growing interest in developing a theoretical framework which may aid in the design of future artificial molecular machines. Paramount to this goal is understanding how arbitrary time-dependent perturbations can be used to control stochastic systems. This dissertation describes a step taken in this direction, by developing and analyzing a theoretical tool useful for the study of non-autonomous molecular machines: the current decomposition formula, Eqs. 2.1 and 6.16.
Although the current decomposition formula is a formal expression, it has been useful in deriving additional results that might otherwise have been difficult to prove. Chapter 3 showed that the current decomposition formula leads to a geometric formula for the excess integrated current, and in Chap. 4 a no-pumping theorem for the integrated current in a non-adiabatic cyclic process was derived. The no-pumping theorem provides insight into how nonequilibrium stochastic systems respond to arbitrary driving by providing conditions under which no integrated current is produced. The theoretical validity of these results has been demonstrated for discrete stochastic pumps in Chaps. 1-4 and also for one-dimensional continuous stochastic pumps in Chap. 6. Unfortunately, no experiments to date have been performed with the express purpose of confirming these predictions; this remains an important objective for future research.

Additionally, a method for controlling discrete stochastic pumps that exploits the geometric properties of adiabatic pumping has been proposed in Chap. 5. I argued that arbitrary amounts of integrated current can be generated over the course of a cyclic process by employing infinitesimal adiabatic protocols. In order to implement this control strategy, a minimum number of external parameters must be manipulated, and this minimum number depends on the topology of the discrete stochastic pump. Future research regarding this control strategy would benefit from model studies. Further investigations using models should reveal the limits of this strategy, develop intuition, and hopefully suggest testable experiments.

The above results are just some of a growing collection of model-independent predictions for stochastic pumps. The pump-restriction theorem proved by Chernyak and Sinitsyn, which was mentioned briefly in Chap.
4, characterizes how the topology of a stochastic pump's state space affects the generation of pumped current. I also touched on the pump-quantization theorem due to Chernyak and Sinitsyn in Chap. 3, which describes how current pumped adiabatically can become quantized in the low-temperature limit. Furthermore, a number of previous authors, including Astumian, Parrondo, Sinitsyn, and Nemenman, have noted the geometric character of adiabatic pumping in a variety of contexts, as discussed in Chap. 3.

Integrating the above collection of predictions into a cohesive framework is an important next step. By understanding how each result is related to the others, we gain deeper intuition regarding how stochastic pumps may operate. The long-term goal is to draw upon this intuition to develop a theory of control, which should aid in the engineering of artificial molecular machines.

Appendix A
Specifying the Orientation of a Plane

In this appendix, I show that the orientation of a plane in an $N$-dimensional space ($N \geq 3$) can be specified by $2(N-2)$ numbers. This follows by observing that the orientation of a plane can be associated to a decomposable bi-vector of unit norm, called the orientation bi-vector.

A plane $P$ in an $N$-dimensional (vector) space $S$ can be defined (in a non-unique way) as the span of two vectors $w$ and $v$ [88],
\[ P = \mathrm{span}\{w, v\} = \{ a \mid a = xw + yv;\ x, y \in \mathbb{R} \}. \tag{A.1} \]
We can use these two vectors $w$ and $v$ to give $P$ an algebraic representation in the form of the bi-vector
\[ \zeta_P = w \wedge v. \tag{A.2} \]
A basis for the space of all bi-vectors can be constructed from the basis vectors $e_j$ of $S$ as the set of bi-vectors $e_i \wedge e_j$, $i,j = 1,\dots,N$. Any bi-vector $\zeta$ can be expanded in this basis as
\[ \zeta = \sum_{i<j} \zeta^{ij}\, e_i \wedge e_j, \tag{A.3} \]
where $\zeta^{ij}$ are the components of $\zeta$ in the basis $\{e_i \wedge e_j\}$. A generic bi-vector is an antisymmetric tensor of rank two with $\binom{N}{2} = N(N-1)/2$ independent components. However, not every bi-vector corresponds to a plane.
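In $N = 4$ dimensions there is a single such condition, the Plücker relation $\zeta^{12}\zeta^{34} - \zeta^{13}\zeta^{24} + \zeta^{14}\zeta^{23} = 0$, and it is easy to test numerically (indices below run 0-3; the example vectors are arbitrary):

```python
import itertools

# Decomposability test in N = 4: a wedge product of two vectors satisfies
# the Plücker relation identically, while the classic counterexample
# e1∧e2 + e3∧e4 does not, and hence spans no plane.

def wedge(w, v):
    """Components ζ^{ij} (i < j) of w ∧ v."""
    return {(i, j): w[i] * v[j] - w[j] * v[i]
            for i, j in itertools.combinations(range(4), 2)}

def plucker(z):
    """The single N = 4 Plücker expression; zero iff z is decomposable."""
    return (z[(0, 1)] * z[(2, 3)]
            - z[(0, 2)] * z[(1, 3)]
            + z[(0, 3)] * z[(1, 2)])

w = [1.0, 2.0, 3.0, 4.0]
v = [0.5, -1.0, 2.0, 1.0]
assert abs(plucker(wedge(w, v))) < 1e-12       # decomposable: relation holds

generic = {p: 0.0 for p in itertools.combinations(range(4), 2)}
generic[(0, 1)] = 1.0
generic[(2, 3)] = 1.0                           # e1∧e2 + e3∧e4
assert plucker(generic) == 1.0                  # not decomposable
```

For $N = 4$ this single relation matches the count below: $\binom{4}{2} = 6$ components minus $\binom{2}{2} = 1$ independent relation leaves $2N - 3 = 5$ independent components.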
Planes are only associated to bi-vectors that are decomposable: bi-vectors that can be expressed as the wedge product of two vectors, as in Eq. A.2. A bi-vector is decomposable if and only if it satisfies the Plücker relations [89, 90],
\[ \epsilon^{ijkl}_{stmn}\, \zeta^{st}\, \zeta^{mn} = 0, \tag{A.4} \]
where
\[ \epsilon^{ijkl}_{stmn} = \begin{cases} +1, & stmn \text{ an even permutation of } ijkl \\ -1, & stmn \text{ an odd permutation of } ijkl \\ 0, & \text{otherwise}. \end{cases} \tag{A.5} \]
The Plücker relations are a set of $\binom{N}{4}$ conditions on the components $\zeta^{ij}$ of a bi-vector that guarantee that the bi-vector is decomposable, and thus can be associated to a plane. Unfortunately, not all of the $\binom{N}{4}$ Plücker relations are linearly independent. A closer inspection of their derivation reveals that only $\binom{N-2}{2}$ of the Plücker relations are linearly independent (Proposition 6.4.4 of Ref. [91]). So, a decomposable bi-vector has $2N-3 = \binom{N}{2} - \binom{N-2}{2}$ independent components. Consequently, a plane in an $N$-dimensional space is completely specified by $2N-3$ numbers, the independent components of an associated bi-vector.

The orientation of a plane can be represented as an orientation bi-vector, which is a decomposable bi-vector of unit norm. Fixing the length of the bi-vector to one introduces one additional constraint. Thus, an orientation bi-vector has $2N-4 = (2N-3)-1$ independent components, and the orientation of a plane is specified by these $2(N-2)$ numbers.

Appendix B
Connection between Discrete and Continuous No-Pumping Theorems

In this appendix, I show that the no-pumping theorem for continuous stochastic pumps can be understood as the continuous generalization of the no-pumping theorem for discrete stochastic pumps, as pointed out by Chernyak and Sinitsyn [51] as well as Maes, Netočný, and Thomas [53]. The method utilized here is to approximate the continuous stochastic pump as a discrete stochastic pump in order to show that fixing barrier energies in the discrete stochastic pump is equivalent to Eq. 6.39 [92].
As an approximation of a one-dimensional diffusion process on the circle, we consider a particle making a random walk on the interval $[0,1]$, jumping among $N-1$ sites uniformly distributed with separation distance $\Delta x = 1/N$. The particle is only allowed to jump between nearest-neighbor sites, so that in the continuous limit its behavior is described by a diffusion process.

For simplicity, we assume detailed balance is satisfied. Consequently, the barrier energies $W_{ij}$ are symmetric and the potential can be identified with the state energies $E_i$. Substituting Eq. 1.13 into Eq. 1.1, we find that with these assumptions the master equation is
$$\dot{p}_i = p_{i+1}\, e^{E_{i+1} - W_{i+1,i}} + p_{i-1}\, e^{E_{i-1} - W_{i,i-1}} - p_i \left( e^{E_i - W_{i+1,i}} + e^{E_i - W_{i,i-1}} \right). \quad (B.1)$$

In order to make a connection with the continuous diffusion process, we introduce continuous analogs of the position, barrier energies, state energies, and probability density. To each discrete location $i$, we associate the continuous position $x_i = i \Delta x$. In addition, we define the differentiable functions $W(x)$, $E(x)$, and $P(x,t)$ of position $x$ with the property that $W(x_i) = W_{i+1,i}$, $E(x_i) = E_i$, and $P(x_i,t) = p_i(t)$. Using $W(x)$, $E(x)$, and $P(x,t)$, we can approximate the terms in Eq. B.1 in the small $\Delta x$ limit as
$$e^{-W_{i,i-1}} \to e^{-W(x_{i-1})} = e^{-W(x_i - \Delta x)} \approx e^{-W(x) + \Delta x\, W'(x) - \Delta x^2\, W''(x)/2 + \cdots} \approx e^{-W(x)} \left[ 1 + \Delta x\, W'(x) + \Delta x^2 \left( \frac{(W'(x))^2}{2} - \frac{W''(x)}{2} \right) + \cdots \right],$$
$$e^{E_{i-1}} \to e^{E(x_{i-1})} = e^{E(x_i - \Delta x)} \approx e^{E(x) - \Delta x\, E'(x) + \Delta x^2\, E''(x)/2 + \cdots} \approx e^{E(x)} \left[ 1 - \Delta x\, E'(x) + \Delta x^2 \left( \frac{E''(x)}{2} + \frac{(E'(x))^2}{2} \right) + \cdots \right]. \quad (B.2)$$

Substituting the above approximations into Eq. B.1 and taking the limit $\Delta x \to 0$ while diffusively scaling time, $t \to t/(\Delta x)^2$, Eq. B.1 becomes, after a lengthy, tedious manipulation,
$$\frac{\partial P(x,t)}{\partial t} = \frac{\partial}{\partial x} \left[ W'(x)\, e^{E(x) - W(x)}\, P(x,t) \right] + \frac{\partial^2}{\partial x^2} \left[ e^{E(x) - W(x)}\, P(x,t) \right]. \quad (B.3)$$
Comparing with the Fokker-Planck equation (Eq. 6.1), we can identify
$$A(x) = -W'(x)\, e^{E(x) - W(x)}, \qquad B(x) = e^{E(x) - W(x)}. \quad (B.4)$$
The continuous no-pumping theorem states that if the ratio $A_\lambda(x)/B_\lambda(x)$ is independent of $\lambda$, then no integrated current is produced.
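The structure of Eq. B.1 can be checked numerically. The sketch below (Python/NumPy; the specific profiles $E(x)$ and $W(x)$ are hypothetical illustrative choices, not taken from the text) assembles the rate matrix of Eq. B.1 on a discretized ring and confirms that, with symmetric barrier energies, its stationary distribution is the Boltzmann distribution $p_i \propto e^{-E_i}$, as detailed balance requires:

```python
import numpy as np

# Illustrative (hypothetical) energy landscape on a ring of unit
# circumference: state energies E[i] = E(x_i) and barrier energies
# W[i] = W_{i+1,i} between neighboring sites i and i+1.
N = 100
x = np.arange(N) / N
E = np.cos(2 * np.pi * x)
W = 1.0 + 0.5 * np.sin(2 * np.pi * x)

# Rate matrix of Eq. B.1: R[j, i] is the jump rate from site i to site j.
R = np.zeros((N, N))
for i in range(N):
    ip = (i + 1) % N
    R[ip, i] = np.exp(E[i] - W[i])    # forward jump over barrier W_{i+1,i}
    R[i, ip] = np.exp(E[ip] - W[i])   # reverse jump over the same barrier
R -= np.diag(R.sum(axis=0))           # diagonal enforces probability conservation

# The stationary distribution is the null eigenvector of R.  With
# symmetric barriers (detailed balance) it must be Boltzmann: p_i ∝ e^{-E_i}.
evals, evecs = np.linalg.eig(R)
p = np.real(evecs[:, np.argmin(np.abs(evals))])
p /= p.sum()
boltz = np.exp(-E)
boltz /= boltz.sum()
assert np.allclose(p, boltz, atol=1e-8)
```

The check works because each bond carries equal and opposite equilibrium fluxes: $e^{-E_i}\, e^{E_i - W_{i+1,i}} = e^{-W_{i+1,i}} = e^{-E_{i+1}}\, e^{E_{i+1} - W_{i+1,i}}$.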
Equation B.4 implies that this no-pumping condition is equivalent to the discrete stochastic pump no-pumping condition that the barrier energies be independent of time, since
$$\frac{A_\lambda(x)}{B_\lambda(x)} = -W'_\lambda(x). \quad (B.5)$$

Bibliography

[1] J. Howard, Mechanics of Motor Proteins and the Cytoskeleton (Sinauer, Sunderland, 2001)
[2] R. Phillips, J. Kondev, and J. Theriot, Physical Biology of the Cell (Garland Science, New York, 2009)
[3] E. D. Kay, D. A. Leigh, and F. Zerbetto, Angew. Chem., Int. Ed. 46, 72 (2007), and references therein.
[4] B. Feringa, J. Org. Chem. 72, 6635 (2007)
[5] W. R. Browne and B. L. Feringa, Nat. Nanotechnol. 1, 25 (2006)
[6] J. Bath and A. J. Turberfield, Nat. Nanotechnol. 2, 275 (2007)
[7] E. M. Purcell, Am. J. Phys. 45, 3 (1977)
[8] R. D. Astumian and P. Hänggi, Phys. Today 55, 33 (2002)
[9] R. M. Berry, Curr. Biol. 15, R385 (2005)
[10] P. Nelson, Biological Physics: Energy, Information, Life (W. H. Freeman and Company, New York, 2004)
[11] K. D. Philipson and D. A. Nicoll, Annu. Rev. Physiol. 62, 111 (2000)
[12] T. Y. Tsong and R. D. Astumian, Annu. Rev. Physiol. 50, 273 (1988)
[13] D. S. Liu, R. D. Astumian, and T. Y. Tsong, J. Biol. Chem. 265, 7260 (1990)
[14] T. L. Hill, Free Energy Transduction in Biology (Academic Press, New York, 1977)
[15] A. B. Kolomeisky and M. E. Fisher, Annu. Rev. Phys. Chem. 58, 675 (2007)
[16] H. Noji, R. Yasuda, M. Yoshida, and K. Kinosita, Nature 386, 299 (1997)
[17] F. Jülicher, A. Ajdari, and J. Prost, Rev. Mod. Phys. 69, 1269 (1997)
[18] H. Qian, J. Phys.: Condens. Matter 17, S3783 (2005)
[19] D. Keller and C. Bustamante, Biophys. J. 78, 541 (2000)
[20] B. Yurke, A. J. Turberfield, A. P. Mills, F. C. Simmel, and J. L. Neumann, Nature 406, 605 (2000)
[21] Y. Tian and C. Mao, J. Am. Chem. Soc. 126, 11410 (2004)
[22] S. Fletcher, F. Dumur, M. Pollard, and B. Feringa, Science 310, 80 (2005)
[23] Y. Shirai, A. Osgood, Y. Zhao, K. Kelly, and J. Tour, Nano Lett. 5, 2330 (2005)
[24] Y. Shirai, J. Morin, T. Sasaki, J. M.
Guerrero, and J. M. Tour, Chem. Soc. Rev. 35, 1043 (2006)
[25] R. D. Astumian, Phys. Chem. Chem. Phys. 9, 5067 (2007)
[26] R. Bissell, E. Cordova, A. Kaifer, and J. Stoddart, Nature 369, 133 (1994)
[27] J. Shin and N. Pierce, J. Am. Chem. Soc. 126, 10834 (2004)
[28] S. Venkataraman, R. M. Dirks, P. W. K. Rothemund, E. Winfree, and N. Pierce, Nat. Nanotechnol. 2, 490 (2007)
[29] P. Yin, H. Yan, X. Daniell, A. J. Turberfield, and J. Reif, Angew. Chem., Int. Ed. 43, 4906 (2004)
[30] N. A. Sinitsyn and I. Nemenman, Europhys. Lett. 77, 58001 (2007)
[31] S. Rahav, J. Horowitz, and C. Jarzynski, Phys. Rev. Lett. 101, 140602 (2008)
[32] N. A. Sinitsyn, J. Phys. A 42, 193001 (2009)
[33] J. M. Horowitz and C. Jarzynski, J. Stat. Phys. 136, 917 (2009)
[34] P. Reimann, Phys. Rep. 361, 57 (2002)
[35] P. Hänggi and F. Marchesoni, Rev. Mod. Phys. 81, 387 (2009)
[36] R. D. Astumian, Science 276, 917 (1997)
[37] L. P. Faucheux and A. Libchaber, J. Chem. Soc., Faraday Trans. 95, 3163 (1995)
[38] B. Robertson and R. D. Astumian, Biophys. J. 57, 689 (1990)
[39] R. D. Astumian, J. Phys.: Condens. Matter 17, S3753 (2005)
[40] V. S. Markin, T. Y. Tsong, R. D. Astumian, and B. Robertson, J. Chem. Phys. 93, 5062 (1990)
[41] R. D. Astumian, P. Chock, T. Y. Tsong, and H. Westerhoff, Phys. Rev. A 39, 6416 (1989)
[42] H. Westerhoff, T. Y. Tsong, P. Chock, Y. Chen, and R. D. Astumian, Proc. Natl. Acad. Sci. 83, 4734 (1986)
[43] J. M. R. Parrondo, Phys. Rev. E 57, 7297 (1998)
[44] R. D. Astumian, Phys. Rev. Lett. 91, 118102 (2003)
[45] N. A. Sinitsyn and I. Nemenman, Phys. Rev. Lett. 99, 220408 (2007)
[46] R. D. Astumian, Proc. Natl. Acad. Sci. 104, 19715 (2007)
[47] I. M. Sokolov, J. Phys. A: Math. Gen. 32, 2541 (1999)
[48] R. D. Astumian and I. Derényi, Phys. Rev. Lett. 86, 3859 (2001)
[49] K. Jain, R. Marathe, A. Chaudhuri, and A. Dhar, Phys. Rev. Lett. 99, 190601 (2007)
[50] J. Ohkubo, J. Stat. Mech.: Theory Exp., P02011 (2008)
[51] V. Y. Chernyak and N. A. Sinitsyn, Phys. Rev. Lett.
101, 160601 (2008)
[52] V. Y. Chernyak and N. A. Sinitsyn, J. Chem. Phys. 131, 181101 (2009)
[53] C. Maes, K. Netočný, and S. R. Thomas, "General no-go condition for stochastic pumping," arXiv:1002.3811v1
[54] R. G. Busacker and T. L. Saaty, Finite Graphs and Networks: An Introduction with Applications (McGraw-Hill, New York, 1965)
[55] N. G. Van Kampen, Stochastic Processes in Physics and Chemistry, 3rd ed. (Elsevier Ltd., New York, 2007)
[56] J. Schnakenberg, Rev. Mod. Phys. 48, 571 (1976)
[57] T. Boullion and P. Odell, Generalized Inverse Matrices (Wiley-Interscience, New York, 1971)
[58] S. J. Leon, Linear Algebra With Applications (Prentice Hall, New Jersey, 2002)
[59] R. D. Astumian, Phys. Chem. Chem. Phys. 11, 9592 (2009)
[60] B. H. Mahan, J. Chem. Educ. 52, 299 (1975)
[61] J. M. R. Parrondo and B. J. De Cisneros, Appl. Phys. A 75, 179 (2002)
[62] R. Zia and B. Schmittmann, J. Stat. Mech.: Theory Exp., P07012 (2007)
[63] C. W. Gardiner, Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences, 3rd ed. (Springer-Verlag, New York, 2004)
[64] H. Risken, The Fokker-Planck Equation: Methods of Solution and Applications (Springer-Verlag, New York, 1984)
[65] R. Kubo, M. Toda, and N. Hashitsume, Statistical Physics II: Nonequilibrium Statistical Mechanics (Springer-Verlag, Berlin, 1985)
[66] J. L. Lebowitz and H. Spohn, J. Stat. Phys. 95, 333 (1999)
[67] C. Maes, F. Redig, and A. V. Moffaert, J. Math. Phys. 41, 1528 (2000)
[68] C. Maes, Séminaire Poincaré 2, 29 (2003)
[69] P. Gaspard, J. Stat. Phys. 117, 599 (2004)
[70] D. Q. Jiang, M. Qian, and M. P. Qian, Mathematical Theory of Nonequilibrium Steady States (Springer-Verlag, New York, 2004)
[71] T. Speck and U. Seifert, J. Phys. A: Math. Gen. 38, L581 (2005)
[72] D. Andrieux and P. Gaspard, J. Stat. Phys. 127, 107 (2007)
[73] U. Seifert, Eur. Phys. J. B 64, 423 (2008)
[74] V. Y. Chernyak, M. Chertkov, and C. Jarzynski, J. Stat. Mech.: Theory Exp., P08001 (2006)
[75] J. M.
Horowitz and C. Jarzynski, (unpublished)
[76] V. Y. Chernyak and N. A. Sinitsyn, "Robust quantization of molecular motor motion in a stochastic environment," arXiv:0906.3032v2
[77] R. Syski, Passage Times for Markov Chains (IOS Press, Washington, 1992)
[78] J. Ohkubo, Phys. Rev. E 80, 012101 (2009)
[79] M. Nakahara, Geometry, Topology and Physics, 2nd ed. (IOP Publishing, Philadelphia, 2003)
[80] D. A. Leigh, J. Wong, F. Dehez, and F. Zerbetto, Nature 424, 174 (2003)
[81] A. Bohm, A. Mostafazadeh, H. Koizumi, Q. Niu, and J. Zwanziger, The Geometric Phase in Quantum Systems (Springer-Verlag, New York, 2003)
[82] P. Talkner, New J. Phys. 1, 4.1 (1999)
[83] J. M. Harris, J. L. Hirst, and M. J. Mossinghoff, Combinatorics and Graph Theory (Springer-Verlag, New York, 2000)
[84] H. Qian, Phys. Rev. Lett. 81, 3063 (1998)
[85] I. Stakgold, Green's Functions and Boundary Value Problems, 2nd ed. (Wiley-Interscience, New York, 1998)
[86] S. Cardus, Operator Theory of the Pseudo-Inverse, Queen's Papers in Pure and Applied Mathematics No. 38 (Queen's University, Ontario, 1974)
[87] Y. Shi and Q. Niu, Europhys. Lett. 59, 324 (2002)
[88] W. Fleming, Functions of Several Variables, 2nd ed. (Springer-Verlag, New York, 1977)
[89] M. Marcus, Finite Dimensional Multilinear Algebra: Part II (Marcel Dekker, Inc., New York, 1975)
[90] T. Yokonuma, Tensor Space and Exterior Algebra (Amer. Math. Soc., Providence, 1991)
[91] P. M. Cohn, Basic Algebra: Groups, Rings, and Fields (Springer-Verlag, New York, 2002)
[92] N. A. Sinitsyn, private communication