ABSTRACT

Title of dissertation: UTILITY DRIVEN SAMPLED DATA CONTROL UNDER IMPERFECT INFORMATION

Pavankumar Tallapragada, Doctor of Philosophy, 2013

Dissertation directed by: Dr. Nikhil Chopra, Department of Mechanical Engineering

Computer based control systems, which are ubiquitous today, are essentially sampled data control systems. In traditional time-triggered control systems, the sampling period is conservatively chosen, based on a worst case analysis. However, in many control systems, such as those implemented on embedded computers or over a network, parsimonious sampling and computation is helpful. In this context, state/data based aperiodic utility driven sampled data control systems are a promising alternative. This dissertation is concerned with the design of utility driven event-triggers in certain classes of problems where the information available to the triggering mechanisms is imperfect. In the first part, the problem of utility driven event-triggering under partial state information is considered - specifically in the context of (i) decentralized sensing and (ii) dynamic output feedback control. In the case of full state feedback, albeit with decentralized sensing, methods are developed for designing local and asynchronous event-triggers for asymptotic stabilization of an equilibrium point of a general nonlinear system. In the special case of Linear Time Invariant (LTI) systems, the developed method also holds for dynamic output feedback control, which extends naturally to control over Sensor-Controller-Actuator Networks (SCAN), wherein even the controller is decentralized. The second direction pursued in this dissertation is that of parsimonious utility driven sampling not only in time but also in space. A methodology for co-designing an event-trigger and a quantizer of the sampled data controller is developed.
Effectively, the proposed methodology provides a discrete-event controller for asymptotic stabilization of an equilibrium point of a general continuous-time nonlinear system. In the last part, a method is proposed for designing utility driven event-triggers for the problem of trajectory tracking in general nonlinear systems, where the source of imperfect information is the exogenous reference inputs. Then, specifically in the context of robotic manipulators, we develop a utility driven sampled data implementation of an adaptive controller for trajectory tracking, wherein imperfect knowledge of system parameters is an added complication.

UTILITY DRIVEN SAMPLED DATA CONTROL UNDER IMPERFECT INFORMATION

by Pavankumar Tallapragada

Dissertation submitted to the Faculty of the Graduate School of the University of Maryland, College Park in partial fulfillment of the requirements for the degree of Doctor of Philosophy, 2013

Advisory Committee:
Assistant Professor Nikhil Chopra, Chair/Advisor
Professor Balakumar Balachandran
Professor Amr M. Baz
Associate Professor Jaydev P. Desai
Professor P. S. Krishnaprasad, Dean's Representative

© Copyright by Pavankumar Tallapragada 2013

Acknowledgments

I owe my gratitude to all the people who made this dissertation possible and who have made my graduate studies a memorable learning experience. First and foremost, I am grateful to my advisor Dr. Nikhil Chopra for his continuous support, patience and encouragement during my PhD. I also thank him for providing an open and free environment that allowed me to explore and pursue various research problems. I also thank Prof. P. S. Krishnaprasad, Prof. Balakumar Balachandran, Prof. Amr M. Baz and Prof. Jaydev P. Desai for serving on my committee and for their valuable inputs that improved the quality of the dissertation. I am grateful to all my colleagues and friends for helping me in various ways and for making my time at UMD memorable.
Specifically, I want to thank Yen-Chen Liu, Rubyca Jaai, David Berman, Eliot Rudnick, Mohamed Raafat, Atul Thakur, Anupam Anand and Sabyasachee Mishra. I also thank the ME administrative staff for all their help in making my stay at UMD smooth. Lastly, I thank my parents and my brother for their unconditional love and support, without which this dissertation would not have been possible. I acknowledge support for my research by the National Science Foundation under the grants numbered 0931661 and 1232127, and by the Office of Naval Research under the grant numbered N000141310160.

Table of Contents

List of Figures

1 Introduction
  1.1 Motivation
  1.2 Outline and Contributions of the Dissertation
  1.3 Preliminaries

I Event-Triggering Under Partial State Information

2 Decentralized Utility Driven Event-Triggering for Control of Nonlinear Systems
  2.1 Introduction
    2.1.1 Contributions
  2.2 Problem Setup
  2.3 Decentralized Asynchronous Event-Triggering
    2.3.1 Centralized Asynchronous Event-Triggering
    2.3.2 Decentralized Asynchronous Event-Triggering
    2.3.3 Decentralized Asynchronous Event-Triggering with Intermittent Communication from the Central Controller
  2.4 Linear Time Invariant Systems
  2.5 Simulation Results
    2.5.1 Linear System Example
    2.5.2 Nonlinear System Example
  2.6 Conclusions
3 Utility Driven Sampled Data Control of LTI Systems over Sensor-Controller-Actuator Networks
  3.1 Introduction
    3.1.1 Contributions
  3.2 Problem Setup
  3.3 Design of Decentralized Asynchronous Event-Triggering
  3.4 Event-Triggered Implementations of The Dynamic Controller
    3.4.1 Architecture I - Centralized
    3.4.2 Architecture II - Centralized Synchronous
    3.4.3 Architecture III - Decentralized Architecture
    3.4.4 Architecture IV - SCAN
  3.5 Simulation Results
    3.5.1 Architecture I
    3.5.2 Architecture II
    3.5.3 Architecture III
    3.5.4 Architecture IV - SCAN
  3.6 Conclusions

II Co-Design of Event-Triggering and Quantization

4 Utility Driven Co-design of Event-Trigger and Quantizer
  4.1 Introduction
    4.1.1 Contributions
  4.2 Problem statement
  4.3 Design of the Flow and the Jump Sets
    4.3.1 Selection of W
  4.4 Design of The Quantizer
    4.4.1 Design of the Quantizer in One Dimensional Systems
    4.4.2 Design of the Quantizer in Two Dimensional Systems
    4.4.3 Design of the Quantizer in n Dimensional Systems
  4.5 Example
  4.6 Discussion and Conclusions

III Utility Driven Event-Triggering for Trajectory Tracking

5 Utility Driven Sampled Data Control for Trajectory Tracking
  5.1 Introduction
    5.1.1 Contributions
  5.2 Problem statement and notation
  5.3 Linear Systems
  5.4 Nonlinear Systems
  5.5 Examples and simulation results
    5.5.1 Nonlinear System Example
    5.5.2 Linear System Example
  5.6 Conclusions

6 Utility Driven Sampled Data Adaptive Control for Tracking in Robot Manipulators
  6.1 Introduction
    6.1.1 Contributions
  6.2 Event-Triggered Control
  6.3 Event Based Adaptive Control
    6.3.1 Inter-sample times
  6.4 Two Link Planar Manipulator
  6.5 Results
    6.5.1 Simulation Results
    6.5.2 Experimental Results
  6.6 Conclusions
7 Conclusions

Bibliography

List of Figures

1.1 Time-triggered sampled data control.
1.2 Event-triggered sampled data control.
2.1 Batch reactor example: evolution of the (a) Lyapunov function, (b) time derivative of the Lyapunov function, along the flow of the closed loop system. (c) Sensor inter-transmission times (d) cumulative frequency distribution of the sensor inter-transmission times.
2.2 Nonlinear system example: evolution of the (a) Lyapunov function, (b) time derivative of the Lyapunov function, along the flow of the closed loop system. (c) Sensor inter-transmission times (d) cumulative frequency distribution of the sensor inter-transmission times.
2.3 Nonlinear system example with event-triggered communication from the controller to the sensor event-triggers: (a) Sensor inter-transmission times (b) cumulative frequency distribution of the sensor inter-transmission times. Evolution of (c) wi, (d) Ti parameters of the sensor event-triggers.
3.1 Architecture I: Sensor output available to the controller at all time. Co-located components have access to the others' output at any given time.
3.2 Architecture II: Synchronous transmissions by the sensor and the controller. Co-located components have access to the others' output at any given time.
3.3 Architecture III: Centralized controller with decentralized sensors and actuators, each transmitting its data asynchronously.
3.4 The SCAN control architecture has three functional layers. Each node in the sensor layer intermittently broadcasts its output to all the nodes in the observer layer. Each node in the observer layer intermittently broadcasts its state to every other node in that layer.
Each of the first m nodes of the observer layer also transmits intermittently to one of the actuator nodes. The dotted arrows indicate event-triggered communication links, with the event-trigger running at the tail end of the arrow. The solid arrows are physical links.
3.5 Architecture I: (a) The evolution of the Lyapunov function and (b) its derivative along the flow of the closed loop system.
3.6 Architecture I: (a) Inter-event times and (b) the cumulative frequency distribution of the inter-event times.
3.7 Architecture II: (a) The evolution of the Lyapunov function and (b) its derivative along the flow of the closed loop system.
3.8 Architecture II: (a) Inter-event times and (b) the cumulative frequency distribution of the inter-event times.
3.9 Architecture III: (a) The evolution of the Lyapunov function and (b) its derivative along the flow of the closed loop system.
3.10 Architecture III: (a) Inter-transmission times and (b) the cumulative frequency distribution of the inter-transmission times of the nodes. The curves labelled with ui and yj denote the relevant inter-transmission time data of the controller output ui and the sensor output yj, respectively.
3.11 Architecture IV: (a) The evolution of the Lyapunov function and (b) its derivative along the flow of the closed loop system.
3.12 Architecture IV: (a) Inter-transmission times and (b) the cumulative frequency distribution of the inter-transmission times of the nodes. The curves labelled with zi and yj denote the relevant inter-transmission time data of those nodes, respectively.
4.1 Design of the quantizer for 1-D systems.
The blue lines indicate the actual quantization cells or intervals, while $r_k^u$ and $r_k^l$ indicate the extremities of the over-designed quantization cells $C_k$.
4.2 Possible types of cells in 2-D systems, excluding $C_0$. The dots are the generators of the quantization cells, whose boundaries are represented by the polygons.
4.3 Geometry of Type 1 cells.
4.4 In the first stage of the design process, annuli are selected in a process analogous to (4.27) and Figure 4.1. The inner and outer boundaries of the first annulus are shown in blue, while those of the second annulus are shown in red.
4.5 Fig. 4.5(a) and Fig. 4.5(b) demonstrate the steps in covering an annulus. The dots indicate the generators of the quantization cells. Fig. 4.5(c) and Fig. 4.5(d) show that the procedure leads to a logarithmic quantizer in two dimensions.
4.6 Evolution of $|x|$ and $|e|/W$.
5.1 Simulation results for Case I.
5.2 Simulation results for Case II.
5.3 Theoretical lower bound on inter-event times for the linear system example.
5.4 Average ($\bar{T}$) and minimum ($T_{\min}$) inter-event times observed in the simulations parametrized by $\omega$.
6.1 A schematic of a two link planar revolute manipulator with the second link remotely driven from the base of Link 1.
6.2 (a) Controller has exact knowledge of the robot parameters. The figure shows the norm of the tracking error and the scaled measurement error.
(b) Controller has inaccurate knowledge of the robot parameters. The figure shows the norm of the tracking error.
6.3 The norm of the tracking error and the scaled measurement error. (a) = 0.6 (b) = 0.95
6.4 The desired joint positions and the actual positions of the robot. (a), (b) = 0.6, (c), (d) = 0.95
6.5 PHANToM Omni™
6.6 The cumulative frequency distribution of the inter-tick times of the PHANToM Omni™.
6.7 The desired joint positions and the actual positions of the robot. (a), (b) = 0.6, (c), (d) = 0.95
6.8 The cumulative frequency distribution of the control inter-update times in the experiments. (a) = 0.6, (b) = 0.95

Chapter 1

Introduction

The subject of this dissertation is the design of utility driven sampling mechanisms in sampled data control systems, specifically under some kind of imperfect information. In this chapter, the broader motivation for utility driven sampled data control is provided. Then, an outline of the dissertation and a summary of the contributions are given. The final section paves the way for the subsequent chapters by introducing the notation commonly used in the dissertation and by highlighting the factors that affect the design of utility driven sampled data control systems.

1.1 Motivation

Computer based control systems, which are ubiquitous today, are essentially sampled data control systems, wherein the control input to a "plant" is computed based on a sampled version of, often continuously varying, signals. In traditional time-triggered control systems, this sampling of the sensor data and computation/execution of the control is done periodically.
A basic time-triggered sampled data control system is shown in Figure 1.1 (for simplicity, time-triggering has been shown only on the actuation side). The control input to the plant is updated at discrete time instants and is held constant between updates. At discrete (and usually periodic) time instants, the "external" clock triggers the updates of the control input to the plant.

Figure 1.1: Time-triggered sampled data control.

The reasons for the popularity of this paradigm are ease of implementation and applicability to a wide range of systems. However, such sampled data control systems come at the cost of increased inefficiency from a sampling and computational perspective. This is because the period for sampling and control execution has to be determined by a worst case analysis and is independent of the system's state. This issue assumes even greater significance in the context of Cyber Physical Systems. For example, for control systems implemented on embedded computers with low computational capabilities, or for control systems implemented over a network with data rate constraints, parsimonious sampling and computation is helpful. In this context, state based aperiodic sampling is a promising alternative. In sampled data control systems, the requirement of a sampling mechanism is not usually reconstruction of the analog signal. Rather, it is to sufficiently serve the overall control goal - such as stabilization of a fixed point (equilibrium point). Thus, state based aperiodic sampling techniques have been explored over the years in different forms and under different names [1-5], [6,7] (Lebesgue sampling), [8] (interrupt-based control or feedback triggered control), [9] (state-triggered control). More recently, research in these directions has become focused as "event-triggered" or "event based" control [10-22], which is a representative list of some early papers. A basic event-triggered sampled data control system is shown in Figure 1.2.
Figure 1.2: Event-triggered sampled data control.

The triggering in this paradigm is, in general, aperiodic and is determined by a state/data dependent event-triggering condition that explicitly encodes the control goal. Thus, by appropriately designing the event-trigger, the control system samples only when necessary - when the last sampled data is deemed no longer useful towards meeting the control goal specifications. Thus, such control systems may be called "utility driven sampled data control systems".

Although this dissertation is closely related to the event-triggered control literature, we often (especially in this chapter) refer to our own work and that of others in the literature by the phrases "utility driven sampled data control", "utility driven event-triggering" and their variants. This has been done for two main reasons. First, these phrases emphasize the explicit encoding of the control goal in the event-triggering conditions. Second, the term "event" in the control community has other, well established connotations - such as in Discrete Event Systems [23] and in the area of robotics, where the controller responds to events such as the robot encountering an obstacle in its path. In each of these cases, the term "event" refers to something that is external to the control system. On the other hand, in the event-triggered control paradigm of Figure 1.2 and in much of the literature on the subject, "events" and "event" generation are internal mechanisms of the control system. Thus, to highlight this important distinction, the phrase "utility driven ..." and its variants are used in this dissertation.

At this stage a further clarification is needed. The sampled data control systems that we consider are based on emulating a given continuous time controller. That is, the "control computer" in Figure 1.2 is assumed to be given.
The proposed design methods simply prescribe the event-triggers that determine the sampling time instants, based on a notion of utility towards fulfilling a control goal. Indeed, this is the approach adopted in much of the event-triggered control literature. Moreover, in this dissertation we restrict the control goal specifications to asymptotic stabilization of an equilibrium point or a reference trajectory with a prescribed (state dependent) minimum convergence rate. Our guiding principle during the design of the event-triggers is to ensure that the sampling instances are as parsimonious as possible while also ensuring that the event-triggering condition is sufficiently simple. Obviously, each of these two requirements is in conflict with the other. However, a precise mathematical formulation of a trade-off is beyond the scope of this dissertation. Thus, the term "utility" is also used in a somewhat mathematically imprecise manner.

1.2 Outline and Contributions of the Dissertation

Much of the emerging area of utility driven event-triggered control and the closely related field of self-triggered control [24-28] is applicable to fixed-point stabilization under full state feedback. However, in practice, there are many applications where there is some imperfection in the information available to an event-trigger. This imperfection may be due to varied factors such as exogenous reference signals, quantization, imperfect knowledge of the system's dynamic parameters, or lack of full state feedback at an event-trigger, either due to decentralization or simply due to an inherent lack of full state feedback in the system. This dissertation addresses each of these issues in settings of varying generality. An outline and a summary of the contributions of the dissertation follows.

The dissertation is broadly divided into three parts. The first part of the dissertation is utility driven event-triggering under partial state information.
Much of the existing literature on event-triggered control assumes the availability of full state information to the event-trigger. This assumption fails to be satisfied in two very important scenarios - decentralized control systems and dynamic output feedback control. The first scenario is addressed in Chapter 2, wherein a control system with decentralized sensors and a central controller is considered. The decentralized sensors together are assumed to sense the complete state of the system, but they transmit data to the central controller intermittently and asynchronously, at time instants determined by local utility driven event-triggers. In the literature, some have approached this problem with restrictive assumptions. Others proposed event-triggers that could guarantee only semi-global practical stability, even for linear systems, if the sensors did not listen to the central controller. In contrast, the event-triggering scheme that we propose guarantees semi-global asymptotic stability for nonlinear systems and global asymptotic stability for linear systems, without the sensors having to listen to the controller. However, in the nonlinear case the design is conservative. Thus, we also propose a modification wherein the sensors occasionally receive updates from the controller.

Chapter 3 addresses the scenario where a system inherently lacks full state feedback and instead an output feedback dynamic (for example, observer based) controller has to be used. This chapter is concerned solely with Multi Input Multi Output (MIMO) Linear Time Invariant (LTI) systems. As one might expect, this problem is closely related to the subject matter of Chapter 2 and naturally extends to the case where the sensors are decentralized and not co-located with the controller.
In this chapter, we in fact progress from a centralized architecture, where the sensors, controller and actuators are co-located, to a fully decentralized control system - a Sensor-Controller-Actuator Network (SCAN). Again, the existing results in the literature guarantee only semi-global practical stability, while the proposed utility driven event-triggering scheme guarantees global asymptotic stability. Even in the most general of the architectures considered in this chapter, the SCAN, the assumptions on the system matrices are fairly simple. Portions of this chapter have been published in [29,30].

The second part expands the definition of utility driven sampling to include sampling in both time and space. The fields of event-triggered control and coarsest quantization have very similar motivations, although they are aimed at "coarse sampling" in time and space, respectively. In Chapter 4, we exploit the common principle behind the two fields, which is robustness/tolerance to measurement errors, to design implicitly verified discrete-event emulation based controllers for asymptotic stabilization of general nonlinear systems. In comparison to the coarsest quantization literature, our quantizer design holds for general multi-input nonlinear continuous time systems. A significant portion of the work in this chapter has been published in [31].

The third part is on utility driven sampled data control for trajectory tracking. Tracking a time varying trajectory or even a set-point is of tremendous practical importance in many control applications. In these applications, the goal is to make the state of the system follow a reference or desired trajectory, which is usually specified as an exogenous input to the system. In Chapter 5, a method for designing utility driven event-triggered controllers for trajectory tracking in nonlinear systems is proposed.
Parts of the work in this chapter have been published in [32,33], which are also the first to consider this important problem.

In Chapter 6, we propose a utility driven sampled data implementation of an adaptive controller for trajectory tracking in robot manipulators. This is motivated by the fact that, commonly, utility driven event-triggered controllers such as the one presented in Chapter 5 rely on knowledge of an accurate model of the system. However, building a model of high accuracy is a time consuming process and, in many cases, it may not even be possible. Therefore, it is important to extend the design of implicitly verified event based controllers to cases where only a poor model of the system is available. In this work, we propose an event-triggered emulation of an adaptive controller from the existing literature. The proposed controller is tested through simulations and experiments performed on a PHANToM Omni robotic manipulator. The contribution of this chapter is twofold. It is only the second work to consider an event-triggered implementation of an adaptive controller and, further, the only one applicable to a nonlinear and continuous time system. This chapter also contributes to the as yet limited body of experimental results on utility driven event-triggered control.

Finally, the dissertation is concluded in Chapter 7 with a summary and some possible directions for future research.

1.3 Preliminaries

The aim of this section is to introduce the preliminaries of utility driven sampled data control and highlight some important issues/factors affecting the design process. To this end, we introduce some basic mathematical notation and consider the problem of asymptotic stabilization of Multi Input Multi Output (MIMO) Linear Time Invariant (LTI) systems. Now, in sampled data control systems, the controller and/or the actuator make use of sampled versions of continuous-time signals.
Thus, let $\chi$ be any continuous-time signal (scalar or vector) and let $\{t_i\}$ be the increasing sequence of discrete time instants at which $\chi$ is sampled. Then we denote the resulting piecewise constant sampled signal by $\chi_s$, that is,

$$\chi_s(t) \triangleq \chi(t_i), \quad \forall t \in [t_i, t_{i+1}) \tag{1.1}$$

Often it is useful to view the sampled data, $\chi_s$, as resulting from an error in the measurement of the continuous-time signal, $\chi$, which is denoted by $e$,

$$e(t) \triangleq \chi_s(t) - \chi(t) = \chi(t_i) - \chi(t), \quad \forall t \in [t_i, t_{i+1}) \tag{1.2}$$

Note that $e$ is discontinuous at $t = t_i$, for each $i$, because $e(t_i) = \chi(t_i) - \chi(t_i) = 0$ while $\lim_{t \uparrow t_i} e(t) = \lim_{t \uparrow t_i} (\chi(t_{i-1}) - \chi(t))$.

In time-triggered implementations, the time instants $t_i$ in (1.1) are pre-determined and are commonly multiples of a fixed sampling period. On the other hand, in event-triggered implementations, the time instants $t_i$ are determined implicitly by a state/data based triggering condition that is checked online. Consequently, an event-triggering condition may result in inter-sample times $t_{i+1} - t_i$ that are arbitrarily close to zero, or it may even result in the limit of the sequence $\{t_i\}$ being a finite number (Zeno behavior). Thus, for practical utility, an event-trigger has to ensure that these scenarios do not occur. The event-triggering condition may be as simple as a threshold crossing of the measurement error, $e$. In utility driven sampled data control, implicitly verified (guaranteed to meet the control goal) task specific event-triggering conditions are designed so that the sampling is parsimonious.

Now, consider the continuous-time system

$$\dot{x} = Ax + Bu_s$$

where $x \in \mathbb{R}^n$ and $u_s \in \mathbb{R}^m$ are the plant state and the control input to the plant, respectively. The matrices $A$ and $B$ are of appropriate dimensions. The subscript $s$ in $u_s$ indicates that the controller is a sampled data controller. In this dissertation, we are interested in emulation based utility driven sampled data control. That is, the controller is a sampled data version of a given continuous-time controller.
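The zero-order hold (1.1), the measurement error (1.2), and the threshold-crossing trigger just mentioned can be sketched numerically. In the following illustration, the signal $\chi(t) = e^{-t}$, the threshold, and the step size are arbitrary demonstration choices, not taken from this dissertation:

```python
import numpy as np

# Illustrative sketch of (1.1)-(1.2): zero-order-hold sampling of the scalar
# signal chi(t) = exp(-t), triggered whenever the measurement error
# e = chi(t_i) - chi(t) crosses a fixed threshold.  All numbers here are
# arbitrary demonstration choices.
chi = lambda t: np.exp(-t)

threshold = 0.05
dt, t_end = 1e-3, 5.0          # step at which the trigger is checked "online"
t = 0.0
chi_s = chi(t)                 # chi_s: the piecewise constant sampled signal
sample_times = [t]

while t < t_end:
    t += dt
    e = chi_s - chi(t)         # measurement error; resets to zero at samples
    if abs(e) >= threshold:    # simple threshold-crossing event-trigger
        chi_s = chi(t)
        sample_times.append(t)

# Inter-sample times grow as the signal flattens: the sampling is aperiodic.
gaps = np.diff(sample_times)
print(len(sample_times), gaps[0], gaps[-1])
```

The samples bunch up where the signal changes quickly and spread out as it flattens, which is precisely the parsimony that a fixed periodic clock cannot provide.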
Our job then is to design a utility driven event-trigger that determines when the piecewise constant sampled data signal $u_s$ is updated. Thus, in the current example, let the control goal be global asymptotic stabilization of the origin of the closed loop system and let the continuous-time controller $u = Kx$ be given to us. In other words, suppose that the gain matrix $K$ renders the matrix $\bar{A} = (A + BK)$ Hurwitz. Then, the closed loop system with the sampled data controller is given by

$$\dot{x} = Ax + Bu_s, \quad u_s = Kx_s \tag{1.3}$$

where $x_s$ is defined as in (1.1). Now, given an $n \times n$ symmetric positive definite matrix $Q$, there exists a symmetric positive definite matrix $P$ that satisfies

$$P\bar{A} + \bar{A}^T P = -Q.$$

Then, consider the Lyapunov function $V = x^T P x$ and its derivative along the flow of the closed loop system

$$\dot{V} = x^T [P\bar{A} + \bar{A}^T P] x + 2x^T P B K (x_s - x) = -x^T Q x + 2x^T P B K e = -(1 - \sigma) x^T Q x - \sigma x^T Q x + 2x^T P B K e \tag{1.4}$$

where $\sigma \in (0, 1)$ is a design parameter and $e = x_s - x$ is the measurement error as in (1.2). This suggests that

$$\dot{V} \leq -(1 - \sigma) x^T Q x < 0, \quad \text{if } 2x^T P B K e \leq \sigma x^T Q x$$

Thus, global asymptotic stability of the origin is guaranteed if, for example, the time instants at which $u_s = Kx_s$ is updated are given by

$$t^x_0 = 0, \qquad t^x_{i+1} = \min\{t > t^x_i : 2x^T P B K e \geq \sigma x^T Q x\} \tag{1.5}$$

Thus, the sampling time instants are given implicitly in terms of the last sampled data and the current state of the system. Of course, the initial sampled data or the first sampling instant is to be specified explicitly. The inter-sample times implicitly defined by (1.5) can be shown to have a positive lower bound [14,34]. Now, note that this state dependent event-trigger is designed specifically for the task of asymptotic stabilization with a desired minimum rate of convergence. As one might expect, there is a direct trade-off between the desired minimum rate of convergence (higher is desirable) and the average sampling rate (lower is desirable). In the event-trigger (1.5), there is a tunable design parameter $\sigma$ that lets us trade the desired minimum rate of convergence against the average sampling rate.
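The closed loop (1.3) under the event-trigger (1.5), and the trade-off mediated by the design parameter, can be probed numerically. In the sketch below, the double-integrator plant, the gain matrix, and the hardcoded Lyapunov solution $P$ are illustrative choices of our own, not an example from this dissertation; the loop counts how many events each value of the design parameter produces over a fixed horizon:

```python
import numpy as np

# Illustrative sweep of the design parameter sigma in the trigger (1.5) for a
# double-integrator plant xdot = A x + B u_s with u_s = K x_s.  The plant,
# gain K, and P are demonstration choices, not taken from the dissertation.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-1.0, -2.0]])            # Abar = A + B K has eigenvalues -1, -1
P = np.array([[1.5, 0.5], [0.5, 0.5]])  # solves P*Abar + Abar^T*P = -Q, Q = I
Q = np.eye(2)

def run(sigma, dt=1e-3, t_end=5.0):
    """Simulate the event-triggered loop; return (event count, final |x|)."""
    x = np.array([1.0, 0.0])
    xs = x.copy()                       # last sampled state (zero-order hold)
    n_events, t = 0, 0.0
    while t < t_end:
        u = (K @ xs).item()
        x = x + dt * (A @ x + B.flatten() * u)   # forward-Euler plant step
        t += dt
        e = xs - x                               # measurement error
        # trigger (1.5): sample when 2 x^T P B K e >= sigma x^T Q x
        if 2 * x @ P @ B.flatten() * (K @ e).item() >= sigma * x @ Q @ x:
            xs = x.copy()
            n_events += 1
    return n_events, np.linalg.norm(x)

counts = {s: run(s)[0] for s in (0.1, 0.5, 0.9)}
print(counts)   # fewer events are expected for larger sigma
```

In this sketch, a larger value of the design parameter relaxes the trigger threshold and yields fewer samples over the same horizon, at the cost of a weaker guaranteed decay rate for $V$, mirroring the trade-off described in the text.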
A smaller value of $\sigma$ means a higher desired minimum rate of convergence as well as a higher average sampling rate. On another note, in practice, there may be time delays in the control system which may adversely affect the system. Although in the above example and in the rest of the dissertation we do not explicitly address the issue of time delays, one may follow the standard procedure in the literature (see [14] for example) to provide a bound on safely tolerable time delays (higher is desirable). It suffices to say that the parameter $\sigma$ affects the bound on safely tolerable time delays - a smaller $\sigma$ allows larger time delays. Therefore, there is again a trade-off between the average sampling rate and tolerable time delays, or alternatively, between the desired minimum rate of convergence and tolerable time delays. In each of the proposed event-triggers in the forthcoming chapters of the dissertation, there is a tunable parameter that analogously provides a trade-off between various characteristics.

In the example, the event-trigger is designed specifically for the task of asymptotic stabilization with a desired minimum rate of convergence. In other words, the event-trigger explicitly encodes the control goal and triggers the sampling of the signals only when it is necessary - when the last sampled data is no longer useful. It is in this sense that (1.3)-(1.5) is called a utility driven sampled data control system. This basic idea can be extended to design utility driven event-triggers for more sophisticated control goals, such as asymptotic stabilization without the requirement of a monotonically decreasing Lyapunov function $V$ (see [34] for example). In this dissertation, the control goals are restricted to the simpler variety presented in the example, but in scenarios where the information available to the event-trigger is imperfect. Chapters 2 and 3 are concerned with designing event-triggers that have access only to partial state information.
In Chapter 4, quantization is considered in addition to sampling in time, and a method is proposed for co-designing the event-trigger and the quantizer. In Chapter 5, the imperfect information is in the form of exogenous reference trajectories. Chapter 6 explores the case where the dynamic parameters of the robotic system are unknown and adaptively estimated. Finally, we recall the guiding principle in our proposed designs - in addition to requiring the sampling to be parsimonious, we also want the event-triggers to be simple enough. Notice that in (1.5), the complexity of the event-trigger increases with the state space dimension. For example, each of the expressions in the inequality requires $n^3$ multiplications to be computed, where $n$ is the state space dimension. Hence, the proposed event-triggers are usually simpler, and more conservative, than the "coarsest" possible event-triggers.

Part I

Event-Triggering Under Partial State Information

Chapter 2

Decentralized Utility Driven Event-Triggering for Control of Nonlinear Systems

2.1 Introduction

Much of the literature on event-triggered control utilizes the full state information in the triggering conditions. However, in two very important classes of problems full state information is not available to the event-triggers. These are systems with decentralized sensing and/or dynamic output feedback control. In the latter case, full state information is not available even when the sensors and the controller are centralized (co-located). In systems with decentralized sensing, each individual sensor has to base its decision to transmit data to a central controller only on locally available information. These two classes of problems have been receiving attention in the community only recently - [35-39] (decentralized sensing) and [29, 30, 40-44] (output feedback control). This chapter and the next present some useful ideas towards addressing these problems.
2.1.1 Contributions

In this chapter we propose a methodology for designing implicitly verified decentralized event-triggers for control of nonlinear systems. The system architecture we consider is one with full state feedback but with the sensors decentralized and not co-located with a central controller. The proposed design methodology provides event-triggers that determine when each sensor transmits data to a central controller. The event-triggers are designed to utilize only locally available information, making the transmissions from the sensors asynchronous. The proposed design guarantees asymptotic stability of the origin of the system with an arbitrary, but fixed a priori, compact region of attraction. It also guarantees a positive lower bound for the inter-transmission times of each sensor individually. In the special case of Linear Time Invariant (LTI) systems, global asymptotic stability is guaranteed and scale invariance of inter-transmission times is preserved. For nonlinear systems, we also propose a variant with event-triggered communication from the central controller to the sensors that significantly increases the average sensor inter-transmission times.

In the literature, decentralized event-triggered control was studied in [38, 39] with the assumption that the subsystems are weakly coupled, which allowed the design of event-triggers depending on only local information. Our proposed design method requires much less restrictive assumptions. In [35-37], each sensor checks a local condition (based on threshold crossing) that triggers asynchronous transmission of data by sensors to a central controller. However, this design guarantees only semi-global practical stability (even for linear systems) if the sensors do not listen to the central controller. Compared to this work, our proposed design guarantees semi-global asymptotic stability even when the sensors do not listen to the central controller.
For linear systems, our proposed method guarantees global asymptotic stability without the sensors having to listen to the central controller. A similarity between our work and [35-37] is that both are partially motivated by the need to eliminate or drastically reduce the listening effort of the sensors in order to save energy.

The rest of the chapter is organized as follows. Section 2.2 describes and formally sets up the problem under consideration. In Section 2.3, the design of asynchronous decentralized event-triggers for nonlinear systems is presented - without, and then with, feedback from the central controller. Section 2.4 presents the special case of Linear Time Invariant (LTI) systems. The proposed design methodology is illustrated through simulations in Section 2.5 and finally Section 2.6 provides some concluding remarks.

2.2 Problem Setup

Consider a nonlinear control system

$\dot{x} = f(x, u), \quad x \in \mathbb{R}^n, \; u \in \mathbb{R}^m$ (2.1)

with the feedback control law

$u = k(x + x_e)$ (2.2)

where $x_e$ is the error in the measurement of $x$. In general, the measurement error can be due to many factors such as sensor noise and quantization. However, we consider measurement error that is purely a result of "sampling" of the sensor data $x$. Before going into the precise definition of this measurement error, we first describe the broader problem. First, let us express (2.1) as a collection of $n$ scalar differential equations

$\dot{x}_i = f_i(x, u), \quad x_i \in \mathbb{R}, \; i \in \{1, 2, \ldots, n\}$ (2.3)

where $x = [x_1, x_2, \ldots, x_n]^T$. In this chapter we are concerned with a decentralized sensing scenario where each component, $x_i$, of the state vector $x$ is sensed at a different location. Although the $i$th sensor senses $x_i$ continuously in time, it transmits this data to a central controller only intermittently. In other words, the controller is a sampled-data controller that uses intermittently transmitted/sampled sensor data.
In particular, we are interested in designing a mechanism for asynchronous decentralized utility driven event-triggering that renders the origin of the closed loop system asymptotically stable.

To precisely describe the sampled-data nature of the problem, we now introduce the following notation. Let $\{t^{x_i}_j\}$ be the increasing sequence of time instants at which $x_i$ is sampled and transmitted to the controller. The resulting piecewise constant sampled signal is denoted by $x_{i,s}$, that is,

$x_{i,s} \triangleq x_i(t^{x_i}_j), \quad \forall t \in [t^{x_i}_j, t^{x_i}_{j+1}), \; \forall j \in \{0, 1, 2, \ldots\}$ (2.4)

As mentioned previously, the sampled data, $x_{i,s}$, may also be viewed as resulting from an error in the measurement of the continuous-time signal, $x_i$. This measurement error is denoted by

$x_{i,e} \triangleq x_{i,s} - x_i = x_i(t^{x_i}_j) - x_i, \quad \forall t \in [t^{x_i}_j, t^{x_i}_{j+1})$

Finally, we define the sampled-data vector and the measurement error vector as

$x_s \triangleq [x_{1,s}, x_{2,s}, \ldots, x_{n,s}]^T, \quad x_e \triangleq [x_{1,e}, x_{2,e}, \ldots, x_{n,e}]^T$

Note that, in general, the components of the vector $x_s$ are asynchronously sampled components of the plant state $x$. The components of $x_e$ are also defined accordingly.

Thus, the problem under consideration may be stated more precisely as follows. For the $n$ sensors, we want to design event-triggers that depend only on local information and implicitly define the non-identical sequences $\{t^{x_i}_j\}$ such that (i) the origin of the closed loop system is rendered asymptotically stable and (ii) the inter-sample (inter-transmission) times $t^{x_i}_{j+1} - t^{x_i}_j$ are lower bounded by a positive constant. Finally, a point regarding the notation in the chapter is that $|\cdot|$ denotes the Euclidean norm of a vector. In the next section, the main assumptions are introduced and the event-triggering conditions for the decentralized sensing architecture are developed.

2.3 Decentralized Asynchronous Event-Triggering

In this section, the main assumptions are introduced and the event-triggers for the decentralized asynchronous sensing problem are developed.
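The asynchronous notation (2.4) can be made concrete with a small numerical sketch. Everything here is invented for the example: two sensors, arbitrary component signals, and non-identical sampling schedules; each sensor holds its own last transmitted sample.

```python
import numpy as np

# Hypothetical two-sensor illustration of (2.4).
t = np.linspace(0.0, 1.0, 1001)
x = np.vstack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])   # x_1, x_2
t_samp = [np.array([0.0, 0.4, 0.8]),          # {t^{x_1}_j}
          np.array([0.0, 0.3, 0.6, 0.9])]     # {t^{x_2}_j}, non-identical

xs = np.empty_like(x)
for i in range(2):
    idx = np.searchsorted(t, t_samp[i])                 # grid indices of t^{x_i}_j
    seg = np.searchsorted(t_samp[i], t, side="right") - 1
    xs[i] = x[i, idx][seg]                              # x_{i,s} holds x_i(t^{x_i}_j)
xe = xs - x                                             # stacked error vector x_e
```

The rows of `xs` are updated at different instants, which is exactly the asynchrony that the triggers designed below must cope with.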
(A2.1) The closed loop system (2.1)-(2.2) is Input-to-State Stable (ISS) with respect to the measurement error $x_e$. That is, there exists a smooth function $V : \mathbb{R}^n \to \mathbb{R}$ as well as class $\mathcal{K}_\infty$ functions[1] $\alpha_1$, $\alpha_2$, $\alpha$ and $\gamma_i$ for each $i \in \{1, \ldots, n\}$, such that

$\alpha_1(|x|) \le V(x) \le \alpha_2(|x|)$

$\frac{\partial V}{\partial x} f(x, k(x + x_e)) \le -\alpha(|x|), \quad \text{if } \gamma_i(|x_{i,e}|) \le |x|, \; \forall i.$

[1] A continuous function $\gamma : [0, \infty) \to [0, \infty)$ is said to belong to the class $\mathcal{K}_\infty$ if it is strictly increasing, $\gamma(0) = 0$ and $\gamma(r) \to \infty$ as $r \to \infty$ [45].

(A2.2) The functions $f$, $k$ and $\gamma_i$, for each $i \in \{1, \ldots, n\}$, are Lipschitz on compact sets.

Note that the standard ISS assumption involves a single condition $\gamma(|x_e|) \le |x|$ instead of the $n$ conditions $\gamma_i(|x_{i,e}|) \le |x|$, $i \in \{1, \ldots, n\}$, in (A2.1). Given a function $\gamma(\cdot)$ in the standard ISS assumption, one may define $\gamma_i(\cdot)$ as

$\gamma_i(|x_{i,e}|) = \gamma\left(\frac{|x_{i,e}|}{\theta_i}\right), \quad i \in \{1, \ldots, n\}$

where $\theta_i \in (0, 1)$ are such that $\theta^2 = \sum_{i=1}^n \theta_i^2 \le 1$. Then, the $n$ conditions in (A2.1) are equivalent to $|x_{i,e}| \le \theta_i \gamma^{-1}(|x|)$. Thus,

$|x_e| = \sqrt{\sum_{i=1}^n |x_{i,e}|^2} \le \sqrt{\sum_{i=1}^n \theta_i^2}\; \gamma^{-1}(|x|) \le \gamma^{-1}(|x|)$

which is the condition in the standard ISS assumption. Similarly, given (A2.1) one may pick $\gamma(\cdot) = \gamma_i(\cdot)$ for any $i$ to get the standard ISS assumption, although in practice it may be possible to choose a less conservative $\gamma(\cdot)$.

In this section, our aim is to constructively show that decentralized asynchronous event-triggering can be used to asymptotically stabilize $x \equiv 0$ (the trivial solution or the origin) with a desired region of attraction while also guaranteeing positive minimum inter-sample times. Further, without loss of generality, the desired region of attraction may be assumed to be a compact sub-level set $S(c)$ of the Lyapunov-like function $V$ in (A2.1). Specifically, $S(c)$ is defined as

$S(c) = \{x \in \mathbb{R}^n : V(x) \le c\}$ (2.5)

2.3.1 Centralized Asynchronous Event-Triggering

The proposed design of decentralized asynchronous event-triggering progresses in stages. In the first stage, centralized event-triggers for asynchronous transmission by the sensors are proposed in the following lemma.
One of the key steps in the result is choosing linear bounds on the functions $\gamma_i(\cdot)$ on appropriately defined sets $E_i$. Given that $x \in S(c)$, we define the sets $E_i$ over which the error bounds in (A2.1) are still satisfied, that is,

$E_i(c) = \{x_{i,e} \in \mathbb{R} : |x_{i,e}| \le \gamma_i^{-1}(|x|), \; x \in S(c)\} = \{x_{i,e} \in \mathbb{R} : |x_{i,e}| \le \max_{x \in S(c)} \{\gamma_i^{-1}(|x|)\}\}$ (2.6)

Then, by (A2.2), for each $c \ge 0$ and each $i \in \{1, \ldots, n\}$, there exist positive constants $M_i(c)$ such that

$\gamma_i(|x_{i,e}|) \le \frac{1}{M_i(c)} |x_{i,e}|, \quad \forall x_{i,e} \in E_i(c)$ (2.7)

Lemma 2.1. Consider the closed loop system (2.1)-(2.2) and assume (A2.1) and (A2.2) hold. Suppose, for each $i \in \{1, \ldots, n\}$, the sampling instants $\{t^{x_i}_j\}$ ensure $|x_{i,e}| \le M_i(c)|x|$ for all time $t \ge 0$, where $M_i(c)$ are given by (2.7) and $c \ge 0$ is an arbitrary constant. Then, the origin is asymptotically stable with $S(c)$, given by (2.5), as the region of attraction.

Proof. Suppose $x(0) \in S(c)$ is an arbitrary point; we have to show that the trajectory $x(\cdot)$ asymptotically converges to zero. Note that, by assumption, the sampling instants are such that for each $i \in \{1, \ldots, n\}$, $|x_{i,e}| \le M_i(c)|x|$ for all time $t \ge 0$. Then, for all time $t \ge 0$, (2.7) implies

$\gamma_i(|x_{i,e}|) \le \frac{1}{M_i(c)} |x_{i,e}| \le |x|, \quad \forall x \in S(c)$

Consider the ISS Lyapunov function $V(\cdot)$ in (A2.1), which is a function of the state $x$. Letting $E(c) \triangleq E_1(c) \times E_2(c) \times \cdots \times E_n(c)$, the time derivative of the function $V$ along the flow of the closed loop system, with a restricted domain, $\dot{V}(x, x_e) : S(c) \times E(c) \to \mathbb{R}$, can be upper-bounded as

$\dot{V}(x, x_e) \le -\alpha(|x|), \quad \forall x \in S(c), \; \forall x_e \in E(c)$

Thus, the flow of the closed loop system is dissipative on the sub-level set, $S(c)$, of the Lyapunov function $V$. Therefore, the origin is asymptotically stable with $S(c)$ as the region of attraction.

The lemma does not mention a specific choice of event-triggers but rather a family of them - all those that ensure the conditions $|x_{i,e}| \le M_i(c)|x|$ are satisfied. Thus, any decentralized event-triggers in this family automatically guarantee asymptotic stability with the desired region of attraction.
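The constants $M_i(c)$ in (2.7) can be approximated numerically once $\gamma_i$ and the error bound defining $E_i(c)$ are known: the largest valid $M_i$ is $\min_s s/\gamma_i(s)$ over the relevant interval. A sketch with a $\gamma_i$ invented for the example and a simple grid search:

```python
import numpy as np

def linear_gain_bound(gamma_i, s_max, n_grid=10000):
    """Largest constant M_i with gamma_i(s) <= s / M_i on (0, s_max],
    i.e. the linear bound (2.7), approximated on a grid."""
    s = np.linspace(s_max / n_grid, s_max, n_grid)
    return float(np.min(s / gamma_i(s)))

# Hypothetical gamma_i(s) = 2s + s^2 with E_i = [0, 1]: min of s/gamma_i(s)
# is attained at s = 1, giving M_i = 1/3.
M_i = linear_gain_bound(lambda s: 2.0 * s + s ** 2, 1.0)
print(M_i)
```

For a locally Lipschitz $\gamma_i$ with $\gamma_i(0) = 0$, the minimum is positive, matching the existence claim in the text.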
To enforce the conditions $|x_{i,e}| \le M_i(c)|x|$ strictly, the event-trigger at each sensor would need to know $|x|$, which is possible only if we have centralized information. One obvious way to decentralize these conditions is to enforce $|x_{i,e}| \le M_i(c)|x_i|$. However, such event-triggers cannot guarantee any positive lower bound for the inter-transmission times, which is not acceptable. So, we take an alternative approach, in which the next step is to derive lower bounds for the inter-transmission times when the conditions in Lemma 2.1 are enforced strictly.

Before analyzing the lower bounds for the inter-transmission times that emerge from the event-triggers in Lemma 2.1, we introduce some notation. Noting that for each $c \ge 0$ the set $S(c)$ contains the origin, Assumption (A2.2) implies that there exist Lipschitz constants $L(c)$ and $D(c)$ such that

$|f(x, k(x + x_e))| \le L(c)|x| + D(c)|x_e|$ (2.8)

for all $x \in S(c)$ and for all $x_e$ satisfying $|x_{i,e}|/|x| \le M_i(c)$, for each $i$. Similarly, there exist constants $L_i(c)$ and $D_i(c)$ for $i \in \{1, 2, \ldots, n\}$ such that

$|f_i(x, k(x + x_e))| \le L_i(c)|x| + D_i(c)|x_e|$ (2.9)

for all $x \in S(c)$ and for all $x_e$ satisfying $|x_{i,e}|/|x| \le M_i(c)$, for each $i$. Now, consider the differential equation

$\dot{\phi} = a_0 + a_1 \phi + a_2 \phi^2$ (2.10)

where $a_0$, $a_1$, $a_2$ are non-negative constants. The solution of this differential equation is denoted, as a function of time $t$ and the initial condition $\phi_0$, as $\phi(t; \phi_0)$. In particular, if $a_0 > 0$ then $\phi(t; 0)$ is a strictly increasing function of time $t$ and if $a_0 = 0$ then $\phi(t; 0) \equiv 0$. Thus, the time it takes $\phi$ to evolve from $0$ to a non-negative constant $w$ is expressed as

$\tau(w; a_0, a_1, a_2) = \min\{\{t \ge 0 : \phi(t; 0) = w\} \cup \{\infty\}\}$ (2.11)

Notice that

$\tau(w; a_0, a_1, a_2) \begin{cases} = 0, & \text{if } w = 0 \\ > 0, & \text{if } w > 0 \\ = \infty, & \text{if } w > 0 \text{ and } a_0 = 0 \end{cases}$ (2.12)

Remark 2.1. Assuming $a_2$ is non-zero, the solutions of the quadratic differential equation (2.10) have a finite escape time. However, by definition (2.11), $\tau(w; a_0, a_1, a_2)$ is strictly less than the finite escape time of the solution $\phi(\cdot; 0)$.
Thus, on the time interval of interest, $[0, \tau(w; a_0, a_1, a_2)]$, the solution $\phi(\cdot; 0)$ is well defined.

Lemma 2.2. Consider the closed loop system (2.1)-(2.2) and assume (A2.2) holds. Let $c > 0$ be any arbitrary known constant. For $i \in \{1, \ldots, n\}$, let $0 \le w_i \le M_i(c)$ be any arbitrary constants and let $W_i = \sqrt{\left(\sum_{j=1}^n w_j^2\right) - w_i^2}$. Suppose the sampling instants are such that $|x_{i,e}|/|x| \le w_i$ for each $i \in \{1, \ldots, n\}$ for all time $t \ge t_0$. Finally, assume that for all $t \ge t_0$, $x$ belongs to the compact set $S(c)$. Then, for all $t \ge t_0$, the time required for $|x_{i,e}|/|x|$ to evolve from $0$ to $w_i$ is lower bounded by

$T_i = \tau(w_i; a_{0,i}, a_{1,i}, a_{2,i})$ (2.13)

where the function $\tau$ is given by (2.11) and

$a_{0,i} = L_i(c) + D_i(c)W_i, \quad a_{1,i} = L(c) + D_i(c) + D(c)W_i, \quad a_{2,i} = D(c)$

Further, if $w_i > 0$ then $T_i > 0$.

Proof. By assumption, for all $t \ge t_0$, $x$ belongs to a known compact set $S(c)$ and $|x_{i,e}|/|x| \le w_i \le M_i(c)$ for each $i$. Thus, (2.8) and (2.9) hold for all $t \ge t_0$. Now, letting $\phi_i \triangleq |x_{i,e}|/|x|$ and by direct calculation we see that for $i \in \{1, \ldots, n\}$

$\frac{d\phi_i}{dt} = \frac{(x_{i,e}^T x_{i,e})^{-1/2} x_{i,e}^T \dot{x}_{i,e}}{|x|} - \frac{x^T \dot{x}\, |x_{i,e}|}{|x|^3} \le \frac{|x_{i,e}||\dot{x}_{i,e}|}{|x_{i,e}||x|} + \frac{|x||\dot{x}||x_{i,e}|}{|x|^3} \le \frac{L_i(c)|x| + D_i(c)|x_e|}{|x|} + \frac{(L(c)|x| + D(c)|x_e|)|x_{i,e}|}{|x|^2}$

where for $x_{i,e} = 0$ the relation holds for all directional derivatives. Next, notice that

$\frac{|x_e|}{|x|} = \sqrt{\sum_{j=1}^n \phi_j^2} \le \sqrt{\left(\sum_{j=1}^n w_j^2\right) - w_i^2 + \phi_i^2} \le W_i + \phi_i$

where the conditions $\phi_j \le w_j$, the definition of $W_i$ and the triangle inequality have been utilized. Thus,

$\frac{d\phi_i}{dt} \le L_i(c) + L(c)\phi_i + (D_i(c) + D(c)\phi_i)(W_i + \phi_i) = a_{0,i} + a_{1,i}\phi_i + a_{2,i}\phi_i^2$

Now, let $t^i_0$ be any time instant such that $\phi_i(t^i_0) = 0$. Next, consider the flow $\dot{\phi}_i = a_{0,i} + a_{1,i}\phi_i + a_{2,i}\phi_i^2$ and its solution denoted, as a function of time $t$ and the initial condition $\phi_{i,0}$, as $\phi_i(t; \phi_{i,0})$. Then, by the Comparison Lemma [45], it follows that

$\phi_i(t) \le \phi_i(t - t^i_0; 0), \quad \forall t \ge t^i_0$

As a consequence, $T_i$, given by (2.13), is a lower bound on the time it takes $\phi_i = |x_{i,e}|/|x|$ to evolve from $0$ to $w_i$. The final claim of the Lemma follows from the property (2.12) of the function $\tau$.
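The bound $T_i = \tau(w_i; a_{0,i}, a_{1,i}, a_{2,i})$ in (2.13) can be evaluated numerically by integrating (2.10) from $\phi(0) = 0$ until $\phi$ reaches $w$. The sketch below uses forward-Euler integration, so it yields an approximation of $\tau$ rather than a certified lower bound:

```python
def tau(w, a0, a1, a2, dt=1e-6, t_max=10.0):
    """Approximate tau(w; a0, a1, a2) of (2.11): the time for the solution of
    phi' = a0 + a1*phi + a2*phi**2, phi(0) = 0, to grow from 0 to w.
    Returns t_max when w is not reached (e.g. a0 = 0 gives phi identically 0)."""
    phi, t = 0.0, 0.0
    while phi < w and t < t_max:
        phi += dt * (a0 + a1 * phi + a2 * phi * phi)
        t += dt
    return t

# Sanity checks against the properties listed in (2.12):
print(tau(0.0, 1.0, 1.0, 1.0))   # w = 0: the time is 0
print(tau(0.1, 1.0, 0.0, 0.0))   # a1 = a2 = 0: exact answer is w/a0 = 0.1
```

For a conservative implementation one would replace the Euler step with a scheme that over-approximates $\phi$, so that the returned time under-approximates $\tau$.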
Now, by combining Lemmas 2.1 and 2.2, we get the following result for centralized asynchronous event-triggering.

Theorem 2.1. Consider the closed loop system (2.1)-(2.2) and assume (A2.1)-(A2.2) hold. Suppose the $i$th sensor transmits its measurement to the controller whenever $|x_{i,e}|/|x| \ge w_i$, where $0 < w_i \le M_i(c)$, with $M_i(c)$ given by (2.7) and $c \ge 0$ any arbitrary constant. Then, the origin is asymptotically stable with $S(c)$ as the region of attraction and the inter-transmission times of each sensor have a positive lower bound given by $T_i$ in (2.13).

Proof. The triggering conditions ensure that $|x_{i,e}|/|x| \le w_i \le M_i(c)$ for all $t \ge 0$. Thus, Lemma 2.1 guarantees $x \in S(c)$ for all $t \ge 0$ and that the origin is asymptotically stable with $S(c)$ included in the region of attraction. Since $S(c)$ is positively invariant, Lemma 2.2 guarantees a positive lower bound for the inter-transmission times.

Remark 2.2. In Lemma 2.2, the procedure for the computation of the lower bounds on the inter-transmission times is quite similar to that in [14]. The significant difference is that in Lemma 2.2, the guaranteed lower bounds are for asynchronous transmissions while [14] provides lower bounds for synchronous transmissions.

2.3.2 Decentralized Asynchronous Event-Triggering

Now, turning to the main subject of this chapter: in the decentralized sensing case, unlike in the centralized sensing case, no single sensor knows the exact value of $|x|$ from the locally sensed data. We may let the event-trigger at the $i$th sensor enforce the more conservative condition $|x_{i,e}|/|x_i| \le w_i$ and still satisfy the assumptions of Lemma 2.1, though such a choice cannot guarantee a positive minimum inter-sample time. At this stage, it might seem that Lemma 2.2 cannot be used to design an implicitly verified event-triggering mechanism in the decentralized sensing case. However, Lemma 2.2 can be interpreted in an alternative way, which would aid in our design goal.
Rather than providing a minimum inter-sampling time for an event-triggering mechanism, Lemma 2.2 can be interpreted as providing a minimum time threshold only after which it is necessary to check a data based event-triggering condition. For example, the event-triggers in Theorem 2.1,

$t^{x_i}_{j+1} = \min\left\{t \ge t^{x_i}_j : \frac{|x_{i,e}|}{|x|} \ge w_i\right\}, \quad i \in \{1, \ldots, n\}$ (2.14)

can be equivalently expressed as

$t^{x_i}_{j+1} = \min\left\{t \ge t^{x_i}_j + T_i : \frac{|x_{i,e}|}{|x|} \ge w_i\right\}$ (2.15)

where $T_i$ are the positive lower bounds for inter-sample times that are guaranteed by Lemma 2.2 in (2.13). In the latter interpretation, a minimum threshold for inter-sample times is explicitly enforced, only after which the state based condition is checked. Now, in order to let the event-triggers depend only on locally sensed data, one can let the sampling times, for $i \in \{1, \ldots, n\}$, be determined as

$t^{x_i}_{j+1} = \min\{t \ge t^{x_i}_j + T_i : |x_{i,e}| \ge w_i |x_i|\}$ (2.16)

where $T_i$ are given by (2.13). This allows us to implement decentralized asynchronous event-triggering. The following theorem is the core result of this chapter and it shows that by appropriately choosing the constants $T_i$ and $w_i$, the event-triggers (2.16) guarantee asymptotic stability of the origin while also explicitly enforcing a positive minimum inter-sample time.

Theorem 2.2. Consider the closed loop system (2.1)-(2.2) and assume (A2.1) and (A2.2) hold. Let $c \ge 0$ be an arbitrary known constant. For each $i \in \{1, 2, \ldots, n\}$, let $w_i$ be a positive constant such that $w_i \le M_i(c)$, where $M_i(c)$ is given by (2.7), and let $T_i$ be given by (2.13). Suppose the sensors asynchronously transmit the measured data at time instants determined by (2.16) and that $t^{x_i}_0 \le 0$ for each $i \in \{1, 2, \ldots, n\}$. Then, the origin is asymptotically stable with $S(c)$ as the region of attraction and the inter-transmission times of each sensor are explicitly enforced to have a positive lower threshold.

Proof.
The statement about the positive lower threshold for inter-transmission times is obvious from (2.16) and only asymptotic stability remains to be proven. This can be done by showing that the event-triggers (2.16) are included in the family of event-triggers considered in Lemma 2.1. From the equivalence of (2.14) and (2.15), it is clearly true that $|x_{i,e}|/|x| \le w_i$ for $t \in [t^{x_i}_j, t^{x_i}_j + T_i]$, for each $i \in \{1, 2, \ldots, n\}$ and each $j$. Next, for $t \in [t^{x_i}_j + T_i, t^{x_i}_{j+1}]$, (2.16) enforces $|x_{i,e}| \le w_i|x_i|$, which implies $|x_{i,e}| \le w_i|x|$ since $|x_i| \le |x|$. Therefore, the event-triggers in (2.16) are included in the family of event-triggers considered in Lemma 2.1. Hence, $x \equiv 0$ (the origin) is asymptotically stable with $S(c)$ as the region of attraction.

Remark 2.3. The idea of an explicit threshold for the inter-transmission times, as in the event-triggers (2.16), has been employed previously in [46]. However, in [46] such a mechanism is used to trigger the controller updates rather than the asynchronous transmissions from the sensors to the controller. Further, in [46] the controller utilizes synchronous measurements from the sensors to compute the control input to the plant, which allows the lower bound for inter-transmission times from [14] to be used. On the other hand, in the proposed decentralized asynchronous event-triggering mechanism of Theorem 2.2, the controller utilizes asynchronously received data to compute the control input to the plant and the inter-transmission time thresholds in (2.16) need to be computed as in Lemma 2.2.

Remark 2.4. Although the assumption that $t^{x_i}_0 \le 0$, for each $i$, in Theorem 2.2 has not been used in the proof explicitly, it serves two key purposes - avoiding having the sensors send their first transmissions of data synchronously, and ensuring that the controller has some latest sensor data with which to compute the controller output at $t = 0$.

Remark 2.5.
In Theorem 2.2, the parameters $w_i$ cannot be chosen in a decentralized manner unless $M_i(c)$, and hence $c$, is fixed a priori. In other words, the desired region of attraction $S(c)$ has to be chosen at the time of the system installation. This can potentially lead to the parameters $w_i$ being chosen conservatively in order to guarantee a larger region of attraction. One possible solution is to let the central controller communicate the parameters $w_i$ to the sensors at $t = 0$. In any case, for $t > 0$, the sensors need not listen for a communication and need only transmit their data to the controller.

2.3.3 Decentralized Asynchronous Event-Triggering with Intermittent Communication from the Central Controller

In Theorem 2.2, apart from the fact that the set $S(c)$ is chosen a priori, conservativeness in transmission frequency may also be introduced. This is because the Lipschitz constants of the nonlinear functions $\gamma_i(\cdot)$ in (2.7) are not updated after their initialization, despite knowing that the system state is progressively restricted to smaller and smaller subsets of $S(c)$. Although we started from the idea that energy may be saved by making sure that sensors do not have to listen, the cost of increased transmissions may not be in its favor. Thus, we now describe a design where the central controller intermittently communicates updated $w_i$ and $T_i$ to the event-triggers.

The first step in this design process is to characterize the region in which the system state actually lies, given $x_s$, the asynchronously transmitted data available at the central controller. Since the central controller knows the parameters used by each event-trigger, it may compute an estimate of $|x|$ based on the centralized asynchronous event-triggering of Theorem 2.1, of which (2.16) is an under-approximation.
Thus, we have that

$|x_{i,s} - x_i| = |x_{i,e}| \le w_i|x|, \quad \forall i \in \{1, \ldots, n\}$

from which we obtain

$\sum_{i=1}^n |x_{i,s} - x_i|^2 \le W^2 \sum_{i=1}^n |x_i|^2, \quad \text{where } W = \sqrt{\sum_{i=1}^n w_i^2}$

$\implies (1 - W^2) \sum_{i=1}^n |x_i|^2 - 2\sum_{i=1}^n x_{i,s} x_i + \sum_{i=1}^n |x_{i,s}|^2 \le 0$

which is the equation of an $n$-sphere. Thus, the system state is in the $n$-sphere given by

$|x - x_c| \le R$ (2.17)

where

$x_c = \frac{1}{1 - W^2} x_s, \quad R = \frac{W}{1 - W^2} |x_s|$ (2.18)

Obviously, for these equations to make sense, $W^2$ has to be strictly less than $1$. However, this is not a restriction at all. Notice that, by definition, a centralized event-trigger that enforces $|x_e| = |x - x_s| \le W|x|$ asymptotically stabilizes the origin of the system with the required convergence rate. Further, if $W \ge 1$ then $|x - 0| \le W|x|$ for all $x \in \mathbb{R}^n$. The implication is that the constant control $u = k(0)$ is sufficient to asymptotically stabilize the origin with the required convergence rate. In that case, there is no need for event-triggered control. Thus, without loss of generality, we assume that $W^2 < 1$.

The next idea is to estimate an upper bound on the value of $V(x)$. From (2.17), we know that $|x| \le |x_c| + R$ and hence that $V(x) \le \alpha_2(|x_c| + R)$. However, this may be conservative and a better estimate may be obtained by maximizing $V(x)$ on the set given by (2.17). In fact, on this set, $V(x)$ is maximized on the boundary of the $n$-sphere. This is because if the maximum does not occur on the boundary and instead occurs only in the interior of the $n$-sphere (2.17), then the maximizing sub-level set, $S_M$, of $V$ lies strictly and completely in the interior of the $n$-sphere, which means $S_M$ is not the smallest sub-level set of $V$ that contains the complete $n$-sphere. Thus, an upper bound on the value of $V(x)$ is provided by

$V(x) \le \max\{V(z) : |z - x_c| = R\} \triangleq \mathcal{V}$ (2.19)

The final idea is to update the sensor event-trigger parameters $w_i$ and $T_i$ at time instants determined by an event-trigger running at the central controller, namely,

$t^V_{j+1} = \min\{t \ge t^V_j + T : \mathcal{V} \le \rho\, \mathcal{V}(t^V_j)\}$ (2.20)

where $T > 0$ and $\rho \in (0, 1)$ are arbitrary constants.
To be precise, $t^V_{j+1}$ are the time instants at which $\mathcal{V}$ is updated. In this chapter, we assume that these are also the time instants at which new values of $w_i$ and $T_i$ are communicated to the sensors, as well as updated by the sensors in (2.16). The initial condition $\mathcal{V}(t^V_0) = \mathcal{V}(0) = c$ may be chosen, where $c$ determines the region of attraction $S(c)$. Thus, the "sampled" version of $\mathcal{V}$ is denoted by

$\mathcal{V}_s \triangleq \mathcal{V}(t^V_j), \quad \forall t \in [t^V_j, t^V_{j+1}), \quad \mathcal{V}_s(t^V_0) = \mathcal{V}_s(0) = c$ (2.21)

where $c > 0$ is an arbitrary constant, $t^V_j$ are given by (2.20) and $\mathcal{V}$ is given by (2.19). Now, the ideas in this subsection are formalized in the following result.

Theorem 2.3. Consider the closed loop system (2.1)-(2.2) and assume (A2.1) and (A2.2) hold. Let $M_i(\cdot)$ and $\mathcal{V}_s$ be given by (2.7) and (2.21), respectively. For each $i \in \{1, 2, \ldots, n\}$, let $w_i$ and $T_i$ be positive piecewise-constant signals given by $w_i = M_i(\mathcal{V}_s)$ and (2.13) (with $c = \mathcal{V}_s$), respectively. Suppose the sensors asynchronously transmit the measured data at time instants determined by (2.16) and that $t^{x_i}_0 \le 0$ for each $i \in \{1, 2, \ldots, n\}$. Then, the origin is asymptotically stable with $S(c)$ as the region of attraction and the inter-execution times of each event-trigger have a positive lower bound.

Proof. Clearly, the Lyapunov function evaluated at the state of the system is at all times less than or equal to the piecewise constant and non-increasing signal $\mathcal{V}_s$. Thus, $x \in S(\mathcal{V}_s)$ at all times, where $S(\cdot)$ is given by (2.5). Hence, $w_i = M_i(\mathcal{V}_s)$ and $T_i$ given by (2.13) guarantee asymptotic stability of the origin of the closed loop system, with $S(\mathcal{V}_s(0))$ as the region of attraction. The inter-transmission times $t^V_{j+1} - t^V_j$ are clearly lower bounded by $T > 0$. Note that, given $\mathcal{V}_s$, the different parameters in Lemma 2.2 are clearly determined, as is $T_i$ in (2.16). Thus, the inter-transmission times of the $i$th sensor in the interval $[t^V_j, t^V_{j+1})$ are lower bounded by $T_i$ calculated with $\mathcal{V}(t^V_j)$, which are guaranteed to be positive by Lemma 2.2.
The different parameters in Lemma 2.2 are upper and lower bounded by positive constants determined by $\mathcal{V}_s(0)$. Thus, the $T_i$ have, for all time, positive lower bounds $\epsilon_i$. Each inter-transmission time of the $i$th sensor is thus lower bounded by $\epsilon_i > 0$.

Remark 2.6. As $S(c_1) \subseteq S(c_2)$ if $c_1 \le c_2$, the $M_i(\cdot)$ in (2.7) can be assumed to be non-increasing functions of $c$. Since the signal $\mathcal{V}_s$ is non-increasing, $w_i = M_i(\mathcal{V}_s)$ are non-decreasing in time. Further, note that the aim of the event-triggers (2.16) is to enforce the conditions $|x_{i,e}| \le w_i|x|$. Thus, whenever $w_i$ and $T_i$ are updated, the new parameters in the event-triggers are consistent with, and an improvement over, the previous parameters. Although the $w_i$ are non-decreasing in time, the same cannot be said about the $T_i$. However, this is not a restriction and the inter-transmission times are still lower bounded.

Remark 2.7. Computing the upper bound on $V$, (2.19), may be computationally intensive depending on the Lyapunov function and the dimension of the system. However, since the Lyapunov function is guaranteed to decrease even with no updates to $w_i$ and $T_i$, there is no restriction on the time needed to compute the upper bound on $V$ and to update the parameters of the event-triggers. On the other hand, it is true that the updates to all the event-triggers have to occur synchronously.

2.4 Linear Time Invariant Systems

Now, let us consider the special case of Linear Time Invariant (LTI) systems with quadratic Lyapunov functions. Thus, the system dynamics may be written as

$\dot{x} = Ax + Bu, \quad x \in \mathbb{R}^n, \; u \in \mathbb{R}^m$ (2.22)

$u = K(x + x_e)$ (2.23)

where $A$, $B$ and $K$ are matrices of appropriate dimensions. As in the general case, let us assume that for each $i \in \{1, 2, \ldots, n\}$, $x_i \in \mathbb{R}$ is sensed by the $i$th sensor. Comparing with (2.22)-(2.23), we see that $x_i$ evolves as

$\dot{x}_i = r_i(A)x + r_i(BK)(x + x_e)$ (2.24)

where the notation $r_i(H)$ denotes the $i$th row of the matrix $H$. Also note that $x_e$ and $x_{i,e}$ are defined just as in Section 2.2.
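The componentwise form (2.24) is simply the $i$th row of the full closed loop dynamics. A quick numerical sanity check on randomly generated (hypothetical) matrices:

```python
import numpy as np

# Check of (2.24): the i-th component dynamics use only the i-th rows
# r_i(A) and r_i(BK) of the system matrices.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
B = rng.normal(size=(3, 2))
K = rng.normal(size=(2, 3))
x, xe = rng.normal(size=3), rng.normal(size=3)

xdot = A @ x + B @ (K @ (x + xe))          # full dynamics (2.22)-(2.23)
for i in range(3):
    # r_i(A) x + r_i(BK) (x + x_e), as in (2.24)
    assert np.isclose(xdot[i], A[i] @ x + (B @ K)[i] @ (x + xe))
```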
Now, suppose the matrix $(A + BK)$ is Hurwitz, which is equivalent to the following statement.

(A2.3) For any given symmetric positive definite matrix $Q$, there exists a symmetric positive definite matrix $P$ such that

$P(A + BK) + (A + BK)^T P = -Q$

Then, the following lemma describes a centralized asynchronous sensing mechanism for linear systems.

Lemma 2.3. Consider the closed loop system (2.22)-(2.23) and assume (A2.3) holds. Let $Q$ be any symmetric positive definite matrix and let $Q_m$ be the smallest eigenvalue of $Q$. For each $i \in \{1, 2, \ldots, n\}$, let $\beta_i \in (0, 1)$ be such that

$\beta = \sum_{i=1}^n \beta_i \le 1$ (2.25)

$w_i = \frac{\sigma \beta_i Q_m}{|c_i(2PBK)|}$ (2.26)

where $\sigma \in (0, 1)$ is a design constant and $c_i(2PBK)$ is the $i$th column of the matrix $(2PBK)$. Suppose the sampling instants are such that for each $i \in \{1, \ldots, n\}$, $|x_{i,e}|/|x| \le w_i$ for all time $t \ge 0$. Then, the origin is globally asymptotically stable.

Proof. Consider the candidate Lyapunov function $V(x) = x^T P x$, where $P$ satisfies (A2.3). The derivative of the function $V$ along the flow of the closed loop system satisfies

$\dot{V} = x^T [P(A + BK) + (A + BK)^T P] x + 2x^T PBK x_e \le -(1 - \sigma) x^T Q x + |x| \left[ |2PBK x_e| - \sigma Q_m |x| \right] \le -(1 - \sigma) x^T Q x + |x| \left[ \sum_{i=1}^n |c_i(2PBK) x_{i,e}| - \sigma Q_m |x| \right] \le -(1 - \sigma) x^T Q x + |x| \left[ \sum_{i=1}^n |c_i(2PBK)||x_{i,e}| - \sigma Q_m |x| \right]$

The sensor update instants have been assumed to be such that $|x_{i,e}|/|x| \le w_i = \frac{\sigma \beta_i Q_m}{|c_i(2PBK)|}$ for each $i$ and for all time $t \ge 0$. Thus,

$\dot{V} \le -(1 - \sigma) x^T Q x$

which implies that the origin is globally asymptotically stable.

Lower bounds for the inter-sample times can be found in a manner analogous to the general nonlinear case in Lemma 2.2.

Lemma 2.4. Consider the closed loop system (2.22)-(2.23). For each $i \in \{1, \ldots, n\}$, let $\beta_i$, $w_i$ be defined as in (2.25)-(2.26) and let $W_i = \sqrt{\left(\sum_{j=1}^n w_j^2\right) - w_i^2}$. Suppose the sampling instants are such that $|x_{i,e}|/|x| \le w_i$ for each $i \in \{1, \ldots, n\}$ for all time $t \ge t_0$.
Then, for all t ≥ t_0, the time required for |x_{i,e}|/|x| to evolve from 0 to w_i is lower bounded by T_i > 0, where

    T_i = φ(w_i; a_0, a_1, a_2)    (2.27)

where the function φ is given by (2.11) and

    a_0 = |r_i(A+BK)| + |r_i(BK)| W_i
    a_1 = |A+BK| + |r_i(BK)| + |BK| W_i
    a_2 = |BK|

Proof. Letting θ_i ≜ |x_{i,e}|/|x| for i ∈ {1, ..., n}, an upper bound for the time derivative of θ_i can be found by direct calculation:

    dθ_i/dt = ( (x_{i,e}^T x_{i,e})^{-1/2} x_{i,e}^T ẋ_{i,e} ) / |x|  -  ( x^T ẋ |x_{i,e}| ) / |x|^3
            ≤ ( |x_{i,e}| |ẋ_{i,e}| ) / ( |x_{i,e}| |x| )  +  ( |x| |ẋ| |x_{i,e}| ) / |x|^3
            ≤ ( |r_i(A+BK)| |x| + |r_i(BK)| |x_e| ) / |x|  +  ( |A+BK| |x| + |BK| |x_e| ) |x_{i,e}| / |x|^2

where for x_{i,e} = 0 the relation holds for all directional derivatives, while the notation r_i(H) denotes the i-th row of the matrix H. Next, notice that

    |x_e| / |x| = sqrt( Σ_{j=1}^n θ_j^2 ) ≤ sqrt( ( Σ_{j=1}^n w_j^2 ) - w_i^2 + θ_i^2 ) ≤ W_i + θ_i

where the conditions θ_j ≤ w_j, the definition of W_i and the triangle inequality have been utilized. Thus,

    dθ_i/dt ≤ |r_i(A+BK)| + |A+BK| θ_i + ( |r_i(BK)| + |BK| θ_i )( W_i + θ_i ) = a_0 + a_1 θ_i + a_2 θ_i^2

The claim of the lemma now directly follows from arguments analogous to those in the proof of Lemma 2.2.

Next, the result for centralized asynchronous event-triggering is presented, whose proof is quite analogous to that of Theorem 2.1.

Theorem 2.4. Consider the closed loop system (2.22)-(2.23) and assume (A2.3) holds. Let Q be any symmetric positive definite matrix and let Q_m be the smallest eigenvalue of Q. For each i ∈ {1, 2, ..., n}, let σ_i and w_i be defined as in (2.25)-(2.26). Also suppose the i-th sensor transmits its measurement to the controller whenever |x_{i,e}| ≥ w_i |x|. Then, the origin is globally asymptotically stable and the inter-transmission times have a positive lower bound.

The following result is analogous to Theorem 2.2 and prescribes the constants T_i and w_i in the event-triggers (2.16) that guarantee global asymptotic stability of the origin while also explicitly enforcing a positive minimum inter-sample time.

Theorem 2.5. Consider the closed loop system (2.22)-(2.23) and assume (A2.3) holds.
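The bound φ(w_i; a_0, a_1, a_2) of (2.27) is the time a solution of ρ̇ = a_0 + a_1 ρ + a_2 ρ² needs to climb from 0 to w_i. Where the closed form of (2.11) is not at hand, it can be approximated by integrating this scalar ODE; a minimal numerical stand-in (our own sketch, not the dissertation's formula):

```python
def phi(w, a0, a1, a2, dt=1e-5):
    """Numerically approximate the time for rho' = a0 + a1*rho + a2*rho**2,
    rho(0) = 0, to reach rho = w (a stand-in for the closed form in (2.11)).
    Forward-Euler; assumes a0 > 0 so that rho actually grows from 0."""
    rho, t = 0.0, 0.0
    while rho < w:
        rho += dt * (a0 + a1 * rho + a2 * rho * rho)
        t += dt
    return t
```

As sanity checks against known closed forms: for a_1 = a_2 = 0 the solution is ρ = a_0 t, so φ(w; a_0, 0, 0) = w/a_0; for a_2 = 0 it is φ(w; a_0, a_1, 0) = ln(1 + a_1 w/a_0)/a_1 (e.g. ln(1.5) ≈ 0.4055 for a_0 = a_1 = 1, w = 0.5).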
Let Q be any symmetric positive definite matrix and let Q_m be the smallest eigenvalue of Q. For each i ∈ {1, 2, ..., n}, let σ_i, w_i and T_i be defined as in (2.25), (2.26) and (2.27), respectively. Suppose the sensors asynchronously transmit the measured data at time instants determined by (2.16). Then, the origin is globally asymptotically stable and the inter-transmission times are explicitly enforced to have a positive lower threshold.

In the context of the results for nonlinear systems in Section 2.3, the reason we are able to achieve global asymptotic stability for LTI systems is that the system dynamics and the triggering functions are globally Lipschitz, thus giving us constants w_i and T_i that hold globally. In fact, for linear systems, something more is ensured - the proposed asynchronous event-triggers guarantee a type of scale invariance. Scaling laws of inter-execution times for centralized synchronous event-triggering have been studied in [28]. In particular, Theorem 4.3 of [28], in the special case of linear systems, guarantees scale invariance of the inter-execution times determined by a centralized event-trigger of the form |x_e| = w|x|. The centralized and decentralized asynchronous event-triggers developed in this chapter are under-approximations of this kind of central event-triggering. In the following, we show that scale invariance is preserved by the asynchronous event-triggers. As an aside, we would like to point out that the decentralized event-triggers proposed in [35-37] are not scale invariant.

In order to precisely state the notion of scale invariance and the result, the following notation is useful. Let x(t) and z(t) be two solutions to the system (2.22)-(2.23) along with the event-triggers (2.16).

Theorem 2.6. Consider the closed loop system (2.22)-(2.23) and assume (A2.3) holds. Let Q be any symmetric positive definite matrix and let Q_m be the smallest eigenvalue of Q.
For each i ∈ {1, 2, ..., n}, let σ_i, w_i and T_i be defined as in (2.25), (2.26) and (2.27), respectively. Suppose the sensors asynchronously transmit the measured data at time instants determined by (2.16). Assuming b is any scalar constant, let [z(0)^T, z_s(0)^T]^T = b[x(0)^T, x_s(0)^T]^T ∈ R^n × R^n be two initial conditions for the system. Further, let t^{z_i}_0 = t^{x_i}_0 < 0 for each i ∈ {1, ..., n}. Then, [z(t)^T, z_s(t)^T]^T = b[x(t)^T, x_s(t)^T]^T for all t ≥ 0 and t^{x_i}_j = t^{z_i}_j for each i and j.

Proof. First of all, let us introduce two strictly increasing sequences of times, {t^{z_s}_j} and {t^{x_s}_j}, at which one or more components of z_s and x_s are updated, respectively. Further, without loss of generality, assume t^{z_s}_0 = t^{x_s}_0. The proof proceeds by mathematical induction. Suppose that t^{z_s}_j = t^{x_s}_j = t_j for each j ∈ {0, ..., k} and that [z(t)^T, z_s(t)^T]^T = b[x(t)^T, x_s(t)^T]^T for all t ∈ [0, t_k). Then, letting t̄_{k+1} = min{t^{z_s}_{k+1}, t^{x_s}_{k+1}}, the solution z in the time interval [t_k, t̄_{k+1}) satisfies

    z(t) = e^{A(t - t_k)} z(t_k) + ∫_{t_k}^{t} e^{A(t - τ)} BK z_s(t_k) dτ
         = b e^{A(t - t_k)} x(t_k) + b ∫_{t_k}^{t} e^{A(t - τ)} BK x_s(t_k) dτ

Hence,

    z(t) = b x(t),  ∀ t ∈ [t_k, t̄_{k+1})    (2.28)

Further, in the time interval [t_k, t̄_{k+1}),

    z_{i,e}(t) = z_i(t_k) - z_i(t) = b( x_i(t_k) - x_i(t) ) = b x_{i,e}(t)    (2.29)

Similarly, for all t ∈ [t_k, t̄_{k+1}),

    |z_{i,e}(t)| / |z(t)| = |x_{i,e}(t)| / |x(t)|    (2.30)

Without loss of generality, assume z_{i,s} is updated at t̄_{k+1}. Then, clearly, at least T_i amount of time has elapsed since z_{i,s} was last updated. Next, by the assumption that t^{z_i}_0 = t^{x_i}_0 < 0 and the induction hypothesis, it is clear that at least T_i amount of time has also elapsed since x_{i,s} was last updated. Further, it also means that |z_{i,s}(t_k) - z_i(t̄_{k+1})| ≥ w_i |z(t̄_{k+1})|. Then, (2.28)-(2.29) imply that |x_{i,s}(t_k) - x_i(t̄_{k+1})| ≥ w_i |x(t̄_{k+1})|, meaning t̄_{k+1} = t^{z_s}_{k+1} = t^{x_s}_{k+1} = t_{k+1}. Arguments analogous to the preceding also hold if multiple z_{i,s} are updated at t̄_{k+1} instead of one, or with x_{i,s} in place of z_{i,s}.
Since the induction statement is true for k = 0, we conclude that the statement of the theorem is true.

Remark 2.8. From the proof of Theorem 2.6, (2.30) specifically, it is clear that the centralized asynchronous event-triggers of Theorem 2.4 also guarantee scale invariance.

Remark 2.9. Scale invariance, as described in Theorem 2.6, means that the average inter-transmission time over an arbitrary length of time is independent of the scale (or the magnitude) of the initial condition of the system. Similarly, for any given scalar λ ∈ (0, 1), the time and the number of transmissions it takes for |x(t)| to reduce to λ|x(0)| is independent of |x(0)|. So, the advantage is that the "average" network usage remains the same over large portions of the state space.

2.5 Simulation Results

In this section, the proposed decentralized asynchronous event-triggered sensing mechanism is illustrated with two examples. The first is a linear system and the second a nonlinear system.

2.5.1 Linear System Example

We first present the mechanism for a linearized model of a batch reactor [47]. The plant and the controller are given by (2.22)-(2.23) with

    A = [  1.38   -0.20    6.71   -5.67
          -0.58   -4.29    0       0.67
           1.06    4.27   -6.65    5.89
           0.04    4.27    1.34   -2.10 ]

    B = [ 0      0
          5.67   0
          1.13  -3.14
          1.13   0 ]

    K = [ 0.1006  -0.2469  -0.0952  -0.2447
          1.4099  -0.1966   0.0139   0.0823 ]

which places the eigenvalues of the matrix (A + BK) at around {-2.98 + 1.19i, -2.98 - 1.19i, -3.89, -3.62}. The matrix Q was chosen as the identity matrix. The system matrices and Q have been chosen to be the same as in [35]. Lastly, the controller parameters were chosen as [σ_1, σ_2, σ_3, σ_4] = [0.6, 0.17, 0.08, 0.15] and θ = 0.95. For the simulations presented here, the initial condition of the plant was selected as x(0) = [4, 7, 4, 3]^T and the initial sampled data that the controller used was x_s(0) = [4.1, 7.2, 4.5, 2]^T.
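The scale invariance of Theorem 2.6 and Remark 2.9 can be checked numerically. The sketch below (our own toy second-order system and gains, not the batch reactor; dwell times T_i omitted for brevity) simulates the triggers |x_{i,e}| ≥ w_i|x| of Theorem 2.4 with forward-Euler integration and compares the event sequences from an initial condition and its scaled copy. Scaling by b = 2 is exact in floating point, so the two transmission sequences coincide step for step:

```python
import numpy as np

def simulate_triggers(A, B, K, w, x0, T=2.0, dt=1e-3):
    """Forward-Euler simulation of x' = Ax + B K x_s, where x_s holds the
    last-transmitted sensor values and sensor i transmits whenever
    |x_{i,e}| >= w_i |x| (the triggers of Theorem 2.4; a simplified
    sketch without the dwell times T_i of (2.16))."""
    x = np.array(x0, dtype=float)
    xs = x.copy()                                  # last sampled values
    events = []
    for step in range(int(T / dt)):
        for i in range(len(x)):
            if abs(xs[i] - x[i]) >= w[i] * np.linalg.norm(x):
                xs[i] = x[i]                       # sensor i transmits
                events.append((i, step))
        x = x + dt * (A @ x + B @ (K @ xs))
    return events

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-1.0, -1.0]])                       # A + BK is Hurwitz
w = [0.05, 0.05]
ev1 = simulate_triggers(A, B, K, w, [1.0, 1.0])
ev2 = simulate_triggers(A, B, K, w, [2.0, 2.0])    # initial state scaled by b = 2
```

Here ev1 == ev2: the sensors transmit at exactly the same instants from both initial conditions, as the theorem predicts.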
The zeroth sampling instant was chosen as t^{x_i}_0 = -T_i for sensor i, so as to allow sampling at t = 0 if the local triggering condition was satisfied. Finally, the simulation time was chosen as 10 s.

Figures 2.1(a) and 2.1(b) show the evolution of the Lyapunov function and its derivative along the flow of the closed loop system, respectively. Figures 2.1(c) and 2.1(d) show the inter-transmission times and the cumulative frequency distribution of the inter-transmission times for each of the sensors.

Figure 2.1: Batch reactor example: evolution of the (a) Lyapunov function, (b) time derivative of the Lyapunov function, along the flow of the closed loop system. (c) Sensor inter-transmission times. (d) Cumulative frequency distribution of the sensor inter-transmission times.

The cumulative frequency distribution of the inter-transmission times is a measure of the performance of the event-triggers. A distribution that rises sharply to 100% indicates that the event-trigger is not much better than a time-trigger. Thus, the slower the rise of the cumulative distribution curves, the greater is the justification for using the event-trigger instead of a time-trigger. The lower thresholds for the inter-transmission times T_i for this example can be computed as in Lemma 2.4 and were obtained as [T_1, T_2, T_3, T_4] = [11, 15.4, 12.6, 19.9] ms, which are also the minimum inter-transmission times in the simulations presented here.
These numbers are a few orders of magnitude higher than the guaranteed minimum inter-transmission times, and an order of magnitude higher than the observed minimum inter-transmission times, in [35, 36]. The average inter-transmission times obtained in the presented simulations were [T̄_1, T̄_2, T̄_3, T̄_4] = [24.9, 27.7, 34.5, 34.2] ms, which are about an order of magnitude lower than those reported in [35, 36]. A possible explanation for this phenomenon is that in [35, 36] the average inter-transmission times depend quite critically on the evolution of the threshold. Although the controller gain matrix K and the matrix Q have been chosen to be the same, inspection of the plots in [35, 36] suggests that the rate of decay of the Lyapunov function V there is roughly about half of that in our simulations. However, we would like to point out that our average inter-transmission times are of the same order as in [37], by the same authors. In any case, for LTI systems, our proposed method does not require communication from the controller to the sensors to achieve global asymptotic stability. Lastly, as a measure of the usefulness of the event-triggering mechanism compared to a purely time-triggered mechanism, T_i/T̄_i was computed for each i; the values obtained were [T_1/T̄_1, T_2/T̄_2, T_3/T̄_3, T_4/T̄_4] = [0.44, 0.55, 0.36, 0.58]. The lower these numbers are, the better.

2.5.2 Nonlinear System Example

The general result for nonlinear systems is illustrated through simulations of the following second order nonlinear system:

    ẋ = f(x, x_e) = [ f_1(x, x_e), f_2(x, x_e) ]^T = Ax + [ 0, x_1^3 ]^T + Bu    (2.31)

where

    A = [ 0   1
          0  -1 ],    B = [ 0
                            1 ]

x = [x_1, x_2]^T is a vector in R^2 and the sampled data controller (in terms of the measurement error) is given as

    u = k(x + x_e) = K(x + x_e) - (x_1 + x_{1,e})^3    (2.32)

where K = [k_1, k_2] is a 1 × 2 row vector such that Ā = (A + BK) is Hurwitz.
Then, the closed loop system with event-triggered control can be written as

    ẋ = Āx + BKx_e + [ 0, x_1^3 - (x_1 + x_{1,e})^3 ]^T = Āx + [ 0, h_1 + h_2 ]^T    (2.33)

where

    h_1 = -( x_{1,e}^3 + 3x_1 x_{1,e}^2 + (3x_1^2 - k_1) x_{1,e} )    (2.34)
    h_2 = k_2 x_{2,e}    (2.35)

Now, consider the quadratic Lyapunov function V = x^T P x, where P is a symmetric positive definite matrix that satisfies the Lyapunov equation P Ā + Ā^T P = -Q, with Q a symmetric positive definite matrix. Let p_m and p_M be the smallest and largest eigenvalues of the matrix P. Since P is symmetric positive definite, p_m and p_M are each positive real numbers. Further,

    α_1(|x|) ≜ p_m |x|^2 ≤ V(x) ≤ p_M |x|^2 ≜ α_2(|x|),  ∀ x ∈ R^2

The time derivative of V along the flow of the closed loop system (2.33) can be shown to satisfy

    V̇ = -x^T Q x + 2x^T P B (h_1 + h_2)
      ≤ -(1-θ) Q_m |x|^2 + |x| [ |2PB(h_1 + h_2)| - θ Q_m |x| ]

where Q_m is the smallest eigenvalue of the symmetric positive definite matrix Q and θ is a parameter satisfying 0 < θ < 1. Let the desired region of attraction be S(c), for some non-negative c (see (2.5) for the definition of S(c)). Let β_1 be the maximum value of |x_1| on the sub-level set S(c). Then, we let

    h^c_1 = |x_{1,e}|^3 + 3β_1 |x_{1,e}|^2 + max_{|x_1| ≤ β_1} (3x_1^2 - k_1) |x_{1,e}|

    γ_1(|x_{1,e}|) ≜ |2PB| h^c_1 / (σ_1 θ Q_m),    γ_2(|x_{2,e}|) ≜ |2PB k_2| |x_{2,e}| / (σ_2 θ Q_m)

where σ_1 and σ_2 are positive constants such that σ_1 + σ_2 = 1. It is clear that Assumption (A2.1) is satisfied and we have

    V̇ ≤ -(1-θ) Q_m |x|^2,  if γ_i(|x_{i,e}|) ≤ |x|, i ∈ {1, 2}

Now, β ≜ α_1^{-1}(c) = sqrt(c/p_m) is the maximum value of |x| on the set S(c). Hence, M_1(c) in (2.7) has to be defined for the set on which |x_{1,e}| ≤ R_1 ≜ γ_1^{-1}(β). Thus, we have

    1/M_1(c) = ( |2PB| / (σ_1 θ Q_m) ) ( R_1^2 + 3β_1 R_1 + max_{|x_1| ≤ β_1} (3x_1^2 - k_1) )

while 1/M_2(c) = |2PB k_2| / (σ_2 θ Q_m). Now, only the T_i for each i need to be determined. To this end, the closed loop system dynamics (2.33) are bounded as in (2.8) and (2.9):

    |f_1(x, x_e)| ≤ L_1 |x| + D_1 |x_e|
    |f_2(x, x_e)| ≤ L_2 |x| + D_2 |x_e|,  ∀ x s.t. |x| ≤ β

Comparing with (2.33), the following can be arrived at.
    L_1 = |r_1(Ā)|,    D_1 = 0,    L_2 = |r_2(Ā)|

    D_2 = sqrt( ( R_1^2 + 3β_1 R_1 + max_{|x_1| ≤ β_1} (3x_1^2 - k_1) )^2 + k_2^2 )

In the example simulation results presented here, the following gains and parameters were used:

    K = [-5, -3],    Q = [ 1  0
                           0  1 ],    σ_1 = 0.9,    σ_2 = 0.1
    θ = 0.9,    c = 10,    β_1 = β
    x(0) = [2.8, -2.6]^T,    x_s(0) = [2.9, -2.7]^T    (2.36)

Notice that M_2(c) is a constant, independent of c. That is why σ_2 has been chosen much smaller than σ_1. The parameter β_1 has been chosen to be equal to β. To be consistent with asynchronous transmissions, the initial value x_s(0) has been chosen to be different from x(0).

For the chosen parameters and initial conditions, the initial value of the Lyapunov function is V(0) = 8.574. Thus, the initial state of the system is well within the region of attraction, given by S(c) = S(10). The event-trigger parameters were obtained as [w_1, w_2] = [M_1(c), M_2(c)] = [0.0102, 0.0832] and [T_1, T_2] = [9, 5] ms, which were also the minimum inter-transmission times. The average inter-transmission times of the sensors for the duration of the simulated time were obtained as [T̄_1, T̄_2] = [9.6, 25.8] ms. Thus, for sensor 1, the average inter-transmission interval is only marginally better than the minimum. The numbers of transmissions by sensors 1 and 2 were 1041 and 388, respectively.

Figures 2.2(a) and 2.2(b) show the evolution of the Lyapunov function and its derivative along the flow of the closed loop system, respectively. Figures 2.2(c) and 2.2(d) show the inter-transmission times and the cumulative frequency distribution of the inter-transmission times for each of the sensors. The sharp rise of the cumulative distribution curve for Sensor 1 clearly indicates that its event-triggered transmission is nearly equivalent to time-triggered transmission. On the other hand, the slow rise of the cumulative distribution curve of Sensor 2 demonstrates the usefulness of event-triggering in its case.
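The reported value V(0) = 8.574 can be reproduced from the data in (2.36). The sketch below (with the signs of K and x(0) as reconstructed above, and scipy's Lyapunov solver as our own choice of tool) solves P Ā + Ā^T P = -Q for Q = I and evaluates V(x(0)) = x(0)^T P x(0):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [0.0, -1.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-5.0, -3.0]])
Abar = A + B @ K                      # [[0, 1], [-5, -4]], Hurwitz
# P solves P Abar + Abar^T P = -I; by hand, P = [[1.15, 0.10], [0.10, 0.15]]
P = solve_continuous_lyapunov(Abar.T, -np.eye(2))
x0 = np.array([2.8, -2.6])
V0 = x0 @ P @ x0                      # = 1.15*2.8^2 - 2*0.1*2.8*2.6 + 0.15*2.6^2
```

V0 evaluates to 8.574, matching the value quoted in the text, which also confirms the reconstructed signs.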
Simulations were also performed for the case where the central controller intermittently sends updates to the parameters of the sensor event-triggers, as in Theorem 2.3. For the simulation results presented here, the controller gains, parameters and initial conditions were chosen the same as in (2.36). Additionally, the two parameters in (2.20) were both chosen as 0.5, and the initial condition V_s(0) = c = 10 was chosen.

Figure 2.2: Nonlinear system example: evolution of the (a) Lyapunov function, (b) time derivative of the Lyapunov function, along the flow of the closed loop system. (c) Sensor inter-transmission times. (d) Cumulative frequency distribution of the sensor inter-transmission times.

For the 2-dimensional system in this example, V̄ in (2.19) is the maximum value of V along a circle. V̄ was found in MATLAB by maximization of V on the circle, which was parametrized by a single angle variable varying on the closed interval [0, 2π]. In this case, the number of transmissions by Sensor 1 was much lower, at 106, while that by Sensor 2 was 324. Notice that w_2 = M_2(c) is a constant, independent of the value of c. Thus, we see that the reduction in the number of transmissions by Sensor 2 is only marginal, while that for Sensor 1 is huge. The average inter-transmission times of the sensors for the duration of the simulated time were obtained as [T̄_1, T̄_2] = [94.3, 30.9] ms. The minimum inter-transmission times were observed to be 9.4 ms and 9 ms for Sensors 1 and 2, respectively. The parameters of the sensor event-triggers were updated 15 times.
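For a quadratic V, the maximization of V over the circle |x| = r also has the closed form r² λ_max(P), which serves as a cross-check on the angle-parametrized numerical maximization described above. A small sketch (the matrix P here is hypothetical, not the one from the example):

```python
import numpy as np

def vbar_on_circle(P, r, n_grid=10001):
    """Maximize V(x) = x^T P x over the circle |x| = r by sampling the
    angle on [0, 2*pi], as in the simulation; for quadratic V this
    equals r**2 * lambda_max(P)."""
    th = np.linspace(0.0, 2.0 * np.pi, n_grid)
    X = r * np.vstack((np.cos(th), np.sin(th)))   # points on the circle
    return np.max(np.sum(X * (P @ X), axis=0))    # x^T P x for each column

P = np.array([[2.0, 0.5], [0.5, 1.0]])            # hypothetical symmetric PD matrix
vbar = vbar_on_circle(P, 1.5)
closed_form = 1.5**2 * np.max(np.linalg.eigvalsh(P))
```

The grid maximum approaches the closed form from below as the angular grid is refined, since V varies smoothly along the circle.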
The evolution of the Lyapunov function and its derivative along the flow of the closed loop system was very similar to that in Figures 2.2(a) and 2.2(b), respectively; hence, they are not presented here again. Figures 2.3(a) and 2.3(b) show the inter-transmission times and the cumulative frequency distribution of the inter-transmission times for each of the sensors. These two plots clearly show the usefulness of the event-triggered transmissions. Figure 2.3(c) shows the evolution of the w_i parameters of the event-triggers at each of the sensors. As mentioned earlier, w_2 is independent of c and hence is a constant. The evolution of w_1 shows that it is a non-decreasing function of time. Finally, Figure 2.3(d) shows the evolution of the T_i parameters of the event-triggers at the sensors (for clarity, T_2 has been scaled by 20 times). Although T_1 evolves in a non-decreasing manner, the same is not the case with T_2. However, as mentioned in Remark 2.6, this does not pose any problem and the inter-transmission times of the sensors are still lower bounded by a positive constant.

Figure 2.3: Nonlinear system example with event-triggered communication from the controller to the sensor event-triggers: (a) Sensor inter-transmission times. (b) Cumulative frequency distribution of the sensor inter-transmission times. Evolution of the (c) w_i and (d) T_i parameters of the sensor event-triggers.

2.6 Conclusions

In this chapter, we have developed a method for designing decentralized event-triggers for control of nonlinear systems.
The architecture of the systems considered in this chapter included full state feedback, a central controller and decentralized sensors not co-located with the central controller. The aim was to develop event-triggers for determining the time instants of transmission from the sensors to the central controller. The proposed design ensures that the event-trigger at each sensor depends only on locally available information, thus allowing for asynchronous transmissions from the sensors to the central controller. Further, the design aimed at completely eliminating (or drastically reducing) the need for the sensors to listen to other sensors and/or the controller.

The proposed design was shown to guarantee a positive lower bound for the inter-transmission times of each sensor (and of the controller in one of the special cases). The origin of the closed loop system is also guaranteed to be asymptotically stable with an arbitrary, but a priori fixed, region of attraction. In the special case of linear systems, the region of attraction was shown to be global, with absolutely no need for the sensors to listen. Finally, the proposed design method was illustrated through simulations of a linear and a nonlinear example.

In the system architecture considered in this chapter, although the control input to the plant is updated intermittently, it is not exactly event-triggered. In fact, in all the results the inter-transmission times of each sensor individually have been shown to have a positive lower bound, while the time interval between receptions by the central controller from two different sensors can be arbitrarily close to zero. Since the control input to the plant is updated each time the controller receives some information, no positive lower bound can be guaranteed for the inter-update times of the controller.
However, it is not difficult to incorporate event-triggering (with guaranteed positive minimum inter-update times) or explicit thresholds on the inter-update times of the control by choosing smaller values in the event-triggers for the sensors. Future work will include results with event-triggered actuation in addition to event-triggered communication on the sensing side.

Next, although the transmissions of the sensors have been designed to be asynchronous, the communications from the central controller to the sensors in Section 2.3.3 have been assumed to be synchronous. In the future, we aim to allow these communications also to be asynchronous. Although time delays have not been considered explicitly, they may be handled as in most of the event-triggered control literature (see [14] for example). Finally, it is worthwhile to investigate more sophisticated triggers for updating the parameters w_i and T_i (Section 2.3.3), as is a thorough study and quantification of the sensor listening effort.

Chapter 3
Utility Driven Sampled Data Control of LTI Systems over Sensor-Controller-Actuator Networks

3.1 Introduction

As mentioned in the beginning of the previous chapter, much of the event-triggered control literature assumes the availability of full state information. However, in many practical applications only a part of the state information can be directly measured and a dynamic (for example, observer based) output feedback controller must be utilized. Thus, it is important to develop utility driven event-triggered implementations of dynamic output feedback controllers, and this chapter is a contribution towards this aim. The work in this chapter is closely related to that of the previous chapter. As far as the individual decentralized event-triggers of the previous chapter are concerned, each has access only to a partial output of the system.
Thus, the proposed centralized event-triggered implementation of a dynamic output feedback controller naturally extends to the decentralized event-triggering scenario. In fact, in this chapter, we go one step further and address the problem of utility driven sampled data control over Sensor-Controller-Actuator Networks (SCAN). Motivated by this, we group the nodes in a SCAN into three functional layers - the sensor layer, the controller/observer layer and the actuator layer - with no two nodes being co-located. In practice, though, several nodes from the same or different layers may be co-located; any such scenario can simply be treated as a special case of the general framework of this chapter. The sensor nodes intermittently broadcast their data to the nodes in the observer (dynamic controller) layer. The nodes in the observer layer compute the state of the observer in a decentralized manner, with each node in the observer layer intermittently broadcasting its data to other nodes in the same layer. Each of the actuator nodes also intermittently receives data from a corresponding unique observer node. Thus, communication between the layers is unidirectional.

Sensor-Controller-Actuator Networks consist of physically distributed nodes, each of which performs one or more of the sensing, control computation and actuation tasks in order to control a plant. If the aggregate feedback provided by the sensor nodes does not constitute full state feedback, then the controller nodes may also have to distributively estimate the state of the plant. Interest in such networked control systems has been rising steadily, especially in the context of large scale systems such as power grids, building HVAC systems and even vehicles. Some of the challenges in SCAN are asynchronous transmission of data; asynchronous and decentralized computation; decision making based only on local information; and time delays.
Many of these features can be thought of as manifestations of asynchronously sampled data. Further, in SCAN there are constraints on data rates, resources and energy. Given these factors, utility driven event-triggering techniques have great potential for analyzing and designing SCAN.

3.1.1 Contributions

The fundamental contribution of this chapter is a methodology for designing implicitly verified utility driven event-triggered dynamic output feedback controllers for Linear Time Invariant (LTI) systems. The proposed methodology provides a means to achieve global asymptotic stability of the origin of the closed loop system. The methodology naturally extends to a decentralized sensing scenario (as in Chapter 2) and to the completely decentralized Sensor-Controller-Actuator Network (SCAN) control system. Each of these architectures is important in its own right, and thus we address architectures where the sensors and the dynamic controller are co-located (centralized event-triggering), one where they are not co-located (decentralized sensing and actuation) and, finally, SCAN. In the latter architectures, all the transmissions are asynchronous. The proposed event-triggering conditions depend only on local information and include explicit positive lower thresholds for the inter-sampling times that are designed to ensure global asymptotic stability of the closed loop system.

In the literature, among the few works that consider the problem of event-triggered dynamic output feedback control, [40, 41] proposed an event-triggered implementation that can guarantee uniform ultimate boundedness of the plant state and provided an estimate of the minimum inter-communication time that holds semi-globally (dependent on the initial state of the dynamic controller and the unknown state of the plant). In comparison, the proposed controller guarantees asymptotic stability and an estimate of inter-communication times that holds globally.
In [42], a model based output feedback controller was proposed, where the communication from the observer subsystem to the system model subsystem is triggered by a condition that compares the observer state with that of a local copy of the system model subsystem. Again, the controller guarantees only uniform ultimate boundedness of the closed loop state. In [43, 48], an output feedback control implementation for discrete-time systems is considered as an optimal control problem. The proposed architecture includes a Kalman filter in the sensor subsystem and identical observers in the sensor as well as actuator subsystems. The results provide an upper bound on the optimal cost attained by the event-triggered system. In comparison to [42, 43, 48], we do not require identical observers/models to be run at different locations.

Recently, [49] proposed a method for designing continuous time decentralized observers with discrete communication, wherein the sensor and the observer for each subsystem are co-located. In addition, an observability condition for each of the individual subsystems was assumed. Compared to [49], we consider non-co-located sensor and observer nodes, require an observability condition only for the overall system and, further, incorporate decentralized dynamic control. Parts of the work in this chapter have appeared in [29, 30].

The rest of the chapter is organized as follows. Section 3.2 describes the main problem under consideration and establishes the mathematical notation used in the chapter. In Section 3.3, the design of decentralized event-triggering is presented in a general setting, which is then applied to specific dynamic output feedback control architectures in Section 3.4. The proposed design methodology is illustrated through simulations in Section 3.5 and, finally, Section 3.6 provides some concluding remarks. In this chapter, the notation |.| is used to represent the Euclidean norm of a vector and also the induced Euclidean norm of a matrix.
3.2 Problem Setup

Consider the closed loop system consisting of a Multi Input Multi Output (MIMO) Linear Time Invariant (LTI) plant and an observer based dynamic controller

    ẋ = Ax + Bu,    y = Cx    (3.1)
    x̂̇ = (A + FC)x̂ + BKx̂ - Fy,    u = Kx̂    (3.2)

where x ∈ R^n, x̂ ∈ R^n, y ∈ R^p and u ∈ R^m are the plant state, the observer state, the output of the plant and the control input to the plant, respectively. The matrices A, B, C, F and K are of appropriate dimensions. Denoting the observer estimation error and the state of the closed loop system, respectively, as

    x̃ ≜ x̂ - x,    ξ ≜ [x^T, x̃^T]^T

where the notation [x^T, x̃^T]^T denotes the vector formed by concatenating the column vectors x and x̃, the closed loop system may be written as

    ξ̇ = [ A + BK , BK ; 0_{n,n} , A + FC ] ξ ≜ 𝒜 ξ    (3.3)

where 0_{n,n} represents an n × n matrix of zeros. The dynamic controller (3.2) renders the origin of the closed loop system (3.1)-(3.2) globally asymptotically stable if and only if the matrix 𝒜 is Hurwitz. Typically, (A, B) and (A, C) are assumed to be controllable and observable, respectively. This is sufficient for the existence of gain matrices F and K such that (A + FC), (A + BK), and hence 𝒜, are Hurwitz. For our purpose here, it is sufficient to assume that 𝒜 is Hurwitz. In this chapter, we are interested in an event-triggered implementation of the dynamic controller (3.2).

Before proceeding, we recall some of the notation introduced in Section 1.3. Let χ be any continuous-time signal (scalar or vector) and let {t^χ_i} be the increasing sequence of time instants at which χ is sampled. Then we define the resulting piecewise constant sampled signal, χ_s, and the "measurement error", χ_e, as

    χ_s ≜ χ(t^χ_i),  ∀ t ∈ [t^χ_i, t^χ_{i+1})    (3.4)
    χ_e ≜ χ_s - χ = χ(t^χ_i) - χ,  ∀ t ∈ [t^χ_i, t^χ_{i+1})    (3.5)

In the sequel, it is sometimes convenient (and intuitive) to group asynchronously transmitted signals into a single vector.
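The block upper-triangular structure in (3.3) means the spectrum of 𝒜 is the union of the spectra of A + BK and A + FC (the separation principle), which is why stabilizing gains K and F can be designed independently. A sketch with a hypothetical double-integrator plant (all gains below are our own illustrative choices):

```python
import numpy as np

def closed_loop_matrix(A, B, C, K, F):
    """The matrix of (3.3) for the state [x; x_tilde], x_tilde = xhat - x."""
    n = A.shape[0]
    top = np.hstack((A + B @ K, B @ K))
    bot = np.hstack((np.zeros((n, n)), A + F @ C))
    return np.vstack((top, bot))

# hypothetical example: double integrator with separately placed poles
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[-2.0, -3.0]])        # eig(A + BK) = {-1, -2}
F = np.array([[-7.0], [-12.0]])     # eig(A + FC) = {-3, -4}
eigs = np.linalg.eigvals(closed_loop_matrix(A, B, C, K, F))
```

The eigenvalues of the 4 × 4 closed loop matrix are exactly {-1, -2, -3, -4}: the closed loop is Hurwitz precisely when both the state-feedback and the observer designs are.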
Let ν_{j,s} ∈ R^{d_j}, for j ∈ {1, ..., q}, be q piecewise constant sampled data signals defined as in (3.4). Further, suppose that the q signals are asynchronously sampled; that is, the q sequences {t^{ν_j}_i} are not necessarily identical. Then, the collection of the q asynchronously sampled signals is compactly represented as

    ν_s = [ν_{1,s}^T, ..., ν_{q,s}^T]^T ∈ R^d,  where d = Σ_{j=1}^q d_j    (3.6)

The measurement error is correspondingly defined as

    ν_e ≜ ν_s - ν    (3.7)

The specific form of event-triggering depends on the architecture of the closed loop system. In this chapter, we consider several different architectures, ranging from the centralized case (sensors and the controller are co-located) to the completely decentralized Sensor-Controller-Actuator Network (SCAN) control system. We would like to clarify that co-located components are assumed to have access to each other's outputs at all times. Note that in this chapter, the terms "transmit", "update" and "sample" are used interchangeably.

In this chapter, the sampled data control systems are designed to satisfy: (i) global asymptotic stability of the closed loop system and (ii) a positive lower bound for the inter-transmission times that holds globally. The proposed design procedure can be divided into two major stages. In the first stage, utility driven event-triggers are designed for asynchronous transmissions using centralized information (the norm of the complete state of the system). In the second stage, realizable event-triggers that depend only on local information are derived by appropriately under-approximating the centralized asynchronous event-triggers. The next section describes this procedure in a general setting and, in the subsequent section, it is applied to different architectures.

3.3 Design of Decentralized Asynchronous Event-Triggering

This section presents the design of decentralized asynchronous event-triggering in a general setting. Similarities may be found with the material of Section 2.4.
Consider the system

$$\dot{\xi} = A\xi + \sum_{j=1}^{q} B_j\nu_{j,s} = A\xi + B\nu_s \qquad (3.8)$$

where $\nu_{j,s} \in \mathbb{R}^{d_j}$ is the sampled-data version of $\nu_j$, $B_j \in \mathbb{R}^{n \times d_j}$ is the $j$th input matrix, $\nu_s = [\nu_{1,s}^T, \dots, \nu_{q,s}^T]^T \in \mathbb{R}^d$ is the asynchronously sampled-data version of $\nu$ and is defined according to (3.6), and $B = [B_1, \dots, B_q] \in \mathbb{R}^{n \times d}$. With the continuous-time feedback control law

$$\nu = K\xi, \qquad \nu_j = K_j\xi, \qquad j \in \{1, \dots, q\} \qquad (3.9)$$

where $K_j$ are appropriately defined block row matrices of $K$, the closed loop system with the sampled-data controller can be expressed as

$$\dot{\xi} = (A + BK)\xi + B\nu_e = \bar{A}\xi + B\nu_e \qquad (3.10)$$

where $\bar{A} = (A + BK)$ and $\nu_e = (\nu_s - \nu) \in \mathbb{R}^d$ is the measurement error due to sampling. Finally, suppose that the continuous time control law would have stabilized the closed loop system, that is,

(A3.1) Suppose that the matrix $\bar{A}$ is Hurwitz, which ensures that for each symmetric positive definite matrix $Q$, there exists a symmetric positive definite matrix $P$ such that $P\bar{A} + \bar{A}^T P = -Q$.

Note that the design of the event-triggered controller is completed only with the implicit specification of the sampling time instants, $\{t^{\nu_j}_i\}$, through the event-triggers. In order to develop the decentralized asynchronous event-triggers, let us first consider the following stability result.

Lemma 3.1. Consider the sampled-data system (3.8) and assume (A3.1) holds. Let $Q$ be any symmetric positive definite matrix and $Q_m$ its smallest eigenvalue. For each $j \in \{1, \dots, q\}$, let $\theta_j \in (0, 1)$ be such that

$$\sum_{j=1}^{q} \theta_j \le 1 \qquad \text{and} \qquad w_j = \frac{\sigma\theta_j Q_m}{2|PB_j|} \qquad (3.11)$$

where $\sigma \in (0, 1)$ is a design parameter. Suppose that for each $j \in \{1, \dots, q\}$, the sampling instants $t^{\nu_j}_i$ are such that $|\nu_{j,e}| \le w_j|\xi|$ for all time $t \ge 0$. Then, $\xi = 0$ (the origin) is globally asymptotically stable.

Proof. Consider the candidate Lyapunov function $V(\xi) = \xi^T P\xi$, where $P$ satisfies (A3.1).
Utilizing the measurement error interpretation, (3.10), of the system (3.8), the derivative of the function $V$ along the flow of the system is expressed as

$$\begin{aligned}
\dot{V} &= \xi^T[P\bar{A} + \bar{A}^T P]\xi + 2\xi^T PB\nu_e \\
&\le -(1 - \sigma)\xi^T Q\xi + |\xi|\big[\,|2PB\nu_e| - \sigma Q_m|\xi|\,\big] \\
&\le -(1 - \sigma)\xi^T Q\xi + |\xi|\Big[\sum_{j=1}^{q}|2PB_j\nu_{j,e}| - \sigma Q_m|\xi|\Big] \\
&\le -(1 - \sigma)\xi^T Q\xi + |\xi|\Big[\sum_{j=1}^{q} 2|PB_j||\nu_{j,e}| - \sigma Q_m|\xi|\Big]
\end{aligned}$$

The sampling instants have been assumed to be such that the conditions $|\nu_{j,e}|/|\xi| \le w_j = \sigma\theta_j Q_m/(2|PB_j|)$ for each $j$ are satisfied for all time $t \ge 0$. Thus,

$$\dot{V} \le -(1 - \sigma)\xi^T Q\xi$$

which implies that $\xi = 0$ (the origin) is globally asymptotically stable.

Note that Lemma 3.1 holds for a family of asynchronous event-triggers, all satisfying the conditions $|\nu_{j,e}| \le w_j|\xi|$. In order to enforce these conditions strictly, each event-trigger requires centralized (non-local) information, in the form of $|\xi|$. Our aim now is to derive realizable decentralized asynchronous event-triggers that belong to the family considered in Lemma 3.1. To this end, consider the $q$ centralized asynchronous event-triggers for the sampled-data system (3.8)

$$t^{\nu_j}_{i+1} = \min\big\{ t \ge t^{\nu_j}_i : |\nu_{j,e}| \ge w_j|\xi| \big\}, \qquad j \in \{1, \dots, q\} \qquad (3.12)$$

where $w_j$ are given by (3.11). From (3.9), we have that $\nu_j = K_j\xi$, where $K_j \in \mathbb{R}^{d_j \times n}$ is the $j$th block-row matrix of $K$. Since $|\nu_j| \le |K_j||\xi|$, enforcing the conditions $|\nu_{j,e}| \le w_j|\nu_j|/|K_j|$ satisfies the requirements of Lemma 3.1. Although these conditions utilize only locally available data, they fail to guarantee positive minimum inter-sampling times. In order to design event-triggers that utilize only locally available data while also guaranteeing minimum inter-sample times, let us first analyze the emergent inter-sample times of the centralized asynchronous event-triggers (3.12).

Now, consider the differential equation

$$\dot{\phi} = (k + \phi)(a + b\phi) \qquad (3.13)$$

where $k$, $a$, $b$ are non-negative constants. The solution of this differential equation is denoted, as a function of time $t$ and the initial condition $\phi_0$, as $\phi(t; \phi_0)$. In particular, if $ka > 0$ then $\phi(t; 0)$ is a strictly increasing function of time $t$ and if $ka = 0$ then $\phi(t; 0) \equiv 0$.
Thus, the time it takes $\phi(\cdot; 0)$ to evolve from $0$ to a non-negative constant $w$ is expressed as

$$\tau(w, a, b, k) = \min\big\{ \{t \ge 0 : \phi(t; 0) = w\} \cup \{\infty\} \big\} \qquad (3.14)$$

Notice that

$$\tau(w, a, b, k) \begin{cases} = 0, & \text{if } w = 0 \\ > 0, & \text{if } w > 0 \\ = \infty, & \text{if } w > 0,\ ka = 0 \end{cases} \qquad (3.15)$$

Remark 3.1. Assuming $b$ is non-zero, the solutions of the quadratic differential equation (3.13) have a finite escape time. However, by definition (3.14), $\tau(w, a, b, k)$ is strictly less than the finite escape time of the solution $\phi(\cdot; 0)$. Thus on the time interval of interest, $[0, \tau(w, a, b, k)]$, the solution $\phi(\cdot; 0)$ is well defined.

The following lemma guarantees positive lower bounds for the emergent inter-sample times for the system (3.8) with the event-triggers (3.12).

Lemma 3.2. Consider the closed loop system given by (3.8) and the event-triggers (3.12). Let $w_j > 0$ for $j \in \{1, \dots, q\}$ be given by (3.11) and let $W = \sum_{j=1}^{q} |B_j|w_j$. Then for $j \in \{1, \dots, q\}$, the inter-sample times $\{t^{\nu_j}_{i+1} - t^{\nu_j}_i\}$ are lower bounded by the positive constants

$$T_j = \tau\big(w_j,\ |\bar{A}| + W - |B_j|w_j,\ |B_j|,\ |K_j|\big) \qquad (3.16)$$

where the function $\tau$ is given by (3.14).

Proof. Letting $\eta_j \triangleq |\nu_{j,e}|/|\xi|$, by direct calculation we see that for $j \in \{1, \dots, q\}$

$$\frac{d\eta_j}{dt} = \frac{(\nu_{j,e}^T\nu_{j,e})^{-1/2}\nu_{j,e}^T(-K_j\dot{\xi})}{|\xi|} - \frac{|\nu_{j,e}|\,\xi^T\dot{\xi}}{|\xi|^3} \le \big(|K_j| + \eta_j\big)\frac{|\dot{\xi}|}{|\xi|} \le \big(|K_j| + \eta_j\big)\Big(|\bar{A}| + \sum_{l=1}^{q}|B_l|\frac{|\nu_{l,e}|}{|\xi|}\Big)$$

where for $\nu_{j,e} = 0$ the relation holds for all directional derivatives. This relation is further simplified by considering (3.12), which ensures that the sampling instants are such that $\eta_l \le w_l$ for each $l \in \{1, \dots, q\}$ for all time:

$$\frac{d\eta_j}{dt} \le \big(|K_j| + \eta_j\big)\big(|\bar{A}| + W - |B_j|w_j + |B_j|\eta_j\big)$$

Now, by definition, $\eta_j(t^{\nu_j}_i) = 0$ at every sampling time instant $t^{\nu_j}_i$. Next, consider the flow

$$\dot{\phi}_j = \big(|K_j| + \phi_j\big)\big(|\bar{A}| + W - |B_j|w_j + |B_j|\phi_j\big)$$

and its solution denoted, as a function of time $t$ and the initial condition $\phi_{j,0}$, as $\phi_j(t; \phi_{j,0})$. Then, by the Comparison Lemma [45], it follows that

$$\eta_j(t) \le \phi_j(t - t^{\nu_j}_i;\ 0), \qquad \forall\, t \ge t^{\nu_j}_i$$

As a consequence, $T_j$, given by (3.16), is a lower bound on the inter-sample times $\{t^{\nu_j}_{i+1} - t^{\nu_j}_i\}$.
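Both the thresholds $w_j$ of (3.11) and the dwell times $T_j$ of (3.16) depend only on the system matrices, so they are computable offline. A minimal numerical sketch (the function names, the equal split $\theta_j = 1/q$ and the scalar sanity check are our assumptions; $\tau$ is evaluated by separation of variables):

```python
import numpy as np
from scipy.integrate import quad
from scipy.linalg import solve_continuous_lyapunov

def tau(w, a, b, k):
    """tau(w, a, b, k) of (3.14): time for phi' = (k + phi)(a + b*phi),
    phi(0) = 0, to reach w.  By separation of variables this equals
    int_0^w dphi / ((k + phi)(a + b*phi)) whenever k*a > 0."""
    if w == 0.0:
        return 0.0
    if k * a == 0.0:
        return np.inf  # phi(t; 0) stays at 0, so a level w > 0 is never reached
    return quad(lambda p: 1.0 / ((k + p) * (a + b * p)), 0.0, w)[0]

def thresholds(A_bar, B_blocks, K_blocks, Q, sigma=0.9):
    """w_j of (3.11) and T_j of (3.16), with the equal split theta_j = 1/q."""
    q = len(B_blocks)
    P = solve_continuous_lyapunov(A_bar.T, -Q)   # A_bar^T P + P A_bar = -Q
    Qm = np.linalg.eigvalsh(Q).min()             # smallest eigenvalue of Q
    w = [sigma * (1.0 / q) * Qm / (2.0 * np.linalg.norm(P @ Bj, 2))
         for Bj in B_blocks]
    W = sum(np.linalg.norm(Bj, 2) * wj for Bj, wj in zip(B_blocks, w))
    nA = np.linalg.norm(A_bar, 2)
    T = [tau(wj, nA + W - np.linalg.norm(Bj, 2) * wj,
             np.linalg.norm(Bj, 2), np.linalg.norm(Kj, 2))
         for Bj, Kj, wj in zip(B_blocks, K_blocks, w)]
    return w, T

# scalar sanity check: k = a = 1, b = 0 gives phi(t) = e^t - 1, so tau = ln 2
print(tau(1.0, 1.0, 0.0, 1.0))  # ~0.693
```

The cases of (3.15) appear explicitly as the early returns, and the quadrature is finite precisely because the integrand is continuous and positive on $[0, w]$ when $ka > 0$.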
The fact that $T_j > 0$ follows from the property (3.15).

Remark 3.2. In Lemma 3.2, the procedure for the computation of the lower bounds on the inter-transmission times is quite similar to that in [14]. The significant difference is that in Lemma 3.2, the guaranteed lower bounds are for asynchronous sampling, while [14] provides lower bounds for synchronous sampling.

Lemma 3.2 says that the inter-sample times that emerge from the event-triggers (3.12) have positive lower bounds, given by (3.16). An exactly equivalent method of implementing the event-triggers (3.12), for each $j \in \{1, \dots, q\}$, is as follows.

$$t^{\nu_j}_{i+1} = \min\big\{ t \ge t^{\nu_j}_i + T_j : |\nu_{j,e}| \ge w_j|\xi| \big\} \qquad (3.17)$$

In these event-triggers, the lower threshold for the inter-sample times is explicitly enforced, although the actual inter-sample times that emerge from (3.17) may have lower bounds greater than $T_j$. The advantage of this implementation is that $T_j$ depends only on the system matrices and hence is locally known at the corresponding event-trigger. In other words, the $j$th event-trigger (3.17) uses only locally available information for time $T_j$ after each of its transmissions. Thus, having guaranteed a positive lower bound for inter-sample times, it is sufficient to under-approximate $|\xi|$ to guarantee global asymptotic stability of the closed loop system. One obvious choice is to use the bound $|\nu_j|/|K_j| \le |\xi|$ in the event-triggers, for $j \in \{1, \dots, q\}$,

$$t^{\nu_j}_{i+1} = \min\Big\{ t \ge t^{\nu_j}_i + T_j : |\nu_{j,e}| \ge w_j\frac{|\nu_j|}{|K_j|} \Big\} \qquad (3.18)$$

A better option is to use the bound $|K_j^+\nu_j| \le |\xi|$, where the notation $(\cdot)^+$ denotes the pseudo-inverse of the matrix. In fact, this is the greatest lower bound for $|\xi|$ given $\nu_j$. Hence the event-triggers, for $j \in \{1, \dots, q\}$,

$$t^{\nu_j}_{i+1} = \min\big\{ t \ge t^{\nu_j}_i + T_j : |\nu_{j,e}| \ge w_j|K_j^+\nu_j| \big\} \qquad (3.19)$$

use only locally available information and achieve all the design requirements. While the event-triggers we have described in [29, 30] are based on (3.18), the ones that are described in this chapter utilize the improved version (3.19).
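The claim that $|K_j^+\nu_j|$ is the tighter under-approximation can be seen directly: with $\nu_j = K_j\xi$ one has $|\nu_j| \le |K_j||K_j^+\nu_j|$, and $|K_j^+\nu_j| = |K_j^+K_j\xi| \le |\xi|$ since $K_j^+K_j$ is an orthogonal projection. A small randomized check (the block row $K_j$ here is an assumed example of our own choosing):

```python
import numpy as np

rng = np.random.default_rng(1)
Kj = rng.standard_normal((2, 4))        # assumed 2x4 block row of K
Kj_pinv = np.linalg.pinv(Kj)
for _ in range(100):
    xi = rng.standard_normal(4)
    nu = Kj @ xi                        # nu_j = K_j xi as in (3.9)
    lb_318 = np.linalg.norm(nu) / np.linalg.norm(Kj, 2)  # bound used in (3.18)
    lb_319 = np.linalg.norm(Kj_pinv @ nu)                # bound used in (3.19)
    # both under-approximate |xi|, and (3.19) is never the smaller one
    assert lb_318 <= lb_319 + 1e-12 and lb_319 <= np.linalg.norm(xi) + 1e-12
print("ok: (3.19) uses the tighter under-approximation of |xi|")
```

A larger lower bound on $|\xi|$ means a larger right-hand side in the trigger, hence later events and longer inter-transmission times, which is the source of the improvement reported in Section 3.5.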
Note, however, that if $\nu_j$ is scalar then (3.18) and (3.19) are equivalent. The following theorem prescribes the constants $T_j$ and $w_j$ in the event-triggers, (3.19), that guarantee global asymptotic stability of the origin.

Theorem 3.1. Consider the closed loop system (3.10) and assume (A3.1) holds. Let $Q$ be any symmetric positive definite matrix and let $Q_m$ be the smallest eigenvalue of $Q$. For each $j \in \{1, 2, \dots, q\}$, let $w_j$ and $T_j$ be defined as in (3.11) and (3.16), respectively. Suppose $\nu_j$ are asynchronously updated at time instants determined by (3.19). Then, the origin is globally asymptotically stable and the inter-transmission times are explicitly enforced to have a positive lower threshold.

Proof. The statement about the positive lower threshold for inter-transmission times is obvious from (3.19) and only asymptotic stability remains to be demonstrated. This can be done by showing that the event-triggers (3.19) are included in the family of event-triggers considered in Lemma 3.1. From the equivalence of (3.12) and (3.17), it is clearly true that $|\nu_{j,e}| \le w_j|\xi|$ for $t \in [t^{\nu_j}_i, t^{\nu_j}_i + T_j]$, for each $j \in \{1, 2, \dots, q\}$ and each $i$. Next, for $t \in [t^{\nu_j}_i + T_j, t^{\nu_j}_{i+1}]$, (3.19) enforces $|\nu_{j,e}| \le w_j|K_j^+\nu_j| \le w_j|\xi|$. Therefore, the event-triggers, (3.19), are included in the family of event-triggers considered in Lemma 3.1. Hence, $\xi = 0$ (the origin) is globally asymptotically stable.

In the next section, this general formulation is applied to specific architectures of the control system.

3.4 Event-Triggered Implementations of The Dynamic Controller

In this section, the dynamic controllers and the event-triggering conditions are developed for different architectures.

3.4.1 Architecture I - Centralized

In Architecture I, Figure 3.1, the observer and the sensor are co-located, which means the observer has access to the sensor's output at all times.

[Figure 3.1: Architecture I: Sensor output available to the controller at all times.]
The closed loop system with the sampled data implementation of the observer and the controller is given by

$$\dot{x} = Ax + Bu_s, \qquad y = Cx \qquad (3.20)$$

$$\dot{\hat{x}} = (A + FC)\hat{x} + BK\hat{x}_s - Fy, \qquad u = K\hat{x} \qquad (3.21)$$

where the subscript $s$ denotes the sampled versions of the corresponding continuous-time signals. The second term, $BK\hat{x}_s$, in the observer, (3.21), is the natural choice to model the effect of the sampled data control $u_s = K\hat{x}_s$ in the plant dynamics (3.20). The closed loop system can be written in terms of the measurement error, $\hat{x}_e = \hat{x}_s - \hat{x}$, as

$$\dot{\xi} = \bar{A}\xi + \begin{bmatrix} BK \\ 0_{n,n} \end{bmatrix}\hat{x}_e \qquad (3.22)$$

where $\xi = [x^T, \tilde{x}^T]^T = [x^T, (\hat{x} - x)^T]^T$, $\bar{A}$ is as defined in (3.3) and $0_{n,n}$ is the $n \times n$ matrix of zeroes. Note that the sampled-data nature of the system is implicit in the measurement error term, $\hat{x}_e$ (or $\nu_e$). In the notation of Section 3.3, $\nu = \hat{x}$, the matrix $\bar{A}$ is the one defined in (3.3), $B = G_1$ and $K = H_1$, where

$$G_1 \triangleq \begin{bmatrix} BK \\ 0_{n,n} \end{bmatrix}, \qquad H_1 \triangleq \begin{bmatrix} I_n & I_n \end{bmatrix} \qquad (3.23)$$

so that $\hat{x} = H_1\xi$. Here, the notation $I_n$ denotes the $n \times n$ identity matrix. Since there is only one event-trigger, $q = 1$, $\nu_s = \hat{x}_s$ and similarly, $\nu_e = \hat{x}_e$. Therefore, we have the following result as a direct consequence of Theorem 3.1.

Theorem 3.2. Consider the system given by (3.22) and assume (A3.1) is satisfied with $\bar{A}$ being the matrix defined in (3.3). Let $Q \in \mathbb{R}^{2n \times 2n}$ be any positive definite matrix and let $P$ be defined according to (A3.1). Let the event-triggering condition be

$$t_{i+1} = \min\big\{ t \ge t_i + T : |\hat{x}_e| \ge w|H_1^+\hat{x}| \big\}$$

where $w = \frac{\sigma Q_m}{2|PG_1|}$, in which $Q_m$ is the smallest eigenvalue of $Q$, $0 < \sigma < 1$, $G_1$ and $H_1$ are given by (3.23), while $T = \tau\big(w, |\bar{A}|, |G_1|, |H_1|\big)$, the function $\tau$ being as defined in (3.14). Then, the origin of the closed loop system is globally asymptotically stable and the inter-transmission times are lower bounded by $T$.

Note that the special structure of the matrix $H_1$ implies that in this case, the event-triggers of the form (3.18) and (3.19) (the one in the theorem) are exactly equivalent.
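The mechanics of a trigger of this form can be mimicked on a scalar toy system; the plant, gains and numbers below are assumptions of ours for illustration, not the system of Theorem 3.2. A transmission occurs when the relative measurement error reaches the threshold, but never before the enforced dwell time $T$ has elapsed.

```python
import numpy as np

def simulate(w=0.2, T=1e-3, dt=1e-4, t_end=5.0, x0=1.0):
    """Forward-Euler sketch of x' = x + u with u = -2*x_s, using the
    relative trigger |x_e| >= w|x| and an explicitly enforced dwell time T."""
    x, xs, t_last = x0, x0, 0.0
    events = [0.0]
    for t in np.arange(0.0, t_end, dt):
        # trigger: dwell time elapsed AND measurement error over threshold
        if t - t_last >= T and abs(xs - x) >= w * abs(x):
            xs, t_last = x, t          # sample: measurement error resets to 0
            events.append(t)
        x += dt * (x - 2.0 * xs)       # plant step with piecewise constant input
    return x, events

x_final, events = simulate()
print(len(events), min(np.diff(events)))  # finitely many events, gaps >= T
```

The state decays while the number of transmissions stays modest, and every inter-event interval respects the dwell time, which is the behaviour guaranteed by Theorem 3.2.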
Now, in Architecture I (Figure 3.1), since the sensor, the dynamic controller and the event-trigger are all co-located, the event-trigger can in fact use the additional information obtained from the sensor in determining the transmission instants. In other words, an estimate of $|\xi|$ better than $|H_1^+\hat{x}|$ may be obtained by using the sensor data. Thus, let

$$H \triangleq \begin{bmatrix} I_n & I_n \\ C & 0_{p,n} \end{bmatrix} \qquad (3.24)$$

so that $[\hat{x}^T, y^T]^T = H\xi$. The notation $I_n$ again denotes the $n \times n$ identity matrix. Therefore, we now have the following result.

Theorem 3.3. Consider the system given by (3.22) and assume (A3.1) is satisfied with $\bar{A}$ being the matrix defined in (3.3). Let $Q \in \mathbb{R}^{2n \times 2n}$ be any positive definite matrix and let $P$ be defined according to (A3.1). Let the event-triggering condition be

$$t_{i+1} = \min\big\{ t \ge t_i + T : |\hat{x}_e| \ge w\big|H^+[\hat{x}^T, y^T]^T\big| \big\}$$

where $w = \frac{\sigma Q_m}{2|PG_1|}$, in which $Q_m$ is the smallest eigenvalue of $Q$, $0 < \sigma < 1$, $G_1$ and $H_1$ are given by (3.23), $H$ is given by (3.24), while $T = \tau\big(w, |\bar{A}|, |G_1|, |H_1|\big)$, the function $\tau$ being as defined in (3.14). Then, the origin of the closed loop system is globally asymptotically stable and the inter-transmission times are lower bounded by $T$.

Note that $T$ remains the same as in Theorem 3.2.

3.4.2 Architecture II - Centralized Synchronous

In Architecture II, Figure 3.2, the observer and the sensor are again co-located, which means the observer has access to the sensor information at all times.

[Figure 3.2: Architecture II: Synchronous transmissions by the sensor and the controller. Co-located components have access to each other's output at any given time.]

However, the observer in this architecture utilizes the sampled version of the sensor output so that the dynamic controller has piecewise constant inputs (which simplifies the online computation of the observer state, $\hat{x}$). The controller and sensor outputs are sampled synchronously at time instants determined by a single event-trigger.
The observer system in this case is given by

$$\dot{\hat{x}} = (A + FC)\hat{x} + BK\hat{x}_s - Fy_s \qquad (3.25)$$

where the subscript $s$ denotes the sampled versions of the corresponding continuous-time signals. The closed loop system may be written in terms of the measurement errors $\hat{x}_e = \hat{x}_s - \hat{x}$ and $y_e = y_s - y$ as

$$\dot{\xi} = \bar{A}\xi + G_s\begin{bmatrix} \hat{x}_e \\ y_e \end{bmatrix}, \qquad \text{where } G_s \triangleq \begin{bmatrix} BK & 0_{n,p} \\ 0_{n,n} & -F \end{bmatrix} \qquad (3.26)$$

where $\xi = [x^T, \tilde{x}^T]^T = [x^T, (\hat{x} - x)^T]^T$, $\bar{A}$ is as defined in (3.3) and $0_{a,b}$ represents the $a \times b$ matrix of zeroes. In the context of Section 3.3, $\nu = [\hat{x}^T, y^T]^T$, the matrix $\bar{A}$ is the one defined in (3.3), $B = G_s$ and $K = H$, where $H$ is the matrix defined in (3.24), and again $q = 1$ as there is only one event-trigger.

Theorem 3.4. Consider the system given by (3.26) and assume (A3.1) is satisfied with $\bar{A}$ being the matrix defined in (3.3). Let $Q \in \mathbb{R}^{2n \times 2n}$ be any positive definite matrix and let $P$ be defined according to (A3.1). Let the event-triggering condition be

$$t_{i+1} = \min\big\{ t \ge t_i + T : \big|[\hat{x}_e^T, y_e^T]^T\big| \ge w\big|H^+[\hat{x}^T, y^T]^T\big| \big\}$$

where $w = \frac{\sigma Q_m}{2|PG_s|}$, in which $Q_m$ is the smallest eigenvalue of $Q$, $0 < \sigma < 1$, $G_s$ is given by (3.26), $H$ is given by (3.24), while $T = \tau\big(w, |\bar{A}|, |G_s|, |H|\big)$, the function $\tau$ being as defined in (3.14). Then, the origin of the closed loop system is globally asymptotically stable and the inter-transmission times are lower bounded by $T$.

3.4.3 Architecture III - Decentralized Architecture

In the decentralized architecture of Figure 3.3, the sensors are decentralized. Their outputs are sampled and communicated to the central controller asynchronously by independent event-triggers that depend only on local information. Further, the different controller outputs are updated in parallel and asynchronously.

[Figure 3.3: Architecture III: Centralized controller with decentralized sensors and actuators, each transmitting its data asynchronously.]
The closed loop system is given by

$$\dot{x} = Ax + B\bar{u}_s, \qquad y = Cx$$

$$\dot{\hat{x}} = (A + FC)\hat{x} + B\bar{u}_s - F\bar{y}_s, \qquad u = K\hat{x}$$

where $x \in \mathbb{R}^n$ is the state of the plant, $\hat{x} \in \mathbb{R}^n$ is the observer state, $y \in \mathbb{R}^p$ is the vector of sensed outputs and $\bar{u}_s \in \mathbb{R}^m$ is the vector of inputs to the plant from the actuators. The vectors $\bar{u}_s$ and $\bar{y}_s$ denote the asynchronously sampled versions of the corresponding continuous-time signals, as in (3.6). In other words, $\bar{u}_s = [u_{1,s}, \dots, u_{m,s}]^T$ and $\bar{y}_s = [y_{1,s}, \dots, y_{p,s}]^T$. That is, each actuator output $u_{i,s}$, for $i \in \{1, \dots, m\}$, and each sensor output $y_{j,s}$, for $j \in \{1, \dots, p\}$, represents an asynchronously sampled signal,

$$u_{i,s} = u_i(t^{u_i}_k), \qquad \forall t \in [t^{u_i}_k, t^{u_i}_{k+1})$$

$$y_{j,s} = y_j(t^{y_j}_k), \qquad \forall t \in [t^{y_j}_k, t^{y_j}_{k+1})$$

It is possible to define $u_i$ and $y_j$ as vectors (instead of scalars) with only minor changes in notation. However, in this chapter we restrict to the scalar case for simplicity. In terms of the measurement error vectors $\bar{u}_e = \bar{u}_s - u$ and $\bar{y}_e = \bar{y}_s - y$, the closed loop system is

$$\dot{\xi} = \bar{A}\xi + G_d\begin{bmatrix} \bar{u}_e \\ \bar{y}_e \end{bmatrix}, \qquad G_d \triangleq \begin{bmatrix} B & 0_{n,p} \\ 0_{n,m} & -F \end{bmatrix} \qquad (3.27)$$

where $\xi = [x^T, \tilde{x}^T]^T = [x^T, (\hat{x} - x)^T]^T$, $\bar{A}$ is as defined in (3.3) and $0_{a,b}$ represents the $a \times b$ matrix of zeroes. In the context of Section 3.3, $\nu = [u^T, y^T]^T$, the matrix $\bar{A}$ is the one defined in (3.3), $B = [\bar{B}\ \ \bar{F}]$ and $K = [\bar{K}^T\ \ \bar{C}^T]^T$, where

$$\begin{bmatrix} \bar{B} & \bar{F} \end{bmatrix} \triangleq \begin{bmatrix} B & 0_{n,p} \\ 0_{n,m} & -F \end{bmatrix} \qquad (3.28)$$

$$\begin{bmatrix} \bar{K} \\ \bar{C} \end{bmatrix} \triangleq \begin{bmatrix} K & K \\ C & 0_{p,n} \end{bmatrix} \qquad (3.29)$$

and $\bar{B} \in \mathbb{R}^{2n \times m}$, $\bar{F} \in \mathbb{R}^{2n \times p}$, $\bar{K} \in \mathbb{R}^{m \times 2n}$ and $\bar{C} \in \mathbb{R}^{p \times 2n}$ are the appropriately defined block matrices. For this architecture, the number of event-triggers is $q = p + m$. Denoting the $i$th columns of $\bar{B}$ and $\bar{F}$ by $\bar{B}_i$ and $\bar{F}_i$, and similarly the $i$th rows of $\bar{K}$ and $\bar{C}$ by $\bar{K}_i$ and $\bar{C}_i$, respectively, we have

$$\dot{\xi} = \bar{A}\xi + \sum_{i=1}^{m}\bar{B}_i u_{i,e} + \sum_{j=1}^{p}\bar{F}_j y_{j,e} \qquad (3.30)$$

$$u_i = \bar{K}_i\xi, \qquad y_j = \bar{C}_j\xi \qquad (3.31)$$

We now present the result for decentralized asynchronous event-triggering in dynamic output feedback control.

Theorem 3.5. Consider the system given by (3.30) and assume (A3.1) is satisfied with $\bar{A}$ being the matrix defined in (3.3). Let $Q \in \mathbb{R}^{2n \times 2n}$ be any positive definite matrix and let $P$ be defined according to (A3.1).
For $i \in \{1, \dots, m\}$ and $j \in \{1, \dots, p\}$, let the event-triggering conditions be

$$t^{u_i}_{k+1} = \min\big\{ t \ge t^{u_i}_k + T_{u,i} : |u_{i,e}| \ge w_{u,i}|\bar{K}_i^+ u_i| \big\}$$

$$t^{y_j}_{k+1} = \min\big\{ t \ge t^{y_j}_k + T_{y,j} : |y_{j,e}| \ge w_{y,j}|\bar{C}_j^+ y_j| \big\}$$

where $w_{u,i} = \frac{\sigma\theta_{u,i}Q_m}{2|P\bar{B}_i|}$ and $w_{y,j} = \frac{\sigma\theta_{y,j}Q_m}{2|P\bar{F}_j|}$, in which $Q_m$ is the smallest eigenvalue of $Q$, $0 < \sigma < 1$, and $0 < \theta_{u,i} < 1$, $0 < \theta_{y,j} < 1$ are design parameters such that $\sum_i \theta_{u,i} + \sum_j \theta_{y,j} = 1$, while $\bar{B}_i$ and $\bar{F}_j$ are given by (3.28). Let the inter-sampling time thresholds be given by

$$T_{u,i} = \tau\big(w_{u,i},\ |\bar{A}| + W - |\bar{B}_i|w_{u,i},\ |\bar{B}_i|,\ |\bar{K}_i|\big)$$

$$T_{y,j} = \tau\big(w_{y,j},\ |\bar{A}| + W - |\bar{F}_j|w_{y,j},\ |\bar{F}_j|,\ |\bar{C}_j|\big)$$

where $W = \sum_i |\bar{B}_i|w_{u,i} + \sum_j |\bar{F}_j|w_{y,j}$, and the function $\tau$ is defined as in (3.14). Then, the origin of the closed loop system is globally asymptotically stable and the inter-sample times of $u_i$ and $y_j$ are lower bounded by $T_{u,i}$ and $T_{y,j}$, respectively.

3.4.4 Architecture IV - SCAN

Finally, we consider a Sensor-Controller-Actuator Network (SCAN) control system architecture, shown in Figure 3.4. The control system contains three functional layers - the sensor layer, the dynamic controller/observer layer and the actuator layer. Each layer consists of non-co-located (physically distributed) nodes. The sensor, observer and actuator layers consist of $p$ sensor nodes, $n$ observer nodes and $m$ actuator nodes, respectively. In Figure 3.4, the solid arrows indicate physical links, while the dotted arrows indicate the links on which the communication is event-triggered.

[Figure 3.4: The SCAN control architecture has three functional layers. Each node in the sensor layer intermittently broadcasts its output to all the nodes in the observer layer. Each node in the observer layer intermittently broadcasts its state to every other node in that layer. Each of the first $m$ nodes of the observer layer also transmits intermittently to one of the actuator nodes. The dotted arrows indicate event-triggered communication links, with the event-trigger running at the tail end of the arrow. The solid arrows are physical links.]
The event-trigger for each of these links is located at the tail end of the arrow and uses only information that is locally available at that node. Meanwhile, the node or nodes at the receiving end utilize the asynchronously transmitted data (sampled data), indicated by the additional subscript $s$. Note that the arrows that go from an arbitrary node "A" to a layer circle in the figure indicate broadcast communication from the node "A" to all the nodes in the layer circle. The aggregate observer state $z = [z_1, \dots, z_n]^T$ is simply a basis transformation of the vector $\hat{x}$ of (3.2). When this basis transformation is appropriately chosen, the communication from the observer layer to the actuator layer is simplified and the actuator inputs to the plant are $u_i = z_{i,s}$ for $i \in \{1, \dots, m\}$.

Figure 3.4 is a functional description of the control system and also represents the most general case, where no two nodes are co-located. If some nodes (from the same or different layers) are co-located, then each collection of co-located nodes need not utilize the sampled versions of the data. Of particular interest is the case where the observer node $z_i$ is co-located with the actuator node $u_i$ for $i \in \{1, \dots, m\}$. In the sequel, apart from the general case, this special case is also discussed briefly. Next, in order to keep the notation simple, the data at each node is assumed to be scalar. Our results can easily be generalized to the vector case with only minor changes to the notation.

Now, let us consider the design of event-triggered dynamic output feedback control over the SCAN architecture of Figure 3.4. The heart of the SCAN architecture of Figure 3.4 is the observer layer. Once this is designed, the decentralized asynchronous event-triggers can be designed using the results in Section 3.3. As noted earlier, the nodes in the observer layer do not compute $\hat{x}$ but rather a basis transformation of $\hat{x}$. Defining this transformation is our next task.
(A3.2) Assume that the column space of the matrix $K$, in (3.2), is of dimension $m$.

Under this assumption, the pseudoinverse $K^+ \in \mathbb{R}^{n \times m}$ has only the trivial null space. Consider the mapping

$$\hat{x} = K^+u + \hat{x}_{\mathcal{N}(K)}$$

where $\hat{x}_{\mathcal{N}(K)} \in \mathbb{R}^n$ is an element of the null space of $K$ and, by definition, $K^+u$ is an element of the row space of $K$. Assumption (A3.2) implies that this mapping is one-to-one and onto. Further, since the row space and the null space of $K$ are orthogonal to each other, the bases for the two subspaces can be chosen independently. Thus, let

$$S = \begin{bmatrix} K^+ & K_N \end{bmatrix} \qquad (3.32)$$

where $K_N \in \mathbb{R}^{n \times (n-m)}$ is an arbitrary matrix whose columns span the null space of $K$. Then, the matrix $S$ is invertible and satisfies

$$\hat{x} = Sz \qquad (3.33)$$

$$u = \bar{u}_s = KS\bar{z}_s = \check{K}\bar{z}_s, \qquad \text{with } \check{K} = \begin{bmatrix} I_m & 0_{m,n-m} \end{bmatrix} \qquad (3.34)$$

where $I_m$ is the $m \times m$ identity matrix and $0_{m,n-m}$ is the $m \times (n-m)$ matrix of zeroes. Note that there is no "sampling" of the data between the actuator nodes and the plant. However, the notation $\bar{u}_s$ is useful for keeping in mind that the actuation signals are the asynchronously transmitted signals $\check{K}\bar{z}_s$. Thus, the dynamic controller (observer), (3.2), is equivalently expressed as

$$\dot{z} = S^{-1}\big[(A + FC)Sz + B\check{K}\bar{z}_s - Fy\big] \qquad (3.35)$$

where $\check{K} = KS$ has been used. Letting $H = S^{-1}(A + FC)S$, the sampled data version of the decentralized observer is given by

$$\dot{z} = D(H)z + \big(H - D(H)\big)\bar{z}_s + S^{-1}B\check{K}\bar{z}_s - S^{-1}F\bar{y}_s$$

where $D(H)$ is the diagonal matrix whose diagonal is that of the matrix $H$. It is more convenient to write the observer equation in terms of the sampling induced measurement errors, as follows.

$$\dot{z} = S^{-1}\big[(A + FC)Sz + B\check{K}\bar{z}_s - Fy\big] + \big(H - D(H)\big)\bar{z}_e - S^{-1}F\bar{y}_e$$

which, when expressed in terms of $\hat{x}$, is given as

$$\dot{\hat{x}} = (A + FC)\hat{x} + B\bar{u}_s - Fy + S\big(H - D(H)\big)\bar{z}_e - F\bar{y}_e$$

Let us denote the observer estimation error and the state of the closed loop system, respectively, as

$$\tilde{x} \triangleq \hat{x} - x, \qquad \xi \triangleq [x^T, \tilde{x}^T]^T$$

Then the closed loop system may be written compactly as

$$\dot{\xi} = \bar{A}\xi + \begin{bmatrix} B\check{K} \\ S(H - D(H)) \end{bmatrix}\bar{z}_e + \begin{bmatrix} 0_{n,p} \\ -F \end{bmatrix}\bar{y}_e \qquad (3.36)$$

where the matrix $\bar{A}$ is as defined in (3.3).
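The basis construction (3.32)-(3.34) is easy to realize numerically. A minimal sketch; the example gain $K$ below is an assumption of ours for illustration:

```python
import numpy as np
from scipy.linalg import null_space

def observer_basis(K):
    """Build S = [K^+  K_N] of (3.32), assuming (A3.2) (rank K = m).
    Then K S = [I_m  0_{m,n-m}] as in (3.34): in the z coordinates the
    first m observer states are exactly the actuator inputs."""
    return np.hstack([np.linalg.pinv(K), null_space(K)])

K = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0]])      # assumed example: m = 2, n = 3, rank 2
S = observer_basis(K)
print(np.round(K @ S, 10))            # [I_2 | 0], i.e. the matrix of (3.34)
```

Because the columns of $K^+$ span the row space of $K$ and those of $K_N$ span its null space, and the two subspaces are orthogonal, $S$ is invertible whenever (A3.2) holds.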
The following theorem prescribes the decentralized asynchronous event-triggering mechanism for the SCAN control architecture in Figure 3.4.

Theorem 3.6. Consider the closed loop system, (3.36), and assume that (A3.1) holds with $\bar{A}$ being the matrix defined in (3.3). Also suppose (A3.2) holds. Let $\nu = [z^T, y^T]^T$ and

$$B = \begin{bmatrix} B\check{K} & 0_{n,p} \\ S(H - D(H)) & -F \end{bmatrix}, \qquad K = \begin{bmatrix} S^{-1} & S^{-1} \\ C & 0_{p,n} \end{bmatrix}$$

Further, for each $j \in \{1, \dots, q = n + p\}$, $\nu_j \in \mathbb{R}$, $B_j$ is the $j$th column of $B$ and $K_j$ is the $j$th row of $K$. Let $Q \in \mathbb{R}^{2n \times 2n}$ be any symmetric positive definite matrix and let $Q_m$ be the smallest eigenvalue of $Q$. For each $j \in \{1, 2, \dots, q\}$, let $w_j$ and $T_j$ be defined as in (3.11) and (3.16), respectively. Suppose $\nu_j$ are asynchronously transmitted at time instants determined by (3.19), with $t^{\nu_j}_0 \le 0$. Then, $\xi = 0$ (the origin) is globally asymptotically stable and the inter-transmission times are explicitly enforced to have a positive lower threshold.

Proof. Assumption (A3.2) implies that $S$ is invertible and that the matrices $B$ and $K$ are well defined. The rest of the proof follows from Theorem 3.1.

Remark 3.3. In case the first $m$ nodes of the observer layer, $z$, are co-located with the corresponding actuator nodes, then $u = \check{K}z$ may be used. In this case, the closed loop system equation is given by

$$\dot{\xi} = \bar{A}\xi + \begin{bmatrix} 0_{n,n} \\ S(H - D(H)) \end{bmatrix}\bar{z}_e + \begin{bmatrix} 0_{n,p} \\ -F \end{bmatrix}\bar{y}_e \qquad (3.37)$$

and Theorem 3.6 holds for this system if $B$ is appropriately chosen as

$$B = \begin{bmatrix} 0_{n,n} & 0_{n,p} \\ S(H - D(H)) & -F \end{bmatrix}$$

Remark 3.4. In Figure 3.4 and in our results, the sensor nodes and the observer nodes have been assumed to intermittently broadcast their data to all the nodes in the controller/observer layer. However, this has been done purely for ease of presentation. In practice, a sensor node $y_j$ need not transmit its data to an observer node $z_k$ if the dynamics of $z_k$ do not depend on $y_j$. A similar statement for intra observer layer communication also holds.

Remark 3.5.
As discussed in Remark 2.3 of the previous chapter, the idea of an explicit threshold for the inter-transmission times, as in the event-triggers (3.19), has been employed previously in [46]. However, in [46] such a mechanism is used to trigger the controller updates rather than the asynchronous transmissions from the sensors to the controller. Further, in [46] the controller utilizes synchronous measurements from the sensors to compute the control input to the plant, which allows the lower bound for inter-transmission times from [14] to be used. In Architectures I and II of this chapter, the transmissions/samplings are synchronous. As a result, the inter-transmission time thresholds are exactly those obtained from [14]. On the other hand, in Architectures III and IV, the controller has access only to asynchronously received data and the inter-transmission time thresholds of each node need to be computed as in Lemma 3.2.

In the next section, simulation results are presented to illustrate the proposed event-triggered controllers.

3.5 Simulation Results

In this section, the proposed event-triggered dynamic output feedback controllers are illustrated for a linearized model of a batch reactor, [47]. The plant and the dynamic controller are given by (3.1)-(3.2) with

$$A = \begin{bmatrix} 1.38 & -0.2077 & 6.715 & -5.676 \\ -0.5814 & -4.29 & 0 & 0.675 \\ 1.067 & 4.273 & -6.654 & 5.893 \\ 0.048 & 4.273 & 1.343 & -2.104 \end{bmatrix}, \qquad B = \begin{bmatrix} 0 & 0 \\ 5.679 & 0 \\ 1.136 & -3.146 \\ 1.136 & 0 \end{bmatrix}$$

$$C = \begin{bmatrix} 1 & 0 & 1 & -1 \\ 0 & 1 & 0 & 0 \end{bmatrix}, \qquad K = \begin{bmatrix} 0.1768 & 0.079 & 0.0794 & 0.2464 \\ 1.0328 & 0.1896 & 0.4479 & 0.7176 \end{bmatrix}$$

$$F = \begin{bmatrix} 2 & 0 \\ 4 & 1 \\ 2 & 2 \\ 1 & 4 \end{bmatrix}$$

In the event-triggered controllers, $Q = I_8$ (the $8 \times 8$ identity matrix) and $\sigma = 0.95$ were chosen. For the simulations presented here, the initial condition of the plant was chosen as $x(0) = [2, 3, 1, 2]^T$. The state of the centralized observer in Architectures I-III was chosen as $\hat{x}(0) = [0, 0, 0, 0]^T$.
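As a quick sanity check on this plant (using the standard sign pattern of the well-known batch reactor benchmark for $A$), the open-loop system is unstable, which is what makes the transmission thresholds above non-trivial:

```python
import numpy as np

# batch reactor benchmark plant matrix (standard published values assumed)
A = np.array([[ 1.38,   -0.2077,  6.715, -5.676 ],
              [-0.5814, -4.29,    0.0,    0.675 ],
              [ 1.067,   4.273,  -6.654,  5.893 ],
              [ 0.048,   4.273,   1.343, -2.104 ]])

# eigenvalues with positive real part => the open loop is unstable
print(np.sort(np.linalg.eigvals(A).real))
```

Stabilization therefore depends entirely on the intermittently transmitted feedback, so the guaranteed dwell times of the event-triggers matter for closed-loop performance.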
The simulation time for each of the simulations was $T_{sim} = 10$ s.

3.5.1 Architecture I

In this architecture, the sampled data is $\hat{x}_s$ and its initial condition was chosen as $\hat{x}_s(0) = \hat{x}(0)$. The inter-event time threshold of the event-triggers in Theorems 3.2 and 3.3 was obtained as $T = 3$ ms. With the event-trigger of Theorem 3.2 (which is essentially the one proposed in our previous work, [29]), the number of events, average inter-event time and minimum inter-event time were observed to be 543, 18.4 ms and 3 ms, respectively. With the event-trigger of Theorem 3.3, the corresponding values were 484, 20.7 ms and 7.2 ms, respectively. This clearly shows the improvement over our previous results in [29]. The simulation results for Theorem 3.3 are summarized in Figures 3.5 and 3.6. Figure 3.5 shows the evolution of the Lyapunov function and its derivative along the flow of the closed loop system. Figure 3.6 shows the inter-transmission times and the cumulative frequency distribution of the inter-event times.

[Figure 3.5: Architecture I: (a) The evolution of the Lyapunov function and (b) its derivative along the flow of the closed loop system.]

[Figure 3.6: Architecture I: (a) Inter-event times and (b) the cumulative frequency distribution of the inter-event times.]

3.5.2 Architecture II

In this architecture, the sampled data is $[\hat{x}_s^T, y_s^T]^T$ and its initial condition was chosen as $[\hat{x}_s^T(0), y_s^T(0)]^T = [\hat{x}^T(0), y^T(0)]^T$. The inter-event time threshold of the event-trigger in Theorem 3.4 was obtained as $T = 0.9$ ms.
For the presented simulation, the number of events, average inter-event time and minimum inter-event time were observed to be 1081, 9.3 ms and 3.2 ms, respectively. To give a comparison, with the event-trigger corresponding to (3.18), these values were observed to be 1548, 6.5 ms and 2.2 ms, respectively. Again, the improvement over our previous results in [29] is clearly visible. Figure 3.7 shows the evolution of the Lyapunov function and its derivative along the flow of the closed loop system. Figure 3.8 shows the inter-transmission times and the cumulative frequency distribution of the inter-event times.

[Figure 3.7: Architecture II: (a) The evolution of the Lyapunov function and (b) its derivative along the flow of the closed loop system.]

[Figure 3.8: Architecture II: (a) Inter-event times and (b) the cumulative frequency distribution of the inter-event times.]

3.5.3 Architecture III

Finally, in Architecture III, the sampled data is $\nu_s = [\bar{u}_s^T, \bar{y}_s^T]^T$. Even though $y(0) = [1, 3]^T$, the initial sampled data was chosen as $\nu_s(0) = [0, 0, 1.005, 3.01]^T$ to be consistent with the asynchronous transmission model. The zeroth transmission instant was chosen as $t^{\nu_j}_0 = -T_j$ for each $j \in \{1, \dots, 4\}$. This is to ensure sampling at $t = 0$ if necessary. However, by choosing the initial sampled data sufficiently close to the actual data, the asynchronous nature of transmissions is respected, as indicated by the first transmission times of the controller and the sensors, which occur at $t_1 = [0, 0, 0.6, 1.5]$ ms for the chosen initial conditions.
The inter-transmission time thresholds in the event-triggers of Theorem 3.5 were obtained as

$$T_u = [1.1, 0.8]\ \text{ms}, \qquad T_y = [0.7, 0.6]\ \text{ms}$$

which were also the minimum inter-transmission times for the presented simulation. Over a simulation time of 10 s, the average inter-transmission times were obtained as $\bar{T} = [5.3, 3.8, 3.6, 3.8]$ ms, which are roughly five times larger than the inter-transmission time thresholds. Figure 3.9 shows the evolution of the Lyapunov function and its derivative along the flow of the closed loop system. Figure 3.10 shows the inter-transmission times and the cumulative frequency distribution of the inter-transmission times.

[Figure 3.9: Architecture III: (a) The evolution of the Lyapunov function and (b) its derivative along the flow of the closed loop system.]

[Figure 3.10: Architecture III: (a) Inter-transmission times and (b) the cumulative frequency distribution of the inter-transmission times of the nodes. The curves labelled $u_i$ and $y_j$ denote the relevant inter-transmission time data of the controller output $u_i$ and the sensor output $y_j$, respectively.]

3.5.4 Architecture IV - SCAN

In the SCAN architecture, the initial condition of the observer was chosen as $z(0) = [0, 1, 1, 1]^T$. Denoting $\nu = [z^T, y^T]^T$ as in Theorem 3.6, the initial sampled data was chosen arbitrarily as $\nu_s(0) = [1.001, 1.001, 1.001, 1.001, 1.001, 3.002]^T$ so that it is consistent with the asynchronous transmission model. The zeroth transmission instant was chosen as $t^{\nu_j}_0 = -T_j$ for each $j \in \{1, \dots, 6\}$. This is to ensure sampling at $t = 0$ if necessary.
However, by choosing the initial sampled data sufficiently close to the actual data, the asynchronous nature of transmissions is respected, as indicated by the first transmission times of the 6 nodes, which occur at $t_1 = [6, 1.1, 0.4, 1.2, 0.4, 0.9]$ ms for the chosen initial conditions. The inter-transmission time thresholds in the event-triggers, (3.19), were obtained as $T = 10^{-4} \times [4.886, 4.676, 5.247, 3.976, 4.12, 3.881]$ s, which were also the minimum inter-transmission times for the presented simulation. Over a simulation time of 10 s, the average inter-transmission times for the nodes were obtained as $T = [3.1, 3, 2.7, 2.6, 2.7, 3]$ ms, which are roughly an order of magnitude larger than the inter-transmission time thresholds. Figure 3.11 shows the evolution of the Lyapunov function and its derivative along the flow of the closed loop system. Figure 3.12 shows the inter-transmission times and the cumulative frequency distribution of the inter-transmission times of the nodes.

Figure 3.11: Architecture IV: (a) The evolution of the Lyapunov function and (b) its derivative along the flow of the closed loop system.

Figure 3.12: Architecture IV: (a) Inter-transmission times and (b) the cumulative frequency distribution of the inter-transmission times of the nodes. The curves labelled with $z_i$ and $y_j$ denote the relevant inter-transmission time data of those nodes, respectively.

3.6 Conclusions

In this chapter, event-triggered dynamic output feedback controllers have been developed for architectures where the controller and the sensor are co-located, as well as for architectures where they are not.
In each case, a minimum inter-transmission time is enforced by incorporating a lower threshold on the inter-transmission interval in the event-triggering conditions. The design of these thresholds was also presented. The designed event-triggering conditions have been shown to ensure global asymptotic stability of the origin of the closed loop system. The proposed controllers have been illustrated through simulations. In Architecture III, the sensors, the controller and the actuators are not co-located. Hence, the event-triggering conditions have been designed so that the sensor and controller outputs are transmitted asynchronously. In Architecture IV, control of LTI systems over Sensor-Controller-Actuator Networks (SCAN) is considered. A SCAN is divided into three functional layers - the sensor layer, the controller/observer layer and the actuator layer - with each layer consisting of several nodes. The communication between the nodes is intermittent and event-triggered. Further, the flow of information is only from the sensor layer to the observer layer to the actuator layer, with the only intra-layer communication occurring in the observer layer. With a careful choice of basis for decentralized estimation of the plant state in the observer layer, each actuator node intermittently receives data from a corresponding unique observer node. The event-triggers are designed to utilize only locally available information, making the nodes' transmissions asynchronous. Future work will include relaxation of assumption (A3.2), extension of the design to the case where an arbitrary communication graph is given, and optimal placement of the controller/observer nodes (see Remark 3.3 for example). In each architecture, the observer and the controller gains can be chosen independently, as in the classical case. However, their effect on the exact convergence rate and the inter-sampling times has to be studied in detail. The inter-sample time thresholds can also be made less conservative.
Finally, in [44] (and references therein), a self-triggered dynamic output feedback controller was presented that renders the origin of the closed loop system globally asymptotically stable in the absence of exogenous disturbances. That controller was designed by allowing the Lyapunov function to evolve non-monotonically, which resulted in larger inter-sample times. It would be interesting to apply the ideas from the current chapter to design event-triggered variants of [44], especially with decentralized asynchronous event-triggering.

Part II

Co-Design of Event-Triggering and Quantization

Chapter 4

Utility Driven Co-design of Event-Trigger and Quantizer

4.1 Introduction

In this chapter we revisit the problem of control under data rate constraints and limited information, a problem that has been actively researched in the last decade. A good survey of this and other topics is [50]. Many papers have looked at issues such as fundamental limits on the communication rate for stabilization (see for example [51-55]), while others have focused on asymptotic stabilization with dynamic quantization [56-60]. Control under data rate constraints and limited information occurs frequently in Cyber Physical Systems (CPS), often with the additional constraint of limited computational capabilities. The field of event-triggered control (for example, [14, 15, 19]) has similar motivations and seeks to systematically design controllers that update or sample the control action at low average rates. These controllers are based on the principle of updating the control only when necessary (control by exception). In other words, event-triggered control seeks to minimize the average rate of communication instances, while the amount of information that can be conveyed at each communication instance is not limited. However, in practical situations quantization is inevitable, and hence it is necessary to consider utility driven sampled data control along with quantization.
The survey paper [22] makes a related remark that the connection between quantized and event-triggered feedback must be studied. In [61, 62], event-triggered control systems with dynamic quantization are proposed. While [61] considers a model based approach, with identical models of the plant running on the sensing and the actuation sides, [62] considers zero-order-hold actuation. Keeping in mind the limited computational resources, in this chapter we consider only static quantizers. Starting from motivations very similar to those of utility driven event-triggered control, there is a body of literature that seeks to design coarsest static quantizers. Elia and his co-workers first studied this problem in the context of quadratically stabilizable linear time invariant systems [63, 64] (single input) and [65] (two input), and demonstrated that the coarsest quantizer is the logarithmic quantizer. Fu and Xie [66] extended the results of [64] to linear multiple input systems by quantizing each dimension separately, and their design resulted in an infinite-density logarithmic quantizer. Finite density logarithmic quantizers for the multiple input case were designed in [67-70]. All the above references focused on Linear Time Invariant (LTI) systems and, except for [63, 64], the results were developed only for discrete time systems. While [63] designed an implicitly verified discrete-event controller, [64] studied the optimal periodic sampling time. The references [71, 72] utilized a Robust Control Lyapunov Function (RCLF) approach to characterize the coarsest quantizers for single input control affine nonlinear systems. Systems with quantization can be viewed as switched systems [73], the switching surfaces being the boundaries of the quantization cells. In other words, a quantizer is a discrete-event encoder whose output is the quantization state. The quantization state evolves in a discrete set, and the boundaries of the quantization cells determine the event-trigger.
The complexity of the event-triggering condition is determined by the complexity of the shape of the quantization cells. An RCLF approach to quantization in nonlinear systems may lead to very complicated geometries (for example, see Equation (10) in [71]), and the event-triggering condition may be as computationally intensive as the original control law, if not more so. Thus, we see that on the one hand event-triggered control [14, 15, 19] assumes the availability of an infinite precision quantizer, while on the other hand an RCLF quantizer assumes that the induced event-trigger is computationally inexpensive. Therefore, in the context of Cyber Physical Systems, there is a need to co-design the quantizer and the utility driven event-trigger for emulation based control.

4.1.1 Contributions

In this chapter, we exploit the common principle behind utility driven sampled data control and coarsest quantization (robustness to measurement errors) to design discrete-event controllers for semi-global asymptotic stabilization of general nonlinear systems. Specifically, we propose a methodology for co-designing the event-trigger and the quantizer in an emulation based controller. Although the resultant quantizer is not necessarily the coarsest, it is a finite density logarithmic quantizer and is easy to implement. The proposed algorithm produces an implicitly verified emulation based discrete-event controller that asymptotically stabilizes the origin with a specified arbitrary compact region of attraction. In the special case that a certain Lipschitz constant holds globally, the origin of the closed loop system is globally asymptotically stable. In comparison to the coarsest quantization literature, our quantizer design holds for general multi-input nonlinear continuous time systems. Compared to [61, 62], we co-design the event-trigger and the static quantization, keeping in mind the applicability to control systems with low computational capabilities.
Another important aspect of the proposed quantizer is the presence of hysteresis, which is utilized to guarantee a dwell time for the updates of the discrete-event controller. A significant portion of the work in this chapter has been published in [31]. The rest of the chapter is organized as follows. Section 4.2 introduces the basic notation and precisely states the problem under study. The design of the event-trigger is discussed in Section 4.3, and the quantizer design is described in Section 4.4. An example of a two dimensional nonlinear system is provided in Section 4.5, and finally some concluding remarks are made in Section 4.6.

4.2 Problem statement

Note: The results in Sections 4.2 and 4.3 do not depend on a specific choice of a norm. However, the proposed quantizer design utilizes the max or infinity norm. Therefore, we adopt this norm throughout the chapter, and use the notation $|y|$ to denote the max norm, $\|y\|_\infty$, of a vector $y$. Consider a nonlinear system of the form

$\dot{x} = f(x, u), \quad x \in \mathbb{R}^n, \; u \in \mathbb{R}^m$   (4.1)

with feedback control $u = k(x)$ that renders the origin of the closed loop system

$\dot{x} = f(x, k(x))$   (4.2)

globally asymptotically stable. Now, consider the problem of controlling the system with quantized state feedback, where the quantizer is static. A static quantizer can be modeled as a nonlinear function of the state. However, in this chapter we consider quantizers with hysteresis (hence memory). Thus, we define the quantizer function in a more general sense as follows.

Definition 4.1. A quantizer is a function $q : \mathbb{R}^n \times \Omega \to \Omega$, where $\Omega = \{\omega_0, \omega_1, \omega_2, \ldots\}$ is a countable set, with $\omega_k \in \mathbb{R}^n$ for each $k$, and $\bigcup_{\omega_k \in \Omega} \{x \in \mathbb{R}^n : q(x, \omega_k) = \omega_k\} = \mathbb{R}^n$.

In this chapter, the $\omega_k$ are called the generating points and $\Omega$ is called the generating set (or the set of generating points) of the quantizer. The quantization density is defined as follows.

Definition 4.2 (Quantization density). For $0 < \epsilon < 1$, let $N(\epsilon)$ be the number of elements $\omega \in \Omega$ such that $\epsilon \le |\omega| \le 1/\epsilon$.
The quantization density of the quantizer $q$ is defined as

$\eta_q = \limsup_{\epsilon \to 0} \dfrac{N(\epsilon)}{-2\ln(\epsilon)}.$   (4.3)

This definition is similar to the one in [64]. The presence of hysteresis in the quantization state $x_q$, and the interpretation of a quantizer as a discrete-event encoder, necessitate the treatment of $x_q$ as a state variable and of the resultant closed loop system as a hybrid system. In this chapter, we adopt the notation and theory described in [74] (and the references therein) to study this hybrid system. Let $\xi = [x, x_q] \in \mathbb{R}^{2n}$ denote the state of the hybrid system (the notation $[x, x_q]$ denotes the concatenation of the vectors $x$ and $x_q$). Then, the closed loop hybrid system may be expressed as

$\dot{\xi} = F(\xi) := \begin{cases} \dot{x} = f(x, k(x_q)) \\ \dot{x}_q = 0 \end{cases}, \quad \xi \in C$   (4.4)

$\xi^+ = G(\xi) := \begin{cases} x^+ = x \\ x_q^+ = q(x, x_q) \end{cases}, \quad \xi \in D$   (4.5)

$H = (C, F, D, G)$   (4.6)

where $C \subseteq \mathbb{R}^{2n}$ and $D \subseteq \mathbb{R}^{2n}$ are appropriately defined sets. The hybrid system $H$ is the collection of the flow set $C$, the flow map $F$, the jump set $D$ and the jump map $G$. The quantizer is specified by the set $\Omega$ and the function $q(x, x_q)$. As is clear from our formulation, the updates of the quantized state information $x_q$ are not periodic, unlike in [64]. Rather, the quantized state is updated whenever a state-dependent triggering condition is satisfied, that is, when $\xi \in D$. The event-trigger determines when the feedback is communicated and the control updated. The quantizer determines what is communicated. As discussed earlier, an efficient discrete-event controller necessitates the co-design of the event-trigger and the quantizer. Therefore, the problem under consideration in this chapter is that of co-designing the event-trigger and the quantizer in emulation based controllers for semi-global asymptotic stability of general nonlinear systems. Specifically, the problem is to design the sets $\Omega$, $C$ and $D$, and the quantizer function $q$, such that both the event-trigger and the quantizer are efficient.
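Definition 4.2 can be illustrated numerically. The sketch below is an aside, not part of the chapter's design: it takes a two-sided logarithmic generating set $\Omega = \{0\} \cup \{\pm\delta^k : k \in \mathbb{Z}\}$ for an assumed ratio $\delta \in (0, 1)$, counts the generators $\omega$ with $\epsilon \le |\omega| \le 1/\epsilon$, and compares the ratio $N(\epsilon)/(-2\ln\epsilon)$ with its limit $2/\ln(1/\delta)$, the pattern that reappears in the density formulas later in this chapter:

```python
import math

def density_estimate(delta, eps):
    """Count generators of Omega = {+/- delta**k : k in Z} with eps <= |w| <= 1/eps
    and form the ratio N(eps) / (-2 ln eps) from Definition 4.2."""
    n = 0
    k = 0
    while delta ** k >= eps:          # generators shrinking toward the origin
        if delta ** k <= 1.0 / eps:
            n += 2                    # +delta**k and -delta**k
        k += 1
    k = -1
    while delta ** k <= 1.0 / eps:    # generators growing away from the origin
        if delta ** k >= eps:
            n += 2
        k -= 1
    return n / (-2.0 * math.log(eps))

delta = 0.5
print(density_estimate(delta, 1e-8), 2.0 / math.log(1.0 / delta))
```

As $\epsilon \to 0$, the estimate approaches the closed form, confirming that a logarithmic generating set has finite density.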
In the next section, the design of the event-trigger (the design of the sets $C$ and $D$) is detailed.

4.3 Design of the Flow and the Jump Sets

The following are the main assumptions in this chapter.

(A4.1) The closed loop system (4.2) is input-to-state stable (ISS) with respect to measurement errors, i.e., there exists a $C^1$ Lyapunov function $V : \mathbb{R}^n \to \mathbb{R}$ that satisfies

$\alpha_1(|x|) \le V(x) \le \alpha_2(|x|)$

$\dfrac{\partial V}{\partial x} f(x, k(x + e)) \le -\alpha(|x|), \quad \text{if } \rho(|e|) \le |x|$

where $\alpha_1(\cdot)$, $\alpha_2(\cdot)$, $\alpha(\cdot)$ and $\rho(\cdot)$ are class $\mathcal{K}_\infty$ functions. (A continuous function $\alpha : [0, \infty) \to [0, \infty)$ is said to belong to the class $\mathcal{K}_\infty$ if it is strictly increasing, $\alpha(0) = 0$ and $\alpha(r) \to \infty$ as $r \to \infty$ [45].)

(A4.2) The function $\rho$ is Lipschitz on compact sets.

It is actually sufficient to assume that the origin of the system (4.2) is asymptotically stable, as opposed to the ISS assumption (A4.1). However, the ISS assumption keeps the exposition focused and simpler. Expressing the measurement/quantization error as

$e \triangleq x_q - x,$   (4.7)

let us define the flow and the jump sets as

$C = \{\xi \in \mathbb{R}^{2n} : |e| \le W|x|\}$   (4.8)

$D = \{\xi \in \mathbb{R}^{2n} : |e| \ge W|x|\}$   (4.9)

where $W$ is a positive constant. The sets $C$ and $D$ capture a simple event-triggering condition. The stability aspects of the hybrid system (4.6) may be studied through a hybrid Lyapunov function candidate [74], which is defined as follows.

Definition 4.3 (Lyapunov-function candidate). Given the hybrid system $H$ with data $(C, F, D, G)$ and the compact set $A \subset \mathbb{R}^p$, the function $V_h : \operatorname{dom} V_h \to \mathbb{R}$ is a Lyapunov-function candidate for $(H, A)$ if (i) $V_h$ is continuous and nonnegative on $(C \cup D) \setminus A \subseteq \operatorname{dom} V_h$, (ii) $V_h$ is continuously differentiable on an open set $O$ satisfying $C \setminus A \subseteq O \subseteq \operatorname{dom} V_h$, and (iii) $\lim_{\{\xi \to A, \; \xi \in (\operatorname{dom} V_h) \cap (C \cup D)\}} V_h(\xi) = 0$.

For the hybrid system (4.6), let

$A \triangleq \{\xi \in \mathbb{R}^{2n} : x = x_q = 0\}$   (4.10)

and define the hybrid Lyapunov function candidate for the pair $(H, A)$ as

$V_h(\xi) = V(x) + \max\{0, |x_q - x| - 2W|x|\}$   (4.11)

where $V$ is given by (A4.1). Notice that $V_h(\xi) = V(x)$ for all $\xi \in C$. The function $V_h(\xi)$ is positive definite and its sub-level sets are compact.
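The trigger encoded by the sets (4.8)-(4.9) is straightforward to prototype. The sketch below is illustrative only: the linear plant, the gain and the threshold $W$ are invented for the example, and the jump map resamples exactly ($x_q^+ = x$), whereas the quantized jump map designed in Section 4.4 replaces this ideal update. The simulation flows with the control frozen at the last sample and jumps whenever $|e|_\infty \ge W|x|_\infty$:

```python
import numpy as np

def simulate(f, W, x0, T=10.0, dt=1e-3):
    """Flow with u frozen at the last sample xq; jump (resample) whenever the
    state enters the jump set D = {|e|_inf >= W |x|_inf}, with e = xq - x."""
    x, xq = np.array(x0, float), np.array(x0, float)
    jumps = 0
    for _ in range(int(T / dt)):
        e = xq - x
        if np.linalg.norm(e, np.inf) >= W * np.linalg.norm(x, np.inf):
            xq = x.copy()              # ideal (infinite-precision) jump map
            jumps += 1
        x = x + dt * f(x, xq)          # forward-Euler flow; xq constant in between
    return x, jumps

# Toy plant dx = A x + B k(xq) with a hypothetical stabilizing gain K
A = np.array([[0.0, 1.0], [-2.0, 3.0]])
B = np.array([0.0, 1.0])
K = np.array([-2.0, -5.0])             # A + B K is Hurwitz for this choice
f = lambda x, xq: A @ x + B * (K @ xq)
xf, jumps = simulate(f, W=0.05, x0=[1.0, -0.5])
print(np.linalg.norm(xf, np.inf), jumps)
```

Despite the intermittent updates, the state converges toward the origin; the jump counter plays the role of the number of control updates.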
Also note that $\langle \nabla V_h(\xi), F(\xi) \rangle = \dfrac{\partial V}{\partial x} f(x, k(x_q))$ for all $\xi \in C \setminus A$ and in an open neighborhood of $C \setminus A$.

4.3.1 Selection of W

Let

$B_r = \{x \in \mathbb{R}^n : |x| \le r\}$   (4.12)

$E_r = \{\xi \in \mathbb{R}^{2n} : |x| \le r, \; |x_q| \le r\}$   (4.13)

Note that for each finite $r$, $B_r$ and $E_r$ are compact sets in $\mathbb{R}^n$ and $\mathbb{R}^{2n}$, respectively. For each $\mu \ge 0$, define

$R_\mu \triangleq \{\xi \in \mathbb{R}^{2n} : V(x) \le \mu, \; |x_q| \le R \triangleq \alpha_1^{-1}(\mu)\}$   (4.14)

where $\alpha_1(\cdot)$ is the function from assumption (A4.1). Then, it is clear that $R_\mu \subseteq E_R$. For each compact set $B$ that contains the origin, there is a $\mu \ge 0$ such that $B \subseteq R_\mu$. Therefore, without loss of generality, it is assumed that the prescribed region of attraction is of the form (4.14). If assumption (A4.2) holds, then there exists a constant $W_R > 0$ such that

$W_R |x| \le \rho^{-1}(|x|), \quad \forall x \in B_R$   (4.15)

The design of the flow and the jump sets is complete once we specify how the constant $W$ is to be chosen. The following lemma provides a methodology for accomplishing this goal.

Lemma 4.1. Consider the hybrid system (4.6) with $C$ and $D$ defined as in (4.8)-(4.9). Suppose assumptions (A4.1) and (A4.2) hold. Let the desired region of attraction be $R_\mu$, (4.14), for some $\mu \ge 0$. If $W \le W_R$, then

$\langle \nabla V_h(\xi), F(\xi) \rangle < 0, \quad \forall \xi \in (C \cap R_\mu) \setminus A$   (4.16)

Proof. By the definition of $W_R$ and the fact that $W \le W_R$, it follows that

$W|x| \le W_R|x| \le \rho^{-1}(|x|), \quad \forall x \in B_R$

Recall the definition of the flow set $C$, (4.8). Also, $R_\mu$ is a subset of $E_R$, (4.13). Therefore, $(C \cap R_\mu) \subseteq \{\xi \in \mathbb{R}^{2n} : |e| \le \rho^{-1}(|x|)\}$, and assumption (A4.1) immediately implies that (4.16) is true.

Remark 4.1. If the function $\rho$ is globally Lipschitz, then (4.16) holds for all $\xi \in C \setminus A$ and not just for $\xi \in (C \cap R_\mu) \setminus A$.

If $W_R \ge 1$, then quantization is not required, and a constant control $u \equiv k(0)$ asymptotically stabilizes the origin of the nonlinear system (4.1). This is made more precise in the following proposition and the subsequent discussion.

Proposition 4.1. Consider the hybrid system (4.6) with $C$ and $D$ defined as in (4.8)-(4.9). If $W_R \ge W > 1$ and $\Omega = \{0\}$, then the set $A$, (4.10), is asymptotically stable with $R_\mu$ included in the region of attraction.

Proof.
With $\Omega = \{0\}$ and $W > 1$, the set $(D \cap R_\mu) \setminus A = (\{\xi \in \mathbb{R}^n \times \{0\} : |x - 0| \ge W|x|\} \cap R_\mu) \setminus A = \emptyset$, the empty set. On the other hand, $(C \cap R_\mu) = \{\xi \in \mathbb{R}^n \times \{0\} : |x - 0| \le W|x|\} \cap R_\mu = R_\mu$. Lemma 4.1 then implies that the set $A$ is asymptotically stable with $R_\mu$ included in the region of attraction.

If $W_R \ge W > 1$ and $\Omega = \{0\}$, then the set $D \setminus A$ is empty. Thus, the hybrid system (4.6) is really just the continuous time system (4.4) with $x_q \equiv 0$. If $W_R \ge W = 1$ and $\Omega = \{0\}$, then $(C \cap R_\mu) = (D \cap R_\mu) = R_\mu$ and there can be jumps in the solutions of $H$, (4.6). However, the jump map is the identity map, $x^+ = x$ and $x_q^+ = x_q = 0$. Since the jump map is induced by the controller and is not inherent in the system, the identity jump map can be ignored by the controller, and we can focus only on purely flowing solutions that start in $R_\mu \setminus A$. All such solutions asymptotically converge to the set $A$. Therefore, in the sequel, we assume that $W = W_R < 1$ unless specifically mentioned otherwise. In the next section, the design process of the quantizer is detailed.

4.4 Design of The Quantizer

Now, all that is left to be designed is the quantizer. Our goal here is the following: given an event-trigger, (4.8)-(4.9), satisfying Lemma 4.1, design an efficient quantizer that semi-globally asymptotically stabilizes the origin of the system with a prescribed compact region of attraction. In the coarsest quantizer literature, robustness to measurement errors is exploited to design finite density logarithmic quantizers and, in single input LTI systems, the coarsest quantizer. The quantizer in this chapter also utilizes the same principle, although indirectly, through the simplified event-triggering condition designed in Section 4.3. In our opinion, this approach is better suited for continuous time nonlinear systems, for two reasons. First, considering general nonlinear systems, the set $\{x \in \mathbb{R}^n : \frac{\partial V}{\partial x} f(x, k(x_q)) < 0\}$ for an arbitrary $x_q$ can have a complicated shape. This can potentially lead to a complex design process that requires significant customization for individual systems.
The second drawback is that of implementation: checking, in real time, whether the state belongs to a particular quantization cell can be as computationally intensive as computing the control itself, if not more so. This defeats our motivation of designing controllers that require a low rate of communication and low computational capabilities. The proposed quantizer design is much simpler and applicable to a wide range of nonlinear systems. The chief features of the proposed quantizer design are as follows. The quantization cells are determined by the simplified triggering condition, (4.8)-(4.9). In the triggering condition, the max or infinity norm is used, leading to a very easily implementable triggering condition and quantizer. The quantization cells are allowed to overlap, and the resulting hysteresis is utilized to avoid chattering of the controller.

Definition 4.4. For each $k \in \{0, 1, 2, \ldots\}$, the quantization cell generated by $\omega_k$ is the set $C_k = \{x \in \mathbb{R}^n : q(x, \omega_k) = \omega_k\}$.

In the hybrid system (4.6), $x_q$ changes only during jumps. In order to minimize the number of control updates or jumps, it is necessary to ensure that at each jump the state is mapped outside the jump set $D$; more precisely, it is required that

$\xi^+ = G(\xi) \in (C \setminus D) \cap E_R, \quad \forall \xi \in (D \cap E_R) \setminus A$   (4.17)

However, $x$ does not change during jumps, and $x_q^+ = q(x, x_q)$. Hence, the quantizer needs to be designed such that $x_q^+ \ne x_q$. Therefore, by the definition of a quantization cell, it is necessary that $(C_k \times \{\omega_k\}) \cap (D \cap E_R) \setminus A = \emptyset$ for each $k \in \{0, 1, 2, \ldots\}$.
In other words, the quantizer must be defined such that, for each $k \in \{0, 1, 2, \ldots\}$,

$(C_k \times \{\omega_k\}) \cap (C \cap E_R) = (C_k \times \{\omega_k\}) \cap E_R$   (4.18)

Finally, $A \subseteq C$, $A \subseteq D$ and $A \cap (C_k \times \{\omega_k\}) = \emptyset$ if $\omega_k \ne 0$. Hence, it is necessary to choose $\omega_0 = 0$. Therefore, the quantizer has to satisfy the following constraints.

$\omega_k \in \mathbb{R}^n \text{ and } |\omega_k| \le R, \quad k \in \{1, 2, \ldots\}$   (4.19)

$C_k = \{x \in \mathbb{R}^n : |\omega_k - x| < W_R|x|\}, \quad k \in \{1, 2, \ldots\}$   (4.20)

$\omega_0 = 0$   (4.21)

$C_0 = \{0\} \cup \{x \in \mathbb{R}^n : |x| > R\}$   (4.22)

$C_0 \cup \bigcup_{k=1}^{\infty} \bar{C}_k^\beta = \mathbb{R}^n, \quad 0 < \beta < 1$   (4.23)

where $C_k^\beta = \{x \in \mathbb{R}^n : |\omega_k - x| < \beta W_R|x|\}$ and $\bar{C}_k^\beta$ denotes the closure of the set $C_k^\beta$. Note that $\bar{C}_k^\beta \subset C_k$ for each $k$. The constraint (4.23) has been introduced so that the resultant quantizer is over-designed and the quantization cells overlap. In other words, the final constraint induces hysteresis in the quantizer, which is useful for avoiding chattering. Moreover, excluding $C_0$, each cell $C_k$ is such that, in the region where $C_k$ overlaps with no other cell, $|\omega_k - x| \le \beta W_R|x|$. Next, notice that the cell $C_0$ includes the region outside $B_R$. Any arbitrary nominal value could have been chosen as the quantization state for the region outside $B_R$; we have selected it to be $0$.

We define the quantizer function as follows.

$q(x, \omega_k) = \begin{cases} \omega_k, & \text{if } x \in C_k \\ \underset{\omega_j : \, |\omega_j - x| \le \beta W_R|x|}{\arg\min} |\omega_j - x|, & \text{if } x \notin C_k, \; x \ne 0 \\ \omega_0, & \text{if } x = 0 \end{cases}$   (4.24)

In the second case there can be more than one solution. Note that the quantizer function satisfies (4.17). The following theorem demonstrates that a quantizer satisfying (4.19)-(4.24) asymptotically stabilizes the set $A$ with $R_\mu$ in the region of attraction.

Theorem 4.2. Consider the hybrid system (4.6) with $C$ and $D$ defined as in (4.8)-(4.9), and suppose assumptions (A4.1) and (A4.2) hold. Let the desired region of attraction be $R_\mu$, (4.14), for some $\mu \ge 0$. Suppose that $W \le W_R$ and that the quantizer is designed to satisfy (4.19)-(4.24). Then, the set $A$ is asymptotically stable and the region of attraction includes $R_\mu$.

Proof. The compact set $R_\mu \subseteq E_R$, where $R = \alpha_1^{-1}(\mu)$. The function $V_h$ in (4.11) is a hybrid Lyapunov candidate function for the pair $(H, A)$. Consider the event-trigger (the sets $C$ and $D$) designed in (4.8)-(4.9). Given a quantizer that satisfies (4.19)-(4.24), the following hold.
$\langle \nabla V_h(\xi), F(\xi) \rangle < 0, \quad \forall \xi \in (C \cap R_\mu) \setminus A$

$V_h(G(\xi)) - V_h(\xi) \le 0, \quad \forall \xi \in (D \cap R_\mu) \setminus A$

where the first relation follows from Lemma 4.1, and the second from the fact that the quantizer function $q$ ensures the satisfiability of (4.17). Hence, for every $c > 0$, no complete solution remains in the compact set $\{\xi \in \mathbb{R}^{2n} : V_h(\xi) = c\} \cap R_\mu$. Recall the definition of $R_\mu$, (4.14). The function $V(x)$ decreases monotonically during flows and does not change during jumps. The constraints (4.19) and (4.21) imply that $|x_q| \le R$ at all times. Hence, $R_\mu$ is forward-invariant (see [74] for the definitions of the terms "forward invariance", "maximal solution" and "complete solution"), and every maximal solution that starts in $R_\mu$ is a complete solution. Therefore, Theorem 23 in [74] implies that the set $A$ is asymptotically stable and the region of attraction includes the set $R_\mu$.

Corollary 4.1. Suppose that, in addition to assumptions (A4.1) and (A4.2), the functions $f$ and $k$ are Lipschitz on compact sets. Then, there exists a constant $d > 0$ such that, for all solutions starting in $R_\mu \setminus A$, the jumps are separated by at least an amount of time $d$.

Proof. Outside the set $A$, $x_q^+ \ne x_q$ and $x_q^+$ is given by the second case of (4.24). Further, (4.23) implies that after a jump $x \in \bar{C}_k^\beta$, where $k$ is such that $x_q^+ = \omega_k$. Therefore, $|x_q^+ - x| / (W_R|x|) \le \beta < 1$. The rest of the proof follows from an analysis similar to that in [14].

In Theorem 4.2, the set $A$ is globally asymptotically stable if $W_R$ is a global constant. Notice that in event-triggered control the measurement error is reset to zero at the triggering instants. However, in the proposed discrete-event controller, $|x_q^+ - x| \ne 0$ and instead satisfies $|x_q^+ - x| \le \beta W|x|$, which is not zero in general. This is the reason why hysteresis is required in the quantizer, to avoid chattering. Next, we demonstrate that a quantizer satisfying (4.19)-(4.24) indeed exists, and construct a minimal set of generating points that satisfy (4.19)-(4.23).
For the sake of clarity, we first outline the design process for $n = 1$, that is, for nonlinear systems (4.2) that are one dimensional.

4.4.1 Design of $\Omega$ in One Dimensional Systems

We invert the problem and ask: given a point in the region of interest, what values can $\omega_k$, (4.19), take such that $C_k$, (4.20), contains that point? If the point is $0$, then it is contained in $C_0$. Also, all cells other than $C_0$ are intervals. Therefore, we ask the more specific question: given $r_k^u \ne 0$ such that $|r_k^u| \le R$, what should $\omega_k$ be such that $|\omega_k| \le |r_k^u|$ and $|\omega_k - r_k^u| = \beta W_R |r_k^u|$, where $0 < \beta < 1$ is a constant? Thus, $r_k^u$ is the upper or outer extreme of the interval $C_k^\beta$ (see Figure 4.1), and $\beta$ is a parameter that allows us to over-design. The inverse problem has the unique solution

$\omega_k = (1 - \beta W_R)\, r_k^u$   (4.25)

Then the inner or lower extreme of the interval $C_k^\beta$ is

$r_k^l = \dfrac{\omega_k}{1 + \beta W_R}$   (4.26)

Therefore, the interval $C_k^\beta$ is the open interval $(r_k^l, r_k^u)$ or $(r_k^u, r_k^l)$, depending on whether $r_k^u$ is positive or negative, respectively. The points $r_k^u$ and $r_k^l$ are in the set $\bar{C}_k^\beta$ (see Figure 4.1). If we now set $r_{k+2}^u = r_k^l$, then we can recursively determine the set $\Omega$. Following this procedure, we arrive at the following.

$\omega_0 = 0, \quad r_1^u = R, \quad r_2^u = -R$
$\omega_k = (1 - \beta W_R)\, r_k^u, \quad \forall k \in \{1, 2, \ldots\}$
$r_k^l = \dfrac{\omega_k}{1 + \beta W_R}, \quad \forall k \in \{1, 2, \ldots\}$
$r_{k+2}^u = r_k^l, \quad \forall k \in \{1, 2, \ldots\}$
$\omega_{k+2} = \dfrac{1 - \beta W_R}{1 + \beta W_R}\, \omega_k, \quad \forall k \in \{1, 2, \ldots\}$   (4.27)

Note the symmetry in the positive and negative generators $\omega_k$. Simple calculations give the quantization density as

$\eta_q = \dfrac{2}{\ln\frac{1 + \beta W_R}{1 - \beta W_R}}$   (4.28)

Thus, the proposed quantizer is a finite density logarithmic quantizer. The design process is summarized in Figure 4.1.

Figure 4.1: Design of $\Omega$ for 1-D systems. The blue lines indicate the actual quantization cells or intervals, while $r_k^u$ and $r_k^l$ indicate the extremities of the over-designed quantization cells $C_k^\beta$.
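The recursion (4.27) and the hysteretic switching rule (4.24) can be sketched in a few lines for $n = 1$. The code below is a simplified illustration (it assumes $\beta W_R < 1$, truncates the generating set after `kmax` generators per sign, and falls back to $\omega_0 = 0$ outside the covered region):

```python
import math

def generators_1d(R, WR, beta, kmax=50):
    """Build the 1-D generating set via the recursion (4.27)."""
    b = beta * WR
    omegas = [0.0]                    # omega_0 = 0
    for ru in (R, -R):                # r_1^u = R and r_2^u = -R
        for _ in range(kmax):
            w = (1.0 - b) * ru        # omega_k = (1 - beta*WR) r_k^u
            omegas.append(w)
            ru = w / (1.0 + b)        # r_k^l, reused as the next outer radius
    return omegas

def q(x, xq, omegas, WR, beta):
    """Hysteretic quantizer in the spirit of (4.24): keep the current generator
    while x stays in its cell C_k = {|xq - x| < WR*|x|}; otherwise switch to the
    nearest generator whose over-designed cell {|w - x| <= beta*WR*|x|} contains x."""
    if x == 0.0:
        return 0.0
    if abs(xq - x) < WR * abs(x):
        return xq
    candidates = [w for w in omegas if abs(w - x) <= beta * WR * abs(x)]
    return min(candidates, key=lambda w: abs(w - x)) if candidates else 0.0

omegas = generators_1d(R=10.0, WR=0.5, beta=0.9)
w = q(0.37, 10.0, omegas, WR=0.5, beta=0.9)   # forced jump: 10.0 is far from x
print(w)
```

The ratio of consecutive same-sign generators is $(1 - \beta W_R)/(1 + \beta W_R)$, which is what makes the quantizer logarithmic, and a state that has just been re-quantized satisfies $|x_q^+ - x| \le \beta W_R |x|$, i.e., it lands strictly inside the flow set.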
4.4.2 Design of $\Omega$ in Two Dimensional Systems

The design process for two dimensional systems is based on that for one dimensional systems, though there are also some significant differences. In 1-D systems there is only one type of cell (other than $C_0$). In 2-D systems, there is a larger variety of cells. More specifically, in 2-D systems there are three types of cells, other than $C_0$. These are shown in Figure 4.2. The state variable $x$ is the vector $[x_1, x_2]$. The $\zeta_1$ and $\zeta_2$ axes are the lines $x_1 = x_2$ and $x_1 = -x_2$, respectively. Type 1 cells are the ones that lie completely within one of the quadrants of the $\zeta_1$-$\zeta_2$ axes. Type 2 cells are the ones whose generators lie on either the $\zeta_1$ or the $\zeta_2$ axis. Type 3 cells are those whose generators do not lie on the $\zeta_1$-$\zeta_2$ axes and yet the cell lies in more than one of the $\zeta_1$-$\zeta_2$ quadrants.

Figure 4.2: Possible types of cells in 2-D systems, excluding $C_0$. The dots are the generators of the quantization cells, whose boundaries are represented by the polygons.

To describe the different types of cells algebraically, let us define Type 1 blocks as (for arbitrary $n$)

$S_i(\omega) \triangleq \{x \in \mathbb{R}^n : |\omega - x| < W_R |x_i|\}, \quad i \in \{1, 2, \ldots, n\}$

Every cell $C_k$ is the union of the $n$ Type 1 blocks

$C_k = \bigcup_{i=1}^{n} S_i(\omega_k)$

A cell $C_k$ is of Type 1 if and only if it satisfies

$C_k = S_i(\omega_k), \quad \text{for some } i \in \{1, 2, \ldots, n\}$   (4.29)

A cell $C_k$ is of Type 2 if $|\omega_{k,1}| = |\omega_{k,2}| = \cdots = |\omega_{k,n}|$, where $\omega_{k,i}$ denotes the $i$-th component of $\omega_k$. A Type 3 cell is one that is neither of Type 1 nor of Type 2. However, Type 3 cells can be approximated by appropriate Type 1 blocks, which have the same shape as Type 1 cells. Note that similar statements hold for the quantization cells $C_k^\beta$ with $\beta < 1$. Figure 4.3 shows the geometry of Type 1 cells $C_k$. They have two parallel sides, which are in turn parallel to either the $x_1$ or the $x_2$ axis. Each cell is completely determined by the lengths $a$ and $b$, which are given as

$a = \dfrac{W_R}{1 - W_R}\,|\omega_k|, \qquad b = \dfrac{W_R}{1 + W_R}\,|\omega_k|$

Note that $a$ and $b$ depend only on $|\omega_k|$.
Also, the magnitude of the slope of the non-parallel sides is equal to $W_R$, which is independent of $\omega_k$. The two parallel sides of the cell are at distances $|\omega_k| + a$ and $|\omega_k| - b$ away from the origin.

Figure 4.3: Geometry of Type 1 cells.

That is, the cell is part of an annulus (in the max norm sense) whose outer radius is $|\omega_k| + a$ and whose inner radius is $|\omega_k| - b$. This information about the geometry can be used to solve the inverse problem: given the outer radius of the cell, what should $|\omega_k|$ be? The solution is of course given by (4.27), with $r_k^u$, $r_k^l$ and $\omega_k$ interpreted as the outer radius, the inner radius and $|\omega_k|$, respectively. Using these facts, we design the quantization cells with Type 1 and Type 2 cells and Type 1 approximations of Type 3 cells. The algorithm progresses in stages by recursively covering one annulus after another with quantization cells. The process of determining these annuli is similar to the 1-D case, with the difference that the procedure (4.27) now gives the inner and outer radii (in the max norm sense) of the overlapping annuli (Figure 4.4).

Figure 4.4: In the first stage of the design process, annuli are selected in a process analogous to (4.27) and Figure 4.1. The inner and outer boundaries of the first annulus are shown in blue, while those of the second annulus are shown in red.

The design process is summarized in Figure 4.5. Figures 4.5(a) and 4.5(b) demonstrate the process of covering an annulus of a given outer radius. The outer radius of the annulus determines the radius at which the generators need to lie, according to the appropriate interpretation of (4.27). The generators are stacked equidistantly on a line to completely cover a quadrant of the annulus, with the constraint that there be a generator on each of the $\zeta_1$ and $\zeta_2$ axes. The process is repeated to cover each quadrant of the annulus.
Then, the inner radius of the covered annulus determines the outer radius of the next annulus. This process recursively designs the set $\Omega$ completely. Simple calculations yield that

Number of cells in an annulus $= 4\left\lceil \dfrac{1 + \beta W_R}{\beta W_R} \right\rceil$   (4.30)

where $\lceil \cdot \rceil$ denotes the smallest integer greater than or equal to its argument. Note that this number is independent of the outer or inner radius of the annulus. Of course, the number of annuli required to cover a region is the same as in 1-D systems.

Figure 4.5: Figs. 4.5(a) and 4.5(b) demonstrate the steps in covering an annulus: (a) two stacked cells; (b) cells stacked to cover a quadrant. The dots indicate the generators of the quantization cells. Figs. 4.5(c) and 4.5(d) show that the procedure leads to a logarithmic quantizer in two dimensions: (c) an annulus of a given outer radius; (d) the inner radius of the first annulus is the outer radius of the next one.

Thus the quantization density in 2-D systems is

$\eta_q = 4\left\lceil \dfrac{1 + \beta W_R}{\beta W_R} \right\rceil \dfrac{1}{\ln\frac{1 + \beta W_R}{1 - \beta W_R}}$   (4.31)

Hence, again, the designed quantizer is a finite density logarithmic quantizer.

4.4.3 Design of $\Omega$ in n Dimensional Systems

The design in higher dimensional systems is similar to the two stage process for 2-D systems. In the first stage, an annulus of a given outer radius is covered, and then the outer radius is updated, thus yielding $\Omega$ recursively. As in the 2-D case, there are three main types of cells. Type 1 and Type 2 cells are similar to those in the 2-D case. However, Type 3 cells can be classified into multiple sub-types, giving rise to much richer design options. In this chapter, we do not investigate them further. We propose a direct adaptation of the 2-D case, that is, using Type 1 approximations of Type 3 cells.
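The 2-D cell count (4.30) and density (4.31), as reconstructed here with $\beta W_R$ as the single parameter, are easy to tabulate; the sketch below illustrates the trade-off that a larger threshold $\beta W_R$ buys a coarser quantizer (fewer cells per annulus and lower density):

```python
import math

def cells_per_annulus(bWR):
    # (4.30): 4 * ceil((1 + bWR) / bWR), where bWR stands for beta*W_R
    return 4 * math.ceil((1.0 + bWR) / bWR)

def density_2d(bWR):
    # (4.31): cells per annulus divided by ln((1 + bWR) / (1 - bWR))
    return cells_per_annulus(bWR) / math.log((1.0 + bWR) / (1.0 - bWR))

for bWR in (0.04, 0.1, 0.5):
    print(bWR, cells_per_annulus(bWR), round(density_2d(bWR), 1))
```

As stated above, the cell count is independent of the annulus radii; only the threshold enters.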
This process gives the quantization density as
\[ q = 2n\left\lceil \frac{1+W_R}{W_R} \right\rceil^{n-1} \frac{1}{\ln\frac{(1+W_R)}{(1-W_R)}} \tag{4.32} \]
This is a finite-density logarithmic quantizer. However, it is very inefficient, as the density grows exponentially with the dimension of the system. Hence, efficient design in higher dimensions is a topic of future research.

4.5 Example

In this section, the proposed emulation-based controller is illustrated through an example. Consider the second-order nonlinear system
\[ \dot{x}_1 = x_2, \qquad \dot{x}_2 = \frac{1}{l}\left(-g\sin(x_1) + u\right). \tag{4.33} \]
Let the control input be given as
\[ u = \phi(x) = -l\lambda x_2 + g\sin(x_1) - K(x_2 + \lambda x_1), \tag{4.34} \]
where $K > 0$ and $\lambda > 0$. Let the Lyapunov function in assumption (A4.1) be
\[ V(x) = \frac{l}{2}(x_2 + \lambda x_1)^2 + K\lambda x_1^2. \tag{4.35} \]
Routine calculations yield
\[ \frac{\partial V}{\partial x} f(x, \phi(x+e)) \le -\alpha_3(|x|) + L\,\gamma(|x|)|e| \tag{4.36} \]
where
\[ \alpha_3(|x|) = \min(K, K\lambda^2)|x|^2, \qquad \gamma(|x|) = \sqrt{2}\sqrt{\lambda^2+1}\,|x|, \qquad L = \sqrt{2}\sqrt{(\lambda K + g)^2 + (K + l\lambda)^2}. \]
Let us choose the constant $W$ in the sets $C$ and $D$ as
\[ W = W_r = \sigma\,\frac{\min\{K, K\lambda^2\}}{L\sqrt{2}\sqrt{\lambda^2+1}}, \qquad 0 < \sigma < 1. \]
Then, it is clear that if $|e| \le W|x|$ then
\[ \frac{\partial V}{\partial x} f(x, \phi(x+e)) \le -(1-\sigma)\alpha_3(|x|) = -\alpha(|x|), \]
that is, assumption (A4.1) is satisfied. Further, since $W_r$ is a global constant (independent of $r$), the discrete-event controller guarantees global asymptotic stability of the set $\mathcal{A}$ in the hybrid system $\mathcal{H}$, (4.6). The quantizer designed as in Section 4.4, with the design parameters chosen as $0.99$ and $0.9$, has a density of 2582. Figure 4.6 shows the evolution of $|x|$ and $|e|/W$ for a sample trajectory. In the simulations, the parameters $g$, $l$, $K$ and $\lambda$ were chosen as $10$, $0.2$, $1$ and $1$, respectively, from which $W = 0.0447$ is obtained. The number of jumps, or equivalently the number of control updates, was observed to be 165 in the simulated time, giving an average update frequency of 33 Hz. The minimum inter-update time was observed to be $0.0011$ s.

Figure 4.6: Evolution of $|x|$ and $|e|/W$.

4.6 Discussion and Conclusions

This chapter revisits the problem of control under data-rate constraints.
Specifically, we have combined the ideas of event-triggered control and coarsest quantization to propose a method for co-designing the event-trigger and the quantizer in emulation-based controllers for stabilization tasks. The resulting quantizer is a finite-density logarithmic quantizer, applicable to general multi-input, multi-dimensional continuous-time nonlinear systems. To the best of our knowledge, this work is the first to look at the co-design of the event-trigger and the quantizer in emulation-based discrete-event controllers. The proposed design algorithm results in a controller that guarantees semi-global asymptotic stability of the origin of the system with a specified, arbitrary compact region of attraction. In case a certain Lipschitz constant is global, the origin is globally asymptotically stable. If only semi-global practical stability is desired, with any specified compact region of attraction and ultimate bound, the quantizer has a finite number of cells. This makes the sensing and control system very simple, and by storing the control values for each cell in memory, the control response can be made significantly faster.

Several extensions are possible, such as treating $W$ itself as a state that is updated during the jumps along with the quantized state. In the quantizer design process, $W_R$ need not be held fixed. Instead, for each annulus, $R$ and hence $W_R$ can be appropriately redefined. This is possible only in nonlinear systems, and it will lead to lower-density quantizers than otherwise. Some future directions of research are the use of coordinate transformations as pre- and post-processing stages for lower-density quantizers, and improvements to the design process in three and higher dimensions. Finally, as mentioned in Section 4.3, the proposed design easily extends to a case with a weaker assumption than the ISS one.
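The cell-count and density formulas of this chapter are easy to check numerically. The short sketch below is an illustration only; the exact placement of the ceiling in (4.30) and (4.32) is assumed from the (garbled) source, so treat the functions as a reading of those formulas rather than a definitive implementation.

```python
import math

# Numerical check of the quantizer formulas, assuming they read
#   (4.30): N = 4*ceil((1 + W_R)/W_R)                     (cells per annulus, 2-D)
#   (4.32): q = 2n*ceil((1 + W_R)/W_R)**(n-1) / ln((1 + W_R)/(1 - W_R))
# with (4.31) the n = 2 special case. W_R must lie in (0, 1).
def cells_per_annulus(W_R):
    return 4 * math.ceil((1.0 + W_R) / W_R)

def quantizer_density(W_R, n=2):
    return (2 * n * math.ceil((1.0 + W_R) / W_R) ** (n - 1)
            / math.log((1.0 + W_R) / (1.0 - W_R)))

print(cells_per_annulus(0.5))        # -> 12; independent of the annulus radii
print(quantizer_density(0.5, n=2))
print(quantizer_density(0.5, n=3))   # grows exponentially with the dimension n
```

Note that `cells_per_annulus` takes no radius argument, reflecting the observation after (4.30) that the count is independent of the annulus, and that raising `n` inflates the density through the $(n-1)$ power, matching the inefficiency discussed after (4.32).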
Part III

Utility Driven Event-Triggering for Trajectory Tracking

Chapter 5

Utility Driven Sampled Data Control for Trajectory Tracking

5.1 Introduction

In this chapter, we investigate an event-triggered control algorithm for trajectory tracking. Tracking a time-varying trajectory, or even a set-point, is of tremendous practical importance in many control applications. In these applications, the goal is to make the state of the system follow a reference or desired trajectory, which is usually specified as an exogenous input to the system. In this chapter, the reference trajectory is generated by a reference system. The majority of the previous works in the event-triggered control literature assumed a state feedback control strategy with no exogenous input, some exceptions being [12,15,18,19,21,41,75], where unknown disturbances appear as exogenous inputs. In this chapter, however, we consider exogenous inputs that are available to the controller through measurements, namely the reference trajectory and the input to the reference system.

5.1.1 Contributions

The main contribution of this chapter is the design of event-triggered controllers for trajectory tracking in nonlinear systems, which is a special case of nonlinear systems with exogenous inputs. It is assumed that the reference trajectory and the exogenous input to the reference system are uniformly bounded. Given a nonlinear system and a continuous-time controller that ensures global uniform asymptotic tracking of the desired trajectory, the proposed algorithm provides an event-based controller that guarantees uniform ultimate boundedness of the tracking error and ensures that the inter-event times of the controller are bounded away from zero. In the special case that the derivative of the exogenous input to the reference system is also uniformly bounded, an arbitrarily small ultimate bound for the tracking error can be designed.
In this chapter, unlike in the event-triggered control literature, the continuous-time control law is assumed to render the closed-loop system asymptotically stable, rather than ISS with respect to measurement errors. Although on compact sets the latter condition can be arrived at from the former, our choice allows a direct and clear procedure for designing an event-triggering condition with time-varying components that results in fewer controller updates. The results in this chapter for nonlinear systems have appeared in [32,33].

The rest of the chapter is organized as follows. In Section 5.2 we set up the problem and introduce the notation used in the chapter. Subsequently, in Section 5.3, the basic design procedure is highlighted for the special case of linear systems. Then, in Section 5.4, the general case of nonlinear systems is addressed and results for three different classes of reference trajectories are presented. The theoretical results in the chapter are illustrated through numerical simulations of a second-order nonlinear system in Section 5.5. Finally, the results are summarized in Section 5.6.

5.2 Problem statement and notation

Consider a nonlinear system of the form
\[ \dot{x} = f(x, u), \qquad x \in \mathbb{R}^n,\ u \in \mathbb{R}^m \tag{5.1} \]
which has to track a reference trajectory defined implicitly by the dynamical system
\[ \dot{x}_d = f_r(x_d, v), \qquad x_d \in \mathbb{R}^n,\ v \in \mathbb{R}^q \tag{5.2} \]
where the external signal $v$ and the initial condition of the signal $x_d$ determine the specific reference trajectory. Let the tracking error be defined as $\tilde{x} \triangleq x - x_d$. In general, a controller for tracking a reference trajectory depends on both the tracking error as well as the reference trajectory. Hence, we assume that the control signal is of the form
\[ u = \phi(\xi), \qquad \text{where } \xi \triangleq [\tilde{x}; x_d; v] \tag{5.3} \]
where the notation $[a_1; a_2; a_3]$ denotes the column vector formed by the concatenation of the vectors $a_1$, $a_2$ and $a_3$.
Consequently, the closed-loop system that describes the tracking error is given as
\[ \dot{\tilde{x}} = f(\tilde{x} + x_d, \phi(\xi)) - \dot{x}_d. \tag{5.4} \]
Now, consider a controller that updates the control only intermittently and not continuously in time. Let $t_i$, for $i = 0, 1, 2, \ldots$, be the time instants at which the control is computed and updated. Then, the tracking error evolves as
\[ \dot{\tilde{x}} = f(\tilde{x} + x_d, \phi(\xi(t_i))) - \dot{x}_d, \qquad t \in [t_i, t_{i+1}). \tag{5.5} \]
The above dynamical system can also be viewed as a continuously updated control system, albeit with an error in the measurement of the state and the exogenous input. By defining the measurement error as
\[ e \triangleq \begin{bmatrix} \tilde{x}_e \\ x_{d,e} \\ v_e \end{bmatrix} \triangleq \xi(t_i) - \xi = \begin{bmatrix} \tilde{x}(t_i) - \tilde{x} \\ x_d(t_i) - x_d \\ v(t_i) - v \end{bmatrix}, \qquad t \in [t_i, t_{i+1}) \tag{5.6} \]
the system in (5.5) can be rewritten as
\[ \dot{\tilde{x}} = f(\tilde{x} + x_d, \phi(\xi)) - \dot{x}_d + f(\tilde{x} + x_d, \phi(\xi + e)) - f(\tilde{x} + x_d, \phi(\xi)) \tag{5.7} \]
where we have expressed the above system as a perturbed version of the dynamical system (5.4).

Our objective is to develop an event-based controller for tracking a trajectory within a desired ultimate bound. To this end, we assume that when the control is updated continuously in time, the state $x$ tracks the desired trajectory asymptotically; that is, there exists $\phi$ such that system (5.4) satisfies $\tilde{x} \to 0$ as $t \to \infty$. Then, a utility driven event-triggered trajectory tracking control mechanism is proposed that (i) guarantees the tracking error to be uniformly ultimately bounded (within a desired bound), and (ii) ensures a positive lower bound for control update times.

5.3 Linear Systems

Before we address the problem for general nonlinear systems, we first describe the design procedure for linear systems. Thus, the plant and the reference system are given by
\[ \dot{x} = Ax + Bu, \qquad x \in \mathbb{R}^n,\ u \in \mathbb{R}^m \tag{5.8} \]
\[ \dot{x}_d = A_r x_d + B_r v, \qquad x_d \in \mathbb{R}^n,\ v \in \mathbb{R}^q \tag{5.9} \]
where $A$, $B$, $A_r$ and $B_r$ are matrices of appropriate dimensions.
Letting $\tilde{x} \triangleq x - x_d$ and $\xi \triangleq [\tilde{x}; x_d; v]$, we assume that the control signal is of the form
\[ u = G\xi = G_{\tilde{x}}\tilde{x} + G_{x_d}x_d + G_v v \tag{5.10} \]
where $G \in \mathbb{R}^{m \times (2n+q)}$ is a matrix, while $G_{\tilde{x}}$, $G_{x_d}$ and $G_v$ are appropriately defined block matrices of $G$. In the sequel, each of the two forms is used depending on the requirement. As a result, the closed-loop system that describes the tracking error is given as
\[ \dot{\tilde{x}} = A\tilde{x} + BG\xi + (A - A_r)x_d - B_r v. \tag{5.11} \]
Then, the sampled-data control system, in terms of the measurement error (5.6), is given as
\[ \dot{\tilde{x}} = A\tilde{x} + BG\xi + (A - A_r)x_d - B_r v + BGe. \tag{5.12} \]
Now, we state the main assumption, that the continuous-time control law renders the origin of the closed-loop system (5.11) globally asymptotically stable.

(A5.1) Suppose $[\tilde{x}; x_d; v] \equiv 0$ is an equilibrium solution for the dynamical system in (5.4). Further, suppose that there exists a quadratic Lyapunov function, $V = \tilde{x}^T P \tilde{x}$, where $P$ is a symmetric positive definite matrix, such that for all admissible $x_d$ and $v$,
\[ a_1\|\tilde{x}\|^2 \le V(\tilde{x}) \le a_2\|\tilde{x}\|^2 \tag{5.13} \]
\[ 2\tilde{x}^T P\big(A\tilde{x} + BG\xi + (A - A_r)x_d - B_r v\big) \le -a_3\|\tilde{x}\|^2 \tag{5.14} \]
where $a_1$, $a_2$ and $a_3$ are positive constants.

The notation $\|\cdot\|$ denotes the Euclidean norm of a vector. In the sequel, it is also used to denote the induced Euclidean norm of a matrix. Note that (5.13) is technically not required, as it follows from the positive definiteness of the matrix $P$. However, its purpose in the assumption is to collect all the relevant notation in a single place. Also note that the meaning of "admissible $x_d$ and $v$" in (A5.1) differs in each of our main results, where in each case it is specified precisely.

Consider the Lyapunov function, $V(\cdot)$, in assumption (A5.1) as a candidate Lyapunov function for the system (5.12).
The time derivative of $V(\tilde{x})$, along the flow of the tracking error system (5.12), is given by
\[ \dot{V} = 2\tilde{x}^T P\big(A\tilde{x} + BG\xi + (A - A_r)x_d - B_r v\big) + 2\tilde{x}^T PBGe \le -a_3\|\tilde{x}\|^2 + 2\tilde{x}^T PBGe \le -a_3\|\tilde{x}\|^2 + \|\tilde{x}\|\,L^T|e| \tag{5.15} \]
where $|e|$ denotes the vector of the absolute values of the components of $e$, and $L \in \mathbb{R}^{2n+q}$ is a non-zero vector, with non-negative elements, given by
\[ L = \big[\|c_1(2PBG)\|\ \ \|c_2(2PBG)\|\ \ \cdots\ \ \|c_{2n+q}(2PBG)\|\big]^T \tag{5.16} \]
where the notation $c_i(\cdot)$ denotes the $i$th column of the matrix argument. Then, (5.15) suggests the following triggering condition:
\[ t_0 = \min\{t \ge 0 : \|\tilde{x}\| \ge r > 0\}, \quad \text{and} \quad t_{i+1} = \min\{t > t_i : L^T|e| - \sigma a_3\|\tilde{x}\| \ge 0,\ \|\tilde{x}\| \ge r\} \tag{5.17} \]
where $\sigma \in (0,1)$ and $r > 0$ are design parameters. The parameter $r$ determines the ultimate bound of the tracking error. It is necessary to update the control only when $\|\tilde{x}\| \ge r$, for some $r > 0$, as updating otherwise may result in the accumulation of control update times. Notice that each update instant $t_{i+1}$ is defined implicitly with respect to $t_i$. Hence, the initial update instant $t_0$ has been specified separately. As the proposed triggering condition does not allow the control to be updated whenever $\|\tilde{x}\| < r$, the first update instant, $t_0$, need not be at $t = 0$. Therefore, it is assumed that $u = 0$ for $t \in (0, t_0)$.

We now show that the triggering condition (5.17) ensures uniform ultimate boundedness of the tracking error under suitable conditions. The first of the conditions is the following assumption on the reference trajectory.

(A5.2) For all time $t \ge 0$, $\|[x_d; v]\| \le d$ for some $d \ge 0$, and $v$ is piecewise continuous.

The following lemma shows that the event-triggering condition (5.17) ensures that the tracking error is ultimately bounded, provided the sequence of control execution times does not exhibit Zeno behavior (accumulation of inter-event times); in other words, either the sequence of control execution times is finite or $\lim_{i\to\infty} t_i = \infty$.

Lemma 5.1. Consider the event-triggered system given by (5.12) and (5.17). Suppose that assumptions (A5.1) and (A5.2) are satisfied.
If the sequence of control execution times does not exhibit Zeno behavior, then the tracking error, $\tilde{x}$, is uniformly ultimately bounded by a ball of radius $r_1 = \sqrt{a_2/a_1}\, r$.

Proof. The assumption that the sequence of control execution times does not exhibit Zeno behavior implies that the triggering condition (5.17) is well defined for all $t \in [0,\infty)$ (if there are finitely many control updates, that is $i \in \{0, 1, \ldots, N\}$, then $t_{N+1} = \infty$). As a result, (5.15) and (5.17) imply that
\[ \dot{V} \le -(1 - \sigma)a_3\|\tilde{x}\|^2 \le -(1 - \sigma)a_3 r^2 < 0, \qquad \forall \tilde{x} \in \{\tilde{x} \in \mathbb{R}^n : \|\tilde{x}\| \ge r\}. \tag{5.18} \]
Thus, given any initial condition $\tilde{x}(0)$, there is a finite time (dependent on the initial condition) in which the solution enters the set $\{\tilde{x} : V(\tilde{x}) \le a_2 r^2,\ \|[x_d; v]\| \le d\}$ and stays there. Therefore, the tracking error, $\tilde{x}$, is uniformly ultimately bounded by a ball of radius $r_1 = \sqrt{a_2/a_1}\, r$.

Now we show that, under suitable conditions, the inter-event times resulting from (5.17) have a positive lower bound, guaranteeing the non-occurrence of Zeno behavior. For the first result, we need the following additional assumption.

(A5.3) For all time $t \ge 0$, $v$ is differentiable and $\|\dot{v}\| \le c$ for some $c \ge 0$.

Theorem 5.1. Consider the event-triggered system given by (5.12) and (5.17). Suppose that assumptions (A5.1), (A5.2) and (A5.3) are satisfied. Then, the tracking error, $\tilde{x}$, is uniformly ultimately bounded by a ball of radius $r_1 = \sqrt{a_2/a_1}\, r$, and the inter-event times $(t_{i+1} - t_i)$ for $i \in \{0, 1, 2, \ldots\}$ are uniformly bounded below by a positive constant that depends on the bound of the initial tracking error.

Proof. Uniform ultimate boundedness of the tracking error automatically follows from Lemma 5.1 if the existence of a positive lower bound for the inter-event times is shown. Note that for each $i$, $\|e(t_i)\| = 0$ and $\|\tilde{x}(t_i)\| \ge r$. Further, note that $L^T|e| \le \|L\|\|e\|$ for all $e$. Hence, the triggering condition (5.17) implies that the inter-event times satisfy $(t_{i+1} - t_i) \ge T$, where $T$ is the time it takes $\|e\|$ to grow from $0$ to $\frac{\sigma a_3}{\|L\|}r \le \frac{\sigma a_3}{\|L\|}\|\tilde{x}\|$.
If we show that $T > 0$, then the proof is complete. From (5.12), (5.9), (5.10) and the triangle inequality, we observe that
\[ \|\dot{\tilde{x}}\| \le \|(A + BG_{\tilde{x}})\tilde{x}\| + \|(A - A_r + BG_{x_d})x_d + (BG_v - B_r)v\| + \|BGe\|, \qquad \|\dot{x}_d\| = \|A_r x_d + B_r v\|. \]
Now, note that the triggering condition (5.17) implies that $\|\tilde{x}(t_0)\| \ge r$, and (5.18) implies that for all time $t \ge t_0$, $\|\tilde{x}(t)\| \le \rho_0$, where
\[ \rho_0 = \sqrt{\frac{a_2}{a_1}}\,\|\tilde{x}(t_0)\|. \]
Thus, letting
\[ P_1 = \|A + BG_{\tilde{x}}\|, \quad P_2 = \big\|\big[(A - A_r + BG_{x_d})\ \ (BG_v - B_r)\big]\big\|, \quad P_e = \|BG\|, \quad P_3 = \big\|[A_r\ \ B_r]\big\| \]
assumption (A5.2) implies that $\|\dot{\tilde{x}}\| \le P_1\|\tilde{x}\| + P_2\|[x_d; v]\| + P_e\|e\| \le P_1\rho_0 + P_2 d + P_e\|e\|$ and $\|\dot{x}_d\| \le P_3 d$, while (A5.3) implies $\|\dot{v}\| \le c$. Then, by letting $P_0 = P_1\rho_0 + (P_2 + P_3)d$, it follows from the definition $\dot{e} = -[\dot{\tilde{x}}; \dot{x}_d; \dot{v}]$ that
\[ \frac{d\|e\|}{dt} \le \|\dot{e}\| \le P_e\|e\| + P_0 + c. \tag{5.19} \]
Note that for $\|e\| = 0$, the first inequality holds for all the directional derivatives of $\|e\|$. Then, according to the Comparison Lemma [45],
\[ \|e\| \le \frac{P_0 + c}{P_e}\big(e^{P_e(t - t_i)} - 1\big), \qquad t \ge t_i. \tag{5.20} \]
Thus, the inter-event times are uniformly lower bounded by $T$, which satisfies
\[ T \ge \frac{1}{P_e}\ln\left(1 + \frac{\sigma a_3 r P_e}{\|L\|(P_0 + c)}\right). \tag{5.21} \]
Thus, we conclude that the uniform lower bound for the inter-event times, $T$, is positive.

In the next section, the event-triggering condition and the corresponding results for nonlinear systems are given. We also demonstrate two additional results, where the assumption (A5.3) is relaxed to include piecewise continuous $v$.

5.4 Nonlinear Systems

In this section, we address the problem for general nonlinear systems. We start by stating the main assumptions.

(A5.4) Suppose $f(0, \phi(0)) - f_r(0, 0) = 0$ and that there exists a $C^1$ Lyapunov function for the dynamical system in (5.4), $V : \mathbb{R}^n \to \mathbb{R}$, such that for all admissible $x_d$ and $v$,
\[ \alpha_1(\|\tilde{x}\|) \le V(\tilde{x}) \le \alpha_2(\|\tilde{x}\|) \]
\[ \frac{\partial V}{\partial \tilde{x}}\big(f(\tilde{x} + x_d, \phi(\xi)) - f_r(x_d, v)\big) \le -\alpha_3(\|\tilde{x}\|) \]
where $\alpha_1(\cdot)$, $\alpha_2(\cdot)$ and $\alpha_3(\cdot)$ are class $\mathcal{K}_\infty$ functions.¹

(A5.5) The functions $f$, $\phi$ and $f_r$ are Lipschitz on compact sets.

The notation $\|\cdot\|$ denotes the Euclidean norm of a vector. In the sequel, it is also used to denote the induced Euclidean norm of a matrix.
Note that the meaning of "admissible $x_d$ and $v$" in (A5.4) differs in each of our main results, where in each case it is specified precisely. At this stage, it is enough to know that (A5.2) is satisfied in each case. Now, consider the following family of compact sets:
\[ S(R) = \{\xi : V(\tilde{x}) \le \alpha_2(R),\ \|[x_d; v]\| \le d\}, \qquad \bar{S}(R) = \{\xi : V(\tilde{x}) = \alpha_2(R),\ \|[x_d; v]\| \le d\}. \tag{5.22} \]
Note that for each $R \ge 0$, the sets $S(R)$ and $\bar{S}(R)$ include all the admissible reference signals, $x_d$ and $v$. For each set $S(R)$ there exists, by assumption (A5.5), a non-zero vector $L(R) \in \mathbb{R}^{2n+q}$, with non-negative elements, such that
\[ \|f(\tilde{x} + x_d, \phi(\xi + e)) - f(\tilde{x} + x_d, \phi(\xi))\| \le L(R)^T|e| \le \|L(R)\|\|e\|, \qquad \forall \xi, (\xi + e) \in S(R) \tag{5.23} \]
where $|e|$ denotes the vector of the absolute values of the components of $e$. Without loss of generality, it may be assumed that each component of $L(R)$ is a non-decreasing function of $R$. In the sequel, we use the notation $S_i$, $\bar{S}_i$ and $L_i$ to denote $S(\|\tilde{x}(t_i)\|)$, $\bar{S}(\|\tilde{x}(t_i)\|)$ and $L(\|\tilde{x}(t_i)\|)$, respectively.

¹A continuous function $\alpha : [0,\infty) \to [0,\infty)$ is said to belong to the class $\mathcal{K}_\infty$ if it is strictly increasing, $\alpha(0) = 0$ and $\alpha(r) \to \infty$ as $r \to \infty$ [45].

Next, we define a continuous function, $\alpha_4(\cdot)$, that satisfies
\[ \alpha_4(R) \ge \max_{\|w\| \le R}\left\|\frac{\partial V(w)}{\partial w}\right\|, \qquad \forall R \ge 0. \tag{5.24} \]
We now derive the triggering condition that determines the time instants $t_i$ at which the control is updated. Consider the Lyapunov function, $V(\cdot)$, in assumption (A5.4) as a candidate Lyapunov function for the system (5.5). The time derivative of $V(\tilde{x})$ along the flow of the tracking error system, $\dot{V} = (\partial V/\partial\tilde{x})\dot{\tilde{x}}$, may be obtained through the measurement error interpretation, (5.7):
\[ \dot{V} = \frac{\partial V}{\partial\tilde{x}}\big(f(\tilde{x} + x_d, \phi(\xi)) - \dot{x}_d\big) + \frac{\partial V}{\partial\tilde{x}}\big(f(\tilde{x} + x_d, \phi(\xi + e)) - f(\tilde{x} + x_d, \phi(\xi))\big) \]
\[ \le -\alpha_3(\|\tilde{x}\|) + \frac{\partial V}{\partial\tilde{x}}\big(f(\tilde{x} + x_d, \phi(\xi + e)) - f(\tilde{x} + x_d, \phi(\xi))\big) \]
\[ \le -\alpha_3(\|\tilde{x}\|) + \alpha_4(\|\tilde{x}\|)L(R)^T|e|, \qquad \forall \xi, (\xi + e) \in S(R) \tag{5.25} \]
where the second-to-last inequality is obtained from assumption (A5.4), and (5.25) is then obtained from (5.22)-(5.24). Then, (5.25) suggests a triggering condition.
Consider the following triggering condition (for the sake of clarity, the complete system description, including the state equation and the triggering condition, is given):
\[ \dot{\tilde{x}} = f(\tilde{x} + x_d, \phi(\xi(t_i))) - \dot{x}_d, \qquad \forall t \in [t_i, t_{i+1}) \tag{5.26} \]
\[ t_0 = \min\{t \ge 0 : \|\tilde{x}\| \ge r > 0\}, \quad \text{and} \quad t_{i+1} = \min\left\{t > t_i : L_i^T|e| - \sigma\frac{\alpha_3(\|\tilde{x}\|)}{\alpha_4(\|\tilde{x}\|)} \ge 0,\ \|\tilde{x}\| \ge r\right\} \tag{5.27} \]
where $0 < \sigma < 1$ and $r > 0$ is a design parameter that determines the ultimate bound of the tracking error. It is necessary to update the control only when $\|\tilde{x}\| \ge r$, for some $r > 0$, as updating otherwise may result in the accumulation of control update times. Notice that each update instant $t_{i+1}$ is defined implicitly with respect to $t_i$. Hence, the initial update instant $t_0$ has been specified separately. As the proposed triggering condition does not allow the control to be updated whenever $\|\tilde{x}\| < r$, the first update instant, $t_0$, need not be at $t = 0$. Therefore, it is assumed that $u = 0$ for $0 \le t < t_0$.

Under assumptions (A5.2), (A5.4) and (A5.5), the following lemma demonstrates that the event-triggering condition (5.27) ensures $\xi \in S_i$ for all $t \in [t_i, t_{i+1})$, for each $i$. Moreover, the lemma also demonstrates that the event-triggering condition (5.27) renders the tracking error ultimately bounded, provided the sequence of control execution times does not exhibit Zeno behavior (accumulation of inter-event times); in other words, either the sequence of control execution times is finite or $\lim_{i\to\infty} t_i = \infty$.

Lemma 5.2. Consider the system (5.4). Suppose that assumptions (A5.2), (A5.4) and (A5.5) are satisfied. Then, in the event-triggered system (5.26)-(5.27), for each $i$, $\xi \in S_i$ for all $t \in [t_i, t_{i+1})$. Further, if the initial condition is bounded and the sequence of control execution times does not exhibit Zeno behavior, then the tracking error, $\tilde{x}$, is uniformly ultimately bounded by a ball of radius $r_1 = \alpha_1^{-1}(\alpha_2(r))$.

Proof. First, we establish by contradiction that for each $i$, $\xi \in S_i$ for all $t \in [t_i, t_{i+1})$.
Note that, by definition, $(\xi + e) = \xi(t_i) \in S_i$, and the triggering condition enforces $\|\tilde{x}(t_i)\| \ge r$. Further, since $\|\tilde{x}(t_i)\| \ge r$, the open $r$-ball is a proper subset of $S_i$ and is contained within its interior (that is, its intersection with $\bar{S}_i$ is an empty set). Also note that the sets $S_i$ and $\bar{S}_i$ (see (5.22) and the text following (5.23)) are essentially a sub-level set and a level set, respectively, of the Lyapunov function $V$. Now, let us assume that $\xi$ does escape $S_i$ during the interval $[t_i, t_{i+1})$. Then, since the tracking error $\tilde{x}$ is continuous as a function of time, there exists a $t_i^* \in [t_i, t_{i+1})$ such that $\xi(t_i^*) \in \bar{S}_i \subset S_i$ and $\dot{V}|_{t = t_i^*} > 0$ (where $\dot{V}|_{t = t_i^*}$ denotes $\dot{V}$ evaluated at $t = t_i^*$). However, as $\xi(t_i^*) \in \bar{S}_i \subset S_i$, (5.25) and (5.27) imply $\dot{V}|_{t = t_i^*} \le -(1 - \sigma)\alpha_3(\|\tilde{x}(t_i^*)\|) < 0$. Thus, having arrived at a contradiction, we conclude that no such $t_i^*$ exists and that the first claim of the lemma is true. Consequently, (5.25) and (5.27) again imply that the derivative $\dot{V}$ along the flow of the system satisfies
\[ \dot{V} \le -(1 - \sigma)\alpha_3(\|\tilde{x}\|) < 0, \qquad \forall t \in [t_i, t_{i+1}),\ \|\tilde{x}(t)\| \ge r \tag{5.28} \]
and further, for each $R \ge r$, it is true that any solution that enters the set $S(R)$ does not leave it subsequently.

The assumption that $\tilde{x}(0)$ is bounded and the definition of $t_0$ imply that $\tilde{x}(t_0)$ is also bounded. Then, the assumption that the sequence of control execution times does not exhibit Zeno behavior implies that the triggering condition (5.27) is well defined and that $\dot{V} \le -(1 - \sigma)\alpha_3(\|\tilde{x}\|) < 0$ for all $t \in [0,\infty)$ such that $\|\tilde{x}(t)\| \ge r$ (if there are finitely many control updates, that is $i \in \{0, 1, \ldots, N\}$, then $t_{N+1} = \infty$). Then, in fact, it is true that $S(R)$ is positively invariant for each $R \ge r$. In particular, $S_0$ is positively invariant. Then, (5.28) implies that $\dot{V} \le -(1 - \sigma)\alpha_3(r) < 0$ for all $\xi \in S_0$ such that $\|\tilde{x}\| \ge r$. Hence, all solutions, $\xi$, with bounded initial conditions enter the set $S(r)$ in finite time and, as $S(r)$ is positively invariant, the solutions stay there.
Therefore, the tracking error, $\tilde{x}$, is uniformly ultimately bounded by the closed ball of radius $r_1 = \alpha_1^{-1}(\alpha_2(r))$.

Looking back at (5.27), it is clear that the functions $\alpha_3$ and $\alpha_4$ play a crucial role in determining how often an event is triggered, or in computing a lower bound for the inter-event times. Specifically, the following definition is useful:
\[ \Delta_{s_1}^{s_2} \triangleq \min_{s_1 \le \|\tilde{x}\| \le s_2} \alpha_3(\|\tilde{x}\|)/\alpha_4(\|\tilde{x}\|) \tag{5.29} \]
where $s_2 \ge s_1 > 0$ are any positive real numbers, and the functions $\alpha_3$ and $\alpha_4$ are as defined in (A5.4) and (5.24), respectively. Since $\alpha_3$ and $\alpha_4$ are continuous positive definite functions, $\Delta_{s_1}^{s_2}$ is well defined and positive for any given $s_2 \ge s_1 > 0$.

Now we present the first main result of the chapter. It demonstrates, for a particular class of reference trajectories, that in the event-triggered system (5.26)-(5.27) the inter-event times are uniformly bounded away from zero while the tracking error is uniformly ultimately bounded.

Theorem 5.2. Consider the system (5.4). Suppose that assumptions (A5.2), (A5.3), (A5.4) and (A5.5) are satisfied. Then, for the event-triggered system (5.26)-(5.27), the tracking error, $\tilde{x}$, is uniformly ultimately bounded by a ball of radius $r_1 = \alpha_1^{-1}(\alpha_2(r))$, and the inter-event times $(t_{i+1} - t_i)$ for $i \in \{0, 1, 2, \ldots\}$ are uniformly bounded below by a positive constant that depends on the bound of the initial tracking error.

Proof. Uniform ultimate boundedness of the tracking error follows from Lemma 5.2. Only the existence of a positive lower bound for the inter-event times remains to be shown. Note that for each $i$, $\|e(t_i)\| = 0$ and $\|\tilde{x}(t_i)\| \ge r$. Hence, the triggering condition (5.27) implies that the $i$th inter-update time, $(t_{i+1} - t_i)$, is at least equal to the time it takes $\|L_i\|\|e\|$ to grow from $0$ to $\sigma\alpha_3(\|\tilde{x}\|)/\alpha_4(\|\tilde{x}\|)$. Recall from the proof of Lemma 5.2 that every solution, $\xi$, stays in the set $S_0$ for all $t \in [t_0, t_i)$, for each $i$. Thus, $\|L_i\| \le \|L_0\|$ for each $i$. Notice that
\[ S_0 \subseteq \{\xi : \|\tilde{x}\| \le \rho_0,\ \|[x_d; v]\| \le d\} \tag{5.30} \]
where $\rho_0 = \alpha_1^{-1}(\alpha_2(\|\tilde{x}(t_0)\|))$.
Then, (5.29) implies $t_{i+1} - t_i \ge T$, where $T$ is the time it takes $\|e\|$ to grow from $0$ to $\sigma\Delta_r^{\rho_0}/\|L_0\|$. If we show that $T > 0$, then the proof is complete. From (5.7) and the triangle inequality, we observe that
\[ \|\dot{\tilde{x}}\| \le \|f(\tilde{x} + x_d, \phi(\xi)) - \dot{x}_d\| + \|f(\tilde{x} + x_d, \phi(\xi + e)) - f(\tilde{x} + x_d, \phi(\xi))\|. \tag{5.31} \]
From (5.23), the second term is bounded by $L_0^T|e| \le \|L_0\|\|e\|$ on the set $S_0$. Since, according to (A5.4), $f(0, \phi(0)) - f_r(0, 0) = 0$, (A5.5) then implies that there exist Lipschitz constants $P_1 \ge 0$ and $P_2 \ge 0$ such that
\[ \|\dot{\tilde{x}}\| \le P_1\|\tilde{x}\| + P_2\|[x_d; v]\| + L_0^T|e| \le P_1\rho_0 + P_2 d + \|L_0\|\|e\| \]
where the second inequality is obtained from (5.30). Assumptions (A5.5) and (A5.2) imply that there exists a constant $P_3 \ge 0$ such that $\|\dot{x}_d\| \le P_3 d$, and (A5.3) implies $\|\dot{v}\| \le c$. Then, by letting $P_0 = P_1\rho_0 + (P_2 + P_3)d$, it follows from the definition $\dot{e} = -[\dot{\tilde{x}}; \dot{x}_d; \dot{v}]$ that
\[ \frac{d\|e\|}{dt} \le \|\dot{e}\| \le \|L_0\|\|e\| + P_0 + c. \tag{5.32} \]
Note that for $\|e\| = 0$, the first inequality holds for all the directional derivatives of $\|e\|$. Then, according to the Comparison Lemma [45],
\[ \|e\| \le \frac{P_0 + c}{\|L_0\|}\big(e^{\|L_0\|(t - t_i)} - 1\big), \qquad t \ge t_i. \tag{5.33} \]
Thus, the inter-event times are uniformly lower bounded by $T$, which satisfies
\[ T \ge \frac{1}{\|L_0\|}\ln\left(1 + \frac{\sigma\Delta_r^{\rho_0}}{P_0 + c}\right). \tag{5.34} \]
As $\|L_0\|$ is finite and $\sigma\Delta_r^{\rho_0} > 0$, we conclude that the inter-event times have a uniform positive lower bound, $T$.

In the next result, the conditions on the reference trajectory are relaxed by no longer requiring it to satisfy assumption (A5.3). Instead, to ensure the absence of Zeno behavior, a new assumption is made: that $d_v$, the uniform bound on $\|v\|$, is no larger than a quantity determined by $\sigma\Delta_r^{\rho_0}$ and $L_0$. The new assumptions, in contrast to Theorem 5.2, lead to a constraint on the choice of the radius $r$ in the triggering condition, and ensure only local uniform ultimate boundedness of the trajectory tracking error. Let $L(R) \triangleq [Q(R); M(R)]$ and $L_i \triangleq [Q_i; M_i]$, where $Q(R), Q_i \in \mathbb{R}^{2n}$ and $M(R), M_i \in \mathbb{R}^q$. Now, the second main result is presented.

Theorem 5.3. Consider the system defined by (5.4).
Suppose that the assumptions (A5.2), (A5.4) and (A5.5) hold. Also, for some $R_0 \ge r$, suppose that $\sigma\Delta_r^{\rho_0} - 2d_v\|M(R_0)\| > 0$, where $\rho_0 = \alpha_1^{-1}(\alpha_2(R_0))$, $\Delta_r^{\rho_0}$ is given by (5.29) and $d_v$ is the uniform bound on $\|v\|$. If $\|\tilde{x}(0)\| \le R_0$, then in the event-triggered system (5.26)-(5.27), the tracking error, $\tilde{x}$, is uniformly ultimately bounded by a ball of radius $r_1 = \alpha_1^{-1}(\alpha_2(r))$, and the inter-update times $(t_{i+1} - t_i)$ for $i \in \{0, 1, 2, \ldots\}$ are uniformly bounded below by a positive constant that depends on $R_0$.

Proof. The proof is very similar to that of Theorem 5.2, and hence only the essential steps are described here. According to Lemma 5.2, each solution, $\xi$, with $\|\tilde{x}(0)\| \le R_0$ stays in the set $S(R_0)$. Hence, $\|M_i\| \le \|M(R_0)\|$ and $\|Q_i\| \le \|Q(R_0)\|$ for each $i$. Since $\|v\|$ is uniformly bounded by $d_v$, it follows that for each $i$, $M_i^T|v_e| \le \|M_i\|\|v_e\| \le 2d_v\|M(R_0)\|$, where $v_e = v(t_i) - v$ and $|v_e|$ denotes the component-wise absolute value of the vector $v_e$. The definitions of $Q_i$ and $M_i$ imply that $L_i^T|e| = Q_i^T|[\tilde{x}_e; x_{d,e}]| + M_i^T|v_e| \le Q_i^T|[\tilde{x}_e; x_{d,e}]| + 2d_v\|M(R_0)\|$. Note that for each $i$, $r \le \|\tilde{x}(t_i)\| \le \rho_0$. Thus, the triggering condition in (5.27) implies that for each $i$, $L_{i-1}^T|e(t_i^-)| \ge \sigma\Delta_r^{\rho_0}$, or equivalently, $Q_{i-1}^T|[\tilde{x}_e(t_i^-); x_{d,e}(t_i^-)]| \ge \mu$, where $\mu \triangleq \sigma\Delta_r^{\rho_0} - 2d_v\|M(R_0)\| > 0$, the last inequality being one of the assumptions. Hence, the inter-event times satisfy $t_{i+1} - t_i \ge T$, where $T$ is the time it takes $\|[\tilde{x}_e; x_{d,e}]\|$ to grow from $0$ to $\mu/\|Q(R_0)\|$. If we show that $T > 0$, then the proof is complete. Following steps similar to those in the proof of Theorem 5.2, we know that there exists a finite $P_0 \ge 0$ such that
\[ \frac{d\|[\tilde{x}_e; x_{d,e}]\|}{dt} \le \|Q_0\|\|[\tilde{x}_e; x_{d,e}]\| + P_0 + 2d_v\|M(R_0)\|. \]
Note that for $\|[\tilde{x}_e; x_{d,e}]\| = 0$, the inequality holds for all the directional derivatives. Thus, the inter-event times are uniformly lower bounded by $T$, which satisfies
\[ T \ge \frac{1}{\|Q_0\|}\ln\left(1 + \frac{\sigma\Delta_r^{\rho_0} - 2d_v\|M(R_0)\|}{P_0 + 2d_v\|M(R_0)\|}\right). \tag{5.35} \]
As $\|Q_0\|$ is finite, we conclude that the inter-event times have a lower bound, $T$, that is greater than zero.
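The inter-event-time estimates in this chapter all share one template: $T \ge (1/a)\ln(1 + b/c)$, with $a$ the exponential growth rate of the error norm, $b$ the error level at which an event fires, and $c$ the constant drift term, as in (5.21), (5.34) and (5.35). A small sketch (Python; all numerical constants below are illustrative placeholders, not values from the dissertation) makes the dependence on these three quantities explicit.

```python
import math

# Generic inter-event-time lower bound T >= (1/rate)*ln(1 + rate*threshold/drift),
# the common template behind (5.21), (5.34) and (5.35). It comes from solving
# (drift/rate)*(exp(rate*T) - 1) = threshold, per the Comparison Lemma estimate.
def inter_event_bound(rate, threshold, drift):
    """rate: growth rate of the error norm (e.g. P_e or ||L_0||);
    threshold: error level that fires an event (e.g. sigma*a3*r/||L||);
    drift: constant forcing term (e.g. P_0 + c)."""
    return math.log1p(rate * threshold / drift) / rate

# Illustrative evaluation in the spirit of (5.21):
# rate = P_e, threshold = sigma*a3*r/||L||, drift = P_0 + c.
T_lin = inter_event_bound(rate=2.0, threshold=0.9 * 1.0 * 0.05 / 3.0, drift=5.0)
print(T_lin)
```

The bound grows with the trigger threshold and shrinks as the drift grows, which is consistent with the observation in Section 5.5 that the theoretical estimate of the minimum inter-event time is orders of magnitude smaller than the value observed in simulation.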
Theorem 5.3 is somewhat conservative, because only the uniform bound on $\|v\|$ is utilized in determining the ultimate bound and the lower bound on the inter-event times. A more useful result is obtained by imposing only slightly stricter constraints on $v$: that jumps in $v$ are separated in time by $T_v > 0$, that the magnitude of each jump is upper bounded by a known constant, and that $v$ is Lipschitz between jumps. This is expressed formally in the following assumption.

(A5.6) There exist constants $c \ge 0$, $T_v \ge 0$ and $J_v \ge 0$ such that for all $t, s \ge 0$, the following holds: $\|v(t) - v(s)\| \le c|t - s| + \left\lceil \frac{|t - s|}{T_v} \right\rceil J_v$, where $\lceil\cdot\rceil$ is the ceiling function.

Now the final result is presented.

Theorem 5.4. Consider the system defined by (5.4). Suppose that the assumptions (A5.2), (A5.4), (A5.5) and (A5.6) hold. Also, for some $R_0 \ge r$, suppose that $\sigma\Delta_r^{\rho_0} - J_v\|M(R_0)\| > 0$, where $\rho_0 = \alpha_1^{-1}(\alpha_2(R_0))$ and $\Delta_r^{\rho_0}$ is given by (5.29). If $\|\tilde{x}(0)\| \le R_0$, then in the event-triggered system (5.26)-(5.27), the tracking error, $\tilde{x}$, is uniformly ultimately bounded by a ball of radius $r_1 = \alpha_1^{-1}(\alpha_2(r))$, and the inter-update times $(t_{i+1} - t_i)$ for $i \in \{0, 1, 2, \ldots\}$ are uniformly bounded below by a positive constant that depends on $R_0$.

Proof. Let $e' \triangleq [e_{\tilde{x}}; e_{x_d}; e_v']$, where $e_v' \triangleq c(t - t_i)$ for $t \in [t_i, t_{i+1})$ and each $i$. Then, by (A5.6), $\|e\| \le \|e'\| + \left\lceil \frac{t - t_i}{T_v} \right\rceil J_v$. Now, let $T_k$ be the time it takes $\|e'\|$ to grow from zero to $(\sigma\Delta_r^{\rho_0} - kJ_v\|M(R_0)\|)/\|L_0\|$. Then, a lower bound on the inter-event times is given by
\[ \max_{k \in \{1, 2, \ldots, N\}}\{\min\{kT_v, T_k\}\}, \qquad N = \left\lfloor \frac{\sigma\Delta_r^{\rho_0}}{J_v\|M(R_0)\|} \right\rfloor \tag{5.36} \]
where $\lfloor\cdot\rfloor$ denotes the floor function. Following the proof of Theorem 5.2, $T_k$ is estimated as
\[ T_k \ge \frac{1}{\|L_0\|}\ln\left(1 + \frac{\sigma\Delta_r^{\rho_0} - kJ_v\|M(R_0)\|}{P_0 + c}\right). \tag{5.37} \]
Note that $T_k > 0$ for $k \in \{1, \ldots, N - 1\}$ and $T_N \ge 0$. Further, $\{kT_v\}$ is an increasing sequence of positive numbers while $\{T_k\}$ is a decreasing sequence. Thus, the lower bound on the inter-event times given by (5.36) is positive. The ultimate boundedness of the tracking error follows from Lemma 5.2.

Remark 5.1.
Notice from (5.23) that, in order to compute $L_i = L(\|\tilde{x}(t_i)\|)$, it is necessary to compute the set $S_i = S(\|\tilde{x}(t_i)\|)$, or at least a set of which $S_i$ is a subset, such as $B_i \triangleq \{\xi : \|\tilde{x}\| \le \alpha_1^{-1}(\alpha_2(\|\tilde{x}(t_i)\|)),\ \|[x_d; v]\| \le d\}$. However, if $\|\tilde{x}(t_i)\| \ge \|\tilde{x}(t_{i-1})\|$, then clearly some components of $L_i$ may be greater than those of $L_{i-1}$. But from Lemma 5.2, we know that $S_i \subseteq S_{i-1}$ for each $i$, so at time instant $t_i$, instead of computing $L_i$ based on $B_i$, we can let $L_i = L_{i-1}$. Following this rule, the sequence $\{L_i\}$ can be chosen to be component-wise non-increasing. The triggering condition and the estimates of lower bounds on the inter-update times depend critically on $L$, and hence using a time-varying $L$ lowers the overall average update rate. Computing $L$ is in general a computationally costly task, and it is not useful to update $L$ continuously in time like $\alpha_3(\|\tilde{x}\|)$ and $\alpha_4(\|\tilde{x}\|)$.

In the next section, our theoretical results are illustrated through simulations.

5.5 Examples and simulation results

The theoretical results developed in the previous sections are illustrated through simulations.

5.5.1 Nonlinear System Example

First, we present the simulation results for the following second-order nonlinear system:
\[ \dot{x} = \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & -1 \end{bmatrix}x + \begin{bmatrix} 0 \\ -x_1^3 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u = Ax + \begin{bmatrix} 0 \\ -x_1^3 \end{bmatrix} + Bu. \tag{5.38} \]
The desired trajectory is a solution of the system $[\dot{x}_{d,1}; \dot{x}_{d,2}] = [x_{d,2}; v]$, where $v$ is an exogenous input which, along with the initial conditions of the state of the reference system, $x_d = [x_{d,1}; x_{d,2}]$, determines the specific trajectory. The control function is chosen as
\[ \phi(\xi) = K\tilde{x} + v + (\tilde{x}_1 + x_{d,1})^3 + x_{d,2} \tag{5.39} \]
where $K = [k_1, k_2]$ is a $1 \times 2$ row vector such that $\tilde{A} = (A + BK)$ is Hurwitz, and $\tilde{x} = [\tilde{x}_1; \tilde{x}_2]$ is the tracking error.
Then, the closed-loop tracking error system with event-triggered control can be written as
$$\dot{\tilde{x}}_1 = \tilde{x}_2, \qquad \dot{\tilde{x}}_2 = -(\tilde{x}_2 + x_{d,2}) - (\tilde{x}_1 + x_{d,1})^3 + \zeta(\mu + e) - v. \qquad (5.40)$$
Now, consider the quadratic Lyapunov function $V = \tilde{x}^T P \tilde{x}$, where $P$ is a positive definite matrix that satisfies the Lyapunov equation $P\tilde{A} + \tilde{A}^T P = -H$, where $H$ is a given positive definite matrix. The time derivative of $V$ along the flow defined by (5.40) can be shown to satisfy
$$\dot{V} \leq -\tilde{x}^T H \tilde{x} + 2\tilde{x}^T P B[\zeta(\mu + e) - \zeta(\mu)] \leq -a\|\tilde{x}\|^2 + \gamma(\|\tilde{x}\|) L(R)^T |e|, \quad \forall \mu, (\mu + e) \in S(R) \qquad (5.41)$$
where $a > 0$ is the minimum eigenvalue of $H$, $\gamma(\|\tilde{x}\|) = 2\|PB\|\|\tilde{x}\|$ and
$$L(R) = \left[3(\rho + d_1)^2 + |k_1|; \; |k_2|; \; 3(\rho + d_1)^2; \; 1; \; 1\right] \qquad (5.42)$$
where $\rho = \alpha_1^{-1}(\alpha_2(R))$ and $d_1 \leq d$ is the uniform bound on $x_{d,1}$. If $d_1$ is not known explicitly, then $d$ from assumption (A5.2) may be used instead. Note that $\|B\|$ has been absorbed into $\gamma$ rather than into $L(R)$, as it should have been according to their definitions; this makes the resulting bound pointwise smaller. The vectors $L_i$ were computed according to the procedure in Remark 5.1. Finally, given a desired ultimate bound for the trajectory tracking error, the parameter $r$ in the triggering condition can be designed.

Next, we present simulation results for two cases corresponding to the two main classes of reference trajectories considered in this chapter.

Case I: The signals $x_{d,1}$, $x_{d,2}$ and $v$ were chosen as sinusoidal signals with peak-to-peak amplitude 2. This was done by choosing $[x_{d,1}(0); x_{d,2}(0); v(0)] = [-\pi/3; 1; 0]$ and $\dot{v} = -\cos(t)$. The initial condition of the plant was $[x_1(0); x_2(0)] = [5; 1]$. The parameter $d_1$ was chosen as 2.5, while the actual uniform bounds on $x_{d,1}$ and $\|[x_d; v]\|$ were observed to be around 2 and 2.28, respectively. The parameters in the controller were chosen as $K = [-20, -20]$, $\sigma = 0.95$, and $H$ was chosen as the identity matrix. According to Theorem 5.2, we chose $r = 0.0154$ in the triggering condition to achieve an ultimate bound of $r_1 = 0.1$ on the tracking error. The simulation results are shown in Figure 5.1(a).
The figure shows the norm of the tracking error, the radius $r$ in the triggering condition, the desired ultimate bound $r_1$, and $W_i^T|e|$, where $W_i = (2\|PB\|L_i)/(\sigma a)$. The figure demonstrates that the tracking error is ultimately bounded, well below the desired bound. We recall that according to the triggering condition (5.27), the control is not updated when $\|\tilde{x}\| < r$. Hence, as long as $\|\tilde{x}\| \geq r$, the weighted measurement error $W_i^T|e|$ is bounded above by the norm of the tracking error $\|\tilde{x}\|$, and an event is triggered (the control is updated) each time $W_i^T|e| = \|\tilde{x}\|$. However, when $\|\tilde{x}\| < r$, $W_i^T|e|$ may exceed $\|\tilde{x}\|$. A zoomed version of the plot in Figure 5.1(a) is shown in Figure 5.1(b), where it is clearly seen that the tracking error is only ultimately bounded. The number of control executions in the simulated time duration was 301, and the minimum inter-event time was observed to be 0.005s. The observed average frequency of control updates was around 30Hz. Since most of the updates occur before $\tilde{x}$ first enters the ball of radius $r$, it is important to also consider the average frequency for this time period, and in this simulation it was found to be around 46Hz. If $L$ is kept constant, then these average frequencies are much higher, at 943Hz and 1586Hz, respectively, with almost no change in the rate of convergence.

Figure 5.1: Simulation results for Case I ($W_i^T|e|$, $\|\tilde{x}\|$, $r$ and $r_1$ versus $t$): (a) full view; (b) zoomed view.

The theoretical estimate of the minimum inter-event time is around $6 \times 10^{-8}$s, which is orders of magnitude lower than the observed value.

Case II: In this case the result in Theorem 5.4 is illustrated, where the input signal $v$ is piecewise continuous.
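The Case I simulation can be reproduced in outline with a simple forward-Euler, zero-order-hold sketch. The constant weight vector $W$, the step size, and the gains below are illustrative stand-ins (in the text, $W_i$ is built from $P$, $B$ and the Lipschitz vectors $L_i$), so this is a qualitative sketch rather than the exact experiment.

```python
import numpy as np

K = np.array([-20.0, -20.0])   # illustrative gains making A + BK Hurwitz
W = np.full(5, 10.0)           # hypothetical constant trigger weight vector
r, dt, T = 0.0154, 1e-3, 10.0

def control(mu):
    # sampled-data control law of the form (5.39)
    xt, xd, v = mu[:2], mu[2:4], mu[4]
    return K @ xt + v + (xt[0] + xd[0])**3 + xd[1]

x = np.array([5.0, 1.0])                  # plant state
xd = np.array([-np.pi / 3.0, 1.0])        # reference state, v(0) = 0
v = 0.0
mu_s = np.concatenate([x - xd, xd, [v]])  # data held by the controller
events = 0
for k in range(int(T / dt)):
    t = k * dt
    mu = np.concatenate([x - xd, xd, [v]])
    xt_norm = np.linalg.norm(x - xd)
    # trigger: resample when the weighted error catches up with ||x~||,
    # and only while ||x~|| >= r (no updates inside the r-ball)
    if xt_norm >= r and W @ np.abs(mu_s - mu) >= xt_norm:
        mu_s, events = mu, events + 1
    u = control(mu_s)                     # zero-order hold between events
    x = x + dt * np.array([x[1], -x[1] - x[0]**3 + u])
    xd = xd + dt * np.array([xd[1], v])
    v -= dt * np.cos(t)
final_err = np.linalg.norm(x - xd)
```

With these placeholder weights the tracking error settles near the $r$-ball while the control is only updated at the triggered instants.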
In the simulations it was defined as the piecewise constant function taking values in the set $Q = \{0, \pm 0.1, \pm 0.2, \ldots\}$ and given by
$$v(t) = \arg\min_{k \in Q} |{-\sin(t)} - k|.$$
For the time instants when $-\sin(t)$ equals an odd multiple of 0.05, $v(t)$ is chosen as the higher or the lower of the two possible values based on whether the time derivative of $-\sin(t)$ is positive or negative, respectively. In the context of Theorem 5.4, the constants are $c = 0$ and $J_v = 0.1$. The initial condition of the reference system was $[x_{d,1}(0); x_{d,2}(0); v(0)] = [1; 1.003; 0]$.

From Theorem 5.4, we know that $\sigma\rho_r$ has to be greater than $J_v\|M(R_0)\|$ with $J_v = 0.1$, which implies that $r$ has to be greater than 0.0075. For the example system here, $R_0$ in Theorem 5.4 can assume any value. Thus, as in Case I, $r = 0.0154$ was chosen. The rest of the parameters were the same as in Case I. Figure 5.2 shows the simulation results. The number of control updates was observed to be 304, with the minimum inter-execution time at around 0.005s.

Figure 5.2: Simulation results for Case II ($W_i^T|e|$, $\|\tilde{x}\|$, $r$ and $r_1$ versus $t$).

The observed average frequencies of control updates were found to be around 30Hz and 46Hz for the simulated time duration and the time duration that $\tilde{x}$ takes to first enter the ball of radius $r$, respectively. These average frequencies are comparable to those in Case I. The theoretical estimate of the minimum inter-event time is around $3 \times 10^{-8}$s, which is very conservative.

5.5.2 Linear System Example

In this example, the plant and the reference system are given by (5.8)-(5.9) with
$$A = A_r = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \qquad B = B_r = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$
The control gain was chosen as $G = [-2, -3, 0, 0, 1]$. Thus, in the notation of (5.10), $G_{\tilde{x}} = [-2, -3]$. From (5.12), the tracking error is seen to evolve as
$$\dot{\tilde{x}} = (A + BG_{\tilde{x}})\tilde{x} + BGe$$
The gain matrix $G_{\tilde{x}}$ has been chosen so that the eigenvalues of $\bar{A} = (A + BG_{\tilde{x}})$ are at $\{-1, -2\}$.
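The Case II input can be generated with a nearest-neighbor quantizer. The sketch below assumes the underlying smooth signal is $-\sin(t)$, and leaves the tie-breaking at odd multiples of 0.05 to Python's default rounding, so it is illustrative only.

```python
import math

def quantize_v(t, step=0.1):
    # nearest grid point of Q = {0, +/-0.1, +/-0.2, ...} to -sin(t);
    # exact ties fall back to Python's default rounding rather than the
    # derivative-sign rule described in the text
    return step * round(-math.sin(t) / step)
```

Between nearby times the quantized value changes by at most one grid step, consistent with $c = 0$ and $J_v = 0.1$ in (A5.6), and the quantization error never exceeds half a step.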
Thus, consider the candidate Lyapunov function $V(\tilde{x}) = \tilde{x}^T P \tilde{x}$, where $P$ is the symmetric positive definite matrix satisfying $P\bar{A} + \bar{A}^T P = -I_2$, where $I_2$ is the $2 \times 2$ identity matrix. In the simulations, $\sigma = 0.95$ and $r_1 = 0.1$ have been chosen, giving $L = [1.414; 2.121; 0; 0; 0.707]$, $a_3 = 0.95$ and $r = 0.038$. The initial condition of the plant $x(0) = [5; 0]$ has been chosen. The reference trajectory and the input to the reference system were chosen as
$$[x_d; v] = [\cos(\omega t); \; -\omega\sin(\omega t); \; -\omega^2\cos(\omega t)]$$
Then, a number of simulations, parametrized by $\omega$, were performed. The parameter $\omega$ was varied from 1 to 10 in steps of 0.1. Notice that
$$\|[x_d; v]\| = \sqrt{\cos^2(\omega t) + \omega^2\sin^2(\omega t) + \omega^4\cos^2(\omega t)} = \sqrt{(1 + \omega^4 - \omega^2)\cos^2(\omega t) + \omega^2} \leq \sqrt{1 + \omega^4} = d$$
and $\|\dot{v}\| \leq \omega^3 = c$. Thus, the theoretical lower bound on inter-event times may be computed from (5.21) as a function of $\omega$, which is shown in Figure 5.3.

Figure 5.3: Theoretical lower bound on inter-event times for the linear system example, as a function of $\omega$.

In each of the simulations, the initial tracking error is $\tilde{x}(0) = [4; 0]$. Each simulation was performed until the time it took the trajectory to reach the $r$-ball. In the corresponding time duration, the minimum and the average inter-event times were found. The resulting relationship with $\omega$ is shown in Figure 5.4. Clearly, the theoretical lower bound for the inter-event times in Figure 5.3 is very conservative. However, Figure 5.4 clearly demonstrates one of the most significant advantages of utility driven event-triggered control - the ability to adjust the sampling rate according to the requirement.

Figure 5.4: Observed average ($\bar{T}$) and minimum ($T_{min}$) inter-event times observed in the simulations parametrized by $\omega$.

5.6 Conclusions

In this chapter, we developed an event based control algorithm for trajectory tracking in nonlinear systems.
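The linear-example setup can be checked numerically. The sketch below assumes the tracking-error gain row is $G_{\tilde{x}} = [-2, -3]$ (consistent with closed-loop eigenvalues at $\{-1, -2\}$) and solves the Lyapunov equation $P\bar{A} + \bar{A}^T P = -I_2$ by vectorization with NumPy only.

```python
import numpy as np

# Closed loop Abar = A + B*Gx with the assumed gain row Gx = [-2, -3]
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Gx = np.array([[-2.0, -3.0]])
Abar = A + B @ Gx

# Solve P*Abar + Abar^T*P = -I2 by vectorization (column-stacking vec):
#   vec(P*Abar) = (Abar^T kron I) vec(P),  vec(Abar^T*P) = (I kron Abar^T) vec(P)
n = 2
Mly = np.kron(Abar.T, np.eye(n)) + np.kron(np.eye(n), Abar.T)
P = np.linalg.solve(Mly, -np.eye(n).flatten(order='F')).reshape((n, n), order='F')
```

The solution is unique because no two closed-loop eigenvalues sum to zero, and it is symmetric positive definite, as required for the candidate Lyapunov function.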
It was demonstrated that given a nonlinear dynamical system and a continuous-time controller that ensures uniform asymptotic tracking of the desired trajectory, an event based controller can be designed that not only guarantees uniform ultimate boundedness of the tracking error, but also ensures that the inter-event times for the control algorithm are uniformly bounded away from zero. The first result demonstrated that uniform boundedness with an arbitrary ultimate bound for the tracking error can be achieved, provided the reference trajectory, the exogenous input to the reference system, and its derivative are all uniformly bounded. However, the minimum guaranteed inter-event time decreases along with the ultimate bound. In the second and third results, we relaxed the assumption on the derivative of the input to the reference system, and demonstrated that the tracking error is uniformly ultimately bounded. In these cases, the analytical results show that it may not be feasible to reduce the ultimate bound below a certain threshold and, moreover, the result is only local in general. The theoretical results were demonstrated through simulations of a second order nonlinear system. The theoretical lower bounds on inter-update times were found to be very conservative. This is partially due to the fact that the estimates are based on the rate of change of $\|e\|$ (made necessary by the presence of exogenous signals) rather than that of $\|e\|/\|\tilde{x}\|$ as in [14]. Thus, there is significant room for improvement in these estimates and how they are computed. Numerical simulations indicated that the ultimate bound on the tracking error is much lower than the desired value, which is another area for improvement of the theoretical predictions. Finally, it is important to extend these results to output feedback systems.
Chapter 6 Utility Driven Sampled Data Adaptive Control for Tracking in Robot Manipulators

6.1 Introduction

Many of the utility driven event-triggered controllers in the literature are essentially sampled data versions of continuous time controllers, with the sampling instants determined by state based triggering conditions. While utility driven event-triggered controllers implicitly guarantee stability, they have a drawback: they rely critically on the knowledge of a good model of the system. For example, the results in [14, 32] are general enough to hold for robotic manipulators when perfect knowledge of the system is available. However, building a model of high accuracy is a time consuming process and in many cases it may not even be possible. Therefore, it is important to extend the design of implicitly verified event based controllers to cases where only a poor model of the system is available. This is especially important in the field of robotics, where adaptive and robust controllers are often used.

It is our opinion that event-triggered controllers can have a significant impact in the field of robotics. For example, many industrial robotics applications use visual feedback, which inherently works at a low rate. Hence, we are interested in introducing specific event-triggered controllers for robotics. Therefore, in this chapter we develop a specific event-triggered adaptive controller for trajectory tracking in robotic manipulators. That is, we incorporate adaptation in the proposed controller. The controller is demonstrated through simulations and experiments on a two-link planar robotic manipulator.

6.1.1 Contributions

The contribution of this chapter is twofold. In this chapter, we design a specific event-triggered controller applicable in the field of robotics. In addition, the proposed controller incorporates adaptation.
The only other reference in the event-triggered control literature that explores adaptation is [76], wherein a Kalman-filter-like approach was adopted to estimate the system parameters of a discrete time linear system. We explore the problem of adaptation for continuous-time trajectory tracking in nonlinear robotic systems. Finally, this work adds to the limited body of work on event-triggered implementations in experiments [5, 12, 22, 77-80]. By incorporating adaptation, we allow for larger modelling errors and thus make safe experimentation of event-triggered controllers more feasible.

The rest of the chapter is organized as follows. In Section 6.2, an event-triggered implementation of the controller of [81] is presented under the assumption that the controller has exact knowledge of the robot dynamics. Then in Section 6.3, the adaptive controller of [81] is introduced, and the design of the proposed event based adaptive controller is described. In Section 6.4 the dynamic model of a planar two-link robot is presented. The simulation and experimental results are presented in Section 6.5. Finally, some concluding remarks are made in Section 6.6 and some future directions of work are proposed.

6.2 Event-Triggered Control

In this section we introduce the idea of event-triggered control, and design an event-triggered controller for trajectory tracking in robotic manipulators through a process similar to that in [32]. Secondly, in this section we provide motivation for incorporating adaptation in the event-triggered controller. Consider a standard $n$-degree of freedom rigid robot model of the form [82]
$$M(q)\ddot{q} + C(q, \dot{q})\dot{q} + G(q) = u, \quad q \in \mathbb{R}^n, \; u \in \mathbb{R}^n \qquad (6.1)$$
where $M : \mathbb{R}^n \to \mathbb{R}^{n \times n}$, $C : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^{n \times n}$ and $G : \mathbb{R}^n \to \mathbb{R}^n$. Let $x_d \triangleq [q_d; \dot{q}_d] \in \mathbb{R}^n \times \mathbb{R}^n$ be the state of the desired trajectory that the robot has to track. Here the notation $[a_1; a_2]$ denotes the column vector formed by concatenating the vectors $a_1$ and $a_2$.
This notation is used in this chapter to refer to various concatenated vectors. Let $\tilde{q} \triangleq q - q_d$; then the tracking error is defined as $\tilde{x} \triangleq [\tilde{q}; \dot{\tilde{q}}]$. Let $u = \zeta(\mu) \in \mathbb{R}^m$ be a known continuous-time control law for trajectory tracking, where $\mu$ is the data that the controller depends on. For example, in the passivity based Slotine-Li controller [83] or in the controller of [81],
$$\mu = [\tilde{x}; x_d; \ddot{q}_d] \qquad (6.2)$$
More specifically, consider the controller of [81], an event-triggered implementation of which is the proposed controller in this chapter:
$$u = \zeta(\mu) = M(q)\ddot{q}_d + C(q, \vartheta)\dot{q}_d + G(q) - K_d\dot{\tilde{q}} - K_p\tilde{q} = Y(q, \vartheta, \dot{q}_d, \ddot{q}_d)\theta - K_d\dot{\tilde{q}} - K_p\tilde{q} \qquad (6.3)$$
where $\vartheta \triangleq \dot{q} - \varepsilon\tilde{q}$, $K_d = K_d^T > 0$ and $K_p = K_p^T > 0$. Additionally,
$$\varepsilon = \frac{\varepsilon_0}{1 + \|\tilde{q}\|} \qquad (6.4)$$
where $\varepsilon_0$ is a positive constant and $\|\cdot\|$ denotes the Euclidean norm. The second equality in (6.3) is a result of the well-known fact that the Lagrangian robot dynamics are linearly parametrizable [82], with $\theta$ the vector of system parameters. In the sequel, $Y(q, \vartheta, \dot{q}_d, \ddot{q}_d)$ is nearly always shortened to $Y$ to make the notation compact.

(A6.1) Assume that the controller gains are chosen such that
$$0 < \varepsilon_0 < \min\left\{\frac{K_{d,m}}{3M_M + 2C_M}, \; \frac{4K_{p,m}}{K_{d,M} + K_{d,m}}\right\} \qquad (6.5)$$
where $K_{d,m} \triangleq \lambda_m(K_d)$, $K_{d,M} \triangleq \lambda_M(K_d)$, $K_{p,m} \triangleq \lambda_m(K_p)$, with $\lambda_m(\cdot)$, $\lambda_M(\cdot)$ the minimum and maximum eigenvalues, respectively. The constants $M_m$, $M_M$ and $C_M$ satisfy
$$0 < M_m \leq \|M(q)\| \leq M_M \qquad (6.6)$$
$$\|C(q, w)\| \leq C_M\|w\|, \quad \text{for all } w \qquad (6.7)$$
where $w$ denotes an arbitrary vector. The following result shows that when Assumption (A6.1) is satisfied the robot manipulator asymptotically tracks the desired trajectory. The result as well as its proof are taken from [81].

Proposition 6.1 (Prop. 2.1, [81]). Suppose that assumption (A6.1) holds. Then the closed loop system (6.1), (6.3) is globally convergent, that is, $\tilde{q}$ and $\dot{\tilde{q}}$ asymptotically converge to zero, and all the internal signals are bounded.

Proof. The proof strongly relies on the following well known properties of $C(q, \cdot)$:
$$C(q, x)y = C(q, y)x \qquad (6.8)$$
$$C(q, x + \lambda y) = C(q, x) + \lambda C(q, y) \qquad (6.9)$$
for all $x, y, q \in \mathbb{R}^n$ and $\lambda \in \mathbb{R}$.
Using (6.9), the closed loop system given by (6.1) and (6.3) can be shown to satisfy
$$M(q)\ddot{\tilde{q}} + C(q, \dot{q})\dot{\tilde{q}} + \varepsilon C(q, \tilde{q})\dot{q}_d + K_d\dot{\tilde{q}} + K_p\tilde{q} = 0 \qquad (6.10)$$
Consider the positive-definite candidate Lyapunov function
$$V(\tilde{q}, \dot{\tilde{q}}) = \frac{1}{2}s^T M(q)s + \frac{1}{2}\tilde{q}^T K_p\tilde{q} \qquad (6.11)$$
where $s = \dot{\tilde{q}} + \varepsilon\tilde{q}$. The time derivative of the candidate Lyapunov function along the flow of the system (6.10) is given by
$$\dot{V} = s^T\left[\varepsilon M(q)\dot{\tilde{q}} + \dot{\varepsilon}M(q)\tilde{q} + \varepsilon C(q, \dot{q})\tilde{q} - \varepsilon C(q, \tilde{q})\dot{q}_d - K_d\dot{\tilde{q}} - K_p\tilde{q}\right] + \dot{\tilde{q}}^T K_p\tilde{q}$$
where (6.9) and the skew-symmetry of $\dot{M}(q) - 2C(q, \dot{q})$ [82] have been used. Further, applying (6.8) and (6.9) yields
$$\dot{V} = -s^T[K_d - \varepsilon M(q)]\dot{\tilde{q}} + \dot{\varepsilon}s^T M(q)\tilde{q} + \varepsilon s^T C(q, \dot{\tilde{q}})\tilde{q} - \varepsilon\tilde{q}^T K_p\tilde{q} \qquad (6.12)$$
Now we introduce a new variable, namely, $s_1 = \dot{\tilde{q}} + \frac{\varepsilon}{2}\tilde{q}$. Then, (6.12) can be rewritten as
$$\dot{V} = -s_1^T[K_d - \varepsilon M(q)]s_1 + \dot{\varepsilon}s^T M(q)\tilde{q} + \varepsilon s^T C(q, \dot{\tilde{q}})\tilde{q} - \varepsilon\left(\frac{\tilde{q}}{2}\right)^T\left[4K_p - \varepsilon(K_d - \varepsilon M(q))\right]\left(\frac{\tilde{q}}{2}\right) \qquad (6.13)$$
Now we establish a bound on the second term. Since $\dot{\varepsilon} = -\varepsilon\,\frac{\tilde{q}^T\dot{\tilde{q}}}{\|\tilde{q}\|(1 + \|\tilde{q}\|)}$, $\dot{\tilde{q}} = s_1 - \frac{\varepsilon}{2}\tilde{q}$ and $s = s_1 + \frac{\varepsilon}{2}\tilde{q}$,
$$|\dot{\varepsilon}s^T M(q)\tilde{q}| \leq \frac{\varepsilon M_M\|\tilde{q}\|}{1 + \|\tilde{q}\|}\left(\|s_1\| + \left\|\frac{\varepsilon}{2}\tilde{q}\right\|\right)^2 \leq 2\varepsilon_0 M_M\left(\|s_1\|^2 + \left\|\frac{\varepsilon}{2}\tilde{q}\right\|^2\right) \qquad (6.14)$$
and on the third term
$$|\varepsilon s^T C(q, \dot{\tilde{q}})\tilde{q}| = \left|\varepsilon\left(s_1 + \frac{\varepsilon}{2}\tilde{q}\right)^T C(q, \tilde{q})\left(s_1 - \frac{\varepsilon}{2}\tilde{q}\right)\right| \leq \varepsilon C_M\|\tilde{q}\|\left(\|s_1\| + \left\|\frac{\varepsilon}{2}\tilde{q}\right\|\right)^2 \leq 2\varepsilon_0 C_M\left(\|s_1\|^2 + \left\|\frac{\varepsilon}{2}\tilde{q}\right\|^2\right) \qquad (6.15)$$
where (6.8) and (6.7) have been used in the first and the second steps, respectively. Substituting these bounds in (6.13) and rearranging terms, we obtain
$$\dot{V}(\tilde{q}, \dot{\tilde{q}}) \leq -k_1\|s_1\|^2 - k_2\left\|\frac{\varepsilon}{2}\tilde{q}\right\|^2 \qquad (6.16)$$
where
$$k_1 = K_{d,m} - 3\varepsilon_0 M_M - 2\varepsilon_0 C_M \qquad (6.17)$$
$$k_2 = 4\varepsilon_0^{-1}K_{p,m} - K_{d,M} - 2\varepsilon_0 M_M - 2\varepsilon_0 C_M \qquad (6.18)$$
The condition (6.5) ensures that $k_1$ and $k_2$ are positive. Thus $V(\tilde{q}, \dot{\tilde{q}})$ is a non-increasing function bounded from below. The definition (6.11) of $V(\tilde{q}, \dot{\tilde{q}})$ then implies that $s, \tilde{q} \in \mathcal{L}_\infty^n$, and consequently $\dot{\tilde{q}}, s_1 \in \mathcal{L}_\infty^n$. Further, since $\varepsilon \in \mathcal{L}_\infty$, (6.16) implies that $s_1, \varepsilon\tilde{q} \in \mathcal{L}_2^n$; since $\tilde{q} \in \mathcal{L}_\infty^n$ keeps $\varepsilon$ bounded away from zero, $\tilde{q} \in \mathcal{L}_2^n$ as well. From square integrability of $\tilde{q}$ and the fact that $\dot{\tilde{q}} \in \mathcal{L}_\infty^n$, we conclude that $\tilde{q}$ asymptotically converges to zero. Also notice that $\dot{\tilde{q}} \in \mathcal{L}_2^n$ and that the tracking error dynamics (6.10) imply $\ddot{\tilde{q}} \in \mathcal{L}_\infty^n$. Thus, $\dot{\tilde{q}}$ also asymptotically converges to zero.
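The gain condition (6.5) and the resulting constants $k_1$, $k_2$ of (6.17)-(6.18) are easy to check numerically. A minimal sketch follows; all numeric inputs are illustrative placeholders.

```python
def controller_gain_margins(eps0, Kd_m, Kd_M, Kp_m, M_M, C_M):
    """Check condition (6.5) and evaluate k1, k2 of (6.17)-(6.18).
    All numeric inputs are illustrative placeholders."""
    eps_max = min(Kd_m / (3 * M_M + 2 * C_M), 4 * Kp_m / (Kd_M + Kd_m))
    k1 = Kd_m - 3 * eps0 * M_M - 2 * eps0 * C_M
    k2 = 4 * Kp_m / eps0 - Kd_M - 2 * eps0 * M_M - 2 * eps0 * C_M
    return eps0 < eps_max, k1, k2

ok, k1, k2 = controller_gain_margins(eps0=0.1, Kd_m=2.0, Kd_M=2.0,
                                     Kp_m=5.0, M_M=1.5, C_M=1.0)
ok_bad, _, _ = controller_gain_margins(eps0=1.0, Kd_m=2.0, Kd_M=2.0,
                                       Kp_m=5.0, M_M=1.5, C_M=1.0)
```

As the proof requires, choosing $\varepsilon_0$ below the bound in (6.5) yields positive $k_1$ and $k_2$; an $\varepsilon_0$ above the bound fails the check.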
Now let us consider the event-triggered implementation of the controller (6.3). Recall the notation, from Section 1.3, used to denote the sampled data versions of different signals in the system. The sampled data version of any signal (which can be a scalar, a vector or a matrix) is denoted by a subscript $s$. In particular, the data sampled by the controller is denoted by $\mu_s$, and is defined as
$$\mu_s(t) = \mu(t_i), \quad \text{for all } t \in [t_i, t_{i+1}), \text{ for each } i \qquad (6.19)$$
where $t_i$ are the sampling instants. All the other sampled data signals are similarly defined. The "measurement error" of the sampled data is denoted by
$$e \triangleq \mu_s - \mu = \mu(t_i) - \mu, \quad \text{for } t \in [t_i, t_{i+1}), \; i \in \{0, 1, 2, \ldots\} \qquad (6.20)$$
The sampled data controller is then given as
$$u_s = \zeta(\mu_s) = Y_s\theta - K_d\dot{\tilde{q}}_s - K_p\tilde{q}_s \qquad (6.21)$$
Utilizing the measurement error view, the closed loop tracking error system can be written as a perturbation of (6.10):
$$M(q)\ddot{\tilde{q}} + C(q, \dot{q})\dot{\tilde{q}} + \varepsilon C(q, \tilde{q})\dot{q}_d + K_d\dot{\tilde{q}} + K_p\tilde{q} = -(Y_s - Y)\theta - K_d(\dot{\tilde{q}}_s - \dot{\tilde{q}}) - K_p(\tilde{q}_s - \tilde{q}) \qquad (6.22)$$
Before we describe the event-triggering condition, we state the assumptions that are made regarding the desired trajectory and the robot.

(A6.2) The desired trajectory $[q_d; \dot{q}_d]$ and its first two derivatives are uniformly bounded by known constants. That is, $q_d$, $\dot{q}_d$, $\ddot{q}_d$ and $\dddot{q}_d$ exist for all time, and are uniformly bounded by known constants $d_0$, $d_1$, $d_2$ and $d_3$, respectively.

(A6.3) The matrices $M(\cdot)$, $C(\cdot, \cdot)$ and $G(\cdot)$ are globally Lipschitz.

The following lemma is used to bound the terms on the right hand side of (6.22). In the sequel, the notation $|\cdot|$ denotes the component-wise absolute value of a vector or matrix. A Lipschitz vector is similar to a Lipschitz constant. More specifically, it is a vector of non-negative elements other than the zero vector.

Lemma 6.1. Suppose that assumptions (A6.2), (A6.3) and conditions (6.6), (6.7) hold. Also assume that $K_p = K_p^T > 0$ and $K_d = K_d^T > 0$.
Then, there exist Lipschitz vectors $L_Y$ and $D$ that depend only on the sampled data and the uniform bound on $\dot{q}_d$ such that
$$\|(Y_s - Y)\theta\| \leq L_Y^T|e| \qquad (6.23)$$
$$\|K_d(\dot{\tilde{q}}_s - \dot{\tilde{q}}) + K_p(\tilde{q}_s - \tilde{q})\| \leq D^T|e| \qquad (6.24)$$

Proof. Equation (6.24) is satisfied with $D = [K_{p,M}\mathbf{1}_n; K_{d,M}\mathbf{1}_n; \mathbf{0}_{3n}]$, where $\mathbf{1}_n$ is an $n$ dimensional vector of ones and $\mathbf{0}_{3n}$ is a vector of zeros of dimension $3n$. Next, by (6.6) and assumption (A6.3), there exist constants $M_M$ and $L_M$, respectively, such that
$$\|M(q_s)\ddot{q}_{d,s} - M(q)\ddot{q}_d\| = \|(M(q_s) - M(q))\ddot{q}_{d,s} + M(q)(\ddot{q}_{d,s} - \ddot{q}_d)\| \leq L_M\|\ddot{q}_{d,s}\|\|q_s - q\| + M_M\|\ddot{q}_{d,s} - \ddot{q}_d\|$$
Again, by (6.7) and assumption (A6.3), there exist constants $C_M$ and $L_C$, respectively, such that
$$\|C(q_s, \vartheta_s)\dot{q}_{d,s} - C(q, \vartheta)\dot{q}_d\| = \|C(q_s, \dot{q}_{d,s})\vartheta_s - C(q, \dot{q}_d)\vartheta\| = \|(C(q_s, \dot{q}_{d,s}) - C(q, \dot{q}_d))\vartheta_s + C(q, \dot{q}_d)(\vartheta_s - \vartheta)\|$$
$$\leq L_C\|\vartheta_s\|(\|q_s - q\| + \|\dot{q}_{d,s} - \dot{q}_d\|) + C_M d_1\|\vartheta_s - \vartheta\|$$
where $\vartheta = \dot{q} - \varepsilon\tilde{q}$, $\vartheta_s$ is the sampled version of $\vartheta$, and $d_1$ is a known upper bound for $\|\dot{q}_d\|$ from assumption (A6.2). Next, note that
$$\|\vartheta_s - \vartheta\| = \|(\dot{q}_s - \varepsilon_s\tilde{q}_s) - (\dot{q} - \varepsilon\tilde{q})\| \leq \|\dot{q}_s - \dot{q}\| + \|\tilde{q}_s\||\varepsilon_s - \varepsilon| + \varepsilon\|\tilde{q}_s - \tilde{q}\|$$
Now, note that
$$|\varepsilon_s - \varepsilon| = \varepsilon_0\left|\frac{1}{1 + \|\tilde{q}_s\|} - \frac{1}{1 + \|\tilde{q}\|}\right| = \frac{\varepsilon_s\varepsilon}{\varepsilon_0}\,\big|\|\tilde{q}_s\| - \|\tilde{q}\|\big| \leq \frac{\varepsilon_s\varepsilon}{\varepsilon_0}\|\tilde{q}_s - \tilde{q}\|$$
Noting that $\varepsilon \leq \varepsilon_0$ for all $\tilde{q}$, we see that
$$\|\vartheta_s - \vartheta\| \leq \|\dot{q}_s - \dot{q}\| + \varepsilon_s\|\tilde{q}_s\|\|\tilde{q}_s - \tilde{q}\| + \varepsilon_0\|\tilde{q}_s - \tilde{q}\| \leq L_\vartheta^T|e|$$
where $L_\vartheta$ is a vector that depends only on the sampled data. Finally, assumption (A6.3) also guarantees a constant $L_G$ such that $\|G(q_s) - G(q)\| \leq L_G\|q_s - q\|$. By the linear parametrizability of robot dynamics, we know that
$$Y(q, \vartheta, \dot{q}_d, \ddot{q}_d)\theta = M(q)\ddot{q}_d + C(q, \vartheta)\dot{q}_d + G(q)$$
Hence, there exists a Lipschitz vector $L_Y$ that depends only on the sampled data such that $\|(Y_s - Y)\theta\| \leq L_Y^T|e|$.

Notice that the process of computing the Lipschitz vectors is simplified considerably by allowing them to depend on the sampled data. For example, the approach adopted in Chapter 5 requires an appropriate set to be defined first, over which a Lipschitz vector is computed. Such a Lipschitz vector holds for any two points in the set. However, here we only need to estimate the "error" in a function with respect to a fixed sampled value.
Using this fact, for the system under consideration in this chapter, it is possible to find a Lipschitz vector in terms of the sampled data that holds "globally" - in the sense that one of the points where the function is evaluated is the fixed sample point, while the other can be any arbitrary point. As seen from the results in the sequel, such a formulation simplifies the analysis considerably compared to that in Chapter 5. Now let
$$\alpha(\tilde{x}) = k_1\left\|\dot{\tilde{q}} + \frac{\varepsilon}{2}\tilde{q}\right\|^2 + k_2\left\|\frac{\varepsilon}{2}\tilde{q}\right\|^2 \qquad (6.25)$$
$$\gamma(\tilde{x}) = \|s\| = \|\dot{\tilde{q}} + \varepsilon\tilde{q}\| \qquad (6.26)$$
where $k_1$ and $k_2$ are given by (6.17) and (6.18), respectively. Then we define the sampling or control execution instants implicitly with an event-triggering condition in the following way:
$$t_0 = 0, \qquad t_{i+1} = \min\{t \geq t_i : \gamma(\tilde{x})L^T|\mu(t_i) - \mu(t)| \geq \sigma\alpha(\tilde{x})\} \qquad (6.27)$$
where $\sigma \in (0, 1)$ is a design parameter and $L = L_Y + D$, with $L_Y$ and $D$ satisfying Lemma 6.1. Notice that each update instant $t_{i+1}$ is defined implicitly with respect to $t_i$. Hence, the initial update instant $t_0$ has been specified separately. Given this event-trigger, the following result demonstrates the global convergence of the tracking error to zero.

Theorem 6.2. Under assumptions (A6.1)-(A6.3) and dynamics (6.1), (6.21), (6.27), the tracking error $\tilde{x} = [\tilde{q}; \dot{\tilde{q}}]$ globally asymptotically converges to zero.

Proof. Consider the candidate Lyapunov function
$$V(\tilde{q}, \dot{\tilde{q}}) = \frac{1}{2}s^T M(q)s + \frac{1}{2}\tilde{q}^T K_p\tilde{q}$$
Through the measurement error view of (6.22) and the analysis of Proposition 6.1, it can be shown that the derivative of the candidate Lyapunov function along the flow of the closed loop system (6.1), (6.21), (6.27) satisfies
$$\dot{V} \leq -\alpha(\tilde{x}) + s^T\left[-(Y_s - Y)\theta - K_d(\dot{\tilde{q}}_s - \dot{\tilde{q}}) - K_p(\tilde{q}_s - \tilde{q})\right] \leq -\alpha(\tilde{x}) + \gamma(\tilde{x})L^T|e|$$
where the second step is obtained using the definitions of $\gamma$ (6.26) and $L$. The triggering condition (6.27) ensures that $\gamma(\tilde{x})L^T|e| \leq \sigma\alpha(\tilde{x})$, which then implies that
$$\dot{V} \leq -(1 - \sigma)\alpha(\tilde{x})$$
Then, asymptotic convergence of $\tilde{q}$ and $\dot{\tilde{q}}$ to zero follows from arguments used in the proof of Proposition 6.1.
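At run time, the event-trigger (6.27) reduces to a single scalar comparison. A minimal sketch follows; the Lipschitz vector $L$, the gains $k_1$, $k_2$, and all state values are illustrative placeholders.

```python
import numpy as np

def should_sample(x_tilde, mu_s, mu, L, k1, k2, eps, sigma=0.95):
    """Evaluate the event-trigger (6.27): sample when
    gamma(x)*L^T|mu_s - mu| >= sigma*alpha(x).  eps is the current value
    of epsilon = eps0/(1 + ||q_tilde||); all inputs are illustrative."""
    n = x_tilde.size // 2
    q_t, qd_t = x_tilde[:n], x_tilde[n:]
    # alpha and gamma as in (6.25)-(6.26)
    alpha = (k1 * np.linalg.norm(qd_t + 0.5 * eps * q_t)**2
             + k2 * np.linalg.norm(0.5 * eps * q_t)**2)
    gamma = np.linalg.norm(qd_t + eps * q_t)
    return gamma * (L @ np.abs(mu_s - mu)) >= sigma * alpha

x_t = np.array([1.0, 0.0, 0.0, 0.0])  # illustrative tracking error, n = 2
mu0 = np.zeros(6)
L = np.ones(6)
fresh = should_sample(x_t, mu0, mu0, L, k1=1.0, k2=1.0, eps=0.1)
stale = should_sample(x_t, mu0 + 1.0, mu0, L, k1=1.0, k2=1.0, eps=0.1)
```

Immediately after a sample the measurement error is zero, so the condition is false; as the held data drifts from the true data, the left-hand side eventually catches up and an event fires.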
Now, it must be pointed out that both the control law (6.21) and the triggering condition (6.27) (through $L$) depend on the knowledge of a good model of the robot system. However, in many applications an accurate model is not available, and if only a poor model is available then the tracking performance may deteriorate. It would therefore be useful for practical applications to incorporate adaptation in the event-triggered controller. In the next section, we present the methodology for accomplishing this goal.

6.3 Event Based Adaptive Control

In this section, the adaptive controller from [81] for tracking in robot manipulators is introduced, and a utility driven event-triggered implementation of it is developed. Consider the following controller and adaptation law from [81]:
$$u = \hat{M}(q)\ddot{q}_d + \hat{C}(q, \dot{q} - \varepsilon\tilde{q})\dot{q}_d + \hat{G}(q) - K_d\dot{\tilde{q}} - K_p\tilde{q} = Y(q, \dot{q} - \varepsilon\tilde{q}, \dot{q}_d, \ddot{q}_d)\hat{\theta} - K_d\dot{\tilde{q}} - K_p\tilde{q} \qquad (6.28)$$
$$\dot{\hat{\theta}} = -\Gamma^{-1}Y^T(q, \dot{q} - \varepsilon\tilde{q}, \dot{q}_d, \ddot{q}_d)s \qquad (6.29)$$
where $Y(\cdot)$ is the regressor matrix, $s = \dot{\tilde{q}} + \varepsilon\tilde{q}$, $\Gamma$ is an arbitrary positive definite matrix, and $\hat{\theta}$ is a vector of estimates of the true system parameters $\theta$, which depend on quantities such as link masses and link lengths. Then, the following result can be proven, which is taken from [81] and stated here without proof.

Proposition 6.3 (Prop. 3.1, [81]). Suppose that assumption (A6.1) holds. Then the adaptive system (6.1), (6.28), (6.29) is globally convergent, that is, $\tilde{q}$ and $\dot{\tilde{q}}$ asymptotically converge to zero, and all the internal signals are bounded.

The proof of this proposition is similar to that of Proposition 6.1 and relies on the candidate Lyapunov function
$$V(\tilde{q}, \dot{\tilde{q}}, \tilde{\theta}) = \frac{1}{2}s^T M(q)s + \frac{1}{2}\tilde{q}^T K_p\tilde{q} + \frac{1}{2}\tilde{\theta}^T\Gamma\tilde{\theta} \qquad (6.30)$$
where $\Gamma = \Gamma^T$ is a positive definite matrix and $\tilde{\theta} \triangleq \hat{\theta} - \theta$ is the parameter estimation error.

Now, we develop an event-triggered adaptive controller based on (6.28)-(6.29). First, we make the following assumption.

(A6.4) An upper bound on each of the parameters $\theta_i$ is known; that is, $\bar{\theta}$ is known such that $|\theta_i| \leq \bar{\theta}_i$.
Note that the conditions (6.6), (6.7), (A6.3) on the one hand and (A6.4) on the other are not entirely independent. However, (A6.4) is a convenient form to base our design on. Now, the complete system, including the robot dynamics, the event-triggered controller and the adaptation law, is as follows:
$$M(q)\ddot{q} + C(q, \dot{q})\dot{q} + G(q) = u_s, \quad q \in \mathbb{R}^n \qquad (6.31)$$
$$u_s = Y_s\hat{\theta}_s - K_d\dot{\tilde{q}}_s - K_p\tilde{q}_s = \zeta(\mu_s), \quad \text{if } t \geq t_0 \qquad (6.32)$$
$$\mu \triangleq [\tilde{q}; \dot{\tilde{q}}; q_d; \dot{q}_d; \ddot{q}_d; \hat{\theta}] \qquad (6.33)$$
$$\mu_s(t) = \mu(t_i) \text{ for all } t \in [t_i, t_{i+1}), \text{ for each } i; \quad t_0 = 0, \quad t_{i+1} = \min\{t \geq t_i : \gamma(\tilde{x})L^T|\mu(t_i) - \mu(t)| \geq \sigma\alpha(\tilde{x})\} \qquad (6.34)$$
$$\dot{\hat{\theta}} = -\Gamma^{-1}Y_s^T s, \quad \text{if } t \geq t_0 \qquad (6.35)$$
where $\sigma \in (0, 1)$ is a design parameter. Notice that the data required by the controller, $\mu$, now additionally includes $\hat{\theta}$ compared to that in Section 6.2. Equations (6.32)-(6.34) provide a complete description of the event-triggered controller. The condition (6.34) that implicitly defines the sampling instants is the event-trigger. The functions $\alpha$ and $\gamma$ are given by (6.25) and (6.26), respectively. The vector $L = L_Y + D + N$, where $L_Y$ and $D$ satisfy Lemma 6.1 (with $e$ defined appropriately to include $\hat{\theta}_s - \hat{\theta}$), whereas $N$ is a Lipschitz vector that satisfies
$$\|Y_s(\hat{\theta}_s - \hat{\theta})\| \leq N^T|e|$$
More specifically, $N = [\mathbf{0}^T, \text{column-wise sums of } |Y_s|]^T$, where $\mathbf{0}$ is a vector of zeros of appropriate dimension. Given this complete system, the following result demonstrates the global convergence of the tracking error to zero.

Theorem 6.4. Under assumptions (A6.1)-(A6.3) and dynamics (6.31)-(6.35), the tracking error $\tilde{x} = [\tilde{q}; \dot{\tilde{q}}]$ globally asymptotically converges to zero.

Proof. Using the measurement error approach, the tracking error dynamics can be shown to satisfy
$$M(q)\ddot{\tilde{q}} + C(q, \dot{q})\dot{\tilde{q}} + \varepsilon C(q, \tilde{q})\dot{q}_d + K_d\dot{\tilde{q}} + K_p\tilde{q} = -(Y_s\hat{\theta}_s - Y\theta) - K_d(\dot{\tilde{q}}_s - \dot{\tilde{q}}) - K_p(\tilde{q}_s - \tilde{q})$$
which is essentially a perturbed version of (6.10). Now, consider the candidate Lyapunov function (6.30).
Again, following the analysis in Proposition 6.1, the derivative of the Lyapunov function along the flow of the closed loop system (6.31)-(6.35) can be shown to satisfy
$$\dot{V} \leq -\alpha(\tilde{x}) + \tilde{\theta}^T\Gamma\dot{\tilde{\theta}} + s^T\left[-(Y_s\hat{\theta}_s - Y\theta) - K_d(\dot{\tilde{q}}_s - \dot{\tilde{q}}) - K_p(\tilde{q}_s - \tilde{q})\right]$$
$$= -\alpha(\tilde{x}) + \tilde{\theta}^T\Gamma\dot{\tilde{\theta}} + s^T\left[-(Y_s - Y)\theta - Y_s(\hat{\theta}_s - \hat{\theta}) - Y_s\tilde{\theta} - K_d(\dot{\tilde{q}}_s - \dot{\tilde{q}}) - K_p(\tilde{q}_s - \tilde{q})\right]$$
$$= -\alpha(\tilde{x}) + s^T\left[-(Y_s - Y)\theta - Y_s(\hat{\theta}_s - \hat{\theta}) - K_d(\dot{\tilde{q}}_s - \dot{\tilde{q}}) - K_p(\tilde{q}_s - \tilde{q})\right] + \tilde{\theta}^T\left[\Gamma\dot{\tilde{\theta}} + Y_s^T s\right]$$
$$\leq -\alpha(\tilde{x}) + \gamma(\tilde{x})L^T|e| + 0$$
where the last step is obtained using the definition of $\gamma$ (6.26), the definition of $L$, and the adaptation law (6.35). The event-trigger (6.34) ensures that $\gamma(\tilde{x})L^T|e| \leq \sigma\alpha(\tilde{x})$, which then implies that
$$\dot{V} \leq -(1 - \sigma)\alpha(\tilde{x}) < 0$$
The rest of the proof is similar to that of Proposition 6.1.

Notice that in our treatment so far, the implementation aspects arising out of the implicitly defined sampling instants given by (6.34) have not been discussed. For example, implicitly defined inter-sample times may exhibit Zeno behavior - sampling infinitely many times in a finite time period - which is not realistically implementable. Ideally, there should be a positive lower bound between every two consecutive sample times. Given that sampling the complete system data involves sampling an external desired trajectory as well as parameter estimates resulting from adaptation, along with the state of the robot system, it is not easy to provide analytical bounds that hold globally, semi-globally, or even over significant regions of the state space. Therefore, in the following subsection, we provide a method to analytically estimate the inter-sample time as a function of only the tracking error, independent of the robot parameter estimation error.

6.3.1 Inter-sample times

The basic idea behind the method we have adopted to estimate the inter-sample times is to estimate an upper bound on $\|e\|$, and a lower bound on $\sigma\alpha(\tilde{x})/(\|L\|\gamma(\tilde{x}))$, as functions of the time since the last sample.
Then, from (6.34) it is seen that the time required for the above two quantities to equal each other provides a lower estimate of the inter-sample time. As a first step, we provide estimates of $\alpha(\tilde{x})$ and $\gamma(\tilde{x})$ as functions of $\|\tilde{x}\|$:
$$\alpha(\tilde{x}) = k_1\left(\dot{\tilde{q}} + \frac{\varepsilon}{2}\tilde{q}\right)^T\left(\dot{\tilde{q}} + \frac{\varepsilon}{2}\tilde{q}\right) + k_2\left(\frac{\varepsilon}{2}\tilde{q}\right)^T\left(\frac{\varepsilon}{2}\tilde{q}\right) = [\tilde{q}; \dot{\tilde{q}}]^T\begin{bmatrix}\frac{\varepsilon^2}{4}(k_1 + k_2)I_n & \frac{\varepsilon}{2}k_1 I_n \\ \frac{\varepsilon}{2}k_1 I_n & k_1 I_n\end{bmatrix}[\tilde{q}; \dot{\tilde{q}}]$$
where $I_n$ is the $n \times n$ identity matrix and $[\tilde{q}; \dot{\tilde{q}}]$ is a concatenated column vector. The two distinct eigenvalues of the matrix are given by
$$\lambda_\pm = \frac{\varepsilon^2(k_1 + k_2) + 4k_1 \pm \sqrt{(\varepsilon^2(k_1 + k_2) - 4k_1)^2 + 16\varepsilon^2 k_1^2}}{8}$$
Note that $\varepsilon = \varepsilon_0/(1 + \|\tilde{q}\|)$ is a function of $\|\tilde{q}\|$, and so are the eigenvalues $\lambda_\pm$. Now, since $\varepsilon_0 > 0$, $\varepsilon > 0$ for any finite value of $\|\tilde{q}\|$. Thus, both the eigenvalues $\lambda_\pm$ are strictly positive for any finite value of $\|\tilde{q}\|$, and the smaller eigenvalue converges to 0 as $\|\tilde{q}\|$ converges to $\infty$. We denote the smaller of the eigenvalues by
$$a(\tilde{q}) \triangleq \frac{\varepsilon^2(k_1 + k_2) + 4k_1 - \sqrt{(\varepsilon^2(k_1 + k_2) - 4k_1)^2 + 16\varepsilon^2 k_1^2}}{8} \qquad (6.36)$$
Thus, for any finite $\tilde{x}$,
$$a(\tilde{q})\|\tilde{x}\|^2 \leq \alpha(\tilde{x}) \qquad (6.37)$$
Similarly,
$$\gamma(\tilde{x}) = \|s\| = \sqrt{(\dot{\tilde{q}} + \varepsilon\tilde{q})^T(\dot{\tilde{q}} + \varepsilon\tilde{q})} = \sqrt{[\tilde{q}; \dot{\tilde{q}}]^T\begin{bmatrix}\varepsilon^2 I_n & \varepsilon I_n \\ \varepsilon I_n & I_n\end{bmatrix}[\tilde{q}; \dot{\tilde{q}}]}$$
The $2n \times 2n$ matrix in the above equation has only two distinct eigenvalues, $0$ and $\varepsilon^2 + 1$. The largest eigenvalue is an increasing function of $\varepsilon$, and since $\varepsilon \leq \varepsilon_0$ for all $\tilde{q}$, the following quantity is an upper bound for the eigenvalues:
$$b^2 = \varepsilon_0^2 + 1 \qquad (6.38)$$
Then,
$$\gamma(\tilde{x}) = \|s\| \leq b\|\tilde{x}\| \qquad (6.39)$$
The next step in the procedure is to estimate an upper bound on the rate of change of $\|e\|$. Notice that $e = \mu_s - \mu = \mu(t_i) - \mu$ for $t \in [t_i, t_{i+1})$ and each $i$, where $\mu$ is the data given in (6.33). Therefore, $\dot{e} = -\dot{\mu}$. Hence, we look at how the derivative of each of the components of $\mu$ (see (6.33)) can be bounded, starting with that of $\tilde{\theta}$. From (6.35), we see that
$$\frac{d\|\tilde{\theta}\|}{dt} \leq \|\Gamma^{-1}Y_s^T s\| \leq b\,\|\Gamma^{-1}Y_s^T\| \cdot \|\tilde{x}\| \qquad (6.40)$$
where (6.39) has been used to obtain the second inequality.
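The closed-form eigenvalue (6.36) can be cross-checked against a numerical eigendecomposition of the per-coordinate $2 \times 2$ block of the quadratic form defining $\alpha$. The parameter values below are illustrative.

```python
import numpy as np

def a_lower(eps, k1, k2):
    """Closed-form smaller eigenvalue (6.36) of the quadratic form
    defining alpha in (6.25)."""
    s = eps**2 * (k1 + k2)
    return (s + 4 * k1 - np.sqrt((s - 4 * k1)**2 + 16 * eps**2 * k1**2)) / 8.0

eps, k1, k2 = 0.2, 1.5, 4.0   # illustrative values
# per-coordinate 2x2 block of the quadratic-form matrix
Q = np.array([[eps**2 * (k1 + k2) / 4.0, eps * k1 / 2.0],
              [eps * k1 / 2.0,           k1            ]])
lam_min = np.linalg.eigvalsh(Q).min()
```

Agreement between `a_lower` and the numerical minimum eigenvalue confirms (6.36), and positivity of the result is the content of (6.37).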
The rate of change of the desired trajectory, and its derivatives is provided by Assumption (A6.2). In fact, only the constant d3 is required here, as seen in the following equation. d dt 2 6 6 6 6 6 6 4 kqdk k _qdk k qdk 3 7 7 7 7 7 7 5 = 2 6 6 6 6 6 6 4 0 1 0 0 0 1 0 0 0 3 7 7 7 7 7 7 5 2 6 6 6 6 6 6 4 kqdk k _qdk k qdk 3 7 7 7 7 7 7 5 + 2 6 6 6 6 6 6 4 0 0 d3 3 7 7 7 7 7 7 5 (6.41) 165 Next, from the robot dynamics (6.31) and the sampled data controller (6.32), the equation of motion can be written as M(q) q + C(q; _q) _q +G(q) = Y + (Ys ^s Y ) Kd _~qs Kp~qs = M(q) qd + C(q; ) _qd +G(q) + (Ys ^s Y ) Kd _~qs Kp~qs After rearranging terms we obtain M(q) ~q + C(q; _q) _~q + C(q; ~q) _qd = (Ys Y ) + Ys( ^s ^) + Ys~ Kd _~qs Kp~qs where ~ = ^ . Thus, ~q = M 1(q)[ C(q; _q) _~q C(q; ~q) _qd + (Ys Y ) + Ys( ^s ^) + Ys~ Kd _~qs Kp~qs] Then by assumption (A6.1) and Lemma 6.1, it follows that dk ~qk dt 1 Mm CMk _qkk _~qk+ CMk ~qkkqdk+ kLkkek+ kYskk~ k+Kdk _~qsk+Kpk~qsk (6.42) Now, we introduce two new variables. Let e~x and ed be the measurement error in the tracking error ~x and the rest of the data, respectively. That is, e~x = ~xs ~x (6.43) ed = [qd;s; _qd;s; qd;s; ^s] [qd; _qd; qd; ^] (6.44) Hence, e = [e~x; ed]. From (6.40)-(6.42) along with the facts that _e = _ and = s e, we see that d dt 2 6 6 4 ke~xk kedk 3 7 7 5 A 2 6 6 4 ke~xk kedk 3 7 7 5+B1 CMk _qkk _~qk Mm +B2 (6.45) 166 where A and B2 are matrices that depend on sampled data and system constants, and B1 = [1; 0]. Thus, for any nite s, the matrices A, B1 and B2 are nite. However, we still have a nonlinear term. To simplify the analysis let us consider a ball in R2n centred around ~xs. More speci cally, if we let Rs , k~xsk, then consider the ball de ned by Rhs , f~x : ke~xk hRsg for any arbitrary h > 0. On this set k _~qk (1 + h)Rs and hence it is possible to obtain a linear di erential equation as follows d dt 2 6 6 4 ke~xk kedk 3 7 7 5 A1 2 6 6 4 ke~xk kedk 3 7 7 5+B3; 8 e s.t. 
\[
\|e_{\tilde{x}}\| \le hR_s \qquad (6.46)
\]
where $A_1$ and $B_3$ depend on the sampled data and are finite for any finite $\chi_s$. At any given sampling instant $t_i$, $\chi_s = \chi = \chi(t_i)$. Hence, $\|e\| = 0$ at $t = t_i$ for each $i$. Therefore, by using the Comparison Lemma [45] and (6.46), it is possible to estimate the time it takes for $\|e_{\tilde{x}}\|$ to grow from $0$ to $hR_s$. Let this time be $T_1$. Therefore, (6.46) is useful for further analysis only over this time period. The triggering condition ensures that $\|e\| \le \sigma\delta(\tilde{x})/(\|L\|\gamma(\tilde{x}))$, and the inter-sample time is lower bounded by the time it takes $\|e\|$ to grow from $0$ to $\sigma\delta(\tilde{x})/(\|L\|\gamma(\tilde{x}))$. The estimation of this time can be simplified in the following way. On the set $\mathcal{B}_s^h = \{\tilde{x} : \|e_{\tilde{x}}\| \le hR_s\}$, $a(\tilde{q})$ attains a minimum, which we denote by $a_s^h$. Thus, on this set,
\[
\frac{\sigma\delta(\tilde{x})}{\|L\|\gamma(\tilde{x})} \ge \frac{\sigma a_s^h\|\tilde{x}\|^2}{\|L\|\, b\, \|\tilde{x}\|} = \frac{\sigma a_s^h\,\|\tilde{x}_s - e_{\tilde{x}}\|}{\|L\|\, b}. \qquad (6.47)
\]
Notice that this expression is well defined for all $\tilde{x} \ne 0$. Now let $T_2$ be the time defined as
\[
T_2 = \min\{(t - t_i) > 0 : b\|L\|\|e\| = \sigma a_s^h (\|\tilde{x}_s\| - \|e_{\tilde{x}}\|)\}. \qquad (6.48)
\]
This time $T_2$ can be found numerically or estimated analytically from (6.46). Then, the inter-sample time satisfies $t_{i+1} - t_i \ge \min\{T_1, T_2\}$ when $\chi_s = \chi(t_i)$. Clearly, this inter-sample time is greater than zero if $\tilde{x}_s \ne 0$. However, the analysis presented so far is not powerful enough to provide an explicit and non-conservative lower bound for inter-sample times over a region of interest. We believe numerical analysis would reveal such bounds much more efficiently. Note, however, that finding estimates of $T_1$ and $T_2$ for any given sampling point does not require exact knowledge of the robot parameters, which is a significant advantage from a practical perspective.

In the next section, we present a dynamic model of a two-link planar manipulator, on which we have performed simulations and conducted experiments.

6.4 Two Link Planar Manipulator

In this section we describe the dynamic model of a planar two-link revolute-joint arm, with both joints driven by motors mounted at the base. We choose this model because of a similar driving mechanism in the PHANToM Omni.
A schematic of the arm is shown along with the generalized coordinates in Figure 6.1. The $M(q)$, $C(q,\zeta)$ and $G(q)$ matrices can easily be found from the Euler–Lagrange equations, and are given as follows.

Figure 6.1: A schematic of a two-link planar revolute manipulator with the second link remotely driven from the base of Link 1.

\[
M(q) = \begin{bmatrix} m_1 l_{c1}^2 + m_2 l_1^2 + I_1 & m_2 l_1 l_{c2}\cos(q_2 - q_1) \\ m_2 l_1 l_{c2}\cos(q_2 - q_1) & m_2 l_{c2}^2 + I_2 \end{bmatrix}
\]
\[
C(q,\zeta) = \begin{bmatrix} 0 & -m_2 l_1 l_{c2}\sin(q_2 - q_1)\zeta_2 \\ m_2 l_1 l_{c2}\sin(q_2 - q_1)\zeta_1 & 0 \end{bmatrix}
\]
\[
G(q) = \begin{bmatrix} (m_1 l_{c1} + m_2 l_1)\, g \cos(q_1) \\ m_2 l_{c2}\, g \cos(q_2) \end{bmatrix}
\]
where $m_i$, $l_i$, $l_{ci}$ and $I_i$ are the mass, length, distance of the center of mass from the joint, and moment of inertia about the center of mass of the $i$th link, respectively. Thus, the regressor matrix is given as
\[
Y(q, \zeta, \dot{q}_d, \ddot{q}_d)^T =
\begin{bmatrix}
\ddot{q}_{d,1} & 0 \\
\ddot{q}_{d,2}\cos(q_2 - q_1) - \dot{q}_{d,2}\sin(q_2 - q_1)\zeta_2 & \ddot{q}_{d,1}\cos(q_2 - q_1) + \dot{q}_{d,1}\sin(q_2 - q_1)\zeta_1 \\
0 & \ddot{q}_{d,2} \\
\cos(q_1) & 0 \\
0 & \cos(q_2)
\end{bmatrix}
\]
and the vector of parameters is given as
\[
\theta = \begin{bmatrix} m_1 l_{c1}^2 + m_2 l_1^2 + I_1 \\ m_2 l_1 l_{c2} \\ m_2 l_{c2}^2 + I_2 \\ (m_1 l_{c1} + m_2 l_1) g \\ m_2 l_{c2} g \end{bmatrix} \qquad (6.49)
\]
The vector $L_Y$ is given as
\[
L_Y = \begin{bmatrix}
\rho_2(\phi_{s,1} + d_1\phi_{s,2}) + \rho_4 \\
\rho_2(\phi_{s,1} + d_1\phi_{s,2}) + \rho_5 \\
\rho_2 d_1 \\
\rho_2 d_1 \\
\rho_2\phi_{s,1} + \rho_4 \\
\rho_2\phi_{s,1} + \rho_5 \\
\rho_2(|\zeta_{s,1}| + d_1) \\
\rho_2(|\zeta_{s,2}| + d_1) \\
\rho_1 + \rho_2 \\
\rho_3 + \rho_2 \\
\mathbf{0}
\end{bmatrix} \qquad (6.50)
\]
\[
\phi_{s,1} = |\zeta_{s,1}\dot{q}_{d,s,1}| + |\zeta_{s,2}\dot{q}_{d,s,2}| + |\ddot{q}_{d,s,1}| + |\ddot{q}_{d,s,2}|, \qquad
\phi_{s,2} = \alpha_s(1 + 2|\tilde{q}_{s,1}| + 2|\tilde{q}_{s,2}|)
\]
where $\zeta_{s,k} = \dot{q}_{d,s,k} - \alpha_s\tilde{q}_{s,k}$ for $k = 1, 2$, $\mathbf{0}$ is a vector of zeros of appropriate dimension, and the $\rho_i$ are from (A6.4). As in Section 6.3, $L = L_Y + L_N + L_D$. Notice that most of the elements of these vectors are constants or easily computable functions of the sampled data. Of course, when a good model of the system is available (that is, the system parameters $\theta_i$ are known with good accuracy), the $\rho_i$ in the definition of $L_Y$ may simply be replaced with the $\theta_i$.
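A quick numerical sanity check of the linear-in-parameters property $Y(q,\zeta,\dot{q}_d,\ddot{q}_d)\theta = M(q)\ddot{q}_d + C(q,\zeta)\dot{q}_d + G(q)$ can be sketched as follows. This is an illustrative sketch, not code from the dissertation: the numerical values in `theta` are arbitrary placeholders, and the sign pattern of $C(q,\zeta)$ is the one reconstructed above.

```python
import numpy as np

# Arbitrary placeholder parameter vector theta (see (6.49))
theta = np.array([1.6e-3, 8.0e-4, 5.4e-4, 0.134, 0.057])

def M(q):
    c = np.cos(q[1] - q[0])
    return np.array([[theta[0],   theta[1]*c],
                     [theta[1]*c, theta[2]]])

def C(q, z):
    s = np.sin(q[1] - q[0])
    return np.array([[0.0,             -theta[1]*s*z[1]],
                     [theta[1]*s*z[0],  0.0]])

def G(q):
    return np.array([theta[3]*np.cos(q[0]), theta[4]*np.cos(q[1])])

def Y(q, z, qd_dot, qd_ddot):
    # 2x5 regressor such that Y @ theta = M(q) qd_ddot + C(q, z) qd_dot + G(q)
    c, s = np.cos(q[1] - q[0]), np.sin(q[1] - q[0])
    return np.array([
        [qd_ddot[0], c*qd_ddot[1] - s*z[1]*qd_dot[1], 0.0, np.cos(q[0]), 0.0],
        [0.0, c*qd_ddot[0] + s*z[0]*qd_dot[0], qd_ddot[1], 0.0, np.cos(q[1])]])

# Verify the identity at an arbitrary configuration
q = np.array([0.3, -1.2]); z = np.array([0.2, -0.1])
qd_dot, qd_ddot = np.array([0.4, 0.1]), np.array([-0.2, 0.5])
lhs = Y(q, z, qd_dot, qd_ddot) @ theta
rhs = M(q) @ qd_ddot + C(q, z) @ qd_dot + G(q)
assert np.allclose(lhs, rhs)
```

The check passes for any configuration, which is what makes the adaptive controller implementable without exact knowledge of the individual masses and lengths: only the lumped parameters in $\theta$ enter linearly.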
In the next section, we present the simulation and experimental results.

6.5 Results

In both the simulation and experimental results presented here, the position variables of the desired trajectory were chosen as
\[
q_{d,1} = -0.4(\cos(0.8t) - 1.1), \qquad q_{d,2} = -0.4(\cos(0.3\pi t) - 1) - (\pi/2).
\]
The signals $\dot{q}_d$, $\ddot{q}_d$ and $\dddot{q}_d$ were defined simply as the corresponding derivatives of $q_d$. The control gains and parameters were chosen as
\[
\alpha_0 = 0.7, \quad K_d = 0.03, \quad K_p = 0.7, \quad \sigma \in \{0.95,\, 0.6,\, 0.2\}, \quad \Gamma = \mathrm{diag}([30,\, 40,\, 50,\, 10,\, 10]^T), \quad d_2 = 0.5
\]
where $d_2$ is the uniform upper bound on $|\dot{q}_{d,1}|$ and $|\dot{q}_{d,2}|$. The initial condition of the robot was chosen as
\[
[q_1,\, q_2,\, \dot{q}_1,\, \dot{q}_2]^T(0) = [0,\, -\pi/2,\, 0,\, 0]^T.
\]

6.5.1 Simulation Results

In the simulations, the true robot parameters were assumed to be the following:
\[
m_1 = 0.065, \quad m_2 = 0.065, \quad I_1 = 10^{-5}, \quad I_2 = 10^{-5}, \quad l_1 = 0.14, \quad l_2 = 0.2, \quad l_{c,1} = 0.07, \quad l_{c,2} = 0.09, \quad g = 9.8,
\]
thus giving $\theta = [0.0016,\, 0.0008,\, 0.0005,\, 0.1338,\, 0.0573]$. In the first set of simulations, we avoid adaptation and show the effect of inaccurate knowledge of the parameters $\theta_i$. In this set of simulations, $\sigma = 0.95$ was chosen. Figure 6.2 shows the results when the controller has exact knowledge of the robot parameters (Figure 6.2(a)) and when the controller has inaccurate knowledge of the robot parameters (Figure 6.2(b)). These figures show the norm of the tracking error, $\|\tilde{x}\|$. In addition, the former figure also shows the measurement error, scaled such that it equals $\|\tilde{x}\|$ whenever equality holds in the triggering condition (6.27). In the first case, the norm of the tracking error converges to zero very quickly, while in the latter case the tracking error does not converge even after a long time. In the second case, the controller assumes the robot parameters

Figure 6.2: (a) Controller has exact knowledge of the robot parameters.
The figure shows the norm of the tracking error and the scaled measurement error. (b) Controller has inaccurate knowledge of the robot parameters. The figure shows the norm of the tracking error.

to be $\hat{\theta} = [0.0019,\, 0.0010,\, 0.0004,\, 0.1605,\, 0.0459]$, while the actual parameters were $\theta = [0.0016,\, 0.0008,\, 0.0005,\, 0.1338,\, 0.0573]$, which represents a $\pm 20\%$ error in each of the parameters.

For the simulations with adaptation, we first assumed
\[
\bar{\theta} = [0.0035,\, 0.0035,\, 0.002,\, 0.2,\, 0.1]^T, \qquad h_l = 10^{-8},
\]
where $h_l$ is a lower bound on $(\theta_1\theta_3 - \theta_2^2)$, which can easily be shown to be positive for a two-link manipulator. Using these quantities, $M_M$, $M_m$ and $C_M$ can be estimated as
\[
M_M = \frac{\bar{\theta}_1 + \bar{\theta}_3 + \sqrt{(\bar{\theta}_1 + \bar{\theta}_3)^2 - 4h_l}}{2}, \qquad
M_m = \frac{\bar{\theta}_1 + \bar{\theta}_3 - \sqrt{(\bar{\theta}_1 + \bar{\theta}_3)^2 - 4h_l}}{2}, \qquad
C_M = \bar{\theta}_2.
\]
Finally, the initial system parameter estimates were chosen as $\hat{\theta}(0) = [0.0001,\, 0.0001,\, 0.0001,\, 0.01,\, 0.001]^T$. The choice of such low initial values for $\hat{\theta}$ is motivated by the fact that the initial torques will be lower in the absence of knowledge of the system parameters. Figure 6.3 shows, for the cases of $\sigma = 0.6$ and $\sigma = 0.95$, the norm of the tracking error, $\|\tilde{x}\| = \|[\tilde{q};\, \dot{\tilde{q}}]\|$. As expected, the convergence is faster for the smaller value $\sigma = 0.6$. Figure 6.4 shows the desired and actual joint positions as functions of time. The observed minimum inter-update times and average frequencies in simulations are reported in Table 6.1.

Figure 6.3: The norm of the tracking error and the scaled measurement error. (a) $\sigma = 0.6$ (b) $\sigma = 0.95$

Table 6.1: The observed minimum inter-update times and average frequencies in simulations.

    σ       Observed minimum inter-update time (s)    Observed average frequency (Hz)
    0.6     0.0017                                    28
    0.95    0.0028                                    26.5

Next, we present the experimental results of the algorithm on a PHANToM Omni.
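As a side note, the numerical values of $\theta$ and the bounds $M_M$, $M_m$ above can be reproduced directly. The sketch below (illustrative, not code from the dissertation) recomputes $\theta$ from (6.49) with the simulation parameters and checks the identities that $M_M$ and $M_m$ satisfy as the roots of $\lambda^2 - (\bar{\theta}_1 + \bar{\theta}_3)\lambda + h_l = 0$.

```python
import math

# True physical parameters from Section 6.5.1
m1, m2, I1, I2 = 0.065, 0.065, 1e-5, 1e-5
l1, lc1, lc2, g = 0.14, 0.07, 0.09, 9.8

# theta from (6.49)
theta = [m1*lc1**2 + m2*l1**2 + I1,
         m2*l1*lc2,
         m2*lc2**2 + I2,
         (m1*lc1 + m2*l1)*g,
         m2*lc2*g]
print([round(t, 4) for t in theta])
# -> [0.0016, 0.0008, 0.0005, 0.1338, 0.0573]

# Bounds M_M, M_m from the assumed parameter upper bounds and h_l
tb1, tb3, h_l = 0.0035, 0.002, 1e-8
tr = tb1 + tb3
MM = (tr + math.sqrt(tr**2 - 4*h_l)) / 2
Mm = (tr - math.sqrt(tr**2 - 4*h_l)) / 2

# MM and Mm are the roots of x^2 - tr*x + h_l = 0, hence:
assert abs(MM + Mm - tr) < 1e-12
assert abs(MM * Mm - h_l) < 1e-12
assert 0 < Mm < MM
```

The root identities make the role of $h_l$ transparent: a smaller lower bound on $\det M(q)$ directly drives $M_m$ toward zero, which in turn inflates the bound (6.42).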
Figure 6.4: The desired joint positions and the actual positions of the robot. (a), (b) $\sigma = 0.6$; (c), (d) $\sigma = 0.95$.

6.5.2 Experimental Results

The PHANToM Omni, pictured in Figure 6.5, is a 6 degree-of-freedom robotic manipulator. It uses IEEE 1394 (FireWire) to communicate with a computer. OpenHaptics 3.0 [84] is an API that allows one to program the PHANToM Omni and to perform tasks such as reading the sensors and controlling the joint torques.

Figure 6.5: PHANToM Omni™

For the experiments presented here, only the second and third joints were kept active. The first joint was never actuated, and the remaining joints were either removed or constrained to a fixed position. Hence, this provides a simple test bed for the event-triggered controller developed in the previous sections. The OpenHaptics 3.0 API does not provide the capability to arbitrarily choose the sampling and control update instants. The API samples the sensors and updates the control torques at a roughly constant inter-tick period of 1 millisecond. Figure 6.6 shows the cumulative frequency distribution of the inter-tick times for a typical experiment. As can be seen, most of the ticks occur with a 1 millisecond period, i.e., at a frequency of 1000 Hz. Hence, in the experiments the event-triggering condition was checked at a roughly constant frequency of 1000 Hz.

The experimental results are presented in Figure 6.7. Joint 2 tracking is comparable to the simulation results, though with more error near the peaks. In the beginning of the experiment, the Joint 1 tracking error converges to zero faster than in the simulations.
This is because of the physical joint limits, due to which Joint 1 is at equilibrium at the beginning of the experiment.

Figure 6.6: The cumulative frequency distribution of the inter-tick times of the PHANToM Omni™.

On the other hand, in the simulation, joint limits are not considered, and hence Link 1 is in free fall at the beginning, which contributes to the sharp rise in the tracking error and the slightly slower convergence of the Joint 1 tracking error. In the experiments, there are also unmodeled factors, such as friction, which contribute to the persistent tracking error, especially near the peaks and troughs of $q_{d,1}$ and $q_{d,2}$.

The observed minimum inter-update times and average frequencies in the experiments are reported in Table 6.2. The observed minimum inter-update time is, however, partly determined by the roughly fixed sampling and control update frequency inherent in the PHANToM Omni system. Figure 6.8 shows the cumulative distribution of the control inter-update times. The maximum inter-update time was around 0.6 s and 0.98 s in the experiments with $\sigma = 0.6$ and $\sigma = 0.95$, respectively.

Figure 6.7: The desired joint positions and the actual positions of the robot. (a), (b) $\sigma = 0.6$; (c), (d) $\sigma = 0.95$.

6.6 Conclusions

A major drawback of the event-triggered control paradigm is that it requires an accurate model of the system, which is not always possible to obtain.
Motivated by this challenge to the practical utility of event-triggered control, we seek to design event-based adaptive controllers. In this chapter, an event-based implementation of an adaptive controller for trajectory tracking in robot manipulators has been presented.

Table 6.2: The observed minimum inter-update times and average frequencies in experiments.

    σ       Observed minimum inter-update time (s)    Observed average frequency (Hz)
    0.6     $9.8 \times 10^{-4}$                      50
    0.95    $9.8 \times 10^{-4}$                      34

Figure 6.8: The cumulative frequency distribution of the control inter-update times in the experiments. (a) $\sigma = 0.6$, (b) $\sigma = 0.95$

More precisely, an existing continuous-time adaptive controller from the literature was chosen, and an event-trigger was designed in a manner similar to that in [32] for trajectory tracking applications. Then, simulation and experimental results on a two-link planar manipulator were presented, demonstrating the efficacy of the algorithm. Both the simulation and experimental results demonstrate the promise that event-based algorithms hold in robotic applications. Future work will include improving the event-triggering and adaptation mechanisms to obtain better results, as well as the numerical analysis necessary for estimating the inter-update times.

Chapter 7

Conclusions

This dissertation is motivated by the need to design efficient sampled data controllers through utility driven event-triggering. Much of the existing literature in the area is applicable to fixed-point stabilization under full state feedback. This dissertation explores a few important classes of problems where only imperfect information, of different kinds, is available. The dissertation is broadly divided into three parts.
The first part of the dissertation is utility driven event-triggering under partial state information. Much of the existing literature on event-triggered control assumes the availability of full state information to the event-trigger. This assumption fails to be satisfied in two very important scenarios: decentralized control systems and dynamic output feedback control. The first scenario is addressed in Chapter 2, wherein a control system with distributed sensors and a central controller is considered. The decentralized sensors together are assumed to sense the complete state of the system; however, they transmit data to the central controller intermittently and asynchronously, at time instants determined by local utility driven event-triggers. We were able to approach this problem with less restrictive assumptions than in some of the references. Unlike in the literature, we were also able to guarantee semi-global asymptotic stability for nonlinear systems and global asymptotic stability for linear systems without the sensors having to listen to the controller. However, in the nonlinear case the design is conservative. Thus, we also proposed a modification wherein the sensors occasionally receive updates from the controller. Chapter 3 addressed the scenario where a system inherently lacks full state feedback and an output feedback dynamic (for example, observer based) controller has to be used. This chapter is concerned solely with Multi Input Multi Output (MIMO) Linear Time Invariant (LTI) systems. This problem naturally extends to the case where the sensors are decentralized and not co-located with the controller. In this chapter, we in fact progress from a centralized architecture, where the sensors, controller and actuators are co-located, to a fully decentralized control system - a Sensor-Controller-Actuator Network (SCAN). Again, unlike in the existing literature, we were able to guarantee global asymptotic stability.
Even in the most general of the architectures considered in this chapter, the Sensor-Controller-Actuator Network (SCAN), the assumptions on the system matrices are fairly simple. In the future, the ideas used in these two chapters will be utilized to design schemes to decentralize sophisticated centralized event-triggers. The second part expands the definition of utility driven sampling to include sampling in both time and space. The fields of event-triggered control and coarsest quantization have very similar motivations, although they are aimed at "coarse sampling" in time and space, respectively. In Chapter 4, we exploit the common principle behind the two fields - robustness/tolerance to measurement errors - to design implicitly verified discrete-event emulation based controllers for asymptotic stabilization of general nonlinear systems. In comparison to the coarsest quantization literature, our quantizer design holds for general multi-input nonlinear continuous-time systems. The third part is on utility driven sampled data control for trajectory tracking. Tracking a time varying trajectory, or even a set-point, is of tremendous practical importance in many control applications. In these applications, the goal is to make the state of the system follow a reference or desired trajectory, which is usually specified as an exogenous input to the system. In Chapter 5, a method for designing utility driven event-triggered controllers for trajectory tracking in nonlinear systems is proposed. In Chapter 6, we propose a utility driven sampled data implementation of an adaptive controller for trajectory tracking in robot manipulators. This is motivated by the fact that, commonly, utility driven event-triggered controllers such as the one presented in Chapter 5 rely on the knowledge of an accurate model of the system. However, building a model of high accuracy is a time consuming process and, in many cases, it may not even be possible.
Therefore, it is important to extend the design of implicitly verified event based controllers to cases where only a poor model of the system is available. In this work, we propose an event-triggered emulation of an adaptive controller from the existing literature.

Bibliography

[1] P. Ellis. Extension of phase plane analysis to quantized systems. IRE Transactions on Automatic Control, 4(2):43–54, 1959.
[2] R. Dorf, M. Farren, and C. Phillips. Adaptive sampling frequency for sampled-data control systems. IRE Transactions on Automatic Control, 7(1):38–47, 1962.
[3] A.M. Phillips and M. Tomizuka. Multirate estimation and control under time-varying data sampling with applications to information storage devices. In American Control Conference, volume 6, pages 4151–4155, 1995.
[4] D. Seto, J.P. Lehoczky, L. Sha, and K.G. Shin. On task schedulability in real-time control systems. In IEEE Real-Time Systems Symposium, page 13. IEEE Computer Society, 1996.
[5] W.P.M.H. Heemels, R.J.A. Gorter, A. van Zijl, P.P.J. van den Bosch, S. Weiland, W.H.A. Hendrix, and M.R. Vonder. Asynchronous measurement and control: a case study on motor synchronization. Control Engineering Practice, 7(12):1467–1482, 1999.
[6] K.J. Åström and B.M. Bernhardsson. Comparison of Riemann and Lebesgue sampling for first order stochastic systems. In 41st IEEE Conference on Decision and Control, Las Vegas, USA, Dec 2002.
[7] K.J. Åström and B. Bernhardsson. Systems with Lebesgue sampling. In Anders Rantzer and Christopher Byrnes, editors, Directions in Mathematical Systems Theory and Optimization, volume 286 of Lecture Notes in Control and Information Sciences, pages 1–13. Springer Berlin / Heidelberg, 2003.
[8] D. Hristu-Varsakelis and P.R. Kumar. Interrupt-based feedback control over a shared communication medium. In 41st IEEE Conference on Decision and Control, volume 3, pages 3223–3228, 2002.
[9] P. Tabuada and X. Wang. Preliminary results on state-triggered scheduling of stabilizing control tasks.
In 45th IEEE Conference on Decision and Control, pages 282–287, 2006.
[10] K.-E. Årzén. A simple event-based PID controller. In Preprints 14th World Congress of IFAC, Beijing, P.R. China, Jan 1999.
[11] M. Miskowicz. The event-triggered sampling optimization criterion for distributed networked monitoring and control systems. In IEEE International Conference on Industrial Technology, volume 2, pages 1083–1088, 2003.
[12] J.H. Sandee. Event-driven control in theory and practice. PhD thesis, Technische Universiteit Eindhoven, Eindhoven, Dec 2006.
[13] Ernesto Kofman and Julio H. Braslavsky. Level crossing sampling in feedback stabilization under data-rate constraints. In IEEE Conference on Decision and Control, pages 4423–4428. IEEE, 2006.
[14] P. Tabuada. Event-triggered real-time scheduling of stabilizing control tasks. IEEE Transactions on Automatic Control, 52(9):1680–1685, 2007.
[15] W.P.M.H. Heemels, J.H. Sandee, and P.P.J. van den Bosch. Analysis of event-driven controllers for linear systems. International Journal of Control, 81(4):571–590, 2008.
[16] K.J. Åström. Event based control. In Alessandro Astolfi and Lorenzo Marconi, editors, Analysis and Design of Nonlinear Control Systems: In Honor of Alberto Isidori, pages 127–147. Springer Berlin Heidelberg, 2008.
[17] X. Wang and M.D. Lemmon. Event design in event-triggered feedback control systems. In 47th IEEE Conference on Decision and Control, pages 2105–2110, 2008.
[18] X. Wang and M.D. Lemmon. Self-triggered feedback control systems with finite-gain L2 stability. IEEE Transactions on Automatic Control, 54:452–467, 2009.
[19] X. Wang and M.D. Lemmon. Self-triggering under state-independent disturbances. IEEE Transactions on Automatic Control, 55(6):1494–1500, 2010.
[20] Manel Velasco, Pau Martí, and Enrico Bini. On Lyapunov sampling for event-driven controllers. In IEEE Conference on Decision and Control and Chinese Control Conference, pages 6238–6243. IEEE, 2009.
[21] J. Lunze and D. Lehmann.
A state-feedback approach to event-based control. Automatica, 46(1):211–215, 2010.
[22] M.D. Lemmon. Event-triggered feedback in control, estimation, and optimization. In Alberto Bemporad, Maurice Heemels, and Mikael Johansson, editors, Networked Control Systems, volume 406 of Lecture Notes in Control and Information Sciences, pages 293–358. Springer Berlin / Heidelberg, 2011.
[23] Peter J.G. Ramadge and W. Murray Wonham. The control of discrete event systems. Proceedings of the IEEE, 77(1):81–98, 1989.
[24] A. Anta and P. Tabuada. Self-triggered stabilization of homogeneous control systems. In American Control Conference, pages 4129–4134, 2008.
[25] M. Lemmon, T. Chantem, X.S. Hu, and M. Zyskowski. On self-triggered full-information H-infinity controllers. HSCC 2007, 4416:371, 2007.
[26] A. Anta and P. Tabuada. Space-time scaling laws for self-triggered control. In 47th IEEE Conference on Decision and Control, pages 4420–4425, 2008.
[27] M. Mazo Jr., A. Anta, and P. Tabuada. On self-triggered control for linear systems: Guarantees and complexity. In European Control Conference, 2009.
[28] A. Anta and P. Tabuada. To sample or not to sample: Self-triggered control for nonlinear systems. IEEE Transactions on Automatic Control, 55(9):2030–2042, 2010.
[29] P. Tallapragada and N. Chopra. Event-triggered dynamic output feedback control for LTI systems. In IEEE Conference on Decision and Control, pages 6597–6602, 2012.
[30] P. Tallapragada and N. Chopra. Event-triggered decentralized dynamic output feedback control for LTI systems. In Estimation and Control of Networked Systems, volume 3, pages 31–36, 2012.
[31] P. Tallapragada and N. Chopra. On co-design of event trigger and quantizer for emulation based control. In American Control Conference, pages 3772–3777, 2012.
[32] P. Tallapragada and N. Chopra. On event triggered trajectory tracking for control affine nonlinear systems.
In IEEE Conference on Decision and Control and European Control Conference, pages 5377–5382, 2011.
[33] P. Tallapragada and N. Chopra. On event triggered tracking for nonlinear systems. IEEE Transactions on Automatic Control. Accepted.
[34] W.P.M.H. Heemels, Karl Henrik Johansson, and P. Tabuada. An introduction to event-triggered and self-triggered control. In IEEE Conference on Decision and Control, pages 3270–3285. IEEE, 2012.
[35] M. Mazo Jr. and M. Cao. Decentralized event-triggered control with asynchronous updates. In IEEE Conference on Decision and Control and European Control Conference, pages 2547–2552, 2011.
[36] M. Mazo Jr. and M. Cao. Decentralized event-triggered control with one bit communications. In IFAC Conference on Analysis and Design of Hybrid Systems, pages 52–57, 2012.
[37] Manuel Mazo Jr. and Ming Cao. Asynchronous decentralized event-triggered control. arXiv preprint arXiv:1206.6648v1 [math.OC], 2012.
[38] X. Wang and M. Lemmon. Event-triggering in distributed networked systems with data dropouts and delays. Hybrid Systems: Computation and Control, pages 366–380, 2009.
[39] X. Wang and M.D. Lemmon. Event triggering in distributed networked control systems. IEEE Transactions on Automatic Control, 56(3):586–601, 2011.
[40] M.C.F. Donkers and W.P.M.H. Heemels. Output-based event-triggered control with guaranteed L∞-gain and improved and decentralized event-triggering. IEEE Transactions on Automatic Control, 57(6):1362–1376, 2012.
[41] M.C.F. Donkers and W. Heemels. Output-based event-triggered control with guaranteed L∞-gain and improved event-triggering. In IEEE Conference on Decision and Control, pages 3246–3251, 2010.
[42] D. Lehmann and J. Lunze. Event-based output-feedback control. In Mediterranean Conference on Control & Automation, pages 982–987, 2011.
[43] L. Li and M. Lemmon. Weakly coupled event triggered output feedback control in wireless networked control systems.
In Annual Allerton Conference on Communication, Control, and Computing, pages 572–579, 2011.
[44] J. Almeida, C. Silvestre, and A.M. Pascoal. Observer based self-triggered control of linear plants with unknown disturbances. In American Control Conference, pages 5688–5693, 2012.
[45] H.K. Khalil. Nonlinear Systems. Prentice Hall, third edition, 2002.
[46] M. Mazo Jr. and P. Tabuada. Decentralized event-triggered control over wireless sensor/actuator networks. IEEE Transactions on Automatic Control, 56(10):2456–2461, 2011.
[47] G.C. Walsh and H. Ye. Scheduling of networked control systems. IEEE Control Systems Magazine, 21(1):57–65, 2001.
[48] L. Li and M. Lemmon. Event-triggered output feedback control of finite horizon discrete-time multi-dimensional linear processes. In IEEE Conference on Decision and Control, pages 3221–3226, 2010.
[49] F. Dörfler, F. Pasqualetti, and F. Bullo. Continuous-time distributed observers with discrete communication. IEEE Journal of Selected Topics in Signal Processing, 2013.
[50] G.N. Nair, F. Fagnani, S. Zampieri, and R.J. Evans. Feedback control under data rate constraints: an overview. Proceedings of the IEEE, 95(1):108–137, 2007.
[51] W.S. Wong and R.W. Brockett. Systems with finite communication bandwidth constraints. II. Stabilization with limited information feedback. IEEE Transactions on Automatic Control, 44(5):1049–1053, 1999.
[52] G.N. Nair and R.J. Evans. Stabilization with data-rate-limited feedback: tightest attainable bounds. Systems & Control Letters, 41(1):49–56, 2000.
[53] S. Tatikonda and S. Mitter. Control under communication constraints. IEEE Transactions on Automatic Control, 49(7):1056–1068, 2004.
[54] C. De Persis. n-bit stabilization of n-dimensional nonlinear systems in feedforward form. IEEE Transactions on Automatic Control, 50(3):299–311, 2005.
[55] Q. Ling, M.D. Lemmon, and H. Lin. Asymptotic stabilization of dynamically quantized nonlinear systems in feedforward form.
Journal of Control Theory and Applications, 8(1):27–33, 2010.
[56] R.W. Brockett and D. Liberzon. Quantized feedback stabilization of linear systems. IEEE Transactions on Automatic Control, 45(7):1279–1289, 2000.
[57] D. Liberzon. Hybrid feedback stabilization of systems with quantized signals. Automatica, 39(9):1543–1554, 2003.
[58] D. Liberzon and J.P. Hespanha. Stabilization of nonlinear systems with limited information feedback. IEEE Transactions on Automatic Control, 50(6):910–915, 2005.
[59] D. Liberzon. Quantization, time delays, and nonlinear stabilization. IEEE Transactions on Automatic Control, 51(7):1190–1195, 2006.
[60] C. De Persis and A. Isidori. Stabilizability by state feedback implies stabilizability by encoded state feedback. Systems & Control Letters, 53(3-4):249–258, 2004.
[61] Daniel Lehmann and Jan Lunze. Event-based control using quantized state information. In Estimation and Control of Networked Systems, pages 1–6, 2010.
[62] Lichun Li, Xiaofeng Wang, and Michael Lemmon. Stabilizing bit-rates in quantized event triggered control systems. In International Conference on Hybrid Systems: Computation and Control, pages 245–254, 2012.
[63] N. Elia. Design of hybrid systems with guaranteed performance. In 39th IEEE Conference on Decision and Control, volume 1, pages 993–998, 2000.
[64] N. Elia and S.K. Mitter. Stabilization of linear systems with limited information. IEEE Transactions on Automatic Control, 46(9):1384–1400, 2001.
[65] N. Elia and E. Frazzoli. Quantized stabilization of two-input linear systems: a lower bound on the minimal quantization density. Hybrid Systems: Computation and Control, pages 335–349, 2002.
[66] M. Fu and L. Xie. The sector bound approach to quantized feedback control. IEEE Transactions on Automatic Control, 50(11):1698–1711, 2005.
[67] C.Y. Kao and S.R. Venkatesh. Stabilization of linear systems with limited information multiple input case. In American Control Conference, volume 3, pages 2406–2411, 2002.
[68] H. Haimovich and M.M. Seron. On infimum quantization density for multiple-input systems. In IEEE Conference on Decision and Control and European Control Conference, pages 7692–7697, 2005.
[69] H. Haimovich, M.M. Seron, and G.C. Goodwin. Geometric characterization of multivariable quadratically stabilizing quantizers. International Journal of Control, 79(8):845–857, 2006.
[70] H. Haimovich and M.M. Seron. Multivariable quadratically-stabilizing quantizers with finite density. Automatica, 44(7):1880–1885, 2008.
[71] J. Liu and N. Elia. Quantized feedback stabilization of non-linear affine systems. International Journal of Control, 77(3):239–249, 2004.
[72] F. Ceragioli and C. De Persis. Discontinuous stabilization of nonlinear systems: Quantized and switching controls. Systems & Control Letters, 56(7-8):461–473, 2007.
[73] D. Liberzon. Switching in Systems and Control. Birkhäuser, 2003.
[74] R. Goebel, R. Sanfelice, and A. Teel. Hybrid dynamical systems. IEEE Control Systems Magazine, 29(2):28–93, 2009.
[75] H. Yu and P.J. Antsaklis. Event-triggered real-time scheduling for stabilization of passive and output feedback passive systems. In American Control Conference, pages 1674–1679, 2011.
[76] Eloy Garcia and Panos J. Antsaklis. Parameter estimation in time-triggered and event-triggered model-based control of uncertain systems. International Journal of Control, 85(9):1327–1342, 2012.
[77] Toivo Henningsson and Anton Cervin. Comparison of LTI and event-based control for a moving cart with quantized position measurements. In European Control Conference, 2009.
[78] A. Camacho, P. Martí, M. Velasco, C. Lozoya, R. Villa, J.M. Fuertes, and E. Griful. Self-triggered networked control systems: an experimental case study. In IEEE International Conference on Industrial Technology, pages 123–128. IEEE, 2010.
[79] Sebastian Trimpe and Raffaello D'Andrea. An experimental demonstration of a distributed and event-based state estimation algorithm.
In IFAC World Congress, pages 8811–8818, 2011.
[80] D. Lehmann and J. Lunze. Extension and experimental evaluation of an event-based state-feedback approach. Control Engineering Practice, 19(2):101–112, 2011.
[81] H. Berghuis, R. Ortega, and H. Nijmeijer. A robust adaptive controller for robot manipulators. In IEEE International Conference on Robotics and Automation, pages 1876–1881, 1992.
[82] M.W. Spong, S. Hutchinson, and M. Vidyasagar. Robot Modeling and Control. John Wiley & Sons, Inc., New York, 2006.
[83] J.-J.E. Slotine and W. Li. Adaptive manipulator control: A case study. IEEE Transactions on Automatic Control, 33(11):995–1003, 1988.
[84] SensAble Technologies. OpenHaptics Toolkit 3.0, Jan 2009. http://www.sensable.com/products-openhaptics-toolkit.htm.