ABSTRACT

Title of Dissertation: TOWARDS AUTONOMOUS VERTICAL LANDING ON SHIP-DECKS USING COMPUTER VISION

Abhishek Shastry, Doctor of Philosophy, 2022

Dissertation Directed by: Professor Anubhav Datta and Professor Inderjit Chopra, Department of Aerospace Engineering

The objective of this dissertation is to develop and demonstrate autonomous ship-board landing with computer vision. The problem is hard primarily due to the unpredictable stochastic nature of deck motion. The work involves a fundamental understanding of how vision works, what is needed to implement it, how it interacts with aircraft controls, the necessary and sufficient hardware and software, how it differs from human vision, its limits, and finally its avenues of growth in the context of aircraft landing.

The ship-deck motion dataset is provided by the U.S. Navy. This data is analyzed to gain fundamental understanding and is then used to replicate stochastic deck motion in a laboratory setting on a six degrees of freedom motion platform, also called a Stewart platform. The method uses a shaping filter derived from the dataset to excite the platform. An autonomous quadrotor UAV is designed and fabricated for experimental testing of vision-based landing methods. The entire structure, avionics architecture, and flight controls for the aircraft are developed in-house. This provides the flexibility and fundamental understanding needed for this research.

A fiducial-based vision system is first designed for detection and tracking of the ship-deck. This is then utilized to design a tracking controller with the best possible bandwidth to track the deck with minimum error. Systematic experiments are conducted with static, sinusoidal, and stochastic motions to quantify the tracking performance. A feature-based vision system is designed next. Simple experiments are used to quantitatively and qualitatively evaluate the superior robustness of feature-based vision under various degraded visual conditions: (1) partial occlusion, (2) illumination variation, (3) glare, and (4) water distortion. The weight and power penalties for using feature-based vision are also determined.

The results show that it is possible to autonomously land on a ship-deck using computer vision alone. An autonomous aircraft can be constructed with only an IMU and visual odometry software running on a stereo camera. The aircraft then needs only a monocular, global shutter, high frame rate camera as an extra sensor to detect the ship-deck and estimate its relative position. The relative velocity, however, needs to be derived by running a Kalman filter on the position signal. For the filter, knowledge of the disturbance/motion spectrum is not needed; a white noise disturbance model is sufficient. For control, a minimum bandwidth of 0.15 Hz is required. For vision, a fiducial is not needed; a feature-rich landing area is all that is required. The limits of the algorithm are set by occlusion (80% tolerable), illumination (20,000 lux to 0.01 lux), angle of landing (up to 45 degrees), the 2D nature of features, and motion blur. Future research should extend the capability to 3D features and the use of event-based cameras. Feature-based vision is more versatile and human-like than fiducial-based vision, but at the cost of 20 times higher computing power, which is increasingly available with modern processors. The goal is not an imitation of nature but to derive inspiration from it and overcome its limitations.
The feature-based landing opens a window towards emulating the best of human training and cognition, without its burden of latency, fatigue, and divided attention.

TOWARDS AUTONOMOUS VERTICAL LANDING ON SHIP-DECKS USING COMPUTER VISION

by Abhishek Shastry

Dissertation submitted to the Faculty of the Graduate School of the University of Maryland, College Park in partial fulfillment of the requirements for the degree of Doctor of Philosophy, 2022

Advisory Committee:
Professor Anubhav Datta, Chair/Advisor
Professor Inderjit Chopra, Co-Advisor
Professor Robert Sanner
Professor Mumu Xu
Professor Tobias Von Petersdorff

© Copyright by Abhishek Shastry 2022

Dedicated to all the square pegs in round holes.

Acknowledgments

Although it is difficult to acknowledge everyone who has touched my life over the past five years and in some way helped make this dissertation possible, I will give it a try. First and foremost, I am grateful to my wonderful advisor Dr. Anubhav Datta, without whose help and patience I would not have made it through grad school. I must mention, though, that I was scared of him in the beginning, not knowing what kind of person he was and fearing the worst. But he proved otherwise. He was a constant source of support both inside and outside academics and made my student life much easier than it would have been otherwise. He pushed me to the boundary of my comfort zone for my optimal growth both as a human being and as a researcher. I have learned so much from him, not only about science but also about problem-solving and different new ways of thinking. His enthusiasm for taking on the hardest problems and getting to the bottom of them in the spirit of gaining new knowledge is infectious, and I will always carry a part of him with me wherever I go.

I would also like to thank my second advisor, Prof. Inderjit Chopra, for his guidance and for letting me take on this challenging problem for my Ph.D. dissertation. I am grateful to him for being patient with me whenever I disagreed with him and for patiently offering kind and helpful suggestions throughout the last five years. It is impressive how, after so many years, he still maintains the enthusiasm and curiosity of a child. I should also thank Prof. Sanner. Many of the ideas in this dissertation have been inspired by his teachings and the discussions that I have had with him.

My housemates for the past five years, Mrinal, Nikhil, Shashank, Animesh, and Koushik, deserve special mention for their help and support, without which these five years might have been torturous. I had many productive discussions with them, especially with my brother Animesh, whose research overlaps with this dissertation topic. A big thanks to my seniors from the Alfred Gessow Rotorcraft Center, Bharath, Ananth, Dan, Elizabeth, Tyler, Thomas, Lex, Fred, Brandyn, Wanyi, and Vera. They made sure I felt welcomed and a part of the group. I am also grateful for all the wonderful friendships I have developed over the past five years with my colleagues, Seyhan, Ravi, Wanyi, Amy, Emily, James, Katie, Brent, Eric, Ola, Matt, Victoria, and everyone else in the rotorcraft center. All the time that I have spent with them will surely be cherished.

I owe unending gratitude to my parents. Nothing I have ever done would have been possible without their sacrifices. And the love of my life, Lea, who has always been there for me with all her love and support and has pushed me to finally complete this dissertation.
I am thankful to the Naval Air Systems Command (NAVAIR) for financially supporting this research and to all the people at the organization, including Jim Bumbaugh, Daniel Schafer, and Peter Arslanian, who continuously gave valuable feedback to improve my methods. I am also thankful to Dr. Mahendra Bhagwat for his support and the many productive discussions I have had with him throughout my Ph.D. life. Lastly, I would like to thank all my committee members for their valuable advice, which has greatly improved the quality of the research in this dissertation.

Table of Contents

Dedication
Acknowledgements
Table of Contents
List of Tables
List of Figures
List of Abbreviations
List of Notations

Chapter 1: Introduction
1.1 Brief History of Ship-Deck Landing
1.2 Motivation
1.3 Computer Vision
1.4 Vision for Ship-Deck Landing
1.5 Autonomous Ship-Deck Landing
1.6 Objectives of the Dissertation
1.7 Technical Approach and Organization of the Dissertation

Chapter 2: Ship-Deck Motion
2.1 Types of ship-deck motion
2.2 Sea States
2.3 Deck Motion Dataset
2.4 Construction of Shaping Filter
2.5 Hardware to replicate deck motion
2.6 Van Loan's algorithm to discretize linear system
2.7 Procedure to replicate deck motion
2.8 Summary

Chapter 3: Fabrication of Quadrotor UAV
3.1 Overview
3.2 Vehicle Structure
3.3 Propulsion System
3.4 Avionics
3.5 Sensors
3.5.1 Inertial Measurement Unit (IMU)
3.5.2 Visual Inertial Odometry (VIO) sensor
3.5.3 Monocular Camera
3.6 Computers
3.6.1 Flight controller
3.6.2 Flight Computer
3.7 Communication Protocols
3.7.1 UART
3.7.2 I2C
3.7.3 DShot
3.7.4 USB
3.8 Assembled Aircraft
3.9 Flight Control System
3.9.1 Attitude Control
3.9.2 Translational Control
3.9.3 Control mixer
3.10 Filtering and State-estimation
3.11 Kalman Filters
3.11.1 For angular velocity
3.11.2 For attitude
3.12 Low Pass Filter
3.13 Summary

Chapter 4: Fiducial-Based Vision System
4.1 Camera
4.2 Adjusting camera focus
4.3 Camera Calibration
4.4 Fiducial-based vision algorithm
4.5 Verification of Vision using Vicon
4.6 Velocity Estimation From Vision Measurements
4.6.1 Low pass filter on numerical differentiation
4.6.2 Kalman filter with white disturbance model
4.6.3 Kalman filter with colored disturbance model
4.7 Summary and Conclusions

Chapter 5: Motion Tracking Performance With Fiducial-Based Vision
5.1 Experimental Setup
5.2 Tracking of Static Platform
5.3 Tracking a Platform Oscillating at 0.075 Hz
5.4 Tracking a Platform Oscillating at 0.2 Hz
5.5 Tracking a Platform Oscillating at 0.33 Hz
5.6 Explanation Using Linear Systems Theory
5.7 Tracking a platform with stochastic motion
5.8 Landing strategy
5.9 Summary

Chapter 6: Feature-Based Vision and Robustness
6.1 Problems with fiducial based vision
6.2 Feature-Based Vision
6.3 New Feature-based vision algorithm
6.4 Feature Engineering
6.5 Accuracy and precision of vision measurements
6.6 Effect of occlusion on the vision system
6.7 Effect of ambient illumination on the vision system
6.8 Effect of motion blur on the vision system
6.9 Effect of glare and water distortion
6.10 Effect of camera resolution on the vision system
6.11 Effect of processors
6.12 Summary

Chapter 7: Summary and Conclusions
7.1 Summary
7.2 Conclusions
7.3 Recommendations for Future Work

Bibliography

List of Tables

2.1 Douglas Sea-scale
2.2 Stewart Platform Specifications
3.1 Aircraft Properties
3.2 Inner loop control gains
3.3 Outer loop control gains
4.1 Camera properties
6.1 Effect of using different processors for the vision system

List of Figures

1.1 Birth of naval aviation with the first ship landing performed by Eugene Ely on USS Pennsylvania anchored in San Francisco Bay (Photo from Naval History and Heritage Command)
1.2 First naval rotorcraft XOP-1, similar to the one used by Alfred M. Pride, pictured in flight over the aircraft carrier USS Langley in 1931 (Photo from Naval History and Heritage Command)
1.3 A rare color picture of the first naval VTOL aircraft, USCG R-4/HNS-1, landing on a ship near Floyd Bennett Field in Brooklyn, New York (Photo from USCG/Stewart Graham Collection, Vertical Magazine)
1.4 Helicopter Hauldown and Rapid Securing Device (HHRSD), also known as beartrap
1.5 Sea King helicopter CH-SS2 hauled down by a beartrap to a landing on the deck of the Canadian ship Assiniboine in May 1967
1.6 First digital image
1.7 Hubel and Wiesel experiment to understand the visual cortex
1.8 Standard vs. event camera outputs; standard camera pixels synchronously capture the entire scene at a specified frame rate, while an event camera detects only changes in the scene, with every pixel firing asynchronously
1.9 Different types of landing pads proposed in the vision literature for ship-deck landing
1.10 SNC's UCARS system for autonomous ship-deck landing of unmanned helicopters
1.11 Autonomous flight testing of the Airbus DeckFinder system
2.1 6 types of ship motion
2.2 Sample ship-deck motion time series from the Navy-provided dataset
2.3 Fourier transform of deck acceleration; different colors are from different time histories
2.4 Power spectral density of the deck's sway acceleration
2.5 Linear time invariant system with stochastic wide sense stationary (WSS) input gives stochastic WSS output
2.6 6-DOF Stewart motion platform used to replicate ship-deck type motion in a lab environment
2.7 A sample stochastic deck motion trajectory generated by commanding the Stewart platform through the designed shaping filter and measured using a VICON system
3.1 Quadrotor skeleton made from square aluminum tubes
3.2 Quadrotor frame ready to be mounted with propulsion and avionics components
3.3 iFlight 2207 2750Kv motor used on the vehicle; configuration: 12N14P
3.4 Electronic speed controller running BLHeli32 firmware
3.5 Experimental setup used to measure thrust and torque in hover
3.6 Experimentally measured variation of thrust and torque with RPM
3.7 Power vs. RPM
3.8 Motor efficiency vs. RPM
3.9 Aircraft's avionics architecture; DShot, UART, and I2C are three different digital communication protocols used on the aircraft
3.10 Sensors on the quadrotor vehicle
3.11 Primary computing elements on the quadrotor vehicle
3.12 The in-house fabricated quadrotor aircraft
3.13 Body axes and direction of rotation of rotors; Z is positive down
3.14 Control architecture for vision-based autonomous tracking and landing on a stochastically moving platform
3.15 Inner and outer loop step response of quadrotor along the X-axis; response along other axes is similar; the response time verifies the time-scale separation of attitude and translational dynamics assumed in the control design
3.16 Effective outer loop control diagram
3.17 Bode plot of the closed outer loop control system T(s) for the X and Y axes
3.18 Large noise in measurements by accelerometers
3.19 Fourier transform of the measurements shows the large noise is due to 1/rev vibration from the rotors
4.1 Monocular global shutter camera with fixed focus and auto-exposure; made by Econsystem; product number: See3CAM 24CUG
4.2 Pinhole camera model
4.3 Checkerboard pattern used for camera calibration
4.4 Detected checkerboard points in camera image; also visible are the barrel and pincushion distortions from the lens
4.5 Fiducial 308 of the original Aruco marker library is used for the vision-based landing system
4.6 Fiducial-based vision system for deck detection and estimation of its pose
4.7 Vicon system
4.8 Comparison of vision and Vicon measurements for a static deck
4.9 Comparison of vision and Vicon measurements for an oscillating deck
4.10 Velocity estimation using numerical differentiation shows large spikes
4.11 Velocity estimation using a Kalman filter is less noisy but shows a lag
4.12 Modified power spectral density function with peak at 0.34 Hz
4.13 Comparison of the two Kalman filters for vision-based velocity estimation of a simple oscillating ship-deck
4.14 Validation of Kalman filter estimates of quadrotor velocity relative to the ship-deck using Vicon
5.1 Experimental setup for studying motion tracking performance
5.2 Tracking of the stationary ship-deck by the quadrotor
5.3 Tracking of a stationary deck by the quadrotor; box shows the extent of the landing platform
5.4 Quadrotor hovering over the stationary deck
5.5 Tracking performance of the quadrotor for a deck oscillating at 0.075 Hz
5.6 Tracking of a deck oscillating at 0.075 Hz
5.7 Video frames showing quadrotor tracking of the deck oscillating at 0.075 Hz
5.8 Tracking performance of the quadrotor for a deck oscillating at 0.2 Hz
5.9 Tracking of a deck oscillating at 0.2 Hz
5.10 Video frames showing quadrotor tracking of the deck oscillating at 0.2 Hz
5.11 Tracking performance of the quadrotor for a deck oscillating at 0.33 Hz
5.12 Tracking of a deck oscillating at 0.33 Hz
5.13 Video frames showing quadrotor tracking of the deck oscillating at 0.33 Hz
5.14 Linear time invariant system with oscillatory input produces an oscillatory output whose magnitude and phase are modulated by the transfer function of the system
5.15 Effective outer loop control block diagram
5.16 Outer loop Bode plot showing predicted attenuation and phase lag in quadrotor oscillations; prediction matches experiments except for magnitude at 0.33 Hz
5.17 Tracking performance of the quadrotor for the platform with ship-deck type motion
5.18 Effective outer loop control block diagram
5.19 Autonomous landing strategy for ship-deck landing
5.20 Error plot with chosen limits to start and abort descent
5.21 Video frames showing quadrotor tracking and landing on a platform with stochastic ship-deck type motion
6.1 Sample failure cases for the fiducial-based vision system
6.2 Features are image patches with special invariant properties that make them detectable across a wide variety of different images
6.3 Feature-based vision algorithm
6.4 The vision algorithm can track any planar target as long as the target is rich in features; lines show matching of each individual feature from model to scene image
6.5 Landing pad design with a large number of features for improved robustness
6.6 Roll and pitch estimation using the new vision system; truth values are obtained using a digital protractor
6.7 X-Y position estimation using the new vision system; truth values are obtained using a ruler
6.8 Deck distance estimation using the new vision system
6.9 Noise in vision estimation increases with decrease in the number of observed features; the noise multiplier is the factor by which the standard deviation of the measurements increases in comparison to the baseline at a large (> 120) number of features
6.10 Vision system works under occlusion as high as 80%; note that features of even singly occluded tags are also detected by the vision algorithm
6.11 Vision system can detect the pad on a bright day (20,000 lux illumination), with lighting variation created by shadows on the pad
6.12 Vision system can detect the pad even in a dark room of only 0.01 lux illumination
6.13 Maximum visible distance decreases for illumination lower than 10 lux
6.14 Motion blur can distort features, leading to their non-detection and failure of the vision system
6.15 New vision system can detect the landing pad even with glare
6.16 Water distortion increases noise in detection of the landing pad
6.17 Increasing image resolution increases the maximum distance from which the landing pad can be observed, with the side effect of a decrease in the algorithm's speed
7.1 Larger Stewart platform in the process of being procured by the University of Maryland
7.2 View from the cockpit during an actual ship-deck landing shows that pilots use 3D features of the ship rather than those of the deck for manual estimation and tracking of the ship position
7.3 YP patrol boats from the Navy can be used for real-life testing of autonomous ship-deck landing methods
List of Abbreviations

ABS Acrylonitrile Butadiene Styrene
CNN Convolutional Neural Network
COTS Commercial Off The Shelf
CPU Central Processing Unit
CRC Cyclic Redundancy Check
CV Computer Vision
DOF Degree of Freedom
DShot Digital Shot
ESC Electronic Speed Controller
FFT Fast Fourier Transform
FOV Field Of View
I2C Inter-Integrated Circuit
IC Integrated Circuit
IMU Inertial Measurement Unit
LQE Linear Quadratic Estimator
MAV Micro Air Vehicle
MAVLink Micro Air Vehicle Link
MEMS Micro-Electro-Mechanical System
ONR Office of Naval Research
OS Operating System
PID Proportional Integral Derivative
PnP Perspective n Point
PWM Pulse Width Modulation
RANSAC Random Sample Consensus
ROS Robot Operating System
SBC Single Board Computer
SCONE Systematic Characterization Of the Naval Environment
SIFT Scale Invariant Feature Transform
SNR Signal to Noise Ratio
SWaP Size Weight and Power
UART Universal Asynchronous Receiver and Transmitter
UAS Unmanned Aerial System
UAV Unmanned Aerial Vehicle
USB Universal Serial Bus
VFS Vertical Flight Society
VTOL Vertical Take-off and Landing

List of Notations

C_T Thrust coefficient
C_Q Torque coefficient
I_xx, I_yy, I_zz Quadrotor's moments of inertia
K_p, K_i, K_d PID gains
R Blade radius
S Power spectral density
T(s) Closed-loop tracking transfer function
c_x, c_y Camera optical center
f Camera focal length
k_1, k_2, k_3 Camera distortion coefficients
k_m Motor constant
u, v 2D image coordinates
v_k Measurement noise
w_k Process disturbance
ρ Air density
Ω Rotor RPM
σ Blade solidity

Chapter 1: Introduction

This chapter introduces the topic of this dissertation. It begins with a history of ship-deck landing and then explains the motivation behind the research topic. This is followed by a history of computer vision and a review of prior work on the use of vision for ship-deck landing and on autonomous ship-deck landing methods. Next, it states the objectives of the research and the technical approach followed to achieve those objectives. Finally, it describes the organization of the rest of the dissertation.

1.1 Brief History of Ship-Deck Landing

As we walk into the age of automation, when all but high-level decision making may be delegated to machines, there are still tasks that are extremely difficult to automate. Landing on a ship-deck is one such task. The task is a formidable one even for navy pilots, and yet it forms the basis of many critical operations in naval aviation. The first aircraft to be used for naval operations were seaplanes, and the first ship-deck landing was performed by Eugene Burton Ely with a Curtiss pusher biplane on board the USS Pennsylvania, anchored in San Francisco Bay, on January 18, 1911 [1]. It used a braking system of sandbags and ropes, a direct precursor to the modern arrester-hook and wire system used on aircraft carriers.

Figure 1.1: Birth of naval aviation with the first ship landing performed by Eugene Ely on USS Pennsylvania anchored in San Francisco Bay (Photo from Naval History and Heritage Command)

The first rotary-wing aircraft landing on a ship, however, would not happen until 20 years later, when an autogiro called the Pitcairn XOP-1 (PCA-2), piloted by Lt. Alfred Melville Pride, took off from and landed on the aircraft carrier USS Langley while it was underway, on September 23, 1931 [2]. However, an autogiro's rotor is not powered, and the aircraft cannot take off or land vertically. Hence, the first powered vertical landing on a ship is credited to
Lt. Stewart Ross Graham, flying a Sikorsky YR-4B on board the British ship SS Daghestan in the North Atlantic on January 16, 1944, in high winds [2]. This was also the first rotary-wing landing on a small ship with a single flight deck. This was important, as it allowed the Navy to operate helicopters from smaller combat ships such as cruisers, destroyers, and frigates, instead of only from giant aircraft carriers, which have a larger landing area and are more stable due to their larger mass.

Figure 1.2: First naval rotorcraft XOP-1, similar to the one used by Alfred M. Pride, pictured in flight over the aircraft carrier USS Langley in 1931 (Photo from Naval History and Heritage Command)

Figure 1.3: A rare color picture of the first naval VTOL aircraft, USCG R-4/HNS-1, landing on a ship near Floyd Bennett Field in Brooklyn, New York (Photo from USCG/Stewart Graham Collection, Vertical Magazine)

Helicopters are now a mainstay of naval operations, but only when the conditions on the flight deck are favorable. In bad weather or high sea-states, they remain grounded. In the late 1950s, the Royal Canadian Navy invented a helicopter hauldown and rapid securing device (HHRSD), also known as the beartrap. This system was cleared for service in 1967 for both day and night operations, up to 30 degrees of roll and 9 degrees of pitch in Sea State 6 [3]. It was so successful that it was replicated by other navies around the world, including the United States, and is still in use today. In the typical functioning of the system, a cable drops from the helicopter and is attached by the deck crew to a heavier cable passing through the center of the beartrap and attached to a winch below the deck. The cable is then pulled back and secured to the helicopter. Then, when the landing safety officer determines the conditions to be favorable, he instructs the pilot to descend and increases the tension in the cable to pull the helicopter down and attach it to the deck by closing the beartrap. Once the helicopter is secured to the deck, the beartrap is also used to move the aircraft in and out of the hangar using rail lines on the deck.

Figure 1.4: Helicopter Hauldown and Rapid Securing Device (HHRSD), also known as beartrap

Figure 1.5: Sea King helicopter CH-SS2 hauled down by a beartrap to a landing on the deck of the Canadian ship Assiniboine in May 1967

The beartrap system improves helicopter ship-deck landing in two ways:

1. It helps keep the helicopter aligned with the center of the ship-deck while the helicopter is hovering over it.
2. It helps in handling the landed helicopter by securing it to the deck and preventing it from falling over.

In some modern modified versions of the beartrap, such as the Aircraft Ship Integrated Secure and Traverse (ASIST) system, the first function is performed 'wirelessly' by an electro-optical system. In the case of ASIST, laser beacons placed on the helicopter are used to track the aircraft from an optical tracking device on the ship. The precise relative position is then communicated to the pilot through visual cues, who then uses them to better align the helicopter with the ship-deck. However, once landed, a beartrap-like mechanical system is still used to secure the aircraft to the ship. The benefit of using the electro-optical system for aligning the aircraft is that the pilot is always in complete control of the helicopter.
In a Japanese version of the system, produced by Mitsubishi Heavy Industries, a laser beam from the ship is reflected off a mirror placed on the helicopter to calculate the relative position of the aircraft with respect to the ship.

1.2 Motivation

The modern modifications to the beartrap make it clear that the original beartrap landing system has deficiencies and needs to be changed. The tethered-wire approach to aligning the helicopter with the ship-deck is hazardous for the helicopter pilot, as he gives up control over his aircraft. Moreover, the system needs personnel on deck for its operation, which leads to the risk of man overboard, especially during bad weather and high sea-states. Furthermore, as automation in the cockpit increases, ship-deck landing needs to be automated to achieve a completely autonomous naval helicopter. Finally, with the recent rise of unmanned aerial vehicles (UAVs), an autonomous ship-deck landing system is necessary for the use of these aircraft in naval aviation. All of these form the overarching motivation behind this research, which aims to develop a vision-based autonomous system for vertical landing on a ship-deck.

The vision-based approach to autonomously aligning the aircraft with the ship-deck seeks to replace the tethered-wire approach of the beartrap. Deck handling of the landed aircraft, however, is still expected to be done with the mechanical securing device of the beartrap. The idea is that, with precise vision-based measurements, the helicopter autonomously tracks and lands at the center of the beartrap, which then closes and secures the aircraft to the ship. The vision-based autonomous landing approach is envisioned to be similar to a pilot flying under visual flight rules, but with more precise measurements of the relative position between the aircraft and the ship, and with faster response due to the absence of the inherent delays present in human action. The next section goes into detail on the history and methods of computer vision.

1.3 Computer Vision

Computer vision aims to replace the functionality of the human eye, but with greater precision and speed. Similar to the eye, the primary goal here is to recover useful information about the world from the output of a light-sensitive sensor called a camera. The useful information to be extracted depends on the actual application, but typically it is recognizing the structure of the world and the objects present in it. The history of computer vision is primarily an effort in this direction, with different breakthroughs over time establishing four different paradigms, or approaches, to the same problem.

Figure 1.6: First digital image

The field was born when the first digital image was produced in 1957. Many algorithms for manipulating these images were soon developed. Most of these were aimed at photographic enhancement. But soon, in the 1960s, researchers turned their attention to extracting edges and contours from images, using them to derive shapes, and then inferring more complicated patterns and objects present in the images. This was the first paradigm of vision: the method of deriving progressively complicated structures from underlying simpler ones. A good summary of work from this early period can be found in [4] and [5]. A common misconception of the era was that solving vision should be easy since humans do it all the time. A famous story of the time is that in 1966 Marvin Minsky, a professor at MIT, asked his undergraduate student Gerald Jay Sussman to "spend the summer linking a camera to a computer and getting the computer to describe what it saw" [6].
It is now known that the problem is far more difficult.

Figure 1.7: Hubel and Wiesel experiment to understand the visual cortex

The most important work for computer vision during these early years, however, came from another field. David Hubel and Torsten Wiesel, two scientists working in the field of neurophysiology, described in detail, through a series of experiments on cats between 1959 and 1969, how the signals from the eye are processed by the visual cortex [7]. Their groundbreaking work earned them the 1981 Nobel Prize in medicine. And, as is shown later, their work directly inspired two different modern paradigms of computer vision: (1) feature-based vision and (2) the convolutional neural network (CNN).

The rest of the 20th century saw steady improvements in both low-level vision, i.e., extraction of image primitives such as edges, contours, and corners, and high-level vision, i.e., detecting more complicated shapes and objects in images. Geometry-based methods to obtain camera motion and the 3D structure of the environment from multiple images were also developed during this period, as were the popular optical flow methods [8, 9]. These works are summarized in [10-12]. An important work of the era, one that led to the second paradigm of computer vision, was the development of the neocognitron. The neocognitron was a multilayered artificial neural network proposed by Kunihiko Fukushima in 1980 for pattern recognition [13]. It was heavily inspired by the previous work on the visual cortex by Hubel and Wiesel, and it in turn inspired the development of the modern deep convolutional neural networks (CNNs) used in computer vision.

The first decade of the 21st century saw the development of image point-feature extraction algorithms. The well-known paper by Lowe [14] established this third paradigm of computer vision. It uses point features, instead of lines and shapes, for higher-level vision tasks such as object detection or 3D reconstruction of a scene. Lowe's work was also partly inspired by the earlier work on the visual cortex by Hubel and Wiesel.

The second decade of the century saw the re-emergence of neural networks in computer vision with the influential paper by Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton [15]. The paper developed a deep CNN architecture that significantly outperformed the state of the art in visual recognition. The depth of the network was necessary to achieve this result. This work was inspired by previous work by Yann LeCun [16], who pioneered the use of the backpropagation algorithm to train neural networks. Yann LeCun's work was in turn inspired by the neocognitron work of Fukushima. Deep neural networks have now completely overtaken the field of computer vision, primarily due to their very high success rate in almost all vision tasks compared to any other conventional method developed over the previous half century of vision research. However, there is little to no explanation as to why these deep neural networks are better, or why they even work in the first place, which limits their use in aerospace. A good summary of these modern works in computer vision can be found in Ref. [17].

While the CNN has come to dominate the field in the last decade, a new paradigm of computer vision has also developed quietly during this time. Unlike the previous ones, this one has occurred in vision hardware. Researchers have now realized that traditional frame-based cameras are made for photography, which has different requirements than vision.
As a result, researchers at the University of Zurich, trying to replicate the human eye, invented the event-based camera in 2008 [18]. This new type of camera has no shutter and is always on, but it records only changes in the intensity of light. A change in intensity received by a pixel is called an event, and the changes are recorded asynchronously as they happen. This is quite different from frame-based cameras, which record the total energy of light received by all the pixels synchronously over a specified exposure time. Event-based cameras record dynamic phenomena, and as such their output is not pleasant to the eye, but they have significant benefits over frame-based cameras, such as low latency, low power, low data bandwidth, and high dynamic range. However, conventional vision algorithms cannot be used with them, and hence they require a complete re-formulation of computer vision algorithms. This has been slow due to the low quantity and high cost of event cameras in the market, which in turn is due to low consumer demand. A good survey of event-based vision can be found in Ref. [19].

Figure 1.8: Standard vs. event camera outputs; standard camera pixels synchronously capture the entire scene at a specified frame rate, while an event camera detects only changes in the scene, with every pixel firing asynchronously.

1.4 Vision for Ship-Deck Landing

Computer vision has already found applications in many areas, such as facial recognition in surveillance and law enforcement, automatic license plate reading in traffic management, 3D medical imaging and diagnostics in medicine, and automated sorting and monitoring of crops in agriculture. In aerospace, vision has found use in the analysis of satellite images, the inspection of aircraft parts, and in star trackers and earth sensors for satellite attitude determination and navigation. The benefits of vision as a low-cost and low-SWaP sensor are obvious. Hence, researchers have recently started exploring the use of vision to detect and track the ship-deck. One of the earliest works in this area was by Sanchez Lopez [20] in 2008, who proposed a method to detect the H marker on helipads. This approach has more recently morphed into the detection and use of custom-designed fiducials [21, 22], which are more reliable to detect. This fiducial-based vision is one of the methods used in the current dissertation. Other approaches include using colored LEDs on the deck [23], custom-designed colored objects on the deck [24, 25], and using other sensors such as lidar [26, 27] or acoustic sensors [28] in addition to, or as a replacement for, vision. All of these previous works fall under the first paradigm of computer vision, which detects image primitives such as color, lines, and contours in images and uses them to detect more complicated shapes and objects. The reason they have not been widely used in practice is that they are not robust to environmental degradation and lighting conditions. This dissertation begins with fiducial-based vision, but then develops a feature-based vision system for deck landing, which falls under the third paradigm of computer vision discussed earlier. It is found that this approach is far more robust.

Figure 1.9: Different types of landing pads proposed in the vision literature for ship-deck landing: (a) landing pad with helipad marker; (b) landing pad with red LEDs; (c) landing area constructed with red cones; (d) landing pad with a vision fiducial

1.5 Autonomous Ship-Deck Landing

Landing on the deck of a small ship is hard because of three important problems:

1. Turbulent wake from the ship's structures
2. Confined landing area
3. Stochastic deck motion

Landing a helicopter in turbulence or in a confined area is quite common even on land. Hence, of the three factors, the one unique to ship-deck landings is the deck motion. The beartrap was invented primarily to tackle this problem. This problem is also the focus of this dissertation.

In recent years, an autonomous landing method for unmanned air vehicles (UAVs) has been released by Sierra Nevada Corporation (SNC). The system, called UAV Common Automatic Recovery System (UCARS), consists of a radio transponder attached to the UAV and a tracking device on the ship (see Fig. 1.10). The tracking device localizes the UAV with respect to the ship with the help of a microwave radar and a beacon on the airborne transponder, and then guides it to the center of the deck. The system is currently in use on the Northrop Grumman Fire Scout UAV.

Figure 1.10: SNC's UCARS system for autonomous ship-deck landing of unmanned helicopters: (a) transponder to be placed on the UAV; (b) ground tracker to be placed on the ship-deck

More recently, Airbus has developed a similar system for local positioning of helicopters with respect to the ship-deck, called DeckFinder. It consists of three airborne and six ground units that use RF signals to triangulate and localize the aircraft, similar to GPS.

Figure 1.11: Autonomous flight testing of the Airbus DeckFinder system

Both the SNC UCARS and the Airbus DeckFinder are pure localization systems and neglect deck motion. Hence, they can only be used when the deck is calm and in low sea-states, which does not solve the main problem. Moreover, they use active communication, which may not be desirable if the ships want to maintain a low profile. This dissertation develops a vision-based method for localization and tracking of the ship-deck using only a single passive sensor: a monocular camera.

There is literature on autonomous ship-deck landing research, but most of it deals with simulation [29, 30] and only a few works with actual test data [31]. They have proposed various non-linear adaptive techniques, but none have provided a clear demonstration. Experimental demonstration is where the real challenge is encountered: sensor noise, which thwarts any technique that tries to dynamically adapt to deck motion without adapting to noise.

Other works on autonomous ship-deck landing pertain to the prediction of quiescent periods of deck motion, during which the helicopter can descend and land on the deck. To this end, some use an empirical formulation called an energy index, which is a function of deck motion and is used to predict when a quiescent deck period might be imminent. Such a system is often called a Landing Period Designator (LPD) [32, 33] or Landing Period Indicator (LPI) [34]. A few others try to predict the deck motion into the future by fitting a parametric model to past values [35]. The present work does not try to predict, but instead searches for a quiescent period using vision-measured deck motion. Recent studies that have used small-scale experiments on this topic can be found in Refs. [36, 37]. They have ignored the stochasticity of deck motion in their efforts, which is important and is one of the distinguishing aspects of this dissertation.

1.6 Objectives of the Dissertation

The objectives of this dissertation are the following:

1. Develop an experimental setup to produce stochastic ship-deck type motion.
2. Fabricate an in-house 3.75-lb quadrotor platform to establish the minimal sensor and computing requirements for vision-only autonomy.
3. Develop a ship-deck tracking/landing method using fiducial-based vision.
4. Experimentally evaluate the fiducial-based landing method.
5. Develop a 2D-feature-based vision system for ship-deck detection and tracking.
6. Systematically compare feature-based vision with fiducial-based vision, from the robustness of the algorithm to the overhead weight and power penalty.

1.7 Technical Approach and Organization of the Dissertation

The first objective is achieved by developing a shaping-filter-based method to generate stochastic ship-deck type motion from a white noise input. This method is then used to control the motion of commercially procured robotic hardware called a Stewart platform. The hardware consists of six parallel actuators that together control the 6 degrees of freedom of a rigid platform. The description of the platform and the detailed derivation of the shaping-filter-based method to excite the platform into ship-deck type motion are provided in Chapter 2.

For the second objective, an autonomous 3.75-lb quadrotor UAV is fabricated in-house with 3 sensors and 2 flight computers. The aircraft serves as a testing platform for the fiducial-vision-based landing method. A detailed description of the fabrication process, the avionics architecture, and the control algorithms for the aircraft is provided in Chapter 3.

To achieve the third objective, a method to estimate both the position and velocity of the ship-deck using fiducial-based vision is developed. The results are validated using measurements from Vicon. The description and validation of the fiducial-based vision system are provided in Chapter 4.

For the fourth objective, systematic experiments are carried out to study the tracking performance of the quadrotor UAV using fiducial-based vision. Ground truth values for the motion of both the quadrotor and the Stewart platform are obtained using Vicon. The details of the experiments and the results are provided in Chapter 5.

The fifth and sixth objectives are achieved by developing a new feature-based vision system and evaluating its robustness through systematic experiments in different environmental conditions, such as partial occlusion, illumination variation, glare, and water distortion. In addition, the weight and power penalties for using feature-based vision are studied by evaluating its performance on different flight computers. The details are provided in Chapter 6.

Chapter 2: Ship-Deck Motion

This chapter focuses on ship-deck motion and methods to replicate it in the lab at a small scale. It starts with a description of the different types of ship-deck motion and sea-states, followed by a description of the deck motion dataset obtained from the Navy. Next, it describes the construction of a shaping filter from the dataset to generate stochastic ship-deck motion. After that, the hardware used to replicate the deck motion is described. This is followed by Van Loan's algorithm to discretize a linear system and the procedure followed to replicate the deck motion in the laboratory setting.

2.1 Types of ship-deck motion

Ship-deck motion can be broadly categorized into six types: 3 translations and 3 rotations. The translational motion along the longitudinal axis of the ship is called surge, while the translational motion along the lateral axis is called sway. The translational motion perpendicular to these axes is called heave, which is the raising or lowering of the ship-deck.
The rotational motion about the longitudinal axis is called roll (positive to the right), that about the lateral axis is called pitch (nose down positive), and that about the axis perpendicular to these is called yaw (nose left positive). These motions for a ship-deck are depicted in Fig. 2.1.

Figure 2.1: 6 types of ship motion

Table 2.1: Douglas Sea-scale

State   Wave Height   Characteristic
0       0 m           Calm (Glassy) Sea
1       0-0.1 m       Calm (Rippled) Sea
2       0.1-0.5 m     Smooth Sea
3       0.5-1.25 m    Slight Sea
4       1.25-2.5 m    Moderate Sea
5       2.5-4 m       Rough Sea
6       4-6 m         Very Rough Sea
7       6-9 m         High Sea
8       9-14 m        Very High Sea
9       >14 m         Phenomenal Sea

2.2 Sea States

Sea states are numbers used to characterize the roughness of the sea. They are part of the Douglas sea scale, devised in 1921 by Capt. H.P. Douglas of the Royal Navy for ship navigation. The scale is summarized in Table 2.1.

Ship-deck motion is stochastic, as the exact forces applied by sea waves to the ship cannot be pre-determined. However, the statistical properties of these forces can be inferred from available datasets. These properties can then be used to simulate the deck motion in the laboratory environment. This is the approach followed here.

2.3 Deck Motion Dataset

A standard deck motion dataset for a generic surface combatant, representative of a DDG-51 destroyer, was provided by NAVAIR. The dataset contained 30 time histories in 3 different sea-states for the ship traveling in a straight line along its surge direction. A sample snapshot of the time history of deck motion from the dataset is presented in Fig. 2.2. Motion histories for all six degrees of freedom are shown.

Figure 2.2: Sample ship-deck motion time series from the Navy-provided dataset; panels (a)-(f) show surge, sway, heave, roll, pitch, and yaw over a 300 s record

The dataset, when converted to the frequency domain, provides unique insights. Figure 2.3 shows the Fourier amplitude vs. frequency of the deck's sway acceleration at three different sea-states (Sea State 2-3, Sea State 4-5, and Sea State 7). Each color in the figure is from a single time history. It is observed from the figure that the primary effect of increasing sea-state is an increase in the amplitude of deck acceleration. The frequencies always remain between 0.1-0.3 Hz. This frequency-domain information is used in the next section to construct a shaping filter, which is used later to replicate deck motion in the laboratory.

Figure 2.3: Fourier transform of deck acceleration; different colors are from different time histories
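To make the frequency-domain analysis above concrete, the following is a minimal sketch of how a single-sided Fourier amplitude spectrum, like the ones in Fig. 2.3, can be computed from one sampled sway-acceleration time history. The array contents, sampling rate, and record length are hypothetical placeholders, not values from the Navy dataset; the dissertation's actual processing may differ.

```python
import numpy as np

# Hypothetical inputs: one sway-acceleration time history, sampled at Fs Hz.
Fs = 10.0                              # assumed sampling frequency, Hz
t = np.arange(0.0, 300.0, 1.0 / Fs)    # 300 s record
accel = np.random.randn(t.size)        # placeholder for measured sway acceleration, m/s^2

N = accel.size
X = np.fft.rfft(accel)                 # one-sided discrete Fourier transform
freq = np.fft.rfftfreq(N, d=1.0 / Fs)  # corresponding frequencies, Hz
amplitude = np.abs(X) / N              # normalized Fourier amplitude

# For the real dataset, the energy would concentrate in the 0.1-0.3 Hz band.
band = (freq >= 0.1) & (freq <= 0.3)
print("Peak frequency: %.3f Hz" % freq[np.argmax(amplitude[1:]) + 1])
print("Amplitude fraction in 0.1-0.3 Hz band: %.2f"
      % (amplitude[band].sum() / amplitude[1:].sum()))
```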
2.4 Construction of Shaping Filter

This section deals with the construction of a shaping filter from the Navy-provided dataset. The shaping filter is essentially a linear system that takes a white noise input and generates ship-deck motions similar to those in the dataset.

To begin, we first analyze the power spectrum of the deck motion. The power spectrum of a signal is the average power per unit frequency present in the signal, as a function of frequency. The power spectrum of a signal from an ergodic random process is the expected power spectrum of the process itself. Additionally, according to the Wiener-Khinchin theorem, this is the same as the Fourier transform of the process's autocorrelation function.

So, let X(t) be a continuous signal generated by an ergodic random process. Then the power spectrum S(ω) of the signal, as well as the expected power spectrum of the random process, is given by

S(\omega) = \lim_{T \to \infty} \frac{1}{T} \left| \hat{X}_T(\omega) \right|^2   (2.1)

where \hat{X}_T(\omega) is the Fourier transform of X_T(t), defined as

X_T(t) \triangleq \begin{cases} X(t) & \text{if } -T/2 \le t \le T/2 \\ 0 & \text{otherwise} \end{cases}

Now suppose we only have a set of N discrete measurements of the signal X(t), spaced \Delta t apart. Then the power spectrum can be re-written in discrete form as

S(\omega_k) \approx \frac{\Delta t^2}{N \Delta t} |X(\omega_k)|^2 = \frac{1}{F_s N} X(\omega_k) \bar{X}(\omega_k)   (2.2)

where X(\omega_k) = \sum_{n=0}^{N-1} X(t_n) e^{-i n \omega_k \Delta t} is the discrete Fourier transform of the discrete measurements X(t_n) of the signal, F_s = 1/\Delta t is the sampling frequency, and \omega_k = 2\pi k/(N \Delta t) for k = 0 to N-1 are the N discrete circular frequencies at which the power spectrum values are obtained.

Equation 2.2 is used to calculate the power spectral density of the deck's sway-acceleration time series data for Sea State 4-5 (Fig. 2.3b). Figure 2.4 shows this calculated power spectral density for the different time series, represented by different colors. Also shown in the figure is an analytical function approximation of the power spectrum, which has the equation

S(\omega) = \frac{K \omega^6}{\omega^8 - 1.2 \omega^4 + 1}   (2.3)

The gain K of this power spectrum function is a parameter that controls its amplitude and hence can be used to represent different sea-states.

Figure 2.4: Power Spectral Density of deck's sway acceleration

Now, according to the theory of linear systems and stochastic processes, a stochastic wide sense stationary (WSS) input signal to a linear time invariant system produces a stochastic WSS output signal, with the power spectra related by

S_{yy}(\omega) = |G(j\omega)|^2 S_{uu}(\omega)   (2.4)

Figure 2.5: Linear time invariant system with stochastic wide sense stationary (WSS) input gives stochastic WSS output

Now, if the input signal to the linear system is white noise with unit intensity, i.e., S_{uu}(\omega) = 1, the output is

S_{yy}(\omega) = |G(j\omega)|^2 = G(j\omega) G(-j\omega)   (2.5)

Hence, the power spectrum function found above in Eq. 2.3 can be thought of as generated by a linear system with white noise of unit intensity as input. A spectral factorization of the power spectrum is performed to recover the linear system:

S(\omega) = G(j\omega) G(-j\omega), \qquad G(j\omega) = \frac{\sqrt{K} (j\omega)^3}{(j\omega)^4 + 2.406 (j\omega)^3 + 2.8944 (j\omega)^2 + 2.406 (j\omega) + 1}   (2.6)

where G(j\omega) is chosen so as to contain the stable poles and to have minimum order. Lifting the factor G(j\omega) into the Laplace domain, by simply replacing j\omega with s, results in the following transfer function, also called a shaping filter:

G(s) = \frac{\sqrt{K} s^3}{s^4 + 2.406 s^3 + 2.8944 s^2 + 2.406 s + 1}   (2.7)

Converting it to state space results in

\dot{x} = \begin{bmatrix} 0 & 0 & 0 & -1 \\ 1 & 0 & 0 & -2.406 \\ 0 & 1 & 0 & -2.8944 \\ 0 & 0 & 1 & -2.406 \end{bmatrix} x + \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} u, \qquad q = \begin{bmatrix} 0 & 0 & 0 & 1 \end{bmatrix} x   (2.8)

where u is white noise with intensity K. Equation 2.8 is the shaping filter that drives the ship-deck sway with a white noise input. It should be noted that its output is the deck's sway acceleration; the output should therefore be fed into a double integrator to get the deck sway position. Physically, Eq. 2.8 represents a model of the stochastic forces acting on the ship-deck.
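As an illustration of Eq. 2.8 in use, the sketch below simulates the shaping filter with a white-noise input and double-integrates the resulting sway acceleration to obtain a synthetic deck position history. This is a minimal sketch under the continuous model of Eq. 2.8; the intensity K and duration are illustrative, and the actual implementation in this work discretizes the system exactly with Van Loan's algorithm (Sec. 2.6) rather than using cumulative sums.

```python
import numpy as np
from scipy import signal

K = 81.0          # assumed white-noise intensity, cm^2/s^3 (value used later in Sec. 2.7)
dt = 0.01         # time step, s
t = np.arange(0.0, 300.0, dt)

# Shaping filter of Eq. 2.8 (output = sway acceleration)
A = np.array([[0, 0, 0, -1.0],
              [1, 0, 0, -2.406],
              [0, 1, 0, -2.8944],
              [0, 0, 1, -2.406]])
B = np.array([[0.0], [0.0], [0.0], [1.0]])
C = np.array([[0.0, 0.0, 0.0, 1.0]])
D = np.array([[0.0]])

# Discrete samples approximating continuous white noise of intensity K
rng = np.random.default_rng(0)
u = rng.normal(scale=np.sqrt(K / dt), size=t.size)

_, accel, _ = signal.lsim((A, B, C, D), u, t)

# Double integration: acceleration -> velocity -> position; the s^3 numerator
# of G(s) means the position spectrum has a zero at DC, keeping it bounded.
vel = np.cumsum(accel) * dt
pos = np.cumsum(vel) * dt   # synthetic stochastic sway position, cm
```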
2.5 Hardware to replicate deck motion

A Stewart platform is used to replicate ship-deck motion in the lab at a small scale. The platform is a parallel manipulator that uses six electric linear actuators to control the complete 6-DOF motion of a horizontal platform representing the ship-deck. The platform used in this research was procured from Acrome Robotics and is shown in Fig. 2.6. Its specifications are presented in Table 2.2.

Figure 2.6: 6-DOF Stewart motion platform used to replicate ship-deck type motion in the lab environment

Table 2.2: Stewart Platform Specifications

Parameter               Value
Gross mass              5.2 kg
Platform diameter       30 cm
Linear travel range     +/- 10 cm
Angular travel range    +/- 30 degrees
Max speed (no load)     50 cm/s
Max force               280 N (static)

The discrete dynamic equations required to excite this platform to simulate ship-deck motion are derived next.

2.6 Van Loan's algorithm to discretize linear system

Van Loan's algorithm with zero-order hold is used in this chapter and throughout this dissertation to discretize continuous linear time-invariant systems. The method followed in these cases is described here. Any continuous linear time-invariant system has the following form:

\dot{x} = Ax + Bu + B_1 w

where x is the system's state, u is the control, w is white noise of intensity Q, and A, B, B_1 are time-invariant matrices. From linear systems theory, the above system can be discretized into a discrete linear system of the form:

x_{k+1} = F_k x_k + G_k u_k + w_k

where x_k and x_{k+1} are the states of the system at time steps t_k and t_{k+1} respectively, u_k is the control input assumed to be held fixed between the two time steps, and w_k is a Gaussian random variable with variance Q_k. The matrices F_k, G_k, and Q_k are related to the continuous system by the following equations:

F_k = e^{A(t_{k+1} - t_k)}    (2.9)

G_k = \int_{t_k}^{t_{k+1}} e^{A(t_{k+1} - \sigma)} B \, d\sigma    (2.10)

Q_k = \int_{t_k}^{t_{k+1}} e^{A(t_{k+1} - \sigma)} B_1 Q B_1^T e^{A^T(t_{k+1} - \sigma)} \, d\sigma    (2.11)

Van Loan [38] gave a fast and easy way to find the matrices F_k and Q_k. His algorithm is to construct another matrix:

\Lambda = \begin{bmatrix} -A & B_1 Q B_1^T \\ 0 & A^T \end{bmatrix}    (2.12)

Then the following identity is used to find F_k and Q_k:

e^{\Lambda(t_{k+1} - t_k)} = \begin{bmatrix} - & F_k^{-1} Q_k \\ 0 & F_k^T \end{bmatrix}    (2.13)

where - means that the term is irrelevant to the present discussion. Hence, F_k is the transpose of the lower-right partition of e^{\Lambda(t_{k+1} - t_k)}, and Q_k is the upper-right partition pre-multiplied by F_k. Similarly, G_k can be found by using the following identity:

e^{\begin{bmatrix} A & B \\ 0 & 0 \end{bmatrix}(t_{k+1} - t_k)} = \begin{bmatrix} F_k & G_k \\ 0 & I \end{bmatrix}    (2.14)

where I is the identity matrix. The matrix exponentials in this work were evaluated using the MATLAB expm function.
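The following sketch (an assumed Python/SciPy transcription; the original work used MATLAB's expm) implements Eqs. 2.12-2.14 as a reusable function:

```python
import numpy as np
from scipy.linalg import expm

def van_loan_discretize(A, B, B1, Q, dt):
    """Discretize x' = Ax + Bu + B1*w (w white with intensity Q) over step dt."""
    n = A.shape[0]
    # Eq. 2.12: block matrix whose exponential encodes Fk and Qk
    Lam = np.block([[-A, B1 @ Q @ B1.T],
                    [np.zeros((n, n)), A.T]])
    M = expm(Lam * dt)                    # Eq. 2.13
    Fk = M[n:, n:].T                      # Fk = transpose of lower-right partition
    Qk = Fk @ M[:n, n:]                   # Qk = Fk times upper-right partition
    # Eq. 2.14: companion exponential yields the discrete input matrix Gk
    m = B.shape[1]
    Phi = np.block([[A, B],
                    [np.zeros((m, n + m))]])
    Gk = expm(Phi * dt)[:n, n:]
    return Fk, Gk, Qk
```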
2.7 Procedure to replicate deck motion

The trajectory of the Stewart platform hardware described in Sec. 2.5 is controlled using a Simulink model provided by Acrome Robotics. The input to the model is the desired position and orientation of the platform at each time step. Hence, to simulate deck motion using the platform, we need a way to generate discrete deck positions using the shaping filter designed in Section 2.4. This is described here.

The output of the designed shaping filter is the deck's acceleration. Hence it is first augmented with a double integrator to get a linear system whose output is the deck's position. This results in the following linear system:

\dot{x} = \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 & 0 & -2.406 \\ 0 & 0 & 0 & 1 & 0 & -2.8944 \\ 0 & 0 & 0 & 0 & 1 & -2.406 \end{bmatrix} x + \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} u, \qquad y = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} x    (2.15)

where u is again a white noise process with intensity K. The value of K controls the amplitude of the deck disturbance, and hence different values can be used to simulate different sea states.

The above linear system is discretized using Van Loan's algorithm, resulting in the following discrete linear system. The discretization time step is chosen as 0.01 s, as the Simulink model runs at 100 Hz.

x_{k+1} = \begin{bmatrix}
1 & 1e{-2} & 8.3e{-13} & 4.15e{-10} & 1.66e{-7} & 4.96e{-5} \\
0 & 1 & 4.15e{-10} & 1.66e{-7} & 4.96e{-5} & 9.88e{-3} \\
0 & 0 & 1 & -1.66e{-7} & -4.96e{-5} & -9.88e{-3} \\
0 & 0 & 1e{-2} & 1 & -1.19e{-4} & -2.38e{-2} \\
0 & 0 & 5e{-5} & 1e{-2} & 1 & -2.87e{-2} \\
0 & 0 & 1.65e{-7} & 4.96e{-5} & 9.88e{-3} & 9.76e{-1}
\end{bmatrix} x_k + w_k

y_k = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} x_k    (2.16)

where w_k is a random vector with a zero-mean Gaussian distribution whose covariance matrix Q_k is found to be:

Q_k = \begin{bmatrix}
4e{-10} & 9.96e{-8} & -9.96e{-8} & -2.40e{-7} & -2.89e{-7} & 1.32e{-5} \\
9.96e{-8} & 2.65e{-5} & -2.65e{-5} & -6.39e{-5} & -7.70e{-5} & 3.95e{-3} \\
-9.96e{-8} & -2.65e{-5} & 2.65e{-5} & 6.39e{-5} & 7.70e{-5} & -3.95e{-3} \\
-2.40e{-7} & -6.39e{-5} & 6.39e{-5} & 1.54e{-4} & 1.86e{-4} & -9.53e{-3} \\
-2.89e{-7} & -7.7e{-5} & 7.7e{-5} & 1.86e{-4} & 2.24e{-4} & -1.15e{-2} \\
1.32e{-5} & 3.95e{-3} & -3.95e{-3} & -9.53e{-3} & -1.15e{-2} & 7.91e{-1}
\end{bmatrix}    (2.17)

The value of the white noise intensity K used in this discretization is chosen as 81 cm²/s³, which bounds the output positions from the discrete system to within the travel range of the platform.

The discrete linear system represented by Eq. 2.16 is used to generate the desired discrete deck positions at each time step, which are fed into the Simulink model provided by Acrome Robotics to control the position of the Stewart platform. The linear system itself is driven by Gaussian white noise with the covariance matrix given by Eq. 2.17. A sample stochastic Stewart platform trajectory generated using this method is shown in Fig. 2.7. The trajectory shown is the platform motion measured using a VICON system. The trajectory is observed to be stochastic and similar to the sway motion of the original data.

Figure 2.7: A sample stochastic deck motion trajectory generated by commanding the Stewart platform through the designed shaping filter, measured using a VICON system
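Putting the pieces together, a minimal sketch of this generation procedure is given below (assumed, not the code used in this work). It re-derives F and Q_k of Eqs. 2.16-2.17 inline with Van Loan's algorithm of Sec. 2.6, then propagates the discrete system with sampled process noise.

```python
import numpy as np
from scipy.linalg import expm

dt, K = 0.01, 81.0                     # 100 Hz step; noise intensity, cm^2/s^3
A = np.array([[0, 1, 0, 0, 0,  0],
              [0, 0, 0, 0, 0,  1],
              [0, 0, 0, 0, 0, -1],
              [0, 0, 1, 0, 0, -2.406],
              [0, 0, 0, 1, 0, -2.8944],
              [0, 0, 0, 0, 1, -2.406]], dtype=float)   # Eq. 2.15
B1 = np.array([[0, 0, 0, 0, 0, 1]], dtype=float).T

# Van Loan discretization (Sec. 2.6) for F of Eq. 2.16 and Qk of Eq. 2.17
n = 6
M = expm(np.block([[-A, B1 * K @ B1.T],
                   [np.zeros((n, n)), A.T]]) * dt)
F = M[n:, n:].T
Qk = F @ M[:n, n:]
Qk = 0.5 * (Qk + Qk.T)                 # symmetrize against round-off

# Propagate x_{k+1} = F x_k + w_k and record the position output y_k
steps = 30_000                         # 300 s of platform motion
x = np.zeros(n)
sway = np.empty(steps)
for k in range(steps):
    x = F @ x + np.random.multivariate_normal(np.zeros(n), Qk)
    sway[k] = x[0]                     # desired position setpoint for the platform
```

Each sway[k] value would then be streamed as a position setpoint to the platform's Simulink interface at 100 Hz.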
2.8 Summary

In this chapter, the Navy-provided dataset was used to construct a shaping filter, which was then used in this research to replicate ship-deck type motion in the lab at a small scale. The motion itself is produced using 6-DOF Stewart platform hardware. The properties of the platform and the method used to excite it into ship-deck type motion have also been discussed in this chapter. Based on this part of the work, the following conclusions are drawn:

1. The Navy-provided dataset showed that the effect of increasing sea state is an increase in deck motion disturbance amplitude; the frequency distribution of the disturbance remains the same.

2. The frequencies of deck motion disturbances for the DDG-51 destroyer, for which the data was provided, lie predominantly between 0.1-0.3 Hz irrespective of sea state.

3. A small 30 cm diameter Stewart platform can be used to replicate the ship-deck motion in a laboratory setting.

4. The power spectrum of ship-deck disturbances derived from the ship-deck motion dataset can be used to construct a shaping filter. The filter with a white noise input can then be used to excite the Stewart platform into stochastic deck motion similar to the original data.

Chapter 3: Fabrication of Quadrotor UAV

This chapter describes the design and fabrication of the VTOL UAV used in this research. It begins with a short overview, and then covers the structure, propulsion system, avionics, sensors, computers, communication protocols, control system, and state estimators in detail.

3.1 Overview

A Vertical Take-Off and Landing (VTOL) quadrotor UAS was fabricated in-house as a representative vertical lift vehicle to track and land on the ship-deck represented by the Stewart platform. The quadrotor configuration was specifically selected because of the inherent simplicity of its flight dynamics and control allocation, so that the principal emphasis could be on the avionics and vision-based algorithms required for ship-deck landing, and not on system identification of the plant. Commercial-Off-The-Shelf (COTS) quadrotors were unsuitable for this research because of the numerous modifications required to integrate the many special-purpose sensors and computing platforms, while not being over-burdened with unnecessary features. This flexibility was important particularly as components were often replaced and upgraded as the research progressed. An additional consideration was the use of secure-source, in-house designed components and software as much as possible. Modern additive manufacturing technology with metals and carbon fiber made this a feasible endeavor, at least at the small UAV scale studied in this research. Hence, a quadrotor was designed and fabricated from the ground up, with a scale about the same as that of the Stewart platform and with just the right sensors and processing units required for ship-deck landing. The final vehicle has dimensions of about 280 mm × 280 mm (11 in. × 11 in.), a gross take-off weight of about 1700 grams (3.75 lb), and a maximum thrust-to-weight ratio of about 2. The vehicle is described in detail in the next few sections.

3.2 Vehicle Structure

The primary load-carrying component of the vehicle is an H-shape structure, as shown in Figure 3.1. The structure is made from 10 mm hollow square aluminium-6063 T5 tubes of 1 mm wall thickness. Al-6063 T5 was chosen for good strength and machinability at a low cost. The tubes, with a flexural modulus of 112.84 MPa, give a factor of safety of at least 64 for material yield under steady flight conditions. The high factor of safety is essential for this research vehicle, as it reduces the probability of structural failure during high-impact landings.

Figure 3.1: Quadrotor skeleton made from square aluminum tubes

3D printed parts are used to secure components to the aluminum frame. Motor mounts (Figure 3.2), used to secure the motors as well as transfer loads from the motor base to the aluminum structure, are printed from Onyx, a material made of micro carbon fibers in a nylon matrix. These are also reinforced with 1 mm thick unidirectional 3D printed carbon fiber sheets at the top. This gives them an elastic modulus and strength similar to Al-6061 T6. 3D printed plates made from ABS plastic are used to secure avionics components to the quadrotor frame (Figure 3.2). The slits and cutouts of the plates are tailored to the attachments that will secure the avionics later. The avionics components are soft-mounted with rubber dampers to reduce vibration transfer to the components.

Figure 3.2: Quadrotor frame ready to be mounted with propulsion and avionics components

3.3 Propulsion System

A pair of clockwise and a pair of counter-clockwise rotating 3-bladed, 6-inch fixed-pitch rigid rotors are used for propulsion. The rotors are driven by iFlight Xing-E 2450KV brushless direct-drive electric motors (Fig. 3.3), each with 12 electromagnets and 14 N52SH neodymium permanent magnets. The motors are controlled by electronic speed controllers (ESCs) (Fig. 3.4) running BLHeli32 firmware. Communication with the ESCs is established using the DSHOT protocol. The benefit of this protocol over conventional analog pulse width modulation (PWM) is that it is digital, and hence is faster and more robust to noise.
This leads to more precise and faster changes of motor RPM than what is possible with PWM signals. A brief description of the protocol is presented in Section 3.7.

Figure 3.3: iFlight 2207 2750Kv motor used on the vehicle; configuration: 12N14P

Figure 3.4: Electronic Speed Controller running BLHeli32 firmware

Figure 3.5: Experimental setup used to measure thrust and torque in hover

A simple experiment is used to characterize the propulsion system. The experimental setup is shown in Fig. 3.5. The motor and the rotor are mounted in series to an RTS-50 torque cell and a single-axis MDB-2.5 load cell from Transducer Techniques. The entire setup is held in place using a vice attached to a bench. The motor is instrumented with a hall-effect switch and a magnet for RPM measurement. A second RPM measurement is taken using a laser tachometer and a retro-reflective marker placed on the motor. The experimental measurements, along with quadratic curve fits, are shown in Fig. 3.6. From the curve fits, the thrust and torque coefficients (C_T and C_Q) of the rotor were calculated to be:

C_T = \frac{T}{\rho A (\Omega R)^2} = \frac{1}{\rho \pi R^4} \frac{T}{\Omega^2} = 0.0254    (3.1)

C_Q = \frac{Q}{\rho A (\Omega R)^2 R} = \frac{1}{\rho \pi R^5} \frac{Q}{\Omega^2} = 0.0042    (3.2)

where ρ = 1.23 kg/m³. This leads to a Figure of Merit of 0.68 and, with solidity σ = 0.095, a blade loading C_T/σ of 0.266.

Figure 3.6: Experimentally measured variation of thrust and torque with RPM: (a) Thrust vs RPM, (b) Torque vs RPM

For the above experiment, power was supplied to the motor at a constant 11.8 V through a DC power supply. The current consumed was measured through the power supply. From these measurements, the power consumed by the motor at different RPMs was calculated and is shown in Fig. 3.7. The curve fit has the following equation:

P_{supply} = 3.9 \times 10^{-8} \Omega^3 + 2.428 \times 10^{-11} \Omega^4 \ \text{Watts}    (3.3)

where the first term corresponds to the mechanical power provided to the rotor and the second term represents resistive power loss. From the second term, the motor constant is calculated as:

K_m = \frac{Q}{\sqrt{P_{loss}}} = \frac{\rho \pi R^5 C_Q}{\sqrt{2.428 \times 10^{-11}}} = 0.0085 \ \text{N-m}/\sqrt{\text{Watt}}    (3.4)

This motor constant is related to the torque constant K_T by the well-known expression K_m = K_T/\sqrt{R}, where R is the motor's resistance. Since the first term in Eq. 3.3 represents the power provided to the rotor, it can be used to evaluate the rotor's power coefficient as:

C_P = \frac{P_{rotor}}{\rho A (\Omega R)^3} = 0.0039    (3.5)

Now, C_P = C_Q identically for a rotor, since the rotor power is P = ΩQ. Our calculated values of C_P and C_Q are indeed close, within 7% of each other. This serves as a sanity check, and also shows that the torque cell in the experiment is redundant: both the motor constant and the rotor torque constant can be found from power vs. RPM measurements alone.

Figure 3.7: Power vs RPM

Figure 3.8: Motor efficiency vs RPM

Figure 3.8 shows the efficiency of the motor against its RPM, where efficiency is defined as η = P_rotor/P_supply. From the plot, it is seen that the motor operates near its peak efficiency for the quadrotor in hover. This is by design, as the aircraft will operate in near-hover conditions throughout its operations.
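The coefficient reduction above is straightforward to script. The sketch below (assumed, not the original analysis code) recovers C_T, C_Q, and the Figure of Merit from bench data; the data arrays here are synthesized from the fitted constants k_T and k_Q reported later in Sec. 3.9.3, so the example runs.

```python
import numpy as np

rho, R = 1.23, 0.076                 # air density (kg/m^3); 6-in rotor radius (m)

# Placeholder bench data, synthesized from the reported fits; real data would
# come from the load cell, torque cell, and tachometer.
omega = np.linspace(500.0, 1300.0, 9)        # rotor speed, rad/s
thrust = 3.315e-6 * omega**2                 # N
torque = 4.2e-8 * omega**2                   # N-m

# Least-squares quadratic fits through the origin: T = kT*w^2, Q = kQ*w^2
kT = np.sum(thrust * omega**2) / np.sum(omega**4)
kQ = np.sum(torque * omega**2) / np.sum(omega**4)

CT = kT / (rho * np.pi * R**4)               # Eq. 3.1
CQ = kQ / (rho * np.pi * R**5)               # Eq. 3.2
FM = CT**1.5 / (np.sqrt(2.0) * CQ)           # Figure of Merit, ~0.68 here
print(f"CT={CT:.4f}, CQ={CQ:.4f}, FM={FM:.2f}")
```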
3.4 Avionics

Figure 3.9 shows the complete avionics architecture of the quadrotor. The avionics subsystem is designed to be simple, lightweight, programmable, and robust to noise and external disturbances. It primarily consists of two computers and three sensors. The computers are: 1) a 600 MHz Arm Cortex-M7 core based flight controller, and 2) a 1.92 GHz Intel Atom based flight computer. The sensors are: 1) an Inertial Measurement Unit (IMU), 2) a Visual Inertial Odometry (VIO) sensor, and 3) a monocular camera. These components communicate with each other through 4 communication protocols: 1) UART, 2) I2C, 3) DSHOT, and 4) USB. The avionics is powered through a single-cell 1 Ah battery, separate from the one used for propulsion; this isolates the avionics components from electrical transients in the propulsion system. Apart from the above components, since this is a research aircraft, a 2.4 GHz radio receiver is used for emergency manual control of the quadrotor. These avionics components are briefly described next.

Figure 3.9: Aircraft's avionics architecture; DShot, UART, and I2C are three different digital communication protocols used on the aircraft

3.5 Sensors

The vehicle carries a set of 3 sensors: 2 for autonomous flight and 1 for ship-deck detection. These are:

3.5.1 Inertial Measurement Unit (IMU):

The MPU-6050 by Invensense is used as the IMU on the vehicle. The IMU consists of 3 orthogonally placed micro-electromechanical (MEMS) accelerometers and 3 orthogonally placed MEMS gyroscopes. The gyroscopes measure the angular velocity of the vehicle, while the accelerometers measure the linear acceleration combined with the gravity vector.

3.5.2 Visual Inertial Odometry (VIO) sensor:

The Realsense T265 by Intel is used as the visual-inertial sensor onboard the quadrotor. It measures the position and velocity of the aircraft and is used in place of GPS for indoor flights. It is essentially a stereo camera that runs Visual Odometry software fused with IMU measurements. The software is proprietary to Intel.

3.5.3 Monocular Camera:

A monocular global shutter camera by e-consystems is used for detection and tracking of the ship-deck. It is the only sensor used to detect the ship-deck and estimate its position and orientation. The camera and the related vision system are explained in detail in the next chapter.

Figure 3.10: Sensors on the quadrotor vehicle: (a) Inertial Measurement Unit (IMU), (b) Visual Inertial Odometry (VIO) sensor, (c) Monocular global shutter camera

3.6 Computers

There are two computers onboard the vehicle. These are:

3.6.1 Flight controller:

A flight controller based on the Teensy 4.0 micro-controller board is used on the vehicle. It contains an NXP MIMXRT1062 processor with a 32-bit ARM Cortex-M7 CPU core and can execute instructions at 600 MHz. Its primary purpose is real-time flight stabilization/attitude control of the vehicle. It runs a single process that, in sequence, estimates the vehicle's attitude and angular velocity using IMU data, receives attitude setpoints from the flight computer, evaluates the inner loop control, and then finally sends motor commands to the ESCs. The inner loop calculates the four rotor RPMs needed to maintain the vehicle's attitude. This sequence is repeated every millisecond, resulting in a loop rate of 1000 Hz.
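The structure of that 1 kHz loop is sketched below, in Python for readability (the actual firmware is C on the Cortex-M7); every function here is an illustrative stub standing in for the real driver or control code.

```python
import time

def read_imu():                                      # I2C read from the MPU-6050 (stub)
    return (0.0, 0.0, 0.0), (0.0, 0.0, 9.81)         # gyro (rad/s), accel (m/s^2)

def estimate_attitude(gyro, accel):                  # Kalman-filtered states (stub)
    return (0.0, 0.0, 0.0), gyro                     # attitude, body rates

def receive_setpoints():                             # UART from flight computer (stub)
    return (0.0, 0.0, 0.0, 0.5)                      # roll, pitch, yaw, thrust

def inner_loop_control(att, rates, sp):              # cascaded attitude/rate PI (stub)
    return [0.5, 0.5, 0.5, 0.5]                      # per-motor commands

def send_dshot(cmds):                                # DSHOT packets to the ESCs (stub)
    pass

LOOP_DT = 0.001                                      # 1 ms frame -> 1000 Hz loop rate

for _ in range(1000):                                # one second of loop iterations
    t0 = time.monotonic()
    gyro, accel = read_imu()
    att, rates = estimate_attitude(gyro, accel)
    sp = receive_setpoints()
    send_dshot(inner_loop_control(att, rates, sp))
    while time.monotonic() - t0 < LOOP_DT:           # hold the fixed loop rate
        pass
```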
3.6.2 Flight Computer:

A flight computer based on the Up-Board Single Board Computer (SBC) is used on the vehicle. It contains an x64-architecture quad-core Intel Atom processor and runs a non-real-time version of the Debian Linux operating system (OS). Its primary purpose is localization and navigation of the vehicle through the outer loop control, which calculates attitude setpoints from VIO and camera measurements. Its secondary purpose is data logging of flight variables for debugging and post-processing. Different processes running on top of the OS are used to execute different tasks in parallel. These tasks are: reading and processing the data from the VIO and camera sensors, executing outer loop control commands, managing communication with the flight controller, and recording flight data to non-volatile memory for post-processing. The communication between the independent processes is carried out using the asynchronous publish/subscribe messaging system of the Robot Operating System (ROS) package. It is to be noted that ROS is used here only as an Inter-Process Communication (IPC) mechanism.

Figure 3.11: Primary computing elements on the quadrotor vehicle: (a) Teensy flight controller, CPU core: ARM Cortex-M7; (b) Up-Board flight computer, CPU core: x64 Intel Atom

3.7 Communication Protocols

Three types of communication protocols are used onboard the aircraft. Low-latency UART and I2C are used for communication of time-sensitive data to the micro-controller. High-bandwidth USB, on the other hand, is used by the flight computer to receive camera frames and VIO data. For communication with the ESCs, a special-purpose UART variant known as DShot is used. These protocols are briefly described below.

3.7.1 UART:

Universal Asynchronous Receiver and Transmitter (UART) is a simple method of bi-directional (full-duplex) serial communication between two devices. The data is transmitted in frames; each frame is 8 bits of data prepended by a start bit and appended by an optional parity bit and two stop bits. The rate of transmission of each bit is pre-agreed between the two devices. The protocol is commonly implemented as an Integrated Circuit (IC), which offloads CPU (Central Processing Unit) cycles for other important tasks.

In this research, a high-level protocol based on UART was written for communication between the flight controller and the flight computer. The messages from the flight controller primarily log inner loop flight variables into the non-volatile memory of the flight computer. The messages from the flight computer, on the other hand, contain outer loop control commands, i.e., attitude setpoints for the flight controller to execute. Each data logging message from the flight controller to the flight computer starts with "Hi,". This is followed by 3 character bytes specifying which vehicle state or parameter is being communicated, which is then followed by bytes containing the value of that state. The message ends with a newline character. The setpoint command message from the flight computer to the flight controller also starts with "Hi,", and is followed by the values of thrust and the desired attitude setpoints from the outer loop control. The message ends with a checksum and a newline character. The checksum is important as it helps detect communication errors in the flight-critical data being sent from the flight computer to the controller. Each message in either direction is evaluated against its respective format when received, and is discarded if the format does not match or if the checksum is invalid. The UART message encoders and decoders on both sides were written in C, bringing the round-trip communication delay down to 440 microseconds. The 2.4 GHz radio receiver used for emergency manual control also sends its control commands to the flight controller using UART; a decoder for this receiver was also written as part of this research, using the message format provided by the vendor.
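To illustrate the message format, the sketch below (in Python for brevity; the flight code is C) frames and validates a setpoint command. The field encoding and the single-byte XOR checksum are hypothetical choices, since the exact byte layout and checksum algorithm are not detailed here.

```python
import struct

def xor_checksum(payload: bytes) -> int:
    csum = 0
    for b in payload:
        csum ^= b                        # hypothetical XOR checksum
    return csum

def encode_setpoint(thrust, roll, pitch, yaw):
    """Frame an outer-loop command: 'Hi,' + four floats + checksum + newline."""
    payload = b"Hi," + struct.pack("<4f", thrust, roll, pitch, yaw)
    return payload + bytes([xor_checksum(payload)]) + b"\n"

def decode_setpoint(frame):
    """Check the frame format and checksum; return None on any mismatch."""
    if not (frame.startswith(b"Hi,") and frame.endswith(b"\n")):
        return None                      # discard malformed frames
    payload, csum = frame[:-2], frame[-2]
    if xor_checksum(payload) != csum:
        return None                      # discard frames failing the checksum
    return struct.unpack("<4f", payload[3:])

frame = encode_setpoint(0.5, 0.0, 0.02, 0.0)
assert decode_setpoint(frame) is not None
```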
3.7.2 I2C:

I2C stands for Inter-Integrated Circuit. Like UART, it is a protocol for bi-directional, but half-duplex, serial communication. Unlike UART, it is synchronous: a clock signal is transmitted along with the data to tell the receiver the rate of data transmission. I2C uses a parent-child communication framework, in which one device is designated as the parent and completely controls the start and stop of communication with multiple child devices. Each I2C transaction involves the following steps:

1. The parent sends a start signal to the child devices.
2. The parent sends the address of the device it wants to communicate with, along with its intent to either read from or write to the device.
3. The child device whose address matches sends an acknowledgement signal back to the parent device.
4. The parent sends or receives data in 8-bit data frames. Each frame is followed by an acknowledgement signal from the receiver to the sender.
5. The parent sends a stop signal to end the transmission.

The I2C protocol is more robust to noise and electromagnetic interference and is also significantly faster than UART. Its primary drawbacks are that communication can only occur in one direction at a time and that the parent device has total control over the flow of communication. Hence it is suitable for communication between computers and sensors, where the computer acts as the parent. In this research, I2C was used to transmit gyroscope and accelerometer data from the IMU to the micro-controller for flight stabilization.

3.7.3 DShot:

Digital Shot is a special-purpose UART-type communication protocol made for fast digital communication with Electronic Speed Controllers (ESCs). It is used to send motor RPM commands to the ESCs and receive telemetry data in return. Commands from the flight controller to the ESC are sent in 16-bit data packets. The first 11 bits of a packet contain the commanded RPM value for the motor, or special commands such as reversing the motor spin direction. The next bit requests telemetry if set to 1. The last 4 bits are used for communication error checking through a Cyclic Redundancy Check (CRC). In this research, an open-source DShot library for the Teensy micro-controller was modified for use with the in-house flight controller code.
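The 16-bit packet layout just described can be illustrated in a few lines; the XOR-fold checksum below is the one commonly used by standard (non-inverted) DSHOT implementations.

```python
def dshot_packet(throttle: int, request_telemetry: bool = False) -> int:
    """Build a 16-bit DSHOT packet: 11-bit value, 1 telemetry bit, 4-bit CRC."""
    assert 0 <= throttle < 2048                          # value must fit in 11 bits
    value = (throttle << 1) | int(request_telemetry)     # 12-bit field
    crc = (value ^ (value >> 4) ^ (value >> 8)) & 0x0F   # 4-bit XOR-fold checksum
    return (value << 4) | crc

packet = dshot_packet(1046)
print(f"{packet:016b}")                                  # the 16 bits shipped to the ESC
```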
3.7.4 USB:

Universal Serial Bus (USB) is a standardized modern serial communication protocol developed for communication between computers and peripheral devices. It has a tree network topology in which the host computer acts as the root device and talks to the peripheral devices over different communication branches. Similar to I2C, the USB host has total control over the communication flow. USB by design is fast and reliable, as it has built-in error-correcting mechanisms. Its primary drawback is large communication latency, which makes it unsuitable for applications where low-latency real-time communication is needed. In this work, USB was used to send monocular camera images and VIO data to the flight computer. It should be noted that the job of the flight computer is guidance and navigation, which are much more tolerant of latency than the job of the flight controller, which is vehicle stabilization.

3.8 Assembled Aircraft

The assembled aircraft ready for flight is shown in Fig. 3.12, with its properties listed in Table 3.1. The center of gravity and moments of inertia of the aircraft are calculated by measuring the weight and location of each component. For this purpose, all components are assumed to be point masses, except for the flight battery, the aluminum beams in the frame, and the ABS plates, for which the moments of inertia were evaluated using standard formulae assuming uniform internal mass distribution.

Figure 3.12: The in-house fabricated quadrotor aircraft

Table 3.1: Aircraft Properties

Parameter                    Value
Gross take-off mass          1.7 kg
Center of gravity            (0.015 m, 0 m, 0 m)
Ixx                          5.8 × 10⁻³ kg m²
Iyy                          1.8 × 10⁻² kg m²
Izz                          2.5 × 10⁻² kg m²
Rotor diameter               0.152 m
Mean rotor chord             0.015 m
Hover tip speed              99 m/s
Hover tip Mach number        0.29
Hover tip Reynolds number    100,914
Rotor spacing                0.28 m
Landing gear footprint       0.10 m × 0.062 m
Max thrust-to-weight ratio   1.96
Flight time in hover         5 min

Figure 3.13: Body axes and direction of rotation of rotors; Z is positive down

3.9 Flight Control System

The flight controller for the vehicle is designed as a nested PID (Proportional-Integral-Derivative) type linear controller for the vehicle dynamics linearized around hover. Time-scale separation of the attitude and translational dynamics is assumed in the control design and is ensured through control gain selection. The complete control block diagram is shown in Fig. 3.14.

Figure 3.14: Control architecture for vision-based autonomous tracking and landing on a stochastically moving platform

3.9.1 Attitude Control

Attitude control is performed through two cascaded PI controllers, operating on attitude and attitude rates. The equations for the rate PI control are:

L_d = K_{pL}(p_d - p) + K_{iL} \int_0^t (p_d - p) \, dt
M_d = K_{pM}(q_d - q) + K_{iM} \int_0^t (q_d - q) \, dt
N_d = K_{pN}(r_d - r) + K_{iN} \int_0^t (r_d - r) \, dt    (3.6)

where (L_d, M_d, N_d) are the desired moments, while (p_d, q_d, r_d) and (p, q, r) are the desired and measured angular velocities respectively. Similarly, the equations for the attitude PI control are:

p_d = K_{p\phi}(\phi_d - \phi) + K_{i\phi} \int_0^t (\phi_d - \phi) \, dt
q_d = K_{p\theta}(\theta_d - \theta) + K_{i\theta} \int_0^t (\theta_d - \theta) \, dt
r_d = K_{p\psi}(\psi_d - \psi) + K_{i\psi} \int_0^t (\psi_d - \psi) \, dt    (3.7)

where (φ_d, θ_d, ψ_d) and (φ, θ, ψ) are the desired and measured attitude angles respectively, and (p_d, q_d, r_d) are the desired angular rates used in the rate PI control (Eq. 3.6).
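A minimal sketch of this cascade for the roll axis is shown below (assumed, not the flight code; pitch and yaw are identical in form). The gains correspond to the K_pφ, K_iφ, K_pL, K_iL values selected later in Table 3.2.

```python
class PI:
    """Discrete PI controller with a recursive-summation integral term."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integ = 0.0

    def update(self, desired, measured):
        err = desired - measured
        self.integ += err * self.dt
        return self.kp * err + self.ki * self.integ

dt = 0.001                                  # 1 kHz inner loop
att_pi  = PI(kp=7.0, ki=0.8, dt=dt)         # attitude loop: Kp_phi, Ki_phi
rate_pi = PI(kp=0.2, ki=0.2, dt=dt)         # rate loop: KpL, KiL

def roll_control(phi_d, phi, p):
    p_d = att_pi.update(phi_d, phi)         # Eq. 3.7: attitude error -> rate setpoint
    L_d = rate_pi.update(p_d, p)            # Eq. 3.6: rate error -> desired moment
    return L_d

print(roll_control(phi_d=0.1, phi=0.0, p=0.0))
```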
3.9.2 Translational Control

Translational control for the quadrotor to track the ship-deck position is performed through a single linear PID-type control. The control is designed such that only the X and Y deck motions (surge and sway) are tracked, while a constant altitude is maintained. The equations for translational control are:

\ddot{x}_d = K_{px} x_t + K_{dx}(0 - \dot{x}) + K_{d1x} \dot{x}_t + K_{ix} \int_0^t x_t \, dt, \qquad \theta_d = -\ddot{x}_d / g    (3.8)

\ddot{y}_d = K_{py} y_t + K_{dy}(0 - \dot{y}) + K_{d1y} \dot{y}_t + K_{iy} \int_0^t y_t \, dt, \qquad \phi_d = \ddot{y}_d / g    (3.9)

\ddot{z}_d = K_{pz}(z_d - z) + K_{dz}(0 - \dot{z}) + K_{iz} \int_0^t (z_d - z) \, dt, \qquad T_d = m(g - \ddot{z}_d)    (3.10)

where (x_t, y_t, z_t) is the quadrotor-relative ship-deck position and (ẋ_t, ẏ_t, ż_t) is the quadrotor-relative ship-deck velocity, both obtained using computer vision through the monocular camera; (x, y, z) is the quadrotor inertial position and (ẋ, ẏ, ż) is the quadrotor velocity in body axes obtained through the VIO sensor. The symbol g is the acceleration due to gravity, m is the mass of the quadrotor, and h is the height above the deck at which to hover the quadrotor. T_d, φ_d, θ_d are the desired thrust, roll, and pitch used in the attitude control. It should be noted that the translational control equations make use of small-angle assumptions in roll and pitch of the vehicle near hover.

The above translational control equations can also be used for waypoint navigation of the quadrotor in the absence of a ship-deck. The relative error in this case is x_t = x_d - x, where x_d is the desired waypoint and x is the quadrotor position obtained through the VIO sensor. Another point to note is that the integral terms in all the above controller equations are implemented using recursive summation in the actual controller:

\int_0^t f \, dt = f(t) \Delta t + \int_0^{t - \Delta t} f \, dt

Saturation of the integral terms, and re-initialization to zero every time the aircraft takes off after landing, are the two techniques used to prevent integral windup in the controller.

3.9.3 Control mixer

The control mixer converts the desired thrust and the desired roll, pitch, and yaw moments (T_d, L_d, M_d, N_d) to motor speeds. The equations for the control mixer are based on the following actuation model, which assumes the thrust and torque from each rotor to be proportional to the square of its RPM, valid for aircraft operations near hover:

\begin{bmatrix} T_d \\ L_d \\ M_d \\ N_d \end{bmatrix} = \begin{bmatrix} k_T & k_T & k_T & k_T \\ l k_T & l k_T & -l k_T & -l k_T \\ l k_T & -l k_T & -l k_T & l k_T \\ -k_Q & k_Q & -k_Q & k_Q \end{bmatrix} \begin{bmatrix} \Omega_1^2 \\ \Omega_2^2 \\ \Omega_3^2 \\ \Omega_4^2 \end{bmatrix}    (3.11)

Here l = 0.14 m is half of the distance between any two adjacent motors, while k_T and k_Q are given as:

k_T = \rho \pi R^4 C_T = 3.315 \times 10^{-6} \ \text{N s}^2
k_Q = \rho \pi R^5 C_Q = 4.2 \times 10^{-8} \ \text{N m s}^2

The inversion of Eq. 3.11 produces the equation for the aircraft mixer.

The set of selected control gains for the quadrotor is given in Tables 3.2 and 3.3. The inner loop was tuned to get the fastest settling, without any overshoot and without feeding sensor noise into the controls. Similarly, the outer loop was tuned to get the maximum tracking bandwidth without feeding in sensor noise. The gains were arrived at through multiple subsequent iterations of linear system analysis and flight tests.

Table 3.2: Inner loop control gains

KpL   KiL   KpM   KiM   KpN   KiN   Kpφ   Kiφ   Kpθ   Kiθ   Kpψ   Kiψ
0.2   0.2   0.3   0.2   0.7   0.2   7     0.8   7     0.8   5     1

Table 3.3: Outer loop control gains

Kpx   Kdx   Kd1x   Kix   Kpy   Kdy   Kd1y   Kiy   Kpz   Kdz   Kd1z   Kiz
6.0   6.0   0.5    2.0   6.0   6.0   0.5    2.0   5.0   3.2   0.3    1.0

Figure 3.15 shows the simulated inner and outer loop responses of the quadrotor to step inputs. It is observed that the settling time of the outer loop is more than an order of magnitude larger than that of the inner loop. This confirms that the selected gains maintain a timescale separation of the vehicle attitude and position dynamics, an assumption of the presented control design.

Figure 3.15: Inner (a) and outer (b) loop step responses of the quadrotor along the X-axis; responses along the other axes are similar. The response times verify the time-scale separation of attitude and translational dynamics assumed in the control design.

Figure 3.16 shows the effective outer loop tracking control system of the quadrotor, with the closed-loop transfer function given as

T(s) = \frac{s^2 K_{d1} + s K_p + K_i}{s^3 + s^2 (K_{d1} + K_d) + s K_p + K_i}    (3.12)

Figure 3.16: Effective outer loop control diagram

Figure 3.17: Bode plot of the closed outer loop control system T(s) for the X and Y axes

Figure 3.17 shows the Bode plot of T(s). As observed, the tracking bandwidth is 0.15 Hz. The bandwidth was deliberately restricted, as experiments showed that a further increase would feed significant noise back into the control. The system has infinite gain margin, a phase margin of 66.7°, and a delay margin of 1.15 s.
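Since the actuation matrix of Eq. 3.11 is constant, it can be inverted once offline to obtain the mixer. A sketch (assumed, not the flight code) is shown below, with negative squared-speed solutions clipped before taking the square root.

```python
import numpy as np

kT, kQ, l = 3.315e-6, 4.2e-8, 0.14           # constants from Sec. 3.9.3
M = np.array([[ kT,    kT,    kT,    kT  ],
              [ l*kT,  l*kT, -l*kT, -l*kT],
              [ l*kT, -l*kT, -l*kT,  l*kT],
              [-kQ,    kQ,   -kQ,    kQ  ]])
M_inv = np.linalg.inv(M)                     # mixer: computed once, reused every loop

def mixer(Td, Ld, Md, Nd):
    omega_sq = M_inv @ np.array([Td, Ld, Md, Nd])
    return np.sqrt(np.clip(omega_sq, 0.0, None))   # rotor speeds, rad/s

print(mixer(Td=1.7 * 9.81, Ld=0.0, Md=0.0, Nd=0.0))  # hover: four equal speeds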
Figure 3.18: Large noise in measurements by the accelerometers

Figure 3.19: Fourier transform of the measurements shows that the large noise is due to 1/rev vibration from the rotors

3.10 Filtering and State-estimation

One of the important problems associated with control design is the reliable estimation of vehicle states from a set of sensor measurements. This is a non-trivial task, as sensor measurements are always corrupted by unwanted noise from the environment. For rotary-wing vehicles this is exacerbated by vibrations from the rotors, as shown in Figures 3.18 and 3.19. The figures show a high 1/rev vibration from the rotors adversely affecting the vehicle attitude measurement by the MEMS (Micro-Electro-Mechanical System) accelerometers. In this work, low-pass filters and Kalman filters have been used to filter out such noise and produce reliable estimates of the vehicle states from sensor measurements.

3.11 Kalman Filters

The Kalman filter, also known as the Linear Quadratic Estimator (LQE), is an algorithm that produces optimal estimates of the vehicle's states using known models of the state dynamics and sensor measurements. The models for the three Kalman filters used in the control system (Figure 3.14) are described below.

3.11.1 For angular velocity:

The models for the Kalman filter of the roll rate are presented here; they are the same for the other angular velocity components.

Process model:

\begin{bmatrix} p_{k+1} \\ \dot{p}_{k+1} \end{bmatrix} = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix} \begin{bmatrix} p_k \\ \dot{p}_k \end{bmatrix} + \begin{bmatrix} \frac{1}{2}\Delta t^2 \\ \Delta t \end{bmatrix} w_{p_k}    (3.13)

where w_{p_k} is the process disturbance, assumed to act on the second derivative of the roll rate.

Measurement model:

y_{p_{k+1}} = p_{k+1} + v_{p_{k+1}}    (3.14)

where y_{p_{k+1}} is the measurement from the gyroscope aligned along the X-axis of the aircraft and v_{p_{k+1}} is the measurement noise.

It should be noted that an independent Kalman filter design for the angular velocities along the three axes assumes no cross-coupling in attitude dynamics between the axes, which is strictly true only if the off-axis terms in the moment of inertia matrix of the vehicle are zero. Typically for quadrotors these terms are small because of vehicle symmetry.
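A minimal sketch of this filter (assumed, not the flight code) is given below; the process and measurement noise intensities are illustrative tuning values, not those used on the vehicle.

```python
import numpy as np

dt = 0.001
F = np.array([[1.0, dt], [0.0, 1.0]])        # process model of Eq. 3.13
G = np.array([[0.5 * dt**2], [dt]])          # disturbance input mapping
H = np.array([[1.0, 0.0]])                   # gyro measures p directly (Eq. 3.14)
Q = G @ G.T * 50.0**2                        # process noise covariance (assumed)
R = np.array([[0.02**2]])                    # gyro noise variance (assumed)

x = np.zeros((2, 1))                         # state estimate: [p, pdot]
P = np.eye(2)                                # estimate covariance

def kf_step(x, P, y):
    # Predict through the process model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the gyro measurement y
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (y - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = kf_step(x, P, np.array([[0.05]]))     # one gyro sample, rad/s
```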
3.11.2 For attitude

The Kalman filter for attitude fuses the data from the gyroscopes with that from the accelerometers. The gyroscope data, through the angular velocity estimates, are used as inputs in the process model and are integrated to produce attitude estimates. These are then corrected with direct attitude measurements from the accelerometers. This method of sensor fusion produces much better attitude estimates than using either sensor alone.

Process model:

\phi_{k+1} = \phi_k + (\dot{\phi}_k + w_{\phi_k}) \Delta t
\theta_{k+1} = \theta_k + (\dot{\theta}_k + w_{\theta_k}) \Delta t
\psi_{k+1} = \psi_k + (\dot{\psi}_k + w_{\psi_k}) \Delta t    (3.15)

where (w_{\phi_k}, w_{\theta_k}, w_{\psi_k}) are the process disturbances and (\dot{\phi}_k, \dot{\theta}_k, \dot{\psi}_k) are the Euler angle rates, obtained from the angular velocity estimates (p_k, q_k, r_k) through the following transformation:

\begin{bmatrix} \dot{\phi}_k \\ \dot{\theta}_k \\ \dot{\psi}_k \end{bmatrix} = \begin{bmatrix} 1 & \sin\phi_k \tan\theta_k & \cos\phi_k \tan\theta_k \\ 0 & \cos\phi_k & -\sin\phi_k \\ 0 & \sin\phi_k \sec\theta_k & \cos\phi_k \sec\theta_k \end{bmatrix} \begin{bmatrix} p_k \\ q_k \\ r_k \end{bmatrix}    (3.16)

Measurement model:

y_{\phi_{k+1}} = \phi_{k+1} + v_{\phi_{k+1}}
y_{\theta_{k+1}} = \theta_{k+1} + v_{\theta_{k+1}}    (3.17)

where y_{\phi_{k+1}} and y_{\theta_{k+1}} are pseudo-measurements obtained from the true accelerometer measurements using the following equations:

y_{\phi_{k+1}} = \tan^{-1}(a_y / a_z)
y_{\theta_{k+1}} = \tan^{-1}\left(-a_x / \sqrt{a_y^2 + a_z^2}\right)    (3.18)

where (a_x, a_y, a_z) are the accelerometer measurements along the three axes. It should be noted that the above measurement model assumes the vehicle accelerations in flight to be small compared to the acceleration due to gravity (g). Also, the measurement model does not include a model for yaw measurements, as accelerometer readings are unaffected by the vehicle's yaw. Hence, the Kalman filter only predicts yaw through the integration of the yaw rate in the process model, and as such the yaw estimate drifts with time.

3.11.2.1 For ship-deck position:

The relative ship-deck position measured through the vision system is filtered through a Kalman filter to obtain both relative position and velocity. The models for this Kalman filter are:

Process model: The process model is based on the relative translational dynamics of the quadrotor and ship-deck:

\begin{bmatrix} x_{k+1} \\ \dot{x}_{k+1} \end{bmatrix} = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_k \\ \dot{x}_k \end{bmatrix} + \begin{bmatrix} \frac{1}{2}\Delta t^2 \\ \Delta t \end{bmatrix} (\ddot{x}_k + \delta\ddot{x}_k) + w_{x_k}    (3.19)

\begin{bmatrix} y_{k+1} \\ \dot{y}_{k+1} \end{bmatrix} = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix} \begin{bmatrix} y_k \\ \dot{y}_k \end{bmatrix} + \begin{bmatrix} \frac{1}{2}\Delta t^2 \\ \Delta t \end{bmatrix} (\ddot{y}_k + \delta\ddot{y}_k) + w_{y_k}    (3.20)

\begin{bmatrix} z_{k+1} \\ \dot{z}_{k+1} \end{bmatrix} = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix} \begin{bmatrix} z_k \\ \dot{z}_k \end{bmatrix} + \begin{bmatrix} \frac{1}{2}\Delta t^2 \\ \Delta t \end{bmatrix} (\ddot{z}_k + \delta\ddot{z}_k) + w_{z_k}    (3.21)

where (x_k, y_k, z_k) is the ship-deck-relative quadrotor position measured through vision. The accelerations (\ddot{x}_k, \ddot{y}_k, \ddot{z}_k) in the process model are the negatives of the commanded quadrotor accelerations, assuming the nominal deck acceleration to be zero. (\delta\ddot{x}_k, \delta\ddot{y}_k, \delta\ddot{z}_k) are external disturbances on the quadrotor, and w_{x_k}, w_{y_k}, w_{z_k} are random vectors representing the ship-deck disturbance by the sea waves. The quadrotor accelerations are obtained from the outer loop PID control inputs as:

\ddot{x}_k = g \theta_{d_k}, \qquad \ddot{y}_k = -g \phi_{d_k}, \qquad \ddot{z}_k = T_{d_k}/m - g    (3.22)

where (T_{d_k}, \phi_{d_k}, \theta_{d_k}) are the outer loop controls from the previous time step.

Measurement model: The measurements for the translational dynamics are the quadrotor-relative ship-deck positions as measured through the monocular camera:

y_{x_{k+1}} = x_{k+1} + v_{x_{k+1}}    (3.23)
y_{y_{k+1}} = y_{k+1} + v_{y_{k+1}}    (3.24)
y_{z_{k+1}} = z_{k+1} + v_{z_{k+1}}    (3.25)

3.12 Low Pass Filter

The Kalman filter works in the time domain and produces optimal state estimates by assuming white Gaussian noise in the measurements. But sometimes the corruption of a signal is not due to noise but to a real phenomenon such as rotor vibration (Figures 3.18-3.19), which is not white and has peaks at certain frequencies. Typically, notch filters are used to remove such noise, but a notch filter requires precise estimation of the frequency at which the vibration occurs. This is difficult for variable-RPM rotors, where the excitation frequency can change during flight. However, these rotor-RPM excitation frequencies are always well above the frequencies of typical flight dynamics, as seen in Fig. 3.19 where the vibration frequency is near 200 Hz. Hence, first-order discrete low-pass filters are used, with the following equation:

Y_{i+1} = \alpha X_{i+1} + (1 - \alpha) Y_i    (3.26)

where X_{i+1} is the (i+1)-th filter input, Y_{i+1} is the (i+1)-th filter output, and α is a parameter that can be tuned to vary the filter cut-off frequency. A 30 Hz low-pass filter was used to remove the effect of vibration in the accelerometers. It should be noted that this specific solution for vibration isolation does not scale to full-scale manned aircraft, where the rotor RPMs, and hence the vibration frequencies, are often much lower, such as 5 Hz. Notch filters become necessary in such cases.
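A sketch of this filter is given below; the mapping from cut-off frequency to α is the standard first-order RC relation, assumed here for illustration.

```python
import math

def lpf_alpha(f_cut, dt):
    rc = 1.0 / (2.0 * math.pi * f_cut)
    return dt / (rc + dt)                 # alpha for a given cut-off frequency

class LowPass:
    def __init__(self, f_cut, dt):
        self.alpha, self.y = lpf_alpha(f_cut, dt), 0.0

    def update(self, x):
        # Eq. 3.26: Y[i+1] = alpha*X[i+1] + (1 - alpha)*Y[i]
        self.y = self.alpha * x + (1.0 - self.alpha) * self.y
        return self.y

lpf = LowPass(f_cut=30.0, dt=0.001)       # 30 Hz filter on 1 kHz accelerometer data
print(lpf.update(9.81))
```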
3.13 Summary

This chapter covered in detail the design and fabrication of the autonomous quadrotor UAV used as a representative VTOL vehicle for ship-deck tracking and landing. The various aircraft subsystems, including structure, propulsion, avionics, and control, were explained in depth. To summarize: the structure is made of aluminium tubes, with 3D printed parts used to attach the avionics and propulsion to it. Commercial-Off-The-Shelf (COTS) motors and rotors were used for propulsion but were characterized through in-house experiments. A total of 3 sensors and 2 computers make up the avionics. Kalman filters are used for state estimation, while linear controllers are used to stabilize the vehicle and track the ship-deck motion. Based on the results presented, the following conclusions are drawn:

1. A custom-designed UAV provides the right flexibility to focus on vision-based tracking/landing performance. Modern fabrication techniques such as 3D printing and CNC machining make this feasible.

2. The inner attitude stabilization loop has a settling time of around 0.4 s, while the outer trajectory tracking loop settling time is 10 s. This maintains a timescale separation between the translational and rotational dynamics of the vehicle, consistent with the assumption of the designed control system.

3. The tracking bandwidth of the designed control system is 0.15 Hz.

Chapter 4: Fiducial-Based Vision System

The quadrotor aircraft uses only a camera and computer vision algorithms to detect and track the ship-deck motion. This chapter describes the vision system in detail. In order, it covers the camera hardware, calibration, the fiducial-based vision algorithm, and verification of the position and velocity measurements from the vision system.

4.1 Camera

The camera is a global shutter camera with fixed focus, high dynamic range, and auto-exposure. A global shutter was necessary to get images without motion artifacts. Auto-exposure was necessary to prevent over- and under-exposed images when there are large changes in environmental lighting. Fixed focus was chosen for repeatability and ease of calibration. High dynamic range allows the sensor to properly perceive bright and dark areas of a scene in sufficient detail at the same time. The image sensor of the camera is 1920×1200