ABSTRACT

Title of Dissertation: COMPUTATIONAL METHODS FOR NATURAL WALKING IN VIRTUAL REALITY

Niall L. Williams
Doctor of Philosophy, 2024

Dissertation Directed by: Professor Dinesh Manocha, Department of Computer Science

Virtual reality (VR) allows users to feel as though they are really present in a computer-generated virtual environment (VE). A key component of an immersive virtual experience is the ability to interact with the VE, which includes the ability to explore it. Exploring a VE is usually not straightforward, since the virtual environment is typically shaped differently from the user's physical environment. This can cause users to walk on virtual routes whose corresponding physical routes are obstructed by unseen physical objects or by the boundaries of the tracked physical space. In this dissertation, we develop new algorithms to understand how people explore large VEs using natural walking, and to enable them to do so while incurring fewer collisions with physical objects in their surroundings. Our methods leverage concepts of alignment between the physical and virtual spaces, robot motion planning, and statistical models of human visual perception. Through a series of user studies and simulations, we show that our algorithms enable users to explore large VEs with fewer collisions, allow us to predict the navigability of a pair of environments without collecting any locomotion data, and deepen our understanding of how human perception functions during locomotion in VR.

COMPUTATIONAL METHODS FOR NATURAL WALKING IN VIRTUAL REALITY

by

Niall L. Williams

Dissertation submitted to the Faculty of the Graduate School of the University of Maryland, College Park in partial fulfillment of the requirements for the degree of Doctor of Philosophy 2024

Advisory Committee:
Professor Dinesh Manocha, Chair/Advisor
Professor Jae Shim, Dean's Representative
Professor Aniket Bera
Professor Ming C. Lin
Professor Huaishu Peng

© Copyright by Niall L.
Williams 2024

To all of my teachers and mentors.

Acknowledgments

Access to education is an extreme privilege, so I am extraordinarily grateful to the systems in place that allowed me to spend 5.5 years of my life essentially studying whatever I find interesting. Hopefully the research produced from this dissertation will find its place in contributing back to society one day.

I thank my PhD advisors/mentors, Dr. Dinesh Manocha, Dr. Aniket Bera, and Dr. Ming C. Lin, for trusting me enough to let me study a new topic that nobody in our lab was working on. Your flexibility in the topics I could study has made the PhD a very fulfilling and enjoyable experience. I especially thank Dr. Manocha for giving me the freedom to study whatever I happened to find interesting at the time, Dr. Bera for meeting with me at all hours of the day to discuss the low-level details that I got stuck on, and Dr. Lin for her infectious enthusiasm for VR locomotion and research in general. I also thank the rest of my committee, Dr. Huaishu Peng and Dr. Jae Kun Shim, for providing their outside perspectives and helpful feedback on this dissertation’s work.

Outside of my PhD committee, there are many other teachers and mentors whom I must thank. I owe the greatest thanks to Dr. Tabitha C. Peck for formally introducing me to computer graphics and research, for always challenging and encouraging me, for patiently answering my career and life questions for the last 7+ years, and for teaching me the importance of staying positive and believing in myself. I would not have made it this far without your support. I also thank Mary C. Whitton for giving me feedback on the first paper that I submitted for peer review, for providing words of encouragement and positivity, for helping me realize the importance of history, and for always chatting with me about life and research at conferences. Thanks to Dr.
Kerry McIntyre Magee for providing me with an outstanding education at what I view as the start of my journey as a scientist, for explaining to me what a PhD even is, for teaching me how to understand a complex system as the sum of individual components working together, for giving me exam questions that challenged my logic and reasoning skills, and for teaching me to always label my axes! Thanks to Ashok Pillai and the Davidson Computer Science & Mathematics faculty for introducing me to computer science and providing me with the foundational skills needed to become a strong programmer. I thank Karl Savoury for teaching me how to write a proof, for getting me hooked on geometry, for challenging me with new math problems that I did not understand, and for being the first teacher to treat me like a peer and a collaborator instead of a subordinate student. Thanks to Joan Cogle, Jane Harper, Jai Oshun, Cherry Robinson, and Rosie Smalling for teaching me how to organize my work, manage my time, and develop my public speaking skills. I also thank Flloyd Logan for teaching me the importance of discipline and respecting other people’s time.

I am also very grateful to my mentors from my internships in industry research labs, who opened my eyes to a whole new way to think about perception and computer graphics. Thanks to Dr. Ian Erkelens, Dr. Phillip Guan, and the rest of the Applied Perception Science team at Meta Reality Labs for showing me what vision science is and for teaching me the full value of conducting precise, well-controlled experiments. Thanks to Dr. Ruth Rosenholtz at NVIDIA, who taught me a great deal about “smart models” of vision, how to choose my wording very carefully, and how to think carefully about which component of perception an experiment is actually measuring, and who emphasized the value of developing computational, predictive models of human vision.
I wish I had the chance to work with you earlier in my career so that your mentorship could have had a greater influence on the work in this dissertation.

I would be remiss not to thank my many friends, old and new, who made the PhD even more fun than it already was. Thanks to my GAMMA lab mates Jaehoon Choi, Vishnu Sashank Dorbala, Jason Fotso-Puepi, Alexander Gao, Tianrui Guan, Pooja Guhan, Divya Kothandaraman, Geonsun Lee, Yonghan Lee, Bhrij Patel, Yiling Qiao, Shreelekha Shriram Revankar, Logan Stevens, Xijun Wang, Ruiqi Xian, Laura Zheng, and the rest of the lab for all the advice, fun times, and insightful research discussions. I also thank my other friends in the CS department, including Yusuf Alnawakhtha, Connor Baumler, Dr. Marina Knittel, Jiasheng Li, Pedro Sandoval, Manasi Shingane, Zeyu Yan, and (honorary GAMMA member) Kevin Zhang. I thank my friends from outside UMD, including Zubin Choudhary, Dr. Shakiba Davari, Alexander Giovanelli, Matt Gottsacker, Dr. Ryan Hamilton, Dr. Saad Hassan, Dr. SeulAh Kim, Dr. Lee Lisle, Enrique Melendez, Dr. Cassidy R. Nelson, Dr. Missie Smith, Dr. Ashutosh Srivastava, and Dr. Zoe (Jing) Xu for the fun adventures and great company at conferences and internships. A big thanks also goes out to my friends from college and high school, including Dr. Christopher Brooks, George Cai, Arthur Chen, My Doan, Ryan Ewing, Walker Griggs, Yeonjae Han, Sarah Hancock, Hoot Hennesy, Mingyu Kim, Ben Kuschner, Josh Kuschner, Jaeyoung Lee, Dr. Lillian Lowrey, James Ni, Khoa Phan, Jimmy Plaut, Tom Ren, Ryan Strauss, and Terry Zervos.

Finally, I am very grateful for the administrative and financial support I have received. Thanks to Migo Gui, Jodie Gray, Tom Hurst, and the rest of the CS department administration for helping with research and course logistics. I also thank the Link Foundation for graciously funding part of my PhD via the Modeling, Simulation, & Training Fellowship.

“I don’t mistrust reality, of which I know next to nothing.
I mistrust the picture of reality conveyed to us by our senses, which is imperfect and circumscribed.”
– Gerhard Richter, 1972

Table of Contents

Acknowledgments
Table of Contents
List of Tables
List of Figures

I Introduction & Background

Chapter 1: Overview
1.1 Main Contributions
1.2 Overview of Dissertation

Chapter 2: Background
2.1 Human Perception
2.1.1 Visual Perception
2.1.2 Simulator Sickness
2.2 Virtual Reality Locomotion Interfaces
2.2.1 Natural Walking in Virtual Reality
2.2.2 Redirected Walking
2.3 Motion Planning

II Locomotion Interfaces and Metrics for Natural Walking in Virtual Reality

Chapter 3: Alignment-Based Redirection
3.1 Introduction
3.2 Background
3.2.1 Perceptual Thresholds
3.2.2 Redirected Walking Controllers
3.2.3 Environment Complexity Metrics
3.3 Redirection by Alignment
3.3.1 Definitions and Background
3.3.2 Alignment-based Redirection Controller
3.4 Experiments
3.4.1 Performance Metrics
3.4.2 Simulated Framework
3.4.3 Environment Layouts
3.4.4 Experiment Design
3.5 Results
3.5.1 Experiment 1 (Environment A)
3.5.2 Experiment 2 (Environment B)
3.5.3 Experiment 3 (Environment C)
3.5.4 Proof of Concept Implementation
3.6 Discussion
3.7 Conclusions and Future Work

Chapter 4: Visibility-Based Redirection
4.1 Introduction
4.2 Prior Work and Background
4.2.1 Redirection Controllers
4.2.2 Motion Planning and Visibility Polygons
4.3 Redirected Walking Using Visibility Polygons
4.3.1 Definitions and Notation
4.3.2 Redirected Walking and Configuration Spaces
4.3.3 Finding RDW∗() Using Visibility Polygons
4.4 Evaluation
4.4.1 Environment Pairs
4.4.2 Simulated Environment
4.4.3 Experiment Design
4.5 Results
4.5.1 Experiment 1
4.5.2 Experiment 2
4.5.3 Experiment 3
4.5.4 Experiment 4
4.6 Discussion
4.6.1 Static Scenes
4.6.2 Dynamic Scenes
4.6.3 Other Considerations
4.7 Conclusion, Limitations, and Future Work

Chapter 5: Distractor-Based Redirection
5.1 Introduction
5.2 Background & Related Work
5.2.1 Natural Walking in Virtual Reality
5.2.2 Distractors in Virtual Reality
5.3 Persistent Distractor-driven Locomotion
5.3.1 Definitions & Terminology
5.3.2 Distractor Interaction Detection
5.3.3 Safe Zone Computation
5.3.4 Update Distractor Behavior
5.3.5 Trade-offs Between Collision Avoidance & Immersion
5.4 Experiments & Results
5.4.1 Experiment 1: Comparison with RDW
5.4.2 Experiment 2: Impact of Collision-avoidance Bias
5.4.3 Experiment 3: Impact of Distractor Behavior Feasibility
5.5 Discussion
5.5.1 Additional Implementation Examples
5.6 Conclusion, Limitations, and Future Work

Chapter 6: Quantifying Environment Navigability for Natural Walking in Virtual Reality
6.1 Introduction
6.2 Background and Prior Work
6.2.1 Navigability Metrics
6.2.2 Shape Similarity
6.3 Environment Navigation Incompatibility Metric
6.3.1 Environment Representation
6.3.2 User Position and Orientation
6.3.3 Measuring Compatibility of Local Surroundings
6.3.4 ENI Metric
6.3.5 ENI Metric: Analysis
6.4 Applications and Benefits of ENI
6.4.1 Analyzing Areas with Low and High Compatibility
6.4.2 Analysis of Changes in VE on Compatibility
6.4.3 Design Guidelines Based on ENI
6.4.4 Performance Analysis of RDW Controllers
6.5 User Studies and Validation
6.5.1 Experiment 1: Simulation Experiment
6.5.2 Experiment 2: User Studies
6.6 Conclusion, Limitations, & Future Work

III Perception and Physiology During Natural Walking in Virtual Reality

Chapter 7: Perceptual Sensitivity and Physiological Signals of Tolerance to Redirection
7.1 Introduction
7.2 Background and Related Work
7.2.1 Redirected Walking Thresholds
7.2.2 Physiological Signals of Users’ Internal State
7.2.3 Luminance & Motion Perception
7.3 Experimental Methodology
7.3.1 Experiment Design & Stimuli
7.3.2 Equipment & Participants
7.4 Results
7.4.1 Rotation Gain Thresholds
7.4.2 Simulator Sickness
7.4.3 Postural Data
7.4.4 Gaze Data
7.5 Discussion
7.5.1 Detection Thresholds and Sickness Scores
7.5.2 Gaze Data
7.5.3 Postural Data
7.5.4 Further Considerations
7.6 Conclusions, Limitations, & Future Work

IV Conclusion, Limitations, and Future Work

Chapter 8: Conclusion, Limitations, & Future Work
8.1 Summary of Results
8.2 Limitations
8.3 Future Work

List of Tables

3.1 Coordinates of vertices of boundaries and obstacles in each environment.
3.2 Coordinates of vertices of boundaries and obstacles in each environment.
3.3 Coordinates of vertices of boundaries and obstacles in each environment.
3.4 The results of pairwise post-hoc comparisons between controllers, computed using linear contrasts and reported using confidence intervals due to the large sample size [124]. For each metric, ψ̂ is the difference in estimated means between the two groups (an estimate of the true mean difference). CI lower is the lower bound of the confidence interval on this difference, and CI upper is the upper bound. Narrower intervals indicate a more precise estimate of the true mean difference. We can interpret a cell as the estimated difference between the group means (ψ̂), with CI lower and CI upper indicating that for 95% of samples, the true difference in means between the groups will fall in the range [CI lower, CI upper]. For a given row that compares Algorithm X vs. Algorithm Y, a positive ψ̂ value indicates that Algorithm X scored higher than Algorithm Y by ψ̂, while a negative value indicates that Algorithm X scored lower than Algorithm Y by ψ̂, bounded by CI lower and CI upper.
4.1 Coordinates of vertices of boundaries and obstacles in both environments used in Experiment 1.
4.2 Coordinates of vertices of boundaries and obstacles in both environments used in Experiment 2.
4.3 Coordinates of vertices of boundaries and obstacles in both environments used in Experiment 4.
4.4 The results of post-hoc pairwise comparisons of the average number of resets for the redirection algorithms tested in our experiments. The post-hoc tests are computed using linear contrasts. The ψ̂ value is the average difference in means between the first algorithm and the second algorithm listed in the “Redirection Controller” column. A negative ψ̂ value indicates that the first algorithm has a lower average number of resets across all 100 paths. The CI column presents the lower and upper bounds of the confidence interval, while the p column presents the significance level of the difference between the algorithms. The ψ̂ and CI values are rounded to three significant figures.
4.5 The results of post-hoc pairwise comparisons of the average distance walked between resets for the redirection algorithms tested in our experiments. The post-hoc tests are computed using linear contrasts. The ψ̂ value is the average difference in means between the first algorithm and the second algorithm listed in the “Redirection Controller” column. A negative ψ̂ value indicates that the first algorithm has a lower average distance walked between resets across all 100 paths. The CI column presents the lower and upper bounds of the confidence interval, while the p column presents the significance level of the difference between the algorithms. The ψ̂ and CI values are rounded to three significant figures.
6.1 Effect of sample density on the accuracy of the ENI metric, using the ⟨PE #1, VE #1⟩ environment pair. After increasing the density of our point sampling by roughly 16×, the mean and standard deviation of the ENI metric exhibited very little change in values, but incurred a significantly greater computation time.
6.2 Navigability results from simulating 50 random walking paths in different ⟨PE, VE⟩ pairs. Here, we define navigability as the average distance that the user can walk in the VE before colliding with an object in the PE, across all configurations in the PE and VE (Subsection 6.2.1). In general, the navigability of the environments decreases as the ENI score increases, indicating that our metric is able to correctly identify ⟨PE, VE⟩ pairs that are less amenable to real walking.
6.3 Navigability results from two separate user studies. In the first user study (first three rows), users walked towards a goal location in a static VE while located in three different PEs. In the second study (bottom three rows), users searched for a goal location in different VEs while located in the same PE. In both situations, our results showed that navigability decreases as the ENI score increases, validating the correctness of our metric.
7.1 Individual psychometric fits for each participant in the photopic and mesopic light conditions. We show the point of subjective equality (PSE), the standard deviation of the Gaussian (σ), and the 25% and 75% detection threshold gains. We also show the average for each of these values at the bottom of each sub-table. In general, participants exhibited RDW detection thresholds that were within the ranges found in prior work [111], though there is significant inter-participant variability [92]. Our results showed no significant differences in detection thresholds between the two lighting conditions.

List of Figures

1.1 An example of the VR locomotion problem.
The user wishes to travel in a straight line in the direction they are facing (dashed teal line). In the virtual environment (right), this corresponds to a valid path that takes the user into the virtual operation room. However, in the physical environment (left), this desired path is invalid because it yields a collision with the table front of them. Locomotion interfaces avoid these collisions by allowing the user to move through the virtual environment without requiring them to carry out the same movements in the physical space. . . 3 2.1 Diagrams that illustrate how different RDW gains can be used to increase the size of the explorable VE. The green borders represent the real-world tracked space borders, and the purple borders represent the borders of the VE that correspond to the size of the tracked space. Arrows indicate the user (green) or VE (purple) movement. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 3.1 A user being steered with our alignment-based redirection controller (ARC) in two different environments. In Environment 1, the virtual environment (VE) is larger than the physical environment (PE), and there is an obstacle in the northeast corner of the VE. The PE has no obstacles. In Environment 2, the VE is larger than the PE, and both have obstacles in different positions. (A) The user walks in a straight line forward in the VE. (B) In the PE, the user is steered on a curved path away from the edge of the tracked space, in order to minimize the differences in proximity to obstacles in PE and VE. (C) The user walks in a straight line forward in the VE, with obstacles on either side of the path. (D) The user is steered on a path with multiple curves in the physical space. The user avoids a collision with the obstacle in front of them, and is also steered to minimize the differences in proximity to obstacles in the PE and VE. We are able to steer the user along smooth, collision-free trajectories in the PE. 
Our extensive experiments in real-wold and simulation-based experiments show that in simple and complex environments, our approach results in fewer collisions with obstacles and lower steering rate than current state-of-the-art algorithms for redirected walking. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 3.2 Visualization of the three values from the PE and three values from the VE that constitute a user’s state. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 xiii 3.3 A visualization of the two steps involved in resetting. The top row shows the process of selecting the best direction for resetting. In this example, θreset is chosen to be θ3. To reduce visual clutter, we only show eight of the twenty sampled directions. The bottom row shows the user to turning to face the best direction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 3.4 Diagrams of the physical and virtual environment pairs tested in our experiments. 55 3.5 A heat map of the user’s physical position across all paths for each controller in Environment A. Yellow tiles indicate the most time spent at that location, while purple tiles indicate the least amount of time. S2C and APF steer the user such that they spent the large majority of their time in the center of the room, while ARC allows the user to visit each region of the room more evenly. . . . . . . . . 60 3.6 A histogram of the average curvature gain applied by each controller for each path in Environment A. The implementation of APF we used always applies the same gain, while S2C and ARC apply lower gains on average. S2C still applies gains fairly close to the perceptual threshold (≈ 7.6◦/s), but ARC is able to steer the user on paths with fewer collisions and significantly reduced curvature gains. Most of the gains applied by ARC fall in the 3◦/s − 5◦/s range, showing that ARC only applies the gains necessary to avoid collisions and maintain alignment. 
61 3.7 A heat map of the physical locations visited by the user in Environment B when steered with each controller. Yellow tiles indicate more visits to a region, while purple tiles indicate less time spent in a region. Obstacles are shown in black. S2C and APF keep the user concentrated near the center of the room since it is the most open space in all directions, while ARC is able to utilize more of the space and steer the user along all corridors in the room. ARC has some tendency to keep the user near the north wall of the room, which we suspect is due to the user getting stuck in between obstacles, but the exact cause is not clear. . . . . . . 62 3.8 A histogram of the average curvature gain applied for each path with each controller in Environment B. As in Environment A, APF applies a constant curvature gain when the user is walking. S2C and ARC apply gains with an average in the range of 4◦/s− 6◦/s, with ARC applying gains all gains at a lower intensity than about half of the gains applied by S2C. Note that the lowest gains applied by S2C are lower than those of ARC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 3.9 A heat map of the simulated user’s location in the physical environment when exploring a virtual environment using three different redirection controllers. Yellow tiles represent a large amount of time spent in that region, and purple tiles represent a small amount of time spent in that region. The Alignment-based Redirection Controller (ARC) allows the user to utilize more of the physical space while exploring the virtual world compared to S2C and APF. This means that users spends less time being reset and more time walking through the physical environment, when steered with ARC than with S2C or APF. This is supported by the results for the number of collisions and distance walked. . . . . . . . . . . . . . . . . . 65 xiv 3.10 The average curvature gain applied by each controller for all paths in Environment C. 
The same trend as in Environment B is seen here, where APF has a higher steering rate than S2C and ARC. One difference between the steering rates in Environment B and C is that the gains applied by S2C and ARC are in a higher range (6◦/s− 7◦/s) in Environment C than they were in Environment B (4◦/s− 6◦/s). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 3.11 Boxplots of performance metrics for each controller in each environment. The boxplots show the median and IQR for the data. A significant difference was found between all algorithms in all environments. ARC outperformed APF and S2C for all metrics in all environments except for average alignment in Environment C. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 3.12 The relationship between the environment complexity and the number of resets incurred by a redirection controller. ARC consistently has a better performance than S2C and APF for all environment complexities. The performance difference between ARC and the other algorithms is quite large for environments A and C, but the difference decreases drastically for Environment B. It is not clear why Environment B causes the controllers to have a more similar performance, but it may be due to the relatively few pathing options afforded by the narrow hallways of Environment B. Environments A and C both include regions with a fairly large amount of open space, unlike Environment B (see Figure 3.4). . . . . . . . . . . 72 3.13 A screenshot of the user’s state and recent path in Environment A for each controller. Each simulated user travelled on the same virtual path in this figure, and the screenshot was taken at the same time in the simulation. When steered with ARC, the system is able to achieve perfect alignment, and the user’s physical state and recent path matches the virtual counterpart. 
APF and S2C are not able to achieve alignment, and their paths and states are very dissimilar to the virtual counterparts. The state of the virtual user is not the same across all conditions because the virtual user pauses while the physical user reorients after a collision, and each controller incurred a different number of collisions. . . . . . . . . . . . 73 4.1 A visualization of the geometric reasoning that our redirection controller performs on every frame in order to steer the user in the physical space. First, the controller computes the visibility polygon for the user’s physical and virtual locations (the regions bounded by the blue and red edges, respectively). Next, the controller computes the region of space (part of the red visibility polygon) in front of the user that the user is walking towards in the virtual environment (yellow region in the right image). By comparing the areas of the regions, our controller computes the region in the physical space (yellow region in the left image) that is most similar to the virtual region the user is heading towards. Finally, the controller applies redirected walking gains to steer the user to walk towards the highlighted region in the physical space. Black dashed arrows indicate the user’s trajectory in the environment. Our algorithm yields significantly fewer resets with physical obstacles than prior algorithms. . . . . . . . . . . . . . . . . . . . . . . . . . . . 74 xv 4.2 Visualization of the redirected walking problem. If the user tries to walk on the virtual path pathvirt with no redirection applied, they will collide with the obstacle to their left in the physical space. After applying redirection, the user instead walks along pathphys and avoids any collisions. The free spaces Freephys and Freevirt are shown in blue and red, respectively. . . . . . . . . . . . . . . . 84 4.3 A visualization of the superimposition of the two free spaces, Freephys (blue) and Freevirt (red). 
Regions of Freevirt that do not overlap with Freephys signify regions of the virtual environment that the user cannot walk to without colliding with a physical obstacle. Our controller aims to steer the user in Ephys such that Freephys and Freevirt overlap in the region that the user is walking towards. . . . 87 4.4 An overview of our redirection controller based on visibility polygons. (A) We compute the visibility polygon corresponding to the user’s position in both the physical (blue) and virtual (red) environments. After the visibility polygons are computed, they are divided into regions called “slices” which we use later in our approach to measure the similarity of the two polygons. (B) The “active slice” in the virtual environment is computed. This is the slice of the virtual visibility polygon that the user is walking towards (shown in yellow). (C) The corresponding slice in the physical environment that is most similar to the active slice is computed. Similarity is measured using slice area. (D) Redirected walking gains are applied according to the user’s heading to steer them in the direction of the most similar physical slice that was computed in step (C). . . . . . . . . . . . 88 4.5 A visibility polygon after its slices are computed (Figure 4.5a) and the composition of one slice (Figure 4.5b). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89 4.6 The layouts of the different environment pairs we tested in our experiments. The faded circles in the virtual environment for Experiment 4 indicate that the circles change position over time. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95 4.7 Boxplot of the number of resets for each algorithm, across all 100 paths in Experiment 1. Our visibility-based algorithm significantly outperformed each of the other redirection controllers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
106 4.8 Boxplot of the number of resets for each algorithm, across all 100 paths in Experiment 2. The difference in the number of resets incurred is much larger for APF and S2C, which do not take advantage of alignment. ARC and our visibility-based controller (Vis. Poly.) have more similar performance levels, but our algorithm still produced significantly fewer resets. . . . . . . . . . . . . . . . . . . . . . . 107 4.9 Boxplot of the number of resets for each algorithm, across all 100 paths in Experiment 3. Our controller based on visibility polygons performed significantly better than all other controllers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108 4.10 Boxplot of the number of resets for each algorithm, across all 100 paths in Experiment 4. In the dynamic scene we tested, we once again found that our visibility-based algorithm was significantly better than the other controllers at avoiding resets with physical obstacles. . . . . . . . . . . . . . . . . . . . 109 5.1 A visualization of our distractor-driven locomotion interface designed to enable exploration of large virtual environments using natural walking. In the virtual environment, the user approaches a distractor (a frog) in an attempt to collect it using a virtual, hand-tracked jar. Our algorithm detects this interaction and updates the behavior of the distractor in order to guide the user away from the nearby boundary of the physical space (yellow tape in the physical environment). In particular, our algorithm causes the frog to jump away to one of many candidate positions (dashed red arrows) in the virtual environment that correspond to safe positions in the physical environment.
The use of such distractors causes the user to alter their virtual trajectory such that it is more compatible with their physical surroundings, which allows the user to explore virtual worlds with longer collision-free trajectories even when they are located in a small physical environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110 5.2 An overview of our persistent distractor-driven locomotion interface. Given as input the layouts of the physical and virtual environments and the user’s configuration in each environment, our system listens for an interaction between the user and a persistent distractor in the virtual environment. If an interaction is detected, the system computes the regions of the physical environment that the user can be safely guided towards (Subsection 5.3.3). Once a goal configuration to guide the user towards has been chosen, our system computes a path from the user’s current physical configuration to the goal physical configuration and modifies the behavior of the distractor such that it guides the user along this computed path (Subsection 5.3.4). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118 5.3 A visualization of the safe zone concept. The navigable space Cfree is shown in green, and the regions that yield a collision Cobs are shown in black. In this figure, the user (white cursor icon) is heading towards a physical obstacle and should be guided towards a safe region of the physical space. An example point q∗phys in the safe zone S is shown in the top left corner of the environment. A valid path that leads the user towards q∗phys is shown in blue, and an invalid path is shown in red. Left: When the VR system has access to a full map of the user’s PE, the safe zone S is equal to the entire free space Cfree.
Right: When the VR system does not have a full map of the user’s PE, the safe zone S can be a partial representation of the user’s physical surroundings (subset of Cfree), such as the visibility polygon centered at their current position. Portions of the environment that are not known to the system are shown as black dashed lines. . . . . . . . . . . . . . . . . . . 121 5.4 The physical and virtual environments used in our implementation. The physical environment (top) was a 4.3m × 6.125m space. The green dots represent the four pre-computed safe zones that users were guided towards by the persistent distractors (frogs). The first virtual environment (bottom left), used in Experiments 1 and 2, was a 20m×20m environment with bushes, trees, and other miscellaneous forest objects. The second virtual environment (bottom right), used in Experiment 3, was a 20m×20m environment with significantly fewer bushes, and with large rocks and a plateau that obstructed the movements of our persistent distractors. . . . . 132 5.5 A diagram illustrating how the distractor behavior can be used to guide the user towards the desired goal configuration q∗phys. When the user gets too close to a physical obstacle and needs to be guided back to a safe location in the PE, our algorithm updates the behavior of a nearby persistent distractor in the VE to naturally guide the user towards the virtual configuration q∗virt that corresponds with q∗phys. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133 5.6 Left: Box plot of the average distance walked between resets for each participant, grouped by the experiment condition. Results show that, compared to randomized distractor behavior, participants walked significantly further (7.864m vs 6.269m) before incurring a reset when they navigated through the VE using our collision-aware persistent distractors (∗ p < 0.05). Right: A histogram of the distances of paths travelled between resets, grouped by experiment condition.
Paths that were influenced by a collision-aware distractor were on average longer than those traveled when interacting with random distractors and RDW. . . . . . . . . . . . 134 5.7 Left: A box plot of the average distance walked between resets for each participant, grouped by the experiment condition. In Experiment 2, the likelihood that distractors followed a collision-aware trajectory was reduced, to show the impact of this parameter on the user’s locomotion experience. Compared to random distractor behavior, participants traveled, on average, similar distances while being guided with our collision-aware distractors. The lack of significant differences highlights the importance of generating distractor behaviors that try to guide the user towards safe zones. Right: A histogram of the distances of paths travelled between resets, with and without our collision-aware distractor behavior. Unlike Experiment 1, the two distributions have more overlap, indicating that users walked roughly equal distances between resets regardless of the distractor behavior. . . . . . . . . 135 5.8 Left: A box plot showing the average distance walked between resets for each condition in Experiment 3. In Experiment 3, the VE was modified so that it was harder for distractors to execute diegetic behaviors that could guide the user towards the safe zone. We see that participants travelled a similar distance regardless of the presence of collision-aware or naïve distractors. This lack of significant differences highlights the importance of having a virtual experience in which the persistent distractors can reliably guide the user towards a virtual location that corresponds closely to the physical safe zone. Right: Histogram of the distances of paths walked between resets.
Similar to Experiment 2, we see a high amount of overlap between the two distributions, which highlights that the users’ locomotion patterns were similar despite the use of collision-aware distractors in one of the conditions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136 5.9 Plots of an example path that a participant travelled for each experiment. In each plot, the user is being guided using our distractor-driven interface. Green circles represent bushes in the VE that frogs jumped to, red X’s represent positions where the user incurred a reset, teal stars represent positions where the user caught a frog, and black shapes represent obstacles. The boundary of the PE is the dashed black rectangle (6.125m × 4.3m), and the boundary of the VE is the solid black square (20m×20m). The user’s physical and virtual trajectories are shown in blue and orange, respectively. Left: When frogs had a 90% chance to flee and 90% chance to choose a destination using our method (as in Experiment 1), the user spends some of their time following the frog around a small area of the VE, then chases the frog to another location in the VE if it randomly jumps to a far-away location, or searches for new frogs after catching one. Middle: When the frogs had a 90% chance to flee and only a 30% chance to choose a destination using our method (as in Experiment 2), the user incurred more resets than in Experiment 1. Right: When the frogs had a 90% chance to flee and a 90% chance to choose a destination using our method but it was harder for the distractor to reach a location that aligned closely to the goal physical configuration q∗phys, the user incurred a similar number of resets as in Experiment 2 and was not able to catch any frogs. 137 6.1 A visualization of our Environment Navigation Incompatibility (ENI) scores for a physical environment paired with three different virtual environments with increasing environment area.
Our metric is used to accurately quantify whether it is possible to compute a good mapping between the geometric layouts of these environments. We sample points across the virtual environment to represent the user’s position (shown as colored circles) and compute the corresponding point in the physical environment based on comparing the local neighborhoods. Through this visualization, we can see which regions or subsets of the VE are more or less compatible with the PE. . . . . . . . . . . . . . . . . . . . . . . . . 138 6.2 Left: An environment with obstacles (black) and the constrained Delaunay triangulation (green) of the free space. Right: The vertices (green) of the constrained Delaunay triangulation that lie inside the free space. These vertices are the sampled points at which we compute visibility polygons to describe the structure of the environment and compute our ENI metric. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150 6.3 An illustration of the impact of the similarity between the user’s physical and virtual surroundings on their ability to travel on collision-free paths. In the top row, the user (shown as the white cursor) cannot walk forward in the VE without colliding with an object in the PE. In the bottom row, the user’s proximity to obstacles in the two environments is more similar, so more of the possible paths in the VE correspond to collision-free paths in the PE. In our metric, we compute this area of the virtual surroundings that cannot be accessed from a particular physical surrounding as a measure of the navigability at a pair of physical and virtual configurations. . . . . . . . . . . . . . . . . . . . 152 6.4 Top row: Two visibility polygons Pphys and Pvirt in a ⟨PE, VE⟩ pair. Bottom row (left): Pphys and Pvirt have been translated such that their kernels lie on the same 2D position in the plane.
Bottom row (right): The result of the boolean difference operation Pvirt\Pphys is shown as the red polygons. These polygons represent all the regions of Pvirt that cannot be accessed when the user is located at kphys and kvirt in the PE and VE, respectively. Our metric uses the total area of Pvirt \Pphys as a measure of the similarity of the user’s local physical and virtual surroundings. 170 6.5 The effect of selecting a bar of the histogram in our interactive visualization. When a bar is selected (right), the physical and virtual points that contribute towards this histogram bar are highlighted in orange (left and middle). . . . . . . 171 6.6 The effect of selecting a set of virtual points using the lasso tool. When virtual points are selected (left), the corresponding most compatible points (computed via Equation 6.3.4.3) are shown in red in the PE (right). . . . . . . . . . . . . . . 171 6.7 ENI metric scores for three different ⟨PE, VE⟩ pairs, where the PE is static and the density of objects in the VE increases. As the density increases, the amount of navigable space decreases, creating a more navigable ⟨PE, VE⟩ pair due to the small size of the PE. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172 6.8 Top left: A virtual path in an empty 10m × 10m VE. Top right, bottom left, and bottom right: The physical path the user travels on when steered by APF [205], ARC [231], and S2C [84]. The physical paths are colored according to the ENI scores between the corresponding points along the physical and virtual paths. The path yielded from ARC is more compatible with the virtual path, suggesting that the user is less likely to incur collisions during locomotion with ARC. . . . . . . 173 6.9 Left: A user in the lab space in which we conducted our user evaluations. Right: A screenshot of the user’s starting configuration in the VE at the beginning of a trial in our first user study. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
174 6.10 Diagrams of the layouts of the VE and PEs used in the first user study. The blue circle indicates the user’s starting position in each environment, and the green, pink, and red circles indicate the locations of the goal in the VE during different trials. The dimensions of each environment are 4.37m × 6.125m. . . . . . . . . . 174 6.11 Environment A introduced by Williams et al. in [231]. . . . . . . . . . . . . . . . 175 6.12 Environment B introduced by Williams et al. in [231]. . . . . . . . . . . . . . . . 175 6.13 Environment C introduced by Williams et al. in [231]. . . . . . . . . . . . . . . . 176 6.14 The three environment pairs used in our first user study. . . . . . . . . . . . . . . 177 6.15 The three environment pairs used in our second user study. . . . . . . . . . . . . 178 7.1 A visualization of our experiment paradigm and the properties of physiological signals that we found to be correlated with scene motion during redirected walking (RDW). (A) We conducted a psychophysical experiment in which participants completed a rotation task across hundreds of trials, with different amounts of additional scene motion injected into the virtual environment during the rotation. Participants reported on whether or not they perceived the additional injected motions, and we computed their visual sensitivity to these motions. (B) Our analyses revealed that as the speed of injected motions increased, the stability of participants’ gaze (left) and posture (right) decreased. These results show, for the first time, a direct correlation between the strength of redirection (injected visual motion gains) and physiological signals. . . . . . . . . . . . . . . . . . . . 180 7.2 Screenshots of the virtual office environment used in our experiment (during the photopic condition). Ambient office sounds were played to help prevent sounds from the physical environment from serving as a cue for the participant’s orientation in the physical space.
(A) The view of the environment that participants saw at the beginning of each trial. The white arrow indicated to the user which direction they should rotate, and this arrow disappeared after they rotated 5° from the starting position in the direction of the arrow. (B) An example view of the environment at the end of a trial. When the user rotated 90° in the virtual environment (±5°), a beep tone was played that indicated that the user should stop rotating and maintain their current orientation in the environment. After maintaining this orientation for 1 second, a green check mark appeared to indicate that they successfully completed the trial. . . . . . . . . . . . . . . . . . . . . 194 7.3 Psychometric curves fit to participants’ pooled response data for the photopic (yellow) and mesopic (blue) conditions. The graph shows the average probability of responding “greater” to the post-trial question “Was the virtual movement smaller or greater than the physical movement?”. The yellow- and blue-shaded regions indicate the estimated range of rotation gains that are usually imperceptible to users (i.e., the 25% and 75% detection thresholds). Error bars for each data point denote the standard error. The pooled detection thresholds for photopic and mesopic conditions were similar to values found in prior work that used photopic stimuli, and there were no significant differences between the two conditions. The detection threshold gains shown here are not exactly the same as the average values shown in Table 7.1 since we computed the curves in this plot by fitting a psychometric curve to the pooled participant responses, while Table 7.1 computes the average of the curves fit to individual participants’ responses for each condition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207 7.4 A scatter plot of users’ SS scores after the light (photopic) and dark (mesopic) blocks of our experiment.
In general, participants exhibited SS levels that are typical of RDW detection threshold experiments. The data belonging to the outlier male participant with the highest SS scores did not show any anomalous patterns, so their data are included in our analyses. . . . . . . . . . . . . . . . . . 208 7.5 Examples of one participant’s posture data for two different trials (each row corresponds to one trial). The left column shows the participant’s head position projected onto the ground plane (black curve), with the centroid of their positions at the origin (red dot). For each trajectory point sampled, we compute a proxy for postural sway as the distance between the centroid and the sampled head position (i.e., the distance from each point to the origin). The right column shows the participant’s postural sway (purple curve) and total amount rotated in the physical environment (orange curve) across the duration of the trial. The points along the trajectory curves and postural sway curves are colored according to the time in the trial (purple indicates the beginning of the trial, yellow indicates the end of the trial). These plots show that as the gain increases, participants’ postural sway also increases—a correlation which was statistically significant (Subsection 7.4.3). 209 7.6 An example of one participant’s horizontal eye position (red curve, in UV coordinates of the rendered image) during one trial. Green segments indicate saccades (gaze velocity above 30°/s), which are also identifiable as a very steep slope in the red curve, denoting the eye’s horizontal position. The data help to confirm that our participants’ gaze behavior was free of abnormalities since this plot shows that gaze behavior was characterized by typical nystagmus responses that are expected in healthy observers during head rotation [1]. . . . . . . . . . . . . . . . . . . . . . . . . .
210 7.7 A graph showing a user’s gaze velocity (blue) compared to the total amount they have rotated their body during one trial (orange). The gaze velocity curve is characterized by multiple saccades that arise from vestibular nystagmus and the optokinetic reflex. The body rotation curve increases from 0° to ∼95° over the course of the trial. The grey shaded region is the range 85°–95° (the trial ended if the orange curve stayed within this region for 1 s). As the trial progresses, the user’s gaze velocity gradually decreases to 0°/s as they wait for the trial to complete after rotating a sufficient amount. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211 Part I Introduction & Background Chapter 1: Overview Virtual reality (VR) is a system that allows the user to experience and interact with a computer-generated digital environment [101]. VR is an interesting technology because it creates a feeling of presence, the feeling that the user is really in the virtual environment (VE) that they are viewing [178]. Key to this feeling that one is actually in the VE is the ability to interact with the VE—the user’s actions in VR create a change in their experience of the virtual surroundings. For example, picking up a virtual basketball and throwing it should create a plausible dynamic response that causes the ball to bounce away according to the laws of physics and the geometry of the virtual objects that make up the VE. One important aspect of interaction is the ability to explore the VE. The ability to actively explore an environment (i.e., using one’s own volition) leads to better acquisition of route and survey knowledge in the environment [36]. Furthermore, it has been shown that the ability to explore an environment using natural walking contributes significantly to a user’s feeling of presence in VR [209] and their performance on search and wayfinding tasks [84, 162].
Here, we define “natural walking” as step-driven locomotion that does not use treadmills or other mechanical devices, makes use of the entire gait cycle, and, ideally, is perceived by the user as identical to how they walk in the real world while not in VR [116, 188]. However, a fundamental problem with natural walking in VR is that the user’s physical movements are usually mapped one-to-one to their virtual movements, which means that an unobstructed path in the virtual world may correspond to an obstructed path in the physical environment (see Figure 1.1). Figure 1.1: An example of the VR locomotion problem. The user wishes to travel in a straight line in the direction they are facing (dashed teal line). In the virtual environment (right), this corresponds to a valid path that takes the user into the virtual operation room. However, in the physical environment (left), this desired path is invalid because it yields a collision with the table in front of them. Locomotion interfaces avoid these collisions by allowing the user to move through the virtual environment without requiring them to carry out the same movements in the physical space. Locomotion interfaces (LIs) are techniques that mitigate this problem by changing the mapping between the user’s physical and virtual movements or otherwise allowing users to control their virtual movements (e.g., via a joystick) [52, 188]. Indeed, researchers have invested a significant amount of work into developing LIs that are comfortable, easy to use, and effective for moving the user through the VE. Each interface comes with different advantages and disadvantages in terms of its learning curve, the level of presence afforded, and its efficiency. In this dissertation, we focus on locomotion interfaces that allow users to explore VEs using natural walking due to the benefits it provides to the user experience.
Most of the research in natural walking interfaces for VR builds upon a technique known as redirected walking (RDW) [157, 158], which works by subtly manipulating the mapping between the user’s physical and virtual movements, allowing the VR system to steer the user away from physical obstacles in their surroundings that they cannot see. There are three main questions that the majority of natural walking (and especially RDW) research has studied: 1. How severely can we manipulate the mapping between a user’s physical and virtual motions before the discrepancy in their self-motion signals becomes detrimental to their virtual experience? 2. What is the optimal direction to steer a user in the physical environment to avoid as many collisions with physical obstacles as possible? 3. For a given pair of physical and virtual environments, what is the optimal natural walking interface that yields the best user experience? Note that this third question has received comparatively much less attention than the first two questions. In this dissertation, we develop methods that make progress on all three of the above questions. We focus on computational approaches to these questions since computational methods have many benefits for modeling, simulating, and understanding complex systems and large amounts of data [90, 171]. For example, by redefining the redirection problem using a rigorous mathematical framework (Chapter 4), our RDW algorithms can be more readily applied to a wide range of scenarios as long as the correct inputs (e.g., walking trajectories, environment layouts, human factors) can be fed into our algorithms. Indeed, prior work has demonstrated that redirection (i.e., simultaneous, decoupled locomotion in a real and virtual environment) exhibits complex behaviors akin to chaotic systems [81] and has shown the usefulness of simulations for measuring the efficacy of different redirection algorithms [10].
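To make the idea of "subtly manipulating the mapping" concrete, the following is a minimal, illustrative sketch of a rotation gain, one common form of RDW gain. The function name and the gain value of 1.1 are not from the dissertation; they are placeholders chosen for illustration.

```python
def redirected_yaw_update(virtual_yaw, physical_yaw_delta, rotation_gain):
    """Map one frame's physical head rotation onto the virtual camera.

    With rotation_gain = 1.0 the mapping is one-to-one. A gain above 1
    makes the virtual scene rotate faster than the head, so the user
    completes a desired virtual turn with a smaller physical turn; a gain
    below 1 forces a larger physical turn. Kept subtle, the discrepancy
    goes unnoticed while reorienting the user in the physical room.
    """
    return virtual_yaw + rotation_gain * physical_yaw_delta

# Example: the user physically turns 90 degrees in 1-degree steps while a
# subtle gain of 1.1 is applied, so they virtually turn about 99 degrees.
virtual_yaw = 0.0
for _ in range(90):
    virtual_yaw = redirected_yaw_update(virtual_yaw, 1.0, 1.1)
```

Real RDW systems combine this with translation and curvature gains and clamp each gain to its perceptual detection threshold, which Chapter 7 measures experimentally.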
In an effort to develop computational methods for VR locomotion, we build upon the concept of alignment [207] as a way of comparing the similarity of the structure of the physical and virtual environments. To achieve this, we introduce environment similarity metrics that are precisely defined based on the geometric structure of the environments. We also adapt techniques from robot motion planning to define a rigorous mathematical framework that allows us to reason about VR locomotion in generalized, abstract terms. This framework enables us to develop locomotion interfaces that can operate in any environment as long as particular constraints are fulfilled. Furthermore, we show that this framework allows us to better understand and make predictions about locomotion in a pair of physical and virtual environments without the need to run user studies to collect locomotion data. Additionally, we measure the correlation between a user’s physiological signals (gaze and postural stability) and the intensity of VE manipulations introduced by redirected walking (RDW), a popular natural locomotion interface. To evaluate our methods, we use both quantitative and qualitative metrics. This includes statistical models of human perception, quantifications of distance walked and frequency of collisions in VR, presence and simulator sickness questionnaires, and semi-structured interviews with participants. 1.1 Main Contributions The main contributions of this thesis are: 1. Improved redirected walking algorithms based on alignment and distractors: We develop two new algorithms that steer the user away from unseen physical obstacles with a higher success rate than existing state-of-the-art algorithms. Existing algorithms typically do not consider the layout of the virtual environment when deciding where to steer the user in the physical space.
We show that by considering the structure of both the physical and virtual environments together, one can develop a new kind of redirection algorithm that can avoid more collisions than algorithms that do not consider the structure of the virtual environment, in both static and dynamic scenes. Our algorithms leverage alignment, the concept of measuring the similarity of the user’s state (in our case, proximity to objects) in the physical and virtual environments, to approximate the likelihood that the user will collide with a physical object. Once an alignment score has been computed, we use this metric to steer the user to a safer physical location that minimizes the discrepancy between the user’s proximity to physical and virtual objects. Next, we develop a third algorithm that formalizes the usage of distractors, any element of the virtual environment that aims to capture the user’s attention [148], for improved collision-avoidance during VR locomotion. We describe a framework for how distractors can be implemented into any natural-walking locomotion interface as long as the VR system can compute regions of the physical space that the user can safely navigate towards and a feasible distractor behavior can be generated that guides the user to that safe physical region. We demonstrate the viability of this framework through a simple implementation and study the effects of distractor behavior on the effectiveness of this framework. 2. A novel, rigorous formulation of the redirected walking problem: Many redirection algorithms have been based on simple, hand-designed heuristics drawn from intuitions about the situations in VR walking that are typically challenging (e.g., small physical spaces). This decision to use heuristics limits an algorithm’s ability to perform well across a wide range of physical and virtual environments since it is likely that the chosen heuristics do not cover the vast range of possible environments and configurations the user may be in.
Taking inspiration from robot motion planning, we reformulate the redirected walking problem in terms of the user’s configuration in either environment and their trajectory through the virtual environment. We then show how this framework highlights geometric and perceptual constraints that tend to make collision-free navigation difficult. Two of the redirection algorithms introduced in this thesis are based on this mathematical framework. 3. A new metric to quantify navigability for virtual reality: A large challenge with VR locomotion is that it is difficult to know how much collision-free navigation is possible for a given physical and virtual environment without conducting a user study. This is because quantifying the navigability of a pair of environments usually requires gaining an understanding of the types of virtual paths the user is likely to travel on, which determines how feasible it is for the user to safely navigate through their corresponding physical environment. We introduce, for the first time, a metric that approximates the navigability of a given pair of physical and virtual environments without using any user locomotion data (real or simulated). Our metric is based purely on the geometric layout of the two environments. Our metric is built on the observation that locomotion is a primarily local problem and that by quantifying the similarity of the local structure of uniformly-sampled points across the two environments, we can approximate the likelihood that a user will incur a collision for any given starting positions in the physical and virtual environments. We validate our metric using large-scale simulations and find that our metric is correlated with the navigability of environment pairs. 4. A new study of the relationships between redirection and physiological signals: Redirected walking works by injecting small, subtle rotations and translations into the user’s virtual camera trajectory as they move around in their physical space.
An important component of successfully deploying a redirected walking system is ensuring that these rotations and translations are never so large that the user consciously notices them. If the user notices the applied redirection, it will likely interfere with the quality of their virtual experience and may cause simulator sickness [104]. Traditional methods for estimating a user's sensitivity to redirection employ psychophysical threshold estimation techniques, which are not scalable: they require large amounts of time, can cause users to feel fatigued or bored, and do not generalize well to the broader population (i.e., there are individual differences in sensitivity). We conduct a study that measures perceptual thresholds for RDW rotation gains and examines the relationship between the strength of these gains and physiological signals generated by the user. In particular, we show that the strength of the gain is correlated with the stability of the user's gaze and posture, which opens the door for new methods of RDW sensitivity estimation that do not require the long calibration times of traditional psychophysical methods. 1.2 Overview of Dissertation This thesis presents new methods for understanding locomotion in virtual reality and for developing natural walking-based locomotion interfaces. We organize the thesis as follows: • Chapter 2—Background: We provide a high-level overview of the human perceptual system and locomotion interfaces for virtual reality. In particular, we detail the basic mechanics of how visual perception works and how it contributes to a user's perception of self-motion through their environment. We also provide details on the basics of locomotion interfaces for virtual reality, with a focus on methods that enable users to explore their virtual surroundings using natural walking.
• Chapter 3—Alignment-Based Redirection: The first contribution of this thesis is based on the observation that collision-free locomotion in VR does not require the physical and virtual environments to have globally similar layouts. That is, regions of local similarity can yield collision-free paths as long as the user's proximity to objects is similar between the physical and virtual environments. Furthermore, this similarity in proximity is only necessary in the direction of travel. Based on this observation, we designed a redirection algorithm that first quantifies the alignment of the user's state in each environment (i.e., the similarity of the user's physical and virtual positions in terms of proximity to objects) and then applies redirection to steer the user towards a physical location that is more similar to their virtual location. Interestingly, we find that by applying redirection only in proportion to the magnitude of the difference in proximity to physical and virtual objects, we achieve improved collision avoidance while steering the user with weaker redirection gains than state-of-the-art RDW algorithms, which may decrease the chance that the user feels symptoms of simulator sickness. This finding goes against a common rule of thumb that stronger redirection gains yield better collision-avoidance performance. • Chapter 4—Visibility-Based Redirection: Next, we present a novel mathematical formulation of the RDW problem based on concepts from robot motion planning. In particular, we detail how the redirection problem can be described using the notions of configuration spaces (which describe the user's position and orientation in an environment) and trajectories (which are represented as an ordered set of user configurations in an environment). Building upon this framework, we use visibility polygons to represent the local structure of the environment around the user's current position.
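As a concrete, deliberately simplified illustration of this representation, the sketch below approximates a visibility polygon by casting evenly spaced rays against segment obstacles. Exact visibility-polygon algorithms instead cast rays through obstacle vertices; none of these names come from the dissertation's implementation.

```python
import math

def ray_segment_hit(origin, angle, seg):
    """Distance along a ray to a line segment, or None if the ray misses it."""
    ox, oy = origin
    dx, dy = math.cos(angle), math.sin(angle)
    (x1, y1), (x2, y2) = seg
    ex, ey = x2 - x1, y2 - y1
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:                              # ray parallel to segment
        return None
    t = ((x1 - ox) * ey - (y1 - oy) * ex) / denom       # distance along the ray
    u = ((x1 - ox) * dy - (y1 - oy) * dx) / denom       # position along the segment
    if t >= 0.0 and 0.0 <= u <= 1.0:
        return t
    return None

def visibility_polygon(origin, segments, num_rays=64, max_dist=20.0):
    """Approximate the visibility polygon by casting evenly spaced rays.

    Returns one boundary point per ray: the nearest obstacle hit, or a point
    at max_dist if no obstacle blocks that direction.
    """
    points = []
    for i in range(num_rays):
        angle = 2.0 * math.pi * i / num_rays
        dist = max_dist
        for seg in segments:
            t = ray_segment_hit(origin, angle, seg)
            if t is not None and t < dist:
                dist = t
        points.append((origin[0] + dist * math.cos(angle),
                       origin[1] + dist * math.sin(angle)))
    return points
```

For a user standing in the middle of a square room, every returned boundary point lies on a wall, which is exactly the kind of "local free space" summary the redirection algorithm can reason about.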
We show that this representation is useful for computing regions of space where a collision is possible without needing to know the user's future trajectory, meaning no trajectory prediction is required. Using our mathematical framework and visibility polygons, we develop a new redirection algorithm that achieves improved results over state-of-the-art algorithms (including our alignment-based algorithm described in Chapter 3) in both static and dynamic environments. • Chapter 5—Distractor-Based Redirection: In this chapter, we present a natural walking interface for VR that integrates distractors, elements of the virtual environment that capture the user's attention [148], to help guide the user away from collisions with physical objects. Since VR is an interactive technology, users often interact with elements of their virtual surroundings (usually virtual agents or objects) as part of their virtual experience. Based on this observation, we designed a locomotion framework that directly uses distractors as a guiding agent to steer the user away from imminent physical collisions in a naturalistic way that does not interfere with their virtual experience (either through overt reorientation interventions or motions injected into the virtual camera). Our framework functions by computing safe zones of the physical environment, which are regions of the physical space that the user is able to safely navigate to and that are not likely to lead to a collision in the near future. Once an appropriate safe zone has been computed and the user interacts with a virtual distractor, our system generates a distractor behavior that is natural (within the context of the virtual experience) and will guide the user towards the safe zone as long as the user continues to interact with the distractor. We implement a simplified version of our framework and demonstrate its effectiveness compared to a redirection system with naïve distractor behavior.
Furthermore, we study how changes to the distractors' behavior impact our ability to guide the user away from collisions with physical objects. • Chapter 6—Quantifying Environment Navigability for Natural Walking in Virtual Reality: One challenge for researchers who study virtual reality locomotion is that it is difficult to predict how easily a user will be able to explore a given virtual environment without conducting a user study. In this chapter, we develop, for the first time, a metric that approximates the navigability of a pair of physical and virtual environments based purely on the geometric layout of the environments (i.e., our metric does not require users' navigation trajectories as input). Our metric is based on the observation that natural walking behavior is largely determined by the structure of the user's local surroundings. Therefore, a method that can sample and quantify the similarity of locations in the physical and virtual environments will likely be correlated with the ease of collision-free locomotion in those environments. We present details on how such a metric can be computed, taking inspiration from geometric shape similarity metrics and robot navigation, and show through extensive user studies and simulations that our metric is correlated with how far a user is able to walk in an environment before they incur a collision with an unseen physical obstacle. • Chapter 7—Perceptual Sensitivity and Physiological Signals of Tolerance to Redirection: When studying redirected walking systems, it is important to consider not only the design of redirection algorithms but also the user's subjective perceptual sensitivity to the rotations and translations that RDW introduces into their virtual camera motion.
A crucial consideration when deploying a RDW algorithm is ensuring that it does not apply redirection gains so strong that the user notices the injected motions and feels symptoms of simulator sickness, which will detract from their virtual experience. Traditional methods to estimate this sensitivity to injected motions are based on psychophysics and often entail long measurement processes that are tiring for the user and cannot be employed while the user is in a virtual experience (e.g., while the user is experiencing a virtual job simulator). In this chapter, we conduct a study that measures sensitivity to RDW rotation gains under light and dark conditions and correlates users' sensitivity with patterns in their physiological signals. In particular, we show that increased rotation gains are positively correlated with postural and gaze instability. This finding opens the door to using physiological signals as a measure of user comfort during redirected locomotion, potentially bypassing the long measurement process required by traditional psychophysical methods. • Chapter 8—Conclusion, Limitations, and Future Work: In this chapter, we summarize the results, discuss their limitations, and outline avenues for future work in this area. The overall goal of this thesis is to improve our understanding of the dynamics of natural walking in virtual reality and to develop algorithms and metrics that improve users' ability to explore virtual environments using natural walking. To this end, we introduce new algorithms for natural walking in VR, develop new mathematical tools for thinking about and analyzing virtual locomotion, and provide new insight into the relationship between perception and physiology during virtual locomotion.
However, our work has limitations relating to the use of simulation-based methods and how likely some of our results are to generalize to real users, the computational costs of our navigability metric, and the generalizability of the physiology results to more representative natural walking scenarios and tasks. Future work in this area should aim to address these limitations by building more complex models of human locomotion and perception, developing data-driven methods for estimating environment navigability, and conducting large-scale user studies to better understand the success and failure cases of our locomotion interfaces. Chapter 2: Background In this chapter, we provide a high-level overview of the three main areas of research that this dissertation builds upon. We discuss how human perception contributes to a person's ability to understand and interact with their environment (Section 2.1), the interfaces that have been developed to enable users to explore VEs (Section 2.2), and the mathematical framework that roboticists have created to develop rigorous robot navigation algorithms (Section 2.3). 2.1 Human Perception The human perceptual system is responsible for organizing, identifying, and interpreting the sensory information that is received by an observer's sensory system [237]. Virtual reality experiences are highly influenced by the user's perceptual system. When in VR, the user perceives stimuli that mostly come from the VR system. That is, VR system developers have direct control over a large portion of the information perceived by users. Because of this, it is important to understand how users react to different perceived stimuli so that we can create enjoyable and effective virtual experiences. Examples of stimulus types include visual, auditory, haptic, proprioceptive, and vestibular signals, each of which is processed by a respective perceptual system of the observer.
During the process of perception, an observer's brain must integrate the information from all of these different perceptual systems to come to a conclusion about the state of their surroundings, a process known as multisensory integration [186]. In VR, it is not uncommon for information about the user's surroundings from different perceptual systems to disagree. For example, a user playing a game in VR might see stimuli that correspond to a jungle environment but simultaneously overhear the sounds of a cooking appliance in a nearby kitchen, which provides auditory information that contradicts the visual indication that the user is in a jungle. This situation is known as sensory conflict, and it can decrease a user's feeling of presence within a virtual experience. 2.1.1 Visual Perception Although human perception is a multisensory experience, the rest of this dissertation focuses primarily on visual perception and its intersection with locomotion, since visual stimuli are usually the primary channel through which a user experiences VR. Indeed, one of the primary techniques that this work uses, called redirected walking (RDW), only works because humans tend to respond more strongly to visual stimuli than non-visual stimuli (a phenomenon known as visual dominance [152]). However, as we will briefly discuss in Section 7.5, a multisensory view of locomotion in VR will likely be necessary in order to make notable progress in understanding human locomotion in VR. Visual perception refers to the brain's interpretation of an environment through the eyes. It is an important part of how observers understand their surroundings. Within VR, the visual stimuli a user perceives come from the head-mounted display (HMD). The quality of the stimuli depends on HMD factors including refresh rate, display resolution, and field of view (FOV). FOV is the observable space an observer can see through their eyes or viewing device.
FOV is of particular interest in this thesis, since differences in FOV have been shown to influence observers' locomotion patterns. Visual perception is crucial to virtual experiences, so we now discuss some of its important facets and how they interact with locomotion. 2.1.1.1 Optical Flow Optical flow refers to the pattern of perceived motion of the surrounding environment that is projected onto the observer's retina. Optical flow patterns serve as a visual signal of self-motion. Numerous studies have shown that optical flow influences the observer's locomotion control depending on its speed and direction [13, 145, 221]. When the observer's non-visual movement signals conflict with their visual movement signals (namely optical flow), the brain prioritizes the visual signals. That is, when the observer determines their current motion, they are more likely to believe visual information than non-visual information if the two provide conflicting cues of self-motion [16, 114]. 2.1.1.2 Vection Vection is the illusory impression of self-movement provided by visual stimulation [78, 188]. It is typically felt when the observer visually perceives a moving environment, but their body moves in a manner that would not produce the perceived optical flow patterns. Because vection is most often induced by visual stimuli, it is closely tied to the perceived optical flow. A common example of vection is the feeling of movement when an observer sits stationary in a train and watches a neighboring train move. It is known that peripheral stimulation plays an important role in perceiving optical flow patterns [145]. Thus, we can infer that peripheral stimulation, of which VR provides a considerable amount, plays an important role in the degree of vection felt by the observer. In fact, many studies have demonstrated that optical flow perceived in the periphery increases feelings of vection [24, 89, 224].
However, it should be noted that there is evidence of vection when foveal, and not peripheral, stimulation is present [220]. 2.1.2 Simulator Sickness Simulator sickness is the feeling of motion sickness experienced when using a VR system. Users who experience vection commonly experience simulator sickness as well, though simulator sickness can also arise in VR applications more generally. Simulator sickness decreases the usability of VR and can deter people from wanting to experience VR more than once. The exact cause of simulator sickness is not known, but the main theory argues that conflict between visual, proprioceptive, and vestibular stimuli is the cause [108]. Hettinger et al. [78] strengthened this theory by providing data suggesting that simulator sickness is a product of vection. It has been noted that FOV influences simulator sickness—specifically, a smaller FOV has been shown to reduce the amount of simulator sickness users experience [53, 123]. A study by Fernandes et al. [62] further explored how FOV influences simulator sickness: they dynamically changed the FOV in VR using what they refer to as FOV restrictors, and concluded that changing the FOV based on visually perceived motion makes users feel more comfortable during their VR experiences [62]. 2.2 Virtual Reality Locomotion Interfaces Navigation consists of two processes: wayfinding and locomotion. Wayfinding is the process of determining the route through an environment that an agent (in our case, a human) must travel to go from their starting location to their goal destination [48]. Locomotion refers to the low-level, mechanical process of how an agent travels along the route determined in the wayfinding step in order to reach its destination (e.g., walking, flying, driving). Locomotion in VR is essential for exploring VEs and delivering an interactive experience.
A lack of support for locomotion within the VE may reduce feelings of presence and, in turn, make VR less effective [179]. Human gait features a wide range of movements such as walking, running, skipping, and waddling. A good locomotion interface must support these motions while also accounting for a variety of physical space shapes and user dimensions. Supporting such a variety of movements is a challenge for VR systems. In this section, we discuss the advantages and disadvantages of different locomotion interfaces. A locomotion interface is a device and/or software that allows a user to travel in a virtual environment. Ideally, a locomotion interface should allow the user to naturally walk¹ (or perfectly mimic the sensations felt when one really walks), be easy to understand, and require minimal extra hardware or setup. A number of different locomotion interfaces have been proposed, prototyped, and evaluated. Some well-known interfaces include joystick controls, omnidirectional treadmills [97], powered shoes [99], moveable tiles [98], and redirected walking [158]. Different locomotion interfaces may be undesirable in different situations because they do not meet all the criteria of an ideal locomotion interface. Suboptimal locomotion interfaces are usually unsatisfactory because they involve unwieldy hardware or lack the vestibular or proprioceptive feedback present during real walking (e.g., a treadmill). Of the locomotion interfaces that have been studied, interfaces that utilize redirection techniques (RTs) are especially appealing since they allow users to walk naturally while exploring a VE.

¹“Naturally walk” refers to step-driven locomotion that does not use treadmills or other mechanical devices, makes use of the entire gait cycle, and, ideally, is perceived by the user as identical to how they walk in the real world while not in VR [116, 188].
RTs allow users to explore VEs that are larger than the tracked workspace by manipulating the user's path in the virtual environment [140]. It has been shown that natural walking is the most intuitive and beneficial locomotion technique in VR, as it improves users' sense of presence [209], memory, and performance [85, 149, 162]. As a result of the numerous benefits real walking offers, researchers have invested considerable effort into developing and understanding locomotion interfaces that support real walking. 2.2.1 Natural Walking in Virtual Reality Standard VR systems do allow users to walk around during a virtual experience, but only within the tracked space. Movement outside the workspace borders will not be tracked by the system's sensors, so the visual scene displayed on the HMD will not update according to the user's movements. Thus, the size of the VE that a user can explore is limited to the size of the tracked space. To support real walking and increase the size of the explorable VE, we can employ RTs. A multitude of redirection techniques have been developed [27, 94, 158, 194], which has prompted researchers to classify RTs based on their implementation-specific characteristics. Suma et al. distinguished between redirection techniques based on the conspicuousness (overt or subtle) and continuity (discrete or continuous) of their implementations [195]. Subtle and continuous techniques are preferred because they have been reported to create fewer breaks in presence. However, depending on the user's projected path and position in the workspace, we cannot always rely on such techniques to keep users in the tracked workspace. In these situations, redirection systems may need to fall back on more overt techniques to ensure the user's safety [140, 195]. 2.2.2 Redirected Walking One popular subtle and continuous RT that enables natural walking in VR is redirected walking (RDW) [158].
RDW involves imperceptibly manipulating the VE via rotations and translations so that a user subconsciously adjusts their real-world position to remain on their intended virtual path. Using this technique, we can steer users away from the tracked-space edges while still giving them the benefits of real walking in the VE, reducing the number of breaks in presence caused by reaching the bounds of the tracked space. For example, if no redirection is applied, a user will physically rotate by 180° when they want to turn 180° in the VE. If redirection is applied such that some real-world rotation results in a larger rotation in the VE, the user will turn until their position in the VE has rotated 180°, but the physical rotation will be less than 180°. We can also redirect such that a physical rotation results in a smaller virtual rotation. When implemented carefully and kept small enough, this discrepancy between the physical and virtual movements is imperceptible to the user. Similar transformations can be applied to a user's walking path. When the user walks on a straight path, we can translate the VE in the direction opposite to the user's walking direction, which results in a virtual displacement that is larger than the user's physical displacement. We can also rotate the VE while the user walks to force the user to follow a curved path in the real world. Depending on the strength and direction of the rotation, this will steer the user's real path away from the edges of the tracked space. See Figure 2.1 for a diagram that explains how RDW manipulates the VE. This thesis is only concerned with rotations of the VE when the user is standing in place. 2.2.2.1 Limits of Redirection By applying RDW, users are able to walk naturally and explore VEs larger than the tracked workspace.
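The gain-based remapping of physical to virtual rotation described above can be sketched in a few lines. This is an illustrative toy, not the dissertation's implementation; it simply treats a gain as the ratio of virtual rotation to physical rotation, as defined in Section 2.2.2.1 below.

```python
def apply_rotation_gain(physical_delta_deg, gain):
    """Virtual camera rotation produced by a physical head rotation,
    where gain = virtual rotation / physical rotation."""
    return physical_delta_deg * gain

def physical_rotation_needed(virtual_target_deg, gain):
    """How far the user must physically turn to complete a virtual turn
    under a given rotation gain."""
    return virtual_target_deg / gain
```

For instance, under a gain of 1.25, a user who wants to turn 180° in the VE only needs to rotate 144° in the real world, while a gain below 1 requires them to turn farther physically than virtually.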
However, we cannot simply amplify users' movements by a large, constant factor to maximize the size of the explorable VE without incurring negative repercussions such as disorientation or increased simulator sickness. The scaling of a user's movements must be small enough to maintain the VR application's usability and ensure the user's comfort. Thus, there exists a trade-off between redirection intensity and user experience [158]. Ideally, enough redirection is applied to maximize the explorable size of the VE while minimizing the discomfort and breaks in presence caused by manipulating the VE. The intensity of scaling applied to the VE is controlled by parameters called gains. Rotation gains increase or decrease a user's rotation in the VE relative to their real-world rotation, while translation gains increase or decrease a user's displacement in the VE relative to their real-world displacement. Curvature gains, on the other hand, cause users to walk along a curved physical path while walking on a straight virtual path. Both rotation and translation gains are expressed as a ratio of virtual motion to physical motion. A gain of 1 is applied when virtual motion is mapped 1:1 to physical motion.

Figure 2.1: Diagrams that illustrate how different RDW gains can be used to increase the size of the explorable VE. The green borders represent the real-world tracked space borders, and the purple borders represent the borders of the VE that correspond to the size of the tracked space. Arrows indicate user (green) or VE (purple) movement. (a) A translation gain allows the user to walk distances in the VE that are greater than the distance walked in the real world. (b) A rotation gain allows the user to turn a greater virtual amount compared to their physical rotation. (c) A curvature gain forces the user to walk on a curved physical path in order to walk on a straight path in the VE.
When a gain is greater than 1, the virtual movement (rotation or translation) is amplified, so the real-world movement required is smaller than the virtual movement. Similarly, when a gain is less than 1, the virtual movement is attenuated, so the real-world movement required is larger than the virtual movement. A threshold refers to the point at which an applied gain becomes noticeable, and each threshold has an associated gain value. A t% threshold with gain g means that t% of the population will believe that their virtual movements are larger than their physical movements when the gain g is applied. For example, if the 50% threshold has a gain of 1.02, then when we apply a gain of 1.02, half the population will believe that their physical and virtual movements are the same, while the other half will believe that their virtual movements are larger than their physical movements. In previous work by Steinicke et al., the threshold values of interest are users' 25% and 75% thresholds, which correspond to decreased and increased virtual rotations, respectively [187]. VE rotation is often discussed in relation to the user's physical rotation: VE rotation with the user's physical rotation direction corresponds to a real-world rotation that is larger than the virtual rotation, and VE rotation against the user's physical rotation direction corresponds to a real-world rotation that is smaller than the virtual rotation. 2.3 Motion Planning In the field of robotics, motion planning is the problem of moving a robot from an initial state to a goal state through a series of valid configurations that avoid collisions with obstacles [115]. For a robot with n degrees of freedom, its configuration space (denoted C) is an n-dimensional manifold, where each point in the manifold corresponds to a configuration of the robot. The configuration space describes the set of all states that a robot can be in.
In order to successfully navigate from a starting position to a goal position, the robot must find a set of configurations that takes it from the start to the goal. This can be formulated as finding a continuous path of valid configurations through C. Common desirable traits of such a path are that it is as short as possible and that the robot's trajectory along it is smooth, without many oscillations. Motion planning has seen great success in allowing researchers to create navigation algorithms that are rigorously defined, can provide guarantees on navigation completion, and are robust to unknown, unpredictable, and dynamic environments. There is considerable work on developing motion planning algorithms for static and dynamic environments. Search-based planners discretize the state space (the set of all possible states) and employ search algorithms to find a path from the start to the goal; an example is the A* algorithm [76]. Sampling-based planners operate by randomly sampling the configuration space in order to build a valid path. Such algorithms can usually find valid solutions quickly, but their solutions are usually not the most efficient [60]. Potential field methods use attractive and repulsive forces to guide the robot through the environment [105]. These planners are easy to implement but are susceptible to trapping the robot in local minima of the potential function. Planning algorithms may also use geometric representations, such as visibility graphs and cell decomposition, to reason about the environment, detect collisions, and compute collision-free paths [49]. Motion planning algorithms may also use optimization to handle dynamic obstacles and compute smooth trajectories [144, 156]. Optimization-based approaches are advantageous in that they can more easily handle complex, high-dimensional state spaces.
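As a concrete instance of the search-based planners mentioned above, a minimal A* search on a 2D occupancy grid (a generic textbook sketch, unrelated to any system in this dissertation) might look like the following:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (grid[r][c] == 1 means blocked).
    Returns the shortest path from start to goal as a list of cells, or None."""
    def h(cell):
        # Manhattan distance: admissible for 4-connected unit-cost moves
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    frontier = [(h(start), 0, start)]        # (f = g + h, g, cell)
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, g, current = heapq.heappop(frontier)
        if current == goal:
            path = []
            while current is not None:       # walk parents back to start
                path.append(current)
                current = came_from[current]
            return path[::-1]
        if g > cost[current]:
            continue                         # stale queue entry, skip it
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_cost = g + 1
                if new_cost < cost.get((nr, nc), float("inf")):
                    cost[(nr, nc)] = new_cost
                    came_from[(nr, nc)] = current
                    heapq.heappush(frontier,
                                   (new_cost + h((nr, nc)), new_cost, (nr, nc)))
    return None
```

Because the Manhattan heuristic never overestimates the remaining cost, the first time the goal is popped the returned path is guaranteed to be optimal, which is exactly the completeness and optimality guarantee alluded to above.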
Dynamic motion planning is the problem of computing a collision-free path in an environment with moving obstacles. A popular approach to dynamic motion planning is the use of velocity obstacles to reason about collision-free paths in terms of velocity [64, 211]. In this dissertation, we show how VR locomotion can be reframed as a special kind of motion planning (Chapter 4). We use motion planning as a kind of “language” with which we can precisely define constraints on VR locomotion and reason about its dynamics. As we will show, this allows us to develop new redirection algorithms and environment complexity metrics that would be difficult to create using the purely heuristic-based approaches that much of the prior work in the VR locomotion community has relied on. Part II Locomotion Interfaces and Metrics for Natural Walking in Virtual Reality Chapter 3: Alignment-Based Redirection Figure 3.1: A user being steered with our alignment-based redirection controller (ARC) in two different environments. In Environment 1, the virtual environment (VE) is larger than the physical environment (PE), and there is an obstacle in the northeast corner of the VE. The PE has no obstacles. In Environment 2, the VE is larger than the PE, and both have obstacles in different positions. (A) The user walks in a straight line forward in the VE. (B) In the PE, the user is steered on a curved path away from the edge of the tracked space, in order to minimize the differences in proximity to obstacles in the PE and VE. (C) The user walks in a straight line forward in the VE, with obstacles on either side of the path. (D) The user is steered on a path with multiple curves in the physical space. The user avoids a collision with the obstacle in front of them and is also steered to minimize the differences in proximity to obstacles in the PE and VE. We are able to steer the user along smooth, collision-free trajectories in the PE.
Our extensive real-world and simulation-based experiments show that in both simple and complex environments, our approach results in fewer collisions with obstacles and a lower steering rate than current state-of-the-art algorithms for redirected walking. In this chapter, we present a novel redirected walking controller based on alignment that allows the user to explore large and complex virtual environments while minimizing the number of collisions with obstacles in the physical environment. Our alignment-based redirection controller, ARC, steers the user such that their proximity to obstacles in the physical environment matches their proximity to obstacles in the virtual environment as closely as possible. To quantify a controller's performance in complex environments, we introduce a new metric, Complexity Ratio (CR), to measure relative environment complexity and characterize the difference in navigational complexity between the physical and virtual environments. Through extensive simulation-based experiments, we show that ARC significantly outperforms current state-of-the-art controllers in its ability to steer the user on a collision-free path. We also show through quantitative and qualitative measures of performance that our controller is robust in complex environments with many obstacles. Our method is applicable to arbitrary environments and operates without any user input or parameter tweaking, aside from the layout of the environments. We have implemented our algorithm on the Oculus Quest head-mounted display and evaluated its performance in environments with varying complexity. 3.1 Introduction Exploring virtual environments (VEs) is an integral part of immersive virtual experiences. Real walking is known to provide benefits to sense of presence [209] and task performance [162] that other locomotion interfaces cannot provide.
Using an intuitive locomotion interface like real walking benefits all virtual experiences in which travel is crucial, such as virtual house tours and training applications. Redirected walking (RDW) is a locomotion interface that allows users to naturally explore VEs that are larger than or different from the physical tracked space, while minimizing how often the user collides wit