ABSTRACT

Title of Dissertation: IMMERSIVE VISUAL ANALYTICS OF WI-FI SIGNAL PROPAGATION AND NETWORK HEALTH

Alexander Rowden

Dissertation Directed by: Professor Amitabh Varshney, Department of Computer Science

We are immersed in waves of information. This information is typically transmitted as radio waves in many protocols and frequencies, such as WiFi, Bluetooth, and Near-Field Communications (NFC). It carries vital information such as health data, private messages, and financial records. There is a critical need for systematic and comprehensive visualization techniques to facilitate seamless, resilient, and secure transmission of these signals. Traditional visualization techniques are insufficient at the scale of these datasets. In this dissertation, we present three novel contributions that leverage advances in volume rendering and virtual reality (VR): (a) an outdoor volume-rendering visualization system that facilitates large-scale visualization of radio waves over a college campus through real-time programmable customization for analysis purposes, (b) an indoor, building-scale visualization system that enables data to be collected and analyzed without occluding the user's view of the environment, and (c) a systematic user study with 32 participants that shows users perform analysis tasks well with our novel visualizations. In our outdoor system, we present the Programmable Transfer Function. Programmable Transfer Functions offer the user a way to replace the traditional transfer-function paradigm with a more flexible and less memory-demanding alternative. Our work on indoor WiFi visualization is called WaveRider. WaveRider is our contribution to model-based indoor WiFi visualization in a virtual environment. We designed WaveRider with the help of expert signal engineers, whom we interviewed to determine the requirements of the visualization and who later evaluated the application.
These works provide a solid starting point for signal visualization as our networks transition to more complex models. Indoor versus outdoor is not the only dichotomy in the realm of signal visualization. We are also interested in comparing visualizations of modeled data with visualizations of raw data samples. We have therefore explored designs for multiple sample-based visualizations and conducted a formal evaluation comparing them to our previous model-based approach. This analysis has shown that visualizing the data without modeling improves user confidence in their analyses. In the future, we hope to explore how these sample-based methods allow more routers to be visualized at once.

IMMERSIVE VISUAL ANALYTICS OF WI-FI SIGNAL PROPAGATION AND NETWORK HEALTH

by

Alexander Rowden

Dissertation submitted to the Faculty of the Graduate School of the
University of Maryland, College Park in partial fulfillment
of the requirements for the degree of
Doctor of Philosophy
2023

Advisory Committee:
Professor Amitabh Varshney, Chair/Advisor
Professor Joseph JaJa
Dr. Eric Krokos
Professor Nirupam Roy
Dr. Kirsten Whitley

© Copyright by Alexander Rowden 2023

Table of Contents

List of Tables
List of Figures
List of Abbreviations

Chapter 1: Introduction
  1.1 Programmable Transfer Functions
    1.1.1 Introduction
    1.1.2 Approach
    1.1.3 Results
  1.2 WaveRider: Immersive Visualizations of Indoor Signal Propagation
    1.2.1 Introduction
    1.2.2 Approach
    1.2.3 Evaluation
  1.3 Exploring Effective Immersive Approaches to Visualizing WiFi
    1.3.1 Introduction
    1.3.2 Visualizations
    1.3.3 User Study

Chapter 2: Programmable Transfer Functions
  2.1 Introduction
  2.2 Related Works
    2.2.1 WiFi Data Visualization
    2.2.2 Direct Volume Rendering
    2.2.3 Interaction in Volume Rendering
    2.2.4 Non-photorealism
    2.2.5 Multifield Data
  2.3 Data
  2.4 Programmable Transfer Functions
    2.4.1 Base Direct Volume Rendering
    2.4.2 Silhouette Shading
    2.4.3 Specular Highlight Augmentation
    2.4.4 Multi-volume Interaction
    2.4.5 Performance
    2.4.6 Interaction
  2.5 Limitations and Future Work
    2.5.1 Customized Rendering
    2.5.2 Data Analytics
    2.5.3 Visual Enhancements
    2.5.4 Multivolume Tools
  2.6 Conclusion

Chapter 3: WaveRider: Immersive Visualizations of Indoor Signal Propagation
  3.1 Introduction
  3.2 Related Work
    3.2.1 Embedded Data Representations
    3.2.2 WiFi Visualization
    3.2.3 Line Integral Convolution
    3.2.4 Textons
  3.3 Use Cases
    3.3.1 Localization
    3.3.2 Signal Coverage
    3.3.3 Interference Potential
    3.3.4 Signal Awareness
  3.4 Visualizations
    3.4.1 Data Representation
    3.4.2 Contour Lines
    3.4.3 Layered LIC
    3.4.4 Max LIC
    3.4.5 Frequency Textons
    3.4.6 Virtual Reality Implementation
    3.4.7 Augmented Reality Prototype
    3.4.8 Data Collection
  3.5 Expert Feedback
    3.5.1 Immersive Design Impact
    3.5.2 Localization Visualizations
    3.5.3 Frequency Visualizations
  3.6 Limitations and Future Work
  3.7 Conclusion

Chapter 4: Exploring Effective Immersive Approaches to Visualizing WiFi
  4.1 Introduction
    4.1.1 Research Questions
    4.1.2 Contributions
  4.2 Related Works
    4.2.1 Situated Visualization and Immersive Analytics
    4.2.2 Overdraw and Occlusion
    4.2.3 WiFi Visualization
    4.2.4 Ranking Visualization
  4.3 Analysis Tasks
    4.3.1 Localization
    4.3.2 Ranking
    4.3.3 Coverage
    4.34 Interference
  4.4 Visualization Design
    4.4.1 Segmented and Oriented Glyphs
    4.4.2 Novel Visualizations
    4.4.3 Contour Lines
  4.5 User Study
    4.5.1 Study Design
    4.5.2 User Interaction
    4.5.3 Questionnaire
    4.5.4 Hypotheses
  4.6 Results
    4.6.1 Demographics
    4.6.2 Accuracy
    4.6.3 Time
    4.6.4 Confidence
    4.6.5 Questionnaire Results
  4.7 Conclusion and Future Work

Chapter 5: Conclusion and Future Work
  5.0.1 Future Work

Bibliography

List of Tables

1.1 PTF Frame-Timings
2.1 PTF Performance Analysis
3.1 Visual Encoding - Contour Lines
3.2 Visual Encoding - Layered LIC
3.3 Visual Encoding - Max LIC
3.4 Visual Encoding - Textons
4.1 Quantitative Results of User Study comparing Stacked Bars and Wavelines to Contour Lines

List of Figures

1.1 Full-campus rendering of Programmable Transfer Functions
1.2 Explanatory image for Programmable Transfer Functions
1.3 Programmable Transfer Function examples
1.4 Screenshot of WaveRider
1.5 Screenshot of WaveRider in VR with LIC
1.6 Screenshot of the AR WaveRider Prototype
1.7 Two novel light-weight visualizations
2.1 PTF Teaser Image
2.2 PTF Visual Explanation
2.3 Rendering Pipeline
2.4 Depth Mask
2.5 Silhouette Shading
2.6 Specular Highlight
2.7 Intersection Shading for Multiple Volumes
2.8 Ablation
2.9 IMGUI Interface
3.1 WaveRider Teaser with Contour Line Visualization
3.2 State-of-the-art WiFi Visualization Tools
3.3 Contour line Visualization
3.4 Color Blending Drawbacks for Layered Monochromatic Heatmaps
3.5 Contours with text rendering
3.6 Layered LIC Visualization
3.7 Novel Screen Space LIC Algorithm
3.8 Max LIC Visualization
3.9 Texton Visualization of Router Frequency Configuration
3.10 The Inspector Tool
3.11 AR Implementation
3.12 Laptop Configuration Used for Data Collection
4.1 Teaser for Exploring Effective Immersive Approaches to Visualizing WiFi
4.2 Related Works showing State-of-the-Art
4.3 Waveline Example
4.4 Stacked Bars Example
4.5 Contour line example
4.6 User Study Demographics
4.7 P-Values for User Study
4.8 Distribution of 2D error (distance) from localization task
4.9 Accuracy distributions for Ranking, Coverage, and Interference Tasks
4.10 Time-to-Completion Distributions for all tasks
4.11 User-reported confidence in their analyses for each visualization-task pair
4.12 Qualitative Feedback from User Study
List of Abbreviations and Initialisms

AR     Augmented Reality
BSSID  Basic Service Set Identifier
CAVE   Cave Automatic Virtual Environment
CPU    Central Processing Unit
CSV    Comma-Separated Values
CT     Computed Tomography
dBm    Decibel-milliwatts
FPS    Frames Per Second
GPU    Graphics Processing Unit
GUI    Graphical User Interface
HMD    Head-Mounted Display
LIC    Line Integral Convolution
MAC    Media Access Control
MRI    Magnetic Resonance Imaging
PTF    Programmable Transfer Function
RAM    Random Access Memory
RF     Radio Frequency
SSID   Service Set Identifier
SUS    System Usability Scale
TLX    Task Load Index
VR     Virtual Reality
WiFi   Wireless Fidelity
WLAN   Wireless Local Area Network

Chapter 1: Introduction

Our most sensitive information and urgent communications travel wirelessly over the radio frequency (RF) spectrum. When doctors pull up patient information, they often do so wirelessly using WiFi. Secure facilities often use passive Radio Frequency Identification (RFID) to protect their spaces. Ensuring that these networks are healthy, fast, constantly available, and secure is therefore vital. In our work, we use WiFi as our source of RF signal data but strive for generalizability to other RF signal sources.

Signal data is notoriously difficult to visualize comprehensively: signals propagate unevenly in three dimensions, and sensors are noisy. Users of these visualizations need to understand the context of the signal in the physical environment and its relationship to the other communications around it. Immersive technology, such as virtual reality (VR), allows one to visualize signals more effectively than traditional displays because it displays data directly in its environmental context. This allows analysts to make decisions holistically and develop insights that would otherwise be difficult to reach, and it lets users interact with the data in three dimensions, in their natural environment.
The visualization of large datasets is a significant challenge in the field of visual analytics. Signal data falls under this category: signal datasets often contain thousands of multidimensional data points covering several routers, and each router has many attributes a user may be interested in, such as its frequency, service set identifier (SSID), basic service set identifier (BSSID), and security capabilities. Thus, the challenges of visualizing large datasets apply to the visualization of RF signals. For instance, we must tackle the issues of overdraw, environmental and self-occlusion, and visualizing multiple data series. Overdraw occurs when glyphs overlap each other. Occlusion is a related challenge, in which glyphs block the user from seeing additional information. In environmental occlusion, the visualization interferes with the user's ability to see the environment; in self-occlusion, one part of a visualization blocks the user's view of another part. Finally, there is the more general issue of visualizing multiple data series, like overlapping RF access points.

Large outdoor areas, such as campuses and smart cities, are challenging for a network administrator to manage. Due to their scale, testing the network health at every location in the region is physically and mentally demanding. Further, ensuring that the most critical areas have enough bandwidth without causing interference is also vital. Our first work [1], which we present in Chapter 2, tackles this challenge. We developed a large-scale visualization using a specialized form of volume rendering built on a novel transfer function. Our work allows the lightweight computation of multiple large-scale datasets while allowing the user to customize the visualization for their analysis.

Indoor areas are also challenging for network visualization.
As indoor environments are more closed off, occlusion becomes an even more significant issue. With that in mind, we developed a new visualization strategy, which we present in Chapter 3. In this work, we created an application called WaveRider [2] to visualize a pre-collected dataset in a digital twin of the environment. This allowed us to focus on the visualization design rather than the complex challenge of implementing a real-time AR system. We worked with a team of signal experts to develop a set of requirements for an indoor visualization. Following that, we developed four visualizations and packaged them in WaveRider. Our experts determined that our visualizations were intuitive and a significant improvement on the state of the art.

The WaveRider visualizations were intuitive and allowed users to localize the router efficiently. However, they were computationally expensive, visually dense, and relied on an RF propagation model. Combined, this meant that while users could obtain a solid understanding of a few routers based on a computational model, they could not easily survey a space containing many routers. Additionally, any analysis using these visualizations would depend heavily on the propagation model. Chapter 4 presents our work tackling these problems with two more visualizations, designed to be computationally lightweight and to require no propagation model [3]. We then conducted a user study with 32 participants to compare these visualizations to the state of the art on four common analysis tasks. Our novel visualizations compared favorably, matching or exceeding the state of the art in every case.

1.1 Programmable Transfer Functions

1.1.1 Introduction

Large outdoor networks are common and vital to our world. The cellphone network is one such network, where coverage is sometimes of life-and-death importance. Ensuring that our outdoor networks are secure, efficient, and have their intended range is crucial.
Here, we present a summary of how we tackled this challenge; details can be found in Chapter 2. In this work, we developed a novel transfer function for volume rendering, which we employed to visualize the networks on the campus of the University of Maryland. Our technique allows lightweight and customizable renderings of any three-dimensional scalar-field dataset.

Figure 1.1: An example of the full-campus rendering we achieved using our programmable transfer function. Here, we show three networks with different properties. Also included in the screenshot are our GUI, which allows the user to customize the rendering; to-scale models of the campus buildings; and the campus plant inventory, which increases user immersion and aids in user localization.

1.1.2 Approach

To visualize these large-scale networks, we decided to model them as three-dimensional volumes and render them using volume rendering. Modeling the networks this way was a natural choice, as the RF signals propagate in three-dimensional space. To limit the occlusion caused by the system, we render the volumes as thin transparent isosurfaces. Isosurfaces are parts of a three-dimensional scalar field where all points have the same scalar value. This allows viewers of our visualization to see all points in a router's coverage that have the same signal strength, giving a very intuitive grasp of the shape of a router's signal propagation. To store our large datasets while retaining the customizability of the multidimensional transfer functions that have become standard in volume rendering, we decided to reinvent the transfer function.

Figure 1.2: Visualization of the trade-off between single-dimensional, two-dimensional, and programmable transfer functions.

Traditional transfer functions use large look-up tables, which must be computed on the CPU and then transferred to and stored on the GPU for rendering.
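To make the look-up-table cost concrete, the sketch below compares the GPU memory a dense, precomputed RGBA table needs as its resolution and dimensionality grow, against a transfer function expressed purely as code. The resolutions and the procedural color map are hypothetical illustrations, not the parameters used in Chapter 2.

```python
def lut_bytes(resolution, dims, channels=4, bytes_per_channel=4):
    """Memory for a dense transfer-function look-up table:
    one RGBA float entry per cell of a dims-dimensional grid."""
    return (resolution ** dims) * channels * bytes_per_channel

# A 1D table is cheap, but each added dimension multiplies the cost:
print(lut_bytes(1024, 1))  # 16384 bytes (16 KiB)
print(lut_bytes(1024, 2))  # 16777216 bytes (16 MiB)
print(lut_bytes(1024, 3))  # 16 GiB -- beyond typical GPU memory budgets

# A programmable transfer function stores only shader code; its per-sample
# cost is computation. A procedural map from scalar value to RGBA might be:
def programmable_tf(value):
    r, g, b = value, value * value, 1.0 - value
    alpha = 0.05 if 0.45 < value < 0.55 else 0.0  # thin isosurface band
    return (r, g, b, alpha)
```

The point of the sketch is the scaling law: a look-up table's footprint grows exponentially with dimensionality, while the code-based function's footprint is constant regardless of how many inputs it consumes.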
Such tables become a heavy burden on GPU memory for high-quality renderings as the resolution and dimensionality of the function grow. We therefore replaced these burdensome tables with customizable shader code. This trade-off is visualized in Figure 1.2. Programmable Transfer Functions benefit from the advent of exceptionally fast Graphics Processing Units (GPUs), as they replace the memory overhead with added computation. This trade-off also gives us several additional advantages. First, we no longer need to calculate the function ahead of time: transfer functions can be created and modified on the fly without incurring the additional expense of sending data from the CPU to the GPU. Second, GPU memory is increasingly in demand as environmental models and textures become more detailed in modern rendering practice. Removing the memory burden also makes room for more complicated and higher-dimensional transfer functions. Multidimensional transfer functions often reveal more information from volumes but come at a high memory cost; with programmable transfer functions, the cost of adding a dimension is negligible.

1.1.3 Results

Figure 1.3: Examples of the customization that PTFs can provide.

To evaluate the usability of our programmable transfer functions, we implemented a few custom shaders to show off the utility and versatility they offer. These transfer functions demonstrated different ways PTFs are advantageous over traditional transfer functions. Our first custom PTF uses the normal of the isosurface and the direction from the light to the surface to add opacity to the volume in the region of the specular highlight. This allows users to more easily see the curvature of the volume, since the specular highlight is often washed out on thin, translucent surfaces, and thus gives them a more intuitive understanding of its geometry.
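A minimal sketch of this specular-augmentation idea, written as plain Python rather than the actual shader code from Chapter 2: a Blinn-Phong-style specular term, computed from the surface normal and the light and view directions, is added to the isosurface's base opacity so the highlight region remains visible. The shininess value is an illustrative assumption.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def specular_augmented_alpha(base_alpha, normal, to_light, to_viewer,
                             shininess=32.0):
    """Add a Blinn-Phong-style specular term to a thin isosurface's opacity,
    so the highlight region stays visible on a translucent surface."""
    half = normalize(tuple(l + v for l, v in
                           zip(normalize(to_light), normalize(to_viewer))))
    spec = max(dot(normalize(normal), half), 0.0) ** shininess
    return min(base_alpha + spec, 1.0)

# Light and viewer head-on: the half-vector equals the normal, full boost.
print(specular_augmented_alpha(0.05, (0, 0, 1), (0, 0, 1), (0, 0, 1)))  # 1.0
```

At grazing configurations the specular term vanishes and the surface keeps its faint base opacity, which is what lets the highlight read as a localized cue for curvature.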
This specular-augmentation PTF highlights the versatility of the technique because it can utilize additional variables and dimensions, such as the light direction. Similarly, we created a PTF that increases the isosurface's opacity at its boundaries, which we call Silhouette Shading. This technique requires access to the surface normal and the direction from the viewer to the shaded position. Finally, to highlight PTFs' ability to enable specialized rendering for multiple scalar fields, we created a PTF that highlights the intersection between two volumes. The results of these PTFs can be seen in Figure 1.3.

                      Base  Specular  Silhouette  Spec+Silh  Intersection  All 3
1 Volume (interior)    105       105         105        105            NA     NA
1 Volume (exterior)    181       181         181        181            NA     NA
5 Volumes (interior)    85        85          85         85            85     85
5 Volumes (exterior)   116       116         116        116           116    116

Table 1.1: The frame rates (frames per second) of our example Programmable Transfer Functions on one volume and five volumes, recorded both inside and outside the volumes.

We also conducted two additional tests of their usability. In one, we tested the Programmable Transfer Function's practical utility by recording frame timings under various conditions to show that these renderings achieve real-time interactive rates (see Table 1.1). We also developed a VR experience that allowed users to walk and move around the campus and be immersed in the campus's networks. We believe that such an outdoor visualization can increase the quality of our large-scale networks by giving network administrators and planners ways to evaluate network coverage and see how the signals interact with their environment.

Figure 1.4: Screenshot of the desktop version of WaveRider showing several routers simultaneously visualized in an indoor environment.

1.2 WaveRider: Immersive Visualizations of Indoor Signal Propagation

1.2.1 Introduction

Indoor signal visualization is arguably even more important than outdoor visualization.
Many of our most vital systems exist in our home and work environments or in places like hospitals and courtrooms. Visualizing networks in these environments poses its own set of issues; namely, how does one visualize so much data without blocking the user's line of sight to critical environmental features? Such occlusion would not only impede an analyst's ability to make decisions about the network based on its environment, but also limit the utility of the visualization in Augmented Reality (AR), where virtual objects are placed on top of the real world. AR visualizations are increasing in popularity, partly due to the mass commercial availability of headsets and because they allow users to augment their view of the world without hindering their ability to perceive it.

Since routers must be placed close together to ensure dense coverage and high bandwidth in an indoor context, routers are more likely to interfere with one another. There are two kinds of frequency interference. The first is adjacent-channel interference, where routers whose frequency bands overlap can cause each other's signals to modulate. This modulation causes packet loss and slow communication. Adjacent-channel interference is uncommon, however, as most routers are configured on non-overlapping frequency bands, and frequency modulation is often recoverable. The other form is co-channel interference, in which multiple routers communicate on the same frequency band. In this case, a router must wait to communicate until the band is clear, which can cause long stalls, and if a router's communication is interrupted by another router transmitting on its frequency, it will likely need to be repeated. This interference is common and a frequent source of network inefficiency, so our visualization must account for it.

Here, we present a summary of Chapter 3, where we introduce WaveRider.
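Before that, the two interference types above can be distinguished mechanically from channel assignments. The sketch below uses the standard 2.4 GHz channel plan (channel 1 centered at 2412 MHz, 5 MHz channel spacing, roughly 20 MHz-wide signals); it is an illustration of the taxonomy, not detection logic from WaveRider.

```python
def center_mhz(channel):
    """Center frequency of a 2.4 GHz WiFi channel (1-13)."""
    return 2412 + 5 * (channel - 1)

def classify_interference(ch_a, ch_b, width_mhz=20):
    """Classify the interference relationship between two 2.4 GHz routers."""
    if ch_a == ch_b:
        return "co-channel"        # share a band; must take turns transmitting
    if abs(center_mhz(ch_a) - center_mhz(ch_b)) < width_mhz:
        return "adjacent-channel"  # overlapping bands can modulate each other
    return "none"

print(classify_interference(6, 6))  # co-channel
print(classify_interference(6, 8))  # adjacent-channel (centers 10 MHz apart)
print(classify_interference(1, 6))  # none (centers 25 MHz apart)
```

This is also why the commonly recommended channels 1, 6, and 11 coexist without adjacent-channel overlap: their centers sit 25 MHz apart, wider than the signal itself.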
WaveRider is the application we created that introduces several visualization techniques to the world of signal visualization and fuses them to tackle the tasks required for indoor signal analysis. We also provide an evaluation by signal experts and offer directions for the future of indoor signal visualization.

1.2.2 Approach

To develop WaveRider, we spoke with professional network engineers to establish the requirements for an indoor signal visualization. Using those requirements, we gathered visualization techniques from other areas, such as the visualization of multiple overlapping scalar fields and fluid flow, to confront the challenge from a new perspective. In the end, we developed four visualizations: three tackle the challenge of visualizing signal strength and router coverage, while one lets the user visualize the frequency each router utilizes. We fused these visualizations by using the natural segmentation of the walls and floor. Users view the signal data on the walls, allowing them to view the signals in three dimensions, while the floor provides supplementary frequency information, should they need it.

Figure 1.5: Screenshot of WaveRider visualizing a single router using Line Integral Convolution in Virtual Reality.

To visualize our data, we decided to model it using Principal Component Analysis. We chose to model our data so that the user could see the signal level at locations between data samples and to avoid glyph-based visualizations, which would overdraw, as the routers share the same set of sample locations. This gave us ellipsoids we could visualize instead of raw data. We use ellipsoids because they are simple to understand and do a fair job of fitting our data. Nevertheless, to ensure WaveRider's utility, we developed our visualizations to be model-agnostic. The only requirement on the router model is that a function must exist mapping a world position to a signal strength and a direction to the router.

1.2.2.1 Contour Lines

With our model in place, we were free to develop our visualizations. The first visualization we created was contour lines, based on topographical maps. In contour lines, the wall is shaded at fixed signal strengths as the router's signal decays, producing a series of concentric ellipses on each wall. The router's signal strength is conveyed by the thickness of the contour lines, their transparency, and the number of contours to the center; the direction to the router's source is conveyed by the curvature of the ellipses. An example of the contour lines can be seen in Figure 1.4.

1.2.2.2 LIC

We designed an alternative visualization based on the fluid-flow visualization technique line integral convolution (LIC). LIC is used for visualizing vector fields; we use it to visualize the field of vectors that point to the router's location. LIC produces an image consisting of lines pointing in the direction of the vector field, allowing the user to navigate directly to the signal source without needing to estimate the direction themselves. The signal strength is encoded in the transparency of the lines. An example of LIC can be seen in Figure 1.5. This LIC technique works for one router.

We developed two techniques for visualizing multiple routers. The first is called Layered LIC, where separate LIC images are composited on top of one another to produce the final visualization. This visualization allows the user to see multiple routers at every position, but it suffers from visual clutter and overlap as the number of routers increases. It also suffers from performance issues, as each router's LIC image must be calculated individually. The other technique is called MaxLIC, in which we produce a single LIC image.
Instead of visualizing the vector to one router, we visualize the vector to the router with the highest signal strength in the visualized router set. This technique has a few interesting properties. Firstly, it allows for constant computation time: no matter how many routers we visualize, we only need to compute one LIC image. Another benefit is that MaxLIC produces a boundary line where two routers are equal. This feature has analytical value, as it highlights the region where two routers have similar strengths, which is where they are most likely to interfere. MaxLIC does, however, have the disadvantage that by looking at a single segment of the wall, a user will not know how many routers, or which routers, are present, as only one router is shown at any single point.

1.2.2.3 Textons

Our final visualization technique comes from the visualization of multiple scalar fields and is called Oriented Texture Slivers. Oriented texture slivers are a variety of textons, the base units of preattentive texture perception, which have been used to visualize categorical data. Textons are useful in visualization because a user can easily distinguish between different textons without mental overhead. Oriented texture slivers are textons consisting of thin lines pointing in different directions, and the orientation of these lines can be used to encode categorical data. We, however, modify this usage: our slivers are oriented based on the frequency of the router. We encode further data by assigning each texton a color that identifies the router it is associated with. Additionally, to reduce visual clutter and highlight routers operating on the same frequency, we group textons oriented in the same direction onto a single line. An example of textons can be seen in Figure 1.4.

1.2.3 Evaluation

Once we created WaveRider, we immersed our experts in the VR version of the application to gather their feedback on each of our visualizations.
We guided each expert through the visualizations and asked them to complete tasks such as localization and determining whether routers could interfere with each other. During the application demo, we encouraged the experts to ask questions and asked whether they wanted to tweak the visualization. After demonstrating the application, we took each expert through a semi-structured interview to gather their feedback, their suggestions for improving WaveRider, and the additional features they would be interested in. The experts were satisfied with the utility of our visualizations and were surprised by their intuitiveness.

Figure 1.6: Screenshot of WaveRider visualizing a single router using Line Integral Convolution in Virtual Reality

Figure 1.7: The novel visualizations created for lightweight surveying of large indoor environments. (a) Wavelines: a visualization that emphasizes localization and estimating exact signal-strength values; it may contain large areas of overlap. (b) Stacked Bars: a visualization that emphasizes the ranking of signal strengths across multiple routers, designed to contain minimal overlap.

We also implemented an AR prototype of WaveRider to evaluate how WaveRider's design would transfer to the new medium. This prototype highlighted that small details in a visualization do not come through as well in AR applications due to the complex textures and dynamics of the real world.

1.3 Exploring Effective Immersive Approaches to Visualizing WiFi

1.3.1 Introduction

Our work with WaveRider was a success: our experts verified both the need for these visualizations and their efficacy. Still, there were a few shortcomings that provided an opening for iteration and improvement. Firstly, LIC was computationally expensive, preventing more than a few routers from being represented. Both visualizations also suffered from visual clutter, where data overwhelmed the user when analyzing many routers.
Additionally, both of these visualizations rely on modeling the data before it can be visualized. For our purposes, we used a relatively simple model, but more complicated models could be used. However, any analysis made with these visualizations will be limited by the correctness of the propagation model used. The shortcomings of the visualizations used in WaveRider set the stage for our third work, which we present in Chapter 4. This work presents two more novel visualizations, shown in Figure 1.7: Stacked Bars (Figure 1.7b) and Wavelines (Figure 1.7a). These visualizations require no propagation model and are not computationally intense, thus allowing for the real-time visualization of many routers (our system rendered up to 40 at interactive rates in VR). With these visualizations in hand, we ran a user study to see how they perform compared to our previous work (represented by contour lines) by testing user performance at four different analysis tasks. In the following summary, we discuss our novel visualizations and our user study at a high level. For more information, please review Chapter 4.

1.3.2 Visualizations

This work drew inspiration from our WaveRider work in two key ways. Firstly, our visualizations were drawn solely on the natural features of the environment in order to reduce occlusion and visual clutter. Our visualizations also used WaveRider's model of using the walls to visualize the signal strength of the routers and using the floor to represent channel information via textons. From there, we set out to design uncluttered visualizations of signal data for the walls.

1.3.2.1 Wavelines

The first visualization we developed with visual clutter in mind was Wavelines (Figure 1.7a). We draw a single line for each router at any point on the wall, thus minimizing visual clutter. From there, we encode signal strength in the height and thickness of each line.
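The height-and-thickness encoding just described can be sketched as a small mapping function. The dBm range, wall height, and thickness bounds below are illustrative assumptions, not the values used in the study.

```python
def waveline_params(rssi_dbm, wall_height=3.0,
                    rssi_min=-90.0, rssi_max=-30.0,
                    thick_min=0.01, thick_max=0.08):
    """Map a signal-strength sample (in dBm) to a Waveline's height on
    the wall and its line thickness. Stronger signal -> higher, thicker line."""
    t = (rssi_dbm - rssi_min) / (rssi_max - rssi_min)
    t = min(max(t, 0.0), 1.0)  # clamp the normalized strength to [0, 1]
    height = t * wall_height
    thickness = thick_min + t * (thick_max - thick_min)
    return height, thickness
```

Because both height and thickness grow monotonically with signal strength, a stronger router's line always sits above, and draws thicker than, a weaker one at the same wall position.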
In this way, Wavelines reads similarly to a bar graph, with routers with higher signal strengths toward the top and those with lower signal strengths near the floor. This makes the visualization intuitive to read and makes a router's high-signal-strength area obvious from a distance. On the downside, when routers have similar signal strengths, their lines overlap. This can make areas with many routers of equal signal strength difficult to interpret. With this in mind, we developed the Stacked Bars visualization to guarantee minimal overlap.

1.3.2.2 Stacked Bars

In order to guarantee that the router lines do not overlap, we draw each line on top of the others with thickness proportional to signal strength. We place the routers in order of their signal strength, with the weakest on the bottom. When the ranking changes, the order of the lines changes; we allow a temporary overlap in these areas to create continuous lines for the user to trace over the environment. We call this visualization Stacked Bars (Figure 1.7b). As intended, Stacked Bars contains minimal overlapping lines and gives the user the ability to see every router at all locations. It, however, loses Wavelines' ability to reveal high-signal areas at a distance because signal strength is no longer encoded directly in wall height. After designing both of these visualizations, we were curious to see how they would perform in comparison to each other and to the state of the art, so we conducted a user study evaluation.

1.3.3 User Study

In order to validate our designs, we ran a user study with 32 participants. Each user performed four analysis tasks with each visualization: localization, ranking, coverage, and interference. A full description of these tasks can be found in Chapter 4. At the end of the user study, we collected subjective feedback from the users.
The results show that our novel visualizations perform as well as, if not better than, contour lines in every task. Wavelines outperforms Stacked Bars at the localization task, and Stacked Bars does best at the ranking task. Thus, our new lightweight visualizations improve on the state of the art while requiring no propagation model and reducing visual clutter.

Chapter 2: Programmable Transfer Functions

Figure 2.1: Screen capture of a rendering tool developed to utilize a Programmable Transfer Function, which offers user interaction to enhance a data analysis task. Here, five networks are shown over the University of Maryland campus to allow the user to assess signal coverage and frequency interference potential.

2.1 Introduction

Visualizing WiFi signal strength can help us engineer superior buildings, academic campuses, and smart cities by ensuring constant, secure, and reliable coverage. Data connection reliability in the information age is paramount to ensure critical data is not lost. It is thus necessary to examine and update how we analyze the coverage and effectiveness of a WiFi signal space. Current design practices analyze aggregate data and heat maps but do not allow an analyst to make decisions about the environment, such as planning router locations that maximize coverage while minimizing cost and potential interference. In this chapter, we propose a WiFi visualization technique that utilizes direct volume rendering of sampled WiFi data to provide better situational awareness of the complex WiFi signal space. We utilize WiFi as a stand-in for a generalized RF signal as it is easily accessible and widely understood. Visualizing three-dimensional volumes using direct volume rendering involves the use of transfer functions. A transfer function maps abstract voxel data to a human-interpretable color and opacity.
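A traditional one-dimensional transfer function is, in essence, a lookup table built from user-placed control points. A minimal sketch (the 256-entry resolution and RGBA control-point format are conventional choices, not a specific implementation from this work):

```python
def make_transfer_function(control_points):
    """Build a 256-entry lookup table (one RGBA tuple per scalar value) by
    linearly interpolating control points given as (value, (r, g, b, a))."""
    pts = sorted(control_points)
    table = []
    for v in range(256):
        # Find the nearest control points at or below / at or above v
        lo = max((p for p in pts if p[0] <= v), default=pts[0])
        hi = min((p for p in pts if p[0] >= v), default=pts[-1])
        if hi[0] == lo[0]:
            table.append(lo[1])
        else:
            t = (v - lo[0]) / (hi[0] - lo[0])
            table.append(tuple(a + t * (b - a) for a, b in zip(lo[1], hi[1])))
    return table

# A simple ramp from fully transparent black to opaque red
tf = make_transfer_function([(0, (0, 0, 0, 0)), (255, (1.0, 0.0, 0.0, 1.0))])
```

The Programmable Transfer Function introduced below replaces exactly this kind of precomputed table with an arbitrary function evaluated per sample.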
Commonly, the volumes visualized through these means arise from scientific measurement or simulation and are thus complex, possibly involving a mixture of materials or interacting phenomena. Therefore, it is imperative to design a system that aids in creating simple and effective transfer functions. Good transfer functions reveal significant data regions, mask unimportant areas that may occlude regions of interest, and reduce visual clutter. Designing transfer functions automatically, semi-automatically, and interactively is an open area of research. This chapter aims to strengthen the visual analysis process by replacing the traditional lookup-based transfer function with a higher-level Programmable Transfer Function that facilitates flexible rendering of multiple scalar fields with their mutual interactions, enabling flexible and easy-to-understand visualizations for exploring real-world multifield volumetric data.

Figure 2.2: Comparison between (left) one-dimensional, (middle) two-dimensional, and (right) Programmable Transfer Functions. From left to right, we observe how the transfer functions generalize to accept more input parameters. These include view-dependent parameters (v), class labels (c), and the fields f_i with their gradients f'_i, 0 ≤ i ≤ n − 1. Note how the Programmable Transfer Functions succinctly handle the increasing multi-dimensionality.

Motivated by our driving application of WiFi visualization, we propose a new concept called the Programmable Transfer Function. Programmable Transfer Functions can be implemented as shader functions that take multiple inputs, such as camera parameters, lighting parameters, and multiple scalar fields with their gradients, and generate a color and opacity. By replacing the traditional transfer function's single texture lookup, a Programmable Transfer Function enables:
• superior design through user interaction, leveraging view-dependent attributes to enhance comprehensibility in real time,
• compact representation as succinct and fast shader code, alleviating the need for large arrays of slow lookups from memory, and
• operation on multiple scalar fields, which can therefore highlight features such as their intersection curves and regions.

Programmable Transfer Functions can assist in visually analyzing large amounts of three-dimensional volumetric data. These functions are easier to plan, analyze, and visualize than traditional transfer functions and enable the user to design multifield visualizations without prohibitive memory cost. In addition to our driving application of WiFi visualization, we believe that Programmable Transfer Functions could be helpful for scientists who run large-scale simulations and healthcare professionals studying medical images. This chapter presents how Programmable Transfer Functions are beneficial for our driving application of WiFi visualization. We first present an overview of the Programmable Transfer Function approach in Section 2.4, which includes the base rendering system's implementation in Section 2.4.1 and performance analysis in Section 2.4.5. We next present three use cases to validate the capabilities of the Programmable Transfer Function in Sections 2.4.2, 2.4.3, and 2.4.4. Finally, we discuss other possible use cases in Section 2.5 and present our conclusions in Section 2.6. The contents of this chapter come from our work featured in the MDPI Information journal's special issue Trends and Opportunities in Visualization and Visual Analytics in 2022 [1].

2.2 Related Works

2.2.1 WiFi Data Visualization

Current techniques for modeling WiFi signal strength over a large space are limited. An analyst could review the data in its raw form as a comma-separated values (CSV) file, but the growing volume of data makes this intractable.
Therefore, current approaches use statistical modeling and modern rendering techniques to gain an advantage. One such work by Kokkinos et al. [4] utilized upload and download speeds, throughput, and ping times as metrics of internet quality and collected their data via crowdsourcing; they therefore evaluated signal coverage over an area rather than at each sample location. Several authors have utilized two-dimensional heatmaps to represent their signal data over indoor [5, 6] and outdoor [7, 8] areas. Another commonly used strategy uses a network graph to represent the wireless network. Although this works quite well for visualizing network security and infrastructure, it fails to provide practical information to answer questions of region coverage or interference [9, 10]. This chapter presents our system, which uses volume rendering of WiFi signal strength together with real-world geometry to help review regional signal coverage, find areas of potential co-channel interference, assess possible security vulnerabilities, and more. We chose direct volume rendering rather than other three-dimensional scalar-field rendering techniques because it allows manipulation of the scalar field without substantial overhead, as would be necessary for marching cubes.

2.2.2 Direct Volume Rendering

Direct volume rendering is a method for visualizing three-dimensional scalar fields. These fields often arise in the medical, engineering, and scientific fields due to various data acquisition technologies. In addition, many computer simulations process and output data in n-dimensional grids. Direct volume rendering was first introduced by Levoy [11] and improved by Drebin [12]. Volume rendering techniques have improved over the years with the introduction of various acceleration data structures [13] and automated transfer function generation techniques [14].
Direct volume rendering is computationally intensive, and as dataset sizes increase, volume rendering performance quickly drops below interactive frame rates. This field has seen many improvements. These include hardware-acceleration techniques [15], which use per-fragment texture fetches, texture render targets, and per-fragment arithmetic to accelerate the rendering of volumetric data. Another source of frame-rate improvement comes in the form of acceleration structures, such as the octree [15], used to implement empty-space skipping, which steps over regions that do not contain renderable values. Multiresolution textures, or mipmaps, have also increased the interactivity of texture-based volume rendering [16] at the cost of memory. Roettger et al. [17] used an innovative technique known as preintegration to upsample only semantically significant areas in order to remove aliasing artifacts. The localized preintegration technique of Roettger et al. utilizes the second derivative to adaptively modify the step size and thus better enable step skipping. In order to avoid the cost of transfer function creation, several researchers have designed techniques that use a clustering algorithm to segment the volume into regions of interest and generate a transfer function to show these boundaries [18, 19].

2.2.3 Interaction in Volume Rendering

In the field of volume-visualization interaction, Sharma et al. [20] use a graph-based approach to identify material boundaries and create a transfer function. Their graph represents the different materials based on how deep they are in the volume and their density. They then allow user interaction by letting the transfer function be modified for each individual segment. This technique allows the user to change the color and opacity of different segments. After each edit, the transfer functions must be recalculated and stored as a texture. Pflesser et al.
[21] perform virtual cuts into volumes to simulate surgery, preparing surgeons in training and offering a way to view the internal structures of a volume. Carpendale et al. [22] increased user interactivity by producing three-dimensional distortion tools for data analysis, modifying the camera to make certain areas of a volume appear larger or smaller. Ip et al. [23] use normalized cuts to create an interactive hierarchical structure for data exploration and transfer function creation. This technique allows users to automatically generate interesting data representations and interact with a fixed set of model variations. Kniss et al. [24] present an elegant technique that uses multidimensional transfer functions to base the shading of the volume not just on the value at a specific location on the three-dimensional grid but also on the gradient, or even the Hessian, at that location. Their work proposes a set of controls to interact with the multidimensional transfer function and aid in its creation. These controls help the user explore the volume from different viewpoints, and the user may edit the function by modifying the opacity for specific isovalues.

2.2.4 Non-photorealism

Another area of interest for us is non-photorealistic rendering. Our Programmable Transfer Functions modify the rendering to aid in data analysis, but these effects are not a realistic simulation of light transport and are therefore non-photorealistic. Non-photorealistic volume rendering has innovative use cases. For example, an importance-based method proposed by Viola et al. [25] assigns each sample a level of sparseness during an importance compositing step. Regardless of their intrinsic structure and opacity, significant regions are made more visible in the final render than unimportant regions.
Treavett and Chen [26] use a pen-and-ink style to render a three-dimensional, or even two-dimensional, representation of a volume, which they compare to an architect's sketch. They showed that this sketch-like visualization helped analysts in specific tasks. Csebfalvi et al. [27] use non-photorealism to render the contours along a surface, thus providing a more comprehensible view of the overall structure of the volume.

2.2.5 Multifield Data

Multifield data consists of multiple values at each point. An example of one such multifield dataset would be a standard scalar field paired with the gradient at each point, or it could comprise another volume entirely. Visualizing multifield data is essential to modern researchers, as most scientific simulations and measurements yield multiple values at each point in three-dimensional space. One way to visualize high-dimensional data is to reproject it into three dimensions using clustering [28]. Another technique is to create a volume with a multidimensional transfer function [29]. While highly versatile, increasing the dimensionality of the transfer function increases memory requirements. For instance, a one-dimensional transfer function over eight-bit values would require 256 elements, whereas a four-dimensional transfer function would require 256^4 elements. This memory burden is the impetus for several performance improvements. One such technique is to use mixtures of analytical functions to represent the transfer functions, including Gaussians [30] or ellipsoids [31]. Multifield rendering can also visualize the mathematical properties of inter-field relationships. For example, Multifield-Graphs visualize the correlation between multiple scalar fields [32].

2.3 Data

To evaluate the effectiveness of our Programmable Transfer Functions, we visualize WiFi signal-strength data, as it represents a varying volume over a large scale.
The data used in our examples is a collection of WiFi signal data collected on the University of Maryland, College Park campus. Our dataset was gathered using two handheld receivers moving across the campus over six one-hour data-collection sessions. The data collected at each sample point included the routers' Service Set Identifiers (SSIDs) and Basic Service Set Identifiers (BSSIDs) (the names of the WLAN networks and the MAC addresses of the routers, respectively), the WiFi signal strength in decibel-milliwatts (dBm), the GPS longitude and latitude of the sample, an estimate of GPS accuracy, the router's security capabilities, and the signal frequency (which encodes channel information). We chose to analyze radio frequency (RF) signals as they present a diverse set of volumes covering large areas, whose propagation is affected by the environment, and in which analysts may be interested in seeing trends. Specifically, we are interested in studying campus coverage, signal propagation trends, and areas of potential co-channel interference, a source of signal loss due to the overlap of signal bands between channels. The methods developed and explored here will lead to new tools for analyzing how various channels may interact on a large scale, such as over a university campus or a smart city.

The data used in this chapter was interpolated after acquisition using Matlab's fit functions. We mapped the data to a two-dimensional uniform grid representing bins of latitude and longitude. We create a unique texture for each network, usually defined as a specific SSID or a specific SSID and frequency pair. The textures contain the RSSI (Received Signal Strength Indicator) value sampled at each latitude and longitude. We then used Matlab's biharmonic spline fit to model each router over the whole campus. The resulting function is then output to a binary file for our rendering.
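The gridding step can be sketched as follows. For simplicity this sketch uses inverse-distance weighting rather than the biharmonic spline fit actually used; the bin counts, bounds format, and power parameter are illustrative assumptions.

```python
import math

def grid_rssi(samples, lat_bins, lon_bins, bounds, power=2.0):
    """Resample scattered (lat, lon, rssi) readings onto a uniform grid
    using inverse-distance weighting -- a simple stand-in for the
    biharmonic spline fit described in the text."""
    (lat0, lat1), (lon0, lon1) = bounds
    grid = [[0.0] * lon_bins for _ in range(lat_bins)]
    for i in range(lat_bins):
        for j in range(lon_bins):
            # Evaluate at the center of each grid cell
            lat = lat0 + (i + 0.5) * (lat1 - lat0) / lat_bins
            lon = lon0 + (j + 0.5) * (lon1 - lon0) / lon_bins
            num = den = 0.0
            for slat, slon, rssi in samples:
                d = math.hypot(lat - slat, lon - slon)
                if d < 1e-9:            # cell center lands on a sample
                    num, den = rssi, 1.0
                    break
                w = 1.0 / d ** power
                num += w * rssi
                den += w
            grid[i][j] = num / den
    return grid
```

One such grid would be produced per network (SSID or SSID-frequency pair) and written out as that network's texture.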
In addition to our volume data, we also use 3D building models and a campus vegetation inventory with GPS-accurate positioning to help our users orient themselves in the virtual world. Since these building models form a one-to-one mapping to the real world, users can draw actionable conclusions from the information, such as where to place additional routers or which routers to move to a new channel to improve the signal landscape.

2.4 Programmable Transfer Functions

Programmable Transfer Functions significantly improve on traditional transfer functions due to their malleability. In order to edit a conventional transfer function, the user must perform a memory swap, replacing the transfer function array or texture with new data, and data transfer between the CPU and GPU is a significant bottleneck for rendering pipelines. One of our contributions in this chapter is trading the transfer function lookup, which is memory intensive, for a function call at every sample location. Leveraging computation over memory fetches reduces lookup time and enables swift modifications to the transfer function through parameter changes. With this function call, we can give the user far more customization and interaction options; examples of this increased functionality are shown in Sections 2.4.2, 2.4.3, and 2.4.4.

The Programmable Transfer Function is well suited to visualizing our complex WiFi signal space, as we have many interactions across the dataset to analyze. The interactivity that the Programmable Transfer Function provides significantly benefits a signal analyst. In real time, a user can update the isovalue to analyze stronger or weaker areas and efficiently recognize weak-signal-strength areas. Further, by using the specialized Programmable Transfer Functions listed below, an analyst can easily recognize the shape of the WiFi isosurfaces and find their regions of intersection.
These intersection regions are noteworthy, as they indicate areas of potential frequency interference or areas where WiFi packet loss may occur due to sharing space on the RF spectrum. These regions of interest in three-dimensional space would be harder to find using aggregate data, heat maps, or even volume rendering with a traditional transfer function.

Figure 2.3: The base rendering pipeline with six steps: (a) skybox rendering, (b) the building model rendered as one mesh, (c) campus map modeled as a two-dimensional quad, (d) instanced rendering of the vegetation inventory, (e) volume rendering using ray marching, and (f) IMGUI window rendering.

2.4.1 Base Direct Volume Rendering

We perform the volume rendering in this chapter in conjunction with traditional mesh rendering using rasterization. This rendering is part of the multi-step pipeline shown in Figure 2.3. First, we render a skybox as a background using the standard skybox shaders. We then render the buildings as one large mesh, the campus map as a single textured quad, and the vegetation inventory as multiple instances of a single tree object. Finally, we perform volume rendering and render a GUI interface.

We use volume rendering to visualize the isosurfaces of the WiFi signals. We chose isosurfaces because they are easy for users to interpret. Due to their limited overlap regions, they are also compact enough to allow the simultaneous rendering of multiple networks. This enables users to determine total coverage on campus and make decisions regarding co-channel interference.

Figure 2.4: Depth mask used to terminate the volume-rendering raycasts in order to enable occlusion and early ray termination. Occlusion allows the user to maintain their sense of depth, while early ray termination boosts rendering performance.

Please note that the decision to use isosurfaces was made for our use case
of WiFi visualization and does not represent a fundamental limitation of Programmable Transfer Functions. Programmable Transfer Functions enable us to fully customize the isosurfaces' appearance in real time. In contrast, indirect volume rendering using traditional isosurface-extraction techniques, such as Marching Cubes, limits how flexible our visual depiction can be and would require us to re-extract the surfaces whenever the visualization parameters are updated.

Both the mesh rendering and the volume rendering are OpenGL implementations. We use a bounding cube as an acceleration structure and render the front and back hit points of the bounding cube to a framebuffer object. Then, we use the front- and back-hit points to create the ray representing the light path for that pixel, stepping through the volume and sampling at each point along the ray. We store the volume data as a flat two-dimensional array. When sampling, we index the array based on the latitude and longitude of the sample point and subtract a value proportional to the z component of the sample position to represent the signal-strength fall-off. A traditional volume renderer would take these samples and access a transfer function texture to get a color and opacity at each point on the ray; we instead call a function to obtain the same information. This function is the Programmable Transfer Function. We then composite all the colors and opacities along the ray to create the final rendering. In order to terminate rays early and allow buildings to occlude the volume, we use the depth buffer from our building rendering; see Figure 2.4. Early ray termination improves frame rates and provides the occlusion necessary for proper depth perception in the environment. When appropriate, we process multiple volumes by sampling each volume in turn and calculating its respective contribution at each point along the ray. Figure 2.1 shows the results of this approach.
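The per-ray loop described above, with the transfer-function texture lookup replaced by a plain function call, can be sketched on the CPU as follows. The step count, cutoff value, and normalized depth parameterization are illustrative assumptions, not the shader's actual values.

```python
def march_ray(sample_volume, transfer_fn, ray_start, ray_dir,
              num_steps=64, depth_limit=None, alpha_cutoff=0.99):
    """Front-to-back compositing along one ray. `transfer_fn` plays the
    role of the Programmable Transfer Function: an arbitrary function
    from a sample value to (r, g, b, a) instead of a texture lookup."""
    color = [0.0, 0.0, 0.0]
    alpha = 0.0
    for step in range(num_steps):
        t = step / num_steps
        if depth_limit is not None and t > depth_limit:
            break                      # occluded by building geometry (depth mask)
        pos = [o + t * d for o, d in zip(ray_start, ray_dir)]
        r, g, b, a = transfer_fn(sample_volume(pos))
        # Front-to-back "over" compositing
        color = [c + (1 - alpha) * a * s for c, s in zip(color, (r, g, b))]
        alpha += (1 - alpha) * a
        if alpha > alpha_cutoff:
            break                      # early ray termination
    return color, alpha
```

In the real system `transfer_fn` is a shader function that may also consume gradients, view, and lighting parameters, which is what enables the effects in the following subsections.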
Building on this volume rendering, we can implement additional features using various Programmable Transfer Functions. When analyzing these renderings, the user should interpret the isosurfaces as indicating equal signal strength. The higher the isosurface in an area, the higher the WiFi signal strength on the ground beneath it. Any area without a surface overhead is a dead zone with no measurable signal strength. Where volumes overlap, there is potential for co-channel interference.

2.4.2 Silhouette Shading

A common challenge for many volume renderings is unclear boundaries, specifically a soft fall-off where a volume ends. This is a concern because it makes it difficult for an analyst to discern where features are in a volume. We introduce our first use case for a Programmable Transfer Function to address this issue. We use the observation that the silhouette of an isosurface has surface normals perpendicular to the viewing direction, whereas the central parts of the isosurface have normals aligned with the view direction.

Figure 2.5: Volume rendering with (right) and without (left) silhouette shading. Notice how this makes it easier to distinguish different parts of the volume and accentuates details. The increased comprehensibility allows the user to make decisions regarding router coverage at a glance, as it is clearer where a volume, and thus a router's signal, ends.

We can therefore use the dot product of the isosurface normal vector and the view direction to modify the color and opacity to accentuate the silhouettes of the isosurface. Pseudocode for this approach is in Algorithm 1. This method produces a bubble-like shading effect, described in Demir et al. [33]. Programmable Transfer Functions not only implement the silhouette shading effect, they also enable the user to manipulate the silhouette parameters in real time.
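As a concrete illustration, the silhouette term can be computed as below. The parameter names, default values, and the band remapping are our own illustrative choices, not the dissertation's shader code.

```python
def silhouette_boost(view_dir, normal, silh_min=0.3, silh_max=1.0,
                     alpha_base=0.1, alpha_boost=0.5):
    """Return an opacity boosted toward the isosurface silhouette.

    mu approaches 1 where the normal is perpendicular to the view
    direction (the silhouette) and 0 where they are aligned.
    """
    dot = sum(v * n for v, n in zip(view_dir, normal))
    mu = 1.0 - abs(dot)
    mu = min(max(mu, silh_min), silh_max)        # clamp to the silhouette band
    # Remap the band to [0, 1] and scale the opacity boost
    w = (mu - silh_min) / (silh_max - silh_min)
    return alpha_base + w * alpha_boost
```

Because this is just a function of per-sample inputs, changing `silh_min` or the boost amount takes effect immediately, with no table rebuild or GPU upload.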
In traditional volume rendering, silhouette shading often requires a multidimensional transfer function, increasing storage requirements. With a Programmable Transfer Function, however, no additional cost is incurred.

Algorithm 1: Silhouette Shading
for each pixel do
    RayDir ← (FrontHit − BackHit)
    for each sample along RayDir do
        µ ← 1 − Dot(viewDir, normal)
        if µ ≤ silh_min then
            µ ← silh_min
        else if µ ≥ silh_max then
            µ ← silh_max
        α_sample ← α_base + µ′
        color_sample ← color_base + µ ∗ (1, 1, 1)
        ...
    end
end

Note that the silhouette coefficient described above is tuned based on two ranges. One range, [silh_min, silh_max], represents how thick the silhouette augmentation band should be; typically silh_max is set to 1.0, as this defines the absolute edge. The other range, [α_min, α_max], defines how much the silhouette should be augmented. The results of this algorithm are shown in Figure 2.5. Silhouette shading allows a network analyst to see the network's features more clearly, making it easier to draw conclusions from the data. For instance, in Figure 2.5 an analyst cannot see much of the flat features without silhouette shading. With silhouette shading, the analyst can determine that the flat regions represent relatively high signal strength and thus are not worrisome.

Figure 2.6: Volume rendering without (left) and with (right) specular highlight augmentation. This highlight allows the user to better understand the curvature of the surface and make decisions about the environment more intuitively.

2.4.3 Specular Highlight Augmentation

When illuminating thin semi-transparent surfaces, it is typical for the specular highlight to be lost. This loss is due to the lack of opacity diluting the specular contribution. The Programmable Transfer Function can selectively boost the opacity in a region of high specular highlight to mitigate this. The added specular highlight helps users orient themselves in the virtual world and effectively discern the volume's features.
Specifically, the specular highlight can help elucidate the curvature of the surface. The use of specular highlights in volume rendering is discussed in Fernando [34]. When implementing specular highlights with a Programmable Transfer Function, users can further enhance comprehensibility by selectively increasing the opacity where a significant specular effect exists. This emphasizes surface shape by reducing the loss of highlights to volumetric transparency. The specular highlight augmentation is tunable via the parameter µ_spec. We have observed that the appropriate µ_spec value depends on the thickness of the surface, the base opacity, and the ray-stepping size.

Algorithm 2: Specular Highlight Augmentation
    for each pixel do
        RayDir ← FrontHit − BackHit
        for each sample along RayDir do
            dotLightNorm ← Dot(lightDir, normal)
            spec ← Pow(dotLightNorm, shininess)
            α_sample ← α_base + spec · µ_spec
            ...
        end
    end

With the aid of the specular highlight, the curvature of the volume becomes much easier to understand, and analysis of the volume becomes more intuitive.

2.4.4 Multi-volume Interaction

It can be challenging to distinguish among independent volumes when rendering multifield data, notably when the volumes are semi-transparent. The Programmable Transfer Function can aid the user by highlighting their interaction, as done by Jankowai and Hotz [35]. For instance, we can visualize where two volumes intersect and shade these regions a particular color, as in Figure 2.7. In this example, we color the intersections of the two volumes black to highlight where they meet, but more elaborate interaction visualizations are possible. It is important to note that this style of visualization is best suited to thin isosurfaces. A different shading model may be necessary for more complicated volumes. We suggest some of these possibilities in Section 2.5.
This feature is especially valuable in the case of our data, where routers communicate on the same frequency band, creating a higher probability of destructive interference and, thus, worse signal coverage and lower bandwidth in that area. In general, it also helps analysts determine the depth ordering of the volumes.

Figure 2.7: Volume rendering without (Top Left) and with (Top Right) intersection highlighting. Underneath is a zoomed-in picture of the area of intersection with the highlighting. Intersection highlighting allows a user to better determine the depth ordering of the volumes and understand the arrangement of the surfaces. In our use case, it also serves to highlight regions where co-channel interference is likely. In regions such as the one highlighted here, a network analyst could determine that, due to the large region of overlap depicted in this figure, one of these networks should be configured to communicate on another frequency channel.

Algorithm 3: Multivolume Intersection Shading
    for each pixel do
        RayDir ← FrontHit − BackHit
        for each sample along RayDir do
            hitVolume ← false
            for each volume do
                sample ← Texture(vol_u, vol_v, vol_w)
                if IsInRange(sample) then
                    if hitVolume then
                        color_sample ← intersectionColor
                        break
                    else
                        hitVolume ← true
                        // Shade normally
                        ...
                end
            end
        end
    end

Figure 2.8: Representation of (a) the base volume rendering, (b) the rendering with specular highlight augmentation, (c) the rendering with silhouette highlighting, (d) the base rendering with both silhouette shading and specular highlight augmentation, and (e) the full suite with silhouette highlighting, specular augmentation, and intersection highlighting.

2.4.5 Performance

To evaluate the performance of Programmable Transfer Functions, we render all scenes on a single NVIDIA RTX 2080 in a machine with 48 GB of RAM and an Intel Core i7 CPU, and we report frame rates in Table 2.1.
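Frame rates like those reported in Table 2.1 can be gathered with a simple wall-clock timing loop. The sketch below is a generic illustration of that measurement, not the project's actual benchmarking code; `render_frame` stands in for whatever draws one frame.

```python
import time

def measure_fps(render_frame, seconds=5.0):
    """Average frames per second over a fixed wall-clock window.

    render_frame is any zero-argument callable that draws one frame.
    """
    frames = 0
    start = time.perf_counter()
    while time.perf_counter() - start < seconds:
        render_frame()
        frames += 1
    return frames / (time.perf_counter() - start)
```

Averaging over a window rather than timing single frames smooths out per-frame jitter from the driver and the OS scheduler.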
Interestingly, adding the specialized Programmable Transfer Functions does not result in any significant frame rate loss. The lack of a performance dip may be due to how OpenGL handles branching conditionals, since all functions are implemented in one shader and toggled through conditional statements. Notice, however, that even in the worst case, the frame rates remain consistently well above our application's interactive threshold of 60 fps. We test each rendering configuration as shown in Figure 2.8.

                         Base   Specular   Silhouette   Spec+Silh   Intersection   All 3
1 Volume (interior)       105        105          105         105             NA      NA
1 Volume (exterior)       181        181          181         181             NA      NA
5 Volumes (interior)       85         85           85          85             85      85
5 Volumes (exterior)      116        116          116         116            116     116

Table 2.1: Performance measured in frames per second (FPS). Each volume represents the signal strength of a single WiFi channel over the campus. Each test case in the single-volume case uses the same volume. For the five-volume cases, we simultaneously render all channels from this dataset in the 2.4 GHz region. We captured interior and exterior frame rates to note the difference in performance from inside and outside the volume.

2.4.6 Interaction

We have shown the versatility of Programmable Transfer Functions for our driving application of WiFi visualization. We have also implemented a GUI in our renderer so that variables in our Programmable Transfer Functions can be changed dynamically, and the user can tune them to produce the rendering they need. In our example, we created an IMGUI window that can modify many aspects of the rendering. From this window, the user can modify volume rendering terms such as the isovalue used to render the surface, the color used to shade each surface, and the volume step size. The user can also modify all of the Programmable Transfer Function parameters, such as the silhouette term, the silhouette coefficients, and the coefficient of specular highlight augmentation.
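The practical difference between this live tuning and the traditional pipeline can be sketched in a few lines. In the hypothetical comparison below, the lookup-table path must re-bake and re-upload a texture whenever a GUI parameter changes, while the programmable path simply reads the new value on the next evaluation; all names are illustrative.

```python
def bake_opacity_lut(isovalue, band=0.05, width=256):
    # Traditional pipeline: every parameter change re-bakes this 1D
    # opacity table, which must then be re-uploaded as a GPU texture.
    return [1.0 if abs(i / (width - 1) - isovalue) < band else 0.0
            for i in range(width)]

def programmable_opacity(sample, params):
    # Programmable pipeline: the function reads live parameters, so a
    # slider edit takes effect immediately with nothing to recompute.
    return 1.0 if abs(sample - params["isovalue"]) < params["band"] else 0.0
```

The lookup table also fixes the function's dimensionality at bake time, whereas the programmable form can consult as many inputs as the shader exposes.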
In addition, the user can toggle the intersection shading on or off. We also control several acceleration techniques from this GUI. To tune the rendering, we can manipulate variables in the GUI with a simple widget, such as a slider or a checkbox (Figure 2.9). This ease of use contrasts with how interaction traditionally works, where a new transfer function would have to be computed and stored in a texture.

Figure 2.9: The IMGUI interface for user interaction. This GUI allows the user to take advantage of the flexibility afforded by Programmable Transfer Functions by letting them manipulate rendering parameters and turn various features, such as intersection highlighting, on and off.

2.5 Limitations and Future Work

The application presented in this chapter is just one example of what Programmable Transfer Functions can do; their potential extends beyond what we have presented here. As with all WiFi visualization approaches, ours is most effective at visualizing a limited number of networks. We have visualized up to six networks at a time. This limit, however, is due to the high-frequency details and scale of WiFi data. In general, Programmable Transfer Functions are scalable. In this chapter, we have shown that we can visualize WiFi data using Programmable Transfer Functions. An exciting avenue for future work would be to examine the effectiveness of Programmable Transfer Functions in other application domains, such as medical visualization or large-scale simulations.

There are many ways in which we are looking to leverage the potential of Programmable Transfer Functions. For ease of discussion, we have divided these into four categories: customized rendering (Section 2.5.1), data analytics features (Section 2.5.2), visual enhancements (Section 2.5.3), and multivolume tools (Section 2.5.4).

2.5.1 Customized Rendering

A promising direction for Programmable Transfer Functions is fully customized rendering.
In this method, an expert user would design their shader function at runtime, and the rendering would update in real time. This way, users could interact with the code in whichever way they saw fit. For example, one could imagine a system reminiscent of Unreal Engine material shaders, where users specify inputs and visually program their desired rendering output. Such a system would allow arbitrary functionality and complete customization and would be an exciting area for future work.

2.5.2 Data Analytics

The simplest form of data augmentation with a Programmable Transfer Function is data highlighting. For example, one could shade all values above 90% red and everything else blue, thus highlighting the strongest signals. The Programmable Transfer Function can also leverage supplemental data. For example, in our WiFi signal data, we could shade each region based on which router had the strongest signal strength there, or we could mask the signal strengths in a specific region if we knew that the data there is corrupted or proprietary. In addition, there are several interesting geometric properties, such as curvature and gradient, that one could highlight using Programmable Transfer Functions.

2.5.3 Visual Enhancements

The silhouette shading and specular highlight augmentation from our implementation section fall into this category. Programmable Transfer Functions can aid in data analytics, but they also generally improve the visual component of the rendering. As an example, we could utilize additional textures to store class-based masks. For instance, for a volume from a CT or MRI scan segmented into known tissues and organs, a Programmable Transfer Function could hide organs and tissues that obscure some feature of interest or highlight a region of interest. This functionality could be a valuable tool for visualizing multi-class data.

2.5.4 Multivolume Tools

Programmable Transfer Functions could be invaluable in analyzing how multiple volumes interact.
This functionality is particularly useful for simulation and sensor data. Such data is often more than three-dimensional, and analyzing two variables at once may help analysts identify previously unseen patterns. Algorithm 3 shows our implementation of this method. Instead of just viewing the intersection, one could consider any mathematical formulation of multiple fields, such as their difference or correlation. Further, we could design Programmable Transfer Functions to render any feature level-set as defined by Jankowai and Hotz [35]. Programmable Transfer Functions enable the user to choose features on the fly, unconstrained by the need to compute new scalar fields for rendering. For example, a Programmable Transfer Function could visualize the intersection depth of two volumetric scalar fields at any given point. An exciting possibility is to form a conditional operation based on multiple volumetric fields. For instance, one could view the signal strength of a particular SSID and shade it red only where it is not the maximum over a set of SSIDs, thereby depicting the set of SSIDs with the maximum signal strength at each location.

2.6 Conclusion

Direct volume rendering has made many strides since its origins. With the advances in graphics processing hardware, we can now calculate the transfer function mathematically on the fly rather than storing it in a predefined lookup table. This approach allows an analyst to modify the transfer function on the graphics hardware and interact with the volume more efficiently. This new freedom allows for the development of new kinds of transfer functions. No longer constrained by dimensionality limits, transfer functions enable analysts to utilize other data sources. We have implemented three specific cases in which a Programmable Transfer Function can be used and suggested many others, but there are far more than we could mention here.
We have shown the usefulness of the Programmable Transfer Function for WiFi signal analysis. In particular, we have used direct volume rendering to allow a user to assess the signal coverage of the University of Maryland campus and draw conclusions about the interaction between the signals and their environment. Programmable Transfer Functions can aid in understanding and interpreting the WiFi volumes. Using the multivolume intersection transfer function, we have also allowed analysts to evaluate the potential for co-channel interference. Programmable Transfer Functions have allowed us to get both of these benefits from one rendering technique. Programmable Transfer Functions also allow for data-specific transfer functions. For example, a function could use one scalar field to mask another, or the interactions between two fields can be visualized expressly in the transfer function. Programmable Transfer Functions offer a new way of thinking for designers of multifield volume visualizations. They enable data scientists to explore their data in a flexible and efficient way while still providing all the functionality of a traditional transfer function. We believe that Programmable Transfer Functions are likely to benefit several other fields beyond WiFi signal analysis.

Chapter 3: WaveRider: Immersive Visualizations of Indoor Signal Propagation

Figure 3.1: WaveRider with a multi-router contour visualization representing six routers on three different frequencies. Note the mini-map in the bottom left corner to assist the user in self-localization.

3.1 Introduction

Electromagnetic signals permeate the space around us, carrying information to and from wireless devices. These invisible communications make up an essential part of our lives, facilitating communication, data transmission, and collection. However, network health is challenging to assess due to the complex way signals interact with their environment.
These signals represent various protocols such as Bluetooth, cellular, and Zigbee. In this chapter, we examine WiFi signals, as they are the most complex and understandable network we have access to. Specifically, rather than looking at our outdoor networks, here we look at indoor WiFi signals. Indoors, we are presented with a completely different environment, defined by walls that block a user's line of sight. In this work, we take advantage of these natural surfaces to display our data in an efficient and structured way. By visualizing WiFi signals, a systems analyst can monitor the network's design and health, essential components of the system's reliability and user satisfaction. In addition, WiFi signal analysis software is critical for ensuring coverage, comparing available bandwidth at peak usage, minimizing the overlap of adjacent frequencies, and identifying available frequency bands for new routers to prevent overlap when upgrading the network. Unfortunately, as we show, the state-of-the-art WiFi signal visualization software is limited to just one or two dimensions [36, 37], significantly constraining the amount of information conveyed to the analyst. Specifically, it makes localization and coverage analysis difficult, as signals propagate unevenly in all three dimensions.

Many WiFi visualizations focus on data at a single location [36, 38]. Single-sample visualizations take a snapshot of the network environment at one position and display that data to the user. Since they lack environmental context, single-sample visualizations delegate many tasks, such as source localization and coverage analysis, to the user, making the decision-making process much harder. Other WiFi visualizations look at data over the entire environment [6, 39, 40]. While these provide environmental knowledge, they are also limited by visual clutter and occlusion or provide only a limited view of the WiFi signals.
Our application, WaveRider, allows users to view WiFi data from multiple routers over the whole environment. We designed and developed WaveRider by analyzing the current state-of-the-art signal visualizations, reviewing the inherent needs of the domain, and creating multiple visualization strategies to tackle those needs using the strengths of prior techniques. We then recruited five subject matter experts to get their detailed, informed opinions of our methods and see what changes they would recommend for our system moving forward. Through these dialogues, we collected a large amount of feedback that has significantly enhanced WaveRider. WaveRider provides six main contributions to state-of-the-art signal visualization:

1. WaveRider depicts more signal sources than today's state-of-the-art visualizations while representing the sources over the entire environment,

2. WaveRider introduces the visualization techniques of line integral convolution and textons to WiFi signal space analysis for the first time,

3. WaveRider extends line integral convolution to environments with discontinuous surface normals (such as the adjacent walls of a building) via a new rendering technique,

4. WaveRider enables the user to localize a router and determine its coverage and interference potential while preserving the user's ability to locate themselves in the environment,

5. WaveRider uses the natural indoor surfaces (walls and floors) for depicting the signals; this balances the conflicting goals of conveying information over the entire volume of interest and reducing visual occlusion and clutter, and

6. WaveRider provides an Augmented Reality (AR) prototype, demonstrating that these design principles can work in real spaces.

This chapter presents our work published in ACM Spatial User Interaction 2022 [2].

3.2 Related Work

Figure 3.2: The current state of the art in WiFi visualization tools.
Each image represents a commercial application for visualizing WiFi signal strength. a) WiFi AR [41] is an Android app that uses a color-coded data glyph to represent sample points. b) Ekahau Survey [42] is a heatmap-based visualization product for macOS. c) AR Sensor [43] is a basic glyph visualization that uses colored spheres to represent signal strength. d) Acrylic Wi-Fi Heatmaps [37] uses a heatmap that dual-encodes height. e) WiFi Analyzer [36] is a simple channel graph visualization that uses simplified shapes to bin signals by frequency, making the spectrum easier to read and helping visualize channel congestion. None of these visualizations can represent multiple networks while maintaining a sense of the environment.

3.2.1 Embedded Data Representations

Immersive technologies have been around for a long time. The first head-mounted display was developed in 1968 [44]; however, it was not until the Binocular Omni Orientation Monitor was developed in 1990 [45] that immersive data visualization really took off, with the Virtual Wind Tunnel [46] in 1991. From there, we have seen an outpouring of work in the realm of immersive data visualization [47], with head-mounted displays returning to the forefront of immersive visualization work in 1997 with the Virtual Data Visualizer [48]. These works utilize the ease of exploration and natural sense of positioning in space to aid the analyst [49]. A subset of immersive visualization, called embedded visualization, has emerged in which data is overlaid on the physical environment it represents [50]. These embedded representations provide analysts with the necessary context to make decisions about their data and how it relates to the world [51–53].

3.2.2 WiFi Visualization

WiFi visualization is still relatively young, with few contributions in either peer-reviewed papers or commercial applications such as those in Figure 3.2.
The current state-of-the-art methods for WiFi visualization are generally used either for visualizing coverage, as a user may need to ensure access to the network throughout the environment, or for visualizing interference potential, as a network administrator may need to guarantee reliability and speed. Unfortunately, no existing methods can handle both at the same time.

3.2.2.1 2D Heatmaps

Heatmaps are a widespread way of encoding WiFi signal strength [6, 8, 54], as they are widely known and commonly understood; see Figures 3.2b and 3.2d. Notably, the field of Electromagnetic Compatibility (EMC) analysis, the assessment of external interference on electromagnetic radiation data, uses heatmaps for visualizing signal interference [40]. This method, unfortunately, does not scale well. If we represent each router in an environment with a heatmap using the same color map, it becomes difficult to visually distinguish between two adjacent routers. If we use different color maps, however, mixing the colors can easily cause confusion and ambiguity, especially when we have more than three colors. Even without the blending concerns, monochromatic heatmaps struggle to show slight changes in value. Hence, their ability to represent signal strength is limited, making comparisons of magnitude even more difficult. Several techniques exist to mitigate the layering issue in the two-dimensional case. One common approach is Small Multiples [55], where multiple heatmaps are shown in a grid, each representing the same area. Small Multiples have been used successfully in immersive visualizations [38, 56], but they do not allow the data to be nested in the environment. Heatmaps also have the drawback of representing the data in only two dimensions, obscuring the actual distribution of the data in space. This limitation motivates the creation of 3D volumetric heatmaps rendered using volume rendering.
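Concretely, the volumetric alternative amounts to mapping each sample along a view ray through a transfer function and compositing the results front to back. A minimal compositor, written in Python for illustration with names of our choosing, looks like this:

```python
def composite_ray(samples, transfer_function):
    """Front-to-back compositing of one ray through a scalar volume.

    samples           -- scalar values met along the ray, nearest first
    transfer_function -- maps a scalar to ((r, g, b), alpha)
    """
    color, alpha = [0.0, 0.0, 0.0], 0.0
    for s in samples:
        c, a = transfer_function(s)
        weight = (1.0 - alpha) * a   # light not yet absorbed by nearer samples
        for i in range(3):
            color[i] += weight * c[i]
        alpha += weight
        if alpha >= 0.99:            # early ray termination
            break
    return tuple(color), alpha
```

Front-to-back order allows early termination once the accumulated opacity saturates, which matters when one ray is traced per pixel every frame.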
3.2.2.2 Volume Rendering

As mentioned above, heatmaps can visualize data in two dimensions, but with volume rendering the user can see all three. Volume rendering involves projecting a cone of rays into a three-dimensional scalar field. We discretely sample the scalar field along each ray and use a transfer function to map the scalar values to a color and opacity for each sample [39]. We then composite the color and opacity samples in front-to-back order along each ray to calculate the final color for each pixel. This process allows us to convert volumes to rendered images for scientific visualization [57, 58]. Volume visualization can thus be viewed as a natural extension of heatmaps: it overcomes the hurdles associated with heatmaps' two-dimensionality and has been used in WiFi visualization for that purpose [1]. Volume rendering, however, does not overcome the layering difficulty of heatmaps and introduces new complications, including self-occlusion and computational complexity. If the visualization's data is noisy, as with most real-world data captured with sensors, there are often many peaks and troughs in the isosurfaces representing constant scalar values in the volume. Such high-frequency features can add visual clutter and block the user's view of the dataset through self-occlusion. While transparency can reduce self-occlusion, it also makes it more challenging to analyze the curvature and shape of the surface. Further, since the rendering algorithm must trace at least one ray per pixel and the resolution of the visualization depends on the step size used in ray casting, the number of computations for each frame is significant.

3.2.2.3 Channel Graphs and Spectrograms

A critical use case for WiFi visualization is examining the channel distribution of the routers in a region. Modern WiFi transmits data over the RF spectrum in predefined bands known as channels.
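The 2.4 GHz channel plan is simple enough to encode directly. The sketch below, with helper names of our own, captures the standard layout (channel 1 centered at 2412 MHz, centers 5 MHz apart, each channel 22 MHz wide) and the resulting overlap test; channel 14, which sits apart from this spacing, is deliberately ignored.

```python
def channel_center_mhz(channel):
    # 2.4 GHz band: channel 1 is centered at 2412 MHz and successive
    # centers sit 5 MHz apart (the special case of channel 14 is ignored).
    return 2412 + 5 * (channel - 1)

def channels_overlap(a, b, width_mhz=22):
    # Two 22 MHz-wide channels overlap when their centers are closer
    # than one channel width; this is why 1, 6, and 11 form the classic
    # non-overlapping trio.
    return abs(channel_center_mhz(a) - channel_center_mhz(b)) < width_mhz
```

Equal channels count as overlapping here, which corresponds to the co-channel case; distinct overlapping channels correspond to adjacent-channel interference.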
The technical specifications of channel allocation are defined in the IEEE 802.11 standard [59]. Notably, for 2.4 GHz WiFi, channels are 22 MHz wide, and their centers are 5 MHz apart. It is important to note that these channels overlap. If overlapping channels transmit data simultaneously, they can distort each other. We discuss channel interference and its risks to network health in Section 3.3.3. The user must know each router's channel configuration and signal strength to assess the risk of channel interference. Channel graphs and spectrograms offer a convenient way to visualize channel interference by representing each router's signal strength and frequency channel at a specific location in a two-dimensional graph. A spectrometer can acquire the frequency and strength of a signal over time and is often used to visualize RF signals [38, 60]. Channel graphs are simplified versions of spectrograms; an example can be seen in Figure 3.2e. The main shortcoming of this representation is its constraint to a single location. Thus, users must look at several channel graphs and infer their relationships to analyze signal coverage. With its high cognitive burden, such a process quickly becomes infeasible when exploring large regions with many routers.

3.2.2.4 Glyphs

Glyphs, or markers, have been used in EMC [40] and WiFi visualization to describe distinct samples [61–65]. Glyphs can be as simple as a sphere placed at the sample position, colored to identify the router, and sized to represent the signal strength. Several modern visualization approaches use this style of visualization; see Figures 3.2a and 3.2c. More sophisticated glyphs include a data-cube visualization [66], which can depict a router's cryptographic capabilities and manufacturer. Glyphs are valuable because they can represent data with multiple encodings, such as color, shape, size, and position.
Still, glyphs are inherently discrete and thus leave gaps in regions that lack information samples. Such gaps can impose an additional cognitive burden on the user in inferring where the signal strength is strongest or where it falls off entirely. A related challenge is determining the placement of these glyphs in the scene. Since glyphs are often opaque markers, they can occlude both themselves and their environment.

3.2.3 Line Integral Convolution

Line Integral Convolution (LIC) [67] is commonly used in visualizing vector fields, especially in fluid dynamics. LIC works by advecting random noise in the direction of the vector field, resulting in streaks called streamlines. These streamlines act as a visual indicator of the direction of the local vector field. The overall visualization looks similar to dye injected into a moving water source: the dye follows the direction of the water and indicates the currents. A specialized form of Line Integral Convolution called Fast Surface Line Integral Convolution [68–70] combines the principles of Fast LIC [68, 70], a performance improvement on LIC that takes advantage of redundant calculations, and Surface LIC [71–73], a technique for applying LIC to 3D objects. One system [74] uses screen space to parameterize a surface for LIC. Three-dimensional vector fields have been used to visualize EMF before by [75], which aggregates signal readings by frequency and visualizes them by drawing lines connecting every point with its neighbor. This visualization nicely represents the topology of the EMF signals but introduces substantial visual clutter.

3.2.4 Textons

The base unit of precognitive texture perception is called the texton [76]. Textons are popular for training image classifiers, but they have also been used to model multivariate data [77]. Since a user can easily distinguish among textons, we can represent different categories of data with different textons.
Oriented Slivers [78] is one such texton that has been used to visualize multiple overlapping scalar fields. Oriented Slivers are composed of rectangular capsule shapes rotated about their centers. When layered, they produce a clear segmentation of the visual field.

3.3 Use Cases

To inform our visualization strategy, we have designed a set of WiFi use cases to aid analysts in monitoring a network. To develop these use cases, we formed a relationship with the expert community by reaching out to a group of professional telecommunications researchers. We interviewed these experts, performed a literature review, and embedded ourselves in a professional training course for signal analysts. Through these means, we learned the day-to-day tasks related to signal analysis that network administrators must perform and what the state-of-practice visualizations do and do not provide. We discussed their current techniques and what they would like to see in a novel visualization. We examine the use cases we developed in Sections 3.3.1–3.3.4.

3.3.1 Localization

Localization aims to help a user identify the router's location. In real-life scenarios, localization is helpful for maintenance, service, and security reasons. For example, if a secured network router is in a public, unsupervised location, it could be subject to tampering. If a router is in an unexpected area, it could indicate that someone is attempting to impersonate that router as part of a man-in-the-middle attack. In addition, localization can help direct a repair person to a malfunctioning signal source or allow a network administrator to reconfigure the router to operate on a better frequency channel with lower interference. The current state-of-the-art visualizations tend to put a significant cognitive burden on the user to localize the router, or, in the case of channel visualizations like Figure 3.2e, do not support localization at all.
3.3.2 Signal Coverage

The region where a user can communicate with a router is its signal coverage area. For network planning, a user must identify the undesirable dead zones with no signal coverage to guide the placement of additional routers. Another use case is containing a network's signal coverage. For instance, a network's coverage could go beyond the intended bounds, potentially causing a security vulnerability. While heatmaps do a fine job of visualizing coverage, they fail at attributing the coverage to specific routers due to the limitations of their color maps. Glyph visualizations rely heavily on their sample points and can easily miss coverage areas by not sampling the region. As with localization, channel visualizations do not support signal coverage analysis.

3.3.3 Interference Potential

Communication among many routers and devices in a limited space can lead to interference. Each router typically communicates on a single frequency channel, a region of the RF spectrum. In adjacent-channel interference, devices communicate on nearby channels that overlap, leading to signal distortion and packet loss. In co-channel interference, routers must wait for their channel to be open before sending their packets. As a result, users may experience slowed network access when there is high network demand. Therefore, the ability to monitor a network for areas with high interference potential would be a vital asset to any network administrator. The only visualization from Figure 3.2 that aids the user in detecting frequency interference is the channel visualization. However, it fails to encapsulate environmental knowledge.

3.3.4 Signal Awareness

Our visualization must be grounded in the real world to allow users to make actionable decisions from their analyses. For example, discovering a coverage gap can guide the placement of a new router.
Therefore, we must develop a visual paradigm that allows users to maintain their sense of location as they evaluate the network performance. Further, given the overwhelming amount of signal information, we also need to make sure an analyst can view the most salient network information at any given time, avoiding the need to sift through irrelevant information. While heatmaps and glyph visualizations can maintain some environmental knowledge, heatmaps are constrained to two dimensions, and glyphs heavily occlude their environment. Channel visualizations are far worse, containing no knowledge of the environment.

3.4 Visualizations

Our goal for WaveRider was to develop a WiFi network planning tool for a multi-floor environment. WaveRider is built to assist a network manager in creating and maintaining a wireless network that provides a stable and fast connection for their customers. Because WiFi analysis involves domain knowledge, WaveRider is built for users with this domain expertise, not for laypeople. We also designed WaveRider as an immersive Virtual Reality (VR) application to allow for intuitive movement and interaction with the environment. To facilitate real-time decision-making while in the data's environment, we designed WaveRider with embedded data representations. The use of immersive technology allows the user to conduct their data analysis in the environment and thus make actionable decisions in place, including decisions about where to focus data collection. Our goal is to enable experts to understand interactions between their physical spaces and the "signal space" while making decisions about router placement and data collection in the environment. Going into the design phase, we determined seven features that we required for our novel visualizations. We derived these requirements from our use cases and through discussion with signal experts.
Our new visualizations must: • Maintain environmental context for the user • Show multiple routers simultaneously • Allow the user to identify a specific router • Delineate a clear boundary for each router’s coverage area •