ABSTRACT

Title of Thesis: INFORMATION OLFACTATION: THEORY, DESIGN, AND EVALUATION
Biswaksen Patnaik, Master of Science, 2019
Thesis directed by: Dr. Niklas Elmqvist, College of Information Studies

Olfactory feedback for analytical tasks is a virtually unexplored area in spite of the advantages it offers for information recall, feature identification, and location detection. Here we introduce the concept of information olfactation as the fragrant sibling of information visualization, and discuss how scent can be used to convey data. Building on a review of the human olfactory system and mirroring common visualization practice, we propose olfactory marks, the substrate in which they exist, and the olfactory channels available to designers. To exemplify this idea, we present viScent(1.0): a six-scent stereo olfactory display capable of conveying olfactory glyphs of varying temperature and direction, as well as a corresponding software system that integrates the display with a traditional visualization display. We also conduct a comprehensive perceptual experiment on information olfactation: the use of olfactory marks and channels to convey data. More specifically, following the example of graphical perception studies, we design an experiment that studies the perceptual accuracy of four olfactory channels (scent type, scent intensity, airflow, and temperature) for conveying three different types of data (nominal, ordinal, and quantitative). We also present details of an advanced 24-scent olfactory display, viScent(2.0), and the software framework we designed in order to run this experiment. Our results yield a ranking of olfactory channels for each data type that follows principles similar to the rankings for visual channels derived by Mackinlay, Cleveland and McGill, and Bertin.

INFORMATION OLFACTATION: THEORY, DESIGN, AND EVALUATION

by Biswaksen Patnaik

Thesis submitted to the Faculty of the Graduate School of the University of Maryland, College Park in partial fulfillment of the requirements for the degree of Master of Science, 2019

Advisory Committee:
Dr. Niklas Elmqvist, Chair/Advisor
Dr. Catherine Plaisant
Dr. Eun Kyoung Choe

© Copyright by Biswaksen Patnaik 2019

Dedication

To my beloved family and friends.

Acknowledgments

I owe my gratitude to all the people who have made this thesis possible and because of whom my graduate experience has been one that I will cherish forever. First and foremost, I would like to thank my advisor, Professor Niklas Elmqvist, for giving me an invaluable opportunity to work on this extremely interesting project. He has motivated and encouraged me throughout this journey and has been immensely supportive as a mentor. I am thankful to Dr. Catherine Plaisant and Dr. Eun Kyoung Choe for their constructive criticism and support. Most of the work presented in this thesis was undertaken with the help of an amazing group of collaborators who have been passionate, creative, and dedicated in this research. I am very thankful to Andrea Batch, Moses Akazue, and Karthik Badam. I would also like to acknowledge the help and support of Craig Taylor and Carol Boston from the iSchool. I am thankful to UMD Terrapin Works for helping me with the fabrication, and to Dr. Evan Golub for all the help with the HCIL Hackerspace. I owe my deepest thanks to my family: my mother and father, who have always stood by me, guided me through my career, and pulled me through against impossible odds at times. Words cannot express the gratitude I owe them.
My housemates have been a crucial factor in my finishing smoothly; I would like to express my gratitude to Ram Kaushik and Arvind Rajiv. Finally, I would like to acknowledge financial support from the National Science Foundation.

Table of Contents

Dedication
Acknowledgments
List of Figures

1 Introduction
  1.1 Overview
  1.2 Thesis Statement
  1.3 Contributions
  1.4 List of Papers

2 Background
  2.1 Olfaction in Humans
    2.1.1 Chemical Topography of Smells
    2.1.2 Fragrance Classification
    2.1.3 Model Synthesis
  2.2 Theories of Smell
  2.3 Task Taxonomy of Olfaction
    2.3.1 Information Recall
    2.3.2 Object Localization and Tracking
    2.3.3 Feature Detection and Discrimination
    2.3.4 The Smell of Time and Somatic Sniffing
    2.3.5 Human Olfaction is Cross-Modal and Associative
  2.4 Olfactory Displays
  2.5 Olfactory Display Architecture
    2.5.1 Ultrasonic Atomization
    2.5.2 Atomization through Venturi Effect
    2.5.3 Evaporative Diffusion
    2.5.4 Electro-Stimulation
  2.6 Limitations of Scent

3 Information Olfactation: Theory and Design
  3.1 Visualization Alphabet: Marks and Channels
    3.1.1 Understanding Marks and Channels
    3.1.2 Effectiveness of Marks and Channels: A Perceptual Psychology Perspective
  3.2 Perceptual Tasks in Olfaction
    3.2.1 Olfactory Marks
      3.2.1.1 Smell Glyphs and Fragrance Classes
      3.2.1.2 Molecular Bouquet
      3.2.1.3 Airburst
    3.2.2 Olfactory Channels
      3.2.2.1 Direction (or, Origin Position)
      3.2.2.2 Saturation (or, Chemo-Intensity)
      3.2.2.3 Air Flow Rate (or, Kinetic Intensity)
      3.2.2.4 Air Quality (or, Climate)
      3.2.2.5 Temporal Pattern (or, Scent Animation)
  3.3 Substrates of Olfaction
    3.3.1 Dimensionality
    3.3.2 Structures
    3.3.3 Airburst Revisited (as a Substrate)

4 viScent: An Olfactory Display System
  4.1 Lo-Fi Prototype
  4.2 viScent(1.0)
    4.2.1 viScent(1.0): Implementation
    4.2.2 viScent(1.0): System Overview
    4.2.3 viScent(1.0): Visual Interface and Interaction
    4.2.4 viScent(1.0): Examples
      4.2.4.1 VR 3D Network Graph
      4.2.4.2 2D Network Graph
      4.2.4.3 2D Line and Points
  4.3 viScent(2.0)
    4.3.1 viScent(2.0): System Overview
    4.3.2 viScent(2.0): Control System
    4.3.3 viScent(2.0): Channel - Scent Types
    4.3.4 viScent(2.0): Channel - Scent Intensity
    4.3.5 viScent(2.0): Channel - Airflow Rate
    4.3.6 viScent(2.0): Channel - Air Temperature
      4.3.6.1 Air Heating
      4.3.6.2 Air Cooling
      4.3.6.3 Challenges

5 Evaluation and Results
  5.1 Evaluation
    5.1.1 Method
    5.1.2 Apparatus
    5.1.3 Participants
    5.1.4 Experimental Factors
    5.1.5 Tasks and Stimuli
    5.1.6 Experimental Design
    5.1.7 Procedure
    5.1.8 Hypotheses
  5.2 Results
    5.2.1 Overview
    5.2.2 Nominal Data
    5.2.3 Ordinal Data
    5.2.4 Quantitative Data
    5.2.5 Subjective Feedback
  5.3 Discussion
    5.3.1 Quantitative Data
    5.3.2 Ordinal Data
    5.3.3 Nominal Data
    5.3.4 Smelling Least, Smelling Best
    5.3.5 Limitations

6 Conclusion and Future Work

Bibliography

List of Figures

1.1 viScent(1.0)
1.2 viScent(2.0)
2.1 Castro's fragrance classes
3.1 Marks and channels of visualization
3.2 Mackinlay's ranking of visual variables organized by data type
3.3 Summary of olfactory marks
3.4 Summary of olfactory channels
4.1 Proof of concept of an olfactory display
4.2 viScent(1.0) and the 2D display: magnified view of each of the primary components on the prototype, depicting the ultrasonic atomizers, diffusing fan, pneumatic solenoid valves, Peltier-based thermoelectric heating system, and the accompanying 2D visualization
4.3 viScent(1.0) and VR: a magnified view of ultrasonic atomization in play and bi-directional air stream nozzles for creating an olfactory spatial mapping
4.4 viScent(1.0) tabletop display
4.5 viScent(1.0) olfactory display for VR
4.6 viScent(1.0) visual-olfactory display in VR: 3D network graph
4.7 viScent(1.0) visual-olfactory display in 2D: network graph
4.8 viScent(1.0) visual-olfactory display in 2D: line and points
4.9 The viScent(2.0) system
4.10 viScent(2.0) control tower
4.11 A typical viScent(2.0) user session
4.12 The coolant reservoir and heat exchanger in viScent(2.0)
4.13 A high-resolution image of the olfactory display in viScent(2.0)
5.1 Example of a user study in progress
5.2 Example of a user study in progress; user-reported confidence is recorded
5.3 Example of a quantitative task: participants select the value being conveyed using scent; the screen is followed by a dialog asking for the participant's confidence on a 5-level Likert scale
5.4 Example of an ordinal task: participants select the value being conveyed using scent; the screen is followed by a dialog asking for the participant's confidence on a 5-level Likert scale
5.5 Example of a nominal task: participants select the value being conveyed using scent; the screen is followed by a dialog asking for the participant's confidence on a 5-level Likert scale
5.6 Normalized distance from correct answer
5.7 Ratio of exactly correct answers
5.8 Overall completion time for all data types DT
5.9 Self-reported confidence rating per data type DT
5.10 Self-reported confidence rating per olfactory channel OC
5.11 Correctness organized by data type DT and then by olfactory channel OC
5.12 Completion times organized by data type DT and then by olfactory channel OC
5.13 Error organized by data type DT and then by olfactory channel OC
5.14 Correctness organized by data type DT and then by olfactory channel OC
5.15 Completion times organized by data type DT and then by olfactory channel OC
5.16 Error organized by data type DT and then by olfactory channel OC
5.17 Completion times organized by data type DT and then by olfactory channel OC
6.1 Ranking of olfactory channels organized by data type, inspired by Mackinlay's ranking of visual channels [1]

Chapter 1: Introduction

1.1 Overview

The rich cinnamon of mom's apple pie cooling on the kitchen table; the refreshing tang of a fir tree permeating the house during a childhood Christmas; a beloved dog's wet fur as he cuddles next to you in bed after an evening walk in summer rain. Olfaction, the chemoreception that gives rise to the sense of smell, is a powerful memory stimulant and can yield unexpected associations. The sense of smell can also be a powerful complement to the human visual system, serving as a sensory substitute [2] in situations when the user cannot look or cannot see; e.g., when the user's eyes are busy elsewhere, or when the user has a visual impairment. Marcel Proust (1871–1922) famously wrote in In Search of Lost Time about how a single bite of a madeleine (a small cookie from the Lorraine region of northeastern France) gave rise to vivid childhood memories of the narrator's aunt sharing the same cookie. Beyond memory, smell (and its close relative, taste) is a powerful sense used for detecting danger, testing (and enjoying) food, and receiving pheromones that trigger social responses. But can smell be used to convey data? This question has not yet been satisfactorily investigated in the visualization community.

In this thesis, the design space of olfaction in humans as a multimodal mechanism to convey information is explored. The exploration begins with a review of the olfactory system in humans. Mirroring the visual marks and visual channels traditionally used in data visualization [3], a design space is derived consisting of olfactory marks (the building blocks of scent, including fragrance, bouquet, and airburst) as well as their olfactory channels, including intensity, direction, air flow rate, burst frequency, and climate. Existing approaches to designing olfactory interfaces are then summarized, using these as well as real-world examples to present a case for information olfactation: the use of interactive olfactory representations of data to amplify cognition. To showcase the potential of information olfactation, viScent(1.0) (Figure 1.1), a prototype system consisting of both hardware and software components, is presented.

Figure 1.1: Two six-scent olfactory displays for information olfactation. The left shows the mobile setup, which is intended to be hung on a virtual reality head-mounted display (HTC Vive depicted) and used for immersive analytics applications; the right shows the tabletop unit, which is to be used for more traditional 2D visualization setups. The prototypes have different capabilities: the mobile version has two pipes for varying the directionality of scents, whereas the tabletop unit can control the air temperature.

The viScent(1.0) hardware rig is a six-fragrance olfactory display with stereo output (i.e., supporting both nostrils, which is important for scent direction) as well as temperature control. The corresponding viScent software framework allows for rapidly building information olfactation applications that harness the rig. Three example scenarios using viScent are presented: one immersive scenario for Virtual Reality (VR), and two using standard 2D visualizations.
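The thesis describes the viScent(1.0) software framework only at a high level, so the following is a hypothetical sketch of how a visualization event might be bound to the rig. The class name, the ON/OFF command strings, the serial port, and the node-type mapping are all invented for illustration and are not the actual viScent API; only the idea of pulsing a scent pod in response to a visualization event comes from the text above.

```python
# Hypothetical sketch only: command strings, port name, and the glyph table
# are illustrative inventions, not the actual viScent(1.0) framework API.
import time
import serial  # pyserial

GLYPHS = {"person": 1, "paper": 2, "venue": 3}  # node type -> atomizer pod

class OlfactoryDisplay:
    def __init__(self, port: str = "/dev/ttyACM0", baud: int = 9600):
        self.link = serial.Serial(port, baud, timeout=1)

    def emit(self, pod: int, seconds: float = 2.0) -> None:
        # Pulse one ultrasonic atomizer: switch it on, hold, switch it off.
        self.link.write(f"ON {pod}\n".encode())
        time.sleep(seconds)
        self.link.write(f"OFF {pod}\n".encode())

def on_node_hover(node_type: str, display: OlfactoryDisplay) -> None:
    """Visualization callback: emit the smell glyph bound to a graph node type."""
    display.emit(GLYPHS[node_type])
```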
While this provides a good theoretical framework, a controlled experiment is conducted to provide empirical validation. Following the example of graphical perception [1, 4, 5], a controlled perceptual experiment is designed to elicit internal rankings of four olfactory channels (analogous to visual channels [3] or visual variables [4]) for three different forms of data: nominal, ordinal, and quantitative [6]. The olfactory channels were the following:

• Scent type (S): The type of scent displayed, drawn from different fragrance classes [7] (such as lemon, lavender, and leather);
• Scent intensity (I): The amount of scent displayed (low to high);
• Airflow Rate (A): The speed of the air (i.e., wind) delivering the scent from the display (low to high); and
• Temperature (T): The temperature of the air delivering the scent from the display (cold to warm).

Conducting this evaluation required fabricating an olfactory display capable of supporting all of these olfactory channels within the necessary data ranges. Thus, a secondary contribution of this research is viScent(2.0): a high-fidelity olfactory display consisting of 24 essential oil bottles designed for ultrasonic atomization (Figure 1.2). The display is controlled using a software API interfacing with an Arduino ATMega2560. Beyond the essential oil containers, which typically are configured to emit six different smells at four different intensities each, the display can also control the temperature using thermoelectric cooling and heating (a separate chamber fitted with heating coils and Peltier modules) as well as wind speed using speed-controllable fans. A sketch of this channel mapping appears at the end of this overview.

Figure 1.2: viScent(2.0) is a 24-pod olfactory display system designed specifically to evaluate the encoding of information into smells and the cross-modal signals associated with olfaction.

Not surprisingly, our results mostly follow analogous results from graphical perception. In particular, based on perceptual accuracy for different stimuli, it was found that quantitative data is best represented by temperature and wind speed, and nominal data is best represented by temperature and scent type. Ordinal data was best conveyed using scent type, which typically has no encoded ordering. That scent intensity was outperformed by all other channels was also unexpected, although this is explained by the fact that varying levels of intensity of the same scent are difficult to perceive. This development of a hierarchical framework of olfactory channel effectiveness provides a reference tool for designers and opens the design space of encoding information through smells for interactive immersive displays, ubiquitous analytics, and general analytical environments.

While we do not suggest that smell will ever replace vision (or even sound or touch) in a data visualization, our investigation of this topic gives strong indication that smell can be used as a natural complement to vision for ambient and passive effects [8], such as smell glyphs. In particular, we see the primary utility of olfactory displays such as ours in immersive [9] and ubiquitous analytics [10], the new flavors of visual analytics that endeavor to optimize the flow [11] and fluidity [12] of the user by immersing them in the analytic environment. For such situations, we suggest that an olfactory display can provide a powerful and hitherto unused sensory modality with significant potential to improve the presence and flow of the analyst [13–15].
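To make the viScent(2.0) control path concrete, the minimal sketch below maps a normalized data value onto each of the four experimental channels. The six-smells-by-four-intensities pod layout follows the description above, but the scent names beyond lemon, lavender, and leather, the fan PWM range, and the temperature span are assumptions for illustration, not the actual viScent(2.0) protocol.

```python
# A minimal sketch, not the actual viScent(2.0) API: mint/rose/cedar, the PWM
# range, and the temperature span below are assumptions for illustration.
SCENTS = ["lemon", "lavender", "leather", "mint", "rose", "cedar"]  # 6 smells
LEVELS = 4  # four dilution pods per smell (6 x 4 = 24 pods total)

def encode(channel: str, value: float):
    """Map a normalized data value in [0, 1] to a set-point for one channel."""
    if channel == "S":   # scent type: nominal value -> one of six pod groups
        return SCENTS[round(value * (len(SCENTS) - 1))]
    if channel == "I":   # scent intensity: pick one of four dilution pods
        return 1 + round(value * (LEVELS - 1))
    if channel == "A":   # airflow rate: fan PWM duty cycle (assumed 0-255)
        return round(value * 255)
    if channel == "T":   # temperature: Peltier set-point in deg C (assumed span)
        return 15.0 + value * (40.0 - 15.0)
    raise ValueError(f"unknown channel: {channel}")

# Example: convey the quantitative value 0.8 on the airflow channel.
print(encode("A", 0.8))  # -> 204
```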
1.2 Thesis Statement

This thesis explores and evaluates the sense of smell as a medium for conveying and perceiving information.

1.3 Contributions

The contributions are (1) what we believe to be the first definition of the information olfactation topic, (2) high-fidelity olfactory display systems capable of harnessing olfactory channels, and (3) the benchmark results of a perceptual experiment to establish the first ranking of olfactation channel effectiveness.

1.4 List of Papers

The work presented in this thesis on the theory and model of information olfactation has been published in IEEE (Institute of Electrical and Electronics Engineers) Transactions on Visualization and Computer Graphics as Information Olfactation: Harnessing Scent to Convey Data [16]. The evaluation of information olfactation has been submitted to a major visualization conference.

Chapter 2: Background

2.1 Olfaction in Humans

Humans are able to distinguish between a vast number of discrete fragrances: over one trillion, by one estimate [17]. There are two perspectives on olfactory perception relevant to interface design: a chemical-topographic model and a fragrance classification model.

2.1.1 Chemical Topography of Smells

All of our senses create a spatial mapping of the world around us, and olfaction is no different [18, 19]. How this is done in olfaction, however, is an ongoing area of research that is still not fully explored [18, 20, 21]. To some extent, an initial landscape of smells is created through dimensionality reduction. The epithelial tissue inside the nasal cavity is lined with millions of olfactory sensor neurons, each with an odorant receptor. There are approximately 1,000 different types of odorant receptors [22, 23], each able to detect a range of molecule formations. A large amount of dimensionality reduction is done in the epithelium alone [22]. The sensor neurons are all connected (synapsed) to the olfactory bulb in the brain via smaller bundles of nerves called the glomeruli; it is through this bundling that the dimensionality of the information received from the olfactory sensor neurons is further reduced [22]. The brain classifies sensory input as a distinct fragrance, but only after several steps. Olfactory receptors first detect high-dimensional information about the composition of volatile molecules in the air. The pathways leading from the olfactory sensor neurons to the olfactory bulb then perform heavy preprocessing to reduce the complexity of this information in both the epithelium and the olfactory bulb. Thus, an initial landscape of smells is created by the nervous system before entering cognition. It is perhaps as a result of this pre-processing that odor-in-the-head recollection (the internal re-creation of fragrances in the absence of external stimuli) is subjectively considered very difficult in comparison to visual or aural memories [24].

2.1.2 Fragrance Classification

The notion that people group odors into fragrance categories is not a new one, but what those categories entail has historically been subjective and culture-dependent [25, 26]. It is only recently that robust empirical research supporting fragrance classification models has appeared, both in psychology [7, 27] and in interface design [28]. Castro et al. [7] introduce the classification framework used in this research (Figure 2.1).
Classifying olfactory input as a distinct fragrance is an important part of the olfaction process in humans; it allows us to assign meaning to smells and use the contextual information we associate with specific odors in our decision-making processes [26]. While the estimate of "one trillion" distinct odors is contested as representing the upper, rather than lower, bound of human olfactory discrimination [29], it still presents a powerful argument toward the use of fragrances for analysis of high-dimensional data.

Figure 2.1: Castro's fragrance classes.

2.1.3 Model Synthesis

While dimensionality reduction of detected odors is done before the information reaches the cortex, humans perform a greater degree of odor processing consciously relative to other animals [26]. Categorical clustering of odors into broad types may be considered a further reduction in dimensionality performed by humans: by creating associative classes of odors, we simplify the data our odor receptors collect from the air around us, perceived by our conscious brain as a distinct fragrance, into a conceptual grouping of fragrances [7, 26].

Human perception of olfactory stimuli is an exercise in dimensionality reduction and cognitive filtering. From the chemo-topographic model, we understand that there are spatial and, as we will discuss in Section 2.3.4, temporal substrates of information that are detected and interpreted by the human olfactory system both consciously and unconsciously. From the classification model, we understand that human olfaction is dependent on association and context.

2.2 Theories of Smell

There have been several attempts to explain the process of olfaction: specifically, how odor is produced and how the structure of a molecule determines the olfactory stimulus it generates. The first stage of odor perception, namely the odorant-receptor interaction, is primarily dependent on the structure of the odorant molecule [30]. It is widely agreed that, for a substance to be odorous, it must possess certain properties: it must be lipid-soluble, water-soluble (even if only slightly), and volatile, and it must be present in a sufficient concentration (which varies between substances) around the receptor site to be perceived as odorous [30]. A comprehensive study by Klopping [30] provides a detailed exploration of the primary theories of olfaction accepted to date.

The Dyson-Wright vibrational theory is primarily based on contiguity of the odorant and the receptor, where the vibrational frequencies of the odorant are transferred to the receptor. Dyson proposed that olfactory receptors were activated by vibrations in the odorant in the frequency range of 1400–3500 cm⁻¹. While this theory attempts to explain the odorant-receptor interaction, it does not clearly account for the nerve impulses generated at the receptor site.

After reviewing the available theories of olfaction, Moncrieff presented the site-shifting theory. According to this theory, for a molecule to be odorous, it must possess a characteristic shape and size that fits the molecular site of the receptor in the nose. The theory also held that flexible molecules have a higher chance of accommodating themselves to the receptor sites than rigid molecules.
In 1964, Amoore was the first to provide experimental evidence for the site-shifting theory in his stereochemical theory of olfaction, in which he stated that, beyond the size and shape of the molecules, factors such as their electronic status affect odor perception. In 1996, Luca Turin proposed a theory of olfaction based on the vibration of molecules [31]. He proposed that the odor of a molecule is characterized by its vibrations, which are sensed by the human body through a phenomenon called inelastic electron tunneling. In quantum mechanics, electron tunneling is a phenomenon in which an electron passes through a potential barrier that it could not cross under the rules of classical mechanics. According to this hypothesis, as an odorant binds to a receptor in the nose, an electron is transferred to the receptor via tunneling, activating it. This transfer also makes the odorant vibrate, which is detected on a sub-atomic scale by the receptor. The patterns of vibration are characteristic of the odorant molecules (dependent on the molecular structure), causing different olfactory stimuli. Turin's theory of olfaction is widely accepted as the theory that relates molecular structure, molecular vibration, and odor. For simplicity, one may refer to Amoore's site-shifting theory, whereas for an in-depth explanation of the perception of odors on a subatomic scale, one might refer to Turin's work on spectroscopic mechanisms for odor reception.

2.3 Task Taxonomy of Olfaction

Obrist et al. [32] have proposed ten categories for describing the user's experience in the context of olfactory interfaces, including past association and memory recall (categories 1 and 2), stimulation and attention (categories 3 and 4), identification and detection (category 5), aversion (category 6; e.g., the scent of a decaying corpse), feeling intruded upon (category 7), associations with other people (category 8), and smell affecting mood, behavior, and expectations (categories 9 and 10) [32]. While this model offers a solid foundation for framing the ways a user may receive and experience olfactory feedback, it does not extend to recommending signifiers for the affordances of tasks relevant to olfactory displays that convey information [33]. This section explores the features of odor detection in human sensation and perception that we believe to be most relevant to information olfactation design, in the context of the varieties of tasks that are signaled or augmented by odor.

We define information olfactation to refer specifically to the design, creation, and transmission of olfactory stimuli to convey information. Following the introduction of data edibilization [34], a design space for encoding data into taste, information olfactation represents the next piece of the multisensory information visualization puzzle [35, 36]. Human beings, like all animals, rely on odor not only to receive but also to send information, although the latter is typically done unconsciously [37]. We thus distinguish information olfactation both from odor reception as it affects user experience and from this unconscious olfactory communication.

2.3.1 Information Recall

Decades of research indicate that odor detection is a potent trigger for eliciting memories of prior experience [38–40]. In fact, there is evidence that odor may be a better mode for eliciting recall of certain classes of information than visual or verbal/word cues [24, 41].
The caveat that this applies only to certain classes of information warrants elaboration: there is a difference in the age distribution of memory formation (olfaction-associated memories tend to be from earlier stages of life relative to visual- or aural-associated ones), and olfaction-associated memories tend to be more sensitive to pairing with events carrying greater emotional intensity than those with other sensory associations [43, 44]. With that distinction made, these relationships go both ways. Memories created in association with olfactory signals experience less decay over longer spans of time than visual or aural ones [24], and olfactory signals that retrieve information from human memory tend to also evoke stronger emotion than other sensory stimuli [42]. In the short term, there is also some evidence of an "olfactory working memory" which can be updated with new information as the individual is exposed to stimuli, and which depends heavily on the individual verbalizing to create a semantic association (i.e., naming the perceived odor out loud) [45].

Task performance may be improved when fragrances are contextually out of place: the smell of motor oil while walking through an orange grove would aid in forming a stronger memory than the smell of oranges in the same context [40]. An immediate suggestion must be made in light of this observation: while there is some evidence supporting associations between specific odors and colors [46] and shapes [47], "natural" contextual associations between specific odors and visual marks and channels for abstract information are largely unexplored. How should a bar chart smell, for example? Or a force-directed graph? As such, with respect to odor-vision concept pairs, the visualization (olfactation) community is free to develop design standards.

2.3.2 Object Localization and Tracking

Using scent to locate objects is a feat we typically associate with other animals: the drug-sniffing dog, the bloodhound put to work tracking a deer or an escaped criminal, the truffle-sniffing pig, or the shark scenting a drop of human blood in the ocean. Human beings, however, are also capable of successfully tracking objects by scent, and their tracking ability improves with practice [48]. The intuition is reasonable: imagine that someone, distracted, misses the garbage can when throwing away the remains of their meal, and it rolls unnoticed behind a couch. This person, within the ensuing days, would likely be able to detect and locate the remains for disposal by smell.

The olfactory system's structures for delivering stimuli to sensor neurons affect perception [49], and the dual-nostril structure of the nose is an important mechanism for tracking objects [48, 50]. Bilateral scent detection is an important feature of olfactory system structure for navigation not only in humans, but in mammals in general [51] (and, incidentally, in robotic sensor systems as well [52–54]). Directionality detection by scent is also associative: visual stimuli simulating leftward motion presented with a specific odor, for example, lead individuals to associate that odor with leftward motion [55]. This effect is significant enough that it can affect the way we see objects in motion when the direction of that motion is ambiguous [55].
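Bilateral (stereo) scent delivery suggests a simple way for a display to hint at direction. The toy model below splits output between two nozzles using a linear pan law; the pan law itself is an illustrative assumption rather than a validated psychophysical model of olfactory localization.

```python
def stereo_scent(azimuth_deg: float, base_intensity: float = 1.0):
    """Toy pan model: split scent output between left and right nozzles so an
    odor source appears to lie at azimuth_deg (-90 = hard left, +90 = hard
    right). The linear pan law is an assumption, not a validated model."""
    pan = (azimuth_deg + 90.0) / 180.0   # 0.0 (left) .. 1.0 (right)
    left = base_intensity * (1.0 - pan)
    right = base_intensity * pan
    return left, right

print(stereo_scent(-45.0))  # source front-left: (0.75, 0.25)
```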
2.3.3 Feature Detection and Discrimination

Section 2.1.2 reviewed empirical and theoretical work on the ability of people to distinguish between many different, distinct odor types [7, 17]. If each odor type, as described by any number of fragrance classification schemata, is mapped to a particular feature of a dataset, it stands to reason that it should improve task performance with regard to making distinctions between objects in a view [7, 17]. Dmitrenko et al. [56], in evaluating the abilities of drivers to discern between different odors conveying semantic meaning, found that deriving meaning from different types of odor was well within the range of human olfactory ability. Human ability to discern between different odors depends not only on long-term memory associations but also on the use of working memory, which is strongly influenced by vocalizing semantic codes for a given odor [45]. To give a real-world example, the detection of certain types of smells by firefighters (burnt rubber or grease, for example) is a means of evaluating the scene of a fire [25]. We argue, then, that feature detection is a task that may be augmented by olfactory feedback in analytical environments; it is a task that fits the associative nature of human olfaction, and it is a clear area for application of the classes listed in Figure 2.1.

2.3.4 The Smell of Time and Somatic Sniffing

Imagine the very first moment you enter your favorite café or bakery: the smell of the freshly brewed coffee and baked goods is likely at the fore of your mind. Consider how the relative priority of the ambient aromas changes after your first fifteen minutes of sitting down; likely, you hardly even notice most of the smells you detected upon entering the building. It is equally likely that you will still be able to pick out the specific aroma of a freshly baked pie if it passes near your table, or, worse, the stink of a garbage can holding yesterday's tuna sandwich being knocked over near where you are sitting.

In this scenario, walking through the entrance and encountering the bouquet of fragrances results in a spike of activity in certain parts of your brain lasting between 15 and 30 seconds; after this period, the activity in these regions begins not only to return to its original level but, for a subset of the regions, to be actively suppressed below a baseline level [57, 58]. You have habituated. In the orbitofrontal cortex, however, there is ongoing activity that lasts as long as your exposure to the fragrances. This ongoing activity may facilitate associative memory creation [57, 58], and it also allows for the discrimination between old and new odors in the air in a single sniff [59].

Among ventilation-breathing organisms, olfaction is typically a cyclical process: there is a short period of time when air is exhaled, during which the breather receives little to no external odor data, and a short period of time when air is inhaled, during which the breather receives a new sample of information from the air around them. This cyclical process of passive (autonomic) odor detection is often referred to as a "sniff cycle," which has been proposed as a standard "unit of smell" [59, 60]. It may be argued that this cycle also marks a unit of human perception of the passage of time. Beyond this passive approach to odor detection, the breather can perform active smelling via somatic (voluntary) sniffing [50, 61].
It is worth noting that this act of voluntary sniffing, independent of the autonomic nervous system and external stimuli, is a feat unique to humans [62]. While voluntary sniffing on the part of the user is one way to get around the suppression of olfactory perception, all is not lost in terms of the olfactory interface designer's ability to counteract habituation. In empirical studies of olfactory perception, intermittent nine-second-long bursts of odor molecules produce a sustained level of activity in the parts of the brain that typically show suppressed activity after post-exposure adjustment during the autonomic sniff cycle; participants of an empirical study using this staggered-burst approach experienced no habituation to the odor being tested [57].

The olfactory display designer must account for temporal perception and object discrimination through habituation, the sniff cycle, and active sniffing [62, 63]. Furthermore, the temporal order of stimuli exposure matters: not merely the order of odors in isolation, but their order relative to other stimuli [49]. In other words, the perception of time through odor is cross-modal.

2.3.5 Human Olfaction is Cross-Modal and Associative

Odor affects the way we see objects, and vision affects olfaction [42, 47, 55, 64]. Research in neuropsychology has pondered the question of "seeing smell" for over a century, and structures of the brain seem to support the notion that there is a direct connection between these senses [42, 65]. This relationship is further validated by the recent development of convolutional neural networks to detect the presence of olfactory features in images [66]. Empirical cognitive psychology studies have explored techniques for producing odor associations with the effect of causing people to favor perceiving objects as moving in one direction over the other [55], or to be able to "smell" whether shapes are rounded or angular [47]. The latter study by Hanson-Vaux et al. [47] found that lemon, for example, smells angular; a finding that appeared in human-computer interaction (HCI) work as users' predisposition to create spiked sculptures with lemon-scented material [67]. Interfaces taking advantage of the cross-modality of odor are able to, for example, change the taste of the food the user is eating using visual and olfactory cues [68].

Beyond dimensionality reduction, olfaction is associative: association is how we make sense of smells; we use scent and our memory thereof to identify people, animals, food, threats, and other objects in the world around us [25, 26, 32]. Further, these associations and the cross-modality of our senses affect the things that we create and the way we interact with the objects around us, particularly those that we have shared, or intend to share, with others [32, 67]. The evidence is clear: odor has the power to modify how we experience vision, time, and space [47, 64, 65]. It can aid our ability to find things in the world around us and in our own heads [38, 48]. We argue that the range of tasks that olfactory displays can impact makes olfaction worth considering for any analytical environment that extends beyond vision.

2.4 Olfactory Displays

A specialized form of data physicalization uses smell to represent data.
If olfaction is the sense of smell, then an olfactory display is a programmable device that is capable of creating an olfactory stimulus by (a) emitting odorous molecules (chemo-stimulation) [69], or (b) directly activating odor receptors in the nose (electro-stimulation) [70]. The former category, creating olfactory stimulus by emitting odor, can be further organized based on its mode of distribution: ultrasonic atomization, atomization through the Venturi effect, and evaporative diffusion. Patnaik et al. [16] provide an overview of these mechanisms.

The most straightforward usage of olfactory displays is for increasing presence in immersive applications, such as Virtual Reality training and recreation. In fact, Sensorama [71], the very first VR implementation, patented in 1962, included "at least one" scent channel as well as a fan to generate a breeze on the user's face.

However, our interest in this research is narrower in that we focus on olfactory displays used for information olfactation [16]: using scent to convey abstract data (such as stock market price over time, node types in a social network, or the distribution of data in a histogram) rather than a realistic phenomenon (such as the smell of a damp cave in a dungeon crawler, the tang of gunpowder in a combat simulation, or the heavy aroma of motor oil in an airplane mechanic training application). Washburn and Jones [72] were among the first to suggest this practice, listing several existing olfactory devices that could be used for data visualization. However, most existing displays are typically used for a small number of notifications, such as Dobbelstein et al.'s "scentifications" using the inScent pendant [73], Dmitrenko et al.'s use of odor for driving-related messages [74], and Grace and Steward's peppermint scent to alert drowsy drivers to prevent them from falling asleep at the wheel [75]. Similarly, in his master's dissertation, Kaye [25] talks about smell icons ("smicons") and proposes a "symbolic" olfactory display that, for example, uses the scent of mint for a rising stock market and lemon for a falling one.

2.5 Olfactory Display Architecture

Olfactory displays may essentially be described as the superclass of interfaces for information olfactation. An olfactory display unit is a device that can be programmed to create an olfactory stimulus by emitting odorous molecules (chemo-stimulation) or by directly creating a sense of smell (electro-stimulation). In this section, we categorize olfactory displays based on their mechanism of producing an odor stimulus, then discuss existing applications of olfactory feedback in design for information transmission. Broadly, these mechanisms include ultrasonic atomization, atomization through the Venturi effect, evaporative diffusion, and electro-stimulation.

2.5.1 Ultrasonic Atomization

Ultrasonic atomization systems employ a ceramic diaphragm vibrating at an ultrasonic frequency to convert a liquid solution (often aromatic essence oils) into a mist that eventually diffuses into the surroundings. Atomizers may be embedded in wearables to create personalized olfactory displays [76]. Ultrasonic atomization has also been used in designing tabletop olfactory systems for peripheral awareness [77]. Smell-based interfaces have employed this technology to create interactive peripherals for olfactory tagging [78] and engaging experiences in art museums [79].
Integrating ultrasonic atomization with ultrasonic transducer arrays offers potential for mid-air odor control [80]. Ultrasonic atomization (our mechanism of choice) and atomization through the Venturi effect (Section 2.5.2) both begin emitting odor molecules instantly. With both of these methods, however, the scent may need some time to travel to the user, depending on the position of the display's point of origin relative to the user.

2.5.2 Atomization through Venturi Effect

The Venturi effect is the reduction in fluid pressure that results when a fluid flows through a constricted section (or choke) of a pipe. When pressurized air blows past the orifice of a cartridge holding a solution, it lowers the pressure within the cartridge, drawing up the liquid and converting it into a fine mist. Atomization through the Venturi effect may be observed in a variety of everyday appliances: consumer goods and industrial devices, from perfume bottles to airbrushes, make use of this principle. Interfaces for peripheral awareness have employed this technology to encode information into smells, for example by mapping rises or falls in the stock market to distinct smells, or by designing reminder systems with a task mapped to a smell [25].

2.5.3 Evaporative Diffusion

Evaporative diffusion attenuates the rate at which a solution diffuses by controlling parameters such as air flow and temperature. Air flow may be enhanced by adding a fan; temperature may be raised by placing a heating element that warms the solution. Vaporization through attenuated air flow has been employed in designing olfactory displays for immersive media, accompanying VR [81] and large displays [82]. Heat-assisted vaporization has also been used in creating scent notifications for peripheral awareness [73]. Compared with the other mechanisms discussed here, evaporative diffusion has a very long delay between the event intended to trigger diffusion and the perception of an odor by the user.

2.5.4 Electro-Stimulation

Electro-stimulation is the direct activation of receptor neurons through controlled electrical impulses. In the case of olfactory stimulus, this would mean that an electrical impulse must be passed directly to the odor receptors on the epithelial tissue deep in the nasal cavity. In other words, an electro-stimulation interface would require that wires be inserted deep into the nose and contact be made with odor receptor neurons. Because there are roughly 1,000 types of odor receptors, developing a method for conveying a consistent class or molecular composition of smell is nontrivial with this approach.

While this area remains less explored because of these practical considerations, there is evidence that the digital stimulation of smell for multisensory communication is a possibility for interface design [70]. There are several benefits to this approach relative to the others: it sidesteps the need for an aromatic solution, which has the potential to present allergy risks to users, and it does not require that the air be cleared of diffused odor molecules to discontinue the stimuli. Electro-stimulation is also instantaneous: the odor receptor neurons are directly stimulated by the display system in real time. However, it is quite invasive, requiring that electrodes be placed inside the user's nasal cavity. It also has a low resolution; in the current state of the art, simulating any specific odor is very difficult [70, 83].
These and other benefits and drawbacks of digitizing chemical senses are described at greater length by Spence et al. [69].

2.6 Limitations of Scent

Anosmia, the inability to detect fragrances, is a symptom associated with a wide variety of causal factors. These include conditions affecting the brain, like meningitis or Parkinson's disease; congenital conditions such as Kallmann syndrome; lasting damage to the mucosa or olfactory receptor neurons (often caused by compounds that pass through the nasal passage, like cigarette smoke or nasal sprays [84, 85]); and inflammation or short-term sinus congestion and blockage associated with temporary conditions like influenza or the common cold.

Another potentially limiting phenomenon, known as odor fatigue, olfactory adaptation, or affective habituation, is the loss in distinctive perception of an odor due to prolonged exposure. Olfaction also has a very low resolution compared to vision: the resolution of information perceived by the sense of smell is limited by the low bandwidth of information transfer in the human body. Finally, olfactory perception may be subjective, based on personal preferences, making it difficult to standardize the use of fragrances.

Chapter 3: Information Olfactation: Theory and Design

3.1 Visualization Alphabet: Marks and Channels

Visual marks and channels constitute the building blocks of information visualization. It is important not only to understand them but also to be able to determine their effectiveness in expressing information.

3.1.1 Understanding Marks and Channels

In visualization, a mark is a unit of conveyance: a point, line, or area, for example [3]. A visual channel represents the dimensions along which a mark may be parameterized: position, area/size, shape, hue, color value, and so on, along with temporal transitions of all of the above (motion, for example, meaning a change in position; or growth, meaning a change in size) [3] (Figure 3.1). Visual spatial substrates are the medium of conveyance: the space in which visual elements exist, the structure thereof, and the mapping of features of the data to be visualized to that space [86].

Figure 3.1: Marks and channels of visualization [3]. Top: marks are geometric primitives. Bottom: visual channels control the appearance of marks.

3.1.2 Effectiveness of Marks and Channels: A Perceptual Psychology Perspective

The field of perceptual psychology [87], a subfield of cognitive psychology, is concerned with the human sensory system, which in turn can be seen as the preconscious aspects of human cognition [88, 89]. In general, the interpretation of any external stimulus by our sensory system (e.g., sight, sound, touch, smell, or taste) falls under the umbrella of perception [87]. Thus, perceptual psychology is of interest to the data visualization community not because of the stimulus itself (typically visual), or even the characteristics of the sensory systems, but rather in terms of the information-carrying capacity of the stimulus as well as our bodies' ability to interpret this information [90].

For data visualization, therefore, the primary instrument for understanding perception is the graphical perception experiment, in which human participants are asked to interpret visual stimuli in a controlled laboratory setting. Some of the early work in this vein dates back to paper-based statistical graphics prior to raster displays, or even computers, such as results by Eells et al. [91] from 1927, by Croxton et al.
[92] in 1927 on pie charts, bar charts, and circle diagrams, and by Croxton et al. [93] in 1932 on shapes for comparison. The work of Peterson and Schramm [94] from 1954 is seminal in that it systematically studied eight different statistical graphs and derived resulting guidelines.

Modern graphical perception for visualization can be said to start from such holistic surveys that not merely measure accuracy for individual chart types or shapes, but attempt to study and rank multiple ones. Already in 1967, Jacques Bertin, a cartographer by training, assembled a ranking of so-called visual variables (also known as visual channels [3]) from his personal expertise and current practice in cartography [4]. Cleveland and McGill [5] assembled results from many perception studies to provide a similar ranking of visual variables backed by empirical data; remarkably, the rankings are more or less identical. Mackinlay [1] later extended Cleveland and McGill's ranking into a more fine-grained model suitable for automatic visualization design. While Mackinlay never empirically verified his model, his ranking is foundational in that it introduced variations depending on whether the data to visualize is nominal, ordinal, or quantitative [6]. Figure 3.2 shows a summary of this ranking, organized by data type.

Figure 3.2: Mackinlay's ranking of visual variables organized by data type [1], adapted from a ranking by Cleveland and McGill [5]. The variables in gray boxes are not relevant to those data types.

3.2 Perceptual Tasks in Olfaction

The olfactory equivalents of marks, channels, and substrates are not entirely straightforward. Foundational cognitive psychology studies investigating the atomic variables of olfaction, for example, do not themselves distinguish between modes and units of conveyance [95, 96]. To mirror the traditions in visualization, this section outlines the design space of information olfactation in terms of its primitives.

3.2.1 Olfactory Marks

Olfactory marks are the base elements of the olfactory display for information olfactation. Where visual marks are internally consistent spatial primitives (points, lines, areas, and volumes [3]), we have identified three olfactory marks (Figure 3.3) that must be thought of as existing in two different domains. The first domain is chemo-associative: the chemical composition of odors displayed to the user may be linked to real-world objects as smell glyphs based on the fragrance classification model of olfaction (Section 2.1.2), or it may be a molecular bouquet more closely aligned with the chemical topography model (Section 2.1.1), a complex cocktail of odor molecules that is either completely fabricated or is simply a mixture of enough glyphs that it becomes difficult to distinguish between them. The second domain is a spatiotemporal one: the air is a vehicle for transporting odor molecules, but it also conveys information about the odor source that is fundamentally inseparable from the chemical composition of the molecules it carries. Our proposal of the airburst as a mark is based on a more holistic look at the sense of smell as cross-modal (Sections 2.3.4 and 2.3.5) and anatomy-dependent (Section 2.3.2).

Figure 3.3: Summary of olfactory marks.

3.2.1.1 Smell Glyphs and Fragrance Classes

Given real-world objects with distinct "natural" odors (e.g., oranges, pine, lavender, and so on), any odor representation of such an object may be considered a smell glyph.
These glyphs act as an olfactory mark, mapping fragrances to discrete information features. Smell glyphs can be grouped into categorical fragrance classes using empirical work clustering odors by perceived similarity. We base our categorical clusters (Figure 3.3) on empirical and theoretical work by Castro et al. [7] breaking odors into eight discrete groupings (which they refrain from naming, so we take the liberty here): citrus, acerbic-synthetic, leafy, floral, fruity-non-citrus, woody, spicy-smoky-nutty, and heavy-rotten. In this way, distinct fragrances corresponding to real-world smells (oranges, pine, lavender, and so on) may act as a broad (class) or narrow (glyph) olfactory counterpart to visual glyphs.

3.2.1.2 Molecular Bouquet

Consider the example given in Section 2.3.1, in which the scent of motor oil in an orange grove is presented as being more likely to contribute to the formation of an odor-associated memory than the smell of oranges alone. While the smell of motor oil alone may be enough to elicit the memory of the event of exposure to these stimuli, the argument might be made that the combination of motor oil and all of the other odors in the air at the time (say, oranges, grass, dirt, and bark) is even more likely to do so. The person exposed to this cocktail of fragrances, rather than picking out the distinct scent glyphs, might simply remember it as the smell of the orchard.

The chemical topography model of perception treats olfaction as a chemo-reception process determined by the differential binding affinities of constituent molecules to the olfactory receptors. While the human nose can effectively detect fragrance classes, it becomes difficult to recognize individual constituents when more than a few individual fragrances are bundled together [25]. Complex combinations of odor molecules may present an opportunity, however, in creating a unique fingerprint for embedding nuanced information views in the user's head. Once imprinted, the unique bouquet may facilitate improved conceptual recall of the information in the view once emitted again, with or without the visualization present.

3.2.1.3 Airburst

Olfaction is how we detect volatile molecules in the air around us. Because of this volatility, there is no olfactory equivalent to a static image. We argue that, with the exception of the direct electro-stimulation of olfactory sensor neurons, divorcing the air carrying odor stimuli from the basic units of conveyance of those stimuli (i.e., considering it to fall strictly into the domain of "substrate") does not reflect the way human beings experience smell. If the "sniff cycle" is a unit of olfaction [59, 60], then we propose the airburst as a temporal unit of olfactation: in Figure 3.3, we visually represent airbursts as the individual cross-sections of a directed stream of air flowing toward the user (the sections of air falling between the vertical lines dividing the figure).

3.2.2 Olfactory Channels

Like the visual channels, which control features of visual marks, olfactory channels are characteristics of the olfactory marks that can be adjusted depending on values in the data. In this section, we present five olfactory channels (Figure 3.4): direction, saturation, air flow rate, air quality (or climate), and temporal pattern.

Figure 3.4: Summary of olfactory channels.
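To gather this alphabet in one place, the sketch below models an olfactory mark and its channels as a small data structure. The fragrance classes come from Castro et al. [7] as named above; the field names follow the channel definitions in the subsections that follow, while the defaults and units are illustrative assumptions rather than prescribed values.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class FragranceClass(Enum):
    """The eight groupings of Castro et al. [7], as named in Section 3.2.1.1."""
    CITRUS = auto()
    ACERBIC_SYNTHETIC = auto()
    LEAFY = auto()
    FLORAL = auto()
    FRUITY_NON_CITRUS = auto()
    WOODY = auto()
    SPICY_SMOKY_NUTTY = auto()
    HEAVY_ROTTEN = auto()

@dataclass
class OlfactoryMark:
    """A smell glyph plus the channel settings that parameterize it.
    Defaults and units below are illustrative assumptions."""
    glyph: FragranceClass
    direction_deg: float = 0.0   # origin position relative to the user
    saturation: float = 0.05     # chemo-intensity (volume fraction of solute)
    flow_rate: float = 0.5       # kinetic intensity, normalized 0..1
    temperature_c: float = 22.0  # air quality / climate
    burst_secs: list = field(default_factory=lambda: [9.0, 3.0])  # on/off pattern

mark = OlfactoryMark(glyph=FragranceClass.CITRUS, flow_rate=0.8)
```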
3.2.2.1 Direction (or, Origin Position)

The bilateral anatomy of the nose reflects the underused ability in humans to detect the direction of odors and track them to their perceived origin [48, 50]. By taking advantage of the stereo nature of the nose, the olfactory interface designer can create the impression that objects in the space around the user are emitting an odor from a point of origin relative to the user's own position. This could be used to direct the user's head to positions in three-dimensional space where they are best situated to interact with information encoded for any of their senses.

3.2.2.2 Saturation (or, Chemo-Intensity)

A solution is a liquid mixture in which the minor component—the solute—is uniformly distributed within the major component—the solvent, often water. The saturation of a solution is a measure of the amount of solute—the minor component, such as an aromatic essential oil—dissolved in the solution. We define the concentration of an aromatic solution in terms of volume fraction, i.e., the volume of solute divided by the volume of the solution. Early studies in experimental psychology have indicated that people experience distinct perception of three intensity levels (three levels of dilution) of odorants [97].

3.2.2.3 Air Flow Rate (or, Kinetic Intensity)

Several early empirical studies on the factors influencing olfaction have indicated that the rate of flow of the air saturated with odorous molecules heavily influences the experience of the participant [96, 98]. A positive change in the flow rate is associated with a positive change in the detection of the olfactory stimuli [95, 99].

To propose an untested hypothesis for future research, it is possible that air flow rate may, by virtue of the cross-modality of olfactory perception, act as an indicator of distance between the user and the source of the odor: in a "natural" setting, an odor may be carried farther by a strong wind. Thus, if the odor molecule saturation remains constant, an increase in the flow rate may be perceived by the user as a greater distance between their nose and the source of the odor. The rate of air flow should therefore be considered an olfactory channel with respect not only to the ability of the user to detect a scent, but also as a channel with a potential direct relationship to the user's perception of the space around them.

3.2.2.4 Air Quality (or, Climate)

Thermoception—the sense of perceiving temperature—depends on thermoreceptors found on the skin of the human body, including on the epithelium lining the nasal cavity. A thermoreceptor is a non-specialized sense receptor; specifically, it is the receptive portion of a sensory neuron that codes absolute and relative changes in temperature. Olfaction of distinct odors at different temperatures can convey different information, and (at least in some species of animals) higher temperatures tend to improve odor molecule detection [100]. Humidity, barometric pressure, and CO2 concentration have also been shown to affect olfactory perception [101, 102]. Temperature, humidity, and other non-olfactory qualities of the air carrying the odor may be considered an auxiliary channel in information olfactation.
3.2.2.5 Temporal Pattern (or, Scent Animation)

Perception of a stimulus often involves a two-stage process within the human nervous system: an analytical categorization of the stimulus into similar features or patterns (spots of light, frequencies of sound), and a configuration process that determines the percept (the sight of a house, the voice of a human) [103]. We propose the temporal pattern of a mark (or collection thereof) as an olfactory meta-channel: the olfactory equivalent of mark animation (channel change over time) created by moderating sequences of "frames" in an olfactory "view." The temporal pattern of an olfactory view is the set of interval frequencies and durations of diffused odor and auxiliary stimuli (e.g., temperature), and the transitions performed to modify them over time.

3.3 Substrates of Olfaction

As with information visualization, the substrates of information olfactation are spatial in nature. The chemo-topographic mapping of the world around the user constructed via olfactory perception is implicitly a spatial one. We use it to gather information about the place we are situated in and the events that have occurred or will occur in it (rain, for example [104]), as well as to detect and locate objects in that space (consider the first reaction a person may have to a foul smell in their car or house, for example). While we do not claim to have an exhaustive list of all possible substrates for the purpose of designing the olfactory interface, we do propose a few general approaches to mapping data to spatial substrates.

3.3.1 Dimensionality

We have thus far discussed our olfactory marks in the context of their conceptual dimensions—the dimensions of the data—as if smell glyphs represent a single dimension, and a molecular bouquet represents a multitude of dimensions. For the purposes of the user's experience, the reverse is true: a bouquet may contain the mapping of many individual variables as smell glyphs, but when taken as an ambient part of the environment, it is itself perceived as only one dimension. As part of our passive sniff cycle, we unconsciously reduce all dimensions down to one [60]—until we need to locate a single odor by engaging in directional active sniffing [61]. This is where spatial substrates (and spatial dimensionality) become most relevant.

In our 2D implementation, we create a spatially one-dimensional olfactation corresponding to a two-dimensional visualization. In our VR implementation, we use air flow direction and the position and orientation of the user relative to an object in a VR environment to, for lack of a better word, spoof the user into perceiving a spatially three-dimensional olfactory substrate. However, the substrate of an olfactation need not be exclusively spatially one- or three-dimensional. In Smelling Screen, Matsukura et al. [105] implement a two-dimensional spatial mapping featuring olfactory signals localized to different regions of an LCD screen. By blowing olfactory molecules from the four corners of a monitor, they were able to create smell regions on the screen. Similarly, in Smellmap, McLean [106] projects a cartographic map of Amsterdam onto a 2D plane of regions coated with 11 custom fragrances based on smells described in a spatial survey.

3.3.2 Structures

As with information visualization [86], information olfactation is at its core the mapping of features and entries in the data to be olfactated to its olfactory structures.
Olfactory structures are the olfactory representations of spatial substrates, temporal encodings, olfactory marks, and the features thereof to be controlled by the olfactory channels. In the VR example described in Section 4.2.4.1, we map our olfactation to the same spatial substrate as our visualization. In our 2D examples, we chose a single spatial dimension, which does not match the 2D visualization. These decisions, chosen for the simplicity of illustrating our model, are not prescriptive: we encourage future researchers in the visualization community to use olfactation, and to explore and evaluate a variety of mappings.

3.3.3 Airburst Revisited (as a substrate)

We have argued that the air, which is necessary as a medium of conveyance for olfactory stimuli (again, excepting direct electro-stimulation), may be viewed as an olfactory mark—divided into segments of arbitrary length as a unit of olfactation. With that said, an airburst cannot be removed from temporal encoding: the number of units of an airburst containing the odorant (frequency) and auxiliary stimuli (temperature, humidity, etc.), and the rate at which they connect with the user (flow rate), are measures of time. Likewise, the spatial nature of the airburst cannot be ignored: excluding electro-stimulation and nose-to-the-ground scent tracking, the direction from which the air flows onto the user is a determinant of the sense of spatial encoding. Even in our prototype's one-dimensional mode, the user experienced the flow of air from a direction in front of them, albeit a stationary one. The airburst is not only a vehicle for transporting odor molecules, but a vehicle it is nonetheless.

Chapter 4: viScent: An Olfactory Display System

4.1 Lo-Fi Prototype

The design consists of a visual-olfactory display system, a VR headset, a display unit, and a workstation (Figure 4.1). This prototype was designed as a proof of concept. The olfactory display system is controlled by interactions with the visual display system. The current model of our prototype only allows for switching a single fragrance on and off based on interactions with objects in the view.

The olfactory display system consists of an ultrasonic atomizer attached to an essence oil cartridge. Upon actuation, a piezo-electric disk in the atomizer vibrates at an ultrasonic frequency, atomizing the aromatic solution, which is released in the form of a mist. A pneumatic nozzle, connected to an air pump, produces a jet of air that carries the odorous mist to a diffusing fan. The diffusing fan blends the odorous mist with the jet of air, producing a gentle diffused flow directed at the user. The system is controlled by an Arduino-based control unit built around an ATmega328P microcontroller.

Figure 4.1: Proof of concept of an olfactory display.

4.2 viScent(1.0)

4.2.1 viScent(1.0): Implementation

The design consists of a visual-olfactory display system, a VR headset, a display unit, and a workstation. The olfactory display system is controlled by interactions with the visual display system. The system allows switching between scents, altering the temperature of the air carrying the scents, changing the burst frequency of the scents, and changing the direction of the air flowing at the user.
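To illustrate how these four operations might reach the hardware, the following is a minimal Arduino-style sketch of a serial command dispatcher for such a control unit. The single-letter command protocol and all pin assignments are hypothetical; the actual viScent firmware is not reproduced here.

```cpp
// Hypothetical command protocol: "S3" selects scent 3, "T1"/"T0" turns
// heated airflow on/off, "B9" sets the burst period (seconds), and
// "D0"/"D1" selects the left/right air stream. Pins are placeholders.
const int ATOMIZER_PINS[6] = {2, 3, 4, 5, 6, 7};  // ultrasonic atomizers
const int HEATER_PIN = 8;                          // Peltier airflow valve
const int DIRECTION_PINS[2] = {9, 10};             // solenoid valves (L/R)

int burstSeconds = 9;  // default 9 s on / 9 s off (see Section 4.2.2)

void setup() {
  Serial.begin(9600);
  for (int pin : ATOMIZER_PINS) pinMode(pin, OUTPUT);
  pinMode(HEATER_PIN, OUTPUT);
  for (int pin : DIRECTION_PINS) pinMode(pin, OUTPUT);
}

void loop() {
  if (Serial.available() >= 2) {
    char cmd = Serial.read();
    int arg = Serial.read() - '0';
    switch (cmd) {
      case 'S':  // select scent: enable one atomizer, disable the rest
        for (int i = 0; i < 6; i++)
          digitalWrite(ATOMIZER_PINS[i], i == arg ? HIGH : LOW);
        break;
      case 'T':  // heated airflow on/off
        digitalWrite(HEATER_PIN, arg ? HIGH : LOW);
        break;
      case 'B':  // burst period in seconds
        burstSeconds = arg;
        break;
      case 'D':  // air direction: left or right stream
        digitalWrite(DIRECTION_PINS[0], arg == 0 ? HIGH : LOW);
        digitalWrite(DIRECTION_PINS[1], arg == 1 ? HIGH : LOW);
        break;
    }
  }
}
```

In this arrangement, the Unity-side visualization only needs to write two bytes over the serial port whenever the user interacts with an object in the view.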
4.2.2 viScent(1.0): System Overview

Our implementation includes a multi-scent olfactory display system that can be converted between supporting visualizations in a two-dimensional view (Figures 4.2, 4.4) and those in a VR environment (Figures 4.3, 4.5). Apart from delivering aromatic scents, the 2D visualization mode is equipped with air temperature variation based on user interaction. For the sake of reproducibility, we based our desktop olfactory display on the principles described in Herrera et al. [81], with simplicity and cost in mind, although we have extended it to meet the requirements of our model of olfactation by using ultrasonic diffusers that allow the user to select different smell glyphs, and by using a solenoid-controlled airflow through a Peltier module to alter the temperature of the airburst. The visualization in VR mode incorporates a head-mounted display (HMD) augmented with an array of ultrasonic diffusers with bi-directional airflow output for directional tracking. This gives the user the impression that a three-dimensional olfactory spatial mapping exists around them. In all implementations, we have set the burst frequency to intervals of nine seconds ON (aromatic atomization activated) and nine seconds OFF (aromatic atomization deactivated) in order to avoid habituation; a sketch of this duty cycle follows after the figures below.

The olfactory display system consists of six ultrasonic atomizers attached to essence oil cartridges. Upon actuation, a piezo-electric disk vibrates at an ultrasonic frequency, atomizing the aromatic solution, which is released in the form of a mist. The cartridges sit on a tabletop display unit for the 2D visualization mode, whereas the VR visualization mode holds tiny cartridge pods attached to the HMD. The tabletop olfactory display unit employs a diffusing fan that blends the odorous mist with air, producing a diffused flow directed at the user. It also employs a Peltier-based air heating system to produce a stream of thermally controlled (heated) air. An air compressor feeds pressurized air into the Peltier-based heating system to produce a warm air jet venting alongside the diffusing fan. This tabletop display houses the control unit, employing an Arduino Mega 2560 (based on the ATmega2560 microcontroller) that controls and activates each of the systems. The VR visualization mode employs a bi-directional air stream output attached to either side of the HMD. This bi-directional air stream runs on the pressurized air delivered by the compressor. All the pneumatic channels are controlled through electromagnetic solenoid valves.

Figure 4.2: viScent(1.0) and the 2D display: magnified view of each of the primary components on the prototype depicting the ultrasonic atomizers, diffusing fan, pneumatic solenoid valves, Peltier-based thermoelectric heating system, and the accompanying 2D visualization.

Figure 4.3: viScent and VR: a magnified view of ultrasonic atomization in play and bi-directional air stream nozzles for creating an olfactory spatial mapping.

Figure 4.4: viScent(1.0) table top display.

Figure 4.5: viScent(1.0) olfactory display for VR.
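The nine-second duty cycle mentioned above can be implemented as a simple non-blocking timer in the control unit's main loop, so that serial commands remain responsive between bursts. The sketch below is illustrative only; the pin assignment is hypothetical, and only the nine-second interval is taken from the text above.

```cpp
// Non-blocking 9 s ON / 9 s OFF burst cycle to avoid olfactory
// habituation. activeAtomizerPin is whichever atomizer is selected.
const unsigned long BURST_MS = 9000UL;  // nine seconds, per Section 4.2.2

int activeAtomizerPin = 2;              // hypothetical pin
bool atomizing = false;
unsigned long lastToggleMs = 0;

void updateBurstCycle() {
  unsigned long now = millis();
  if (now - lastToggleMs >= BURST_MS) {
    atomizing = !atomizing;             // flip between ON and OFF phases
    digitalWrite(activeAtomizerPin, atomizing ? HIGH : LOW);
    lastToggleMs = now;
  }
}

// Called from loop() alongside the command handler:
// void loop() { handleSerial(); updateBurstCycle(); }
```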
4.2.3 viScent(1.0): Visual Interface and Interaction

Research implementing network graph visualizations for immersive, collaborative analytics has found a task-speed and movement-balance advantage for HMDs over CAVEs, while CAVEs have an advantage in communication between users [107]. Because our implementation was not collaborative, we used this as the basis for our decision to use an HMD over a CAVE. In the 3D VR environment, grabbing a node with a controller triggers the diffusion of odor. In the 2D network graph view, clicking a node acts as the diffusion trigger. In the 2D line and point chart, clicking a point diffuses odor and may switch the thermal air flow on or off. The visual interface was designed in Unity, allowing for easy integration between the Arduino control and objects in the view.

4.2.4 viScent(1.0): Examples

We propose information olfactation largely as a supplement to, rather than a replacement for, information visualization, and so our examples all include basic visual components. Our examples include 2-dimensional and 3-dimensional force-directed network graph layouts [108], both of which used the SNAP Bitcoin dataset [109], and a 2-dimensional line and point chart using multivariate building air quality time series data [110]. Rather than presenting our examples as a standard, we use them as a call to action: our mapping of data to visual and olfactory marks and channels is not prescriptive, but a proof of concept to be improved upon.

4.2.4.1 VR 3D Network Graph

In light of recent work arguing that immersive environments are the most practical means for introducing taste and olfactory displays [69], and the formation of the domain of immersive analytics [9], we have opted to present one example in VR using the viScent HMD. While there are other papers that offer more sophisticated implementations of information visualization in VR environments [111, 112], the purpose of our application was to act as a simple example of a data structure with a well-explored spatial encoding where olfactation could augment the user's analytical performance. In this example (Figure 4.6), we used visual channels and olfactory marks to complement each other: each node represents an entity, its color and smell glyph are determined by its average transaction rating profile (binned into six quantiles, corresponding to our number of smell glyphs), and each link represents a Bitcoin transaction between two entities in the SNAP dataset [109].

4.2.4.2 2D Network Graph

To isolate the ways in which the user's experience of olfactory stimuli differs in a three-dimensional workspace relative to a traditional, two-dimensional one, one of our two visual examples is the two-dimensional sibling of our VR network graph, using the same dataset and the more traditional 2D force-directed network layout (Figure 4.7). In both our 3D and 2D network visualizations, we chose the arbitrary pairing of pear-black, lemon-orange, leather-red, coconut-white, lavender-blue, and peppermint-green, in order of quantile (low to high), to represent our nodes in smell-color combinations; a sketch of this quantile-to-glyph mapping follows below. Unlike the 3D view, nodes in the 2D network were not only mapped to smell glyphs, but also to air burst temperature, which was not mapped to a corresponding visual channel: entities that mostly transacted during the weekend were cool, and those that transacted mainly during the week were hot.

Figure 4.6: viScent(1.0) visual-olfactory display in VR: 3D Network Graph.

Figure 4.7: viScent(1.0) visual-olfactory display in 2D: Network Graph.

Figure 4.8: viScent(1.0) visual-olfactory display in 2D: Line and Points.
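To make the quantile binning concrete, the following sketch shows one way to derive the six scent-color bins from a list of average transaction ratings. The function and variable names are our own; this is not the actual viScent(1.0) code.

```cpp
#include <algorithm>
#include <vector>

// Smell-color pairs in quantile order (low to high), per Section 4.2.4.2.
const char* SCENTS[6] = {"pear",    "lemon",    "leather",
                         "coconut", "lavender", "peppermint"};
const char* COLORS[6] = {"black",   "orange",   "red",
                         "white",   "blue",     "green"};

// Map a node's rating to one of six quantile bins over all ratings.
// (Sorting per call is wasteful but keeps the sketch self-contained.)
int quantileBin(double rating, std::vector<double> all) {
    std::sort(all.begin(), all.end());
    for (int bin = 0; bin < 5; ++bin) {
        // Upper boundary of this bin: the (bin+1)/6 quantile of the data.
        double boundary = all[(all.size() * (bin + 1)) / 6];
        if (rating < boundary) return bin;
    }
    return 5;  // top quantile
}
```

A node with rating r would then be drawn in COLORS[quantileBin(r, ratings)] and diffused as SCENTS[quantileBin(r, ratings)].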
4.2.4.3 2D Line and Points

While the purpose of this research is not to explore the entire domain of information visualization with respect to the areas in which olfactation may play a supportive role, we did want to include a multidimensional dataset other than the network graph data used for our VR visualization. We selected building air quality data recorded over the span of several days because features related to attributes of the air seemed an appropriate fit for an implementation in which air itself is a display structure [110]. In this example (Figure 4.8), we used smell glyphs to represent a variable that was not represented visually: CO2 levels. As with our network example, we binned the observations into six groups (the number of smell glyphs we built into our prototype) based on the quantile they fell into for the variable mapped to smell glyphs (CO2 level). We mapped temperature to the actual temperature in the building, although our prototype only allowed for a high/low switch (temperatures in the upper half of the distribution were encoded as hot; the lower half was encoded as cool).

4.3 viScent(2.0)

Here we describe the design and implementation of viScent 2.0 (Figures 4.9, 4.10, 4.11, 4.13), an olfactory display intended for information olfactation. The system design is segmented into components, each responsible for producing unique olfactory perceptual elements that we describe as olfactory channels for encoding information.

4.3.1 viScent(2.0): System Overview

The olfactory display system viScent 2.0 is a tabletop olfactory display capable of producing a range of olfactory stimuli for information olfactation. More specifically, the viScent olfactory channels include:

• Scent type: the specific fragrance (lemon, lavender, leather);
• Scent intensity: the amount of scent of each type;
• Airflow rate: the speed of the air carrying the scent; and
• Air temperature: the temperature of the air carrying the scent.

Figure 4.9: The viScent(2.0) system.

Figure 4.10: viScent(2.0) control tower.

Figure 4.11: A typical viScent(2.0) user session.

Figure 4.12: The coolant reservoir and heat exchanger in viScent(2.0).

Figure 4.13: A high-resolution image of the olfactory display in viScent(2.0).

Each of these channels is managed by a specific system component, four in total, as well as a control system; we describe them below. All components are enclosed in two custom-designed physical modules: the olfactory display unit and the control tower. The build is a MakerBeam (anodized aluminum beams) framework covered with acrylic plexiglass panels, laser cut into shape. The olfactory display unit acts as an output device housing all four component subsystems.

The control tower houses the control system and power supplies. It is responsible for controlling the functioning of all four subsystems. The tower also has digital readouts (Figure 4.10) that display relevant information such as temperature (room, coolant, heating, and cooling core) and power (power and voltage drawn by the heating, cooling, and control systems).

4.3.2 viScent(2.0): Control System

The control system is housed in the uppermost section of the control tower and acts as the brain of the entire system. It is run by an ATmega2560-based microcontroller, which interfaces through a USB cable with a computer running Unity-based software developed to control and operate the entire system.
4.3.3 viScent(2.0): Channel - Scent Types

We define scent classes as discrete fragrances used as a means of encoding information. We use essential oils diluted with water as the source of the scent. The oil-water mixture is atomized with an ultrasonic transducer controlled by the ATmega2560 microcontroller. The transducer sits on a cork fitted to a glass bottle containing the oil-water mixture. A cotton bud fitted underneath the transducer acts as a channel carrying the scent from the bottle up to the transducer. All bottles are fitted onto the acrylic panel of the display with a custom-designed 3D-printed housing. We use six distinct fragrances for the scent class (five in the experiment).

4.3.4 viScent(2.0): Channel - Scent Intensity

We define scent intensity as the intensity of a certain fragrance. To define scent intensity quantitatively, we use the concept of volume fraction: the volume of a constituent divided by the volume of all the constituents of the mixture prior to mixing. Volume fraction is a dimensionless quantity. For our experiment, we dilute essential oils by mixing them with water. Our system uses five levels of dilution, producing five intensity levels of a certain fragrance. The volume fractions of our essential oil mixtures (for the scent intensity experiment) are:

• 0.005 (approx.): 0.2 ml essential oil in 40 ml water;
• 0.01 (approx.): 0.4 ml essential oil in 40 ml water;
• 0.02 (approx.): 0.8 ml essential oil in 40 ml water;
• 0.04 (approx.): 1.6 ml essential oil in 40 ml water; and
• 0.08 (approx.): 3.2 ml essential oil in 40 ml water.

For example, 0.2 ml of oil in 40 ml of water gives a volume fraction of 0.2/40.2 ≈ 0.005. These mixtures are then placed in the ultrasonic atomizer pods for atomization during the experiment, with 0.005 being the volume fraction for the lowest intensity and 0.08 being the volume fraction for the highest intensity.

4.3.5 viScent(2.0): Channel - Airflow Rate

We define the airflow rate as an olfactory channel relating to the speed of the air carrying the smell. Here, the idea is to evaluate whether an increasing airflow rate carrying a certain scent relates to an increasing quantity in a dataset, or vice versa. We use 12 V brushless DC fans to diffuse the scent vapors toward the user. The airflow rate is controlled by controlling the fan speed: an L298N driver, itself controlled by the ATmega2560 microcontroller, drives the fans using pulse-width modulation (PWM); see the sketch below.
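As an illustration, mapping one of five airflow levels to a PWM duty cycle might look like the following Arduino-style fragment. The pin assignment and duty-cycle values are hypothetical.

```cpp
// Drive the diffusion fan through the L298N enable pin using PWM.
// FAN_EN_PIN must be a PWM-capable pin on the ATmega2560.
const int FAN_EN_PIN = 11;                         // hypothetical pin

// Five ascending airflow levels as 8-bit PWM duty cycles (0-255).
const int FAN_DUTY[5] = {60, 105, 150, 200, 255};  // illustrative values

void setAirflowLevel(int level) {                  // level in [0, 4]
  analogWrite(FAN_EN_PIN, FAN_DUTY[constrain(level, 0, 4)]);
}
```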
4.3.6 viScent(2.0): Channel - Air Temperature

We define the temperature of the air carrying the scent as a channel in which a rise or fall in the air temperature is associated with a rise or fall in a quantity in the dataset. Designing a thermal system to control air temperature is critical, as achieving rapid temperature changes can be complicated. Here we present a detailed description of the thermal system design and the challenges associated with it. To achieve programmable air temperature control, we segment the thermal interface into two parts, a heating and a cooling system. This lets us switch between heating and cooling instantly, without delays.

4.3.6.1 Air Heating

We use resistive heating to maintain a heated core over which air flows. A blower produces an air stream through the heated core: drawing in air from the surroundings, pushing it through the heated core, and out through a vent that opens up to the user. A MOSFET (AOD4184A, N-channel) controls the current flowing through the resistive heating core, thereby controlling the heating core temperature. The microcontroller (ATmega2560) interfaces with this MOSFET via PWM. We also attach a temperature sensor adjacent to the heating core to monitor the temperature. An L298N driver drives the blower fan, thereby controlling the airflow rate. We tune the airflow rate and the resistive heating jointly to obtain optimally warm air.

4.3.6.2 Air Cooling

The air cooling system is one of the most complex systems employed in this prototype. We use thermoelectric cooling to maintain a cooling core at subzero temperature. Mirroring the heating system, a blower produces an air stream through the cooled core: drawing in air from the surroundings, pushing it through the cooled core, and out through a vent that opens up to the user. We use four thermoelectric modules (TEC1-12706) attached underneath an aluminum heat exchanger, which acts as the cooling core. The modules are sandwiched with thermal adhesive between the aluminum heat exchanger (cooling core) and an aluminum liquid cooling block. On supplying power, the thermoelectric modules act as heat pumps, pulling heat from the cooling core to the side interfacing with the aluminum liquid cooling block. This rapidly cools down the cooling core while heating up the aluminum liquid cooling block. We circulate a coolant (XSPC EC6, a high-performance, high-thermal-conductivity coolant) through the aluminum liquid cooling block and an aluminum heat exchanger that sits outside the olfactory display on the control tower (Figures 4.10, 4.12). This coolant transfers the heat produced by the thermoelectric modules to the heat exchanger. Three 12 V DC cooling fans create a steady stream of airflow through the heat exchanger to bring about efficient heat transfer. The coolant is stored in a reservoir connected to a pump that circulates it. Two temperature sensors are connected to this cooling system: one placed on the cooling core to monitor its temperature, and the other dipped inside the coolant reservoir tube to monitor the temperature of the coolant. The blower fan is controlled by an L298N driver interfaced with the ATmega2560 microcontroller to control the cool airflow. The thermoelectric modules, the cooling core, and the aluminum liquid cooling block are covered with thermal insulation for maximum efficiency.

4.3.6.3 Challenges

While resistive heating and thermoelectric cooling work well for heating and cooling air, respectively, there are a few challenges associated with them. Both systems are extremely power-hungry and demand high current. When designing such systems, one has to account for both criteria: having a suitable power supply as well as a conductor (wire) capable of carrying the current. We address these issues by using a high-wattage power supply and by running multiple conductors in parallel to carry the current.
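To give a sense of scale, a rough power budget for the cooling stage alone can be sketched as follows, assuming the commonly quoted rating of roughly 6 A at about 12 V per TEC1-12706 module (the exact operating point depends on drive voltage and thermal load):

\[
I_{\text{cooling}} \approx 4 \times 6\,\mathrm{A} = 24\,\mathrm{A},
\qquad
P_{\text{cooling}} \approx 24\,\mathrm{A} \times 12\,\mathrm{V} \approx 290\,\mathrm{W}.
\]

Under this assumption, splitting the load across two parallel conductors halves the current per conductor to roughly 12 A, which is one inexpensive way to stay within the ampacity of common hookup wire.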
Chapter 5: Evaluation and Results

5.1 Evaluation

This section describes the evaluation process.

5.1.1 Method

A perceptual experiment was conducted to evaluate the utility of scent for conveying abstract information. In doing so, we followed the analogy of past empirical work on graphical perception such as that catalogued by Cleveland and McGill [5], Mackinlay [1], and Bertin [4]. Similar to these studies, the ultimate purpose of the study was to determine an internal ranking of olfactory channels for different types of data: quantitative, ordinal, and nominal [6].

5.1.2 Apparatus

The study was conducted using the viScent 2.0 device as the olfactory display (Section 4.3). The device was connected to a laptop computer running Microsoft Windows 10. The laptop ran the Unity-based viScent control system, as well as an automated testing framework implemented using the viScent API. Instead of the laptop display, we used a 55-inch display with a resolution of 1920 × 1080 pixels.

The study was conducted in an isolated laboratory space (Figures 5.1, 5.2). The viScent tabletop display was arranged between the participant and the display in a position where it would not obstruct the screen, yet was still at a comfortable distance from the user's face. Participants wore noise-canceling ear protection during the experiment to minimize confounds from ambient noise or sound from the olfactory display. Box and stand fans were used during experiments to maintain air circulation. Furthermore, the space was thoroughly aired out between sessions to eliminate vestigial scents that might otherwise affect task performance.

Figure 5.1: Example of a user study in progress.

Figure 5.2: Example of a user study in progress. User-reported confidence is recorded.

The scent configuration was designed specifically for the experiment. For scent intensity, we used five bottles of different intensities of mango; see Section 4.3.4 for the volume fractions used. For scent type, we used five different scents taken from distinct olfactory groups: leather, orange, peppermint, coffee, and pear. Each scent was represented in three different intensities: the low, mid, and high volume fractions in Section 4.3.4. The remaining four bottles were not used during the experiment.

5.1.3 Participants

We recruited 11 paid participants (7 identified as male, 4 as female) for our experiment. Participant ages ranged from 22 to 30 years. All participants were university students and had a basic knowledge of data and statistics. No participant reported olfactory dysfunction, and we screened participants for allergies to any essential oil used in the experiment, both during recruitment and during informed consent prior to the experiment.

5.1.4 Experimental Factors

We involved the following two factors in our experiment:

• Olfactory Channel (OC): The scent aspects used to convey data. We studied the following four olfactory channels:
  – Scent Type (S): Using one of five scents to convey data (leather, orange, peppermint, coffee, and pear).
  – Scent Intensity (I): The concentration of mango (five fractions, see Section 4.3.4) used to convey data.
  – Airflow Rate (A): The speed of the air (i.e., wind) delivering the scent (conveyed using fan voltages).
  – Temperature (T): The temperature of the air delivering the scent (two cooling, one neutral, and two heating settings).

• Data Type (DT): The specific type of data being conveyed using scent [6]. Informed by Mackinlay's three-part ranking [1], we study three data types:
  – Quantitative (Q): Numbered items that support all arithmetic operations (a combination of Stevens' interval-scale and ratio-scale levels). Example: integers.
  – Ordinal (O): Labeled items that support rank order, but not relative degree of difference between items. Example: Monday, Tuesday, Wednesday, Thursday, Friday (days).
  – Nominal (N): Categorical items that differentiate only on their names or identifiers. Example: Volvo, GMC, Ford, Toyota, Chevrolet (car brands).

5.1.5 Tasks and Stimuli

The experiment involved a single task—identifying a data item conveyed using scent—with three different instantiations depending on the data type DT.
For all of the tasks, the display showed a visual representation of the data type on the screen (Figures 5.3, 5.4, 5.5):

• Quantitative sensing task (TQ): The participant was asked to recover a number encoded using the olfactory channel. Display: a slider with a continuous color scale (Figure 5.3).

• Ordinal sensing task (TO): The participant was asked to recover an ordered data item encoded using the olfactory channel. Display: a slider with a five-segment color scale (Figure 5.4).

• Nominal sensing task (TN): The participant was asked to recover a nominal data item encoded using the olfactory channel. Display: an unordered list of checkboxes (Figure 5.5).

Similarly, the mapping from data values to scent differed depending on which olfactory channel OC was used (nominal data was assigned in random order):

• Scent Type: Items were assigned to scents depending on their position in the range of possible values.

• Scent Intensity: Ascending items (if ordered) were assigned to ascending scent intensities (essential oil saturations).

• Airflow Rate: Ascending items (if ordered) were assigned to ascending fan voltages.

• Temperature: Ascending items (if ordered) were assigned to ascending temperatures (cooling and heating elements).

Figure 5.3: Example of Quantitative Task. Participants select the value being conveyed using scent. The screen is followed by a dialog asking for the participant's confidence on a 5-level Likert scale.

Figure 5.4: Example of Ordinal Task. Participants select the value being conveyed using scent. The screen is followed by a dialog asking for the participant's confidence on a 5-level Likert scale.

Figure 5.5: Example of Nominal Task. Participants select the value being conveyed using scent. The screen is followed by a dialog asking for the participant's confidence on a 5-level Likert scale.

For ordinal and nominal data types, the data range for tasks was five distinct values for each olfactory channel, which translated to five different scents for S, five different scent intensities for I (Section 4.3.4; we did not use a 0% intensity, as the absence of smell is not a reliable signal), five different airflow rates for A (five distinct voltage values to the fans), and five different temperatures for T (one cooling setting, one neutral, and three heating settings with increasing voltage to the radiator elements).

For quantitative data, continuous voltage levels were used for the airflow rate A and the temperature T to represent values. However, since scent type S and scent intensity I rely on discrete bottles whose diffusers can only be turned on or off, bottles were blended to generate additional smells to carry more than five values. In general, blending scents is a non-linear process, so more research may be needed here. With this caveat in mind, additional scent types S were generated by blending the three different intensities of the five scents, so that the range between a scent type S_A and the next scent type S_B was subdivided into regions using blends of scent intensities (H, M, L for high, medium, low) as follows: [0, 0.17) → (H × S_A, 0); [0.17, 0.5) → (M × S_A, L × S_B); [0.5, 0.83) → (L × S_A, M × S_B); and [0.83, 1) → (0, H × S_B). This yielded a total of 13 unique scent blends.
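A sketch of this blending scheme as code may make it more concrete. The function below maps a normalized value onto atomizer activations using the interval boundaries given above; the representation of an activation as (scent index, intensity) pairs is our own.

```cpp
#include <utility>
#include <vector>

// Intensity of a bottle used in a blend: high, medium, or low
// (the three dilutions per scent described in Section 5.1.2).
enum class Level { H, M, L };

// An activation: which of the five scents to diffuse, and at which
// of its three intensities. A blend is one or two activations.
using Activation = std::pair<int, Level>;

// Map a value v in [0, 1) onto the 13 scent blends spanning the five
// scents S_0 ... S_4, following the piecewise scheme in Section 5.1.5.
std::vector<Activation> blendForValue(double v) {
    // Scale to [0, 4): the integer part picks the pair (S_a, S_{a+1}),
    // and the fractional part t picks the blend within that pair.
    double scaled = v * 4.0;
    int a = static_cast<int>(scaled);
    double t = scaled - a;

    if (t < 0.17) return {{a, Level::H}};                    // pure S_a
    if (t < 0.50) return {{a, Level::M}, {a + 1, Level::L}};
    if (t < 0.83) return {{a, Level::L}, {a + 1, Level::M}};
    return {{a + 1, Level::H}};                              // pure S_{a+1}
}
```

Counting the outputs of this function confirms the total: five pure high-intensity scents plus two mixed blends for each of the four adjacent scent pairs yields 13 unique blends.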
For scent intensity I, with five bottles of intensity, each roughly twice as concentrated as the previous (as described in Section 4.3.4), we simply treated each bottle as a digit in a binary number, where the lowest intensity represented position 1, the next position 2, and so on. With five bottles, we had a total of 2^5 = 32 distinct intensity blends, and representing a value simply became a matter of normalizing it to the range [0, 32), deriving the binary digits, and activating the corresponding bottles.

Each experimental condition (OC × DT) was repeated three times. Prior to each block of three repetitions, participants were given a tutorial in which they were shown the "olfactory legend" that corresponded to the visual display. For example, for scent type S, the participant would get to smell each scent as its associated value on screen was highlighted, e.g., that a smell of lemon corresponded to "Volvo." A visual label persisted on the screen showing this scent-to-data mapping throughout the block of repetitions, but the olfactory legend was not repeated again. Continuing the example above, the olfactory label "lemon" would be placed under the data label "Volvo" on screen.

5.1.6 Experimental Design

A within-participants factorial design was used in which each participant was exposed to trials for all olfactory channels and data types. This yielded the following design, with the order of each experimental condition OC × DT randomized to counterbalance systemic effects of practice:

4 Olfactory Channels OC (S, I, A, T)
× 3 Data Types DT (Q, O, N)
× 3 repetitions
× 11 participants
= 396 trials (36 per participant)

For each trial, we collected the accuracy (both whether the answer was correct and, for ordinal and quantitative data, the normalized distance from the correct answer), the completion time, and the Likert-scale confidence rating. The completion time was measured from the beginning of a trial until the end of the 9-second habituation period or until an answer was submitted, whichever was shorter.

5.1.7 Procedure

Upon arriving at a session, participants were first given informed consent in an antechamber outside the laboratory space. The purpose was to screen for allergies to essential oils prior to entering an area that could be potentially hazardous to a person with allergies.

After giving consent, the participant was allowed to enter the laboratory space and was given a brief explanation of the purpose of the study. The experimenter demonstrated the olfactory display and the testing framework. The participant was allowed to train on several example trials using different olfactory channels. Timed trials began once the participant indicated they felt comfortable to proceed.

Each block of experimental conditions OC × DT began with the above-mentioned tutorial, during which the olfactory legend was displayed. This was followed by the three repetitions, each with a new random data value to sense. The same visual legend persisted during the entire block of three trials. During a trial, the olfactory stimulus was active for a total of 9 seconds, corresponding to the typical sensory habituation period of the human olfactory system. After this period, visual feedback indicated that the stimulus was no longer active. Participants were not able to repeat the stimulus in a trial. After submitting a data value corresponding to the olfactory stimulus, the software would pop up a dialog box polling the participant about their level of confidence in their answer on a 5-point Likert scale. This was followed by a blank screen during which a participant could rest between trials, if desired.

Once all trials had been completed, the participant was given an exit survey. They were then compensated $10 for their participation.
A typical session lasted between 50 and 60 minutes; no session lasted more than one hour.

5.1.8 Hypotheses

We formulate the following basic hypotheses and our corresponding motivation for our experiment. We want to emphasize, however, that the goal of this study is not to accept or reject hypotheses, but rather to derive rankings of olfactory channels for different data types.

H1 Participants will be significantly more accurate when sensing nominal (N) data using scent type (S) than all other olfactory channels. The distinct nature of scents lends itself well to differentiating between a corresponding discrete set of data items.

H2 Participants will be significantly more accurate when sensing ordinal (O) and quantitative (Q) data using scent intensity (I) than all other olfactory channels. Our olfactory systems are sensitive to intensity, and its increasing nature fits ordered data types.

H3 Participants will be significantly less accurate when sensing ordinal (O) and quantitative (Q) data using scent type (S) than all other olfactory channels. As a dual to H1, distinct smells are ill-suited to representing ordered data.

5.2 Results

Here we review the results from the study, starting with an overview and then organizing findings into the three data types: nominal, ordinal, and quantitative. For each data type, we discuss accuracy/error and completion time. The reason we slice our results by data type first is that we are not primarily trying to compare different types, but rather to derive an internal ranking within each type (similar to Mackinlay's ranking [1]). We also report the subjective feedback provided by participants.

Because null-hypothesis significance testing is drawing increasing criticism in many fields [113, 114], we instead base our analyses on estimation using effect sizes (means) with 95% confidence intervals [115] (i.e., the range containing the mean with 95% probability). This is also consistent with the American Psychological Association's latest recommendations [116], and is seeing increasing use in visualization and HCI. Dragicevic [114] gives an in-depth background on the practice; a generic sketch of such an interval computation follows below.
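For illustration, a percentile-bootstrap 95% confidence interval for a mean, of the kind commonly used in this estimation-based style of analysis, can be computed as in the sketch below. This is a generic illustration, not the analysis code used in the study.

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Percentile-bootstrap 95% CI for the mean of a sample.
struct Interval { double lo, hi; };

Interval bootstrapMeanCI(const std::vector<double>& sample,
                         int resamples = 10000) {
    std::mt19937 rng(42);  // fixed seed for reproducibility
    std::uniform_int_distribution<size_t> pick(0, sample.size() - 1);

    std::vector<double> means;
    means.reserve(resamples);
    for (int r = 0; r < resamples; ++r) {
        double sum = 0.0;
        for (size_t i = 0; i < sample.size(); ++i) sum += sample[pick(rng)];
        means.push_back(sum / sample.size());
    }
    std::sort(means.begin(), means.end());
    // 2.5th and 97.5th percentiles of the resampled means.
    return {means[static_cast<size_t>(0.025 * resamples)],
            means[static_cast<size_t>(0.975 * resamples)]};
}
```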
5.2.1 Overview

Figures 5.6, 5.7, and 5.8 show a summary of the three dependent measures arranged by data type DT. In this overview of results (error, correctness, and completion time) for each data type, error bars show 95% confidence intervals and dots show means. Note that nominal data N has no error measure, since the data type only supports identity, not distance. Similarly, correctness does not include quantitative data Q, as it represents a continuous input range, and thus providing the exactly correct answer is not a relevant measure. As stated above, ranking between different data types is of no real consequence to our study, so we will not discuss this data further other than to say that (a) the confidence intervals are relatively tight, which indicates that our experiment yielded strong effects, and that (b) there appears to be little support for claiming that any of the data types N, O, and Q exhibits divergent performance compared to any other data type. The only exception may be that the small overlap between CIs (Figure 5.8) suggests that participants were faster when answering trials with nominal N data than with ordinal O data.

Figure 5.6: Normalized distance from correct answer.

Figure 5.7: Ratio of exactly correct answers.

Figure 5.8: Overall completion time for all data types DT.

We also study the perceived confidence rating given by participants after each trial (Figures 5.9, 5.10). Perceived confidence was reported on a Likert scale ranging from -2 (not at all), through 0 (somewhat), to 2 (certain); in these figures, error bars show 95% confidence intervals and dots show averages. The fact that all confidence intervals are well above neutral is another indication that our experiment is a success, at least as perceived by the participants themselves. Results per data type in Figure 5.9 seem very similar, and there is little evidence to suggest that participants expressed different confidence ratings for different data types.

Figure 5.9: Self-reported confidence rating per data type DT.

Figure 5.10: Self-reported confidence rating per olfactory channel OC.

The confidence ratings in Figure 5.10 are a little more divergent based on the olfactory channel. In general, there is some evidence to suggest that temperature T was the channel that participants felt most confident about. Certainly, participants appear to rate their confidence for temperature T as stronger than for airflow rate A. Furthermore, the confidence intervals for scent type S and scent intensity I are larger than those of the other two channels. Other pairwise comparisons are more difficult to discriminate.

Figures 5.11, 5.12, 5.13, 5.14, 5.15, 5.16, and 5.17 summarize the error (distance from the correct value), correctness (ratio of participants' answers that were exactly correct), and completion time.

Figure 5.11: Correctness organized by data type DT and then by olfactory channel OC.

Figure 5.12: Completion times organized by data type DT and then by olfactory channel OC.

Figure 5.13: Error organized by data type DT and then by olfactory channel OC.

Figure 5.14: Correctness organized by data type DT and then by olfactory channel OC.

Figure 5.15: Completion times organized by data type DT and then by olfactory channel OC.

Figure 5.16: Error organized by data type DT and then by olfactory channel OC.

Figure 5.17: Completion times organized by data type DT and then by olfactory channel OC.

5.2.2 Nominal Data

Figures 5.11 and 5.12 show participant performance for nominal data N. Note that the error metric is not applicable to nominal data, as nominal data has no distance property (it only supports equality). Correctness, however, is defined as whether the participant's response exactly matched the stimulus, and taken in aggregate it represents the ratio of trials that were exactly correct. The plot in Figure 5.11 depicts 95% confidence intervals that are rather large, indicating that this was a challenging experimental condition. However, while there is significant overlap between CIs, there is moderate evidence to suggest an ordering between olfactory channels OC. In particular, the plot suggests that trials that used intensity I to convey data were outperformed by those that used temperature T.

As for the completion times (Figure 5.12), which may be less important for our ranking except for providing context, the spread is smaller. Temperature T, intensity I, and scent type S appear to yield similar completion times, with a slight advantage for scent type, but they are all outperformed by airflow rate A.

5.2.3 Ordinal Data

Data collected for trials involving the ordinal data type O are shown in Figures 5.13, 5.14, and 5.15. The error rate (Figure 5.13) exhibited relatively high spread, with trials that used intensity I seemingly resulting in higher error than the other three channels.
There is some evidence that scent type S yields lower error than temperature and airflow, and this is supported by the correctness plot (Figure 5.14), where scent has a clear advantage over the other channels. Intensity, however, appears to be outperformed by the other channels even in this plot. Both temperature T and airflow A appear to manifest similar participant performance for both error and correctness.

As for completion time (Figure 5.15), which again is of less importance provided trials did not require inordinate time to complete (and they did not), the data suggests that scent type S and airflow rate A form a group that required less time to complete trials than temperature T and scent intensity I.

5.2.4 Quantitative Data

Finally, Figures 5.16 and 5.17 show data for the quantitative data type Q. In this case, we do not plot correctness, as the quantitative data trials asked participants to answer using a continuous data scale. It is rather unlikely that participants would be able to answer the exact correct value being conveyed using the olfactory stimulus, so instead we rely on the distance from the correct value (i.e., the error) as the accuracy metric. Studying this error metric (Figure 5.16), there is ample evidence that temperature T was the most accurate olfactory channel for perceiving quantitative data. The data also suggests that the airflow rate A is moderately more accurate than the scent type S, and that both are more accurate than scent intensity I.

Completion times in Figure 5.17 are again relatively tight, but seem to indicate that airflow rate A is faster than scent type S and temperature T, and possibly even scent intensity I. Scent intensity also appears to exhibit shorter completion times than both scent type and temperature.

5.2.5 Subjective Feedback

None of the participants reported ever having used an olfactory display in the past; in fact, many were intrigued by the concept and volunteered for the study mainly to experience it. Several participants expressed curiosity about the real-world applications of our work: "I look forward to see how you will implement this in real life."

In practice, participants spent approximately 45 minutes in each session. While all participants who began the experiment also completed it, several noted that they felt saturated at the end, their ability to smell diminished. However, we saw no indication of this in our analysis. Nevertheless, participants expressed some surprise at the level of difficulty of the trials; said one participant, "this was a lot harder than I thought." This may have arisen from the high granularity expected of participants, where some noted that they were easily able to discern the "big picture," but not the minute details.

5.3 Discussion

Summarizing across our results, we point to the generally tight confidence intervals (for example, in Figures 5.6, 5.7, 5.8) as a strong indication that our experiment is internally valid. Put differently, we see the high quality of our findings as evidence that the combination of experimental design, olfactory display, and evaluation protocol is appropriately calibrated. That some of the experimental conditions failed to yield strong evidence in favor of one or another factor is only a minor concern when viewed through this lens.

5.3.1 Quantitative Data

Synthesizing the above findings, we now derive a ranking of olfactory channels for quantitative data.
Just like Mackinlay's ranking [1], the ranking below is not entirely derived from empirical findings, but it is supported by empirical data. For each channel, we give a brief motivation for the specific ranking:

Q1. Temperature (T), on account of being the most accurate;
Q2. Airflow Rate (A), on account of being slightly more accurate and faster than scent type;
Q3. Scent Type (S), on account of being more accurate than intensity; and
Q4. Scent Intensity (I), on account of being the least accurate.

5.3.2 Ordinal Data

The large and overlapping confidence intervals for correctness (Figure 5.14) seem to indicate that these trials were open to some ambiguity compared to other trials. Scent type outperformed all other channels in error and correctness for ordinal data, indicating that the sharp distinction between scents helped solidify the users' perception of the boundaries between values along a discrete scale.

O1. Scent Type (S), on account of being the most accurate (both in error and correctness);
O2. Airflow Rate (A), on account of being slightly more accurate (lower error, higher correctness) and faster than temperature;
O3. Temperature (T), on account of being more accurate (both in error and correctness) than intensity; and
O4. Scent Intensity (I), on account of being the least accurate.

5.3.3 Nominal Data

Similar to ordinal data, we saw large spreads in the confidence intervals for all metrics in the nominal results. Of course, this stems partly from the fact that being exactly correct for nominal data is a stricter metric than the error distance used for other data types. In fact, correctness for ordinal data (Figure 5.14) exhibits the same large CIs as it does for nominal data. Still, the data is also disappointing in that it provides little support for H1: the data does not provide any evidence that scent type yields the best accuracy for nominal data. At best, it is possible to say that scent type shows comparable accuracy to temperature, but temperature still has an advantage. We were surprised to find that scent type, which performed better than all other channels in conveying ordinal data, did so poorly at conveying categorical data.

N1. Temperature (T), on account of being the most accurate;
N2. Scent Type (S), on account of being second-most accurate and second fastest;
N3. Airflow Rate (A), on account of being the fastest; and
N4. Scent Intensity (I), on account of being the least accurate.

5.3.4 Smelling Least → Smelling Best

One of the more surprising findings from the study was that scent intensity was outperformed by essentially all other olfactory channels for all data types (the plots in Figures 5.11 through 5.17 give the detailed results). This clearly disproved our hypothesis H2, in which we predicted the exact opposite. However, if we investigate this phenomenon more closely, we can begin to find an explanation in the literature.

In psychophysics, the quantitative study of physical stimuli and the sensations and perceptions they produce, the concept of sensory scaling, i.e., assigning perceived numbers to sensory experiences, is well known [117]. Basically, sensory experiences are subjective, and building a personalized scale for specific senses is a time-consuming process based on past experience and exposure. What one person ranks as a strong stimulus—say, a 9 on a scale of 1 to 10, commonly referred to as the Labeled Magnitude Scale (LMS) [118]—may merely rate as moderate for someone else, e.g., a 6.
Furthermore, some people simply have a higher sensory range than others; for example, so-called "supertasters" [119] experience taste with far greater intensity than others. Intensities are also modified by their context; for example, a word such as "large" or "small" depends on the noun it describes. This is why Stevens [120] can give the following example without introducing ambiguity: "Mice may be called large or small, and so may elephants, and it is quite understandable when someone says it was a large mouse that ran up the trunk of the small elephant."

To address this, psychophysics researchers have introduced the so-called "general" Labeled Magnitude Scale (gLMS) [121], where instead of labeling the rungs on the scale using the same specific sense, e.g., "10 is the most intense smell you have ever experienced," the scale is labeled using the strongest imaginable sensation of any kind, i.e., not restricted to the specific sensory channel. This begins to address the personalization concern, but arguably still makes for a subjective scale.

Nevertheless, mitigating the scaling problem takes time, and scent intensity is a sense that typical people train little. Since the goal of our experiment was to empirically understand information olfactation with participants representative of the general population, we did not provide any extended training in the intensity tutorial. Furthermore, the nature of our experiment precluded us from leveraging the gLMS scale, since all trial blocks were preceded by an "olfactory legend." However, as described in Section 4.3.4, we did base our scent intensity levels on the so-called power law of psychophysics [122], which models perceived magnitude as a power function of physical stimulus intensity.

5.3.5 Limitations

One limitation, implicit in both our study design and the theoretical model proposed in [16], is our assumption that cross-modality in olfaction dominates the users' isolation of tactile and thermal stimuli, divorcing them from scent. While we supervised the participants during the study to ensure that the thermal and airflow channel modifications were centered on their olfactory sensory system (i.e., the nasal region of their faces), there is still room for improvement in isolating the impact these features impose on users' olfaction in future work.

In spite of this limitation, the winning performance of scent type for conveying ordinal data acts as a counter-argument to this being the case. Scent outpaces, and has very little overlap in the distribution of correctness with, the other channels for this data type. Beyond indicating that scent outperforms other channels for this type of data, this also bolsters the hardware as accurately conveying information as smells, and combats the notion that users were simply detecting other stimuli to derive their answers.

Finally, our study was a laboratory study, which limits the pool of potential participants to those available for on-premises user sessions. As a result, our study included 11 participants. With that said, to quote Dragicevic [114], "there is no magic number of participants." Our confidence intervals are overall extremely tight for all channels and data types, which we argue supports our findings as being reasonably robust.

Chapter 6: Conclusion and Future Work

We have introduced the design space of information olfactation, its marks, channels, and substrates, along with a high-level task taxonomy for design, as a supplement to information visualization and immersive as well as ubiquitous analytics.
As a proof of concept, we extended the theory into viScent, an implementation of most of the olfactory marks and channels for analysis outlined in the model of information olfactation. We have empirically evaluated the olfactory perception of information, and the findings are summarized in the ranking of olfactory channels (Figure 5.1). While the presentation mirrors seminal work by Mackinlay [1] and Cleveland and McGill [5], it is the first study of its kind: an empirical evaluation of olfactory displays to convey features of abstract data.

Figure 5.1: Ranking of olfactory channels organized by data type; inspired by Mackinlay's ranking of visual channels [1].

The disappointing results for odor intensity for all data types, as well as for scent type in encoding quantitative and nominal data, warrant further exploration. While we believe our hardware implementation accurately conveyed these signals to the user, as with any hardware device, it is an approximation simulating the desired stimulus, and the possibility remains that our participants may not have perceived the scent intensity at as granular a level of detail as the task at hand required. Refining the granularity of scent intensity levels for presenting users with abstract information is an open area of research ripe for future work. Further refinement of olfactation techniques and their incorporation into visual-olfactory systems based on findings from user studies is another, longer-term opportunity for picking low-hanging research fruit.

One measure that is common in the literature surrounding olfaction, but not included in our model of information olfactation, is the hedonic scale. The work by Obrist et al. [32] (noted in Section 2.3 for their introduction of categories of user experience in olfactory interfaces) and a later study extending it by Dmitrenko et al. [74] are heavily influenced by readings on hedonic measures. The perceived pleasantness of an odor is fairly subjective [123], but there is some evidence from studies using scent to augment the experience of driving automobiles that "good" smells improve user task performance relative to exposure to unpleasant or no specific olfactory stimuli [56, 124]. While this subjectivity makes pleasantness inappropriate as an olfactory channel, it is analogous to the aesthetic merit of visual composition and, like visual aesthetics, should be taken into consideration when designing the olfactory interface; design recommendations around this metric would therefore be appropriate for future studies.

While the approach in this research in many ways mirrors the field of information visualization, we want to emphasize that we are not in any way proposing that information olfactation will eventually supersede infovis. There is a reason why visualization is such a powerful information communication mechanism: our visual system is our most important, highest-bandwidth, and most accurate sense. The sense of smell has only a fraction of the resolution, capacity, and flexibility of vision. In other words, even if we had wanted to, there is little practical outlook for creating data-rich applications where the olfactory display is the only display. We see information olfactation as a complement rather than a replacement for information visualization, where scent can provide strong, recognizable, and even visceral responses to information displays.

Bibliography
Further refinement of olfactation techniques, and their incorporation into visual-olfactory systems based on findings from user studies, is a longer-term opportunity that still offers plenty of low-hanging research fruit.

One measure that is common in the literature surrounding olfaction, but not included in our model of information olfactation, is the hedonic scale. The work by Obrist et al. [32] (noted in Section 2.3 for their introduction of categories of user experience in olfactory interfaces) and a later study extending it by Dmitrenko et al. [74] draw heavily on hedonic measures. The perceived pleasantness of an odor is a fairly subjective measure [123], but there is some evidence from studies using scent to augment the experience of driving automobiles that "good" smells improve user task performance relative to exposure to unpleasant or no specific olfactory stimuli [56, 124]. While this subjectivity makes pleasantness a poor olfactory channel, analogous to the aesthetic merit of a visual composition, it should, like visual aesthetics, still be taken into consideration when designing an olfactory interface; design recommendations around this metric would therefore be appropriate for future studies.

While the approach in this research in many ways mirrors the field of information visualization, we want to emphasize that we are not in any way proposing that information olfactation will eventually supersede infovis. There is a reason why visualization is such a powerful information communication mechanism: our visual system is our most important, highest-bandwidth, and most accurate sense. The sense of smell has only a fraction of the resolution, capacity, and flexibility of vision. In other words, even if we wanted to, there is little practical outlook for creating data-rich applications where the olfactory display is the only display. We see information olfactation as a complement rather than a replacement for information visualization, where scent can provide strong, recognizable, and even visceral responses to information displays.

Bibliography

[1] Jock D. Mackinlay. Automating the design of graphical presentations of relational information. ACM Transactions on Graphics, 5(2):110–141, 1986.
[2] Albert M. Cook and Janice Miller Polgar. Assistive Technologies: Principles and Practice. Mosby, St Louis, MO, USA, 4th edition, 2015.
[3] Tamara Munzner. Visualization Analysis and Design. CRC Press, Boca Raton, FL, USA, 2014.
[4] Jacques Bertin. Sémiologie graphique. Mouton/Gauthier-Villars, Paris, France, 1967.
[5] William S. Cleveland and Robert McGill. Graphical perception: Theory, experimentation and application to the development of graphical methods. Journal of the American Statistical Association, 79(387):531–554, September 1984.
[6] Stanley Smith Stevens. On the theory of scales of measurement. Science, 103(2684):677–680, 1946.
[7] Jason B. Castro, Arvind Ramanathan, and Chakra S. Chennubhotla. Categorical dimensions of human odor descriptor space revealed by non-negative matrix factorization. PLOS ONE, 8(9):1–16, September 2013.
[8] Hiroshi Ishii, Craig Wisneski, Scott Brave, Andrew Dahley, Matt Gorbet, Brygg Ullmer, and Paul Yarin. ambientROOM: Integrating ambient media with architectural space. In Conference Summary on ACM Human Factors in Computing Systems, pages 173–174, 1998.
[9] Tom Chandler, Maxime Cordeil, Tobias Czauderna, Tim Dwyer, Jaroslaw Glowacki, Cagatay Goncu, Matthias Klapperstueck, Karsten Klein, Falk Schreiber, and Elliot Wilson. Immersive analytics. In Proceedings of the International Symposium on Big Data Visual Analytics, pages 1–8, September 2015.
[10] Niklas Elmqvist and Pourang Irani. Ubiquitous analytics: Interacting with big data anywhere, anytime. IEEE Computer, 46(4):86–89, 2013.
[11] Mihaly Csikszentmihalyi. Finding Flow: The Psychology of Engagement with Everyday Life. Basic Books, New York, NY, USA, 1997.
[12] Niklas Elmqvist, Andrew Vande Moere, Hans-Christian Jetter, Daniel Cernea, Harald Reiterer, and T. J. Jankun-Kelly. Fluid interaction for information visualization. Information Visualization, 10(4):327–340, 2011.
[13] Gary Fontaine. The experience of a sense of presence in intercultural and international encounters. Presence: Teleoperators and Virtual Environments, 1(4):482–490, January 1992.
[14] Mel Slater, Martin Usoh, and Anthony Steed. Depth of presence in virtual environments. Presence: Teleoperators and Virtual Environments, 3(2):130–144, January 1994.
[15] Bob G. Witmer and Michael J. Singer. Measuring presence in virtual environments: A presence questionnaire. Presence: Teleoperators and Virtual Environments, 7(3):225–240, 1998.
[16] Biswaksen Patnaik, Andrea Batch, and Niklas Elmqvist. Information olfactation: Harnessing scent to convey data. IEEE Transactions on Visualization and Computer Graphics, 25(1):726–736, 2018.
[17] Caroline Bushdid, Marcelo O. Magnasco, Leslie B. Vosshall, and Andreas Keller. Humans can discriminate more than 1 trillion olfactory stimuli. Science, 343(6177):1370–1372, 2014.
[18] Fouzia El Mountassir, Christine Belloir, Loïc Briand, Thierry Thomas-Danguin, and Anne-Marie Le Bon. Encoding odorant mixtures by human olfactory receptors. Flavour and Fragrance Journal, 31(5):400–407, 2016.
[19] Takeshi Imai, Hitoshi Sakano, and Leslie B. Vosshall. Topographic mapping—the olfactory system. Cold Spring Harbor Perspectives in Biology, 2(8):a001776, 2010.
[20] Benjamin Auffarth. Understanding smell—the olfactory stimulus problem. Neuroscience & Biobehavioral Reviews, 37(8):1667–1679, 2013.
[21] Claire A. de March, SangEun Ryu, Gilles Sicard, Cheil Moon, and Jérôme Golebiowski. Structure-odour relationships reviewed in the postgenomic era. Flavour and Fragrance Journal, 30(5):342–361, 2015.
[22] Kiyomitsu Nara, Luis R. Saraiva, Xiaolan Ye, and Linda B. Buck. A large-scale analysis of odor coding in the olfactory epithelium. Journal of Neuroscience, 31(25):9179–9191, 2011.
[23] Kerry J. Ressler, Susan L. Sullivan, and Linda B. Buck. Information coding in the olfactory system: Evidence for a stereotyped and highly organized epitope map in the olfactory bulb. Cell, 79(7):1245–1255, 1994.
[24] Rachel S. Herz and Trygg Engen. Odor memory: Review and analysis. Psychonomic Bulletin & Review, 3(3):300–313, 1996.
[25] Joseph Nathaniel Kaye. Symbolic Olfactory Display. PhD thesis, Massachusetts Institute of Technology, 2001.
[26] Gordon M. Shepherd. The human sense of smell: Are we better than we think? PLOS Biology, 2(5), May 2004.
[27] Kathrin Kaeppler and Friedrich Mueller. Odor classification: A review of factors influencing perception-based odor arrangements. Chemical Senses, 38(3):189–209, 2013.
[28] Aiko Nambu, Takuji Narumi, Kunihiro Nishimura, Tomohiro Tanikawa, and Michitaka Hirose. Visual-olfactory display using olfactory sensory map. In Proceedings of the IEEE Virtual Reality Conference, pages 39–42, March 2010.
[29] Richard C. Gerkin and Jason B. Castro. The number of olfactory stimuli that humans can discriminate is still unknown. eLife, 4, 2015.
[30] Hein L. Klopping. Olfactory theories and the odors of small molecules. Journal of Agricultural and Food Chemistry, 19(5):999–1004, 1971.
[31] Luca Turin. A spectroscopic mechanism for primary olfactory reception. Chemical Senses, 21(6):773–791, 1996.
[32] Marianna Obrist, Alexandre N. Tuch, and Kasper Hornbæk. Opportunities for odor: Experiences with smell and implications for technology. In Proceedings of the ACM Conference on Human Factors in Computing Systems, pages 2843–2852, 2014.
[33] Don Norman. The Design of Everyday Things. Basic Books, 2013.
[34] Yun Wang, Xiaojuan Ma, Qiong Luo, and Huamin Qu. Data Edibilization: Representing data with food. In Extended Abstracts of the ACM Conference on Human Factors in Computing Systems, pages 409–422, 2016.
[35] R. Bowen Loftin. Multisensory perception: Beyond the visual in visualization. Computing in Science & Engineering, 5(4):56–58, July 2003.
[36] Jonathan C. Roberts and Rick Walker. Using all our senses: The need for a unified theoretical approach to multi-sensory information visualization. In Workshop on the Role of Theory in Information Visualization, 2010.
[37] J. A. Desor and Gary K. Beauchamp. The human capacity to transmit olfactory information. Perception & Psychophysics, 16(3):551–556, 1974.
[38] Oluwakemi A. Ademoye and Gheorghita Ghinea. Information recall task impact in olfaction-enhanced multimedia. ACM Transactions on Multimedia Computing, Communications, and Applications, 9(3):17:1–17:16, July 2013.
[39] Arnie Cann and Debra A. Ross. Olfactory stimuli as context cues in human memory. The American Journal of Psychology, 102(1):91–102, 1989.
[40] Rachel S. Herz. The effects of cue distinctiveness on odor-based context-dependent memory. Memory & Cognition, 25(3):375–380, May 1997.
[41] Artin Arshamian, Emilia Iannilli, Johannes C. Gerber, Johan Willander, Jonas Persson, Han-Seok Seo, Thomas Hummel, and Maria Larsson. The functional neuroanatomy of odor evoked autobiographical memories cued by odors and words. Neuropsychologia, 51(1):123–131, 2013.
[42] David H. Gire, Diego Restrepo, Terrence J. Sejnowski, Charles Greer, Juan A. De Carlos, and Laura Lopez-Mascaraque. Temporal processing in the olfactory system: Can we see a smell? Neuron, 78(3):416–432, 2013.
[43] Rachel S. Herz. Emotion experienced during encoding enhances odor retrieval cue effectiveness. The American Journal of Psychology, 110(4):489, 1997.
[44] Johan Willander and Maria Larsson. Smell your way back to childhood: Autobiographical odor memory. Psychonomic Bulletin & Review, 13(2):240–244, 2006.
[45] Fredrik U. Jönsson, Per Møller, and Mats J. Olsson. Olfactory working memory: Effects of verbalization on the 2-back task. Memory & Cognition, 39(6):1023–1032, August 2011.
[46] Carmel A. Levitan, Jiana Ren, Andy T. Woods, Sanne Boesveldt, Jason S. Chan, Kirsten J. McKenzie, Michael Dodson, Jai A. Levin, Christine X. R. Leong, and Jasper J. F. van den Bosch. Cross-cultural color-odor associations. PLOS ONE, 9(7), 2014.
[47] Grant Hanson-Vaux, Anne-Sylvie Crisinel, and Charles Spence. Smelling shapes: Crossmodal correspondences between odors and shapes. Chemical Senses, 38(2):161–166, 2012.
[48] Jess Porter, Brent Craven, Rehan M. Khan, Shao-Ju Chang, Irene Kang, Benjamin Judkewitz, Jason Volpe, Gary Settles, and Noam Sobel. Mechanisms of scent-tracking in humans. Nature Neuroscience, 10(1):27, 2007.
[49] Yuya Kakutani, Takuji Narumi, Tatsu Kobayakawa, Takayuki Kawai, Yuko Kusakabe, Satomi Kunieda, and Yuji Wada. Taste of breath: The temporal order of taste and smell synchronized with breathing as a determinant for taste and olfactory integration. Scientific Reports, 7(1):8922, 2017.
[50] Johannes Frasnelli, Genevieve Charbonneau, Olivier Collignon, and Franco Lepore. Odor localization and sniffing. Chemical Senses, 34(2):139–144, 2009.
[51] Kenneth C. Catania. Stereo and serial sniffing guide navigation to an odour source in a mammal. Nature Communications, 4:1441, 2013.
[52] Andres Gongora, Javier G. Monroy, and Javier Gonzalez-Jimenez. A robotic experiment toward understanding human gas-source localization strategies. In Proceedings of the ISOCS/IEEE Symposium on Olfaction and Electronic Nose, pages 1–3, 2017.
[53] Atsushi Kohnotoh and Hiroshi Ishida. Active stereo olfactory sensing system for localization of gas/odor source. In Proceedings of the International Conference on Machine Learning and Applications, pages 476–481, December 2008.
[54] Olivier Rochel, Dominique Martinez, Etienne Hugues, and Frédéric Sarry. Stereo-olfaction with a sniffing neuromorphic robot using spiking neurons. In Proceedings of the European Conference on Solid-State Transducers, pages 1–4, Prague, Czech Republic, September 2002.
[55] Shenbing Kuang and Tao Zhang. Smelling directions: Olfaction modulates ambiguous visual motion perception. Scientific Reports, 4:5796, 2014.
[56] Dmitrijs Dmitrenko, Emanuela Maggioni, Chi Thanh Vi, and Marianna Obrist. What did I sniff?: Mapping scents onto driving-related messages. In Proceedings of the ACM International Conference on Automotive User Interfaces and Interactive Vehicular Applications, pages 154–163, 2017.
[57] Alexander Poellinger, Robert Thomas, Peter Lio, Anne Lee, Nikos Makris, Bruce R. Rosen, and Kenneth K. Kwong. Activation and habituation in olfaction—an fMRI study. NeuroImage, 13(4):547–560, 2001.
[58] Noam Sobel, Vivek Prabhakaran, Zuo Zhao, John E. Desmond, Gary H. Glover, Edith V. Sullivan, and John D. E. Gabrieli. Time course of odorant-induced activation in the human primary olfactory cortex. Journal of Neurophysiology, 83(1):537–551, 2000.
[59] Adam Kepecs, Naoshige Uchida, and Zachary F. Mainen. The sniff as a unit of olfactory processing. Chemical Senses, 31(2):167–179, 2005.
[60] Justus V. Verhagen, Daniel W. Wesson, Theoden I. Netoff, John A. White, and Matt Wachowiak. Sniffing controls an adaptive filter of sensory input to the olfactory bulb. Nature Neuroscience, 10:631–639, 2007.
[61] Noam Sobel, Vivek Prabhakaran, John E. Desmond, Gary H. Glover, R. L. Goode, Edith V. Sullivan, and John D. E. Gabrieli. Sniffing and smelling: Separate subsystems in the human olfactory cortex. Nature, 392(6673):282, 1998.
[62] Kristina Simonyan, Ziad S. Saad, Torrey M. J. Loucks, Christopher J. Poletto, and Christy L. Ludlow. Functional neuroanatomy of human voluntary cough and sniff production. NeuroImage, 37(2):401–409, 2007.
[63] Joel Mainland and Noam Sobel. The sniff is part of the olfactory percept. Chemical Senses, 31(2):181–196, 2006.
[64] Avery N. Gilbert, Robyn Martin, and Sarah E. Kemp. Cross-modal correspondence between vision and olfaction: The color of smells. The American Journal of Psychology, 109(3):335–351, 1996.
[65] Ophelia Deroy, Anne-Sylvie Crisinel, and Charles Spence. Crossmodal correspondences between odors and contingent features: odors, musical notes, and geometrical shapes. Psychonomic Bulletin & Review, 20(5):878–896, October 2013.
[66] Sangyun Kim, Junseok Park, Junseong Bang, and Haeryong Lee. Seeing is smelling: Localizing odor-related objects in images. In Proceedings of the ACM Augmented Human International Conference, pages 15:1–15:9, 2018.
[67] Olivia Jezler, Elia Gatti, Marco Gilardi, and Marianna Obrist. Scented material: Changing features of physical creations based on odors. In Extended Abstracts of the ACM Conference on Human Factors in Computing Systems, pages 1677–1683, 2016.
[68] Takuji Narumi, Takashi Kajinami, Shinya Nishizaka, Tomohiro Tanikawa, and Michitaka Hirose. Pseudo-gustatory display system based on cross-modal integration of vision, olfaction and gustation. In Proceedings of the IEEE Virtual Reality Conference, pages 127–130, March 2011.
[69] Charles Spence, Marianna Obrist, Carlos Velasco, and Nimesha Ranasinghe. Digitizing the chemical senses: Possibilities & pitfalls. International Journal of Human-Computer Studies, 107:62–74, 2017. Special issue on Multisensory Human-Computer Interaction.
[70] Surina Hariri, Nur Ain Mustafa, Kasun Karunanayaka, and Adrian David Cheok. Electrical stimulation of olfactory receptors for digitizing smell. In Proceedings of the ACM Workshop on Multimodal Virtual and Augmented Reality, pages 4:1–4:4, 2016.
[71] Morton L. Heilig. Sensorama simulator. US Patent 3,050,870, August 1962.
[72] Donald A. Washburn and Lauriann M. Jones. Could olfactory displays improve data visualization? Computing in Science & Engineering, 6(6):80–83, November 2004.
[73] David Dobbelstein, Steffen Herrdum, and Enrico Rukzio. inScent: A wearable olfactory display as an amplification for mobile notifications. In Proceedings of the ACM International Symposium on Wearable Computers, pages 130–137, 2017.
[74] Dmitrijs Dmitrenko, Emanuela Maggioni, and Marianna Obrist. OSpace: Towards a systematic exploration of olfactory interaction spaces. In Proceedings of the ACM Conference on Interactive Surfaces and Spaces, pages 171–180, 2017.
[75] Richard Grace and Sonya Steward. Drowsy driver monitor and warning system. In Proceedings of the International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design, pages 201–208, 2001.
[76] Judith Amores and Pattie Maes. Essence: Olfactory interfaces for unconscious influence of mood and cognitive performance. In Proceedings of the ACM Conference on Human Factors in Computing Systems, pages 28–34, 2017.
[77] Adam Bodnar, Richard Corbett, and Dmitry Nekrasovski. AROMA: Ambient awareness through olfaction in a messaging application. In Proceedings of the ACM Conference on Multimodal Interfaces, pages 183–190, 2004.
[78] Stephen Brewster, David McGookin, and Christopher Miller. Olfoto: Designing a smell-based interaction. In Proceedings of the ACM Conference on Human Factors in Computing Systems, pages 653–662, 2006.
[79] Mei-Kei Lai. Universal scent blackbox: Engaging visitors communication through creating olfactory experience at art museum. In Proceedings of the ACM Conference on the Design of Communication, pages 27:1–27:6, 2015.
[80] Keisuke Hasegawa, Liwei Qiu, and Hiroyuki Shinoda. Interactive midair odor control via ultrasound-driven air flow. In Proceedings of ACM SIGGRAPH Asia Emerging Technologies, pages 8:1–8:2, 2017.
[81] Nicolas S. Herrera and Ryan P. McMahan. Development of a simple and low-cost olfactory display for immersive media experiences. In Proceedings of the ACM Workshop on Immersive Media Experiences, pages 1–6, 2014.
[82] Chomtip Pornpanomchai, Khanti Benjathanachat, Suradej Prechaphuet, and Jaruwat Supapol. Ad-Smell: Advertising movie with a simple olfactory display. In Proceedings of the International Conference on Internet Multimedia Computing and Service, pages 113–118, 2009.
[83] Tali Weiss, Sagit Shushan, Aharon Ravia, Avital Hahamy, Lavi Secundo, Aharon Weissbrod, Aya Ben-Yakov, Yael Holtzman, Smadar Cohen-Atsmoni, Yehudah Roth, and Noam Sobel. From nose to brain: Un-sensed electrical currents applied in the nose alter activity in deep brain structures. Cerebral Cortex, 26(11):4180–4191, 2016.
[84] Thomas H. Alexander and Terence M. Davidson. Intranasal zinc and anosmia: The zinc-induced anosmia syndrome. The Laryngoscope, 116(2):217–220, 2006.
[85] Mechtild M. Vennemann, Thomas Hummel, and Klaus Berger. The association between smoking and smell and taste impairment in the general population. Journal of Neurology, 255(8):1121–1126, 2008.
[86] Stuart K. Card, Jock D. Mackinlay, and Ben Shneiderman, editors. Readings in Information Visualization: Using Vision to Think. Morgan Kaufmann, 1999.
[87] E. Bruce Goldstein and James R. Brockmole. Sensation and Perception. Cengage Learning, Stamford, CT, USA, 10th edition, 2017.
[88] E. Bruce Goldstein. Cognitive Psychology: Connecting Mind, Research and Everyday Experience. Cengage Learning, Stamford, CT, USA, 4th edition, 2015.
[89] Robert J. Sternberg and Karin Sternberg. Cognitive Psychology. Cengage Learning, Stamford, CT, USA, 6th edition, 2012.
[90] Colin Ware. Information Visualization: Perception for Design. Morgan Kaufmann Publishers, San Francisco, CA, USA, 3rd edition, 2012.
[91] Walter C. Eells. The relative merits of circles and bars for representing component parts. Journal of the American Statistical Association, 21(154):119–132, 1926.
[92] Frederick E. Croxton and Roy E. Stryker. Bar charts versus circle diagrams. Journal of the American Statistical Association, 22(160):473–482, 1927.
[93] Frederick E. Croxton and Harold Stein. Graphic comparisons by bars, squares, circles, and cubes. Journal of the American Statistical Association, 27(177):54–60, 1932.
[94] Lewis V. Peterson and Wilbur Schramm. How accurately are different kinds of graphs read? Educational Technology Research and Development, 2(3):178–189, June 1954.
[95] Maxwell M. Mozell, Paul F. Kent, and Stephen J. Murphy. The effect of flow rate upon the magnitude of the olfactory response differs for different odorants. Chemical Senses, 16(6):631–649, 1991.
[96] Maxwell M. Mozell, Paul R. Sheehe, S. W. Swieck, Daniel B. Kurtz, and David E. Hornung. A parametric study of the stimulation variables affecting the magnitude of the olfactory nerve response. The Journal of General Physiology, 83(2):233–267, 1984.
[97] Trygg Engen and Carl Pfaffmann. Absolute judgments of odor intensity. Journal of Experimental Psychology, 58(1):23, 1959.
[98] Don Tucker. Physical variables in the olfactory stimulation process. The Journal of General Physiology, 46(3):453–489, 1963.
[99] K. Keyhani, P. W. Scherer, and M. M. Mozell. Numerical simulation of airflow in the human nasal cavity. Journal of Biomechanical Engineering, 117(4):429–441, 1995.
[100] Jacob Riveron, Tamara Boto, and Esther Alcorta. The effect of environmental temperature on olfactory perception in Drosophila melanogaster. Journal of Insect Physiology, 55(10):943–951, 2009.
[101] Michael Kuehn, Heiko Welsch, Thomas Zahnert, and Thomas Hummel. Changes of pressure and humidity affect olfactory function. European Archives of Oto-Rhino-Laryngology, 265(3):299–302, 2008.
[102] Yong Li, Yanping Yuan, Chaofeng Li, Xu Han, and Xiaosong Zhang. Human responses to high air temperature, relative humidity and carbon dioxide concentration in underground refuge chamber. Building and Environment, 131:53–62, 2018.
[103] Donald A. Wilson. Pattern separation and completion in olfaction. Annals of the New York Academy of Sciences, 1170(1):306–312, 2009.
[104] Young Soo Joung and Cullen R. Buie. Aerosol generation by raindrop impact on soil. Nature Communications, 6:6083, 2015.
[105] Haruka Matsukura, Tatsuhiro Yoneda, and Hiroshi Ishida. Smelling screen: Development and evaluation of an olfactory display system for presenting a virtual odor source. IEEE Transactions on Visualization and Computer Graphics, 19(4):606–615, April 2013.
[106] Kate McLean. Smellmap: Amsterdam—olfactory art and smell visualization. In Proceedings of the IEEE VIS Arts Program, pages 143–145, 2014.
[107] Maxime Cordeil, Tim Dwyer, Karsten Klein, Bireswar Laha, Kim Marriott, and Bruce H. Thomas. Immersive collaborative analysis of network connectivity: CAVE-style or head-mounted display? IEEE Transactions on Visualization and Computer Graphics, 23(1):441–450, 2017.
[108] Tim Dwyer. Scalable, versatile and simple constrained graph layout. Computer Graphics Forum, 28(3):991–998, 2009.
[109] Srijan Kumar, Francesca Spezzano, V. S. Subrahmanian, and Christos Faloutsos. Edge weight prediction in weighted signed networks. In Proceedings of the IEEE International Conference on Data Mining, pages 221–230, December 2016.
[110] G. P. Vasilyev, I. A. Tabunshchikov, M. M. Brodach, V. A. Leskov, N. V. Mitrofanova, N. A. Timofeev, V. F. Gornov, and G. V. Esaulov. Modeling moisture condensation in humid air flow in the course of cooling and heat recovery. Energy and Buildings, 112:93–100, 2016.
[111] Maxime Cordeil, Andrew Cunningham, Tim Dwyer, Bruce H. Thomas, and Kim Marriott. ImAxes: Immersive axes as embodied affordances for interactive multivariate data visualisation. In Proceedings of the ACM Symposium on User Interface Software and Technology, pages 71–83, 2017.
[112] Yalong Yang, Bernhard Jenny, Haohui Chen, Maxime Cordeil, Tim Dwyer, and Kim Marriott. Maps and globes in virtual reality. Computer Graphics Forum, 37(3), 2018.
[113] Geoff Cumming and Sue Finch. Inference by eye: Confidence intervals and how to read pictures of data. American Psychologist, 60(2):170, 2005.
[114] Pierre Dragicevic. Fair statistical communication in HCI. In Judy Robertson and Maurits Kaptein, editors, Modern Statistical Methods for HCI, pages 291–330. Springer, Boston, MA, USA, 2016.
[115] Geoff Cumming. Understanding the New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis. Routledge, New York, NY, USA, 2013.
[116] American Psychological Association. The Publication Manual of the American Psychological Association. Washington, DC, 6th edition, 2010.
[117] Harry T. Lawless and Hildegarde Heymann. Scaling, pages 208–264. Springer, Boston, MA, USA, 1999.
[118] Barry G. Green, Gregory S. Shaffer, and Magdalena M. Gilmore. Derivation and evaluation of a semantic scale of oral sensation magnitude with apparent ratio properties. Chemical Senses, 18(6):683–702, December 1993.
[119] Linda M. Bartoshuk. Sweetness: History, preference, and genetic variability. Food Technology, 45(11):108–113, 1991.
[120] Stanley Smith Stevens. Adaptation-level vs. the relativity of judgment. The American Journal of Psychology, 71(4):633–646, December 1958.
[121] L. M. Bartoshuk, V. B. Duffy, B. G. Green, H. J. Hoffman, C.-W. Ko, L. A. Lucchina, L. E. Marks, D. J. Snyder, and J. M. Weiffenbach. Valid across-group comparisons with labeled scales: The gLMS versus magnitude matching. Physiology & Behavior, 82(1):109–114, 2004.
[122] Stanley Smith Stevens. On the psychophysical law. Psychological Review, 64(3):153–181, 1957.
[123] Hans Distel, Saho Ayabe-Kanamura, Margarita Martínez-Gómez, Ina Schicker, Tatsu Kobayakawa, Sachiko Saito, and Robyn Hudson. Perception of everyday odors—correlation between intensity, familiarity and strength of hedonic judgement. Chemical Senses, 24(2):191–199, 1999.
[124] Andreas Riener. Subliminal persuasion and its potential for driver behavior adaptation. IEEE Transactions on Intelligent Transportation Systems, 13(1):71–80, March 2012.