ABSTRACT

Title of Dissertation: USER ONBOARDING DESIGN IN CITIZEN SCIENCE: A PATH TO GROW ENGAGEMENT AND PARTICIPATION
Marina Cascaes Cardoso, Doctor of Philosophy, 2021
Directed By: Professor Jennifer Preece, PhD, Information Studies

In the context of crowdsourcing communities (e.g., Citizen Science), crowd engagement is a significant determinant of projects' sustainability. The challenging missions of finding motivated people to participate in such initiatives and triggering their engagement with the cause have been widely acknowledged by scholars in the field of Citizen Science (Eveleigh et al., 2014; Nov et al., 2011b; Raddick et al., 2010; Rotman, 2013; Rotman et al., 2014), as well as in crowdsourcing initiatives (Balestra et al., 2017; Lampe et al., 2010; K. Y. Lin & Lu, 2011; Preece & Shneiderman, 2009; Steinmacher et al., 2015) and in online communities in general (Brabham, 2010; Crowston & Fagnot, 2008; de Vreede et al., 2013; Zheng et al., 2011). The initial interaction with the technology employed by crowdsourcing platforms, including Citizen Science, affects users' experiences and should be designed with its effects on initial engagement in mind. This work focuses on understanding how onboarding impacts early engagement and, consequently, the likelihood of improving the quality of the initial interaction and sustaining adoption. Early engagement refers to the intricate process of embracing users' characteristics and motivations during the first interaction. When implementing an onboarding design, the goal of Citizen Science platforms is, in general, to turn first-time visitors into long-term users by scaffolding the first use toward participation. The central premise of this investigation is that onboarding characteristics and users' initial experiences largely determine whether they ultimately continue using the app; therefore, the thoughtful design of the first experience is fundamental.
Organized in eight chapters, this doctoral dissertation starts by offering insights into the variables involved in the process of onboarding new users. Although commonly employed by the SaaS industry in various applications, onboarding design still lacks systematic investigation and precise definitions. Therefore, this research presents a terminology for the onboarding process and defines its four structural elements: Statement of Purpose, User Identification, Informational Support, and Conversion Event. Delving into the Citizen Science context, three studies are conducted on how existing projects employ onboarding practices in their mobile applications. The studies, in chapters four to six, reveal volunteers' barriers and reactions to onboarding experiences. For example, by making the statement of purpose clear and explicitly showing why individuals should volunteer and how they are part of a contributing crowd, apps have promising chances of keeping users engaged and returning in the future. Through various analyses and discussions, this work provides a novel comprehension of how first-time interactions have the potential to alter newcomers' engagement in mobile apps. Finally, this investigation offers guidelines to support the design decision process of creating a successful onboarding flow, primarily in the Citizen Science domain. Seven drivers of newcomers' engagement are presented, consisting of design recommendations for onboarding that can be adopted by virtually any crowdsourcing app. Key drivers include essential concerns that influence engagement and can be resolved, for instance, by providing information on the users' roles and their contributions to the project, and by transparently communicating the app's goals and impact on the world.
The seven drivers are: (1) use technical language and jargon cautiously; (2) inform users about the app's mechanics and offer guidance for accomplishing tasks; (3) stress the users' roles and the purpose of their contributions within the project; (4) be transparent about the app's goals, results, and impacts on the world; (5) elucidate any benefits or rewards right from the beginning, even when they are not tangible or immediate; (6) consider the UI's visual quality a decisive interest factor and design it according to the intended audience; and (7) use visual cues to enhance usability and reduce uncertainty. This dissertation makes a pivotal contribution: the definition of terms and the operationalization of onboarding elements, their attributes, and their roles with respect to users' needs and individual aspects. Moreover, an onboarding flow creates an opportunity to successfully captivate and retain newcomers only when its design and engagement attributes address users' characteristics, needs, and motivations.

USER ONBOARDING DESIGN IN CITIZEN SCIENCE: A PATH TO GROW ENGAGEMENT AND PARTICIPATION

by Marina Cascaes Cardoso

Dissertation submitted to the Faculty of the Graduate School of the University of Maryland, College Park, in partial fulfillment of the requirements for the degree of Doctor of Philosophy, 2021

Advisory Committee:
Professor Jennifer Preece, Chair
Professor Tamara Clegg
Professor Jennifer Golbeck
Professor Katie Shilton
Professor David Jacobs, Dean's Representative

© Copyright by Marina Cascaes Cardoso 2021

TABLE OF CONTENTS

1. INTRODUCTION
1.1. Background of the Study
1.2. Onboarding process as part of the initial interaction
1.3. Important definitions: Interaction, Engagement, and Experience
1.4. Study purpose
1.5. Research Questions
1.6. Research Design
1.7. Dissertation structure
2. LITERATURE BACKGROUND
2.1. User engagement
2.2. Scientific Crowdsourcing: Online Citizen Science Initiatives
2.3. Motivational Theory
2.4. Behavioral Theory
2.5. Onboarding
2.5.1. Current Meanings and A New Definition
2.5.2. Framework and Terminology
2.5.3. Onboarding Elements: Statement of Purpose
2.5.4. Onboarding Elements: User Identification
2.5.5. Onboarding Elements: Informational Support
2.5.6. Onboarding Elements: Conversion Event
2.5.7. A note on Reengagement
2.6. Summary
3. METHODOLOGICAL CONSIDERATIONS
3.1. A case for qualitative research
4. STUDY I: Onboarding Design in Citizen Science apps
4.1. Goals and limitations
4.2. Selection of platforms
4.3. Evaluation Criteria
4.4. Findings
4.5. Discussion
4.6. Conclusion
5. STUDY II: Citizen Science apps User Study
5.1. Goals and Limitations
5.2. Selection of Platforms and Participants
5.3. Use Case Scenarios
5.4. Subjects Recruitment and Demographics
5.5. Instruments
5.6. Data Collection
5.7. Data Analysis
5.7.1. Types of Data
5.7.2. Analysis Methods
Survey 1: Pre-Questionnaire
Semi-Structured Interview and Observations
Post-Questionnaire: Survey 2
5.7.3. mPING app: Analysis and Discussion
5.7.4. eBird app: Analysis and Discussion
5.7.5. Marine Debris Tracker app: Analysis and Discussion
5.7.6. SatCam app: Analysis and Discussion
5.8. Findings
5.8.1. Overarching themes across the four analyzed apps
6. STUDY III: Crowdsourcing apps onboarding analysis
6.1. Goals and limitations
6.2. Selection of the Platform
6.3. Expert Review & Evaluation criteria
6.4. GoFundMe app Analysis
6.5. GoFundMe Analysis Discussion
6.6. Findings
7. DISCUSSION
7.1. A MODEL OF USER ONBOARDING FOR CITIZEN SCIENCE
7.1.1. The Users' Attributes
7.1.2. The Model
7.1.3. Seven Drivers of the Newcomer Engagement
8. CONCLUSIONS, LIMITATIONS, AND FUTURE WORK
APPENDIX A Data Analysis Example from Study I
APPENDIX B Post-Questionnaire Results
mPING post-questionnaire analysis
eBird post-questionnaire analysis
Marine Debris Tracker app post-questionnaire analysis
SatCam app post-questionnaire analysis
APPENDIX C Codes generated by Study II
BIBLIOGRAPHY

INDEX OF TABLES

Table 1: The various sources of onboarding guidelines organized by authors.
Table 2: Various patterns and strategic approaches organized by the four onboarding elements.
Table 3: List of selected contributory iPhone Cit Sci apps.
Table 4: Included and excluded apps in Study II.
Table 5: List of scenarios elaborated for each app.
Table 6: Semi-Structured Interview Questions.
Table 7: Post-Questionnaire items organized by the onboarding constructs they address and the app to which they were applied.
Table 8: mPing discovered Themes.
Table 9: eBird discovered Themes.
Table 10: Marine Debris discovered themes.
Table 11: SatCam discovered Themes and Codes.
Table 12: Evaluation criteria for Study III.
Table 13: Format and set up of onboarding elements.
Table 14: Timeline.
Table 15: Flow and guidance.
Table 16: Design patterns.
Table 17: Potential pain points.
Table 18: Onboarding Model detailed by layers and elements.

INDEX OF FIGURES

Figure 1: A sample of the print template used in interviews and field notes.
Figure 2: Images used as examples of debris found at beaches.
Figure 3: Bird pictures used during the sessions.
Figure 4: Coding process example carried out in NVivo.
Figure 5: Coding Process and Mind Map Construction.
Figure 6: Qualitative analysis stages carried out.
Figure 7: mPing app screens examples.
Figure 8: mPing app Mind Map.
Figure 9: eBird app screens examples.
Figure 10: eBird app mind map.
Figure 11: Marine Debris Tracker app mind map.
Figure 12: Marine Debris Tracker app screens examples.
Figure 13: SatCam app mind map.
Figure 14: SatCam app screens examples.
Figure 15: Overarching topics elaboration process.
Figure 16: GoFundMe app screens examples.
Figure 17: GoFundMe app onboarding flow.
Figure 18: Diagram showing a sequence of observed and reported behaviors triggered by the deficiency of timely informational support.
Figure 19: Model of user engagement in open collaboration crowdsourcing proposed by de Vreede et al. (2013).
Figure 21: The Model for the User Onboarding.

LIST OF ABBREVIATIONS

App: Application
CnE: Conversion Event
Cit Sci: Citizen Science
CTA: Call-To-Action
FTUE: First-Time User Experience
HCI: Human-Computer Interaction
InS: Informational Support
QCA: Qualitative Content Analysis
SoP: Statement of Purpose
UI: User Interface
UId: User Identification

1. INTRODUCTION

The first impression when using a new technology can determine much about long-term engagement and reuse. That is especially true for certain types of applications, such as crowdsourcing and collaborative communities, in which the effort to engage users can be the key to reaching critical mass and continually acquiring users who contribute and participate. Constant crowd participation is difficult to attain and remains a challenge for many, if not all, online communities.
As a result, crowdsourcing initiatives need to constantly work on incentive mechanisms. Research shows that online communities are massively formed by individuals who do not participate, leaving the majority of the content to be generated by a few of their members. In the scientific crowdsourcing realm, such as Citizen Science (Cit Sci) platforms, the phenomenon is similar: it is known that large numbers of volunteers join the communities and never contribute even once. However, those who are not contributing, whether completely absent, free-riders, or only slightly active, did once join the platform and experienced an early interaction that likely influenced their participation behavior. As many commercial technology apps in several industries have already found out, new users' engagement needs to be addressed from the beginning of the relationship, even before there is any relationship between user and product. Therefore, product designers and entrepreneurs are interested in understanding users' motivations and goals. Misalignments during the initial interactions between users' expectations, goals, or interests and the apps' functionalities, usability, or audience might result in a poor user experience and ultimately undermine user engagement and app adoption. The understudied issue of early user interactions and their impact on engagement, combined with the lack of design guidelines to support and even catalyze engagement and new-user retention, and the limited understanding of how motivation can be incorporated into the user experience, make up the problem this investigation aims to tackle.
While a variety of first-time user experience strategies aimed at acquiring and retaining new users have been tested and applied, showing valuable results for the SaaS industry and other apps, little is known about how Cit Sci initiatives could benefit from such strategies and how such early engagement strategies could operate with their volunteering audience to improve contribution, participation, and, ultimately, platform sustainability. The present investigation seeks to explore what the first-time user experience entails and how it affects future user engagement. This study tackles that question by looking at what early user engagement is; what its key components are; and how user motivation and other engagement antecedents can inform onboarding design.

1.1. BACKGROUND OF THE STUDY

A user's first experience with a technological product is the moment to understand how the technology works, to acquire initial instruction, and to perceive its value. An analog example of a digital first-time experience would be when someone buys and tries for the first time a brand-new product that involves some learning curve, such as an espresso coffee machine. Before preparing the first cup of coffee, the user will likely have to assemble some parts of the product and may have to consult a set of instructions to do that. When opening the box for the first time, they will find a few different materials, for instance: a quick guide, a detailed instruction manual, warranty information, and a welcome card or a set of coffee pod samples. To prepare the first cup of espresso, it might be necessary to read more instructions or a quick start guide. As the user masters the first steps and understands the basics, they progress and consult the manual again to learn how to use more advanced features, like preparing a cappuccino or latte.
If the user is not a beginner and has used a similar machine before, they might want to skip the basic steps and use it right away, consulting the manual only when the machine requires cleaning or maintenance. Using a website link that comes in the box, the user can register the new machine online and subscribe to the weekly newsletter, in which they can learn new recipes and receive offers to continually buy coffee pods. This example illustrates a multichannel customer experience approach to engage the user through print materials, website, email, and app. It is important to elucidate the difference between customer experience (CX) and user experience (UX), since UX is part of the vaster CX landscape. CX is a broader and more generalist term that describes the relationship between a customer and a brand or organization over time. This relationship happens at different levels and through different channels, such as the sales process, customer service, product delivery, etc., and it includes the UX with specific products, which are not necessarily technological but have an interface with the user through which designers can manipulate interaction elements to create a successful experience and a positive relationship. This first experience is the moment to understand how the technology works, acquire initial instruction, and perceive value. For the technology provider, there is an opportunity to captivate new users and prompt them to action through various interaction mechanisms and communication approaches.

1.2. ONBOARDING PROCESS AS PART OF THE INITIAL INTERACTION

The process described in the previous section depicts the first or initial interactions of a new user with a technological product.
Looking at the coffee machine example, for the technology provider, i.e., the machine's maker in this case, there is an opportunity to captivate the new customer (the user) and, through various interaction mechanisms, encourage them to engage with the brand and the product, to enjoy it and reach the user's goals, prompting them to take action (e.g., preparing a drink or buying pods). One could regard the interaction with the manual, and the information design within it, as one of the several interfaces with which the customer interacts. In the vast ocean of online apps, community websites, and social network sites (SNS), both terms, users and actions, can assume many meanings. Users can be customers who subscribe to an online or offline service via a web app or mobile app, participants in social networks and online communities, volunteers for online projects or open-source initiatives, and so on. By action, in turn, one could mean making a purchase, signing up for a community, joining a crowdsourcing project, donating money to a crowdfunding cause, buying a cloud-based app license on a website, inputting data in an open-source project, participating as a collaborator in a community, and the like. In all these cases, systems depend on users' adoption, meaning "commitment or continued usage of the technology over time" (Sledgianowski & Kulviwat, 2009), so that they can be profitable, sustainable, relevant, or simply viable. Many of the current commercial technologies and computer-mediated communication technologies (e.g., software as a service (SaaS) apps and SNS) are already concerned with the beginning of their new users' journeys. Designers, UX researchers, and marketing professionals commonly refer to this initial interaction as the user onboarding process, which can be an opportunity to bring new customers on board, attracting and retaining these first-time visitors as they become members or paying customers who will keep using the products.
The onboarding process consists of designing an initial interaction that introduces the new product or service and guides the new users through the main features or essential information so that the users can get to the main action point established or promoted by the app. Although onboarding is a major portion of the initial interaction for new users and encompasses both users' and systems' factors to perform well and accomplish its goal, in this investigation it will not be treated as a synonym of (early) user experience. This point is further discussed, in detail, in Section 2.1, in which we clarify the other elements involved in the experience. While many UX designers and developers use these terms interchangeably, we make the distinction between onboarding, early or initial user experience, and early engagement, grounded on previous literature and further considerations explored in the literature review chapter. Whereas onboarding refers to a particular structure of interactions or stages that take place in the interface, it is only a part of the users' journey within the system, which encompasses more intangible elements, namely the engagement built during this process and resulting from it. Further reflecting on the initial engagement, it becomes clear we must take into consideration the diversity of influencing factors, many of which should inform onboarding design. Some of these factors are why people were brought to use the app in the first place, their needs and motivations, expectations and goals, their previous experiences and knowledge of the topic or technology, as well as the app's purpose and audience. Most of the onboarding practices found in commercial products are business oriented and aim to drive the users towards "conversion," a known term in business and marketing for transforming visitors into paying customers.
In UX and web analytics, however, conversion has a broader definition: the rate of users who complete any desired action on the website, app, or system that matters for the business or community (J. Nielsen, 2013a), whether it is filling out a profile, choosing a paid subscription, or making a first purchase. Onboarding can be designed in endless ways, making use of virtually any visual interface feature (e.g., videos, slides, and interactive tutorials), usually in steps that guide the users. Popular products that offer mobile commercial apps (e.g., ride share: Uber and Via; food delivery: Caviar and Grub Hub) adopted processes that show the users the benefits of their services, briefly present how the apps work, guide the users through the sign-up process, and set up the payment method. However, a myriad of variations on these steps are possible depending on the type of service, the user group, and both users' and businesses' first-use goals. Not only can the steps vary, but the strategies design teams can employ, and the different ways of implementing them, can also be very diverse. We assume that the various onboarding design decisions influence newcomers' experience and the likelihood of coming back and becoming consistent customers or members, contributing to future reuse.

1.3. IMPORTANT DEFINITIONS: Interaction, Engagement, and Experience

In this work, we use "initial interaction" and "first-time interaction" interchangeably to describe the first time users have contact with a product, app, or technology that offers a graphical interface. In our case, we mostly refer to the apps created by crowdsourcing and Cit Sci teams. This initial interaction starts when the users first launch the app and go through a process or flow that introduces the product for the first time, actively guiding and informing the new users, which we call the onboarding process.
We corroborate a few authors, including O'Brien & Toms (2008), in holding that engagement is a product of the interaction. Focusing on the initial interaction can be expected to give rise to early engagement as a quality of the users' experience with the technology. According to Doherty and Doherty (2019), engagement is often seen as a trait, a state, or a process by different lines of thought. However, engagement can assume all three descriptions when we look at the basic elements of interaction. First, engagement can be cast as a trait when it comes to the system's features, i.e., the system (user interface) can be considered engaging when it is designed to engage and has the capability of captivating users and addressing their motivations (O'Brien et al., 2020). Second, we can interpret engagement as a state ascribed to the user (O'Brien & Toms, 2008) that varies across time or changes in intensity. Third, because this variation happens over time, during the interaction between the users and the system, we can consider engagement as a dynamic process that has a beginning and an end. In our context of crowdsourcing apps, if the engagement process succeeds, resulting in a positive, pleasing interaction and ultimately leading to engaged users, we might affirm that the onboarding process also succeeded. Onboarding designs can assume multiple goals: sometimes they relate to completing a purchase in e-commerce systems; sometimes they simply aim at captivating the users and increasing the chances of reuse in the future. For Cit Sci, most of the time, a successful onboarding means engaged users who had a positive experience and will become contributors or active participants. Early engagement is also mentioned in this work to denote the engagement process that happens during the initial interaction, the first use of the system.
As a result, engaged users will carry higher chances of becoming participants and, consequently, re-engaging with the system in the future. We differentiate between the two terms discussed above, initial interaction and early engagement, and, equally importantly, we distinguish both from the term first-time user experience (sometimes referred to as FTUE by practitioners). Although many authors and designers have been using these three conceptions interchangeably, creating confusion or vagueness, we cast the users' experience as something that cannot be designed per se; it can, however, be supported by affordances (Norman, 1988). Designers deliberate on how technology products can help people achieve their goals and perform them with quality at sensorial and operational levels. From the users, we expect the need in question that must be fulfilled, which determines the product's functionality. Also ascribed to the users are the motivations and emotions involved in the interaction, which give meaning to the use and underlie the user experience. Experience is, above all, a subjective matter. It is built on perception, action, motivation, and cognition. It is also holistic, situated, dynamic, and worthwhile. In summary, we argue that the experience emerges from the interaction, in this case between the users and a device, as "the integration of perception, action, motivation, and cognition into an inseparable, meaningful whole" (Hassenzahl, 2017). As complex and not fully understood as it is, a positive technology-mediated first-time experience might shape future behavior towards the technology. 1.4. STUDY PURPOSE While onboarding has become a hot topic among commercial apps, crowdsourcing initiatives such as Cit Sci communities, which heavily depend on users' contributions through their platforms and on the adoption of the technology to survive and thrive, could benefit from implementing this process.
In the Cit Sci context, onboarding, including early engagement and initial interactions, has been less explored or, at least, less systematically studied and documented. Such investigation could offer a better understanding of how design decisions might affect Cit Sci projects' success. While addressing onboarding design and current practices, this work encompasses the initial interaction towards engagement as a more comprehensive and complex process. The onboarding practices found in the industry today are primarily seen in a commercial context, tied mainly to conversion, i.e., sales and ultimately profit as the main purpose, and limited to it. Therefore, onboarding as a reliable practice still lacks systematic investigation and clear definitions. The scarcity of systematic scholarly research on early users' experience and onboarding design calls for a theory-based conceptual framework that can serve as a stimulus and foundation for such research. By addressing early engagement as a broader and more complete perspective on the first-time user experience, in which the onboarding stages are seen as elements of it, this study takes into consideration what comes before it and after: motivation, personal interest, and the experience's effect on retention towards future re-engagement. Building from the extensively studied problem of engagement in the crowdsourcing literature, which remains a challenge for various communities, this inquiry investigates the weight of this early interaction with a new app and its impact on engagement. We focus on Cit Sci apps, for which populating the platform with active participants who contribute and engage with the community represents a significant issue. Technology is an essential element for many, if not all, Cit Sci projects today. The initial interaction that volunteers have when accessing Cit Sci apps gains relevance insofar as it speaks to the newcomers' motivations, promotes engagement, and boosts adoption.
Previous literature on Cit Sci participation has concentrated on the motivational factors involved in volunteering (Dickinson et al., 2012; Newman et al., 2012; Nov et al., 2011a, 2011b; Rotman, 2013; Rotman et al., 2012, 2014). Personal interest or intrinsic motivations were found to be the primary impetus for volunteers' initial participation. However, it is also known that Cit Sci apps and platforms are vulnerable to problems with technology, poor app usability, and lack of training in technology-mediated projects, which can ultimately negatively affect long-term participation (Rotman et al., 2014, 2012). Much is known about the initial motivations that lead volunteers to join crowdsourcing platforms at an early stage (Lampe et al., 2010; Nov et al., 2011a, 2014; Rotman et al., 2014). However, little research has been carried out on design strategies for technologies that mediate between participants and researchers toward active participation and long-term adoption (Aristeidou et al., 2017). Additionally, UI design and UX for Cit Sci apps have been marginally investigated, let alone the possible barriers for newcomers who desire to participate in scientific crowdsourcing projects (Rotman, 2013; Steinmacher et al., 2015). 1.5. RESEARCH QUESTIONS In general terms, the purpose of this dissertation is to elucidate how onboarding design strategies can improve engagement. Our response to this investigation contributed to better establishing knowledge on initial interactions and to defining what onboarding practices accurately entail, offer, and result in, particularly for Cit Sci, and how to better design engaging first-time user experiences for volunteers. Therefore, the main research question (Main RQ) this work addresses is: • (Main RQ) How can onboarding design improve user engagement in Cit Sci mobile apps, ultimately leading to higher chances of reuse?
Onboarding design refers to a set of design decisions that might shape how first-time users will interact with new technology. To improve, here, means to increase the number of participants who interact and return multiple times to the app among total users or visitors. Users' engagement in the Cit Sci context denotes the quality of the interaction and experience that the users establish with the system (O'Brien & Toms, 2008), which, if positive, might lead users to participate actively and encourage contribution. In the same context, reuse suggests that engaged users will likely come back and use the app again and again after the first access. Transforming newcomers into regular users is a great challenge for Cit Sci and other crowdsourced communities (Nov et al., 2014). This research is concerned with whether, among the endless ways of setting up an onboarding design, certain approaches can influence users' engagement to the extent of promoting system adoption and increasing future participation. The process of bringing new users to become part of a community, to become volunteers, or to convert them into subscribers of a product, for example, is interwoven with inherent user aspects (e.g., motivation) and the numerous possible design strategies and solutions. Thus, effective onboarding should not look like a black box that mysteriously produces satisfactory results and grows the number of users. For that reason, it is indispensable to comprehend which elements affect users' experience regarding engagement and the likelihood of future interaction. Likewise, it is imperative to go the other way around and perceive the users' attributes that play a role in this process and consider whether they are addressed by the UI and the system as a whole.
Looking closely at the available examples of onboarding processes implemented in various fields, we identified a common subset of components and systematized them to understand their purpose and relevance, so we could unveil how they influence first-time users. Thus, we unfolded the main research question into a set of sub-questions regarding each element of the early users' engagement process, whose discussion informed the main result: • (Sub-RQ1) How can we define onboarding and its components? • (Sub-RQ2) Which users' attributes and system characteristics interact and play a role in this initial engagement process? The sub-questions set the path to closely examine a neglected moment of users' interaction with technology: the beginning of a relationship that might determine platform success. For systems or online communities that depend on voluntary use and reuse of their apps, it is fundamental to critically consider how the first user experience affects their crowd engagement. The present research exposed the gap in understanding how the first-time user experience impacts long-term engagement and retention, especially in the Cit Sci realm. It also unveiled the scarcity of design resources and guidelines for designers and project managers in this space. Furthermore, we discussed the variety of onboarding setups and strategies employed by current Cit Sci online apps and other crowdsourcing communities. 1.6. RESEARCH DESIGN From a design practice perspective, the primary goal of this dissertation was to develop a model that operationalizes the onboarding conceptual elements. Hence, we strived to understand how design aspects and users' attributes influence this initial process and impact engagement. Braun and Clarke (2021) provide a useful principle for designing qualitative research: given that its methodological literature is vast and complex, research needs to be planned in advance so the researcher can assess how different choices will take them closer to their goal.
As Levitt et al. (2017) (apud Braun and Clarke, 2021) summarized this position: […] research designs and procedures (e.g., autoethnography, discursive analysis) support the research goals (i.e., the research problems/questions); respect the researcher's approaches to inquiry (i.e., research traditions sometimes described as world views, paradigms, or philosophical/epistemological assumptions); and are tailored for fundamental characteristics of the subject matter and the investigators. (pg. 5) The sub-questions listed in the previous section served as a guide for designing the research towards answering the main RQ. Consisting of a review and diagnosis of the current onboarding terminology in use and the missing definitions, Sub-Chapter 2.5 addresses Sub-RQ1 by offering an HCI terminology for onboarding. Chapter 4 presents descriptive research, Study I, which was the starting point to situate and level out the existing onboarding practices in the Cit Sci field. It was built on evaluation criteria derived from the literature and on the terminology and concepts proposed in Section 2.5. The study held a descriptive research character (DeCarlo, 2018), aiming to learn the variety of elements present in each app, the design features, and the interaction characteristics typically used in the Cit Sci domain. Taking a different approach, Study II delved into revealing how users actually handle apps for the first time. Again, the collected data and analysis helped shape the proposed framework described in Section 7.1. During this study, the users' perspective was brought to light, so the onboarding design elements became variables with the potential to influence users' experiences. Feasible and conceptually coherent methodologies were chosen to compose Study II. Following HCI and usability traditions, studies centered on the users' perspectives offered a range of advantages and were in line with our qualitative approach.
The data and insights that originated from this part of the study fed our response to Sub-RQ2, revealing relevant users' attributes and system characteristics that played a role during the initial engagement process in the Cit Sci realm. Nonetheless, as discussed in Section 6.1, an evaluation was considered necessary to reflect on possible practice improvements. Consequently, a current-practices study examined commonly adopted onboarding designs implemented in successful and popular crowdsourcing projects, constituting Study III. Study III was conducted similarly to Study I; however, the evaluation criteria were expanded, refined, and consolidated based on what was learned during Study II. The quest to answer the main RQ of this investigation drove all the efforts towards one central contribution: the operationalization of the onboarding elements. The elucidation of the components and moving parts (that is, structural elements, design aspects, user needs, motivations), coupled with the inner mechanics of early interactions and engagement, was crucial to unravel the events of first-time use by a newcomer. Thus, this framework is the first step towards a more robust, complete, and widely applicable onboarding operational conceptual process. 1.7. DISSERTATION STRUCTURE In addition to this first introductory chapter, this dissertation is organized into eight further chapters plus the Bibliographic References and Appendices. Chapter 2 consolidates an outline of the most relevant areas of study that contribute to framing our research problem: significant previous studies on engagement, motivation, and scientific crowdsourcing, as well as current information surrounding onboarding design practice in the industry, the UX community's take on the issue, and relevant history on the topic. We found it helpful to expose and justify the methodological choices carried throughout this research.
Hence, Chapter 3 offers a few considerations on the epistemology and ontology underpinning this work, supporting the evaluation processes and analyses we employed. Additionally, we situate our philosophical position within past and current HCI paradigms, intending to contextualize them and draw relationships among perspectives and historical viewpoints. Chapter 4 presents Study I, which was the starting point to situate and level out the existing onboarding practices in the Cit Sci field. It was built on evaluation criteria derived from the literature (Section 2.5). This chapter is divided into four sections: 4.1 Goals and limitations, 4.2 Selection of Platforms, 4.3 Evaluation criteria, and 4.4 Analysis & Discussion. Chapter 5 presents Study II, a user study in which participants were observed and interviewed while using four different Cit Sci apps for the first time. The methods employed were elaborated based on the literature and existing techniques. This chapter includes sections 5.1 Goals and limitations, 5.2 Selection of Platforms, 5.3 Use case scenarios, 5.4 Subjects recruitment, 5.6 Data Collection, and 5.7 Data Analysis. This last section presents the analyses for the four Cit Sci apps separately, besides two other subsections that detail 5.7.1 Types of data and 5.7.2 Methods, elucidating the methods used for each type of data collected. Chapter 6 describes Study III, an examination of the popular crowdsourcing app GoFundMe, divided into four sections: 6.1 Goals and limitations, 6.2 Selection of platforms, 6.3 Evaluation criteria, and 6.4 GoFundMe app Analysis & Discussion. Chapter 7 provides a general discussion embracing the three studies' analyses, drawing insights and comparisons. Finally, Section 7.1 offers a description of the proposed Model of User Onboarding for Citizen Science. Chapter 8 presents the conclusions and considerations, discussing the contributions to the field and reiterating the research problem and its goals.
It also includes the limitations of this work and suggests further avenues for investigation. 2. LITERATURE BACKGROUND 2.1. USER ENGAGEMENT In the past few years, the term engagement has appeared across various domains and practices and, according to the Handbook of Communication Engagement, has been utilized to address slightly different ideas with the common denominator of interaction (Johnston & Taylor, 2018). As the authors point out, given the abundant use of the term and the apparent interest coming from many fields, the study of engagement is important for developing better definitions and theory, because there is no unifying theory of engagement at the present moment. In the technology domain, engagement or users' engagement has become a buzzword typically accompanied by two other terms, UX and retention. Innumerable websites, blogs, tech and digital marketing entities, and their publications emphasize the importance of users' engagement for businesses and offer strategies and directions to app developers and business owners to maximize it. Many of the marketing and UX recommendations address aspects of product use and users' experience, promising to increase rates of user retention and, sometimes, better results in monetization or churn reduction. Churn is the rate of customers who abandon an app or product (Lin et al., 2011), causing companies to lose paying customers or simply leaving users inactive. Businesses are typically concerned with engagement levels because having engaged users is the first step towards acquiring a loyal audience and, for most of the industry, hopefully profitable results. Beginning with the definition of engagement, and trying to answer the simple question of what it means to build engagement or help users to engage, it is clear that there is no straightforward response. Engagement is a multidimensional concept that has been applied and explored in many fields, processes, and outcomes.
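Because churn and retention recur throughout this chapter, a brief sketch (our own, purely illustrative; the figures are hypothetical and not drawn from Lin et al.) can make the arithmetic behind these rates concrete:

```python
def churn_rate(customers_at_start: int, customers_lost: int) -> float:
    """Fraction of a period's starting customers who abandoned
    the app or product during that period."""
    if customers_at_start == 0:
        return 0.0  # no customers yet, so no churn to measure
    return customers_lost / customers_at_start

def retention_rate(customers_at_start: int, customers_lost: int) -> float:
    """Complement of churn: fraction of starting customers retained."""
    return 1.0 - churn_rate(customers_at_start, customers_lost)

# Hypothetical month: 500 active users at the start, 75 became inactive.
print(f"churn: {churn_rate(500, 75):.0%}")          # prints "churn: 15%"
print(f"retention: {retention_rate(500, 75):.0%}")  # prints "retention: 85%"
```

As the text argues, such counts are low-level manifestations: they indicate that users stopped or continued interacting, but say nothing about why.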
Although a unified definition is still lacking, for the present work we adopt two slightly different but complementary views of engagement, by Johnston (2018) and O'Brien & McKay (2018): Social level engagement is defined as a collective state of engagement that can be represented in behavioral forms (collective action, group participation), cognitive (shared knowledge) and affective forms (orientation, intention, and experience) and is an outcome of a dynamic, socially situated system. The notion of social level engagement is derived from the idea of collective action and outcomes. (Johnston, 2018, pg. 26) User engagement is a quality of user experience that is characterized by the depth of an actor's cognitive, temporal and/or emotional investment in an interaction with a digital system. (O'Brien & McKay, 2018, pg. 73) While tech products' creators, developers, and designers rave about engagement's importance and improvement (Johnston & Taylor, 2018), we observe that engagement actually lacks the metrics, tools, and variables that would allow an accurate measurement and analysis of progress and success. The authors tentatively organize three potential levels of engagement that might help us begin to understand how different actions and manifestations can be interpreted and utilized as sources for engagement metrics. However, the engagement metrics that academic studies and industry professionals suggest employing (number of likes, views, page visits, churn rate, and so on) are not that straightforward. Johnston and Taylor (2018) argue that these measurements are only low-level manifestations indicating that users are interacting with the product or content, nothing beyond. They also fear that engagement becomes an empty term, since it is being employed in so many circumstances as "counts and amounts of things" (pg. 7).
In a broader and deeper sense of engagement, the authors claim that there is room to improve these measures so that engagement might be weighed at a more complex level, taking into account indicators of action, change, impact, social capital, agency, and other outcomes. On the one hand, the current metrics cited above might not tell the complete story behind different engagement behaviors or provide more comprehensive explanations. On the other hand, we believe that, like any other quantifiable measure or variable, they do offer valuable data for designers, although they should not be considered sufficient to understand the macro context of an intricate scenario (O'Brien et al., 2020). Across the technology industry, especially SaaS, investing in successful user engagement strategies has become a trend as a way of transforming new users into loyal customers and then boosting retention. Moreover, user engagement strategies have rapidly come to be seen as a shortcut or a path to a greater end, to what matters most to many products and services in the tech industry: gaining more users, mostly paying customers, and increasing profit. This fits into what Johnston and Taylor (2018) call the instrumentalization of engagement. In fact, the early definitions of engagement were linked to individual outcomes and mechanisms, such as consumer education and employee contexts, where people would respond to stimuli and end up in an engagement state or not (Johnston & Taylor, 2018). Nowadays, engagement is seen more as a process "where meaning is created, or co-created, through communication" (Johnston & Taylor, 2018, pg. 19), situated in a social environment that originates in a certain nature and produces certain outcomes.
The functional aspect of engagement is supported by several works included in Johnston and Taylor's (2018) conceptualization of engagement, in which scholars have identified different themes that characterize this multidimensional term: "Underpinning all of these themes is the central role of communication in engagement: to create, nurture, and influence outcomes." (pg. 3) Corporations and businesses often seek customer or user engagement approaches as instruments for ultimately profitable purposes, while non-profit crowdsourcing platforms and other online communities expect different outcomes. These outcomes differ in purpose and focus, where engagement might be essential to nurture participation and motivation as part of the experience. Non-profit crowdsourcing platforms and online communities, such as Cit Sci communities, often desire higher member engagement with the social focus of engagement: its collective state (Johnston & Taylor, 2018), where people take collective actions, i.e., towards solving a problem together or helping a cause achieve a common objective, by participating in groups and communities with common interests, sharing and building new knowledge. In this context and in volunteerism communities, we might find that engagement can be seen both as a process, encompassing low-level indicators proposed by Johnston and Taylor, such as manifestations of activity, and concomitantly as a product or outcome of an experience that, among other components of interaction, comprises indicators of action and impact at a social level. Nevertheless, engagement can and should be measured by taking the low- and mid-level indicators into consideration as well as by looking at the higher-level outcomes. The reason resides in the fact that, at certain times and in certain situations, simple quantitative manifestations are all practitioners have, and these might help them predict outcomes in terms of impact.
We argue that the UX take on early user interaction should appropriate the low- and mid-level indicators, working them as cues and signs to understand the experience offered to consumers or users and, with that, better foresee high-level engagement goals, such as the real impact the community or product in question has produced. More than discriminating engagement indicators by level or quality, we believe in measuring them horizontally, in a continuous process of findings and improvements to the user experience. It is true that some indicators cited in the Handbook of Communication Engagement (2018), mostly the low-level ones, are easier to comprehend, define, measure, and gather. Other indicators are difficult to grasp and might sound intangible. Yet, when speaking about online communities and volunteerism, websites and mobile apps are the facilitators of engagement. In an effort to translate intangible measurements of visitors' level of engagement into more palpable data, behavioral interactions are used as indicators, and that is why many designers and teams are interested in collecting data analytics, such as click counts, page views, etc. That data might tell us about users' behavior, but it should not be enough to make assumptions about users' engagement. Communication scholars state that engagement is intimately connected with some sort of involvement, at both cognitive and behavioral levels. Applying this concept to the online volunteerism community context, it becomes clear that for this involvement to happen (Smith & Gallicano, 2015) a few factors should be present: connection, sense of presence, interactivity, and interest in the activity. The volunteerism behavior itself is seen as an engagement behavior that creates social capital. 2.2.
SCIENTIFIC CROWDSOURCING: ONLINE CITIZEN SCIENCE INITIATIVES Citizen Science is a form of collaborative research in which members of the public take part as collaborators in scientific investigations that demand massive amounts of data collected over vast geographic spaces or across time, work that is unsuitable for a regular group of scientists to carry out alone, including analyzing those immense data sets (Bonney et al., 2009; Law et al., 2017; Silvertown, 2009; Wiggins & Crowston, 2011). Its primary impacts have been seen in biological studies of global climate change as well as in subdisciplines focused on species (rare and invasive), populations, communities, and ecosystems (Dickinson et al., 2012). Technology can play a vital role in Cit Sci. Projects mediated entirely by information and communication technologies are becoming much more common. Technology enables project dissemination, volunteer recruitment, data collection and submission, and communication among participants. Because technology is one of the main factors responsible for the growth of this type of research (Wiggins & Crowston, 2011), the usability of these systems needs to be considered (Silvertown, 2009). From this perspective, user interface and interaction design must match participants' context of use, providing experiences that embrace more than good usability and compel the volunteers to participate. Technology should not be a barrier. Even though Cit Sci involvement is often related to volunteers' desire to connect with nature and enjoy outdoor activities (Cohn, 2008), the technological side of it can have a key impact on their commitment to a project, because platform and mobile app interfaces are often the gateway to participation (Preece, 2016, 2017).
Because Cit Sci relies on volunteers' willingness to get involved with a project's cause and engage in the necessary activities, understanding what motivates people to join, to contribute and participate, and to keep participating becomes an important, if not vital, issue for the sustainability of such online communities. Rotman and colleagues (2012) observed five significant factors that influence participants' motivations for joining scientific collaborations. They noted that initial interest was associated with an egoistic motive. The entry point into a project is often related to an opportunity for volunteers to expand knowledge, which would arise from feelings of familiarity, personal curiosity, or a desire to further build their careers. Even when potential volunteers are determined to join a project and become part of the community, the entry point constitutes a decision time, where obstacles to collaboration are liable to emerge. The authors emphasize that this moment deserves proper attention, since it is the initial encounter between volunteers and a scientific project. Motivational stimuli gain importance at that time, resulting in stronger and more sustainable collaboration between citizens and scientists (Rotman et al., 2012). Trust issues related to scientists' reactions to their contributions, or not feeling welcomed by the group, are some of the demotivating factors that can hamper potential participants. 2.3. MOTIVATIONAL THEORY Because participation and engagement are critical in processes that depend on people's willingness to join and continually make inputs, as with crowdsourcing and Cit Sci platforms, it is unwise not to think about what drives people to contribute, be present, spend their own time and effort, and get involved.
The psychological, sociological, and behavioral literature has underpinned countless studies, empirical research projects, and theories on what motivates people to contribute to a collective goal and on what can affect their participation. In the psychological literature, motivation has frequently been classified into two fundamental forms: intrinsic and extrinsic. Self-Determination Theory, a psychological framework for motivation and personality developed by Edward Deci (Deci, 1975) and Richard Ryan (Ryan & Deci, 2000), focuses on inner sources of motivation. The authors suggest that people are driven by the necessity to grow and gain knowledge through three basic needs: to acquire competence, gaining abilities in some tasks or learning skills; to feel relatedness, connecting and interacting with others; and to have autonomy, feeling in control of their actions, lives, and goals (Ryan & Deci, 2000). The authors further expand their previous study and distinguish between intrinsic and extrinsic motivation; what differentiates one from the other is where the drive comes from. Intrinsic, also called internal, motivations are those that stem from the individual and are not tied to external rewards (Paulini et al., 2014). They have their origin in personal interest and a desire to acquire new knowledge and explore topics of relevance that are usually attached to personal values and beliefs. According to Geiger, Seedorf, Nickerson, & Schader (2011), although intrinsic factors such as "passion, fun, community identification, or personal achievement" (p. 8) are quite difficult to manipulate, some literature does address ways and mechanisms of indirectly influencing people with certain incentives, including examples of attempts in crowdsourcing communities that promote idea or design competition activities.
Rotman (2013) also points out that in online contexts like these, many interactive aspects can work as stimuli and boost internal interests, that is, the appreciation of belonging to a community or experiencing some sort of emotional attachment, security, and efficacy. Extrinsic motivations, also called social or external factors, are those outside of the individuals, employed as physical or emotional rewards, providing pleasure or satisfying needs "that the task itself does not necessarily provide" (Rotman, 2013, p. 55). The author draws attention to different types of activities that require different motivational approaches. External and internal factors affect each other in a variety of ways, leading to a complex relationship between their effects and how both influence people's decisions to engage in activities and to act in a collaborative environment. Still, as this connection is not always clear, there is some criticism of this dichotomous approach. In the Cit Sci context, looking at what drives people to initial participation, Rotman (2013) and Rotman et al. (2014) found that personal interest is the main driver of becoming a volunteer. These intrinsic motivations can be easily weakened by problems with technology, poor usability, and lack of training in technology-mediated projects, also affecting long-term participation. The reasons people join Cit Sci projects have been the topic of many studies in the past (Dickinson et al., 2012; Newman et al., 2012; Nov et al., 2011a, 2011b; Rotman, 2013; Rotman et al., 2012, 2014). While several factors are involved in the decision to become a volunteer, the central motivation for involvement in Cit Sci activities has often been related to personal interest, curiosity, and fostering expressions of self-efficacy, as well as opportunities to expand knowledge. Paulini et al.
(2014) also use a psychological theory of motivation and address participation styles in online communities for volunteering and collective innovation, showing findings similar to Rotman's regarding how intrinsic motivations drive communities and can support long-term user involvement. The findings reinforce the relevance of the first-time user experience, demonstrating that participants show strong beliefs and excitement about their motivations when they are new to the website or app (Eveleigh et al., 2014). Furthermore, research has also been conducted at the other end of the participation process, revealing the motivating factors that affect volunteers' decisions to remain involved and continue contributing to a project. Seminal work has been developed by Rotman (2012) on how these initial motivations change over time and what other factors play a role in participant retention in long-term Cit Sci projects. Interested in how these motivations and engagement change over time, Preece and Shneiderman (2009) present a framework that describes four successive levels of users' participation and involvement in online communities. The reader-to-leader framework also identifies usability aspects of online platforms that might impact each category of participation and inform onboarding design. For example, for readers, the novice users at the lowest engagement level of the framework, usability factors like providing support such as a tutorial or demos filled with relevant content, coupled with consistent navigation, are beneficial to sustaining motivation. For the users to become contributors and reach the next engagement level, certain design features pointed out by the authors can help, such as requiring no registration or little effort for users to make small contributions. Sign-in or sign-up steps can illustrate such design features. 
Recommendations on how to design registration typically focus on the number of steps and the amount of work required for the users to go through. This is a design decision that needs to be made considering the context in which the platform operates. Often, online platforms and communities, particularly Cit Sci online initiatives, have their social dynamics built around gamification, which implies keeping track of users' contributions by using badges and leaderboards (Jay et al., 2016); thus, registration might be a necessary evil. While the use of gamified components in these communities has its own criticisms and downsides besides forcing account creation (Jay et al., 2016; Paulini et al., 2014; Prestopnik & Crowston, 2011), registration can be required for other reasons, for instance, to allow users to keep a record of their activities and a personal profile. This feature might be positive in some cases. Drenner et al. (2008) claim that although the registration step is often seen as an entry barrier that forces the user to sign up and give personal information (a sign-in wall, for example), it might be connected to commitment levels in some cases. Asking for more effort and time from users generates higher attrition rates when joining a community (Cox et al., 2016; Kraut & Resnick, 2012). Once the users join the community, commitment becomes crucial to the success of the app. Studies concerned with the problem of suboptimal participation have described the challenges of making members start to contribute and move away from the free-riding stage, avoiding social loafing (Beenen et al., 2004; Preece & Shneiderman, 2009). According to Beenen et al. (2004), setting specific goals regarding contributions and actions toward the community can leverage users' contribution rates. These results are informative for onboarding; once the users are done with registration, it is essential that they receive enough guidance inside the app to build engagement from the beginning. 
The Informational Support (InS) element performs this role by making information available, showing directions, and helping the user navigate the app.

2.4. BEHAVIORAL THEORY

Behavioral theories have been widely employed in HCI research, often related to behavior change or new behavior adoption (Fogg, 2002, 2009; Hekler et al., 2013). The Fogg Behavior Model (Fogg, 2009) provides a useful perspective on what influences users beyond personal motivations, considering two additional factors: ability and triggers. When it comes to becoming active users who make frequent contributions in a community, inner motivation and personal interest may not be sufficient. If the features of the platform are too complicated, the users may abstain from participating. They may have the desire to participate but not the ability to do so, or they may not have received the necessary training. A trigger, as defined by the Fogg Behavior Model (Fogg, 2009), can work as an activating signal. It can, for instance, remind the users that they never finished filling out their personal profile and should complete it, since this will help them connect to other participants with similar interests. Each of these three factors (motivation, ability, and triggers) consists of components that affect behavior in various ways. Motivation is related to pleasure and pain, acceptance and rejection, and hope and fear. Example components of ability are time, money, and social acceptance. Triggers are arranged in types: spark, facilitator, and signal. Applying this behavioral model to the design of onboarding may be helpful in different steps of the process. It can provide insights into how to encourage the user to reach the call to action, identify the best timing, and detect possible barriers that make the target action difficult.

2.5. ONBOARDING

2.5.1. 
Current Meanings and A New Definition

The term onboarding has been commonly used in industry to describe a socialization process that occurs in various contexts, meaning slightly different things. In the organizational management literature, onboarding refers to the socialization process new employees undertake to familiarize themselves with the company and its members, often called new hire onboarding. This mechanism is frequently associated with strategies to better establish a lasting bond between the employees and the company (Snell, 2006). It is characterized as a way to help "newcomers become integrated members of their organization" (Fagerholm et al., 2013, p. 55). Still in a business and commercial context, onboarding strategies are also employed to successfully integrate new clients, as in new customer onboarding. It encompasses interactions with an organization focused on improving the customers' experience and fostering business relationships. At the intersection of Human-Computer Interaction (HCI) and industry, we find a third use of the term, known as new user onboarding or simply user onboarding. Here, customers are the users, and the product is a type of software, platform, or online service. In the narrow scientific literature on this topic, including blogs and magazines published online by the UX practitioner community, onboarding processes are typically linked to marketing and business cases, reported as empirical examples of how a particular design works for a specific product (Waldron, 2015; Zambonini, 2014), or based on anecdotal data and personal analysis. The available online content generated by the UX community, despite being abundant and valuable for practitioners, is still limited, as it does not present generalizable guidelines or structured information that is replicable. 
The existing literature lacks a formal definition of onboarding for HCI and falls short of identifying clear components that might constitute this process. User-onboarding practices serve various purposes, to name a few: to inform the users about the benefits of using the product or to offer instructions and guidance to use the platform, with the general goal of acquiring new customers and retaining them. Examples of products that employ onboarding strategies are largely found in the SaaS industry, including subscriptions to access digital products, for instance, Google's e-mail service Gmail, the music-streaming app Spotify, and the video chat and voice call software Skype. Revenue models of products like these result from the way cloud computing is shifting how products and services are commercialized and adopted by consumers (Ojala, 2013). With the development of cloud computing, especially the growth of SaaS in the last decade, and the urge to acquire paying customers, user-onboarding practices have proliferated, focusing on leveraging users' adoption and retention. Particularly for subscription-based businesses that demand recurring membership or license renewal for revenue generation, the main reason for investing in an efficient user-onboarding setup is to improve retention and, therefore, decrease the customer churn rate, which is the percentage of subscribers to a service who discontinue their subscriptions within a given timeframe. Many companies also choose to offer a free-trial period, during which users have the chance to use and test the product, with partial or full access to functions, before making any payment. In cases like these, designing the first-time experience becomes even more critical, since it can influence how new users perceive the product and the value of the service before they decide whether to buy. 
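The churn-rate definition given above can be made concrete with a minimal sketch. This is an illustrative calculation only; the function name and figures are hypothetical, not taken from the dissertation or any analytics product.

```python
# Hypothetical sketch of the churn-rate definition in the text:
# the percentage of subscribers who discontinue within a given timeframe.

def churn_rate(subscribers_at_start: int, cancellations: int) -> float:
    """Churn rate (%) over a period = cancellations / starting subscribers * 100."""
    if subscribers_at_start == 0:
        raise ValueError("no subscribers at period start")
    return cancellations / subscribers_at_start * 100

# 50 of 1,000 subscribers cancelled this month -> 5% monthly churn.
print(churn_rate(1000, 50))  # 5.0
```

Lowering this percentage is precisely the goal that motivates investment in onboarding for subscription businesses.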
While it is difficult to trace precisely when the term onboarding was first introduced into the HCI context, the connection between the business literature and the meaning adopted by the UX community is clear. A formal definition of onboarding in the HCI/UX academic setting is still needed, in addition to a precise outline of its elements. Based on a few authors' works, current informal descriptions, and current industry usage, a new definition for onboarding can be offered here:

Onboarding Proposition 1: User Onboarding is a conceptual process of turning first-time visitors into long-term users, communicating how the technology works, establishing its value to the user, and scaffolding the first experience by providing a sense of direction towards a pre-determined goal.

This definition is purposefully broad so it can be useful for various areas of UX activities and different app fields besides Cit Sci. The definition proposed here may apply to other types of apps, platforms, and online services and can be a starting point for further lines of research. Nonetheless, further investigation should be carried out to address the particularities of each field, which are not part of this work's scope.

2.5.2. Framework and Terminology

While user onboarding strategies comprise a relatively unexplored topic in HCI and UX research, a few studies have addressed this subject. The following subsections describe the onboarding elements that were elaborated based on material found in the academic and informal literature, coming mainly from industry. At the end of this section, we propose and name the four major elements that constitute the onboarding process. 
Both the proposed framework and the terminology are specific enough to differentiate and define this process against other types of interaction sequences or practices applied to digital products, and broad enough for the terms to be useful and relevant to other fields, not only crowdsourcing and Cit Sci systems, thereby advancing UX studies. In the field of educational online apps, Renz and colleagues (Renz et al., 2014) discuss onboarding specifically for massive open online courses (MOOCs) and break down the process into three main phases: a) sign-up and registration; b) help and support, where the intention is to motivate the user through techniques like gamification; and c) reengagement, which attempts to bring back inactive users through various means, such as sending notifications via e-mail. Lists of components and recommendations have been generated by the UX community and are often available in online publications, blogs, magazines, and reports (Gupta, 2016; Hess, 2010; Hulick, 2014; Munger, 2014; Singer, 2011; Waldron, 2015; Yadav, 2012; Zambonini, 2014). Although not peer-reviewed or scientifically backed up, this content comprises constantly produced and updated resources on onboarding strategies largely used by designers and professionals from industry. The available content on onboarding is mostly directed and tailored towards a particular field of industry, for example, tips on how to nudge first-time customers to make a purchase in mobile apps, which implies that the elements or interactions that compose the onboarding process are intimately connected to the type of service, purpose, and audience. Nevertheless, it is necessary to identify the onboarding parts that are common among different types of systems; to differentiate patterns and strategies from major elements; and to investigate elements' functions and their effects. 
In an article in Smashing Magazine, Satia (2014) presents a few elements and common implementations of onboarding in commercial products, looking at cases of use and how they make sense together. A similar effort has been made by designers like Samuel Hulick, who has published numerous analyses of commercial apps' onboarding on his website, not systematically, but using his own set of criteria and opinions on the design. Common interaction resources seen across many apps' onboarding flows result in what are known as onboarding patterns. UI/UX or GUI patterns, in general, are proposed solutions for recurrent problems and challenges designers encounter when creating new products. Patterns are established ways of solving common design issues; for example, limited space for a long menu of actions can be resolved using a sliding drawer or hamburger menu. Without consensus, some authors have proposed lists of patterns and different classifications in order to organize the existing or most used ones, such as Perea & Giner (2017); more specifically on onboarding practices, Balboni (2019) reports finding eight different UI/UX patterns after analyzing over 500 user onboarding experiences. In an effort to map the current existing content, the following Table 1 organizes how different authors from industry and academia have addressed or structured the onboarding process. Visibly, there is no common ground among authors on how onboarding practices should be approached or addressed. Only two sources addressed the onboarding process as having main elements or components (Renz et al., 2014; Waldron, 2015), even though each author is focused on a particular field: onboarding for online education platforms and for general online commercial products, respectively. Other authors tackle onboarding as a more fluid process and neither address nor define key parts of it, focusing mostly on tips and generalist strategies that could as well be applied to other user flows. 
Some writings mix up what could be a step of onboarding (e.g., how to best design Blank Slates) with a not-so-clear idea of "being empathetic with users" (Portman, 2017, blog post), which is usually addressed as a strategy or an attitude towards the user (Gasparini, 2015; Plattner, 2010), rather than a structural element of the flow. Blank Slates, in turn, are empty portions or pages in the app that have not been populated by any user-generated content; since the users are accessing it for the first time, there is no activity yet. Portman is referring to how to better design the layout of those empty spaces so it works in favor of the users' motivation, encouraging first actions.

Author/Source | How onboarding is approached or referred to | Parts/Elements/Types
(Renz et al., 2014) | Formed by five elements | Login and Registration; Demo course; Platform Tutorial in the course context; Public Sessions; Welcome Mails.
(Satia, 2014) | Comprises three techniques | Benefits-oriented onboarding; Function-oriented onboarding; Progressive onboarding.
(Balboni, 2019), Appcues Blog | Defined by eight UX/UI patterns | Welcome messages; Product Tour; Progress Bars; Checklists; Hotspots; Action-driven tooltips; Deferred account creation; Persona-based user onboarding.
Material Design from Google ("Onboarding - Material Design," 2015) | Three onboarding models | Self-select; QuickStart; Top user benefits.
(Waldron, 2015), Net Guru Blog | Consists of four key elements | Introducing The Product; Signing up; Encouraging the use of the product; Generating Leads.
(Portman, 2017), InVision Blog | Fundaments of onboarding | Self-evidence; Empathy; Decomposed Intro Tour; Optimized Setup Flow; Blank Slates; Content as a Tutorial; Gamification; Lifecycle e-mails.
(Cook, 2015), Telepathy Blog | Various onboarding approaches | The Joyriding Approach; The Do Something Approach; The Setup Approach; The Everything at Once Approach.
(Mullin, 2019), CXL Blog | Types of onboarding flows | Benefit-Focused; Function-Focused; Doing-Focused; A combination of all.
(Shad, 2018), Userpilot Blog | Types of onboarding | Benefit-Focused; Function-Focused; Doing-Focused; Account-Focused; All.

Table 1: The various sources of onboarding guidelines, organized by author.

In the following section, an HCI terminology for onboarding, supported by extensive references to the research literature, is proposed with the goal of helping researchers, designers, and developers design and build initial interactions in a more conscious way, comprehending how motivational factors impact users' engagement and how features and elements can help them move along without leaving the process. Within our definition reside clear boundaries of when, in the users' interaction process, the onboarding starts (when the users first launch the app or access the platform for the first time) and when it ends (with the users taking the critical action that culminates in the main conversion event). From the beginning to the end of the process, variations in steps can occur and should be designed with specific goals suited to the intended audience. Therefore, based on the literature review and the discussion above, we were able to identify the four essential elements that constitute an onboarding process. Other additional steps and parts found in articles and UX literature can be categorized and grouped into one of those four elements. Some authors include re-engagement strategies and actions, forming a fifth step in the onboarding (Portman, 2017; Renz et al., 2014). However, re-engagement approaches are not seen as part of the first-time user experience in the present work; although they may be set up when users first join the system, such as notifications or email alerts, they rarely occur during onboarding or before the user disengages for the first time. 
Onboarding Proposition 2: The onboarding process is composed of four basic elements: Statement of Purpose, User Identification, Informational Support, and Conversion Event.

2.5.3. Onboarding Elements: Statement of Purpose

The onboarding process begins when first-time visitors access a website or launch an app for the first time. At this point, most systems strive to demonstrate the usefulness of the service/product/technology by providing a clear explanation of the core features and values. This includes communicating upfront and succinctly what the app is and what its main benefits for potential customers/members are, and clarifying what to expect. When the purpose of the app or community is not visible or clear, first-time users will have to spend more energy and effort finding out exactly what its objectives are. In the crowdsourcing context, goal clarity (de Vreede et al., 2013) is an important component of participants' engagement in a cause. It refers to the activities, tasks, and objectives given to participants, and how clearly they are stated and defined. For citizen scientist volunteers, not only are the clarity, design, and outline of the tasks important; other aspects should also be taken into consideration, namely: what the overall goals of the project are, or what mission that particular group of scientists is interested in; how the collected data will help the cause; and how data will be used, shared, and contribute to a greater end. Several Cit Sci studies have revealed that it is important for volunteers to learn how their efforts will make an impact on the project, contributing to the cause or to future scientific discoveries (Alender, 2016; Preece, 2016). Technology and public participation are essential elements for most, if not all, Cit Sci projects in our current era. 
Becoming familiar with and learning to use online platforms (e.g., website portals and mobile apps) play a great role in how much effort and time a volunteer is willing to contribute to a project.

2.5.4. Onboarding Elements: User Identification

Online registration plays many roles in websites and online apps. It can collect information about users, typically in a form structure, asking for their e-mail addresses or phone numbers, zip codes, names, and so on. It can request username creation and a password. With that, a service provider can create a user identity that links all content generated by each user, and it can serve a variety of marketing purposes such as profiling, targeted advertisement, and content and communication channels (Malheiros & Preibusch, 2013). Registration sometimes includes steps to collect preferences and interests, as in the following example. The Duolingo app asks users to choose a learning goal prior to sign-up. It also offers users multiple options for responding why they are learning a new language. Using this resource does not imply that it needs to happen during the UId stage, but that is the most common setting, and it constitutes a design pattern called self-segmentation. Creating an account is the last step of this first contact with the app. For crowdsourcing sites, user identification allows them to monitor submission quality and frequency per user and allows users to access their contributions or website activities in the future. Gamified platforms oftentimes require registration in order to retain users' scores based on participation, present contribution rankings, and offer users future rewards for their work (Jay et al., 2016). 
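As a rough illustration of the self-segmentation pattern just described, the sketch below mimics a Duolingo-style goal-and-reason step; the option lists, function name, and profile fields are all hypothetical, not taken from Duolingo's actual flow.

```python
# Hypothetical self-segmentation step: during User Identification, the app
# asks newcomers to pick a goal and a reason for joining, then stores the
# answers on the new profile so later content can be tailored to them.

GOAL_OPTIONS = ["casual (5 min/day)", "regular (10 min/day)", "serious (20 min/day)"]
REASON_OPTIONS = ["travel", "career", "school", "fun"]

def self_segment(goal_choice: int, reason_choice: int) -> dict:
    """Build a minimal profile from the newcomer's onboarding answers."""
    return {
        "daily_goal": GOAL_OPTIONS[goal_choice],
        "reason": REASON_OPTIONS[reason_choice],
    }

# A newcomer who picks a regular goal and is learning for travel:
profile = self_segment(1, 0)
print(profile)  # {'daily_goal': 'regular (10 min/day)', 'reason': 'travel'}
```

The design choice here is simply that the segmentation answers become part of the stored identity, which is why the pattern most naturally sits inside the registration stage.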
Users are more likely to persist with their first-time experience after this step when they are able to grasp the value of the service/technology right before signing up and still believe that the potential benefits outweigh the effort it takes to fill out a form or give personal information (Malheiros & Preibusch, 2013). Regarding registration placement and UI design strategy, although past studies have tackled these issues in different scenarios and contexts, Cit Sci initiatives still lack specific examination.

2.5.5. Onboarding Elements: Informational Support

In commercial and non-commercial apps alike, information regarding the app's mechanics, rules, and feature functions can assume many formats, exposed during the first use as tooltips, inline hints, pop-up windows, quick tips, and explanations. Informing or helping the users perform tasks on the go increases the chances of success and makes the users less prone to encounter doubts and problems. This approach is called proactive help (Joyce, 2020), and it is imperative for onboarding success. Beyond being an essential element during onboarding, providing users with information that helps them understand the system and how they should interact with it, the necessity of documentation is stated in the 10th usability heuristic:

Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large. (Nielsen, 1994b, p. 156)

Another, and more conventional, form of including informational support is including documentation in the system, usually organized in manuals and tutorial texts. This practice dates back to the time when software was always sold accompanied by a paperback manual to help new users learn the mechanics of the product. 
In a time when technology, and therefore apps, were not as user-centric as today, documentation was essential to convey the engineers' reasoning on how the product works and should be used (Pogue, 2017). Over time, documentation took the form of digital manuals, online instructions, FAQ pages, or About sections. In scenarios like these, users are required to actively look for links to access directions and Help content that is not explicitly presented during the tasks. The problem with this approach is that users have to look for information every time they have doubts or encounter problems. This configures reactive help: users run into a difficulty and must seek out assistance to address it, so systems rely on users' disposition to look for the informational support they need. To prompt the users to actively engage with the system, whether it is making a contribution, as in crowdsourcing platforms, or subscribing to a service, as in commercial apps, users often need guidance and instructions on how to perform such tasks. Strategies on how to best present that type of information (when and where in the interaction) can vary; a few examples are tours, demos, tooltips, videos, and tutorials. At this stage in the onboarding, there is also an opportunity to clarify community rules and expected behaviors. For Cit Sci apps, this step in the onboarding process might present an opportunity to inform users on how to collect quality data or use certain features. We call this step "informational support", since it can provide the basics for novice volunteers to start using the app and familiarize themselves with the features, the UI, tasks, and protocols. 
This step can offer the minimum information necessary so users can successfully interact with the app and grasp its value. Including an informational support strategy helps the users build an understanding of many aspects of the system, usability-wise, such as how to post contributions, comments, and questions. For Cit Sci initiatives, data quality and reliability are an important issue; thus, instructing newcomers on how to better collect data and use equipment, for example (Cohn, 2008), becomes a key point to be communicated. Going back to Nielsen's usability heuristics, interestingly, the 10th states the need to include help and documentation in any system. The InS construct plays this part in the onboarding, providing users with information that assists them in understanding the system and starting to interact with it.

2.5.6. Onboarding Elements: Conversion Event

As discussed earlier, the term conversion has been largely used by the UX community in reference to the moment when a user becomes a customer or a paying member (Kuan et al., 2005). Apart from the commercial and SaaS context, conversion can mean any desired action that system or app providers aim for users to perform. While conversion events are not necessarily sales, product managers and owners tend to define them as important actions that can be rated and counted, therefore helping to measure and track users' engagement. It is not uncommon to see key performance indicators (KPIs) being used as conversion rates, such as signing up for a paid or free subscription, or downloading a free trial of a digital product (Nielsen, 2013b). Another example would be the number of clicks on the "read more" button of a news portal, along with measuring how much time users spend on the article page. Users might click on that button and not read the rest of the article, or leave the page open and disengage from the content for a few minutes, 
which can be difficult to ascertain; still, UX researchers can identify relevant cues and measurable variables and define them as performance indicators, ultimately using them as conversion events. In the Cit Sci context, a usual conversion event would be a contribution placed by a member: an action that is relevant to the community, represents users' participation, and can express engagement, which is often highly desired by the project owners. Although conversion events do not occur exclusively during onboarding (users are constantly motivated to take actions), it is usually during that first experience that product teams define the initial conversions they desire the user to make, which also serve as measures of engagement or activity rates. Identifying and defining which conversion events are significant for a community or app and will be tracked is a major tool for designers, UX and marketing teams, and product owners because, when properly interpreted, they provide evidence of what in the users' interaction is working well and what is not. When retrieving conversion data, the project team can look into several measures with different goals to inform diverse aspects of the design. For example, a Cit Sci online community can consider the number of contributions as a conversion event. The team can measure how many contributions are posted each day, how many are made by novice users, how many users contribute just once, how many contribute every week, and so on. In a study that examined the ability of a marketing campaign to drive recruitment for the Cit Sci project Season Spotter, Crall et al. (2017) defined conversion as the number of times users landed at SeasonSpotter.org, either indirectly through SciStarter or through any of the tested marketing strategies. Given the importance of the conversion event, designers often employ strategies to lower barriers so that an easy, frictionless flow can lead or persuade users to perform the indicated task. 
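The contribution-based measures described above (contributions per day, contributions by novice users, one-time contributors) can be sketched from a log of contribution records. This is a hypothetical illustration: the record format, field names, and sample data are invented for the example.

```python
# Illustrative sketch of contribution-based conversion measures for a
# Cit Sci community. Each record is (user_id, day, is_novice) -- made up.

from collections import Counter

contributions = [
    ("ana", 1, True), ("ana", 2, False), ("bo", 1, True),
    ("cy", 2, True), ("cy", 2, False), ("dee", 3, False),
]

per_day = Counter(day for _, day, _ in contributions)          # posts per day
novice_total = sum(1 for _, _, novice in contributions if novice)
per_user = Counter(user for user, _, _ in contributions)       # posts per user
one_time_contributors = [u for u, n in per_user.items() if n == 1]

print(per_day[2])                     # contributions posted on day 2 -> 3
print(novice_total)                   # contributions made by novices -> 3
print(sorted(one_time_contributors))  # users who contributed once -> ['bo', 'dee']
```

Measures like these are exactly what a project team can track as conversion events to compare design alternatives for the onboarding flow.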
Conversions can happen "anytime, anywhere" during product usage, depending on what events the product team is looking to optimize and attract users to take action on. During onboarding, it is no different. In fact, most onboarding processes aim to guide users to a desired action, which newcomers should complete during their first visit, before leaving the system.

2.5.7. A Note on Reengagement

Although some authors mention reengagement strategies as part of the onboarding steps, the present model will not consider reengagement an element. Reengagement features usually assume the shape of notifications or other types of communication with the goal of retaining the new users and motivating them to come back and use the system multiple times (Renz et al., 2014; Segal et al., 2016). Some designers argue that reengagement can be considered a form of continued onboarding, which can include external actions that take place outside the system, such as e-mails or text notifications. But it can also consist of internal events, such as in-app notifications and tooltips in future visits, when users receive advanced cues and gain experience to grow their engagement or commitment. O'Brien and Toms (2008) differentiate reengagement in two ways: it can happen in the short term (during app usage, users disengage from a task and then come back) and in the long term, when the engagement is over and users only return to the app in the future. For our purposes in this work, the latter is considered.

2.6. SUMMARY

Although this work looks at the very first experience with apps and will not be assessing future interactions between users and those apps, it is relevant to investigate whether, based on the first-time experience, (1) users find themselves drawn to what they have seen, and (2) their impressions of the first-time experience could correctly predict the chances of them adopting the apps and becoming regular users. 
We believe the onboarding design has the capacity and weight to influence those outcomes. Generally, there is no predetermined order assigned to these onboarding elements during the first-time users' experience, nor is there one sole prescription for which strategies and interactions should be used. This terminology should serve as a foundational guide that helps designers, managers, and product teams address and examine their current onboarding practices, and as a framework for building new ones. Starting with these four structural components in mind, teams can move forward on understanding how the users' context and the system's goals affect first-time users' experiences. Although the process of onboarding is broken down and organized into the four elements (Statement of Purpose, User Identification, Informational Support, and Conversion Event), they should not be viewed as a recipe to be applied indiscriminately to every context, leading to infallible success. These four elements are constructs that should be present but manipulated and designed according to the technology's purpose. They can be developed and designed in several forms to fit the technology's needs, audience, and context. There are numerous ways of designing the user identification step, for instance, and how the scope and the project requirements intertwine with available technology and adequacy can vary and be tailored to its purpose, as in any other design project. An example would be designing an onboarding process for an app that requires users to take repeated measurements or observations at a water stream close to home. Users would probably expect to have an account to keep track of their contributions and recent activity, so a registration step needs to be present. Users would also want to be able to log in quickly or save their login information after registering.
Tutorials and instructions might be necessary on first access but should also remain available later and follow some sort of progression in which users can refine their skills and learn in the process. Through user research, the product team can find out which motivational factors impact their audience's decision and likelihood of using the app, and should transfer those findings to the design of interactions and features, aiming to build engagement and participation. It is a design job to understand what needs to be included in the app, how to present it to users, and to test the best solutions; after defining key events to be performed during onboarding, the product team will be able to gather metrics and measure the design's outcomes. Product teams can implement different strategies for each step, depending on their needs. As Table 2 shows, a variety of design patterns and strategies are available for designers to compose their onboarding designs, found in the innumerable UX publications, blogs, and other industry sources of information (as seen in Table 1). The following table organizes and summarizes patterns and their possible applications at each of the four elements of the onboarding. It also illustrates some of the most common pre-designed solutions that can be employed during the process.

Statement of Purpose
Common UI design patterns employed across the industry: Welcome messages; welcome mails; introductory slideshow; video; microcopy.
General approaches: Benefits-oriented onboarding; function-oriented onboarding; progressive onboarding (Satia, 2014). The Joyriding Approach; the Do Something Approach; the Setup Approach; the Everything at Once Approach (Cook, 2015).

User Identification
Common UI design patterns employed across the industry: Lazy registration or deferred signup; optimized setup flow; social login; email verification.
General approaches: Account-focused; self-evidence (Mullin, 2019). Quick start model; self-select model; top user benefits model (Google, 2014).

Informational Support
Common UI design patterns employed across the industry: Product tour; decomposed intro tour; content as a tutorial; introductory slideshows; wizards; UI tours; walkthrough/playthrough; blank slates; placeholders.
General approaches: Persona-based user onboarding (Saez, 2016). Proactive onboarding; reactive onboarding (Shad, 2020).

Conversion Event
Common UI design patterns employed across the industry: Call-to-action button; paywall; freemium; taximeter; time limits; bulk sales; sale by the piece; monetary exchange; permission; social sharing; lead; questionnaire.

Patterns that can be used within the entire onboarding: Progress bars; checklists; hotspots; action-driven tooltips; gamification (Renz et al., 2014; Seaborn & Fels, 2014; Shad, 2020; Toscani et al., 2018); inline hints; coach marks.

Table 2: Various patterns and strategic approaches organized by the four onboarding elements.

3. METHODOLOGICAL CONSIDERATIONS

Etymologically, epistemology means the study or reason (logos) of science (episteme). Also called Philosophy of Science, it concerns the study of knowledge and reason, truth and belief, evidence, and the reliability of the various sciences. To put it simply, it consists of understanding how we know what we know and how trustworthy this way of knowing is. This branch of Philosophy concerns how we can produce knowledge about the world (Fryer, 2021). Thus, epistemology is a systematic and reflexive study of knowledge, its organization, formation, development, functioning, and intellectual products. Among the many traditionally coined views on producing knowledge, some epistemological perspectives grew stronger in HCI. Historically, according to Frauenberger (2016), HCI stemmed from a more positivist (or post-positivist) approach inherited from its engineering roots. Since then, this empiricist way of conducting research and producing science has permeated human factors and ergonomics.
Nevertheless, human factors brings together two disciplines that work in different ways of interpreting the world and creating knowledge, resulting in a conflict of paradigms and, possibly, of ontological views. This ambivalence has challenged the scientific community and shaped the HCI spectrum of epistemological perspectives, which is in constant evolution. A significant obstacle with human factors' engineering past and psychology's paradigms and subjectivity is that the traditional search for causal relationships among elements, allowing prediction and uncovering patterns, posed by positivism does not work adequately for many of the social aspects involved in human factors. The HCI field has already experienced a few shifts in epistemological approaches, resulting in an expansion of their multiplicity. Bødker (2006), Feast (2010), Harrison et al. (2011), and Kaye (2009) state the three main HCI paradigms: the first draws from engineering and human factors as described above; the second is linked to cognitive science. "The focus was on groups working with a collection of applications. Theory focused on work settings and interaction within well-established communities of practice. Situated action, distributed cognition, and activity theory were important sources of theoretical reflection, and concepts like context focused on analysis and design of human-computer interaction. Rigid guidelines, formal methods, and systematic testing were mostly abandoned for proactive methods such as various participatory design workshops, prototyping, and contextual inquiries." (Bødker, 2006, pg. 1) Emotions and experiences are central to the third wave. Bødker revisits this theme in an article in which she debates the third wave, which pivots around "how design may utilize the bringing together of technologies, experiences, and users across domains..." (Bødker, 2015, pg. 27).
However, while different approaches to HCI may live in harmony, Bødker states that the second and the third wave seem to be stuck on either side of the divide between work on the one side and leisure, arts, and home on the other: between rationality and emotion. Sampson (2019) addresses the HCI third wave as a "transition from a cognitive theoretical frame to a phenomenological understanding of user experience..." (pg. 3), which Harrison et al. (2011) define as the phenomenological matrix. According to this view, there is interest in the role of emotions, feelings, and affect throughout the users' experience. Sampson (2019) points out that research is now concerned with investigating further pervasive contexts of computing use. However, he draws attention to understanding how emotions and pervasiveness connect and work with the experience economy. The author goes further and affirms that "a critical approach needs to explore the role market logic plays in putting user experiences to work." (Sampson, 2019, pg. 1) A large chunk of these discussions on how HCI should be approached, i.e., on diverging or competing epistemologies, boils down to the fact that the variety of metrics, methodologies, evaluations and, more importantly, ways of legitimizing knowledge is challenging to harmonize. The present work adopts critical realism as a philosophical position, originating from realism and subjectivism. Critical realism frames and builds on both of its "opponents": it acknowledges that the world is real and that knowledge production is fallible and theory-dependent but not theory-determined. "It's also happy to say that meaning and discourse are important. Still, they're not the only things that exist." (Fryer, 2021, pg. 17). In agreement with the author, we believe that positivism and the like "offer a very shallow perspective on causation.
Conversely, constructivism encourages researchers only to consider meaning and discourse: we must go further to look at causes, social structures, and the impact of discourse." (Fryer, 2021, pg. 19). Along this line of thought, Frauenberger (2016) proposes a critical realism for HCI with the aim of "re-framing some of the dilemmas and apparent dichotomies that seem to define current discussions" (pg. 16). He argues to "highlight possible ways to reconcile the many practices, theories, and underlying philosophical stances that are generally believed to be HCI, in a multi-faceted, but conceptually coherent way." Our philosophical position informs the choices of our research approach, strategies for data collection, and analytical methodologies. The philosopher Roy Bhaskar, the proponent of the critical realism movement, argues that the way controlled experiments are carried out affects the behavior of the object under examination, making it unnatural. According to this movement (Collier, 1994), behavior is determined by the interplay of many mechanisms in the real world, an inherently open system. Frauenberger (2016) claims, however, that these conditions are not failures when designing an experiment; rather, they provide the opportunity to discover and grasp particular aspects "of a real thing" (Frauenberger, 2016, pg. 7). What we discover are tendencies of mechanisms which are the causal powers of real things. These tendencies are more than statistical probabilities as they are related to things and mechanisms rather than a sequence of events. In contrast to empiricism, this understanding does not hinge on the pretended objectivity of observable data but recognizes human reason as a central tool to produce knowledge about mechanisms. (Frauenberger, 2016, pg. 7) According to Crotty (1998), methodological logic and criteria are grounded in the philosophical position.
Therefore, agreeing with Crotty, our methodological choices, procedures, and the research design as a whole were conceived to comply with a critical realist perspective. With that in mind, we decided to expose the work's ontological and epistemological assumptions early in this section, since they drive many research choices. Details on techniques, data analysis methods, instruments, data collection, and more are located separately in each study section. The three sub-questions presented in Section 1.5 pivot around one central problem, summarized as: how to design an onboarding that leverages engagement? Our methodological choices were also underpinned by the interdisciplinary quality of HCI, which plays a significant part in its paradigmatic shifts, the reason behind the whole contention among different views in the first place. More than multidisciplinary, HCI and Design in general are considered interdisciplinary, or, at least, that is what designers and researchers should aim for, according to a few authors, such as Feast (2010) and Souleles (2017). "Today, design education is changing from the object-centered master-apprentice model of the guild tradition to the theory-driven problem-solving approach characteristic of a university discipline." (Feast, 2010, pg. 4) Design practice can only be considered genuinely interdisciplinary when the work integrates several disciplinary insights, facing their disagreements but sharing the same goal of generating new knowledge, while still considering those differences as part of it (Kaye, 2009). An important aspect of design, and therefore of UI design, is the notion of design as a process, opposing the object-centered approach concerned with the superficial aspects of the object's aesthetics.
Acknowledging design as a process consists of several key decisions that act on multiple layers and go hand in hand with the human-centered approach, which reinforces the necessity of including different areas of knowledge, collaborating with these actors, and reinforcing interdisciplinarity (Feast, 2010). The prior considerations match what Harrison et al. (2011) call the 3rd Paradigm of HCI. Context and embodiment are key concepts of this paradigm; interaction has the goal of supporting situated action in the world, "and the questions that arise revolve around how to complement formalized computational representations and actions with the rich, complex, and messy situations at hand around them." (Harrison et al., 2011, pg. 9). Based on the exposed epistemological position, the present work approaches our research problems from the perspective of the 3rd paradigm of HCI, sharing a common understanding of the most prominent properties of interaction. Paradigms, in general, typically also offer: the types of questions that appear to be both interesting and answerable about those properties of interaction; the procedures that can be used to provide legitimate answers to those questions; and the common understanding of how to interpret the results of these procedures. Of course, there is no single set of correct methods, techniques, data analysis methodologies, and so on. However, there are plentiful appropriate choices that, in alignment with epistemological and paradigmatic perspectives, gain strength and produce better outcomes and more significant insights.

3.1. A CASE FOR QUALITATIVE RESEARCH

Discussions about the differences between adopting a qualitative or a quantitative approach basically concentrate on two points (Becker, 1996). First, the decision regarding which methods are to be employed affects the possibility of generalizations that can be claimed.
What is more, besides generalizations, quantitative researchers are expected to provide explanations based on logic and variables that reveal cause and effect. In contrast, in the qualitative approach, researchers are normally concerned with providing neither causal explanations nor predictions. Instead, they are interested in unveiling the underlying mechanisms and structures. The explanations are descriptive and contextually situated with respect to the research object's role, place, or meaning (Jensen, 2013). The second point that contrasts the qualitative and the quantitative approaches is the way data are collected and managed. Methods that quantify data will likely inform the researchers in advance about the type, nature, and other aspects of the information they can acquire. Qualitative researchers likewise plan their data collection, but will unfailingly face a variety of data in amounts they cannot anticipate. These rich data potentially provide directions for further investigations or new paths of inquiry. This is because researchers in the field cannot isolate themselves from the context or detach themselves from the experience (Becker, 1996). In the HCI field, the focus on the task is starting to be seen as insufficient to deliver good design. Additionally, there is a growing need to perceive the impacts of usability issues on users through a subjective and collective lens (Adams et al., 2008). Qualitative research has been shown to be a suitable strategy for these problems since it "examines the qualities of a particular technology and how people use it in their lives, how they think about it and how they feel about it." (pg. 3) The social sciences offer numerous approaches based on qualitative research that vary according to the topic studied, how it can be examined, and the goals set for the work.
For HCI, shifting the focus implies learning users' emotional and social drives and perspectives: their motivations, expectations, trust, identity, social norms, and so on. It also "means relating these concepts to work practices, communities, and organizational, social structures" (Adams et al., 2008). According to these authors, HCI researchers are already taking this path toward the more qualitative perspective that HCI needs.

4. STUDY I: ONBOARDING DESIGN IN CITIZEN SCIENCE APPS

Descriptive in character, Study I aimed to unveil the current onboarding strategies and design aspects of a few Cit Sci apps. We analyzed fourteen Cit Sci mobile apps available for download in the US Apple App Store. Based on the reviewed literature from academic and industry backgrounds, a set of aspects that describe, characterize, and identify essential features was elaborated to examine the strategies and design elements present in each app. Departing from the four basic elements (Statement of Purpose (SoP), User Identification (UId), Informational Support (InS), and Conversion Event (CnE)), the criteria included six aspects, as described in Section 4.3, Evaluation Criteria.

4.1. GOALS AND LIMITATIONS

This preliminary study provided an overview of how a sample of current Cit Sci apps onboard their new users. The results revealed the variety of onboarding setups and strategies currently employed in this realm. The limitations of this study concentrate on the number of apps selected and the evaluation method. Although an extensive online search was conducted that included app stores, academic works, other publications, Cit Sci portals such as SciStarter, websites, and online communities, only fourteen apps matched the adopted criteria: 1) to fit the contributory project category; 2) to offer an iPhone app; 3) and to be available and functioning at the time of the study.
A complete explanation of the selection process is described in the next section, 4.2.

4.2. SELECTION OF PLATFORMS

For this study, twenty Cit Sci initiatives were initially selected, but six were not available at the time of the analysis. The unavailability was due to common issues, e.g., apps not being active anymore or not being available for download. Other apps were available but crashed, preventing us from carrying on with their use. Based on Shirk and colleagues' models for Public Participation in Scientific Research (Shirk et al., 2012), we chose to focus on currently active Cit Sci projects that could be classified as contributory. Projects in this category are typically proposed by professional researchers or scientists, who define the research questions and the study, while participants contribute mostly by collecting samples or recording data. This initial selection resulted in fourteen apps. Therefore, we considered apps that:
o Fit the contributory project category (Shirk et al., 2012);
o Offer a compatible app for iPhone to be downloaded from the US Apple App Store;
o Allow users to join by collecting and sending data (e.g., observations, reports) via their mobile devices.
Since we are investigating how Cit Sci initiatives onboard their new users, apps intended to mediate participation in contributory projects are more likely to feature all the onboarding stages necessary to introduce newcomers to the community and its tasks. Contributory projects that work through apps or participatory sensing (Goldman et al., 2009), according to Haklay's typology (2013), present an interesting use case for our study, in which volunteers are often out in the field, in nature, or simply "on the go," feeling motivated to contribute and ready to collect data. In this mobile context, the onboarding can happen virtually anywhere at any time, which adds a layer of complexity for designers and project managers.
mPING (Meteorological Phenomena Identification Near the Ground)
Development: National Oceanic & Atmospheric Administration (NOAA), National Severe Storms Laboratory (NSSL), University of Oklahoma, Cooperative Institute for Mesoscale Meteorological Studies.
Goals: Collects public weather reports through the free app. Reports are used by the NOAA National Weather Service to fine-tune its forecasts. NSSL uses the data in a variety of ways, including to develop new radar and forecasting technologies and techniques.

Marine Debris Tracker
Development: NOAA Marine Debris Program, Southeast Atlantic Marine Debris Initiative (SEA-MDI) from the University of Georgia.
Goals: Spreads awareness of marine debris. Report marine debris or litter found in coastal areas.

SatCam
Development: Space Science and Engineering Center at the University of Wisconsin-Madison.
Goals: Capture observations of sky and ground conditions through the app. Observations help to check the quality of the cloud products created from the satellite data.

MISIN (Midwest Invasive Species Network)
Development: Michigan State University Department of Entomology, Laboratory for Applied Spatial Ecology and Technical Services.
Goals: Assist both experts and citizen scientists in the detection and identification of invasive species.

Shrimp Black Gill Tracker
Development: Georgia Sea Grant and the University of Georgia Skidaway Institute of Oceanography.
Goals: Allows shrimpers to submit information about the catch to researchers studying the shrimp black gill problem.

eBird
Development: Cornell Lab of Ornithology and Audubon.
Goals: To gather birdwatchers' information in the form of checklists of birds, archive it, and share it to power new data-driven approaches to science, conservation, and education.

Merlin Bird ID
Development: Caltech, Cornell Tech, and Cornell Lab of Ornithology.
Goals: Designed to be a birding coach for beginning and intermediate bird watchers. Birdwatchers respond to questions and get help identifying birds from their phones.

iNaturalist
Development: UC Berkeley's School of Information Studies and California Academy of Sciences.
Goals: A crowdsourced species identification system and an organism occurrence recording tool. Observations are shared with scientific data repositories to help scientists find and use submitted data.

NatureNet
Development: University of North Carolina at Charlotte, University of Boulder, University of Maryland College Park.
Goals: Allows users to submit nature information and pictures for a specific project or topic, or to create their own project.

Hummingbirds@Home
Development: National Audubon Society.
Goals: To report sightings, share photos, and learn more about these birds. Reports help scientists understand how climate change, flowering patterns, and feeding by people are impacting hummingbirds.

Bee-friend Your Garden
Development: Earthwatch Institute, Waitrose, and the Crown Estate.
Goals: Monitor the numbers and types of insects seen on the bushes and flowers in individual gardens around once a week. The data will become part of a project at the University of Sussex about pollinators and the plants that they are attracted to.

HerpMapper
Development: HerpMapper, a 501(c)(3) nonprofit organization.
Goals: Gather and share information about reptile and amphibian observations. Shared data is available to partners: biologists working for state or regional agencies, university researchers, or conservation organizations.

Secchi Disk
Development: Secchi Disk Foundation and Plymouth University's Marine Institute, UK.
Goals: Measuring the Secchi Depth can help map the ocean's phytoplankton, the concentrations of which have declined due to rising sea surface temperatures as a consequence of current climate change.

Globe Observer (Globe Clouds)
Development: NASA's Global Learning and Observations to Benefit the Environment (GLOBE) Program.
Goals: Allows users to photograph clouds and record sky observations and compare them with NASA satellite images.

Table 3: List of selected contributory iPhone Cit Sci apps

4.3.
EVALUATION CRITERIA

Given that the goal of this study was to provide a general outlook of the onboarding situation in Cit Sci apps, an inspection method was adopted: an expert review with a few adaptations. The study was run by the author, a UX specialist and first-time user of each app. The inspection was conducted similarly to a walkthrough, in which we reproduced the user's first contact with the app in order to make a contribution as required by the tool. It consisted of: launching the app for the first time, exploring it, examining the journey step by step, and observing the sequence of tasks, demands, and features. During the inspection, screenshots of each step were taken and stored for future consultation. We wanted to reveal practices and common elements among the various apps rather than judge the onboarding designs against usability heuristics. Consequently, we adapted the review to reflect specific onboarding criteria defined ad hoc based on the literature. While a classic expert review would result in specific outputs, such as severity ratings of the problems and recommendations (J. Nielsen, 1994; J. Nielsen et al., 1994), this work summarized the points relevant to first-time interactions. As pointed out previously, based on the literature background, a set of criteria was developed using the four main elements of onboarding as a framework to learn the current practices. The six criteria below were used to assess each app:
1) The presence or absence of each onboarding element (SoP, UId, InS, CnE);
2) Whether and which design patterns were present;
3) Whether a general approach could be identified;
4) Whether the elements were part of an embedded flow;
5) Whether account creation (UId) was mandatory to use the app;
6) Whether the contribution (CnE) was designed as a guided set of tasks.
Notes from each app were organized separately in tables containing the six criteria responses, annotations, and observations.

4.4.
FINDINGS

We identified that our sample of Cit Sci apps employs diverse onboarding formats. Nevertheless, we found a few common denominators, organized as follows:

General Onboarding Flow: Eight out of the fourteen apps presented a first-access flow that guided newcomers through the tasks toward contribution: SatCam, Merlin Bird ID, iNaturalist, Hummingbirds@Home, Bee-friend Your Garden, Secchi Disk, and Globe Observer. However, it is important to highlight that, at this point, we could not infer whether the apps truly offered an onboarding process, complete and in compliance with our onboarding definitions and structure. Hence, this criterion was meant to assess the presence of even a rudimentary flow that guided users through the various steps, whether intentionally designed or not.

Statement of Purpose (SoP): Only a minority of apps (Shrimp Black Gill Tracker, Marine Debris Tracker, and HerpMapper) did not present the project's statement or community mission in-app. The Shrimp Black Gill Tracker app is currently a pilot project1, so the lack of SoP might change in the future. However, because it targets a particular community, the shrimp fishers participating in this project, we assumed they were informed by other means. Marine Debris Tracker and HerpMapper communicated their respective goals and missions on their websites. Of the eleven apps that offered a statement of purpose, almost half (SatCam, iNaturalist, Bee-friend Your Garden, Secchi Disk, and Globe Observer) included it as a step the users need to go through, presented as part of a flow. The remaining apps included information regarding their goals, yet it was not visible throughout the contribution journey, leaving users the task of finding it. At this point, we were looking for cues that reveal whether the apps included projects' goals and purposes anywhere inside the app, even if not designed as a formal onboarding element.
Users' Identification (UId): Most apps (ten) required registration, but not upfront; they let users explore and even start to contribute before registering, only requesting sign-in right before the submission (called deferred signup or delayed registration). Most apps did not employ any gamification features. mPING and Bee-friend Your Garden contributions were completely anonymous. The inclination to let users register later in the process led us to believe that it might be interesting to further examine usage and sign-in numbers of each app to see how this setup affects users' paths and dropouts.

Informational Support (InS): A reactive-help approach was the case for the majority of apps in our sample, in which the content was offered but hidden, displayed as lengthy instruction pages using text-heavy UIs with descriptions of how to use the app. Yet, as seen in other communities and commercial apps, alternate strategies could be implemented at this stage that might better prepare users to collect quality data and avoid abandonment. It is important to note that this informational support should be available and accessible throughout the whole interaction, whenever users face usability issues.

Conversion Event (CnE): Support and feedback were not always available at the time of the contribution; so, if users had difficulties during the data submission, little guidance or help resources were available. Most contributions consisted of six or more actions or microtasks to be completed, such as inputting names, selecting checkboxes, or filling out form fields, in order to submit to the community.

1 Johnsen, personal communication, August 20, 2017.

4.5. DISCUSSION

The findings of this study reveal current design practices in the Cit Sci field.
Although not extensive, our sample concentrated on mobile Cit Sci apps and, deriving criteria from the onboarding framework previously established (see Chapter 2.5, pg. 24), the outcomes endorse what we speculated: Cit Sci apps are running behind in terms of onboarding practices, indicating an opportunity to improve this aspect of their designs. According to Smith and Gallicano (2015), volunteering behavior is itself an engagement behavior that creates social capital. A few factors should be present so that volunteers get involved and engage: connection, sense of presence, interactivity, and interest in the activity. In Cit Sci, curiously, this first-time experience of a volunteer might embody a decisive aspect of member acquisition, so vital for such communities, which have nonetheless neglected this step of the interaction. Onboarding steps embedded in the app's design hold such adoption potential during a first-time interaction that onboarding becomes an essential process for crowdsourcing apps that depend on users' input and participation. The first interaction might be a unique opportunity to gain newcomers who eventually turn into participants. Unlike commercial apps or online subscription services, in which users give information or money in exchange for a tangible benefit such as using a service or accessing content, Cit Sci communities have neither visible nor tangible benefits that are easily perceived before committing. Besides making onboarding indispensable for Cit Sci apps, this characteristic alone demonstrates how distinctly onboarding must be thought out, designed, and conceived with Cit Sci and similar crowdsourcing platforms in focus. Furthermore, the lack of substantial rewards and the reliance on personal interest make designing onboarding processes for Cit Sci different from designing onboarding for any other type of app ruled by a different adoption logic.
It is not a common practice in this field to offer financial rewards or any other compensation to participants for their effort and work. Initial participation is mainly regulated by the volunteers' intrinsic motivation, mainly personal interest, which can suffer from interaction and usability problems and the absence of informational content and training in such technological projects (Rotman et al., 2014). External motivations can also affect participation and sustain it. However, the type of rewards expected by Cit Sci contributors seems to differ from other apps, especially commercial ones. Extrinsic motivations are illustrated by material incentives, public recognition, and the involvement of personal acquaintances. Furthermore, recent research has shown that specific strata of Cit Sci participants are differentially motivated to engage in such projects (Cunha et al., 2017), speaking directly to the design of these technologies, which should target various groups of the public in research projects. Based on the review we conducted, it is clear that most apps are not embracing any noticeable strategy, nor do they seem to be addressing intrinsic motivations. We can confirm that based on a few aspects noted, such as: 1) The fact that only a minority of the apps analyzed display the SoP to newcomers as part of the flow when using the app for the first time reveals a lost opportunity to inform, persuade, or reinforce some of the interests and motivations newcomers bring. Altruistic motives can be addressed by informing users about the project's goals and how collected data will be used, shared, and, ultimately, contribute to the greater end. In addition, showing how participants are recognized, whether there is a ranking of contributors, or indicating possible knowledge gains, among others, can be addressed upfront, creating a rapport and articulating users' eventual extrinsic motivations (Lakomy et al., 2020).
2) A reactive approach to offering informational support for newcomers prevails. Whether arbitrarily designed or not, the content on how to contribute and use features is present but not visible or readily accessible, as instructions are often displayed as lengthy direction pages using text-heavy UIs. The lack of directions, when apps do not present a self-explanatory interface, might hurt newcomers' sense of self-efficacy (Heller & Kitsantas, 2016), as it decreases the chances of success in executing tasks for the first time. Regarding the presence of these and other elements across the different apps in our data, we could not identify any robust onboarding design, structure, or recurrent strategies. The elements cited here are UI parts that could be classified as typical onboarding components or resources, such as elements that act as InS pieces and present information about the app's mechanics, features, and functions, which can assume multiple formats: tooltips, inline hints, pop-up windows, quick tips, and explanations. Therefore, this expert review does not look for formal onboarding processes containing every element we list in Chapter 2.5. Instead, we scrutinize every UI element and interaction attribute and categorize whether they fit into an onboarding process, perform any onboarding functions, and fit into any of the four elements (SoP, UId, InS, and CnE). Our data revealed heterogeneity in how onboarding is designed and operated in practice. In sum, we could not find general onboarding strategies used consistently in the Cit Sci domain. In our data, it is possible to notice some commonalities regarding particular elements, primarily unfavorable ones, such as providing the SoP (i.e., Statement of Purpose) hidden somewhere in the app, leaving it to the users to search for and find it. Another trend regards the InS (i.e., Informational Support), which is presented in instructions or help pages, heavily adopting a reactive approach.
Guidance on participation, instructions, and helpful information, including the project's purpose, are not exposed or easily accessible. We speculate that these tendencies seen across the sampled apps are most certainly not employed deliberately or knowingly by research teams, which are probably not aware of the harmful effects such approaches can have on user adoption. Furthermore, because specific onboarding design principles or proprietary heuristics are yet to be established, tested, and broadly adopted in the Cit Sci domain, it is difficult for design or research teams to find precise guidance on the best options and strategies when designing onboarding. Conversely, based on our literature review, designers in the industry seem to be combining off-the-shelf techniques or isolated methods to build onboarding strategies for different products and audiences. Since general onboarding tips and guides for commercial purposes abound online, designers can experiment and test what works best for their products.

4.6. CONCLUSION

Based on a sample of existing Cit Sci apps, our findings reveal, in detail, what newcomers face when using those apps for the first time. An important factor when analyzing this sample is to consider that most current Cit Sci apps, if not all, originate in an academic environment that lacks professional assistance throughout their development, from the back end to the UI design. Moreover, this characteristic dictates considerably what is seen in the final design of such apps, which is typically far from optimal in terms of features, UI, and visual design, especially when compared with commercial apps largely adopted by the public. Of course, such development and design limitations are due to various reasons (financial, staff, and scope), which makes it unfair to judge these apps against the quality and resources employed by popular apps owned by technology giants that offer commercially successful products.
Nevertheless, design teams inside companies constantly test, modify, and improve commercial apps in iterative processes that provide opportunities for changes and optimizations usually not within reach for most Cit Sci programs. While this pilot study provides a relevant overview of how current Cit Sci initiatives onboard their users and what practices are being adopted in the apps' UX, we acknowledge the necessity of collecting firsthand information and observing how potential users experience these apps and their impressions of different onboarding setups. Informed by that, the second study is presented next: a user study featuring structured interviews with participants who used a sample of the previously selected apps.

5. STUDY II: CITIZEN SCIENCE APPS USER STUDY

This study focuses on analyzing users' attitudes towards Cit Sci apps when using them for the first time. The selection criteria are explained in Section 5.2. The goal is to closely examine how users' attributes interact with the system characteristics articulated by the onboarding design of each app, and to reveal how the characteristics of the onboarding and the set of design decisions built into the apps influence users' engagement and first-time experience. This study collected data during individual user sessions in three instances (described in Section 5.5, "Instruments"): observing and taking notes on how people interacted with Cit Sci apps for the first time; asking them a few questions afterwards in a semi-structured short interview; followed by an on-site survey to evaluate their user experience through a questionnaire (adapted from the SUS questionnaire). We also collected demographic data related to gender, age, and nationality through a prequestionnaire (Survey 1).

5.1. GOALS AND LIMITATIONS

This user study aims to reveal challenging points and design barriers users might encounter in the process of onboarding onto a new Cit Sci app for the first time and to explore how these obstacles might affect their willingness to complete the joining flow, place a contribution, and come back to use the app in the future. To provide responses to these questions, we set up a usability lab to observe participants using Cit Sci apps for the first time, asking them to join the project, explore the app, and try to make a contribution. Participants then answered a few questions in a semi-structured interview after the session about the app use and how engaging the experience was. Although the data collection was carefully prepared, as we strived for the minimum interference and intrusiveness that might affect users' individual experiences with each app, the limitations of this study lie in two main points. The first is the lack of control over each volunteer's personal interest in the various apps in the sample. Cit Sci initiatives are, a priori, intended to be adopted and embraced by the largest possible number of volunteers, with various backgrounds and interests, who encounter the app for different reasons and, again, for different reasons, might or might not engage with the project. Most Cit Sci apps available in the US Apple App Store are completely free for anyone to download and start using. An unanswered question is whether previous interest in the project's theme or cause may influence technology adoption, increasing the chances of a newcomer enduring a bad UX if they are willing to participate. In our study, participants were recruited without considering personal interest in the selected apps. The second limitation is the artificiality of the sessions' setting.
Because we were interested in witnessing the first experience with a new app, users were brought to a meeting room where they could be observed interacting with the app. The first-time UX can happen virtually anywhere: outdoors, at home, while traveling, during a commute, and so on. It can be inspired by someone's recommendation, advertising, literature, social media announcements, and the like. These facts led us to set up a scenario where the mediator would start the session by saying, "Let's say a friend told you about this app, and you decided to check it out later. You went ahead and downloaded it on your phone, and now you're going to launch it for the first time." The participants were also guided to feel comfortable and act naturally, as if they were by themselves, with no need or pressure to complete tasks or use the app for any predetermined time. This statement, made at the beginning of the session, is extremely important to remind the users that they are trying something for the first time but are not being judged or evaluated, and that natural reactions are expected. Additionally, we made clear that if they felt like abandoning the app because it was uninteresting, difficult, or just not engaging, they should feel comfortable dropping out at any time, as they would in a real situation with nobody watching.

5.2. SELECTION OF PLATFORMS AND PARTICIPANTS

The app selection was based on the previous study sample we had assembled, containing fourteen apps in total. From there, given the type of observational study chosen, a few apps were impractical for us to test. Table 4 shows which apps were included or not in this study. Previously, a pilot study with two participants exposed that the number of apps per session had to be adjusted. We had been aiming to have all eight apps tested by each participant in individual sessions taking up to thirty minutes.
The pilot exposed that the total time per app, including responding to the prequestionnaire, using the app, responding to the contextual interview, and responding to the postquestionnaire, could take participants from seven to twelve minutes; consequently, repeating this process for several apps would make the session unfeasible due to participant fatigue and prolonged duration. The solution was to adjust the number of apps to be used by each participant in one session.

mPING (Meteorological Phenomena Identification Near the Ground): Included. Fitted the contributory project category (Shirk et al., 2012). Allowed users to contribute by reporting data regarding weather in their locations via an iPhone.

Marine Debris Tracker: Included. Fitted the contributory project category (Shirk et al., 2012). Allowed users to contribute by reporting data on litter and debris near the water via an iPhone.

SatCam: Included. Fitted the contributory project category (Shirk et al., 2012). Allowed users to contribute by sending a "live" picture of the sky via an iPhone.

MISIN (Midwest Invasive Species Network): Dismissed. This app was not included due to its geographical limitation to the north-central states of the United States. Since we were running the study in New York City, participants would not be able to report realistic data.

Shrimp Black Gill Tracker: Dismissed. It was intended for shrimp boat fishers and recreational shrimpers to help document the extent of black gill throughout the shrimp season. Since the checklist asks for an extensive, detailed report on freshly caught shrimp, relying on shrimpers' skills to identify all the necessary aspects of the disease, our participants would not meet these requirements.

eBird: Included. Fitted the contributory project category (Shirk et al., 2012). Allowed users to contribute by reporting data on birds found in their locations via an iPhone.

Merlin Bird ID: Dismissed. As a field guide, it drew upon data from the eBird Cit Sci project, while the identification results received from the users are not gathered and used to inform a project. Still, Merlin Bird ID can be seen as a Cit Sci tool for informal learning.

Hummingbirds@Home: Dismissed. This initiative requires volunteers to have access to a garden, backyard, or outdoor space where they can constantly survey for hummingbirds, sources of nectar, and feeding events, called a patch survey. Due to the difficulties of finding participants able to perform such activities, we opted not to include this app in our study.

Bee-friend Your Garden: Dismissed. It asked volunteers to monitor the numbers and types of insects seen on the bushes and flowers in individual gardens around once a week. Due to the difficulties of asking participants to perform such activities, we opted not to include it.

Secchi Disk: Dismissed. It required volunteers to buy or build their own Secchi disk and use it in the ocean to take measurements and report on plankton. Due to the difficulties of taking participants to perform such activities, we opted not to include this app in our study.

Globe Observer (Globe Clouds), iNaturalist, NatureNet, and HerpMapper: Initially included and later dismissed. They fitted our criteria and were initially selected. However, later in the data collection phase, these last four randomly listed apps were put on standby for future research, since the first four filled our needs.

Table 4: Included and excluded apps in Study II

The apps used in the study were mPing, SatCam, Marine Debris Tracker, and eBird. It is important to mention, though, that they were arbitrarily chosen.

5.3. USE CASE SCENARIOS

Most of the data collection activities proposed by those apps are intended to be conducted outdoors, often in specific types of locations. The Marine Debris app, for instance, asks specifically for reports that take place in coastal regions, such as beaches.
Considering that mobile apps can be installed and used anywhere, new users could be accessing an app for the first time anywhere as well, including at the site where they plan to initiate participation and collect data (e.g., in a park where they plan to bird watch); in another scenario, users could install and launch the app at home or school and access it again when it is time to report sightings or pictures. Given the unpredictability of the contexts and locations where users download and use an app for the first time, we created different scenarios for each app (Table 5), disclosed before each user session, compatible with the type of contribution required. The scenario facilitates how users would see themselves hypothetically using each app in their daily lives.

Marine Debris Tracker. Type of contribution: debris items count, found on the sand/shore/ground; item category, number, location; optionally, photograph and description. Scenario: "Imagine you are taking a walk on your favorite beach during the summer, or we are at the Hudson River, walking by the waterfront." Three photographs of debris on beaches (ropes, nets, plastic remains) were shown to the participant, who could pick any one and report what is seen. Plastic cups and bottles were also available as suggestions. Users could take pictures of the images or of the actual litter we offered.

eBird. Type of contribution: bird sightings checklist; number of birds, duration, location, distance, observation type (traveling, stationary, etc.). Scenario: "Imagine you have visited Central Park this morning and spotted a few birds during a walk." Three photographs of common species found in New York City and Central Park were shown, and the participant could pick any or all to report.

mPing and SatCam. Type of contribution: for mPing, a current weather report at the time of the app use; for SatCam, a picture of the sky to report cloud formations at the time of the satellite passage. These apps did not require any additional props, suggestions, or objects to be used in the contribution. Users had access to windows and an outdoor area from which the sky was visible.

Table 5: List of scenarios elaborated for each app.

5.4. SUBJECTS RECRUITMENT AND DEMOGRAPHICS

Volunteers were recruited via posts on local online communities (e.g., Facebook and Meetup) whose main topics are nature and biodiversity interests. Since the response rate was low at the beginning of the period, convenience sampling and snowball sampling (Wilson, 2013) were used, and we were able to recruit ten participants in the New York City area. Participants were diverse, fifteen males and nine females, originally from the United States, Brazil, and Germany, with ages ranging from twenty-five to forty-four, all with a bachelor's or higher degree. It is well known that typical Cit Sci volunteers in the U.S. are middle-aged Caucasian people (primarily males) who have achieved at least a college education (Herodotou et al., 2018; Preece, 2016). On the one hand, this profile became common for several reasons, such as having free time (retirees), being somewhat interested in science, having access to innovative technology and devices, and other cultural aspects. On the other hand, the call for younger, generally more diverse participants has influenced the discourse of many researchers recently (Newman et al., 2012; Toerpe, 2013). We broadened our public with that in mind, since there was no reason to resemble the Cit Sci audience perfectly. Moreover, most Cit Sci apps are available to anyone curious, and it is certainly in their interest to increase participation from a diverse population, which requires an app accessible to everyone. Therefore, almost every volunteer who applied for the study met the two requirements: not being an expert in Cit Sci and not having used any of the selected apps before.
The rationale behind these requirements is that we were interested in observing the first time someone came in contact with the selected apps. Hence, we assumed Cit Sci experts would likely be more accustomed to the mechanics of some of those apps.

5.5. INSTRUMENTS

For this study, we applied a prequestionnaire designed to gather demographics and Cit Sci background from participants; a short contextual, semi-structured interview; and a structured postquestionnaire aimed at evaluating the onboarding experience.

Contextual Interview

The category of interviews employed, usually qualitative, is the semi-structured interview (McIntosh & Morse, 2012; Merton, Fiske, & Kendall, 1990; Richards & Morse, 2007). These interviews consist of a question stem, to which the participants may respond freely. Probing questions, planned or arising from the participants' responses, may be asked. The contextual, semi-structured interview was conducted right after observing the participants interacting with each app, as soon as they ended the interaction. The interview guide consisted of five questions (Table 6) that relied on subjects' opinions about each of the apps and addressed their personal opinions and intention of using them again.

Semi-Structured Interview Questions
o What is your overall opinion about app #? (Describe)
o What did you like the most in the app, and what would you like to be different?
o How compelling was the app for you?
o Were you interested in the project's goals? (Why?)
o Would you consider using it during your free time? (Why?)
Table 6: Semi-Structured Interview Questions

We created a one-page template (Figure 1) containing the fields to be completed during each session, including date and time and the user's randomly assigned number. The one-page document was divided into two sections: the five questions on one side, and space for notes and observations on the other.
They were printed, one for each user, for each app, and served as a standardized form to guide the interview and record the data collected.

Figure 1: A sample of the printed template used for interviews and field notes.

Post-Questionnaire

The post-questionnaire items were based on similar instruments found in the HCI literature, such as system usability scale (SUS) questionnaires (Brooke, 1996) and the heuristics of J. Nielsen et al. (1994), as a starting point, then modified to assess the particularities of the onboarding process, reflecting the stages identified and described at the beginning of this work. An initial version of the questions was elaborated and later refined, resulting in five sets of statements assessing the different constructs of onboarding. Four questions had to be tailored to address different app features, as shown in Table 7. One example was asking about the registration step, which in some apps was nonexistent. Therefore, we adapted the questions to reflect the individual designs of each app. All questions are presented below, organized by the onboarding constructs and the apps to which they were applied.

Statements related to SoP. Applied to all apps:
o The purpose of the Cit Sci project was clear.
o I've got excited about the opportunity of contributing to this Cit Sci project.

Statements related to UId. Cases where no registration is required (applied to: mPing):
o I liked being able to contribute without registering/anonymously.
o I would not mind registering in exchange to track my participation.
Cases where the registration is mandatory (applied to: SatCam, eBird):
o I consider being asked to register up front a nuisance.
o I feel like the registration process worked as a barrier.
o The registration process was time consuming.
o I don't see a reason for registering.
Cases where a registration step is present but not mandatory (applied to: Marine Debris Tracker):
o If registration were mandatory, that would be annoying.
o I liked being able to contribute without registering/anonymously.
o I would not mind registering in exchange to track my participation.

Statements related to InS. Applied to all apps:
o It was easy to start using the app.
o It was clear how I could start contributing.
Apps that offer any sort of instructions, information, or guidance (applied to: SatCam, eBird, mPing):
o Instructions to make contributions were helpful.
Apps that do not offer instructions (applied to: Marine Debris Tracker):
o Instructions/guidance were missing.

Statements related to the CnE (Contribution). Applied to all apps:
o The importance of my participation is clear to me.
o Placing a contribution meant a lot of work.
o Placing a contribution was easy.

Statements related to the general experience and reengagement. Applied to all apps:
o I'm interested in using this app again.
o I would like to receive updates and news about this app.
o I might join other Cit Sci platforms in the future.
Applied only to eBird and mPing, respectively:
o I might use this app to learn which birds others are reporting.
o I might use this app to learn what others are reporting on the weather.
Applied to all apps, one open-ended question:
o In my opinion, the main drawback of this app is...

Table 7: Post-questionnaire items organized by the onboarding constructs they address and the apps to which they were applied.

In the survey, all questions, apart from the open-ended one, required five-point Likert-style responses ranging from 1 = "strongly disagree" to 5 = "strongly agree".
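To illustrate how item responses can be rolled up into construct-level scores, here is a minimal Python sketch; the item keys, the mapping, and the responses are hypothetical placeholders for illustration, not the study's actual instrument or data:

```python
from statistics import mean

# Hypothetical mapping of questionnaire items to onboarding constructs
CONSTRUCTS = {
    "SoP": ["purpose_clear", "excited_to_contribute"],
    "InS": ["easy_to_start", "clear_how_to_contribute"],
    "CnE": ["importance_clear", "contribution_easy"],
}

# Fabricated five-point Likert responses from one participant
responses = {
    "purpose_clear": 4, "excited_to_contribute": 5,
    "easy_to_start": 3, "clear_how_to_contribute": 2,
    "importance_clear": 4, "contribution_easy": 3,
}

# Mean score per construct across that construct's items
construct_scores = {
    name: mean(responses[item] for item in items)
    for name, items in CONSTRUCTS.items()
}
print(construct_scores)  # {'SoP': 4.5, 'InS': 2.5, 'CnE': 3.5}
```

Aggregating item means per construct in this way is one common, simple choice; weighted or reverse-scored items would require adjusting the mapping accordingly.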
Questions were elaborated as statements which, combined, aimed to cover the four constructs (elements of onboarding) and a fifth construct, the overall experience with the app: 1) purpose and goals, indicating the SoP construct; 2) registration, denoting the UId construct; 3) informational support and guidance, representing the InS construct; 4) effective contribution, meaning the CnE construct; and 5) overall experience and retention. For the SatCam app, the questionnaire asked the users sixteen questions to evaluate the five constructs aforementioned. Nine participants completed the survey; there were between six and nine responses for each survey item. The eBird app users answered seventeen questions: the same sixteen questions asked for the SatCam app, plus one question tailored specifically for this app, in which users were supposed to agree or disagree with "I might use this app to learn which birds others are reporting." This was due to the fact that eBird featured a social component, which was an important part of the app mechanics; other than reporting birds, users could use the app solely to discover what species are nearby, for example. The mPing app users answered the same fourteen questions as the others with four exceptions: the registration construct eliminated three of the original five questions because there is no registration process in this app. For the same reason mentioned for eBird, this app had one tailored question: "I might use this app to learn what others are reporting on the weather." The Marine Debris Tracker app asked the users the same thirteen questions as the mPing app, with two different items: "Instructions/guidance were missing" replaced one of the three items in the InS construct set. Since there were no social features available (except for a ranking with the names of the biggest contributors), as in the SatCam app, there was no need to include a question on that.
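The internal consistency of each construct's item set can be checked with Cronbach's alpha. As a minimal sketch, the standard formula (the ratio of summed item variances to the variance of the total score) can be implemented as follows; the response matrix here is fabricated for illustration and is not the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Fabricated example: six respondents, three items of one construct (1-5 Likert)
demo = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 4, 3],
    [1, 2, 1],
    [4, 4, 5],
])
alpha = cronbach_alpha(demo)
print(round(alpha, 2))  # 0.96
```

A value above the conventional 0.7 threshold, as in this fabricated example, would suggest the items hang together as one construct.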
To find whether these questions collapsed separately into the five constructs, a series of bivariate correlations and Cronbach's alpha tests were computed. Each set of questions had the appropriate statistics calculated (see Appendix X), resulting in significantly correlated sets of questions for every construct (Cronbach's alpha > 0.7), which indicated that the five constructs represented five distinct and unique criteria for all questionnaire versions (one set for each app). Some of the questions intentionally overlapped with topics included in the interview to allow the participants to express opinions informally and openly at both moments. However, the survey allows more privacy and freedom for participants to express their thoughts without feeling uncomfortable facing the interviewer. So, if by any chance a participant desired to give adverse opinions or feedback on the apps' designs, an anonymous survey could provide that opportunity. It is not uncommon for participants to feel constrained to communicate their dissatisfaction with or dislike of a product if they think the facilitator or interviewer is the product owner or a stakeholder in the project.

5.6. DATA COLLECTION

User sessions were scheduled at the beginning of March 2018, after a few weeks of recruitment, and lasted until the first week of April 2018. Through email and WhatsApp, we scheduled a date and time for each participant's individual session on different days. The individual sessions were held in a coworking office space in downtown Manhattan, NYC, where each subject was invited to a meeting room in which the study was set up. Subjects were offered a laptop on which they responded to the questionnaires and could also browse the respective Cit Sci project website before starting to use the corresponding app. The website was made available so that subjects would have the option of acquiring initial contextual information about each app if they desired.
An iPhone with the apps installed was available right next to the laptop, and participants were free to start using the apps. The participants were asked to bring their phones or computers in case they needed to register for any of the apps and get an e-mail confirmation, so they could easily use their personal e-mail accounts to log back into the apps. Still, the laptop was available for them to access their personal accounts if necessary. Each user was invited to start interacting with each Cit Sci app for the first time while we observed and took notes. Once they concluded their use of each app, that is, ended the interaction for whatever reason (e.g., felt they had explored enough, were not able to achieve any task or goal, or gave up), we then asked them a few questions in the form of a semi-structured short interview regarding the experience. Next, we invited them to respond to an on-site survey to evaluate their user experience through a questionnaire adapted from the SUS questionnaire. As described earlier in Table 5, the apps that required a scenario and/or previously taken pictures to be used in the interaction were explained and clarified before the users started the experiment. For example, for the Marine Debris Tracker app, we would let the users know that they could make up a list of debris to report or could use one of the three images of beaches and riverfront we had prepared beforehand to count the number of items seen in the pictures (Figure 2, extracted from Google Images). The equivalent was prepared for the eBird app, in which pictures of selected birds (Figure 3, extracted from Google Images) could be seen on the iPhone and on the laptop. SatCam required the users to approach the window and take a picture of the sky, while mPing asked them to look out the window or go outside to report the current weather conditions.

Figure 2: Images used as examples of debris found at beaches.

Figure 3: Bird pictures used during the sessions.

5.7. DATA ANALYSIS

This section is organized in the following order: first, we present the types of data collected during this part of the study; second, we expose the approach chosen to treat each group of data.

5.7.1. Types of Data

The participants' sessions resulted in an extensive and detailed data corpus comprised of four sets of data: a) Responses to the Prequestionnaire (Survey 1): demographics and Cit Sci background from a ten-question prequestionnaire; b) Semi-Structured Interview: answers from the contextual, semi-structured interview; c) Observations: annotations of any spoken sentences, doubts, complaints, and compliments, that is, all the verbal communication during the session. Notes also included participants' nonverbal behaviors, such as their posture, gestures, and facial expressions; d) Responses to the Postquestionnaire (Survey 2): Likert-scale scores from the survey responded in loco, including the answers to the last open-ended question.

5.7.2. Analysis Methods

Survey 1: Pre-Questionnaire

Survey 1 was built using Google Forms, so the responses were automatically compiled into a spreadsheet with the multiple-choice answers given by each participant. This data provided us an overview of the diverse backgrounds participants had and their experience with Cit Sci, plus volunteering work in general. It revealed that thirty percent of the users had previously participated in some kind of volunteerism, but none had voluntarily collected nature data for a scientific project during their free time, as in Cit Sci projects. Furthermore, seventy percent of the group had never heard of Cit Sci before.

Semi-Structured Interview and Observations

The answers from the semi-structured interview and the observations were transcribed from paper annotations into Google Docs files and organized in separate documents containing the user's number, date, and time, followed by the five answers in one column and observation notes in another.
Users were assigned random numbers, which were later used to identify each file referring to one user's session, constituted of observational and interview data. The individual transcriptions were saved as individual files, one for each session. These files were subsequently imported into the NVivo software, and we created an NVivo file for each app. Finally, once all the interview data and observation notes were reviewed and organized individually, we carried out a Qualitative Content Analysis (QCA) for each app, user by user, anonymously. QCA was the chosen method for analyzing the transcriptions from interviews and observations. This method is similar to many Thematic Analysis methodologies; nonetheless, it is a distinct approach to analyzing qualitative data. Moreover, the philosophical perspective and conduct, alongside elementary concepts such as theme and code, can also diverge among these approaches. According to Spannagel et al. (2005), QCA is a "qualitative oriented method that applies different techniques for a systematic analysis, mainly of text material gained, e.g., by interviews, diaries, observation protocols, or documents." (pg. 3). In a recent work, Virginia Braun and Victoria Clarke (2020), expert researchers in thematic analysis and qualitative research methods, reflect on and compare the various branches of thematic analysis and other pattern-based qualitative analytic approaches. QCA is addressed as a methodology very similar to Thematic Analysis (TA). Both TA and QCA, although historically applied in multiple forms and shapes, likely originated from the same stream of development of qualitative approaches to research. They share several characteristics, for instance, the use of the coding process and theme development, explicit and inferred meaning, and the centrality of researcher subjectivity (Braun & Clarke, 2020).
Although both methodologies were deemed appropriate for this study, since they agree on numerous analytical aspects, QCA allows the possibility of using deductive and inductive coding approaches, or a combination of both. We deemed it more interesting and suitable to adopt an inductive approach when analyzing observational material and interview responses for this work. QCA fits our epistemological underpinnings, sharing subjective characteristics and practices reflected in TA methodologies while supporting empirical assumptions reminiscent of a post-positivist or realist past. We believed that it is through the combination of such aspects that QCA could offer the necessary flexibility and suit HCI's context of research. On the one hand, QCA is often presented as atheoretical, and its results can be interpreted as shallow analyses or merely descriptive studies. On the other hand, we disagreed with this view and considered our approach to QCA theoretically flexible, with acknowledged assumptions, without reaching the other extreme of positioning as theory-determined (constructionist view) (Fryer, 2021). Qualitative content analysis goes beyond merely counting words to examining language intensively for the purpose of classifying large amounts of text into an efficient number of categories that represent similar meanings (Weber, 1990). These categories can represent either explicit communication or inferred communication [...] qualitative content analysis is defined as a research method for the subjective interpretation of the content of text data through the systematic classification process of coding and identifying themes or patterns. (Hsieh & Shannon, 2005, pg. 3) Therefore, QCA was selected to guide our data analysis for this portion of Study II. The following paragraphs describe our procedures while conducting the analysis.
Once all the rich data were coded into dozens of different nodes in the NVivo software, the search for the main themes started by organizing similar codes and comparing their definitions. We then proceeded to rank all codes by the number of times statements and ideas were conveyed in the users' data (Figure 6). Nevertheless, this analysis step was not supposed to be measured quantitatively; otherwise, the results might be compromised by an overly simplistic analysis. Looking at eBird's data, for instance, a code named "website's video" was created regarding the video users watched before using the app. Although mentioned several times, it did not change much of the users' perception or alter their experience. Furthermore, first-time users visiting the website and watching the intro video before downloading the app is a detached event, which might or might not happen, and designers should not count on it to help users grasp the project's meaning or importance. We considered it essential to look exhaustively at each code and its text fragments to check whether that topic was representative of the users' experience. For example, some topics or ideas might be too punctual or particular about how users handled a task or their attitude towards the app or the project's topic, such as the "social component" code. For instance, in eBird's sessions, not many users reported missing seeing others' contributions or pictures, or being able to socialize or interact with others through the app. Such a matter is relevant to collaborative communities and crowdsourcing projects; however, it did not represent a concern for most users. Duplicate or remarkably similar codes were merged into one code that communicated the very same meaning, such as the "app's purpose," "value," and "objectives" codes on mPing, which were later all grouped into one embracing theme, "Perception of Value." Figure 4: Coding process example carried out on NVivo.
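The merging and ranking steps described above can be sketched as a small script. This is an illustrative sketch only: the code names echo examples from the text, but the counts and the merge map are hypothetical, not the study's actual NVivo data.

```python
from collections import Counter

# Hypothetical code tallies: each entry maps a code name to the number
# of coded statements (the real tallies lived in NVivo nodes).
raw_codes = Counter({
    "app's purpose": 12, "value": 9, "objectives": 7,
    "website's video": 6, "lack of directions": 15,
})

# Duplicate or near-duplicate codes are merged into one embracing code,
# as done for mPing's "Perception of Value".
merge_map = {
    "app's purpose": "Perception of Value",
    "value": "Perception of Value",
    "objectives": "Perception of Value",
}

merged = Counter()
for code, count in raw_codes.items():
    merged[merge_map.get(code, code)] += count

# Rank by frequency, but only as a starting point: each code's text
# fragments are still read in full before deciding whether the topic is
# representative of the users' experience.
for code, count in merged.most_common():
    print(f"{code}: {count}")
```

Note that the frequency ranking is deliberately a heuristic for prioritizing attention, not a quantitative measure, in line with the caveat above.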
Next, once we had prioritized the most representative codes, mind maps were put together to visualize the final codes, moving them around, grouping, and categorizing them into new constructs or themes (Figure 7). This time, instead of merging related codes, we organized them under the same umbrella theme; for example, the "Lack of directions" and "Low accessibility" codes: the quotes reflecting a lack of guidance or instructions that led users to face difficulties and uncertainty jeopardized the general accessibility. Figure 5: Coding Process and Mind Map Construction. As previously stated, we adopted an inductive approach in order to find the most representative themes. However, another important part of this study was to identify where in the onboarding process users had more difficulties, or any relevant touchpoints. With that goal, we adopted a deductive strategy of situating the main themes back at the point in time when they occurred in the users' journey and searching for the app feature or interaction connected to them. Therefore, we placed the themes back into the timeline of events so we could visualize where, when, and how each one acted. In doing so, we could organize the themes by onboarding construct, non-exclusively, grouping themes deductively under the framework of the onboarding process. That allowed us to grasp each onboarding element's role, effect, or influence individually. Figure 6 illustrates the process in a simplified way: Figure 6: Qualitative analysis stages carried out. In a flexible yet comprehensive way, the goal was to grasp a high-level overview of the first use of each app and discover the main touchpoints and how important, appropriate, meaningful, and, especially, engaging they were for users. As a result, we believe QCA was an effective and proper method to identify codes, examine their meanings in the users' context, and surface the most relevant themes needing attention.
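The deductive step of situating themes within the onboarding framework can be pictured as a simple, non-exclusive grouping. The four constructs are the dissertation's own structural elements; the theme-to-element assignments below are purely illustrative, not the study's findings.

```python
# The four onboarding structural elements defined in this work.
ONBOARDING_ELEMENTS = [
    "Statement of Purpose", "User Identification",
    "Informational Support", "Conversion Event",
]

# Hypothetical mapping: each inductively derived theme is placed back
# at the point(s) in the user journey where it occurred. A theme may
# attach to more than one element (non-exclusive grouping).
theme_to_elements = {
    "Perception of Value": ["Statement of Purpose"],
    "Lack of directions": ["Informational Support"],
    "Registration as a Barrier": ["User Identification", "Conversion Event"],
}

# Invert the mapping to view, per onboarding element, which themes
# acted there and therefore which touchpoints need design attention.
by_element = {e: [] for e in ONBOARDING_ELEMENTS}
for theme, elements in theme_to_elements.items():
    for e in elements:
        by_element[e].append(theme)

for element in ONBOARDING_ELEMENTS:
    print(element, "->", by_element[element])
```

The non-exclusive grouping mirrors the analysis described above: a single theme can influence more than one stage of the onboarding journey, so a simple partition would lose information.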
Therefore, themes can ultimately help us learn how users respond to different design features and what works to engage new users. Although the present work's primary author filled the role of expert reviewer responsible for analyzing the data, identifying the key ideas, and formulating the codes, QCA methodologies affirm that it is necessary to fulfill criteria of reliability and validity. However, this approach was not adopted in this study. Braun and Clarke (2020) differentiate QCA from TA methodologies based on several contrasting points, one of them being the reliability and rigor criteria. Despite such positioning, (post) positivist theoretical assumptions are often imported into the analysis through the use of quality measures like calculating inter-coder agreement and a concern to minimize researcher subjectivity and maximize the "accuracy" of coding. (Braun & Clarke, 2020, pg. 4) Themes, developed from codes, are constructed at the intersection of the data, the researcher's subjectivity, theoretical and conceptual understanding, and training and experience. A dataset does not "hold" a single TA analysis within it. Multiple analyses are possible, but the researcher needs to decide on and develop the particular themes that work best for their project, recognizing that the aims and purpose of the analysis, and its theoretical and philosophical underpinnings, will delimit these possibilities to some extent. (Braun & Clarke, 2021, pg. 18) This work is committed to presenting valid and useful findings and providing insights for future research. However, the adoption of an external reviewer, or having the collected data coded by a second or third coder, would be unlikely to provide "better" data or "more reliable" themes, since we are not looking for consensus among different researchers. Therefore, calculating IRR would not benefit the findings.
We agree with Braun and Clarke (2021) when they point out that having an external coder look at the data, conceptualize other themes, and then "check" the accuracy of the codes and themes between two coders would not produce more accurate themes; it would only reveal whether the first set of themes (generated by reviewer A) matches the second set created by the external researcher (reviewer B). The various sets of themes and codes that could be generated by a team of coders should not be considered more reliable or valid than others; according to the authors, they are just different. For example, many quality criteria and standards include "member checking" or "participant validation" as a form of credibility check (e.g., Elliott et al., 1999; Morrow, 2005), in some cases without acknowledgement that this quality practice is not conceptually coherent with all forms of qualitative research (Reicher, 2000), or consideration of the practical and pragmatic challenges of implementing this practice. (Braun & Clarke, 2021, pg. 37) Therefore, the methodology adopted in this study did not carry out such measures and followed a few subjectivist and qualitative guidelines that stand up for the value of the evaluator's experience and expertise. Postquestionnaire: Survey 2 We treated the collected data using SPSS, calculating the internal consistency of the questionnaire. To test the hypotheses that each subset of questions collapsed uniquely into the five constructs, a series of bivariate correlations and Cronbach's Alpha tests were computed, whenever appropriate. Nineteen participants in total took part in this study and completed the survey. The analyses showed how strongly participants agreed or disagreed with the statements, revealing a few insights into how the audience sees each construct.
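The internal-consistency computation performed in SPSS follows the standard Cronbach's alpha formula. Here is a minimal sketch with made-up Likert responses, not the study's actual survey data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for one construct.

    items: list of k lists, each holding the n participants' scores
    for one Likert item of the construct.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance (n-1 denominator, as in SPSS)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Each participant's total score across the construct's items.
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Hypothetical 5-point Likert scores: 3 items x 4 participants.
construct_items = [
    [4, 5, 3, 4],
    [4, 4, 3, 5],
    [5, 5, 2, 4],
]
print(round(cronbach_alpha(construct_items), 3))  # prints 0.818
```

An alpha approaching 1 indicates that the items of a construct vary together, i.e., that the subset of questions plausibly measures a single underlying construct, which is the hypothesis being tested for each of the five subsets.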
The results indicated which aspects of the onboarding were working in favor of participation and engagement, and which ones were working as barriers to users' engagement, consequently jeopardizing the first experience. In the following sub-sections, 5.7.3, 5.7.4, 5.7.5, and 5.7.6, we present the analysis for each app separately: mPing, eBird, Marine Debris Tracker, and SatCam, respectively. 5.7.3. mPING app: Analysis and Discussion mPING stands for Meteorological Phenomena Identification Near the Ground; the project collects public weather reports through its app. Reports are used by NOAA's National Weather Service to fine-tune its forecasts. NSSL uses the data in a variety of ways, including developing new radar and forecasting technologies and techniques. Users allow the app to track their live location and then proceed to choose from a list of weather conditions and submit a report according to the current weather, wherever they are. User Interview and Observations Analysis As described previously, the mPing sessions were carried out with ten volunteers, resulting in observation notes and interview answers, later transcribed and inserted into NVivo for further analysis. Users were offered the chance to look at mPing's project website on the laptop available in the room. Most users started by skimming the website's homepage very briefly. The users' few comments complained about the lack of helpful information, or about information being irrelevant for first-time visitors. In NVivo, once the data was organized and a familiarization analysis was carried out, we started with 45 preliminary codes, which described interesting points in what users experienced. Following the methodology described in section "3.3.6 Data Analysis", we then proceeded to dive into the data to better identify and organize codes into meaningful groups.
The process of breaking down and untangling meaningful ideas from the data, then reorganizing, merging, and prioritizing the more representative codes, was carried out back and forth, exhaustively. This iterative effort resulted in eliminating nearly half of the codes found initially. We also elaborated a mind map to help clarify and organize the ideas during this process, as seen in Figure 7. A few examples illustrate this process. The code "Lack of directions" represents the users' complaints about the lack of orientation and clear objectives they should follow, e.g., "Where should I click? Is the weather report about today?" [User #04], especially on the app's homepage, which made them feel lost and unsure about what they were supposed to be doing there: "Why can't I interact with the map?" asked User #07. The code "Unclear CTAs" refers to users expressing frustration at not having a clear path in the app, trying to "...make sense of this app..." [User #03] and questioning what to do with the map: "Oh, this map? I can zoom in. Not sure what this really does." [User #06]. Both codes deal with app features that are not noticeably interactive and do not state the project's goals, revealing how the app falls short in guiding the user to whatever features, goals, or tasks they should be interacting with. After much analysis, we realized that many comments coded "Lack of directions" were about "Unclear CTAs". Having found these two main ideas, "Lack of directions" and "Unclear CTAs," it was possible to conceive an encompassing theme: "Hierarchy Problems." We identified "Hierarchy Problems" when users felt and showed they were lost, did not know what to do next, or did not even know where to start interacting.
We also coded for sentiment, which surfaced three main adverse reactions when users met "Hierarchy Problems": frustration with all the difficulties they encountered; confusion, leading to a lack of interest; and discouragement, as illustrated by User #02, when selecting types of report: "I don't understand what I'm supposed to be doing in here"; and User #03: "I'm trying to make sense of this app...". Furthermore, "Uncertainty" was omnipresent throughout the sessions. It became a theme by itself because it represents a significant pain point caused by several factors and problematic features, such as the lack of interactivity and feedback of the Map, another central theme. For example, when viewing the reports, User #10 said uncertainly: "I assume it is just mine [report] that is showing." "Uncertainty" was also present when users were trying to figure out what they were supposed to do initially, whether they were successful when completing the task, and whether or how the social features worked. For instance, when User #06 clicked on Submit Report, he seemed surprised: "Did I submit something? What did I submit?" he asked insecurely. Users, in general, also felt uncertain about their contributions, not only expecting feedback when completing a task, but also wondering what impact it might have on research and the community, whether the data is actually helpful, and whether the result is something beneficial. For instance: "I'm trying to figure out what's my goal, or my job" [User #10]. They also questioned other participants' contributions, whether they were visible or accessible, and whether the Map was supposed to perform this social role. "Uncertainty" is evident in some excerpts: User #06 said he "... would like to see what others are contributing to the community." User #08 spent a few minutes going back and forth in the app: "Can't find others' reports." User #04 declared he "...
would expect to see a map across the US and others' submissions." and "I'd like to see the reports in my area. Get some benefit from the reports." Undoubtedly, the sentiments of frustration and demotivation also showed up and contributed to a negative attitude. As mentioned before, two other themes were also unveiled: "The Map", deemed unclear and, therefore, pointless, and "My Contribution", which embraces codes like "Data Impact and Usefulness", "Reward", "App's Purpose", and "Task Success".
Theme: Hierarchy Problems. Codes and ideas comprised: subthemes such as Information Overload, Lack of directions, Unclear CTAs, and Visual Interface. Hierarchy issues lay in the lack of visual cues indicating what users were supposed to interact with and the difficulty of differentiating buttons and links from static images. Related sentiments and emotions: adverse reactions such as confusion, difficulties, uncertainty, and dissatisfaction.
Theme: Information Overload. Codes and ideas comprised: too much information presented at once, without guidance or support, weighed users down. This theme includes codes such as Visuals, Scientific Character, Confusion, and First Impressions. The code Visuals concerned the quality of the visual interface, elements, colors, and organization on the screen. The combination of visual features and excessive technical language led users to perceive the app as "sciency" or designed for experts. Confusing navigation and layout contributed to an overall negative first impression. Related sentiments and emotions: confusion, inadequacy, negative comments.
Theme: The Map Feature. Codes and ideas comprised: this feature was very unpopular, since it raised various pain points due to its lack of interactivity and purpose. The Map theme encompasses codes related to interaction qualities, such as Clarity, the app's and the Map's Purposes, Purposelessness, and the project's Social Aspect. Related sentiments and emotions: frustration, demotivation, negative responses.
Theme: My Contribution. Codes and ideas comprised: a cluster of codes and ideas revealing users' preoccupations with the impact and usefulness of their contributions and their desire to receive rewards and benefits. It includes codes such as Data Usefulness, Benefits and Rewards, Not Fun, and App's Purpose. Related sentiments and emotions: frustration and demotivation due to the lack of recognition, negative reactions.
Table 8: mPing discovered Themes.
The focal point of the app's screen was a large map showing the user's real-time location once the user had permitted the app to track it. Users seemed to judge by its size on the screen that the Map was an important feature, perhaps the central part of the app. However, it raised several questions about whether the Map showed others' reports, how many people were participating, and why there was no visible activity, feedback, or interaction at all. Users ended up making suppositions about the Map's purpose while disapproving of this design choice. Users also questioned the purpose or objective of their contributions, as User #06 states: "It is unclear what is it that I am contributing for...". Another user questioned the efficacy and validity of the task he was required to perform -- report the local weather: "Not sure how it can be useful." [User #08]. These criticisms also tell us how an unclear task purpose and a lack of information on how data are utilized can strongly dissuade users from participating and engaging. The absence of such information leaves users feeling like they are putting time and energy into something whose importance to the project they do not comprehend. This uncertainty concerns not only the contributions' usefulness but also another, purpose-wide level: whether the research's results realistically impact the environment. That coincides with Alender's (2016) study on citizen scientists working with water quality monitoring.
Participants had a markedly positive reaction when they gained access to the work's results, comprehending how the data collected produced a real impact on environmental issues and how their efforts resulted in tangible outcomes. From the participants' perspective, we identified a clear rationale and common expectations: once users are "called" to contribute to a project, they need to grasp the purpose of that project, a decision usually made in seconds, which justifies the importance of clear goals and a clear project purpose. If hooked, they need to perform the required task, aware of its relevance and of how the project's team will use and treat the collected data. Moreover, most of them desire some sort of reward, such as benefits or acknowledgment. Thus, users' comments reflect the relevance of the reward, which has already been shown by extensive research on motivation (Cappa et al., 2018; Raddick et al., 2010; Rashid et al., 2006; Rotman et al., 2014). At the very least, users want to be appreciated for their effort and to feel they have accomplished something. One participant even declared: "The user needs to get something out of it." [User #09]. Several participants' comments also match recent studies (Pejovic & Skarlatidou, 2019; Skarlatidou et al., 2020) showing how vital it is to participants' motivation to see their contributions implemented and their data in use. While reward can take many forms in the crowdsourcing sphere, extensive research has shown that Cit Sci participation can benefit from public online acknowledgment as the primary form of reward (Cappa et al., 2018). However, studies mainly compare reward practices employed to crowd in participants in the first place, before they decide to join the project, such as financial compensation, authorship, and public acknowledgment.
Nevertheless, while these strategies may inform effective ways of promoting initial engagement, our take on rewards differs, since we are looking at the benefits users should receive once they have already joined and are experiencing the app for the first time. Throughout the onboarding, all users revealed dissatisfaction with the lack of benefits for the user. Some mentioned what they would expect to receive, such as a "thank you" once they are done sending the report, or access to others' weather reports in their area, while others expressed the urge to learn the impact of their contribution. In addition, the lack of fun while performing the tasks was also mentioned as a missing motivator. Because the lack of benefits offered can dramatically affect users' willingness to continue participating and returning to the platform, design teams and Cit Sci researchers should pay attention to this aspect more thoughtfully. Presenting benefits to users should occur at two distinct moments in the onboarding: at the beginning of the interaction, during the statement of purpose, since it can work as an incentive to participate; and right after users have put effort and time into contributing and need an acknowledgment or reward of some sort. The theme definition process was supported by a mind map, elaborated to visually help with the grouping of codes, which allowed us to draw links and connections, literally and figuratively, between the codes. Figure 7: mPing app screens examples. Figure 8: mPing app Mind Map. 5.7.4. eBird app: Analysis and Discussion eBird is one of the most well-known and successful Cit Sci projects to date. It requires participants to report on birds they encounter. The app is built to gather birdwatchers' information in the form of checklists of birds, archive it, and share it to power new data-driven approaches to science, conservation, and education.
User Interview and Observations Analysis As described previously, the sessions were carried out with nine volunteers, resulting in observation notes and interview answers. We could identify key themes that were present and stood out, helping us to understand how users dealt with the eBird app for the first time. Users were invited to look at the eBird website on the laptop available in the room. Some of them started by watching the demo video eBird has on its homepage. The app use followed that, and subsequently, the interview.
Theme: Aesthetics (Visual Design and Usability). Codes and ideas comprised: encompasses the UI's aesthetic aspects, such as colors, style, visual elements, and image use. In addition, subthemes were related to how the app's visuals delivered a professional look and a positive first impression. Related sentiments and emotions: positive opinions and appraisal.
Theme: Perception of Value. Codes and ideas comprised: encompasses subthemes about how users perceive the app's qualities and how it might suit their demands and expectations. The value for the user revolves around the balance between what the app offers and the costs of using it, meaning how much effort they need to put into learning how to use it, registering, time consumed, and other compromises. Raised aspects included the lack of personal interest in the project's topic, the clarity of the specific goals of the overall eBird project, the scientific character of the app, and potential knowledge gain. The desire for the use of images was present. Related sentiments and emotions: disappointment, interest, and negative comments.
Theme: Navigation Difficulty. Codes and ideas comprised: encapsulates the gap between the designer's mental model and that of the app's user. It included the lack of information organization, low findability, and the use of expert terminology, making information somewhat inaccessible. Related sentiments and emotions: self-doubt, mental model mismatch, confusion.
Theme: Registration as a Barrier. Codes and ideas comprised: users' effort and work, commitment, time consumed, and frustration. Technical problems were also included. Related sentiments and emotions: negative attitudes and frustration.
Table 9: eBird discovered Themes.
Starting with the Aesthetics theme, it encompassed visual design and some usability issues. The app's visual interface was unanimously raised as a positive aspect. The overall visual aspect of the UI contributed to what users called a "professional look", and therefore users expressed having good first impressions. Comments on the app's look were the first to be expressed spontaneously by most users at the beginning of the sessions, such as: "It looks pretty. The name is cute." [User #09]; "The home page of the app, I mean the UI, is clean and pretty." [User #07]. It was evident that the higher the quality of the UI's visuals, the better the users' reaction. As they showed positive attitudes, a connection between an elaborate visual interface and the professionalism of the project was revealed, which even brought some users to perceive what they called a "sciency" aspect. Interestingly, although positive, this "sciency" impression of sophistication and scientific character conveyed by the UI's visuals also fed a notion that users of this app should be equally refined or committed to the project, as this user stated: "... but also, it looks like it expects more from users" [User #02]. This perception can lead novice users to believe that they must be familiar with the birding topic, or even be an expert or bird lover, in order to participate effectively in the project. Figure 9: eBird app screens examples. In our evaluation, eBird's UI carries several positive features and a simplicity that caused users to feel comfortable and excited about the project. Both the clarity of the project's goals and the cleanness of the interface are noticeable.
We believe that those characteristics, plus the consistency of visual elements throughout the screens (internal links), the minimal use of color, the small number of icons and other picture elements, together with an apparently organized layout, contributed to the positive impression and afforded users a good start. In short, we confirmed that investing in and building a high-quality visual design certainly has the power to change people's perception of a product and to build positive affect among its users, setting the product up for success. However, this advice must be taken sensibly, considering the audience the product wants to reach. Evidently, the first experience of users with a technology product, such as the eBird app, does not come down to one single aspect of use (e.g., aesthetics and the UI's visual quality). Nevertheless, this aspect deserves attention, since it has the power to overlay other aspects of the product. This is called the Aesthetic-Usability Effect, thoroughly studied back in the 1990s and well explored by Don Norman (2017) more recently. According to the Aesthetic-Usability Effect, users are susceptible to the appeal of the visual and overall aesthetics of any given interface, which can make their experience "look" better than it actually was, leaving them less judgmental and more tolerant of minor usability problems. In our study, the effect of the UI's visual design on our volunteer users was strongly noticeable, influencing their enthusiasm, curiosity, and optimistic attitude from the start. As long as they are compatible with the app's goals and audience, visual elements, colors, imagery, and even language function as strong engagement attributes. Conversely, as the interaction progressed, it was clear how the pain points faced by the users, causing frustration and self-doubt, started to overlay the initial positive responses and become more prominent. Based on our users' experiences, the Aesthetic-Usability Effect was noticeable up to a certain point; users'
enthusiasm and interest gradually faded and were replaced by disappointment and frustration when trying to use the app and place a contribution: "I liked the app in the beginning, but then I realized how specific it was... hmmm... so I don't feel like... I'm part of this audience." [User #04] The next theme identified was Perception of Value. The lack of interest in birds in general did not prevent most users from acknowledging the clarity of the specific goals of the overall eBird project. The scientific character of the app and the opportunity for knowledge gain counted as positive aspects for the users. The app was also efficient in communicating the value of each volunteer's contribution and how the collected data was actually being used by the project team: "This makes sense... I understand why birds' locations are important" [User #04]. When a user stated: "It's actionable. Seems legit. You can see them using the data." [User #06], the app had shown how the practical value of his actions and contributions, i.e., efforts, was relevant to the initial drive. Many of the users' spontaneous statements resembled personal assessments, as if they were (and after all, they are) deciding whether initiating the app's use was worth their time and effort. Such instances illustrate how sensitive the presentation of the right information regarding what we call the "Statement of Purpose" is during the onboarding, when users have just started. It includes the goals of the overall project, the task goal and app purpose, how contributions can make a difference, and whether data from contributions are actually useful and in use by the project (legitimacy), which can also include the project's outcomes or achievements. Another aspect of the perception of value is the expectations users bring to the experience, which may be reinforced by characteristics of the app itself or the topic of the project, or may be merely individual anticipations.
During the sessions, it became clear that users expected to find, observe, or capture images of birds. Since they were offered a look at the project's website homepage at the beginning of the session, the demo video and the interface visuals, which turned out to be well received, might have contributed to their belief that bird pictures would be present in the app. The lack of images in various areas of the app truly impacted users, who quickly felt frustration: "At least they could provide me a picture of each bird species... I'd learn something." [User #05]; "I'm curious! I hope there's a map showing cool birds around me." [User #05]. Users also indicated disappointment: "I was hoping to be asked to take pictures or record bird songs." [User #01]; "I have no idea what birds those are... I was expecting to send a picture or confirm the ones I see." [User #05]. In this case, the presence of pictures would have improved ease of use, preventing users from feeling unable to participate or feeling that they were not part of this community or audience. On another topic, the theme Navigation Difficulty was also relevant. Almost unanimously, users got lost in the app and did not refrain from showing their frustration. Clearly, there was a gap between the mental model of the project's team or designers who built the app and that of the users we tested, as seen here: "Hotspots!? What is it? Who is in there?" [User #06]; and "In what order is this organized? How can I find a red cardinal in here?" [User #02] The lack of bird species pictures or detailed descriptions not only prevented users from executing the tasks and placing a contribution, but also aroused a feeling of self-doubt, leading them to believe they were not the right audience for this app; only experts or bird lovers would have the required knowledge and capabilities to use it.
Furthermore, doubts about the terminology and findability problems prevailed across the sessions, well illustrated by this user's comment: "... most features are not visible or findable. I was hoping to be asked to take pictures or record bird songs. Maybe such features are not present at all. Anyhow, I couldn't find them." [User #01]. Together with the previous theme, another obstacle the users faced was the registration step: Registration as a Barrier. Three main issues arose: first, the commitment of registering and creating an account, including the work and effort necessary to do it; second, even the users who got through the process considered it time-consuming and annoying. The fact that users needed to log in before using any feature of the app weighed down their experiences: "I'd like to open the app and see the functions. All of that before registering. It should offer a guest registration... I'm not a birder." [User #06] For some users, the mere fact of being redirected from the registration screen in the app to a form on an external website already represented a negative point, perhaps indicating that the process would take even longer. And third, some users experienced technical problems, such as not receiving the confirmation email that was supposed to be sent instantly.

Figure 10: eBird app mind map.

5.7.5. Marine Debris Tracker app: Analysis and Discussion

The Marine Debris Tracker project, from NOAA and the University of Georgia, offers an app through which volunteers who go to the beach or other coastal areas can report the number of marine debris or litter items found on land or close to the water. Besides keeping a record of the locations, types, and quantities of items found by users, the project aims to spread awareness of marine debris.
Interview and Observation Analysis

Using NVivo software, after organizing the observational data and interview answers and becoming familiar with the content, we originated forty-five preliminary codes that described interesting points present in the users' experiences. Next, according to the methodology explained in section "5.7 Data Analysis", we dove into the data to identify and organize codes into meaningful groups. From the beginning of the Marine Debris analysis, we could perceive and trace a pattern in users' responses and reactions. This behavior revealed two main preoccupations. First, as soon as they started interacting with the app, most users got a sense, even if superficial, of the subject being addressed, and they demonstrated approval or enthusiasm: "An important problem that deserves attention." [User #01]; "As a beachgoer, I liked the mission. Being alert." [User #04]; "I'm willing to participate because I care about this problem." [User #06]. So "Personal Interest" was designated as a relevant code. However, doubts concerning their roles as well as the usefulness and purpose of contributions immediately started to arise. Users began questioning some of the features and interface elements, for instance, the different item lists. As soon as they started exploring the app and tried to understand how it worked, it became clear that this was the main issue encountered so far, that is, the "App's Mechanics". Users struggled to comprehend exactly how the proposed task of counting and reporting any sort of debris found near water could effectively help the cause. The theme "App's Mechanics" was developed based on codes that emerged from the data, namely: "Contributions' Data Usage", "How it works", "Outcomes", and "Reward". "Lack of Guidance" and "Primary Usability Issues", two other relevant codes, were also included among the difficulties users faced and are part of the mechanics and how the app was designed and implemented.
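The code-to-theme grouping described above can be illustrated with a minimal sketch. The code and theme names follow the text, but the data structure and counting logic are our own illustrative assumptions, not NVivo's internals:

```python
from collections import Counter

# Illustrative mapping of emergent codes to the themes named in the text
# (an assumption for demonstration; the real grouping was done iteratively in NVivo).
THEME_MAP = {
    "Contributions' Data Usage": "App's Mechanics",
    "How it works": "App's Mechanics",
    "Outcomes": "App's Mechanics",
    "Reward": "App's Mechanics",
    "Lack of Guidance": "App's Mechanics",
    "Primary Usability Issues": "App's Mechanics",
    "Personal Interest": "Questioning the Purpose",
    "Lack of Clarity": "Questioning the Purpose",
}

def tally_themes(coded_excerpts):
    """Count how often each theme occurs in a list of (excerpt, code) pairs."""
    counts = Counter()
    for _excerpt, code in coded_excerpts:
        counts[THEME_MAP.get(code, "Uncategorized")] += 1
    return counts

# Example: two excerpts coded during one hypothetical session.
session = [
    ("I'm not sure who am I helping.", "Contributions' Data Usage"),
    ("I care about this problem.", "Personal Interest"),
]
theme_counts = tally_themes(session)
```

Counting codes per theme in this way mirrors how recurring ideas were weighed when deciding which themes were most representative.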
Users' comments illustrate these topics: "I'm not sure who am I helping. To whom is my data going?" [User #04]; "I feel unsure about the function of it... it is a demotivating factor. Feels like sending data to a void!" [User #05]. User #03 asked a crucial question: "According to the app, should users be cleaning and collecting the litter or merely reporting the items found on the coast?" Following that line of thought, many users questioned the project's purpose, how important it truly is, and the impact of the collected data. They also asked about the goals listed on the app's website: "Unclear how the use of the app can raise awareness... Maybe only if I tweet about this or use social media..." [User #05]. Given these very significant considerations, the central theme we defined was the "Purpose's Validity". This theme also included codes such as "Lack of Clarity", as many users expressed an inability to grasp the purpose. They would recognize or speculate that the impact or possible results were beneficial to the environment, resulting in cleaner beaches and litter-free natural landscapes. However, the great question was, "but how?". Criticisms concentrated on that point: "The problem is compelling. I can infer the impact of this project, but... How it works is unclear." [User #06]. Others affirmed that they found the project compelling, yet the way it was built led to extreme uncertainty and, therefore, loss of interest. Considering that the execution of the required task (logging debris items found on the shore or at any body of water under the categories listed in the app, and submitting them) revealed itself to be comparatively straightforward, the lack of directions and information did not represent a significant difficulty for most users. On the one hand, minor usability problems did arise, such as buttons with enigmatic labels, e.g., the "Desc" button.
Also, all items in the list started with the number "1" by default, leading a few users to think they would have to reset to zero any item not found during their trip. Yet, users were satisfied with the diversity and organization of the debris items. On the other hand, the lack of information affected the project's trustworthiness much more severely, by not displaying how exactly data collection and data usage are supposed to be part of a solution to the aforementioned environmental issue. When users cannot find value in their contributions (from data collection and treatment to practical outcomes) right from the beginning of the interaction, the effect on their momentary motivation will likely be damaging in terms of engagement and willingness to contribute again.

Theme: My Role
Codes and ideas comprised: Users' role questioning was about their contributions' purpose, with doubts and uncertainty on how the generated data is used. It comprised other items, such as the importance of individual participation and the results and impact of the contributions, and included codes related to rewarding and recognition.
Related emotions and feedback: Reactions predominantly negative: uncertainty regarding their tasks and the data's destination or usage. Users expressed discontentment, leading to purpose questioning and unwillingness to use the app again.

Theme: App's Mechanics
Codes and ideas comprised: Usability issues (Log button, Desc button), navigation problems, and users' doubts on how to use item lists and queues. The lack of guidance and confusion in understanding how the app worked was noticed. The items' organization facilitated some tasks.
Related emotions and feedback: Reactions predominantly positive regarding the variety and organization of items. Mostly negative demonstrations were related to the uncertainty over elements' functions, with feelings of confusion and frustration. Users presented doubts on the project's relevance and outcomes.

Theme: Questioning the Purpose
Codes and ideas comprised: Encompassed codes such as Lack of Clarity, Relevance, Personal Interest and topic appreciation, Lack of Information, and Results / Impact of Contributions.
Related emotions and feedback: Predominantly negative responses: frustration; doubts on the project's relevance and outcomes. The social features present were considered positive for some users.

Table 10: Marine Debris discovered themes.

Figure 11: Marine Debris Tracker app mind map.

Figure 12: Marine Debris Tracker app screens examples.

5.7.6. SatCam app: Analysis and Discussion

SatCam is an app that allows users to participate by sending "live" pictures of the sky via an iPhone. The reported observations help the scientific team verify the quality of the cloud products created from the satellite data obtained by the Terra, Aqua, and Suomi NPP satellites. In return, the app sends users satellite images captured above their exact locations anywhere on the globe. It works as an exchange: users take a picture of the current sky, wherever they are, and receive a photograph of their location taken by satellite. However, each satellite has its own orbit, so users need to check satellite passes over their locations before submitting a picture. In addition, it is possible to set alarms and receive alerts on their phones.

Interview and Observation Analysis

SatCam's sessions resulted in observation notes and interview answers from nine participants, transcribed and inserted into NVivo software for further analysis. Following the previous protocol, users were offered a look at the project website on the laptop available in the room. Most users began with the website's homepage. Although a few users commented negatively on the layout and visual design of the page, some commentaries were positive and highlighted objectivity and cleanness. Although users showed some criticism (e.g., the jargon "cloud products" was considered unhelpful), they expressed curiosity and were intrigued by the topic and the description given on the website.
During the analysis in NVivo, we arrived at thirty-two initial concepts, including pain points, users' reactions to features, and unexpected situations, comprising both positive and negative observations. Following the methodology from section "5.7.2 Data Analysis", after much deliberation, we organized those initial concepts into groups in which the codes denoted similar aspects of the users' experiences. As in the previous sessions, we reread the data to check whether the preliminary arrangement made sense. Common ideas started to emerge, and relevant pain points began to appear. At a certain point in this iterative process of reorganizing and merging the more representative codes, we felt the need to categorize many of the comments and observations made by users as positive or negative demonstrations. This cyclic process of grouping codes, checking, and going back and forth reduced the number of codes initially found to eighteen.

Theme: Registration as a Barrier
Codes and ideas comprised: Formed by codes such as Effort to Complete the Step and, for many users, Disengagement as a result. Technical problems are also included.
Related sentiment and emotions: Negative comments; annoyance and frustration.

Theme: Information Missing
Codes and ideas comprised: This theme embodied the codes related to lack of Goals' Clarity and Guidance, how the contributions worked, and, most importantly, the Mechanics of the app. That fault led many users to feel uncertain and make guesses about the app's purpose.
Related sentiment and emotions: Uncertainty, self-doubt, lack of interest in going further; negative comments.

Theme: Mental Model Mismatch
Codes and ideas comprised: This theme integrated the gap between the users' mental models and the app's framework. It covered codes like Uncertainty, Guessing, and users' expectations of getting rewarded.
Related sentiment and emotions: Some users expressed curiosity while making guesses, but overall they would feel insecure and annoyed.

Theme: Early Disengagement
Codes and ideas comprised: This theme embodies the negative consequences of the pain points experienced by the users, that is, users leaving the app.
Related sentiment and emotions: Lack of interest or enthusiasm, annoyance, frustration, indifference.

Table 11: SatCam discovered Themes and Codes.

The analysis revealed a negative tendency towards the registration step, revealing dissatisfaction and several pain points for the users. The first identified codes were gathered under the "Registration as a Barrier" theme. The mandatory registration, set up as the app's first screen, turned out to act as an entry barrier unanimously among participants. Among the more extreme responses, for four participants the upfront registration led to immediate disengagement and to negative comments and attitudes. "Registration as a Barrier" originated from different reasons for every user. Although they all shared annoyance and negative comments, different motives led them to disapprove of this step. Some users expressed that the effort to complete the sign-up was not compatible with what they would supposedly receive or experience once logged in. For example, User #05 stated: "...the amount of effort required versus the interest in the app did not make sense." The user compared the amount of work he would have to put into this initial step with his interest and motivation to continue, concluding that it was not worth his time and energy. In addition, some users said they felt annoyed that the app asked for sign-up, since it delayed their "entrance" into the app. Most users questioned the mandatory aspect; otherwise, they would probably not bother registering later on. Others complained about the sign-up for different reasons: "I feel a bit lazy to go through the registration. It is not because of the whole amount of information I would have to input. It is just annoying that they are asking me to do this." [User #07].
Interestingly, most users did not question the fact that the app asked them to register; instead, criticisms concentrated on the timing and obligation of registering. That became evident when observing the number of users who suggested social login as a solution. In the social login process, users choose the social media account they prefer to log in with, and a request is sent to the social network provider. Once the provider confirms the users' identity, they can access the app. That eliminates the need to create a new account exclusively for the app, unless the users prefer to do so; it remains their choice. This tool also benefits organizations and project teams, since it allows them to collect more data on the users coming to their platforms, reach out to potential participants in the same networks, or conduct surveys and gather feedback. Recent scholars (Karegar et al., 2018; Micallef, Adi, & Misra, 2018) have addressed the use of social login in mobile apps, pointing out the various login features used by popular apps and the decision-making around advantages and disadvantages for both users and teams. Many of the users' opinions about the registration step as a barrier to their further participation and exploration of the app are directly related to critical codes that permeated most users' experiences: "Uncertainty" and "Guessing", culminating in the theme of "Mental Model Mismatch". Much of what users faced was due to the difficulty of comprehending the "Mechanics" of the app, so the lack of informational support on that aspect strongly fueled the conflict between the user's mental model and the system. We then created a specific theme, "Missing Information", for grouping such issues; it embraces codes such as "Lack of Guidance" as well.
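The social login hand-off described above typically follows an OAuth-style authorization-code flow. A minimal sketch of its first step, building the URL that sends the user to the chosen provider, is shown below; the provider endpoint, client identifier, and redirect URI are hypothetical placeholders, not values from any app studied here:

```python
from urllib.parse import urlencode

# Hypothetical provider endpoint; real values come from each social
# network's developer documentation.
PROVIDERS = {
    "examplebook": "https://examplebook.example/oauth/authorize",
}

def build_login_url(provider, client_id, redirect_uri, state):
    """Step 1: send the user to the chosen social network to confirm identity."""
    params = {
        "response_type": "code",   # authorization-code flow
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "profile email",  # minimal data the app requests
        "state": state,            # anti-CSRF token, echoed back by the provider
    }
    return PROVIDERS[provider] + "?" + urlencode(params)

# Step 2 (not shown): the provider redirects back with ?code=...&state=...;
# the app exchanges the code for a token and fetches the confirmed identity,
# so no app-exclusive account creation is needed.
url = build_login_url("examplebook", "cit-sci-app", "https://app.example/cb", "xyz")
```

The `state` parameter illustrates why the hand-off stays safe for the user: the provider echoes it back, letting the app reject responses it did not initiate.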
Although associated, we kept these codes apart so it is possible to demonstrate how they are linked, how they influence each other, and, furthermore, how they contribute to telling the story of the users' journey. "Missing Information" was a recurrent issue throughout the users' experiences in general. Many pain points were caused by the lack of guidance, instructions, or plain directions on how the app works. Users also complained about the difficulty of grasping what they were supposed to achieve. More specific usability problems were also present, although less evident than the lack of information. The lack of information was most critical when users were trying to place a contribution and interact with the features. This problem is visible when User #10, trying to take a picture, stated: "Hmmm... (I am) trying to figure out how it works... Specific times!? I don't understand!". Five users considered the app's goals unclear and were not able to grasp them; they demonstrated confusion and subsequently low interest in getting involved, even when they discovered the goals on the website or in the app. Many users resorted to "Guessing", which constituted a relevant code visible in the many moments when they tried to make sense of the system: "Oh! Oh! Okay... I think with these controls of notifications and passes, one can set alarms to let them know that the satellite is passing in the area..." [User #01] and "Can I set an alert? It seems so." [User #08]. When users made a guess about a feature, it would most certainly lead to confusion and a negative attitude. Making a guess revealed much of the participants' "Mental Model Mismatch" and, therefore, how distant their mental models were from the app's mode of functioning, a diverging mental model (N. Nielsen, 2010). This conflict was also noticed when users mentioned an expectation of receiving a reward or payout. We noticed how users frequently assumed they would get some compensation based on the effort required from them.
It also exposed how much information was lacking in almost every step of use and how users reacted. It became evident how "Missing Information" led to "Uncertainty", which, in turn, led to "Guessing", and how this sequence revealed the "Mental Model Mismatch". Almost all participants went through this sequence, a course of thoughts and actions arising from the gap between the system's shortcomings and the users' mental models. The feelings coded as "Frustration and Annoyance" followed and, for this app, led to user "Disengagement". User #05's experience illustrates this, registered in our observation notes as follows: "The registration page asks for an email and a password. Underneath the form, there are two buttons: Register and Sign in. He says he was led to believe that those fields were destined for those who already have an account. He keeps trying to click on the Register button but gets no response." Finally, the user concluded he should try typing his email address anyway and created a password. He checked his email account on his phone and gave up: "The combination of not knowing what this [the app] is for and having to go through this registration step gives me no choice but to leave." Suddenly, while we talked, the app reloaded, and he then tried to click on the Make Observation button. However, something went wrong, making the app shut down. He tried numerous times to launch the app and place an observation, but it kept shutting down. On the brighter side, some other codes emerged, detached from the difficulties and negative attitudes. Although many users did not grasp the project's goal or the contribution mechanism, some demonstrated a positive attitude in their general opinion. Some users located the project's goals and explanations on the website prior to using the app. Others ignored the website but still expressed interest at the beginning of the session, sustaining it over the interaction.
"Curiosity" was a code we could see emerging from common positive attitudes users showed: momentary interest, enthusiasm, and a sense of compellingness (A. Smith, 2017). Interestingly, in situations where users could not take a picture and place a contribution, or even sign in and interact with the app, some would express that the topic and general concept of the project had a positive effect on them, creating curiosity. User #06 had a positive overall experience; he was able to sign up, log in, and overcome a technical issue when registering, took a picture, and then went through difficulties leading to uncertainty and causing disappointment. When he questioned how the photo feature worked and why it failed, several suppositions arose, and after a few tries and some exploration, he ended the interaction. During the interview, he offered much feedback and criticism but remained intrigued as to why the camera did not work. Overall, this user found the app very compelling even though he was not able to have the complete experience: "[About using the app again in the future] Not sure. What do I get in return? It is unclear... Well... maybe a satellite image would be cool!" [User #06]. Ironically, a satellite image is precisely what he would later find out the user receives in return. Here, the lack of information on how the app works negatively affected the chance of retention. Conversely, User #10 demonstrated curiosity and interest when he first saw the website. However, after logging in, he could not grasp the app's functions and mechanism: "Trying to figure out how it works... Hmmm. Specific times? I don't understand! Nothing to do here". Still, he was mostly positive about the whole experience and found it "cool, has a purpose, simple and unique." Nevertheless, the chances of him using it again were meager due to the lack of explanation and guidance on how to contribute. The "Novelty" and "Topic Interest" codes were conceived based on users' comments and reactions.
Users' comments reflect the strong potential this app holds to engage users, contribute to the cause, and fulfill its purpose. "Retention Chance" surged as a relevant topic, developed from codes that speak to participants' comments and attitudes indicating re-use in the future. This topic went beyond gathering the negative responses to the last interview question, which directly addressed this issue: "After this experience, would you consider using this app during your free time? Why?" (from the interview questions list in Section 5.5). Only two participants expressed interest and responded positively; the majority responded negatively, and the reasons they offered pointed to multiple causes, reinforcing the previous complaints. The final motives for not adopting this app, and thus not engaging with or contributing to this project, revealed the aspects that weighed down most of the users' experiences. "Lack of Interest" combined with a negative experience manifested as the most decisive factor for users not to return as regular volunteers. "Personal Interest" has already been shown to be the primary motivator for initial participation (Rotman et al., 2014, 2012), together with three other dominant factors: self-promotion, self-efficacy, and social responsibility. These factors encourage participation, although the studies' interviewees revealed intent but did not necessarily act on it. The other code we discovered was "Reward", or payout; working as a moderator, as previous studies reported, it conditions potential participation once people find personal value or benefit. Our findings matched previous literature regarding the reasons users responded negatively to the likelihood of using the app again.
Furthermore, the lack of interest was often reported in combination with not having enough information in various instances, reinforcing the idea that lack of interest is most impacted by failing to provide appropriate information to the users, hindering any successful interaction or conversion. In sum, whether or not new users hold a previous interest in the topic, information and guidance might potentiate engagement. Negative responses to future engagement were also due to the lack of any type of noticeable reward to the users, or of information on that matter. In addition, an overall poor experience with the app was also pointed out as a reason for not wanting to come back in the future or adopt the app.

Figure 13: SatCam app mind map.

Figure 14: SatCam app screens examples.

5.8. FINDINGS

The previous sections' analyses concentrate on examining the data of each app individually and discussing the codes and themes within the context of each one separately. That approach allows an in-depth study of each onboarding process, its problems, and its specific aspects, contextualized in the necessities, demands, and challenges of different Cit Sci projects. Furthermore, each app has its own audience and was built by a different team, under distinct conditions and with the available resources, to meet particular demands and objectives. It is undoubtedly valuable to identify themes within their context, since it allows one to comprehend the source of specific pain points or other issues based on the UI's characteristics and the app's functions. Thus, the prior studies (section 5.7) provide the main themes present in each onboarding step, unveiling its significant structural aspects and, based on users' demonstrations, identifying the sentiments and emotions triggered by those aspects. On the other hand, this section complements the prior analyses by adopting a macro approach and looking at all the codes generated by Study II.
This reflection aims to draw parallels and compare the diverse issues across all four apps, going back one level and bringing together all the relevant codes without distinguishing their sources. Since themes were elaborated based on the leading topics and ideas of each app's onboarding, they conveyed high-level patterns pertinent to each specific app. For example, the mPing app presented a theme called The Map Feature. The map built into the UI was an essential topic noted by most users, encompassing codes associated with the interactive qualities of this feature, like clarity or purpose, how it works, and the social aspect. The map theme would not make sense if applied to other apps that do not offer that feature. However, codes (for instance, "Clarity") are easily comparable, and it is more valuable to identify them across different onboarding designs and assess whether they configure a pattern that should be considered when designing onboarding.

5.8.1. Overarching themes across the four analyzed apps

As discussed earlier, engagement can be studied in different manners and looked at from diverse points in time during the interaction or across several uses of an app. Users' engagement as a temporal process, as defined by O'Brien and Toms (2008), distinguishes (i) the point of engagement, the starting point where the process begins, (ii) a period of sustained engagement, (iii) disengagement, and (iv) re-engagement. This approach helps in understanding how engagement develops, starting from the point of engagement. These stages can apply to one unique interaction, when users first enter into contact with an app: for example, they pay attention to it for a while, disengage while doing other activities or getting distracted, and return to the interface, re-engaging again. The disengagement stage can also fit a situation where users cease to use the app for a certain period (e.g., a social media app) and return days later, re-engaging with the product.
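The temporal stages above can be pictured as a small state machine. This is our own illustrative encoding of the O'Brien and Toms (2008) stages, not part of their model or of this study's data:

```python
# Allowed transitions between the engagement stages described by
# O'Brien and Toms (2008), encoded as an illustrative state machine.
TRANSITIONS = {
    "point_of_engagement": {"sustained_engagement"},
    "sustained_engagement": {"disengagement"},
    "disengagement": {"re_engagement"},          # the user may return later...
    "re_engagement": {"sustained_engagement"},   # ...and the cycle repeats
}

def is_valid_journey(stages):
    """Check that a sequence of observed stages follows the model's ordering."""
    return all(nxt in TRANSITIONS.get(cur, set())
               for cur, nxt in zip(stages, stages[1:]))

# A single session where the user gets distracted and then returns:
journey = ["point_of_engagement", "sustained_engagement",
           "disengagement", "re_engagement", "sustained_engagement"]
```

Encoding the stages this way makes explicit that re-engagement always passes back through sustained engagement rather than restarting at the point of engagement.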
Still, we are interested in understanding the influences and consequences of such a process, with a more in-depth picture of the first use and the unique engagement process during onboarding. Therefore, the model proposed in this work focuses on how a first interaction unfolds under different design attributes, in which engagement might partake in the outcomes of this experience as a positive result. From an overall perspective, this model embraces engagement characteristics and how onboarding design, content, and user characteristics impact app adoption. In addition, the individual analyses from prior sections give us a hint of the major players during the onboarding of each selected app. We reorganized the most relevant codes found during the examinations with the intent of elaborating the second group of themes: the overarching ones. Overarching themes expose the most significant issues under which the design of onboarding processes for Cit Sci mobile apps needs to operate. Based on the rich data collected and analyzed (Chapters 4 to 6), we weighed and synthesized these criteria as outcomes of our evaluations, after identifying the effects of such issues on users' attitudes and reactions (see Fig. 15 and Appendix 6, pg. 142). The seven main topics are organized as follows:

- In terms of information: 1) presence of technical language and jargon; and 2) app's mechanics and guidance.
- In terms of purpose: 3) clarity of the user's role and contributions' purpose; and 4) clarity of the app's goals, results, and impact.
- Effort and commitment: 5) benefits and rewards.
- Aesthetics related: 6) UI's visual quality; and 7) visual cues (usability).

The first two issues concern information, how it is presented, and what should be conveyed at different times during the onboarding. Our analysis points to the use of technical language and jargon as an element of increased inaccessibility, inadequate for lay participants.
Specialized terms negatively affect participants' self-efficacy, which impacts the chances of re-use (low retention). The second issue relates to a deficiency in informing how the app works, i.e., its basic mechanics, and to the frequent failure to provide guidance. The consequences were likewise related to self-doubt and an adverse effect on participants' self-efficacy. When users cannot grasp the rationale on which the app works, they tend to feel uncertain and then guess features' functions or the workflow inside the app. Such pain points can be efficiently mitigated by employing usability principles and making help content available at all times. The second pair of overarching issues is about purpose. The clarity of users' roles and their contributions' purpose stood out in our data in two ways. First, as a concern users raised, it was shown to be imperative for new participants to know clearly what they are supposed to accomplish when using the apps. Learning the goal of their contributions, how their input or collected data will be employed, and what the participants' duties are proved indispensable. Without this information, users are at risk of becoming demotivated and ceasing to participate. Users who are not aware of the importance of their role within the community will hardly continue to spend time and energy, even if they were initially interested. Besides understanding their roles, users also need to clearly grasp the app's goals, results, and impacts. From a more comprehensive perspective, the objective of the Cit Sci project must be unequivocally stated, including the desired results the team hopes to achieve, plus their impact, environmental, social, and so on. According to our study, when participants cannot grasp this information, frustration and demotivation occur, as they question whether their effort is essential and their input valuable, gradually undermining the chances of future engagement.
Along with those issues involving the validity of individual efforts come to light the wish for acknowledgment of those efforts and the need for rewards. Additionally, we observed that the higher the effort required for a user to overcome difficulties, or the more unsure they are about the importance of their work, the higher the expectation of receiving some compensation. However, it is yet to be investigated how sustainable it would be, in terms of lasting participation, to offer substantial rewards in exchange for volunteers' participation, especially if they are not sufficiently informed about their roles. As for onboarding design, it is arguably safer to affirm that apps should be developed based on best practices in usability, aiming to meet the information needs of users rather than relying entirely on extrinsic rewards as a motivating factor. Complicated registration or signup steps acted as triggers that increase users' expectations of the benefits and rewards they are about to receive. If registration steps act as entry barriers, users might have to invest considerable effort, implying a commitment they are not ready to assume, because they do not yet know whether their effort will be compensated by any eventual reward. Therefore, many users show an inclination to disengage or stop the interaction that early. The sixth and seventh overarching themes relate to the aesthetic qualities of onboarding designs. The UI's visual quality embraces the app's visual characteristics, which gain particular importance during the onboarding because they set the visual communication tone and style, forming the first impression users experience. The combination of colors, typefaces, organization, layout, choices of images, the balance between textual content and images, the style of illustrations, and the arrangement of all interactive elements into a confined space hold the capacity to captivate, attract, or invite users to experiment with the app.
Furthermore, visual attributes (i.e., brightness, contrast, use of colors or a monotonic palette, and information hierarchy) can convey different messages, suggesting a professional or amateur appearance, more or less trustworthy, organized, clean or unpolished, among others. Hence, the onboarding graphic design should reflect how the project needs to be seen and conveyed, and be adequate for the targeted audience. Beyond a judgment of taste or value, the UI's visual design must be thought of as a layer where information and aesthetic elements combine to mirror other app characteristics, designed to make use efficient and to address users' needs and attributes. On the one hand, an example is the eBird interface, whose UI was well received by the participants, conveying a professional look and a positive first impression. On the other hand, some technical terminology and low information findability resulted in feelings like self-doubt and tension. Some users attributed their loss of interest to these issues because they figured the app was meant for bird experts or biologists, who would have the necessary know-how to use it. The last comprehensive topic, still under the visual category, sheds light on the usability side of the visual aspects of the onboarding design. Many informational problems (e.g., lack of guidance) could be resolved with visual cues, where aesthetic elements interplay between form and function, drawing the users' attention to relevant features and timely actions and helping to elucidate parts of the system's mechanics behind the interface. Not knowing what is interactive and what is not, the lack of differentiation between static components and actionable ones, and not obtaining visual feedback are usability problems that visual design can help solve. Figure 15: Overarching topics elaboration process. 6.
STUDY III: CROWDSOURCING APPS ONBOARDING ANALYSIS In this study, a commercial crowdsourcing mobile app was selected to have its onboarding design analyzed. The app was downloaded, installed, and launched for the first time. During an Expert Review, we applied a set of evaluation criteria described next. 6.1. GOALS AND LIMITATIONS This study had two main goals: 1) To obtain a comprehensive and in-depth view of the onboarding design practices employed by the industry in mobile apps. From the literature review and the first study's results, we learned that no standard onboarding design structure exists among Cit Sci initiatives. Furthermore, most of the Cit Sci apps analyzed did not employ strategies to grow engagement and thus have the opportunity to implement design solutions and patterns already ubiquitous in other fields. Hence, this examination of a sample of onboarding processes employed by the industry, crowdsourcing specifically, provided valuable information and clarity on current practices. 2) This study also aimed to show how the characteristics of onboarding, and the set of design decisions built into it, address and have the power to influence users' engagement. Building on Study II, which revealed the various engagement attributes, the relevance of certain elements, and typical user pain points during onboarding, this study aimed at providing a complete examination beyond the Expert Review method. The limitations reside in the fact that only one app was analyzed, due to limited researcher resources and time constraints. This is an initial proposal of onboarding evaluation that could serve future studies, since we understand that the higher the number of apps analyzed, the more variety of onboarding designs and challenges would come to light.
Regarding the method chosen, Expert Review (adapted to check onboarding issues): although it typically uncovers many usability problems, it should ideally be paired with a user study, as it is complementary to usability testing (Harley, 2018; Molich & Jeffries, 1993). With these two methods associated during a project, researchers would better understand minor to severe problems from both expert and user perspectives. The absence of such pairing constituted another limitation of this study. 6.2. SELECTION OF THE PLATFORM Among the profuse variety of commercial mobile apps, numerous onboarding designs are launched and tested constantly. The SaaS industry has been taking advantage of onboarding strategies for a long time and knows the impact of the first use on a product's success. Although there is little or no consensus on onboarding practices, each design team responsible for them is in charge of creating, testing, and refining techniques and artifices to engage their audience and win new customers. While golden rules for onboarding design are not yet consolidated among the industry or HCI professionals, Cit Sci app teams lack the guidance to strive for an experience that contributes to users staying, participating, and returning. After verifying a sample of the current Cit Sci onboarding techniques and strategies (or lack thereof) to acquire participants, "competition research" can bring insights from outside the field. Hence, in this study, we conducted an expert analysis of a commercial crowdsourcing app. The rationale behind this choice comes from the desire to capture an overview of strategies adopted by successful commercial apps to onboard their users. Nonetheless, considering all the different categories of apps available for download, the different roles users perform, and other characteristics that distinguish them, the crowdsourcing category was considered the determinant for selection.
Thus, differences between commercial crowdsourcing apps and Cit Sci apps could produce a fruitful discussion regarding the strategies adopted for different audiences and contexts that might benefit Cit Sci apps. Crowdsourcing initiatives share fundamental aspects with Cit Sci, such as the participation of the public as the main driver of success and the need for recurrent input from users (retention), among others. Cit Sci can be understood as a narrower subset of crowdsourcing, or a type of scientific crowdsourcing (Haklay, 2013; Wiggins & Crowston, 2011). Therefore, electing a commercial or for-profit app that matches that same structure was defined as a suitable selection for this study. From a long list of thriving crowdsourcing communities and projects that offer mobile apps for the general public to participate in, we selected the GoFundMe platform. GoFundMe Created in 2010, GoFundMe is a crowdfunding platform for raising money for personal passions, needs, and causes. It became the largest crowdfunding platform globally: 50 million people gave more than $5 billion on the site through 2017.2 People can participate in two ways: 1) creating a money-raising campaign of their own, and 2) supporting an existing campaign by donating through the platform. Analyzing the onboarding design of GoFundMe could contribute to Cit Sci projects for a few reasons. As a crowdfunding platform, it is entirely different from scientific crowdsourcing; it differs in goals and purposes, types of contribution, participants' needs and expectations, and rewarding methods, among other aspects. Conversely, Cit Sci apps and crowdfunding apps like this still share the same principles: they depend entirely on people's adoption and critical mass, which demand a certain volume of participation, contributions, and inputs from the public. Interestingly, all of this happens through technology and is ultimately conditioned by the UI.
Moreover, if the first contact with this technology does not render effective participation, pleasant interaction, and, perhaps, empathy or compatibility between user and product, adoption and advocacy hardly arise. 2 https://www.theatlantic.com/magazine/archive/2019/11/gofundme-nation/598369 Figure 16: GoFundMe app screens examples. 6.3. EXPERT REVIEW & EVALUATION CRITERIA As in Study I on Cit Sci apps, one crowdsourcing app was downloaded and launched, and the first use was analyzed. We adopted an Expert Review method, with a few adaptations, to reflect the onboarding context. Rather than concentrating efforts on looking for usability issues present in the app and elaborating recommendations or showing best practices, this review shed light on how successful apps onboard their new participants. We focused on investigating the design strategies, whether the onboarding elements defined earlier in this work were present, and how they were embedded in the app. Based on the previous studies' findings and the literature, we refined the criteria set used in Study I to develop a revised, complete set of criteria, organized in Table 12 as follows: Evaluation criteria employed 1) Presence of each of the onboarding elements. This point was the baseline for analyzing the onboarding design. It was unlikely that the four elements would show up as definite stages or dedicated screens. Since these constructs might not be explicitly presented in the UI, it was essential to carry out an analytical approach to acknowledge, document, and contextualize them. 2) Format and set-up. UI, navigation, and interactions can be designed in infinite ways, heavily influencing usability and tasks, and ultimately contributing to the user and the system achieving their goals. This item sought to describe the resources used to structure each onboarding stage and the tasks involved. 3) Whether those elements formed a flow or a guided interaction.
Guided tours, tutorials, and walkthroughs are the most popular and cited mechanisms, often mistaken for the onboarding itself. These resources are design patterns that might be employed in an app. However, a guided flow can tie the various onboarding steps together, creating a continuous sequence of actions that directs the newcomer toward a specific goal, e.g., subscribing to a service or starting a fundraiser. This item focuses on identifying a flow that connects all onboarding elements and helps the user navigate. 4) General approach. Although little consensus exists among designers on onboarding golden rules, many authors and professionals refer to general approaches that can be "applied" as a formula in different products. Some approaches are listed in section 2.5.8. 5) Whether UId was a mandatory step during the onboarding. As seen in Study II, mandatory registration can significantly affect users' attitudes and initial engagement by creating a barrier to joining the platform. Studying how successful crowdsourcing apps deal with it can provide insights and help understand the product's logic. 6) Whether the SoP was clearly presented (whether a summary of the platform's and app's goals could be found). Strongly recommended by the literature and well supported by Study II, offering a clear and accessible SoP impacts users' engagement directly, since it is a trigger that reinforces underlying motivation, personal interest, and affective connections to the exposed goal. A clear SoP constitutes one of the most influential factors for successful onboarding. 7) Design patterns employed. This item investigated which UI design patterns were used at each onboarding stage, associating pre-designed solutions and platform goals to address user needs. 8) Potential pain points.
This item was derived from the User Journey technique, broadly utilized by UX teams to describe all the possible problems a user might find from before the first use (acquisition or trial version, for example) across the whole interaction with the product. Locating these critical issues can provide valuable insights and uncover unknown obstacles. Table 12: Evaluation criteria for Study III. Through this analysis of an existing onboarding design, we aimed to identify, describe, and gain insights into the strategies in use. This review was developed by looking at each onboarding element and examining how the seven selected categories performed and shaped the onboarding process. 6.4. GOFUNDME APP ANALYSIS In this section, we describe each of the onboarding elements against the criteria adopted. The first item was the presence of each one of the elements that constitute the onboarding. In this case, we were able to identify all four elements in the app. Format and set-up of each onboarding construct:
SoP: Slogan and short description summarize the app's purpose.
UId: Sign-in was offered but not mandatory. The button appeared on the first screen but was just one of the three action options available at this point. The sign-up process included five steps: 1) filling out a short form (name, email, phone number); 2) password creation; 3) verification code sent to phone; 4) verification code entered; 5) phone successfully verified, and welcome message.
InS: The app offered plenty of information on its mechanics, rules, and advice. The user must advance, select a fundraiser, and find "Learn more" links at the bottom of each fundraiser description. From there, a help center concentrated all the content on how the app works, why to use it, success stories, getting started, account management, money management, donor FAQ, common issues, etc. An intro video was displayed at the top of the page. Campaigns' descriptions, pictures, and updates were organized within each fundraiser's individual page.
CnE: The first main CTA was "Start a GoFundMe", presented on the first screen after launching the app and then, again, on the second screen, when browsing existing projects. A second main CTA was "Donate now". Another two secondary actions were available: like and share buttons. Once a fundraiser was chosen to receive a donation, the app redirected the user to a browser page to enter the donation amount. At this point, three focal points were presented: the name of the project, the person who set it up and whom the donation will benefit, and a large box for entering the donation amount in USD with a continue button, plus a GoFundMe Guarantee message saying: "We protect your donation". When completed, options for payment method and donation details were shown. Users could choose to display their name publicly on the campaign page or remain visible only to the fundraiser organizer and beneficiary. Donation with Apple Pay and credit card payment methods were available there. The next screen showed a thank-you message with a new CTA asking the user to share the project, which was possible through social media links. At the bottom of the page, the app asked for another type of support: a new screen provided a public comment box with the prompt: "I donated because...". Table 13: Format and set-up of onboarding elements. Timeline of each onboarding construct:
SoP: Present at the very beginning, and in a reactive form thereafter; it could be accessed anytime, so it only participated in the flow if the user looked for it, accessed the help center, and then returned to the app.
UId: Available at the beginning. This action was available across the whole app, giving the user a chance to sign in or sign up at any point.
InS: Available in a reactive form; it could be accessed anytime, so it only participated in the flow if the user looked for it, accessed the help center, and then returned to the app.
CnE: The donation button is available once inside the fundraiser page.
The same applies to the sharing feature. Once the donation was concluded, the app acknowledged the user's contribution and asked for more support by spreading the fundraiser through social media. Table 14: Timeline. Presence of a flow or guided interaction throughout the onboarding, per construct:
SoP: Yes.
UId: Yes, and it could be deferred.
InS: Partially. Fundraiser-specific information (description, place, beneficiary, amount of money aimed at, and amount raised so far) was complete and displayed in the flow. However, although explanatory content and help (mechanics and rules) were available through links most of the time, they were not presented as part of the flow (users could complete the process without being exposed to them).
CnE: Yes; the main conversion CTA, the donation, was a completely guided process, leading to another CTA, sharing through social media.
Table 15: Flow and guidance. Design patterns per construct:
SoP: Splash screen, isolation effect.
UId: Deferred and optional sign-in: it was possible to go all the way through donating an amount without creating an account. Form filling, password rules, password validation.
InS: During fundraiser browsing: continuous scrolling, search filters, search box, categorization, pull to refresh, progressive disclosure, clear primary actions. Social patterns like reaction, share, follow, testimonials, flagging, and reporting.
CnE: Tunneling, commitment & consistency, sequential steps, clear primary actions, guarantee stamp, set completion.
Table 16: Design patterns. Potential pain points per construct:
SoP: Not found.
UId: The registration issue could be confusing. Although it was possible to donate without signing up, a donation summary screen was presented even with the user logged out.
InS: The app's rules and mechanics, although present, were not actively findable: How it works, Why use it, Success Stories, etc.
CnE: Once the donation was done, the user was not redirected back to the app.
This could have prevented or discouraged new donations to other fundraisers, working as a dead end. Table 17: Potential pain points. Figure 17: GoFundMe app onboarding flow. 6.5. GOFUNDME ANALYSIS DISCUSSION Conducting an Expert Review throughout the onboarding of an existing platform provided an accurate portrayal of the many decisions the design team made. Behind a relatively clean and uncomplicated interface, dozens, maybe hundreds, of features and moving parts seem to have been weighed thoroughly to achieve the millions of users and fundraisers to date. Looking at the onboarding that the GoFundMe app offered, some characteristics stood out for a few different reasons. We explore each of these key issues in the course of this section. Building on the findings and supported by the discussions in this work, some interesting relationships could be drawn regarding users' needs, attributes, design features, and content. We examined the various onboarding aspects, presumably addressing the possible and desirable engagement attributes that contribute to engaged and returning users. When the SoP is presented, in this app's case immediately at the beginning, users who are new to the app are concerned with a few things: what the app is for and whether it will help them achieve what they need. Hence, we identified three primary engagement factors at this point: motivation, personal interest, and specific goal achievement. Different features address these three factors: a slogan and short descriptions of the platform are displayed right from the start. This informs the purpose of the app and the roles users can assume (donor or fundraiser), thus reflecting precisely what a user can get from the app. Furthermore, the browsing pages allow users to quickly access categories of projects they might hold interest in, or look for a particular project and browse through freely to contribute deliberately.
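The low-barrier entry observed in this review (SoP plus multiple entry actions on the launch screen, with registration deferred, as noted in Table 16) can be sketched as simple routing logic. This is a hypothetical illustration, not GoFundMe's actual code: the action names and the rule that only starting a fundraiser prompts sign-up are assumptions made for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    """Tracks whether a newcomer has identified themselves (UId)."""
    signed_in: bool = False
    history: list = field(default_factory=list)

# Hypothetical action names standing in for the three first-screen options.
FIRST_SCREEN_ACTIONS = ["start_fundraiser", "browse_and_donate", "sign_in"]

def handle_action(session: Session, action: str) -> str:
    """Deferred sign-in: browsing and donating proceed anonymously;
    only creating a fundraiser (an assumption here) prompts sign-up."""
    if action == "start_fundraiser" and not session.signed_in:
        outcome = "prompt_sign_up"
    else:
        outcome = action
    session.history.append(outcome)
    return outcome
```

In this sketch, the donation path (the CnE) never hits a registration wall, mirroring the low-barrier pattern the review documented.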
On the one hand, the deferred registration might contribute to users scanning the app and finding a fundraiser that fits their motivations, personal interests, and topics. It gives freedom and time to users, who might also feel welcome rather than forced to commit to participating. On the other hand, if users decide to sign up, the process is easy and quick, demanding little effort or commitment. The informational support regarding each fundraiser was complete and organized, saving users' time and effort. Photographs and short descriptions of each fundraiser addressed several engagement determinants, such as novelty, affective and emotional appeal, and intrinsic and external motivational aspects. During the donation process, some other aspects stood out. Because donation was the only CTA on most pages, it certainly addressed the attention attribute, contributing to engagement. However, the donation process was also set up as a funnel: with each step users advanced toward completing the donation, fewer and fewer distractors were shown, leaving no option but to pay the donation. The thank-you message fulfilled the reward need right after the payment and appealed to the emotional demand for recognition. In no time, the app asked for more support, but in a different form: it urged users to share the link with family and friends so they could contribute. This request played as a call for showing (more) commitment and assuming a little responsibility for the project's success. This message also could have appealed to the users' emotional side. As a second form of reward, users were invited to comment after donating, stating why they donated and submitting pictures. This feature provided an opportunity for "showing off," feeding recognition and community status. It can also work as a response to motivational factors like commitment to the cause, reputation gains, self-efficacy, and empathy (Rotman et al., 2014). 6.6.
FINDINGS Our findings reveal that a significant part of the factors that frustrate initial users, leading to subsequent engagement disruption or eventual disengagement, stems from informational problems. These problems include an array of missing, disorganized, excessive, or poorly presented information. Failing to inform the users impacts different factors that can hurt the experience and undermine early engagement and future use. Such factors include understanding the app's mechanics, the rules within the community, the role users should be performing, the contributions' impacts, and the feeling of self-efficacy when operating the app. Missing content on how the input data are used and on the collection's objectives typically prevents users from learning the purpose of their contributions. Not comprehending what the app is about and how one's effort can be helpful for the community represents one of the main demotivators of initial engagement. This issue is present in three of the four apps studied, embodied by the themes each analysis revealed. For mPing, the theme that most represented users' issues regarding their role and their contributions' importance is "My Contribution". This theme gathers codes that reveal users' preoccupations with the impact and usefulness of their contributions and their desire to receive rewards and benefits. It includes "Data Usefulness", "Benefits", "Rewards", "Not Fun", and "App's Purpose". Informational problems also occur whenever users face difficulties utilizing a feature that is not self-explanatory, and neither directions nor guidance are available when needed. In the Marine Debris Tracker study, the theme that conveys this problem is denominated "My Role". Users questioned what their role was supposed to be, since their contributions' purpose was not clear either. They also expressed doubts and uncertainty about how the generated data would be employed.
Failure to apprehend their contributions' relevance affected users' perception of the importance of individual participation, of the contributions' results, and of the impact at the community's environmental level. We observed adverse reactions predominantly related to uncertainty about how and why to execute the tasks and about the data's destination or usage. Users also expressed discontentment and engaged in dialogues that led to questioning the purpose of Marine Debris Tracker's project. Many users stated that they were not willing to use the app again. Conversely, users are often presented with abundant information but with the wrong timing or an inappropriate format, like a long sheet of text or a tutorial video that pops up too soon in the interaction. Tips and suggestions that should help new users navigate might end up overwhelming and annoying if placed in the wrong spot or moment; patterns such as coach marks pop up automatically, decontextualized from what the user is doing at that moment. It is all about how information is presented, ensuring the message is timely, adequate, findable, clear, and quickly obtained. We found a noticeable factor that influences engagement during the first use: the importance of comprehending the purpose of the app or Cit Sci project. At the SoP level, our data reveal a tendency: when information regarding purpose (what the project is really about) is not presented at the beginning of the interaction, not clearly stated, or not reflected aesthetically, the chances of the user losing interest increase, as they become less tolerant of issues they encounter. As a result, users seemed more prone to disengage easily.
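The timing principle discussed above, help that is timely and contextual rather than pushed at launch, can be expressed as a small gating rule. This is a minimal sketch under assumed state fields (current_feature, tips_seen, busy_with_task), not a pattern taken from any of the studied apps.

```python
def should_show_coach_mark(feature: str, user_state: dict) -> bool:
    """Show a tip only when it is contextual, novel, and non-interruptive.

    The three conditions mirror the timing failures discussed above:
    decontextualized pop-ups, repeated tips, and tips that interrupt input.
    """
    return (
        feature == user_state["current_feature"]    # timely: matches what the user is doing
        and feature not in user_state["tips_seen"]  # shown at most once
        and not user_state["busy_with_task"]        # never interrupt an ongoing task
    )
```

A rule like this keeps informational support reactive to context while still being proactive at the right moment, rather than dumping all guidance on the first screen.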
Additionally, information-wise, we identified a sequence of behaviors triggered by the shortage of timely informational support: the absence of guidance or instructions, which usually results from a mental model mismatch, leads users to experience uncertainty, which in turn often leads to guessing; when unsuccessful, guessing feeds the loss of interest and, therefore, of engagement (see Figure 18). Failure to inform users how valuable their contribution is and to explicitly define their role early in the onboarding is also very likely to be responsible for users not expressing positive intentions to keep using the app. Thus, not knowing their role as participants works as a causal mechanism that lowers the chances of engagement. In many cases, this role crisis makes them question the purpose of the contributions, the project's goals, and the app's purpose. When skepticism about one's role in a project rises, it is usually the tip of the iceberg of a more significant problem that might undermine the path to engagement. Figure 18: Diagram showing a sequence of observed and reported behaviors triggered by the deficiency of timely informational support. 7. DISCUSSION In this section, we start by restating the Onboarding Model as our main contribution and examine how it offers a solution to our research problem. Then, we revisit the two secondary research questions as necessary research steps toward building the model. Next, we concisely reiterate the methodological steps chosen to approach each question and describe the findings they promoted individually. Finally, we reflect on the central themes from Study II and contextualize them considering existing research and theory. In the effort to respond to the main research question, 'How can onboarding design improve user engagement in Cit Sci mobile apps, ultimately leading to higher chances of reuse?', this work is dedicated to elaborating an onboarding model that elucidates the multiple components that operate concomitantly.
Hence, the most significant contribution of this work lies precisely in identifying the users' engagement attributes, content scope, and design features, and in tracing connections between them. Furthermore, it acknowledges that for an onboarding process to leverage engagement, it must be designed so that the technology features (system and UI) assertively address the users' side of the interaction, encompassing the motivations and attributes of engagement. The present body of work and the reflections from our analysis suggest that when an onboarding process, i.e., its design and content elements, does not address the users' engagement attributes, engagement will unlikely arise. With that, the primary goal of this dissertation culminates in developing a model that operationalizes the onboarding conceptual and structural elements. To this end, several research steps were conducted to ensure that the most relevant aspects, factors, and viewpoints of a first-time experience for a newcomer were brought to light. Organizing these steps led us to formulate the research sub-questions. For example, to respond to the first sub-question, 'Which user attributes and system characteristics interact and play a role in this initial engagement process?', we identified the need to develop onboarding concepts in the HCI field, formulate terminology, and establish the fundamental components: Statement of Purpose (SoP), User Identification (UId), Informational Scaffolding (InS), and Conversion Event (CnE). This research step fed into the construction of the model. Widely grounded in the HCI and UX communities' know-how and documentation regarding first-use strategies to optimize engagement and retention of users, especially in the crowdsourcing context, we were able to describe each element.
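As a compact summary, the four structural elements named above can be captured in a small data model. The one-line "function" summaries are paraphrases written for illustration only, not verbatim definitions from this work, and the presence check is a hypothetical helper echoing criterion 1 of the Study III review.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OnboardingElement:
    """One of the four structural elements of the onboarding process."""
    abbreviation: str
    name: str
    function: str  # paraphrased summary, not a verbatim definition

ONBOARDING_ELEMENTS = (
    OnboardingElement("SoP", "Statement of Purpose",
                      "communicate what the app is for and the goals behind it"),
    OnboardingElement("UId", "User Identification",
                      "registration or sign-in, which may be deferred or optional"),
    OnboardingElement("InS", "Informational Scaffolding",
                      "instructions, guidance, and help content for first use"),
    OnboardingElement("CnE", "Conversion Event",
                      "the action that turns a first-time visitor into a participant"),
)

def elements_present(found: set) -> bool:
    """Hypothetical check: are all four elements identifiable in an app?"""
    return {e.abbreviation for e in ONBOARDING_ELEMENTS} <= found
```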
Therefore, we built an analytical framework for the onboarding process and defined the four Onboarding Elements, the constructs that helped us understand the functions, timeline, and scope of each one. This dissertation proposed the second research sub-question, which guided us to other significant findings as part of the main contribution. We elaborated three studies to respond to (Sub-RQ2) 'Which user attributes and system characteristics interact in the initial engagement process?' Study I portrayed the current onboarding practices in the Cit Sci field, concentrating on mobile Cit Sci apps and deriving criteria from the onboarding framework previously established; although not extensive, the outcomes endorsed what we speculated. While it corroborated some standard aspects, we could identify neither a robust onboarding design or structure nor recurrent strategies employed across the apps, which reveals heterogeneity in how onboarding is designed and operated. We also could not find general onboarding strategies that were used consistently. Instead, it appeared that designers and researchers in industry were using combinations of off-the-shelf strategies or isolated techniques. We noticed some tendencies regarding particular elements, such as hiding the SoP somewhere in the app, leaving it to users to search for and find it. Additionally, most apps tended to let users explore before registering, or they allowed them to participate anonymously. In general, InS, in the form of instructions or help pages, tended to assume a reactive approach. Guidance on participation, instructions, and helpful information were not exposed or easily accessible, including the project's purpose. Although valuable as a preliminary investigation, we perceived the need to run a follow-up study that could inform how users perceived those issues and how the trends found may influence engagement and the chances of future reuse.
The findings aligned with the literature on Cit Sci projects that deal with technological products depending on remote participation. These design trends substantiate the communities' concerns regarding maintaining a stable and active crowd of participants, given that gaps in the onboarding may contribute to the dropping rates of returning users reported for several commercial apps. Although not exclusive to the Cit Sci field, failures in onboarding can be more costly to Cit Sci teams, which can rarely afford user testing, analytical tools, or professional design. An onboarding that is weak or deficient in some aspects might not affect current participants directly, but, based on our findings, it tends to fail at captivating new users. Moving on to the second study, we also aimed at answering the second sub-question and shedding light on the users' perspective when interacting with an app for the first time. In this step, we carried out user studies to closely examine how the users' attributes interact with the app's components articulated by the onboarding. We aspired to witness the first use of different apps by newcomers and to observe their reactions, doubts, difficulties, accomplishments, feelings, and opinions. A mixed-methods approach combining elements of inductive and deductive research was employed to reveal different findings. Following the qualitative strategy from Study I, in this pragmatic research we explored the onboarding concepts and collected observational data with the advantage that the participants used their own voices to describe their experiences in a neutral environment. Critical ideas were discussed during the semi-structured interview, in which feedback loops and conversation were possible.
Inductive qualitative analysis revealed a set of themes that helped characterize the users' first-time experience; beyond their personal opinions on the apps, it also enabled us to detect where barriers and disengagement points were occurring. Study III provided a glance at how industry crowdsourcing platforms with successful growth onboard their newcomers. The review identified design decisions and usability features in the GoFundMe app, and our analysis identified missed opportunities for Cit Sci apps to implement. Searching for successful practices can enrich the Cit Sci portfolio of design and technology, serving as a sort of benchmark that can only contribute to current designs. We can highlight a few onboarding features that agree with the findings discussed in the previous study, Study II. The purpose message of the GoFundMe app is unarguably present, straightforward, and clear. It is also communicated early enough in the process, so there is no room for doubt about users' roles once inside; the app's and the users' goals converge. Interestingly, the description of what users are expected to do, i.e., the phrase "Fundraise for the people and causes you care about", is presented in a form that addresses many of the users' engagement attributes: personal interest (either in helping a particular cause or person), clarity of the contribution's goals, perceived impact on the project/community, and positive affect. On the other hand, the same message stated differently, such as "Give your money here" or "Donate to our platform", would probably not sound as appealing or lead to conversion. In fact, the app does not refer to payment or monetary terms at any time; even the Apple Pay button is customized to reflect the app's appeal and to sound more subtle: "Donate with Apple Pay."

7.1. A MODEL OF USER ONBOARDING FOR CITIZEN SCIENCE

In this section, we propose a model for User Onboarding.
This model was built on the crowdsourcing and Cit Sci literature that investigates initial motivations to join communities and to volunteer, on motivation theory, and on the findings of this work. Combining engagement factors and previous models, we focused on the design aspects that play a role in the initial interaction and might impact engagement and experience; thus, we elaborated a structure that brings together the onboarding components and the attributes of these arenas. Our proposed model originated in the crowdsourcing setting, and more specifically in the Cit Sci context, and elucidates the components of users' initial interaction, identifying the attributes and characteristics on each end: the users' attributes and the system's properties. It suggests that users' engagement is a positive result of the interaction of those components and that it can be leveraged by the onboarding design. Therefore, the more the onboarding design addresses user attributes, through user research and an understanding of their needs, the earlier engagement can become effective in terms of retaining new users and improving the initial experience.

Figure 19: Model of user engagement in open collaboration crowdsourcing proposed by de Vreede et al. (2013).

Building on de Vreede et al.'s (2013) theoretical model for user engagement in crowdsourcing (Figure 19), the model we propose expands and details the elements involved in this interaction. Our model also considers personal motivation and participants' interest in the topic as important drivers of engagement. Nevertheless, we strive to acknowledge and explore the UI's elements during onboarding and the overall design that new users will face when first interacting with a new app. Previous Cit Sci studies have already revealed motivational factors of initial participation in projects in the United States. According to Rotman et al. (2014), they all stem from self-related themes: Personal interest, Self-promotion, and Self-efficacy.
Personal interest involves enjoyment in such activities, as a form of leisure usually pursued as a hobby during free time, and existing environmental or biodiversity interests. Self-efficacy is related to the sentiment volunteers experience of belonging to a scientific community and contributing effectively to scientific research. The Self-promotion factor is associated with building a reputation among other members, eventual social advancement, or career advancement within a research or academic context. De Vreede et al.'s (2013) model suggests that intrinsic, more than extrinsic, motivational factors constitute the central motivational element that leads volunteers to contribute; it operates in combination with an interest in the topic. Both drivers are moderated by the Goal Clarity of the crowdsourcing community or project, highlighting the relevance of a clear message and objective tasks for volunteers. Although de Vreede et al.'s (2013) model focuses on crowdsourcing initiatives, it also speaks directly to Cit Sci projects. The authors address exclusively three main factors that drive user engagement, and they acknowledge that the model might be incomplete and that further research is needed to reveal other factors that might help us understand the relationship between users and engagement.

Figure 20: An initial scheme describing engagement components.

Building on the model mentioned above, our research shone a light on initial interaction, offering an in-depth examination that looks beyond the elements cited to uncover a key element: the quality of the first-time user experience. Therefore, we combined findings from different studies and models to help identify the users' and the system's attributes that lead to early user engagement, which is an outcome of the users' experiences, in this case, the initial interaction with a new platform (O'Brien & McKay, 2018).
On the one hand, looking at general user engagement as a temporal process, O'Brien and Toms (2008) distinguish the stages of the engagement process: the point of engagement, a period of sustained engagement, disengagement, and possible re-engagement. Our proposal, on the other hand, considers a broader view of the initial interaction, into which the process described by O'Brien and Toms (2008) might fit and inform the resulting engagement. We refer to how the first single interaction unfolds and has the power to influence the users' relationship with the technology in the long run; engagement may or may not be among the outcomes of this interaction. From this broader perspective, the model embraces not only engagement characteristics but also how onboarding design, product/app/platform content, and users' characteristics impact adoption. The proposed model has the goal of clarifying and organizing the attributes and other factors involved in the initial interaction.

7.1.1. The Users' Attributes

Some engagement attributes identified by O'Brien and Toms (2008), plus the further items summarized by Attfield, Kazai, and Lalmas (2011) (focused attention, positive affect, aesthetics, endurability, novelty, richness and control, reputation, trust and expectation, and user context) could provide indicators of which elements would play relevant parts in an initial interaction. In a more recent study, six engagement factors were defined: aesthetic appeal, novelty, focused attention, felt involvement, usability, and endurability (O'Brien & Toms, 2010). Later, the authors collapsed a few aspects and showed that the aesthetic appeal and novelty of the interface predicted focused attention and felt involvement, which, in turn, predicted usability and endurability (O'Brien & McKay, 2018).
However, in this dissertation and in the model we describe, the six engagement factors were adopted as the baseline criteria for investigating how they affect onboarding. The goal was to investigate deeply how different attributes, designs, and users' needs act and interact during the first use of an app. Therefore, we sought to deconstruct the first-use process, and, consequently, the engagement process, to elucidate how they intersect and develop. The aforementioned works supported our understanding of the complexity and the numerous aspects that play a role during onboarding. Figure 20 illustrates a preliminary attempt to visually organize our ideas and list the components that seemed important for the onboarding. In the User box, we grouped attributes into three categories: Personal, Task-related, and Geographic & Social. As users may come from various contexts and backgrounds, we listed the main attributes that might affect their first-time experience when interacting with a UI. In the studies we carried out and report in the present work, we selected the attributes we assumed, based on previous research and literature, to be the most sensitive for assessing initial experiences and adequate to the scope of this work. In the Personal Attributes group, we included demographic information (viz., age, gender, and education). Although it might be reasonable to consider that all those aspects influence users' experience when interacting with a Cit Sci mobile app, we understand that the audience of most of these projects is diverse, since most apps are available for any user to download and start using. That is one of the principles of science crowdsourcing: to have as many participants as possible contributing significantly to the project. Other personal attributes included in this group relate to attitude towards technology and lifestyle, which we also believe might play a role in adopting a crowdsourcing project to contribute to.
Since it is not an objective of this research to conduct comparisons and delve into how demographics affect users' experiences, we consider the task-related attributes to be the most meaningful ones, ones that might overlap with or even synthesize personality traits and users' backgrounds. How each user attribute interacts with a system aspect can be seen as a touchpoint. Revealing those touchpoints means comprehending where users are coming from in terms of goals and motivations, context of use, and other attributes that drive their interaction. As an outcome, designers can modify, measure, and rethink improvements to the design towards a positive effect on engagement and participation. The result of the first interaction between the users' attributes and the system is the first-time user experience. We hypothesized that when this interaction is positive and fruitful, early user engagement can be leveraged.

7.1.2. The Model

There are two critical reasons for establishing a successful onboarding for Cit Sci online initiatives besides offering a pleasant and positive initial experience. First, it speaks directly to engagement scaffolding and, consequently, to reaching critical mass and retaining volunteers, an enormous concern for any crowdsourcing platform. Second, to acquire high-quality and valuable data for the projects, onboarding offers an opportunity to guide and inform how to collect and report observations and contributions in general; brief instructions can prepare volunteers to collect and submit valuable data and understand its impact. Cit Sci initiatives are a type of crowdsourcing, and, as seen in the studies' sections, they have different needs from other crowdsourcing platforms.
To mention a few: they rarely rely on financial gains to reward users; users are performing entirely voluntary work; there is unlikely to be a professional design team building the technology; and there are no commercial transactions or any other explicit personal gains involved. For these and other reasons, Cit Sci mobile apps demand a well-thought-out design. The proposed model has one main objective: to operationalize the onboarding's interactive components and constructs into a deployable process that benefits Cit Sci and possibly other crowdsourcing platforms. A large portion of this dissertation provided the theoretical and practical basis for structuring this model. To propose it, we elucidated the many user attributes and technical characteristics that perform different roles in the initial interaction leading to engaged users; the model is therefore a response to our second research question. The model, presented as a diagram (Figure 21), is divided into two parts. First, it depicts the minimal onboarding structure proposed to achieve the onboarding goals, as explained in Section 2.5, and it features the scope and content that each element is to perform, exposing the functions ascribed to each one.

Figure 21: The Model for the User Onboarding (part one, above, and part two, bottom).

The second part of the model describes four structural dimensions that the onboarding elements entail, starting from the scope and content layer, which shows at a high level "what" each part of the onboarding should present. The other three layers distinguish different dimensions of the onboarding, breaking it down into design components that must be consolidated and operationalized to succeed.
The second part of the diagram (Figure 21) presents these design layers, analogous to Garrett's (2011) Elements of User Experience diagram, ordered from the more conceptual and abstract to the more visual and superficial: 1) Scope and Content; 2) Typical Pain Points; 3) Engagement attributes and emotional aspects to be addressed; 4) UI Design Patterns. Therefore, our model is presented visually in the diagrams (Figure 21) and in more detail in Table 18. Each construct or element has a defined Scope and Content that should be covered. The SoP, for example, is a construct of the onboarding that needs to achieve certain goals to be effective during the interaction: inform the platform's purpose, how it works, the role of the participants, and whether there will be benefits or any compensation; in addition, it needs to communicate the contribution's purpose for the project and its impact on the overall goal of the platform. Such attributions were assigned to each construct, as the descriptive Table 18 shows.

Table 18: Onboarding Model detailed by layers and elements.

7.1.3. Seven Drivers of Newcomer Engagement

The onboarding model proposed in this dissertation consists of two main parts: first, the four structural components, specially designed for Cit Sci mobile apps; and second, the design layers that sustain and inform that structure. Each onboarding element is thus built on four layers that guide designers and Cit Sci teams in planning and designing their processes. Since this model was elaborated based on Cit Sci mobile apps and intended to help other Cit Sci teams, most of the model's scope and format fits this specific type of crowdsourcing platform. However, to develop a broader set of resources and recommendations for varied crowdsourcing initiatives, we elaborated on the seven overarching themes identified in Study II (Section 5.8.1), allowing us to assemble more general recommendations that will serve a wider audience.
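As a purely illustrative aside (our own sketch, not an artifact of this dissertation), the element-by-layer structure described above, four onboarding elements each specified across four design layers, could be encoded as a simple audit checklist that a Cit Sci team might use when reviewing an app's onboarding. All class and field names are assumptions:

```python
from dataclasses import dataclass, field

# The four structural onboarding elements defined in this work.
ELEMENTS = ("SoP", "UId", "InS", "CnE")

# The four design layers the model ascribes to each element, ordered
# from the more conceptual (scope) to the more concrete (UI patterns).
LAYERS = (
    "scope_and_content",
    "typical_pain_points",
    "engagement_attributes",
    "ui_design_patterns",
)

@dataclass
class ElementAudit:
    """Checklist for one onboarding element across the model's four layers."""
    element: str
    layers: dict = field(default_factory=lambda: {layer: [] for layer in LAYERS})

    def note(self, layer: str, item: str) -> None:
        """Record a finding or design decision under a given layer."""
        if layer not in LAYERS:
            raise ValueError(f"unknown layer: {layer}")
        self.layers[layer].append(item)

# Example: auditing the Statement of Purpose of a hypothetical app.
sop = ElementAudit("SoP")
sop.note("scope_and_content", "inform the platform's purpose and the participants' role")
sop.note("typical_pain_points", "purpose hidden in secondary screens")
sop.note("engagement_attributes", "clarity of goals; perceived impact on the project")
sop.note("ui_design_patterns", "welcome screen carrying the mission statement")
```

Filling one `ElementAudit` per element in `ELEMENTS` would yield a compact, comparable record of how an app covers each layer of the model.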
The seven drivers are defined as design recommendations for onboarding that can be adopted by virtually any crowdsourcing app: 1) Avoid the use of technical language and jargon; 2) Offer information on the app's mechanics and provide guidance towards task accomplishment; 3) Demonstrate and emphasize the user's role and their contributions' purpose within the project; 4) Be transparent about the app's goals, results, and impact on the world; 5) Clarify any benefits or rewards right from the beginning, even if they are not tangible or immediate; 6) Consider the UI's visual quality a decisive interest factor and design it according to the intended audience; 7) Use visual cues to enhance usability and reduce uncertainty. Different types of crowdsourcing platforms can readily implement the seven recommendations, as they work as advice and are not necessarily attached to any preconceived format. The onboarding structure described in the proposed model at the beginning of this chapter can also be adapted to serve further initiatives. While CnE and UId are usually implemented as well-defined steps during the first-time use, the SoP and InS work as constructs permeating the process. The SoP is an essential element in the onboarding structure. It is in consonance with two of the engagement drivers cited above: number 3) Demonstrate and emphasize the user's role and their contributions' purpose within the project; and number 4) Be transparent about the app's goals, results, and impact on the world. They share the same content and scope within the onboarding, but outside the Cit Sci field, the other design layers (Table 18) might require adaptation and further analysis. In other words, generalizing this entire model to other domains, for example crowdfunding, might not be as effective, because participants' motivations, expectations, and further engagement attributes differ from those of Cit Sci volunteers.

8. CONCLUSIONS, LIMITATIONS, AND FUTURE WORK

This research strives to learn how design aspects and users' attributes operate during the initial interaction process with new technology and to investigate the potential impacts on user engagement. The first part of this dissertation provided a definition of onboarding and its components resulting from the literature review, starting the conversation by addressing the current meanings and definitions; additionally, we developed the terminology necessary to study this topic. Study I helps partially respond to the second sub-question while providing an overview of current practices and revealing significant design aspects present in Cit Sci apps. In addition, it gives insight into the design side of the interaction and into how teams have built their onboardings, consciously or not. Study III also consists of a similar analysis, with the addition of the engagement attributes perspective. Taking a different approach, Study II reflects users' experiences with different apps, revealing critical aspects of the onboarding and accounting for significant findings. We concluded that informational support, rather than being a single step of the onboarding, acts as a construct that permeates the entire process. The same is true of the statement of purpose, whose absence tends to undermine engagement. The limitations of this work include that it is restricted to a certain type of app, and our epistemological perspective does not aim at generalizations, i.e., at developing a step-by-step recipe for deploying a successful onboarding design that will increase user adoption for any type of product or service. Marketing professionals and product and design teams often seek such an encompassing and guaranteed formula, one that provides them safe ground to design and to adapt "golden rules" to their reality.
Although this work is intended to be pragmatic and helpful for designers in general, we must acknowledge the depth and focus required of scientific research. The limitations therefore lie in many aspects, such as the number of apps we analyzed during Study I. We also focused on one category of Cit Sci initiative so that it would provide parameters for comparison, but our discoveries could benefit from looking at more types of Cit Sci apps. In both Studies I and II, a larger sample of apps would likely reveal more commercial-grade, high-quality designs: apps developed by profit-oriented entities or by teams with large grants or financial resources that allow scientific teams to include designers, testers, and so on, as part of their development process. Looking at the scientific literature from the CHI, CSCW, and Cit Sci communities, i.e., papers on Cit Sci platforms and app case studies, the lack of professional designers involved in the UI design process is conspicuous, and only a small number of apps born in universities have help from a specialist or practitioner. This work recognizes that academic research has constraints on the amount of time and resources available for such studies. Cit Sci apps are frequently a means for research teams to accomplish other tasks and goals, e.g., the Shrimp Black Gill Tracker mentioned earlier. That app was part of a Sea Grant project to develop a Cit Sci approach to measuring the prevalence and onset of shrimp black gill disease; the app's design was neither expected nor anticipated to be efficient or to comply with usability best practices, let alone to be created by a UI or UX designer. With that said, this research is limited to reporting a portrait of a narrow sample of the Cit Sci apps available on the web.
Besides the size and type of the app sample, our recommendations on the important aspects an onboarding design should entail were also derived from a sample of users that, although heterogeneous, had backgrounds and experiences that hardly reflect the exact characteristics of existing users. However, limitations and gaps offer opportunities for future researchers to piece together other learnings and inquiries in new contexts. These limitations suggest that there is still important research to be done and further investigations to elucidate other mechanisms present during first-time interactions. It is still unclear how people with and without previous knowledge of, and personal interest in, an app's topic would react to different types of onboarding patterns (e.g., tutorials, walkthroughs, or guided tours). It is important to learn what features are dispensable for experienced users and how the onboarding flow can serve to collect such information (as in recommendation systems). Other inquiries concern whether our model applies to a diverse population of users and works for people of different cultures, since reward and thanking actions might be interpreted differently, or be differently necessary, in certain social or economic contexts. Further examination of a larger range of commercial apps could also bring new perspectives on how teams use design features and address people's needs. We also understand that the proposed model might apply differently to different typologies of Cit Sci initiatives or platforms; for example, to platforms in which the work is done entirely online, as occurs in some of the Zooniverse projects, rather than to Cit Sci in which the data are collected in the field. Looking at these other typologies, we found projects in which users' participation happened through a game (e.g., Foldit and NASA's NeMO-Net coral classification). Onboarding users for such activities may also require different engagement components.
Another suggested direction is looking at onboarding for different devices (e.g., laptops and tablets), on which a different audience and different tasks can be found. An important aspect of Cit Sci success is the social component and the opportunity to expand the network of potential participants who arrive at platforms via others' invitations and content sharing. As happens in crowdfunding, many users get to the platforms not by installing an app and searching for causes, but by following family's and friends' requests for support through a link that takes them to the target pages. It is imperative to look at onboarding design not as a preconceived sequence of steps, because the more pervasive technology becomes, the more entryways and shortcuts will be inserted into our daily tasks, devices, and routines. Not so long ago, software came on disks or DVDs to be unpacked, installed, and launched in a certain order, usually supported by printed instructions. Mobile apps are now becoming so essential and ubiquitous that we flow seamlessly from daily tasks to social media, from work messages to online classes. Transitioning between all those contexts, services, and roles, faster and more easily than ever, raises the question of how future onboarding designs should approach users who spend less time, effort, and focused attention on learning and using a new platform. As a result, the HCI community can learn how those relationships work and better design such interactions. The proposed onboarding model, the definitions of its elements, and the description of how these different components are related offer a robust foundation for practitioners to improve the onboarding experiences of thousands of Cit Sci participants, while inviting a glance at exciting future research into understanding the deeper nature of onboarding for different kinds of users in different types of human activity.
APPENDIX A

DATA ANALYSIS EXAMPLE FROM STUDY I

TABLE A1: mPING app
  Presence (SoP / UId / InS / CnE): No / Yes / Yes / Yes
  Patterns: None / Deferred registration / None / None
  Gen. Approach: Not identified
  Specifics:
    Part of a flow? No / No / No / No
    Mandatory? Yes
    Guided? No
    N. of Tasks: 3, 2

TABLE A2: Marine Debris app
  Presence (SoP / UId / InS / CnE): Yes / Yes / Yes / Yes
  Patterns: None / Deferred and optional registration / None / None
  Gen. Approach: Not identified
  Specifics:
    Part of a flow? No / No / No / No
    Mandatory? Yes
    Guided? No
    N. of Tasks: 3, 2

TABLE A3: SatCam app
  Presence (SoP / UId / InS / CnE): No / Yes / No / Yes
  Patterns: - / Deferred signup; in-app email verification / - / Call-to-Action button
  Gen. Approach: -
  Specifics:
    Part of a flow? No / No / No / No
    Mandatory? Yes
    Guided? Yes
    N. of Tasks: 5, 2

TABLE A4: MISIN (Midwest Invasive Species Network) app
  Presence (SoP / UId / InS / CnE): Yes / Yes / Yes / Yes
  Patterns: - / Deferred signup; in-app email verification / - / -
  Gen. Approach: -
  Specifics:
    Part of a flow? No / No / No / No
    Mandatory? Yes
    Guided? No
    N. of Tasks: 2, 4

APPENDIX B

POST-QUESTIONNAIRE RESULTS

MPING POST-QUESTIONNAIRE ANALYSIS

Since we were looking at the onboarding elements in this study, fourteen questions were elaborated, as described in Section 5.5, comprising the five constructs: 1) Purpose and goals, indicating the SoP construct; 2) Registration, denoting the UId construct; 3) Support and guidance, representing the InS construct; 4) Efficacy, meaning the CnE construct; and 5) Overall experience and retention. To test the hypothesis that these fourteen items collapsed uniquely into the five concepts, a series of bivariate correlations and Cronbach's Alpha tests were computed. Ten participants completed the survey; all respondents completed each survey item. Construct 1 was theorized to contain two survey items: 1) The purpose of the project was clear, and 2) I'm excited about contributing to this project. Since only two items comprised this concept, a bivariate correlation test was employed.
The two items were not significantly correlated with one another (r = -0.14, p = 0.71). Thus, only the first item was retained to represent the concept. Construct 2 comprised two items: 1) I liked being able to contribute without registering, and 2) I would not mind registering to track my participation. Item 2 was reverse coded. Since only two items comprised this concept, a bivariate correlation test was employed. The two items were significantly correlated with one another (r = 0.85, p = 0.002). Construct 3 comprised three items: 1) It was easy to use the app, 2) It was clear how I could contribute, and 3) Instructions were helpful. The Cronbach's Alpha test value was 0.81, indicating the items represent a distinct and unique concept. Construct 4 comprised three items: 1) The importance of my participation is clear, 2) Contributing was a lot of work, and 3) Contributing was easy. The second item was reverse coded. The three items did not hang together well (Cronbach's Alpha = 0.05); however, when the first item was removed, the Cronbach's Alpha test value improved to 0.79. Thus, it was deemed that these two items hang together reliably well and represent a distinct and unique concept. Finally, Construct 5 comprised four items: 1) I'm interested in using the app again, 2) I am interested in receiving updates about the app, 3) I might join other Cit Sci platforms, and 4) I might use the app to learn what others are saying about the weather. The four items hung together well (Cronbach's Alpha = 0.70), indicating the items represent a distinct and unique concept. The open-ended question "In my opinion, the main drawback of this app is..." revealed users' dissatisfaction with two main points:
- Project and contribution purpose: Even though subjects were able to quickly grasp the primary feature of the app, reporting the weather, the purpose of the project as a whole, and therefore the relevance of their participation, remained unclear: "The lack of context in terms of the research project... What am I contributing to exactly? What's the impact?" [User 08].
- Benefits to the user: The lack of a rewarding mechanism acknowledging their contribution turned out to be an almost unanimous complaint: "Who am I helping? I don't see any benefit for me... There's no incentive to stay engaged" [User 04]. The absence of benefits to users, including statements like "[this app is] nothing fun" [User 05], represented an obstacle to engaging with and finding the app compelling.

During the sessions, as soon as participants had a glimpse of the website homepage and launched the app, many would compare mPING to the crowdsourced GPS navigation app Waze, where drivers report traffic events around them and share traffic and road conditions. However, the similarities seem limited to that. Participants would immediately show a loss of interest when trying, without success, to use the map to find out what other users were reporting. Even in cases where users discovered the "View Reports" link, they would find it difficult to navigate and locate others' activities, and just a few were able to find the button for the label list and decode the tiny icons on the screen.

EBIRD POST-QUESTIONNAIRE ANALYSIS

Eight participants completed the survey; there were between six and eight responses for each survey item. The eBird app users answered seventeen questions. To test the hypothesis that these seventeen items collapsed uniquely into the five constructs, a series of bivariate correlations and Cronbach's Alpha tests were computed.
Construct 1 was theorized to contain two survey items: 1) The purpose of the Cit Sci project was clear, and 2) I've gotten excited about the opportunity of contributing to this Cit Sci project. Since only two items comprised this concept, a bivariate correlation test was employed. The two items were significantly and positively correlated with one another (r = 0.97, p < 0.001). Construct 2 comprised five items: 1) I consider being asked to register a nuisance, 2) I don't mind having to register to place my contribution, 3) I feel like the registration process was a barrier, 4) Registration was time consuming, and 5) I don't see a reason for registering. Item 2 was reverse coded. The Cronbach's Alpha value was 0.56, indicating the items do not reliably comprise a distinct concept. However, when the reverse coded item was removed, the Cronbach's Alpha value improved to 0.84. Thus, that item was removed from the concept. Construct 3 comprised three items: 1) It was easy to use the app, 2) Instructions were helpful, and 3) It was clear how I could contribute. The Cronbach's Alpha value was 0.68, just shy of the 0.70 cutoff for being considered a reliably distinct concept. Given the small sample size and the Alpha value, it was deemed that these three items hang together reliably well and represent the concept well. Construct 4 comprised three items: 1) The importance of my participation is clear, 2) Contributing was a lot of work, and 3) Contributing was easy. The second item was reverse coded. The three items hung together well (Cronbach's Alpha = 0.72). Thus, the items represent a distinct and unique concept. Finally, Construct 5 comprised three items: 1) I'm interested in using the app again, 2) I am interested in receiving updates about the app, and 3) I might join other Cit Sci platforms. The three items hung together well (Cronbach's Alpha = 0.83). Thus, the items represent a distinct and unique concept.
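Once a construct's items pass these reliability checks, each respondent's answers on the retained items are averaged into a single construct score that keeps the original 5-point scale (the approach used later in this chapter to summarize the constructs' means and standard deviations). A minimal Python sketch with made-up responses and item names, not the study's data:

```python
from statistics import mean, stdev

def construct_scores(responses, item_ids):
    """Average each respondent's answers over a construct's retained items,
    so the composite score stays on the original 1-5 Likert scale."""
    return [mean(r[i] for i in item_ids) for r in responses]

# Hypothetical 1-5 ratings for one three-item construct (illustrative only).
responses = [
    {"ease": 5, "clarity": 4, "instructions": 4},
    {"ease": 3, "clarity": 3, "instructions": 2},
    {"ease": 4, "clarity": 5, "instructions": 4},
]
scores = construct_scores(responses, ["ease", "clarity", "instructions"])
print(round(mean(scores), 2), round(stdev(scores), 2))  # prints: 3.78 0.96
```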
The open-ended question "In my opinion, the main drawback of this app is..." revealed users' dissatisfaction with these two main points:
• Lack of pictures: As unveiled and discussed previously in the interviews, users were eager to encounter nature photographs in the app and to submit pictures of birds as part of their contributions. Users indicated disappointment: "I was hoping to be asked to take pictures or record bird songs." [User #01]. The absence of imagery in the app is directly connected to the Aesthetic theme and to the next issue.
• Impression that expertise is required: Users perceived the UI design as more professional and refined, and, in their view, that can be a two-edged sword. A visually attractive and organized UI can benefit users in various ways and facilitate usability. On the other hand, as many users expressed, it can be perceived as a less accessible product and a more selective community. Investing in visual design can also be associated with a more professionalized platform that establishes a certain level of quality and curation expected from participants.
• The lack of pictures is also a design decision that affects the visual and aesthetic language of the UI, and therefore the audience, which might be more experienced birders who do not necessarily need pictures to identify and report birds.

MARINE DEBRIS TRACKER APP POST-QUESTIONNAIRE ANALYSIS

Eight participants completed the survey; there were between six and eight responses for each survey item. The Marine Debris Tracker app users answered fourteen questions. As with the other apps, the items were measured via a Likert-type scale, and the fourteen questions comprised the same five constructs. To test the hypothesis that these fourteen items collapsed uniquely into the five constructs, a series of bivariate correlations and Cronbach's Alpha tests were computed.
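Several constructs in this chapter only become reliable after dropping one item. That "Alpha if item deleted" check can be sketched as follows; the scores are hypothetical, chosen so that one item visibly runs against the other two:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Raw Cronbach's Alpha; `items` holds one score list per survey item."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - sum(pvariance(s) for s in items) / pvariance(totals))

def alpha_if_item_deleted(items):
    """Recompute Alpha with each item dropped in turn; a large jump for one
    item suggests removing it, as done for several constructs here."""
    return [cronbach_alpha(items[:i] + items[i + 1:]) for i in range(len(items))]

# Made-up scores: items 1 and 2 agree, item 3 runs against them.
items = [[1, 2, 3, 4], [1, 2, 3, 4], [4, 1, 3, 2]]
print([round(a, 2) for a in alpha_if_item_deleted(items)])  # dropping item 3 yields alpha = 1.0
```

Dropping either of the two consistent items leaves the scale unreliable (a negative Alpha), while dropping the inconsistent item makes the remaining pair perfectly reliable — the same pattern reported above for the registration constructs.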
Construct 1 was theorized to contain two survey items: 1) The purpose of the project was clear, and 2) I'm excited about contributing to this project. Since only two items comprised this concept, a bivariate correlation test was employed. The two items were not significantly correlated with one another (r = 0.02, p = 0.94). Thus, only the first item was retained to represent the concept. Construct 2 initially comprised three items: 1) I liked being able to contribute without registering, 2) I would not mind registering to track my participation, and 3) Mandatory registration would be annoying. Items 2 and 3 were reverse coded. The Cronbach's Alpha value was -0.26, indicating the items do not reliably comprise a distinct concept. However, when the third item was removed, the Cronbach's Alpha value improved to 0.68, just shy of the 0.70 cutoff for being considered a reliably distinct concept. Given the small sample size and the Alpha value, it was deemed that these two items hang together reliably well. Construct 3 comprised three items: 1) It was easy to use the app, 2) It was clear how I could contribute, and 3) Instructions were missing. Item 3 was reverse coded. The Cronbach's Alpha value was 0.72, indicating the items represent a distinct and unique concept. Construct 4 comprised three items: 1) The importance of my participation is clear, 2) Contributing was a lot of work, and 3) Contributing was easy. The second item was reverse coded. The three items hung together well (Cronbach's Alpha = 0.81). Thus, the items represent a distinct and unique concept. Finally, Construct 5 comprised three items: 1) I'm interested in using the app again, 2) I am interested in receiving updates about the app, and 3) I might join other Cit Sci platforms. The three items did not hang together well (Cronbach's Alpha = -0.70), and removing any of the three items did not improve the Alpha.
Thus, only the first item was retained for the concept.

SATCAM APP POST-QUESTIONNAIRE ANALYSIS

The SatCam app users answered sixteen questions. The items were measured via a Likert-type scale, with 1 representing Strongly Disagree and 5 representing Strongly Agree. Nine participants completed the survey; there were between six and nine responses for each survey item. To test the hypothesis that fourteen of these questions collapsed uniquely into the five constructs (SoP, UId, InS, CnE), a series of bivariate correlations and Cronbach's Alpha tests were computed. Construct 1 was theorized to contain two survey items: 1) The purpose of the project was clear, and 2) I'm excited about contributing to this project. Since only two items comprised this concept, a bivariate correlation test was employed. The two items were significantly correlated with one another (r = 0.97, p < 0.001). Construct 2 comprised five items: 1) I consider upfront registration a nuisance, 2) I don't mind registering to track my contribution, 3) Registration was a barrier, 4) Registration was time-consuming, and 5) I don't see a reason for registering. The second item was reverse coded. The Cronbach's Alpha value was 0.91, indicating the items represent a distinct and unique concept. Construct 3 comprised three items: 1) It was easy to use the app, 2) It was clear how I could contribute, and 3) Instructions were helpful. The Cronbach's Alpha value was 0.93, indicating the items represent a distinct and unique concept. Construct 4 comprised three items: 1) The importance of my participation is clear, 2) Contributing was a lot of work, and 3) Contributing was easy. The second item was reverse coded. The three items were close to the 0.70 cutoff value for reliability (Cronbach's Alpha = 0.65). Given the small sample size, it was deemed that these three items hang together reliably well and that they represent a distinct and unique concept.
Finally, Construct 5 comprised three items: 1) I'm interested in using the app again, 2) I am interested in receiving updates about the app, and 3) I might join other Cit Sci platforms. The three items were close to the 0.70 cutoff value for reliability (Cronbach's Alpha = 0.63). Given the small sample size, it was deemed that these three items hang together reliably well and that they represent a distinct and unique concept. The relevant items were averaged into their various constructs; because the items were averaged, the 5-point scale was maintained. The following table displays the means and standard deviations of the five concepts, as well as what each mean represents for its concept.

APPENDIX C

CODES GENERATED BY STUDY II

Most relevant codes generated across the four apps (mPING, eBird, Marine Debris Tracker, and SatCam):
• Information overload.
• UI's aesthetic: professional look.
• User's role and contribution's purpose.
• Effort to register.
• Lack of directions.
• Early disengagement.
• Positive first impression.
• Uncertainty.
• Unclear CTAs.
• Technical problems.
• Questioning data usage, results, and impact.
• UI aesthetics.
• Imagery importance.
• Lack of clarity on the goals.
• Lack of visual cues.
• Effort vs. learning the mechanics.
• Recognition and rewards.
• Lack of guidance.
• Technical information overload.
• Variety and organization of items as positive points.
• Lack of personal interest.
• Contribution doubts.
• Uncertainty about features.
• Mechanics.
• Clarity of the app's and project's goals.
• Sciency impression.
• Doubts on the relevance and outcomes of the project.
• Mental models and the app's framework.
• Confusing navigation.
• Scientific character.
• Lack of interactivity.
• Potential knowledge gain.
• Doubts on the project's relevance and outcomes.
• Guessing.
• Lack of clarity regarding purposes.
• Mental model gap: rewards.
• Social features: positive.
• Expectations: lack of info organization.
• Project's social aspect.
• Usability issues: navigation.
• Disengagement.
• Data usefulness.
• Low findability.
• Lack of guidance: loss of interest.
• Benefits and rewards.
• Technical terminology: mechanics.
• Boring, uninteresting.
• Self-doubt.
• Items organization and variety: helpful.
• App's purpose.
• Effort and commitment.
• Time consuming.
• Lack of clarity.
• Disappointment.
• Relevance.
• Technical problems.
• Personal interest and topic appreciation.
• Results and impact of contributions.

BIBLIOGRAPHY

Adams, A., Lunt, P., & Cairns, P. (2008). A qualitative approach to HCI research. In P. Cairns & A. Cox (Eds.), Research Methods for Human-Computer Interaction (pp. 138-157). Cambridge University Press. http://oro.open.ac.uk/11911/
Alender, B. (2016). Understanding volunteer motivations to participate in citizen science projects: A deeper look at water quality monitoring. Journal of Science Communication, 15(3).
Aristeidou, M., Scanlon, E., & Sharples, M. (2017). Profiles of engagement in online communities of citizen science participation. Computers in Human Behavior, 74, 246-256. https://doi.org/10.1016/J.CHB.2017.04.044
Attfield, S., Kazai, G., & Lalmas, M. (2011). Towards a science of user engagement (Position paper). WSDM Workshop on User Modelling for Web Applications.
Balboni, K. (2019). We categorized over 500 user onboarding experiences into 8 UI/UX patterns. Appcues Blog. www.appcues.com/blog/user-onboarding-ui-ux-patterns
Balestra, M., Cheshire, C., Arazy, O., & Nov, O. (2017). Investigating the motivational paths of peer production newcomers. CHI '17: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 6381-6385. https://doi.org/10.1145/3025453.3026057
Becker, H. S. (1996). The epistemology of qualitative research. Ethnography and Human Development: Context and Meaning in Social Inquiry, 1(3), 53-71.
https://books.google.com/books/about/Ethnography_and_Human_Development.html?hl=pt-BR&id=ItxXzvwlJVUC
Beenen, G., Ling, K., Wang, X., Chang, K., Frankowski, D., Resnick, P., & Kraut, R. E. (2004). Using social psychology to motivate contributions to online communities. Computer Supported Cooperative Work, 212-221. https://doi.org/10.1111/j.1083-6101.2005.tb00273.x
Bødker, S. (2006). When second wave HCI meets third wave challenges. Proceedings of the 4th Nordic Conference on Human-Computer Interaction: Changing Roles, NordiCHI '06. https://doi.org/10.1145/1182475
Bødker, S. (2015). Third-wave HCI, 10 years later: Participation and sharing. Interactions, 22(5), 24-31. https://doi.org/10.1145/2804405
Bonney, R., Ballard, H. L., Jordan, R., McCallie, E., Phillips, T., Shirk, J. L., & Wilderman, C. C. (2009). Public participation in scientific research: Defining the field and assessing its potential for informal science education. CAISE Inquiry Group Report.
Brabham, D. C. (2010). Moving the crowd at Threadless. Information, Communication & Society, 13(8), 1122-1145. https://doi.org/10.1080/13691181003624090
Braun, V., & Clarke, V. (2020). Can I use TA? Should I use TA? Should I not use TA? Comparing reflexive thematic analysis and other pattern-based qualitative analytic approaches. Counselling and Psychotherapy Research, 21(1), 37-47. https://doi.org/10.1002/CAPR.12360
Braun, V., & Clarke, V. (2021). Conceptual and design thinking for thematic analysis. Qualitative Psychology. Advance online publication. https://doi.org/10.1037/QUP0000196
Brooke, J. (1996). SUS: A quick and dirty usability scale. In Usability Evaluation in Industry (pp. 4-7). Taylor & Francis.
Cappa, F., Laut, J., Porfiri, M., & Giustiniano, L. (2018). Bring them aboard: Rewarding participation in technology-mediated citizen science projects. Computers in Human Behavior, 89, 246-257. https://doi.org/10.1016/j.chb.2018.08.017
Cohn, J. P. (2008).
Citizen science: Can volunteers do real research? BioScience, 58(3), 192. https://doi.org/10.1641/B580303
Collier, A. (1994). Critical Realism: An Introduction to Roy Bhaskar's Philosophy. Verso Books.
Cook, M. (2015). UX flows: How to turn onboarding into an amazing first date with your user. DTelepathy Blog. www.dtelepathy.com/blog/design/ux-flows-onboarding
Cox, A. L., Gould, S., Cecchinato, M., Iacovides, I., & Renfree, I. (2016). Design frictions for mindful interactions: The case for microboundaries. Proceedings of CHI 2016. https://doi.org/10.1145/2851581.2892410
Crall, A., Kosmala, M., Cheng, R., Brier, J., Cavalier, D., Henderson, S., & Richardson, A. D. (2017). Volunteer recruitment and retention in online citizen science projects using marketing strategies: Lessons from Season Spotter. Journal of Science Communication, 16(1).
Crowston, K., & Fagnot, I. (2008). The motivational arc of massive virtual collaboration. Proceedings of the IFIP WG 9.5 Working Conference on Virtuality and Society: Massive Virtual Communities.
Cunha, D. G. F., et al. (2017). Citizen science participation in research in the environmental sciences: Key factors related to projects' success and longevity. Anais da Academia Brasileira de Ciências, 89(3, Suppl.), 2229-2245. https://doi.org/10.1590/0001-3765201720160548
de Vreede, T., Nguyen, C., de Vreede, G.-J., Boughzala, I., Oh, O., & Reiter-Palmon, R. (2013). A theoretical model of user engagement in crowdsourcing. Collaboration and Technology, 94-109.
DeCarlo, M. (2018). Design and causality: Types of research. In Scientific Inquiry in Social Work. Open Social Work Education. https://scientificinquiryinsocialwork.pressbooks.com/
Deci, E. L. (1975). Intrinsic Motivation. Springer US. https://doi.org/10.1007/978-1-4613-4446-9
Dickinson, J. L., Shirk, J. L., Bonter, D., Bonney, R., Crain, R. L., Martin, J., Phillips, T., & Purcell, K. (2012).
The current state of citizen science as a tool for ecological research and public engagement. Frontiers in Ecology and the Environment, 10(6), 291-297. https://doi.org/10.1890/110236
Doherty, K., & Doherty, G. (2019). Engagement in HCI: Conception, theory, and measurement. ACM Computing Surveys, 51(5). https://doi.org/10.1145/3234149
Drenner, S., Sen, S., & Terveen, L. (2008). Crafting the initial user experience to achieve community goals. Proceedings of the 2008 ACM Conference on Recommender Systems, 187-194. https://doi.org/10.1145/1454008.1454039
Eveleigh, A., Jennett, C., Blandford, A., Brohan, P., & Cox, A. L. (2014). Designing for dabblers and deterring drop-outs in citizen science. Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems, CHI '14, 2985-2994. https://doi.org/10.1145/2556288.2557262
Fagerholm, F., Johnson, P., Guinea, A. S., Borenstein, J., & Munch, J. (2013). Onboarding in open source software projects: A preliminary analysis. IEEE 8th International Conference on Global Software Engineering Workshops, 5-10. https://doi.org/10.1109/ICGSEW.2013.8
Feast, L. (2010, July). Epistemological positions informing theories of design research: Implications for the design discipline and design practice. Design and Complexity, DRS International Conference. https://dl.designresearchsociety.org/drs-conference-papers/drs2010/researchpapers/40
Fogg, B. J. (2002). Persuasive technology. Ubiquity, December, 2. https://doi.org/10.1145/764008.763957
Fogg, B. J. (2009). A behavior model for persuasive design. Proceedings of the 4th International Conference on Persuasive Technology, Persuasive '09. https://doi.org/10.1145/1541948.1541999
Frauenberger, C. (2016). Critical realist HCI. Conference on Human Factors in Computing Systems, CHI 2016, 341-351. https://doi.org/10.1145/2851581.2892569
Fryer, T. (2021). A short guide to ontology and epistemology: Why everyone should be a critical realist.
Palgrave Macmillan. https://doi.org/10.1057/S41307-021-00232-2
Gasparini, A. (2015). Perspective and use of empathy in design thinking. ACHI, The Eighth International Conference on Advances in Computer-Human Interactions, 49-54.
Geiger, D., Seedorf, S., Nickerson, R., & Schader, M. (2011). Managing the crowd: Towards a taxonomy of crowdsourcing processes. Proceedings of the 17th Americas Conference on Information Systems, Detroit, Michigan, 4-7 August 2011, 1-11.
Goldman, J., Shilton, K., Burke, J., Estrin, D., Hansen, M., Ramanathan, N., Reddy, S., Samanta, V., & Srivastava, M. (2009). Participatory sensing: A citizen-powered approach to illuminating the patterns that shape our world. Foresight & Governance Project, White Paper, 1-15.
Google. (2014). Onboarding. Material Design. https://material.io/design/communication/onboarding
Gupta, K. (2016). Checklist of techniques for effective mobile user onboarding. UXCam Blog. http://blog.uxcam.com/the-21-step-checklist-for-bulletproof-mobile-user-onboarding/
Haklay, M. (2013). Citizen science and volunteered geographic information: Overview and typology of participation. In D. Sui, S. Elwood, & M. Goodchild (Eds.), Crowdsourcing Geographic Knowledge: Volunteered Geographic Information (VGI) in Theory and Practice (pp. 105-122). Springer Netherlands. https://doi.org/10.1007/978-94-007-4587-2_7
Hao, Y., Chong, W., Man, K. L., Liu, O., & Shi, X. (2016). Key factors affecting user experience of mobile crowdsourcing applications. Proceedings of the International Multiconference of Engineers and Computer Scientists, 967.
Harley, A. (2018, February). UX expert reviews. Nielsen Norman Group Articles. https://www.nngroup.com/articles/ux-expert-reviews/
Harrison, S., Sengers, P., & Tatar, D. (2011). Making epistemological trouble: Third paradigm HCI as successor science.
Interacting with Computers, 23(5), 385-392. https://doi.org/10.1016/J.INTCOM.2011.03.005
Hassenzahl, M. (2017). User experience and experience design. In M. Soegaard & R. Friis-Dam (Eds.), The Encyclopedia of Human-Computer Interaction (3rd ed., pp. 63-113). The Interaction Design Foundation.
Hekler, E. B., Klasnja, P., Froehlich, J. E., & Buman, M. P. (2013). Mind the theoretical gap: Interpreting, using, and developing behavioral theory in HCI research. Proceedings of the 2013 CHI Conference on Human Factors in Computing Systems, CHI 2013, 3307-3316. https://doi.org/10.1145/2470654.2466452
Herodotou, C., Aristeidou, M., Sharples, M., & Scanlon, E. (2018). Designing citizen science tools for learning: Lessons learnt from the iterative development of nQuire. Research and Practice in Technology Enhanced Learning, 13(1), 1-23. https://doi.org/10.1186/S41039-018-0072-1
Hess, W. (2010). Onboarding: Designing welcoming first experiences. UX Magazine. http://uxmag.com/articles/onboarding-designing-welcoming-first-experiences
Hiller, S. E. (2016). The validation of the Citizen Science Self-Efficacy Scale (CSSES). International Journal of Environmental and Science Education, 11(5), 543-558.
Hsieh, H. F., & Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, 15(9), 1277-1288. https://doi.org/10.1177/1049732305276687
Hulick, S. (2014). The Elements of User Onboarding.
Jay, C., Dunne, R., Gelsthorpe, D., & Vigo, M. (2016). To sign up, or not to sign up? Maximizing citizen science contribution rates through optional registration. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, 1827-1832. https://doi.org/10.1145/2858036.2858319
Jensen, J. S. (2013). Epistemology. In The Routledge Handbook of Research Methods in the Study of Religion (pp. 62-75). Routledge. https://doi.org/10.4324/9780203154281-11
Johnston, K. A. (2018). Toward a theory of social engagement. In The Handbook of Communication Engagement (pp. 17-32). John Wiley & Sons, Ltd. https://doi.org/10.1002/9781119167600.ch2
Johnston, K. A., & Taylor, M. (Eds.). (2018). The Handbook of Communication Engagement. John Wiley & Sons, Inc. https://doi.org/10.1002/9781119167600
Joyce, A. (2020, December). Help and documentation: The 10th usability heuristic. NN/g Nielsen Norman Group Articles. https://www.nngroup.com/articles/help-and-documentation/
Karegar, F., Gerber, N., Volkamer, M., & Fischer-Hübner, S. (2018). Helping John to make informed decisions on using social login. Proceedings of the 33rd Annual ACM Symposium on Applied Computing. https://doi.org/10.1145/3167132
Kaye, J. N. (2009). The Epistemology & Evaluation of Experience-Focused HCI. Ph.D. dissertation, Cornell University, USA.
Kim, S., & Baek, T. H. (2018). Examining the antecedents and consequences of mobile app engagement. Telematics and Informatics, 35(1), 148-158. https://doi.org/10.1016/j.tele.2017.10.008
Kraut, R. E., & Resnick, P. (2012). Building Successful Online Communities: Evidence-Based Social Design. MIT Press.
Kuan, H., Bock, G.-W., & Vathanophas, V. (2005). Comparing the effects of usability on customer conversion and retention at e-commerce websites. Proceedings of the 38th Annual Hawaii International Conference on System Sciences, 174. https://doi.org/10.1109/HICSS.2005.155
Lakomý, M., Hlavová, R., Machackova, H., Bohlin, G., Lindholm, M., Bertero, M. G., & Dettenhofer, M. (2020). The motivation for citizens' involvement in life sciences research is predicted by age and gender. PLOS One, 15(8). https://doi.org/10.1371/journal.pone.0237140
Lampe, C., Wash, R., Velasquez, A., & Ozkaya, E. (2010). Motivations to participate in online communities. CHI '10: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
https://doi.org/10.1145/1753326.1753616
Law, E., Williams, A. C., Wiggins, A., Brier, J., Preece, J., Shirk, J. L., & Newman, G. (2017). The science of citizen science. Companion of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, CSCW '17 Companion, 395-400. https://doi.org/10.1145/3022198.3022652
Lin, C. S., Tzeng, G. H., & Chin, Y. C. (2011). Combined rough set theory and flow network graph to predict customer churn in credit card accounts. Expert Systems with Applications, 38(1), 8-15. https://doi.org/10.1016/J.ESWA.2010.05.039
Lin, K. Y., & Lu, H. P. (2011). Why people use social networking sites: An empirical study integrating network externalities and motivation theory. Computers in Human Behavior, 27(3), 1152-1161. https://doi.org/10.1016/j.chb.2010.12.009
Malheiros, M., & Preibusch, S. (2013). Sign-up or give-up: Exploring user drop-out in web service registration. A Turn for the Worse: Trustbusters for User Interfaces Workshop, Symposium on Usable Privacy and Security 2013, 1-6.
Mayring, P. (2000). Qualitative content analysis. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 1(2). https://doi.org/10.17169/FQS-1.2.1089
Micallef, N., Adi, E., & Misra, G. (2018). Investigating login features in smartphone apps. UbiComp/ISWC 2018: Adjunct Proceedings of the 2018 ACM International Joint Conference on Pervasive and Ubiquitous Computing and the 2018 ACM International Symposium on Wearable Computers, 842-851. https://doi.org/10.1145/3267305.3274172
Molich, R., & Jeffries, R. (1993). Comparative expert reviews. CHI '03 Extended Abstracts on Human Factors in Computing Systems (CHI EA '03), 1060-1061. https://doi.org/10.1145/765891.766148
Mullin, S. (2019). 6 user onboarding flow examples (with critiques). CXL Blog. https://cxl.com/blog/6-user-onboarding-flows/
Munger, N. (2014). Onboarding users is harder than you think.
Inside Intercom Blog. https://blog.intercom.io/strategies-for-onboarding-new-users/
Newman, G., Wiggins, A., Crall, A., Graham, E. A., Newman, S., & Crowston, K. (2012). The future of citizen science: Emerging technologies and shifting paradigms. Frontiers in Ecology and the Environment, 10(6), 298-304. https://doi.org/10.1890/110294
Nielsen, J. (1992). Finding usability problems through heuristic evaluation. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '92, 373-380. https://doi.org/10.1145/142750.142834
Nielsen, J. (2013a). Conversion rates. Nielsen Norman Group Articles. https://www.nngroup.com/articles/conversion-rates/
Nielsen, J. (2013b). What's a conversion event? NN/g Nielsen Norman Group. https://www.nngroup.com/articles/conversion-rates/
Nielsen, J. (1994a). Usability inspection methods. Conference on Human Factors in Computing Systems Proceedings, April, 413-414. https://doi.org/10.1145/259963.260531
Nielsen, J. (1994b). Enhancing the explanatory power of usability heuristics. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '94, 152-158. Association for Computing Machinery, New York, NY, USA.
Nielsen, J., Mack, R. L., Lewis, C., & Polson, P. (1994). Usability Inspection Methods. Wiley.
Nielsen, J. (2010, October). Mental models and user experience design. Nielsen Norman Group Articles. https://www.nngroup.com/articles/mental-models/
Nov, O., Arazy, O., & Anderson, D. (2011a). Dusting for science: Motivation and participation of digital citizen science volunteers. iConference, 68-74. https://doi.org/10.1145/1940761.1940771
Nov, O., Arazy, O., & Anderson, D. (2011b). Technology-mediated citizen science participation: A motivational model. Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media, 249-256.
Nov, O., Arazy, O., & Anderson, D. (2014).
Scientists@Home: What drives the quantity and quality of online citizen science participation? PLoS One, 9(4), e90375. https://doi.org/10.1371/journal.pone.0090375
O'Brien, H. L., Arguello, J., & Capra, R. (2020). An empirical study of interest, task complexity, and search behavior on user engagement. Information Processing and Management, 57(3). https://doi.org/10.1016/j.ipm.2020.102226
O'Brien, H. L., & McKay, J. (2018). Modeling antecedents of user engagement. In The Handbook of Communication Engagement (pp. 73-88). John Wiley & Sons, Ltd. https://doi.org/10.1002/9781119167600.ch6
O'Brien, H. L., & Toms, E. G. (2008). What is user engagement? A conceptual framework for defining user engagement with technology. Journal of the American Society for Information Science and Technology, 59(6), 938-955. https://doi.org/10.1002/asi.20801
O'Brien, H. L., & Toms, E. G. (2010). The development and evaluation of a survey to measure user engagement. Journal of the American Society for Information Science and Technology, 61(1), 50-69. https://doi.org/10.1002/asi.21229
Ojala, A. (2013). Software-as-a-Service revenue models. IT Professional, 15(3), 54-59. https://doi.org/10.1109/MITP.2012.73
Paulini, M., Maher, M. L., & Murty, P. (2014). Motivating participation in online innovation communities. International Journal of Web Based Communities, 10(1), 94. https://doi.org/10.1504/IJWBC.2014.058388
Pejovic, V., & Skarlatidou, A. (2019). Understanding interaction design challenges in mobile extreme citizen science. International Journal of Human-Computer Interaction. https://doi.org/10.1080/10447318.2019.1630934
Perea, P., & Giner, P. (2017). UX Design for Mobile. Packt Publishing Ltd.
Plattner, H. (2010). An introduction to design thinking. Institute of Design at Stanford, 1-15.
Pogue, D. (2017, March).
What happened to user manuals? Scientific American, 316(4).
Portman, J. (2017). User onboarding essentials. Inside Design Blog by InVision. https://www.invisionapp.com/inside-design/user-onboarding-essentials/
Preece, J. (2016). Citizen science: New research challenges for human-computer interaction. International Journal of Human-Computer Interaction, 32(8), 585-612. https://doi.org/10.1080/10447318.2016.1194153
Preece, J. (2017). How two billion smartphone users can save species! Interactions, 24(2), 26-33. https://doi.org/10.1145/3043702
Preece, J., & Shneiderman, B. (2009). The Reader-to-Leader Framework: Motivating technology-mediated social participation. AIS Transactions on Human-Computer Interaction, 1(1), 13-32.
Prestopnik, N., & Crowston, K. (2011). Gaming for (citizen) science: Exploring motivation and data quality in the context of crowdsourced science through the design and evaluation of a social-computational system. IEEE Seventh International Conference on e-Science Workshops, 28-33.
Raddick, M. J., Bracey, G., Gay, P. L., Lintott, C. J., Murray, P., Schawinski, K., Szalay, A. S., & Vandenberg, J. (2010). Galaxy Zoo: Exploring the motivations of citizen science volunteers. Astronomy Education Review, 9(1). https://doi.org/10.3847/aer2009036
Rashid, A. M., Ling, K., Tassone, R. D., Resnick, P., Kraut, R., & Riedl, J. (2006). Motivating participation by displaying the value of contribution. Conference on Human Factors in Computing Systems Proceedings, 2, 955-958. https://doi.org/10.1145/1124772.1124915
Renz, J., Staubitz, T., Pollak, J., & Meinel, C. (2014). Improving the onboarding user experience in MOOCs. EDULEARN14 Proceedings, 3931-3941.
Rotman, D. (2013). Collaborative science across the globe: The influence of motivation and culture on volunteers in the United States, India and Costa Rica. University of Maryland.
Rotman, D., Hammock, J., Preece, J., Hansen, D., Boston, C., Bowser, A., & He, Y.
(2014). Motivations affecting initial and long-term participation in citizen science projects in three countries. iConference, 110-124. https://doi.org/10.9776/14054
Rotman, D., Preece, J., Hammock, J., Procita, K., Hansen, D., Parr, C., Lewis, D., & Jacobs, D. (2012). Dynamic changes in motivation in collaborative citizen-science projects. Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work, CSCW '12, 10. https://doi.org/10.1145/2145204.2145238
Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68-78.
Saez, A. (2016). Your how-to guide for persona-based user onboarding. Appcues Blog. https://www.appcues.com/blog/persona-based-user-onboarding
Sampson, T. D. (2019). Transitions in human-computer interaction: From data embodiment to experience capitalism. AI and Society, 34(4), 835-845. https://doi.org/10.1007/S00146-018-0822-Z
Satia, G. (2014). Mobile onboarding: A beginner's guide. Smashing Magazine. https://www.smashingmagazine.com/2014/08/mobile-onboarding-beginners-guide
Seaborn, K., & Fels, D. I. (2014). Gamification in theory and action: A survey. International Journal of Human-Computer Studies, 74, 14-31. https://doi.org/10.1016/j.ijhcs.2014.09.006
Segal, A., Gal, K., Kamar, E., Horvitz, E., Bowyer, A., & Miller, G. (2016). Intervention strategies for increasing engagement in crowdsourcing: Platform, predictions, and experiments. Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI '16, 3861-3867.
Shad, A. A. (2018). The in-depth onboarding UX resource you'll ever need. Userpilot Blog. https://userpilot.com/blog/onboarding-ux/
Shad, A. A. (2020). Proactive & reactive user onboarding. Medium.com. https://medium.com/@Aazarshad/proactive-reactive-user-onboarding-two-concepts-in-onboarding-you-didnt-know-d1335490463a
Shirk, J. L., Ballard, H. L., Wilderman, C.
C., Phillips, T., Wiggins, A., Jordan, R., McCallie, E., Minarchek, M., Lewenstein, B. V., Krasny, M. E., & Bonney, R. (2012). Public Participation in Scientific Research: A Framework for Deliberate Design. Ecology and Society, 17(2), 20.
Silvertown, J. (2009). A new dawn for citizen science. Trends in Ecology & Evolution, 24(9), 467–471. https://doi.org/10.1016/j.tree.2009.03.017
Singer, J. (2011). Onboarding: The First, Best Chance to Make a Repeat Customer. http://justin-singer.com/post/2684064738/onboarding-the-first-best-chance-to-make-a
Skarlatidou, A., Moustard, F., & Vitos, M. (2020). Experiences from Extreme Citizen Science: Using smartphone-based data collection tools with low-literate people. Conference on Human Factors in Computing Systems – Proceedings. https://doi.org/10.1145/3334480.3375220
Sledgianowski, D., & Kulviwat, S. (2009). Using Social Network Sites: The Effects of Playfulness, Critical Mass and Trust in a Hedonic Context. Journal of Computer Information Systems, 49(4), 74–83. https://doi.org/10.1080/08874417.2009.11645342
Smith, A. (2017). Definition and Development of a Measurement Instrument for Compellingness in Human Computer Interaction (Master's thesis). Ames, Iowa, US.
Smith, B., & Gallicano, T. (2015). Terms of engagement: Analyzing public engagement with organizations through social media. Computers in Human Behavior, 53, 82–90. https://doi.org/10.1016/j.chb.2015.05.060
Snell, A. (2006). Researching onboarding best practice: Using research to connect onboarding processes with employee satisfaction. Strategic HR Review, 5(6), 32–35. https://doi.org/10.1108/14754390680000925
Souleles, N. (2017). Design for social change and design education: Social challenges versus teacher-centred pedagogies. The Design Journal, 20(sup1), 927–936. https://doi.org/10.1080/14606925.2017.1353037
Spannagel, C., Gläser-Zikuda, M., & Schroeder, U. (2005). Application of Qualitative Content Analysis in User-Program Interaction Research. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 6(2). https://www.qualitative-research.net/index.php/fqs/article/view/469/1005
Steinmacher, I., Gerosa, M. A., & Redmiles, D. F. (2015). Social Barriers Faced by Newcomers Placing Their First Contribution in Open-Source Software Projects. Proceedings of the ACM Conference on Computer-Supported Cooperative Work & Social Computing, 1379–1392. https://doi.org/10.1145/2675133.2675215
Toerpe, K. (2013). The Rise of Citizen Science. The Futurist, July. https://www.informalscience.org/rise-citizen-science
Toscani, C., Steinmacher, I., Gery, D., & Marczak, S. (2018). A gamification proposal to support the onboarding of newcomers in the FLOSScoach portal. IHC 2018: Proceedings of the 17th Brazilian Symposium on Human Factors in Computing Systems, 1–10. https://doi.org/10.1145/3274192.3274193
Wald, D. M., Longo, J., & Dobell, A. R. (2016). Design principles for engaging and retaining virtual citizen scientists. Conservation Biology, 30(3), 562–570. https://doi.org/10.1111/cobi.12627
Waldron, J. (2015). 4 Steps to Great User Onboarding Conversions. Netguru Blog. https://www.netguru.co/blog/great-user-onboarding
Wiggins, A., & Crowston, K. (2011). From Conservation to Crowdsourcing: A Typology of Citizen Science. 44th Hawaii International Conference on System Sciences, 1–10. https://doi.org/10.1109/HICSS.2011.207
Wilson, C. (2013). Interview Techniques for UX Practitioners: A User-Centered Design Method. Elsevier Science, Waltham, Massachusetts, US.
Yadav, S. (2012). Wunderlist's Cross-Platform Acquisition & Onboarding Process. UX Magazine. https://uxmag.com/articles/wunderlists-cross-platform-acquisition-onboarding-process
Zambonini, D. (2014). User Onboarding Checklist – Designing the New User Experience (NUX). Bipsync Blog. https://www.bipsync.com/blog/user-onboarding-checklist-designing-new-user-experience-nux/
Zheng, H., Li, D., & Hou, W. (2011). Task Design, Motivation, and Participation in Crowdsourcing Contests. International Journal of Electronic Commerce, 15(4), 57–88. https://doi.org/10.2753/JEC1086-4415150402