ABSTRACT

Title of Document: EFFECTS OF COGNITIVE DEMAND ON WORD ENCODING IN ADULTS WHO STUTTER
Pei-Tzu Tsai, Doctor of Philosophy, 2011
Directed By: Professor Nan Bernstein Ratner, Department of Hearing and Speech Sciences

The etiology of persistent stuttering is unknown. Stuttering has been attributed to multiple potential factors, including difficulty in processing language-related information, but findings remain inconclusive regarding any specific linguistic deficit that might cause stuttering. One particular challenge in drawing conclusions is the highly variable task demands across studies: different tasks could reflect either different processes or different levels of demand. This study examined the role of cognitive demand in semantic and phonological processes to evaluate the role of linguistic processing in the etiology of stuttering. The study examined concurrent processing of picture naming and tone identification in typically fluent young adults, adults who stutter (AWS) and matched adults who do not stutter (NS), with varying temporal overlap between the dual tasks as a manipulation of cognitive demand. The study found 1) that in both AWS and NS, semantic and phonological encoding both interacted with non-linguistic processing during concurrent processing, suggesting that both linguistic processes are demanding of cognitive resources; 2) that there was no observable relationship between dual-task interference to word encoding and stuttering; 3) that AWS and NS showed different trends of phonological encoding under high but not low cognitive demand, suggesting a subtle phonological deficit in AWS; and 4) that the phonological encoding effect correlated with stuttering rate, suggesting that a phonological deficit could potentially play a role in the etiology or persistence of stuttering. Additional findings include potential differences in semantic encoding between typically fluent young adults and middle-aged adults, as well as potential strategic differences in processing semantic information between AWS and NS. Findings were taken to support stuttering theories that posit specific deficits in phonological encoding and to argue against a primary role for semantic encoding deficiency or a lexical access deficit in stuttering.

EFFECTS OF COGNITIVE DEMAND ON WORD ENCODING IN ADULTS WHO STUTTER

By Pei-Tzu Tsai

Dissertation submitted to the Faculty of the Graduate School of the University of Maryland, College Park, in partial fulfillment of the requirements for the degree of Doctor of Philosophy, 2011

Advisory Committee:
Professor Nan Bernstein Ratner, Chair
Associate Professor Rochelle S. Newman
Assistant Professor Yasmeen Faroqi-Shah
Associate Professor Shelley B. Brundage
Professor Thomas Wallsten, Dean's Representative

© Copyright by Pei-Tzu Tsai 2011

Dedication

To my husband Chung-Ching Shen and family in Taiwan.

Acknowledgements

I am grateful to my advisor, Dr. Nan Bernstein Ratner, who guided me along the way, shaping this journey into a unique learning experience in life. I would like to thank Dr. Victor Ferreira for his generous and constructive advice in the development and completion of this study. Thanks to my dissertation committee, Drs. Shelley Brundage, Yasmeen Faroqi-Shah, Rochelle Newman and Thomas Wallsten, for offering helpful suggestions in handling complex concepts, design and data.
I am thankful to all participants who volunteered in this study, and to Vivian Sisskin, the National Stuttering Association, the American Institute for Stuttering, the StutteringTalk Podcast and the many individuals who facilitated participant recruitment. This dissertation would have been impossible to finish without their generous help. Last but not least, thanks to my fellow doctoral students: Monica Sampson, who not only helped with data coding but has also been supportive and available for all sorts of discussions, as well as Cathy Eaton, Sally Gallena, Giovanna Morini and Arifi Waked.

Table of Contents

Dedication
Acknowledgements
Table of Contents
List of Tables
List of Figures
Chapter 1: Introduction
  Word Production in Typical Adult Speakers
    The Word Production Process
    How do we understand how words are produced? The Picture-Word Interference Task
  Concurrent Processing during Word Production
    The Psychological Refractory Period Paradigm
    Predictions of the PRP Paradigm
  Stuttering and Linguistic Processing
    Theoretical Approaches to the Etiology of Stuttering
    Semantic Processing in AWS
    Phonological Processing in AWS
  Concurrent Processing in AWS
  Purpose of the Study
Chapter 2: Pilot Study - Word Encoding in Typically Fluent Young Adults
  Method
    Participants
    Stimuli
    Apparatus
    Procedure
    Design
    Analysis
  Results
  Discussion
Chapter 3: Experiment 1 - Automaticity of Word Encoding in Typically Fluent Young Adults
  Introduction
  Method
    Participants
    Apparatus and Stimuli
    Procedure
    Design
    Data analysis
  Results
    Experiment 1A
    Experiment 1B
  Discussion
Chapter 4: Experiment 2 - Automaticity of Word Encoding in Adults Who Do and Do Not Stutter
  Method
    Participants
    Apparatus and Stimuli
    Design and Analyses
    Reliability
    Procedure
  Results
    Automaticity
    Automaticity and Stuttering
    Deficiency of Processing Skill in AWS
    Linguistic Encoding Demand and Stuttering
  Discussion
Chapter 5: General Discussion and Conclusion
Appendix A
Appendix B
Appendix C
Bibliography

List of Tables

Table 1. Lexical properties of stimuli
Table 2. Mean response accuracies in Experiment 1A
Table 3. Mean response accuracies in Experiment 1B
Table 4. Predictions of semantic and phonological encoding deficiencies
Table 5. Participant demographic information
Table 6. Group NS Experiment 2A mean response accuracies
Table 7. Group NS Experiment 2B mean response accuracies
Table 8. Group AWS Experiment 2A mean response accuracies
Table 9. Group AWS Experiment 2B mean response accuracies

List of Figures

Figure 1. A word production model
Figure 2. PRP effect: Task 2 response latency increases as SOA decreases
Figure 3. Predictions of Task 2 response latency with Task 1 manipulation
Figure 4. Predictions of Task 2 response latency based on Task 2 manipulation
Figure 5. Time course of events in a trial in Experiment 1
Figure 6. Response latency of the picture-naming task in typically fluent young adults
Figure 7. Response accuracy of the picture-naming task in typically fluent young adults
Figure 8. Predicted latency patterns of Experiment 1A with picture naming as the first task
Figure 9. Predicted latency patterns of Experiment 1B with tone identification as the first task
Figure 10. Time course of a trial in Experiment 1A
Figure 11. Time course of a trial in Experiment 1B
Figure 12. Experiment 1A mean response latencies of tasks across SOAs
Figure 13. Experiment 1A mean response accuracy of tasks across SOAs
Figure 14. Mean response latencies in Experiment 1A
Figure 15. Mean response latencies in Experiment 1B
Figure 16. Predicted NS latency patterns of Experiment 2A with picture naming as the first task
Figure 17. Predicted NS latency patterns of Experiment 2B with tone identification as the first task
Figure 18. Predicted AWS latency patterns of Experiment 2A with picture naming as the first task
Figure 19. Predicted AWS latency patterns of Experiment 2B with tone identification as the first task
Figure 20. Distribution of stuttering severity based on the SSI-4
Figure 21. Group NS Experiment 2A mean response latencies
Figure 22. Group NS Experiment 2B mean response latencies
Figure 23. Group AWS Experiment 2A mean response latencies
Figure 24. Group AWS Experiment 2B mean response latencies
Figure 25. Correlation between SSI-4 total score and interference effect size in Experiment 2A
Figure 26. Correlation between SSI-4 total score and interference effect size in Experiment 2B
Figure 27. Correlation between stuttering rate and interference effect size in Experiment 2A
Figure 28. Correlation between stuttering rate and interference effect size in Experiment 2B
Figure 29. Semantic interference effects in AWS and NS across SOAs
Figure 30. Phonological facilitation effects in AWS and NS across SOAs
Figure 31. Correlation between stuttering rate and phonological facilitation effect in the short SOA
Figure 32. PRP profiles by age group when picture naming was the first task
Figure 33. PRP profiles by age group when tone identification was the first task

Chapter 1: Introduction

Persistent developmental stuttering is a communication disorder characterized by disruptions in speech flow, affecting approximately 1% of the population (Bloodstein & Bernstein Ratner, 2008). The overt characteristics of these speech flow disruptions have led a large body of researchers toward examination of the speech-motor control system in people who stutter (PWS) (e.g., Cooper & Allen, 1977; Cross & Luper, 1979; Denny & Smith, 1992; Max, Caruso, & Gracco, 2003; Smits-Bandstra, de Nil, & Saint-Cyr, 2006; van Lieshout, Hulstijn, & Peters, 1996). On such tasks, PWS very often perform differently from those who do not stutter (NS), but evidence has not yet emerged that identifies any specific motoric deficit common to PWS, such as in motor planning, motor coordination or motor execution. It has also been proposed that linguistic processing deficits in the stages prior to speech motor execution might play an important role in the etiology of stuttering, manifesting as disruptions in speech flow (e.g., Howell, 2004; Postma & Kolk, 1993). There is now ample evidence that people who stutter differ from their typically fluent peers at several levels of linguistic processing, including semantic, phonological and syntactic processes (e.g., Anderson, Wagovich, & Hall, 2006; Bosshardt & Fransen, 1996; Byrd, Conture, & Ohde, 2007; Hakim & Bernstein Ratner, 2004; Hartfield & Conture, 2006; Sasisekaran & de Nil, 2006; Tsiamtsiouris & Cairns, 2009; Wijnen & Boers, 1994). However, while young children who stutter (CWS) consistently demonstrate subclinical deficits in these linguistic skills, the accumulated findings in adults who stutter (AWS) have been rather contradictory. Potentially, this could reflect developmental normalization of linguistic processing in PWS (e.g., Au-Yeung, Howell, & Pilgrim, 1998). Alternatively, the equivocal findings could also relate to the varying tasks and approaches used across studies. Supporting the view that linguistic processing is deficient and potentially leads to stuttering, AWS often show altered patterns of linguistic processing even when overt speech is absent or controlled (e.g., Arnstein, Lakey, Compton, & Kleinow, 2011; Bosshardt & Fransen, 1996; Cuadrado & Weber-Fox, 2003; Sasisekaran & de Nil, 2006; Weber-Fox & Hampton, 2008). The increasing evidence of altered interactions between motor control and linguistic processing provides further support for a relationship between linguistic planning and stuttering. For example, AWS demonstrate greater variability in speech motor movements than NS in tasks involving high linguistic complexity (Kleinow & Smith, 2000; Smith, Sadagopan, Walsh, & Weber-Fox, 2010).

The subtle linguistic deficits reported in AWS are usually captured in experimental laboratory manipulations; this contrasts with CWS profiles of language skill, which are typically reported as a function of standardized language test scores or observed in analyses of naturalistic language data. In the adult stuttering literature, deficiency is typically defined as slower or less accurate task performance in AWS than in their typically fluent peers.
A more dynamic view of deficiency takes processing demand into consideration and suggests that AWS lack sufficient cognitive resources for linguistic processing when the processing demand is high; that is, the deficit lies in linguistic processing that breaks down when demand exceeds the available cognitive resources (capacity). In other words, linguistic processing is inefficient (Bosshardt, 2006). Based on this view of inefficiency, one could hypothesize that any potential processing deficiency in PWS might range from a fundamental deficit to a subtle deficit, which should be observable under different levels of processing demand. The following sections describe contemporary models of linguistic encoding for word production and central cognition, which serve as the theoretical framework of the current study to examine the role of cognitive demand in linguistic processing.

Word Production in Typical Adult Speakers

The Word Production Process

Word production is usually conceptualized as a relatively simple language task, although it is presumed to involve multiple levels of linguistic processing before articulation of the intended word (Caramazza, 1997; Dell, 1986; Garrett, 1988; Levelt, Roelofs, & Meyer, 1999). Based on the widely adopted production model proposed by Levelt and colleagues, the pre-motor linguistic processing stage of word production includes formulation of the underlying message and processing of meaning (semantic encoding). This stage next activates the connected lexical representations (lemmas) appropriate to the major semantic concepts, which also encode the relevant grammatical information that governs how the concepts will be integrated with linguistic rules for sentence construction. Following the selection of the appropriate lemma, its word-form representation is activated and the corresponding phonemes are selected (word-form selection and phoneme selection in phonological encoding). Finally, the speech gesture plans are formed for articulation (phonetic encoding) and word forms are adjusted for concatenated production in larger phrasal strings. In addition, the model includes stages of speech monitoring that occur following both phonological encoding and overt articulation (pre- and post-articulation monitoring, respectively). Hypothetical stages in word production are illustrated in Figure 1.

Figure 1. A word production model (adapted from Levelt et al., 1999)

How do we understand how words are produced? The Picture-Word Interference Task

A classic approach to isolating linguistic processes in word production is the picture-word interference (PWI) task, in which participants name a target picture while a distracting auditory/visual word is presented with the picture. When the target-distracter pairs are semantically related, the naming responses are typically delayed, compared to those for unrelated pairs (i.e., the semantic interference effect), an index of semantic processing (e.g., Levelt et al., 1999; Roelofs, 1992; Schriefers, Meyer, & Levelt, 1990; Damian & Bower, 2003). The slowed response latency under semantic interference is attributed to increased competition at the semantic level during lemma selection, and the effect is particularly consistent for word pairs from the same semantic category (e.g., horse-zebra) (Mahon, Costa, Peterson, Vargas, & Caramazza, 2007). In contrast, when the target-distracter pairs are phonologically related (e.g., dog-dot), responses are typically facilitated as compared to unrelated pairs (the phonological priming effect) (Cutting & Ferreira, 1999; Damian & Martin, 1999; Meyer & van der Meulen, 2000). According to the word production model, this pattern results from facilitation of processing at the level of phoneme selection, proposed to be a much later stage in word production.
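As a concrete illustration of how these two PWI effects are typically quantified, the minimal Python sketch below computes a single speaker's semantic interference and phonological facilitation effects against that speaker's own unrelated-distracter baseline. The condition labels, latencies and function name are hypothetical illustrations, not the stimuli or analysis code used in this study.

```python
from statistics import mean

def pwi_effects(trials):
    """Compute PWI effect sizes for one speaker.

    trials: list of (distracter_condition, naming_latency_ms) tuples,
    with conditions assumed to be 'unrelated', 'semantic', and 'phonological'.
    """
    by_condition = {}
    for condition, latency in trials:
        by_condition.setdefault(condition, []).append(latency)
    baseline = mean(by_condition["unrelated"])
    return {
        # positive value = slower naming with a semantically related distracter
        "semantic_interference": mean(by_condition["semantic"]) - baseline,
        # positive value = faster naming with a phonologically related distracter
        "phonological_facilitation": baseline - mean(by_condition["phonological"]),
    }

# Hypothetical naming latencies (ms) for a single participant:
example_trials = [("unrelated", 850), ("unrelated", 830), ("semantic", 880),
                  ("semantic", 866), ("phonological", 804), ("phonological", 818)]
print(pwi_effects(example_trials))
# -> semantic interference of 33 ms and phonological facilitation of 29 ms for these made-up values
```

Because both effects are differences from the speaker's own unrelated-condition baseline, they can be compared across speakers and groups without assuming equivalent absolute naming speeds.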
The slowed response latency under semantic interference is attributed to increased competition at the semantic level during lemma selection, and the effect is particularly consistent between word pairs that are under the same semantic category (e.g., horse-zebra) (Mahon, Costa, Peterson, Vargas, & Caramazza, 2007). In contrast, when the target-distracter pairs are phonologically related (e.g: dog-dot), responses are typically facilitated as compared to unrelated pairs (the phonological priming effect) (Cutting & Ferreira, 1999; Damian & Martin, 1999; Meyer & van der Meulen, 2000). According to the word production model, this pattern results from facilitation of processing at the level of phoneme selection, proposed to be a much later stage in word production. Concurrent Processing during Word Production Because of the common need to multi-task during language processing in everyday life, the impacts of concurrent processing of linguistic and non-linguistic information has drawn much attention in recent years (Ayora, Janssen, Dell?Acqua, & Alario, 2009; Cook & Meyer, 2008; Dell?Acqua, Job, Peressotti, & Pascali, 2007; Dent, Johnston, & Humphreys, 2008; Ferreira & Pashler, 2002; Gaskell, Quinlan, Tamminen, & Cleland, 2008; Kemper, Schmalzried, Herman, Leedahl, & Mohankumar, 2009; Roelofs, 2008; Rabovsky, ?lvarez, Hohlfeld, & Sommer, 2008). While some stages of word production (e.g., semantic encoding) have been implicated as highly automatic and not interfered by concurrent, non-linguistic 6 processing (such as tone judgment and visual arrow detection), other stages (e.g., phonological encoding) have been suggested to involve cognitive or attentional resources (Roelofs, 2008b). In this study, how well a linguistic process is protected from concurrent, non-linguistic interference will be termed its level of automaticity. The Psychological Refractory Period Paradigm One popular dual-task paradigm for examining automaticity of cognitive/linguistic processes is the psychological refractory period (PRP) paradigm (Telford, 1931). It involves two unrelated tasks, performed in a speeded manner and in close temporal succession (Task 1 and Task 2). For example, a participant is instructed to name a picture and judge the pitch of a tone as quickly as possible, while the tone is presented 150 ms after the picture. The temporal interval between the onsets of the two tasks is referred to as the stimulus-onset-asynchrony (SOA). In the above example, there is a 150-ms SOA between the picture-naming task (Task 1) and the tone-judgment task (Task 2). In general, the response latency in the second task would increase as SOA decreases, with the assumption that the short SOA induces an overlap of the central processing of the two tasks. This functional change of latency with SOA is the PRP effect. One popular account for the PRP effect is the central bottleneck model (Pashler, 1994). The model makes the assumption that performing a task involves three stages: pre-central perceptual processing, central-stage response selection and post-central response execution, and that only response selection is capacity demanding, involving the shared central mechanism. According to the bottleneck model, this central mechanism is devoted to carrying out one task at a time (serial 7 processing), in a first-come, first-served manner; thus, when the central mechanism is occupied by a task, any other process that requires the central mechanism will be temporary suspended, forming the central bottleneck. 
Processes that are pre- or post-central in stage do not involve the central mechanism and are considered highly automatic. These automatic processes can operate in parallel with other cognitive processes without interference. Given the above premises, in a dual-task condition, a pre-central process could operate concurrently with any other process without dual-task interference, but a central-stage process in Task 2 would be temporarily suspended until that of Task 1 is completed, creating a "slack" (Figure 2). For example, when a participant performs picture naming (Task 1) and tone judgment (Task 2) in a dual-task condition, tone judgment responses would be slower under a short SOA than a long SOA because of the "slack". This increase in Task 2 latency as SOA decreases is the PRP effect.

Figure 2. PRP effect: Task 2 response latency increases as SOA decreases. Each bar represents processing from stimulus onset to response onset: A: pre-central stage process; B: central stage process; C: post-central stage process; 1: Task 1 (e.g., picture naming); 2: Task 2 (e.g., tone identification); SOA: stimulus onset asynchrony.

Predictions of the PRP Paradigm

According to the PRP paradigm, one can manipulate a task to isolate a specific cognitive process and then determine whether or not the process is capacity demanding and involves the central bottleneck. For example, one can induce a semantic interference effect in a picture-naming task with a PWI paradigm (Task 1) and examine how the effect influences tone judgment latencies (Task 2) to determine whether semantic encoding involves the central bottleneck. Consider the following example. If semantic interference occurs during or before the central-stage process of picture naming, the central-stage process for picture naming (Task 1) would be delayed, and consequently, the central stage of the tone judgment (Task 2) would be delayed as well, resulting in a longer tone judgment latency (effect propagation) (Figure 3 (b) and (c)). On the other hand, if semantic interference occurs after the central-stage process of picture naming (Task 1), it would not delay the central-stage process of tone judgment (Task 2) (there would be no effect propagation), because post-central processes can operate in parallel with any other process without interference (Figure 3 (d)).

Figure 3. Predictions of Task 2 response latency with Task 1 manipulation with short SOA. A: pre-central stage process; B: central stage process; C: post-central stage process; 1: Task 1 (e.g., picture naming); 2: Task 2 (e.g., tone identification); E: manipulated effect. (a) Baseline condition; (b) effect manipulation at the pre-central process (e.g., semantic interference effect propagating onto Task 2); (c) effect manipulation of the central process (e.g., phonological facilitation effect propagating onto Task 2); (d) manipulation of the post-central process (e.g., response execution effect not propagating onto Task 2; lack of effect propagation).

A different set of latency patterns could be predicted for the above example when the task order is reversed, that is, when the tone judgment task (Task 1) is followed by the picture-naming task (Task 2). If the semantic interference effect occurs before the central-stage process of picture naming (Task 2), the effect could be absorbed by the "slack" and thus would not fully delay the response latency in picture naming (Task 2) (an under-additive effect) (Figure 4 (b)). On the other hand, if semantic interference occurs during or after the central-stage process of picture naming (Task 2), the effect would not be absorbed by the "slack" and would thus fully delay the picture naming latency (producing an additive effect) (Figure 4 (c) and (d)). The above predictions constitute the locus-of-slack logic.

Figure 4. Predictions of Task 2 response latency based on Task 2 manipulation with short SOA. 1: Task 1 (e.g., tone identification); 2: Task 2 (e.g., picture naming); E: manipulated effect. (a) Baseline condition; (b) manipulation of the pre-central process (semantic interference effect resulting in an under-additive effect); (c) manipulation of the central process (e.g., phonological facilitation effect resulting in an additive effect); (d) manipulation of the post-central process (response execution effect resulting in an additive effect).
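To make the bottleneck logic and the locus-of-slack predictions above concrete, the following Python sketch simulates Task 2 latencies under the serial-bottleneck assumptions. The stage durations, the 50-ms manipulation and the function name are arbitrary values assumed for illustration, not estimates or code from this dissertation; the printout reproduces the PRP effect, effect propagation, the under-additive pattern and the additive pattern described above.

```python
# A minimal simulation of the central bottleneck account (Pashler, 1994), with
# arbitrary illustrative stage durations (ms); these are not estimates from this study.

def dual_task_rts(soa, effect_ms=0, locus=None, task=None,
                  t1=(150, 200, 100), t2=(100, 150, 80)):
    """Return (RT1, RT2) in ms, each measured from its own stimulus onset.

    t1, t2: (pre-central, central, post-central) stage durations for Task 1 and Task 2.
    effect_ms is added to the stage named by locus ('pre', 'central', 'post') of task (1 or 2).
    """
    stages = {1: list(t1), 2: list(t2)}
    stage_index = {"pre": 0, "central": 1, "post": 2}
    if locus is not None:
        stages[task][stage_index[locus]] += effect_ms

    a1, b1, c1 = stages[1]
    a2, b2, c2 = stages[2]
    central1_end = a1 + b1                        # Task 1 claims the bottleneck first
    central2_start = max(soa + a2, central1_end)  # Task 2's central stage waits out any "slack"
    rt1 = a1 + b1 + c1
    rt2 = central2_start + b2 + c2 - soa          # Task 2 latency relative to its own onset
    return rt1, rt2

for soa in (150, 800):  # short vs. long SOA
    baseline = dual_task_rts(soa)[1]
    t1_pre = dual_task_rts(soa, 50, "pre", 1)[1]          # propagates to Task 2 at the short SOA only
    t1_post = dual_task_rts(soa, 50, "post", 1)[1]        # never propagates
    t2_pre = dual_task_rts(soa, 50, "pre", 2)[1]          # absorbed into the slack: under-additive
    t2_central = dual_task_rts(soa, 50, "central", 2)[1]  # full effect at every SOA: additive
    print(f"SOA {soa:3d} ms | RT2 baseline {baseline} | Task1-pre {t1_pre} | "
          f"Task1-post {t1_post} | Task2-pre {t2_pre} | Task2-central {t2_central}")
```

With these illustrative durations, baseline Task 2 latency is longer at the short SOA than at the long SOA (the PRP effect), a Task 1 pre-central manipulation delays Task 2 only at the short SOA (propagation), a Task 2 pre-central manipulation disappears at the short SOA (under-additivity), and a Task 2 central manipulation adds its full cost at every SOA (additivity).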
Based on the PRP paradigm, it has been suggested that in typically fluent young adults, semantic encoding is likely to be a pre-central stage process, which is highly automatic and can operate without dual-task interference from non-linguistic processing, as patterns of effect propagation and/or an under-additive effect have been observed in the PRP paradigm (Ayora et al., 2011; Dell'Acqua et al., 2007; Ferreira & Pashler, 2002). In contrast, it is proposed that phonological encoding is a central-stage process, which is capacity demanding and involves shared cognitive resources. Because of this, it is likely to be interfered with by concurrent non-linguistic processing, and, thus, patterns of effect propagation and an additive effect have been observed in the PRP paradigm (Ayora et al., 2011; Cook & Meyer, 2008; Roelofs, 2008a).

The PRP paradigm, when combined with the PWI paradigm, serves as a useful methodology to simultaneously examine the effects of cognitive demand on linguistic encoding processes in AWS. In addition, an advantage of using the PWI paradigm to examine linguistic processing in AWS is that all naming responses are compared against the speaker's own baseline (the unrelated condition), thus factoring out potential effects of speech motor movement; at the same time, however, this does not allow examination of the interaction between motor and language processing (Kleinow & Smith, 2000), which will be discussed in the following section.

Stuttering and Linguistic Processing

Theoretical Approaches to the Etiology of Stuttering

Several theories of stuttering propose that it results from temporal dis-synchrony in speech planning and production (Adams, 1990; Starkweather & Gottwald, 1990; Bosshardt, 2006; Howell, 2004; Karniol, 1995; Perkins, Kent, & Curlee, 1991; Peters & Starkweather, 1990; Postma & Kolk, 1993). For example, the Neuropsycholinguistic (NPL) theory proposes that temporary misalignment between linguistic processes (e.g., semantic, syntactic and phonological) and paralinguistic (prosodic) processing, together with time pressure, underlies stuttering (Perkins et al., 1991). The EXPLAN theory holds a similar view of temporal asynchrony, although it suggests that the asynchrony specifically occurs between the speech plan and motor execution; it also proposes a basic deficiency in phonological processing that triggers the asynchrony (Howell, 2004). The Covert Repair Hypothesis (CRH) also proposes a deficit at the phonological level, specifically an error-prone phonological encoding system that triggers stuttering as a coping mechanism during the correction of internal phonological errors (Postma & Kolk, 1993).
The Demand and Capacity Model (DCM) suggests that stuttering arises when internal or external demand exceeds the speaker's capacity (Adams, 1990; Peters & Starkweather, 1990; Starkweather & Gottwald, 1990). Although it does not explicitly include a temporal disruption in processing, it is compatible with the non-fluid processing between linguistic and motor skills in PWS suggested by the above theories. According to the DCM, the mismatch between demand and capacity could occur in any of multiple speech-related components, such as motor, linguistic, cognitive and emotional aspects. Although some have criticized the DCM as untestable because of its lack of specificity in identifying particular systems that are vulnerable to breakdown in stuttering (Bernstein Ratner, 2000), several studies have shown increased stuttering in adults with increased demand in language tasks (Bosshardt, 2002; Bosshardt et al., 2002). However, this model lacks a well-defined theoretical mechanism for incorporating multiple factors, and it has been criticized for circular reasoning: based on the model, the measure of a mismatch between demand and capacity is stuttering itself (Siegel, 2000). Nevertheless, the model provides a conceptual direction for examining selected processing skills (capacity) and processing demand by referencing task behavior to matched control profiles on speech production tasks. Consistent with this view, findings have shown that AWS lack comparable capacity relative to typically fluent speakers in linguistic skills, including semantic processing (Bosshardt, 1993; Bosshardt & Fransen, 1996; Blomgren, Nagarajan, Lee, Li, & Alvord, 2003), syntactic processing (Weber-Fox & Hampton, 2008) and/or phonological processing (de Nil & Bosshardt, 2000); they also show atypical psychoneurological profiles (such as differences in timing, amplitude, and/or cortical activation maps) during linguistic processing, either when speaking or listening, as compared to NS (Weber-Fox & Hampton, 2008; de Nil & Bosshardt, 2000).

Another multi-factorial model of stuttering (the Dynamic Multifactorial Model; Smith, 1999; Smith & Kelly, 1997) also proposes a stuttering complex involving the motor system and multiple other components (linguistic, cognitive and emotional). However, unlike theories that propose a specific internal processing deficiency leading to disruptions in motor execution, this model does not presume any pre-motor planning deficit within the production system per se (Smith et al., 2010); rather, it proposes that the atypical interaction between the motor system and other processes plays the primary role in the etiology of stuttering. According to this model, stuttering is additionally viewed as a continuous symptom characterized by underlying as well as observable events in speech; under this assumption, high processing demand should interact with and destabilize the speech-motor system in AWS, as the motor systems appear to be less mature in AWS (Kleinow & Smith, 2000).

These stuttering theories are not necessarily incompatible with one another, especially those proposing multiple interactive components underlying stuttering. However, the different models provide directions for examining selected skills in relation to processing demand to better understand the interaction among components; testing selected components can provide more data to strengthen and refine the current theoretical frameworks.
In particular, models proposing an underlying linguistic deficiency in PWS versus those that do not support this view are worth examining. If linguistic deficiency plays a primary role in stuttering (e.g., as specified by the NPL and DCM models), differences should be reliably observed between AWS and typically fluent speakers at a specified level (e.g., at the phonological level) or in the interactions among prospective components (e.g., at the phonological level under high processing demand). For example, according to Bosshardt and colleagues' findings (see Bosshardt, 2006, for a review), AWS are less efficient linguistic processors across linguistic domains, and deficiency may or may not be observed under low demand; thus, processing differences would be more likely to be observed when AWS are examined under high cognitive demand, particularly in semantic processing as compared to phonological processing. However, according to the CRH and EXPLAN models, a phonological encoding deficit plays the primary causal role in stuttering, and examination of multiple levels of linguistic processing should reveal a selective difference at the phonological level.

Few studies have examined multiple levels of linguistic processing using the same paradigm with the same group of participants. Of these, most have examined semantic and phonological processes, and findings have been inconsistent, variously concluding that neither process is deficient (Hennessey, Nang, & Beilby, 2008), that semantic but not phonological processing is deficient (Bosshardt & Fransen, 1996; Bosshardt, Ballman, & de Nil, 2002), or that both processes may be deficient in AWS (Bosshardt, 1993; de Nil & Bosshardt, 2002; Song, Peng, & Ning, 2010). Some of this research is reviewed below.

Semantic Processing in AWS

Lexical processing is often examined as one potential index of compromised semantic processing skills that could potentially lead to speech disruptions. Semantic encoding is thought to occur at the initial stage of lexical access and could conceivably lead to many problems "downstream"; however, it is not proposed by any current major stuttering theory as the primary cause of stuttering. Supporting evidence for a semantic processing deficiency in AWS is usually taken to support a broader deficiency (such as in lexical access) upstream in the production system. Measures of behavioral (Bosshardt, 1993; Bosshardt & Fransen, 1996; Bosshardt et al., 2002) and neural (Weber-Fox, 2001; Weber-Fox & Hampton, 2008) activity have suggested that AWS are more vulnerable to interference effects and show less distinct patterns in the neural substrates involved in activating or analyzing semantic information than do typically fluent speakers.

Bosshardt (1993) specifically examined the hypothesis that AWS are less efficient in semantic processing. If so, this impairment should be observed in short-term memory performance for nonwords with different levels of meaningfulness, defined by the average number of word associations generated by each nonword item (the association norm; see Bosshardt, 1993). Adults who stutter performed more poorly than typically fluent speakers on items with high but not low meaningfulness. This is in line with research showing a lexical contribution (such as imageability, concreteness and lexical frequency) to serial recall (e.g., Bourassa & Besner, 1994; Roodenrys & Quinlan, 2000; Walker & Hulme, 1999).
However, it is difficult to determine whether the group difference was attributable to lexical semantic processing per se, as the nonword stimuli not only bore no specific or explicit semantic information but also differed in pronounceability between the two levels of meaningfulness.

Evidence from studies examining lexical access in real-word processing suggests that under high demand, AWS appear to be particularly deficient in semantic but not phonological processing (Bosshardt et al., 2002; Bosshardt & Fransen, 1996; but see de Nil & Bosshardt, 2000, and Song et al., 2010, who did not find such a pattern). In an immediate sentence production study, AWS showed more stuttering during sentence generation and immediate sentence production when concurrently performing semantic judgments (whether two words belong to the same semantic category) than when concurrently performing phonological judgments (whether two words rhyme), suggesting a selective deficit in lexical semantic processing (Bosshardt et al., 2002). However, in the delayed production version of Bosshardt and colleagues' (2002) study, in which participants silently generated sentences for a set period of time and then produced the planned sentence upon a signal, AWS were comparable to typically fluent speakers in semantic, but not phonological, judgment during the silent generation phase (de Nil & Bosshardt, 2000). Such a design makes it difficult to test for a difference between AWS and NS in sentence generation and production, as it is possible that the difference lies in strategy use during the silent generation phase. For example, the finding that AWS (but not NS) showed higher semantic judgment accuracy during the generation phase than the production phase could reflect that AWS strategically (and partially) held off sentence planning in order to perform the semantic judgments and relieve processing load during the silent generation phase of the delayed production study. Such a strategy would be less feasible in the immediate production study, and could thus modulate the outcomes.

In a word monitoring study (Bosshardt & Fransen, 1996), AWS were slower in monitoring for semantically-related words, but not for identical or phonologically-related words, while reading sentences silently in a word-by-word, self-paced manner. The study also found no difference in performance among various types of sentences, including syntactically and semantically correct sentences, syntactically correct but semantically incorrect (meaningless) sentences and scrambled sentences, suggesting that the between-group difference in semantic monitoring was not influenced by context or syntactic processing skills. This finding suggested a specific processing deficit at the semantic level.

A recent study compared semantic, phonological and orthographic processes in AWS and concluded that AWS show selective deficits in semantic and phonological, but not orthographic, processing in Chinese (Song et al., 2010). Using a dual-task paradigm, the authors varied the temporal interval between the two tasks (a visual word naming task and a linguistic judgment task) to manipulate cognitive load. Adults who stutter showed more dual-task interference (greater delays in response time and/or reduced accuracy) in either naming or judgment performance under concurrent semantic (whether a homonym contained an action meaning) and phonological (whether a word carried a certain rhyme) judgment, but not orthographic (whether a character contained a certain sub-component) judgment.
This is consistent with prior research on the neural activity of visual word processing, showing that the early language perception stage in AWS appears to be comparable to that observed in typically fluent speakers (Cuadrado & Weber-Fox, 2003; Weber-Fox, 2001).

Despite the evidence supporting a stronger role for semantic deficiency and a less important role for phonological processing in stuttering, research specifically contrasting the semantic and phonological encoding aspects of word production in AWS has failed to find evidence for either a semantic or a phonological deficiency. In a picture naming study, AWS and fluent speakers showed no difference in either semantic encoding, measured by the semantic interference effect, or phonological encoding, measured by the phonological facilitation effect, in the Picture-Word Interference (PWI) paradigm (see Hennessey et al., 2008; this paradigm is discussed in greater detail in the surrounding sections). Although no deficiency in either semantic or phonological processing was implicated in AWS in this PWI study, it is also possible that single PWI tasks impose very low processing demand and lack the sensitivity to detect any subtle deficit in AWS. Nonetheless, taken together, prior studies seem to suggest a potential role for semantic deficiency in stuttering, characterized by less efficient semantic processing, particularly under high processing demand for monitoring and when making relevant semantic judgments.

Phonological Processing in AWS

The notion that stuttering is characterized by a phonological encoding deficiency receives strong support from research conducted with CWS, showing depressed phonological working memory, measured by nonword repetition tasks (Anderson, Wagovich, & Hendricks, 2009; Anderson et al., 2006; Hakim & Bernstein Ratner, 2004), less well-developed phonological representations and encoding skills, measured by the phonological priming effect in picture naming tasks (Byrd et al., 2007), and less efficient rhyme judgment skills, measured by electrophysiological activity in visual rhyme judgment tasks (Weber-Fox, Spruill, Spencer, & Smith, 2008). However, findings have been inconsistent in studies conducted with AWS.

Let us consider the task of naming a single pictured item. Monitoring for a target phoneme during naming is hypothesized to include phonological encoding and phonemic retrieval/selection (Wheeldon & Levelt, 1995; Wheeldon & Morgan, 2002). In prior research, when compared to typically fluent speakers, AWS have been slower in phoneme monitoring during silent naming but not in other monitoring situations (such as monitoring for a pure tone), leading to the conclusion that there is a specific deficit in phonological encoding in AWS (Sasisekaran & de Nil, 2006; Sasisekaran et al., 2006). However, in other monitoring studies, when participants monitored for phonologically-related (rhyming) words while silently reading passages, AWS did not differ from typically fluent speakers (Bosshardt & Fransen, 1996). The discrepancy between the two monitoring studies could be attributable to task or representational differences, in that one involved silent naming whereas the other involved silent reading, and one monitored for phonemes whereas the other monitored for rhymes. Several other studies using rhyme judgment in single-task conditions have also failed to find evidence for a phonological deficiency in AWS (de Nil & Bosshardt, 2000; Weber-Fox, Spencer, Spruill, & Smith, 2004).
In an event-related potential (ERP) study of phonological processing, AWS performed similarly to typically fluent peers in a rhyme judgment task for word pairs that were congruent in orthography and phonology (i.e., orthographically similar and rhyming, such as thrown/own, or completely incongruent, such as cake/own), but not when word pairs were partially incongruent, making the task more difficult (i.e., orthographically similar but not rhyming, such as gown/own). Because there is no evidence to suggest that AWS differ from typically fluent speakers in visual processing of orthography or in early-stage phonological processing, as measured by early ERP components corresponding to the early time course of phonological processing (Weber-Fox, Spencer, Spruill, & Smith, 2004), it appears that speeded analysis and judgment of incongruent/complex linguistic information is difficult for AWS.

Phonological working memory (Baddeley & Hitch, 1974) is typically assessed with nonword repetition (NWR) tasks, in which items increasing in phonological length and complexity (e.g., "mab" > "mabshibe" > "mabfieshabe" > "mabshaytiedoib") are verbally presented and participants repeat the items one at a time. In theory, this task involves decoding, storing and encoding of novel phonetic sequences. AWS appear slower in NWR articulation rate than typically fluent speakers, suggesting slower rehearsal speed and poorer phonological working memory (Bosshardt, 1993). However, in one study, no difference was observed in NWR accuracy, although the speech-motor coordination during NWR differed between AWS and their typically fluent peers, suggesting an atypical interaction between phonological processing and the speech motor system in AWS (Smith et al., 2010).

Several other production studies have also reported no difference between AWS and NS in phonological encoding. In a PWI study, AWS demonstrated phonological facilitation effects similar to those seen in NS during conditions thought to improve phonological encoding (Hennessey et al., 2008). Burger and Wijnen (1999) found equivalent late-stage phonological encoding (encoding the selected phonemes into a metrical structure from left to right) in AWS, using the implicit priming paradigm. The paradigm reflects left-to-right serial encoding of the selected phonemes into a metrical structure (prosodification). In the task, participants learn a list of words that are homogeneous or heterogeneous at the word-initial positions (shared vs. different word-initial phoneme(s)) (e.g., "room", "roof", "rule"), and memorize each word with its semantically-related cue (e.g., cue "house" - target "room"). Participants then produce each target word based on the cue provided by the experimenter (Meyer, 1990). Both AWS and fluent speakers responded similarly: faster when word-initial sounds were available and could be implicitly prepared (the homogeneous condition) than when they were unavailable (the heterogeneous condition). Burger and Wijnen's (1999) study was an attempt to replicate an earlier implicit priming study by Wijnen and Boers (1994), which found that AWS showed some atypical phonological encoding: unlike typically fluent speakers, AWS were not primed by the onset consonant alone but required the onset consonant plus the subsequent vowel to improve performance, supporting the hypothesis that AWS had difficulty encoding the stress-bearing unit (the vowel nucleus) and thus could only benefit when the vowel was primed.
However, this effect was mainly driven by a small subgroup of PWS, and the result was not replicated.

In sum, potential semantic and phonological deficiencies in PWS have been researched primarily to evaluate the potential role of a lexical/word-form encoding deficiency in the etiology of stuttering. This is not unreasonable: the relationship between word processing demand and stuttering is supported by empirical evidence showing that stuttering rate increases with selective linguistic manipulations (Bosshardt, 2002; Bosshardt et al., 2002; Brundage & Bernstein Ratner, 1989; but see Vasić & Wijnen, 2005). However, findings regarding the presence/absence of either specific type of linguistic deficiency remain equivocal. This is not particularly surprising, for two reasons. First, given the wide range of tasks implemented among studies, these tasks might have recruited and reflected a variety of cognitive/linguistic processes in addition to the target process. Second, it has been suggested that AWS are more vulnerable to cognitive demand, particularly in processes that involve decision making (Hennessey et al., 2008; Weber-Fox et al., 2004). If linguistic processes in AWS are more capacity demanding (less modular), as suggested by Bosshardt and colleagues, the different tasks used across studies (e.g., category judgment versus simple picture naming) could result in different levels of available capacity for linguistic processing in AWS and lead to different outcomes.

Concurrent Processing in AWS

In the adult stuttering research literature, linguistic processing efficiency has mainly been examined using the dual-task paradigm (e.g., Bosshardt, 1999; Bosshardt, 2002; Bosshardt et al., 2002; de Nil & Bosshardt, 2000; Song et al., 2010). Despite the equivocal findings on whether degraded performance reflects differences in semantic versus phonological processing ability, the overall findings of poorer linguistic performance under concurrent processing demand suggest that linguistic processing in AWS is less modular than in typically fluent speakers. Further, the relationship between linguistic processing inefficiency and stuttering itself has also been supported by studies using the dual-task paradigm to show that processing demand modulates stuttering rates in AWS. In a concurrent overt production and subvocalization study, participants repeated words overtly while they silently read or memorized words that were manipulated in phonological similarity to the repeated words. Stuttering rate in AWS was significantly higher when the two tasks involved phonologically similar words than when they involved dissimilar words, whereas the disfluency rate in typically fluent peers did not differ between the two conditions (Bosshardt, 2002). In another dual-task study involving sentence production (Bosshardt et al., 2002), the manipulation of processing demand modulated the number of propositions in sentence production, and stuttering rates increased along with the number of propositions in AWS, whereas disfluency rates showed no relationship with sentence propositions in their typically fluent peers. Both findings support the hypothesis that the overall level of processing demand plays an important role in stuttering.

The dual-task paradigm has also been used to examine a different account of stuttering, the vicious circle/cycle hypothesis (VCH). The hypothesis proposes that people who stutter allocate too much attention to monitoring the temporal flow of speech and reactively "fix"
any perceived discontinuity, which perturbs production fluidity; that is, the hyper-vigilant monitoring of speech flow is the major causal factor in stuttering (Vasić & Wijnen, 2005; Bernstein Ratner & Wijnen, 2007). The dual-task paradigm described in the following sections allows manipulation of attention allocation during concurrent processing to test the VCH. In Vasić and Wijnen's (2005) study, the rate of stuttering in AWS decreased when they performed a language production task (retelling newspaper articles) while simultaneously engaged in a secondary task (playing video games or self-monitoring for target words), and the decrease was greater with word monitoring than with video games. The authors argued that the video games divided AWS' attention away from language production in general, whereas word monitoring distracted monitoring attention away from the habitual focus on speech flow, and was thus most facilitative to fluency.

Not all studies concur in finding this pattern: in previously described studies that involved speech production and a concurrent secondary task, some AWS showed an increase in stuttering rate (Bosshardt, 2002). Another study, in which participants continuously repeated a list of three words and simultaneously performed mental calculation, showed greater variance in stuttering rate during the dual-task than during the single-task condition (Bosshardt, 1999). Thus, it is difficult to determine the precise nature of the relationship between stuttering rate and cognitive load from prior studies, for the following reasons. The two studies differed in the context of speech production and also in the task-specific demands of the secondary tasks. Another challenge for conclusions drawn from comparing fluency across (semi-)spontaneous language samples, as in Vasić and Wijnen's (2005) study, is the lack of control over the properties of the produced samples (e.g., quality, complexity and rate of speech). Without evidence that samples were comparable between the single- and dual-task conditions, the observed decrease in stuttering rates in the dual-task condition could be an artifact of compromised content under increased demand (Bosshardt et al., 2002).

In sum, the variety of task demands across studies has imposed a major difficulty in drawing conclusions about the nature of language processing skills in adults who stutter; these problems are complicated by the suggested subtlety of the linguistic deficits, potential interactions with central cognitive processes and the high individual variability of the population. Results from these studies could reflect different levels of cognitive demand and/or different cognitive/linguistic processes. The key to controlling for cognitive/linguistic processes would be to use the same tasks, involving the same input stimuli and output responses, across all levels of demand. The combined picture-word interference (PWI) and psychological refractory period (PRP) paradigms serve as an excellent approach to investigating the effects of cognitive demand on semantic and phonological processes in AWS.
Purpose of the Study

There has been consensus among theoretical approaches that stuttering is likely attributable to multiple speech-related factors, such as genetic predisposition (Kang et al., 2010), speech-motor skills (e.g., Smits-Bandstra et al., 2006), speech-language (linguistic) skills (e.g., Postma & Kolk, 1993), sensory-motor integration abilities (e.g., Neilson & Neilson, 1987; Max, Guenther, Gracco, Ghosh, & Wallace, 2004), cognitive skills (e.g., Bosshardt, 2006) and emotional reactivity and regulation (Karrass et al., 2006). Each area has been identified as a potential contributor to stuttering. However, it is still not clear how these multiple components interact within the language production systems of PWS and NS. The current study focused on the relationship among cognitive demand, linguistic processing and stuttering.

In order to investigate the essential processes underlying speech/language production, the study targeted the hypothetical stages presumed to underlie word production. Word production is a relatively simple linguistic task, highly practiced in everyday life and essential for producing language in any context. In most word production models, when a concept is formed, linguistic processes proceed in a relatively prompt and automatic manner within several hundred milliseconds (Levelt et al., 1999) to generate the desired word form, whereas the more complex process of sentence production is more likely to involve various cognitive processes (Levy, Pashler, & Boer, 2006; Kemper et al., 2009; Kubose et al., 2006) that occur over a longer period of time.

The overall purpose of this study was to provide further details about the cognitive-linguistic dynamics of word production in stuttering. The study was conducted within a framework based on the word production model proposed by Levelt, Roelofs, and Meyer (1999) and the central bottleneck model (Pashler, 1994), to examine whether a phonological encoding deficiency appears to be common in AWS (as proposed by the CRH and EXPLAN models) or whether inefficient linguistic processing, particularly at the semantic level, underlies stuttering (Bosshardt, 2006). The overall purpose of the study was approached with three specific aims.

First, the study examined the "automaticity" of phonological and semantic processes in AWS, aiming to determine whether they were highly automatic (not involving the central bottleneck) or individually capacity demanding (involving the central bottleneck). The automaticity of semantic and phonological encoding was examined based on the central bottleneck model, using the PRP paradigm with a picture-naming task (in the PWI paradigm) and a tone identification task, in two experiments with reversed task orders. It was hypothesized that 1) in AWS, both semantic and phonological encoding would be capacity-demanding (central) processes, and that 2) in typically fluent speakers, semantic encoding would be an automatic (pre-central) process while phonological encoding would be a capacity-demanding (central) process, as suggested in the literature on typically fluent young adults. A pattern of both linguistic encoding skills being capacity demanding in AWS would suggest that these encoding skills are not modular, and would support theories such as the DCM, the dynamic multifactorial model of Smith and colleagues, and the account of Bosshardt and colleagues, which propose a problem space with multiple interactive components during speech/language processing.
The predicted patterns in AWS for the capacity-demanding (central) semantic and phonological processes included 1) when picture naming was the first task, both PWI effects would be observed in tone identification (Task 2) latencies in short SOA and not long SOA (as a result of effect propagation), and 2) when picture naming was the second task, both PWI effects would be observed in the picture naming (Task 2) latencies in both short and long SOA (as a result of additive effect). Predicted patterns in NS for the automatic (pre-central) semantic processing task included 1) when picture naming was the first task, a semantic interference effect would be absent in the tone identification (Task 2) latencies in both short and long SOA (a lack of effect propagation), and 2) when picture naming was the second task, a reduced or absent semantic interference effect would be observed in the picture naming (Task 2) latencies in short compared to long SOA (an under-additive effect). Predicted patterns for the capacity-demanding (central) phonological processing in NS would be the same as the predictions in AWS. Second, the study examined the relationship between automaticity of word production and stuttering severity, aiming to determine whether such automaticity of verbal production was relevant to stuttering behaviors in AWS. It was hypothesized that automaticity in word production could be a potential factor underlying stuttering. 29 If stuttering severity correlated with automaticity of word production, this would support processing efficiency as a strong factor in stuttering, as suggested by Bosshardt and colleagues. It was predicted that the interference effect, measured in the PRP experiment with picture naming as the primary task (Task 1), would correlate positively with stuttering measures by the Stuttering Severity Instrument for Children and Adults- Fourth Edition (SSI-4; Riley, 2009). Third, the study examined for potential ?deficiency? in the two linguistic processes in AWS by manipulating the role of cognitive demand, aiming to determine whether there was fundamental deficiency in each of the linguistic processes, which could be observed even with low cognitive demand, or there was a subtle deficiency which could only be observed with high cognitive demand. This study focused on cognitive demand that was non-linguistic in nature, and examined the effects of cognitive demand via temporal manipulation of SOA, without changing other task demands in stimuli and responses. It was hypothesized that any potential semantic or phonological deficiencies were relatively subtle in AWS, and that these deficiencies would only be observed in high cognitive demand conditions. If the subtleness of both linguistic deficiencies could be demonstrated through varying levels of cognitive demand, it would suggest that cognitive demand would be a potentially strong factor to account for the equivocal findings in the adult stuttering literature, and add to the data supporting multi-factorial theories. However, if AWS showed a selective deficit in semantic but not phonological encoding, this would provide further evidence against the CRH and EXPLAN models, and vice versa. Further, if the semantic and/or 30 phonological encoding skills play(s) a primary role in stuttering, the encoding skill(s) should show an observable relationship with stuttering measures. This hypothesis was examined in a PRP experiment in which a picture- naming task with a PWI paradigm was the primary protocol (Task 1). 
With the assumption that cognitive demand would be higher when two tasks were performed concurrently (short SOA) than sequentially (long SOA), it was predicted that group differences would be observed in both the semantic interference effect and the phonological facilitation effect in short SOA (high demand) but not long SOA (low demand). Further, both semantic interference and phonological facilitation effects would correlate with the stuttering measures recorded by the SSI-4. The study included a pilot study and a series of two experiments. The pilot study tested two sets of stimuli to examine semantic and phonological encoding in typically fluent young adults, using a picture-naming task with the PWI paradigm. The two sets of stimuli were adapted from prior research, to better control for lexical factors that might affect word production particularly in AWS (e.g., number of syllables). Experiment 1 examined the automaticity of semantic and phonological encoding in typically fluent young adults in the PRP paradigm, using the same picture-naming task as in the pilot study and a tone identification task, with the purpose of replicating previous findings and contributing to the typical literature. In Experiment 1A, the picture-naming task preceded the tone identification task, while in Experiment 1B, the task order was reversed. 31 Experiment 2 examined the automaticity of semantic and phonological encoding in AWS and matched NS through a set of experiments. Experiment 2A and Experiment 2B used the same design and tasks as in Experiment 1A and 1B, respectively. In addition, Experiment 2A examined how cognitive demand affected semantic and phonological encoding in AWS and matched NS, to determine the presence/absence of a subtle/fundamental deficiency in semantic and phonological encoding and the relationship between semantic and phonological encoding skills and stuttering in AWS. 32 Chapter 2: Pilot Study ? Word Encoding in Typically Fluent Young Adults The picture-word interference paradigm is one method that permits us to investigate the implicated lexical processes underlying word encoding. As discussed earlier, it contrasts verbal responses between related and unrelated distracter conditions, with the assumption that the contrasting conditions differ only in the type of interference. Therefore, stimuli among conditions should be well controlled for other factors potentially affecting lexical access, such as word frequency, word length, lexical neighborhood, phonological and orthographic structure (Andrews, 1992; Grainger, 1990; Hudson & Bergman, 1985). Yet, not all prior research has carefully controlled for lexical factors besides the number of phonemes/letters in the stimuli. Given that stuttering has been suggested to relate to word frequency, word length, onset phoneme (Brown, 1945) and potentially in orthographic-phonological processing (Weber-Fox et al., 2004), this study adapted stimuli from prior research, used mainly consonant-onset target stimuli, and matched across distracter conditions on the mentioned lexical factors. Since prior research in the typical literature has mainly examined the young adult population (mostly undergraduate students), the present experiment was piloted on undergraduate students, with the goal of replicating the same PWI effects in picture-naming tasks found in prior research, but using the adapted stimulus lists. Two lists were created for two experiments in the main study (Experiment 1 and 2 in the following chapters). 
33 In the PWI paradigm, each target picture is paired with three visual word distracters that vary in relatedness to the target: a semantically-related distracter, a phonologically-related distracter and an unrelated distracter. Each target-distracter pair is presented simultaneously, while the participant is instructed to name the picture and ignore the word. Semantically-related distracters typically slow down the naming responses more than unrelated distracters, which is known as the semantic interference effect, reflecting the presumed early stage of semantic encoding in word production. In contrast, phonologically-related distracters typically speed up the naming responses when compared to unrelated distracters; this is known as the phonological facilitation effect, reflecting the presumed later-stage phonological encoding (phoneme selection) in word production. It was hypothesized that the typical PWI effects (the semantic interference effect and the phonological facilitation effect) would be observed using both stimulus lists, with no significant list difference in the induced PWI effects. Method Participants Twenty (5 male, 15 female) typically fluent young adults with a mean age of 20 years old (SD = 1.6) participated in the word encoding experiment. All participants were native English-speaking undergraduate students in the local campus community, and none reported speech, language or hearing disorders. All students participated in the study for course credits. 34 Stimuli Stimuli for were adapted from prior PWI studies (Cook & Meyer, 2008; Damian & Martin, 1999; Hennessey et al., 2008). Sixty highly namable, 3 inch by 3 inch line-drawing objects were selected from the International Picture Naming Project (Szekely et al., 2004) and Snodgrass and Vanderwart (1980) and randomly assigned into two lists, A and B (Appendix A). Each picture was paired with three distracter words in bold Arial 18-point font. The three types of distracters included a semantically-related distracter that was categorically related to the target picture, a phonologically-related distracter that shared at least two-thirds of the phonemes in identical positions with the target picture name, and an unrelated distracter that showed no obvious relationship with the target picture in meaning or sound. For example, the target picture cake was paired with the semantic distracter pie, the phonological distracter cave and an unrelated distracter deer. Distracter conditions were matched for various lexical properties, including familiarity rating, log- transformed word frequency, number of phonological neighbors, number of orthographic neighbors, word length in the number of letters, phonemes and syllables (ps > .05), all measures based on MRC Psycholinguistic Database (Wilson, 1988) and English Lexicon Project (Balota et al., 2007). A detailed explanation of the lexical properties is provided below. Familiarity rating and word frequency were extracted from MRC Psycholinguistic Database (Wilson, 1988), with familiarity rating derived from merging three sets of familiarity norms (Gilhooly & Logie, 1980; Pavio, unpublished; Toglia & Battig, 1978). Remaining lexical properties were extracted from the English 35 Lexicon Project (Balota et al., 2007), with log-transformed word frequency derived from Hyperspace Analogue to Language frequency norms (Lund & Burgess, 1996). 
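The matching criterion described above (no reliable difference across distracter conditions, ps > .05) can be checked property by property. The following is only a sketch of such a check, not the procedure reported in the study; the file and column names are hypothetical.

    # Sketch: verify that the three distracter conditions are matched on each lexical
    # property with a one-way ANOVA per property (expecting p > .05 throughout).
    # Hypothetical file and column names.
    import pandas as pd
    from scipy.stats import f_oneway

    stimuli = pd.read_csv("distracters_list_A.csv")   # one row per distracter word
    properties = ["familiarity", "log_frequency", "n_letters", "n_phonemes",
                  "n_syllables", "n_ortho_neighbors", "n_phono_neighbors"]
    conditions = ["semantic", "phonological", "unrelated"]

    for prop in properties:
        groups = [stimuli.loc[stimuli["condition"] == cond, prop] for cond in conditions]
        f_stat, p_value = f_oneway(*groups)
        print(f"{prop}: F = {f_stat:.2f}, p = {p_value:.3f}")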
The number of orthographic neighbors was defined as the number of words that can be obtained by changing one letter while preserving the identity and positions of the other letters (e.g., cash and cast); the number of phonological neighbors was defined as the number of words that can be obtained by changing one phoneme while preserving the identity and positions of the other phonemes (e.g., cash and cat). Lexical properties of the stimuli are presented below, with properties matched across distracter types (Table 1).

Table 1. Lexical properties of stimuli (means per condition)

List A
  Lexical property                   Target   Semantically-related   Phonologically-related   Unrelated
  Familiarity                        565      547                    530.52                   532
  Log frequency                      9.46     8.49                   8.66                     8.75
  Number of letters                  4.4      4.63                   4.53                     4.63
  Number of phonemes                 3.67     3.73                   3.80                     3.60
  Number of syllables                1.23     1.23                   1.23                     1.23
  Number of orthographic neighbors   8.43     5.80                   6.67                     6.23
  Number of phonological neighbors   15.9     13.33                  15.73                    12.70

List B
  Lexical property                   Target   Semantically-related   Phonologically-related   Unrelated
  Familiarity                        557      542                    526                      525
  Log frequency                      9.49     8.73                   8.73                     8.83
  Number of letters                  4.33     4.70                   4.60                     4.52
  Number of phonemes                 3.57     3.67                   3.80                     3.69
  Number of syllables                1.30     1.30                   1.27                     1.24
  Number of orthographic neighbors   7.43     5.70                   6.83                     6.24
  Number of phonological neighbors   16.47    11.37                  13.60                    12.17

Apparatus
Stimuli were presented through PsyScope X software (Cohen, MacWhinney, Flatt, & Provost, 1993; http://psy.ck.sissa.it) on a 15-inch MacBook Pro with Mac OS X. Verbal response latencies were recorded via a Shure SM58 microphone interfaced with an ioLab USB Button Box. The USB Button Box voice key was calibrated each session through the PsyScope USB Button Box panel and set to a threshold sensitive to verbal responses for each participant. Response accuracy was coded online during the experiment and all responses were recorded using a Sony ICD-P520 digital voice recorder.
Procedure
Signed consents were obtained from all participants prior to participation. All testing took place in a quiet room for approximately half an hour. Participants sat in front of the computer in a comfortable position and spoke with habitual vocal loudness while the voice key was calibrated and adjusted accordingly.
Practice. Participants first completed the practice phase with the 60 target pictures presented one at a time. Participants were to name the picture as quickly and accurately as possible following a 500 ms fixation cross; the correct name was presented for 500 ms at the bottom of the picture following each naming response. All pictures were presented once in a random order, with a randomly generated inter-trial interval between 500 and 600 ms.
Task. Participants then performed the picture-naming task. Participants were instructed to fixate at the center of the screen and name the pictures as quickly and accurately as possible while ignoring the distracter words. Each trial included a 1000 ms fixation cross at the center, followed by a 500 ms blank screen, then a simultaneous presentation of a target picture and a distracter word, which was superimposed onto the picture in the center of the screen. The picture was presented for a maximum of 2000 ms, while the distracter word was presented for 200 ms and immediately replaced by a 500 ms visual mask of seven "X"s. Each trial ended with a verbal response or 2000 ms after picture-word stimulus onset, whichever occurred first.
Figure 5. Time course of events in a trial in Experiment 1.
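For reference, the trial structure just described can be summarized as an ordered event list. The sketch below is purely illustrative; the study ran the task in PsyScope X, and none of the names here come from the original script.

    # Illustrative summary of the picture-naming (PWI) trial time course described above.
    # Hypothetical Python constants, not the PsyScope script used in the study.
    PWI_TRIAL_EVENTS = [
        ("fixation_cross", 1000),          # ms
        ("blank_screen", 500),             # ms
        ("picture_plus_distracter", 200),  # distracter word visible for 200 ms
        ("picture_plus_mask", 500),        # word replaced by the "XXXXXXX" mask
        ("picture_alone", 1300),           # remainder of the 2000 ms picture window
    ]
    MAX_TRIAL_DURATION_MS = 2000           # trial ends at the verbal response or 2000 ms
                                           # after picture-word onset, whichever comes first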
Design
The experiment included two independent, within-subject variables: distracter type (semantically-related, phonologically-related and unrelated) and list (A and B). Distracter type was manipulated completely within participants and items, and list was manipulated within participants. All target stimuli were pseudo-randomized within lists, with no single prime condition occurring on more than three consecutive trials, and list order was counter-balanced across participants. Reaction time and accuracy rate were measured. Reaction time was defined as the latency between stimulus onset and response onset. For statistical analysis, reaction time was transformed into z-scores by subject and by task to correct for latency differences between tasks and skewness in the latency distribution (Winer, Brown, & Michels, 1991). Accuracy rate was defined as the percent of correctly responded trials; any initial response not corresponding to the target was marked as an error, including incorrect responses, disfluencies such as fillers ("uh"), and non-verbal responses (e.g., coughing, sneezing and laughing). In addition, rationalized arcsine transformed accuracy was also calculated and reported when its results differed from those of the accuracy rate analysis and subsequently influenced interpretation of the findings.
Analysis
Trials were excluded from reaction time analysis when the response was inaccurate and/or when the reaction time was greater than 2000 ms. To evaluate the effects of semantic interference and phonological facilitation, reaction times and accuracy rates were analyzed using two-way 3 X 2 repeated measures analyses of variance (ANOVA) with participant (F1) and item (F2) as random variables, with alpha level at .05. Pairwise comparisons with Bonferroni correction were conducted to separately evaluate the effects of semantic relatedness (semantically-related versus unrelated) and phonological relatedness (phonologically-related versus unrelated).
Results
Following the exclusion of invalid trials, including inaccurate responses (4.6% of data), accurate responses with latencies longer than 2000 ms (0.3% of data) and inappropriate activation of the voice key (e.g., by noise) (0.1% of data), a total of 5% of data were excluded from reaction time analysis. As predicted, results showed the fastest responses in the phonologically-related condition (e.g., when the participant had to name the picture cake and saw the word cave), followed by the unrelated condition (e.g., when the participant had to name the picture cake and saw the word deer), and the slowest and least accurate responses in the semantically-related condition (e.g., when the participant had to name the picture cake and saw the word pie). This pattern was similar for both stimulus lists. Figures 6 and 7 illustrate the average response latency and accuracy in relation to distracter type across lists A and B.
Figure 6. Response latency of the picture-naming task in typically fluent young adults
Figure 7. Response accuracy of the picture-naming task in typically fluent young adults
The observed patterns were confirmed with statistical analyses of response latencies, revealing a main effect of distracter type, F1(2, 38) = 75.838, p < .001, partial η2 = .733; F2(2, 116) = 67.288, p < .001, partial η2 = .537; no main effect of list or interaction between list and distracter was observed (ps > .1). Pair-wise comparisons further showed that responses were statistically significantly slower in the semantically-related than the unrelated condition, with an interference effect of 24 ms, t1(39) = 3.956, p < .001; t2(59) = 3.162, p < .01. Responses were statistically significantly faster in the phonologically-related than the unrelated condition, with a facilitation effect of 65 ms, t1(39) = -11.577, p < .001; t2(59) = -8.323, p < .001. Similar results were obtained for analyses of accuracy rates, F1(2, 38) = 5.279, p < .01, partial η2 = .217; F2(2, 116) = 4.696, p < .05, partial η2 = .075. Pair-wise comparisons of accuracy rates revealed no difference between the phonologically-related and unrelated conditions, and a marginal difference between the semantically-related and unrelated conditions by subject only, t1(39) = -2.241, p < .05; that is, the semantically-related condition not only slowed response latency, but also decreased response accuracy, and there was no evidence of a speed-accuracy tradeoff in the obtained response latency results. The same patterns were obtained in the transformed accuracy analysis.
Discussion
Because stuttering has been linked to a number of linguistic variables that could exert an influence on this experimental paradigm, this experiment modified stimuli from prior research to match across the visual-word distracter conditions on multiple lexical factors, including log-transformed lexical frequency, familiarity, number of syllables, and numbers of orthographic and phonological neighbors. In addition, the study applied visual masking of the distracter word after the preset interval to minimize the possibility of continuous activation of the distracter word throughout the trial, following procedures in Damian and Martin's (1999) Experiment 2. The current experiment showed clear and substantial effects that were comparable to prior research for semantic interference and phonological facilitation in the classic PWI paradigm, successfully replicating prior findings reflecting semantic and phonological encoding for word production in the typically fluent young adult (undergraduate) population. In addition, results suggested that the two sets of stimuli, lists A and B, did not differ from each other in terms of inducing semantic or phonological effects in the PWI paradigm, and would be compatible for use in the two versions of the PRP experiment in the main study. Based on the present stimuli and procedures of the picture-naming task, the following set of experiments (Experiment 1A and 1B) was conducted to examine the automaticity of semantic and phonological encoding, a replication and follow-up of prior research in the typical literature.

Chapter 3: Experiment 1 – Automaticity of Word Encoding in Typically Fluent Young Adults
Introduction
Typically fluent young adults, mainly undergraduate students in the typical literature (Henrich, Heine, & Norenzayan, 2010), have been suggested to exercise highly automatic semantic encoding and demanding phonological encoding skills while they plan for word production.
Several studies have examined the automaticity of semantic and phonological encoding using the PRP paradigm and indicated complex patterns of linguistic processing in relation to cognitive resources. However, there is converging evidence to indicate that lexical-semantic processing appears to be highly automatic without involving central cognitive resources (Ayora et al, 2011; Dell?Acqua et al, 2007). In contrast, phonological encoding, including word-form retrieval and phoneme selection, have been suggested to be quite capacity demanding, involving central cognition or high level of attention (Ayora et al., 2011; Cook & Meyer, 2008; Ferreira & Pashler, 2002; Roelofs, 2008). While Ferreira and Pashler (2002) first obtained PRP patterns for phonological encoding (phoneme selection) that suggested a highly automatic, post- central process, follow-up studies by Cook and Meyer (2008) and Roelofs (2008) both supported the concept that phoneme selection was a capacity demanding task. In particular, Cook and Meyer (2008) proposed that PRP patterns observed in Ferreira and Pashler?s study was potentially an artifact of a slowed self-monitoring process immediately after word encoding, an effect induced by the visible distracters in the 45 classic PWI paradigm. Using the masked priming paradigm, in which the prime (distracter) was presented briefly and masked by symbols to minimize its visibility while maintaining the linguistic effects (e.g., Ferrand, Segui, & Grainger, 1996; Schiller, 1998, 1999), the authors successfully observed the PRP patterns for phonological encoding that were compatible with a capacity-demanding, central process. Detailed investigation of linguistic processing and cognitive mechanisms has been relatively recent and data are limited. Only two studies have examined encoding processes by manipulating the second task (Task 2) in the PRP paradigm. Based on the locus-of-slack logic, determining a central-stage process involves examining patterns of PRP experiments with both task orders. So far, only two studies reported manipulation of word encoding processes in Task 2. Therefore, Experiment 1B was conducted using the same tasks, but with reversed task orders to further determine whether semantic and phonological encoding are pre-central or central-stage processes, aiming to replicate Dell?Acqua and colleagues? (2007) findings suggesting a pre-central semantic processing and Ayora and colleagues? (2011) findings suggesting a central-stage phonological processing. If semantic encoding is a central-stage process, as suggested by Ferreira and Pashler (2004), Experiment 1B, in which the tone identification task was followed by the picture-naming task, should show similar semantic interference effect in picture- naming (Task 2) responses in both short and long SOAs (an additive semantic effect); otherwise, if the semantic interference effect were reduced or absent in the short compared to the long SOA (an under-additive effect), it would suggest a pre-central 46 process. 
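The locus-of-slack reasoning behind these predictions can be made explicit with the standard response-time equation of the central bottleneck model (Pashler, 1994). The formalization below is the commonly used general version, not a derivation specific to the present study:

    RT_2 = \max(A_1 + B_1,\ \mathrm{SOA} + A_2) + B_2 + C_2 - \mathrm{SOA}

where A, B and C denote the durations of the pre-central, central and post-central stages of Task 1 and Task 2. At short SOA the bottleneck binds (A_1 + B_1 > SOA + A_2), so RT_2 no longer depends on A_2: lengthening a pre-central stage of Task 2 is absorbed into the slack (an under-additive pattern), whereas lengthening a central or post-central stage (B_2 or C_2) adds fully to RT_2 at both SOAs (an additive pattern). Conversely, lengthening the central stage of Task 1 (B_1) carries over into RT_2 at short SOA only, which is the propagation pattern described above.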
If Cook and Meyer (2008) were correct that the lack of phonological propagation using the PWI paradigm was an artifact, the reversed task order should not only eliminate the artifact (because PRP interpretation would depend on responses in the picture-naming task itself, prior to the induced self-monitoring effect), but also reveal a pattern consistent with central-stage processing (an additive effect in the picture-naming task (Task 2)) rather than pre-central processing (an under-additive effect in the picture-naming task (Task 2)). Thus, the purpose of this experiment was to examine both semantic and phonological encoding processes in the PRP paradigm, using both task orders, to fully determine whether semantic encoding and phonological encoding were capacity demanding or highly automatic. When the picture-naming task is the first task, and a PWI effect is seen in the tone identification (Task 2) responses in the short SOA but not the long SOA (i.e., effect propagation), it would suggest that the corresponding encoding process is either a pre-central or central-stage process; if no PWI effect is observed in tone identification responses (i.e., a lack of effect propagation), it would suggest a post-central process. The current study followed Cook and Meyer's interpretation that the lack of phonological effect propagation in picture-word interference is likely an artifact of a propagated effect followed by speech monitoring. The predicted patterns for the current study are based on prior research and presented below in Figure 8 (Ayora et al., 2007; Cook & Meyer, 2008; Ferreira & Pashler, 2002).
Figure 8. Predicted latency patterns of Experiment 1A with picture naming as the first task. Solid lines: picture naming task; dashed line: tone identification task.
When the picture-naming task was the second task, and similar PWI effects were seen in short and long SOA (i.e., an additive effect), it would suggest that the corresponding encoding process was either a central-stage or post-central process; if reduced or absent PWI effects were observed in short SOA compared to long SOA (i.e., an under-additive effect), it would suggest a pre-central process. The predicted patterns based on prior research are illustrated in Figure 9.
Figure 9. Predicted latency patterns of Experiment 1B with tone identification as the first task. Solid lines: picture naming task; dashed line: tone identification task.
Method
Participants
Fifteen female adults who had not taken part in the previous word encoding experiment participated in this dual-task experiment, in both Experiment 1A and 1B. Participants were all undergraduate students in the local campus community, with a mean age of 22 years (SD = 4.7). All participants were typically fluent native English speakers, with no known speech or language disorders, with normal/corrected vision and normal hearing acuity, and passed a hearing screening at 500 Hz, 1000 Hz, 2000 Hz and 4000 Hz at 25 dB HL. Participants gave consent and received extra course credits for participation in the study.
Apparatus and Stimuli
Equipment and materials used for both Experiment 1A and 1B were the same as in the pilot study for the picture-naming task, with additional apparatus for the secondary task, the tone identification task. Auditory stimuli were presented to the participant via Sennheiser HD 280 headphones at a comfortable level, and manual responses were collected through three designated keys on a keyboard. Response latency and accuracy of the tone identification task were recorded through the PsyScope program. Stimuli for the picture-naming task were identical to the pilot study, with list A assigned to Experiment 1A and list B to Experiment 1B. Stimuli for the tone-identification task were selected based on prior research (e.g., Ferreira & Pashler, 2002) and consisted of three pure tones of 100 ms duration at 180, 500 and 1200 Hz; they remained the same in both Experiment 1A and 1B.
Procedure
Procedures were the same as in the pilot study, with several additions because of the nature of the dual tasks. All testing was conducted in a quiet room and took approximately an hour. There were three parts to the practice phase: 1) picture naming, 2) tone identification and 3) dual-task picture naming plus tone identification in both task orders. Following picture-naming practice, as described in the pilot study, participants then practiced the tone identification task. Participants were instructed to listen carefully to the three pure tones at three different pitch levels, presented twice in order from low (180 Hz), mid (500 Hz) to high (1200 Hz) frequency. Participants then practiced identifying each pure tone as low, mid or high pitch by pressing one of the three designated keys from left to right, respectively. The trial ended when a response was detected or 2000 ms after tone onset, with visual feedback for errors ("Oops!" in red) and for responses slower than 2000 ms (a warning in red). Tones were presented in random order, six times each, for a total of 18 trials. Participants then practiced the dual-task condition. Picture-word stimuli in this practice section were three pictures and nine words randomly selected and not used as test stimuli. There were nine practice trials, with completely randomized presentation of the pictures, words and SOAs. Trials in the dual-task practice phase followed the identical time course as the test phase described below. Picture-naming and dual-task practice were completed immediately prior to Experiment 1A and 1B with the corresponding picture stimuli and task order. In the test phase, each trial included two tasks: the picture-naming task, which was identical to that described in the pilot study, and the tone identification task, which was identical to that described in the practice phase, except that there was no performance feedback in the test phase. In Experiment 1A, the picture-naming task was presented first (Task 1); the tone identification task (Task 2) was presented either 150 ms or 950 ms after picture onset.
The two SOAs were selected within the range reported in prior research (Ayora et al., 2009; Ayora et al., 2011; Cook & Meyer, 51 2008; Dent et al., 2008; Ferreira & Pashler, 2002). The participants were instructed to fixate on the cross, and to name the picture and identify the tone as quickly and accurately as possible while ignoring the words in the center. The trial ended with a response or 2000 ms after Task 2 stimuli onset. The inter-trial interval in this experiment was randomized between 500 ms to 600 ms. The time course of a trial in Experiment 1A is illustrated in Figure 10. In Experiment 1B, the order of the two tasks was reversed, and all other conditions remained the same (Figure 11). Figure 10. Time course of a trial in Experiment 1A. 52 Figure 11. Time course of a trial in Experiment 1B. Design The purpose of this study was to determine the automaticity of semantic and phonological encoding in typically fluent young adults, by examining the PWI effects (the semantic interference effect and the phonological facilitation effect) in the PRP paradigm with two reversed task orders. This experiment included two versions, 1A and 1B, both with the same design but reversed task orders and different stimulus lists. Each experiment had three independent, within-subject variables: task (picture-naming and tone identification), SOA (100 ms and 950 ms) and distracter type (semantically-related, phonologically- 53 related and unrelated), which were manipulated completely within participants and within items. Thus, in a single experiment, the participant saw a set of 30 target pictures six times, three times per SOA condition and twice per distracter type, across three blocks of 60 trials. The three tones, presented in the secondary task of tone identification, were assigned to each condition equal numbers of times. All stimuli were pseudo-randomized so that no single condition or tone occurred in more three consecutive trials and so that all conditions had equal probability of occurrence within each block. Blocks were completely randomized across participants, while the items were presented in a fixed order within each block. Experiment 1A had picture naming as Task 1 while Experiment 1B had picture naming as Task 2. Task order and handedness for manual response keys were counter-balanced across participants. Reaction time and accuracy rate were measured for each task with the same definitions as in the pilot study. Data analysis Trials were excluded from reaction time analysis when responses to either picture-naming or tone identification task within the same trial was inaccurate. Responses were further excluded when the accurate responses were over 2000 ms, and when the voice key was inappropriately activated (e.g., noises during a trial). Response latencies and accuracy rates for picture-naming and tone identification tasks were analyzed separately for each experiment to evaluate the presence of the PRP effect, using a three-way 2 X 2 X 3 (Task X SOA X Distracter) repeated measures analysis of variance (ANOVA) with participant (F1) and item (F2) as random variables, with alpha level set at .05. Further analysis, using a two-way 2 X 2 (SOA X 54 Distracter) repeated measures ANOVA and pair-wise comparisons with Bonferroni correction to alpha level, were conducted for each task separately to evaluate the semantic interference effect (semantically-related vs. unrelated) for semantic encoding and the phonological facilitative effect (phonologically-related vs. 
unrelated) for phonological encoding, separately and in each SOA-distracter condition. The above analyses of Task 2 in both Experiment 1A and 1B were the primary analyses for evaluating the PRP and bottleneck predictions about the automaticity of semantic and phonological encoding.
Results
Experiment 1A
Following exclusion of invalid responses, including trials in which participants did not respond correctly to both of the dual tasks (15.4% of data), accurate responses with latencies over 2000 ms (0.3%), and inappropriate activation of the voice key (1%), a total of 17% of data were excluded from response latency analysis, similar to prior research reporting approximately 15% data rejection (e.g., Ferreira & Pashler, 2002). Averaged performances of the dual tasks in short versus long SOAs are illustrated in Figure 12 for latency and Figure 13 for accuracy.
Figure 12. Experiment 1A mean response latencies of tasks across SOAs
Figure 13. Experiment 1A mean response accuracy of tasks across SOAs
Overall response latencies showed the expected pattern of the PRP effect: as SOA decreased from long to short, picture-naming latencies (Task 1) remained relatively similar (average 40 ms increase in latency and no change in accuracy), while tone identification latencies (Task 2) became slower (average 305 ms increase in latency and 6% decrease in accuracy), with no confounding patterns in accuracy. This PRP pattern was supported by a statistically significant Task X SOA interaction in latency, F1(1, 14) = 73.637, p < .001, partial η2 = .84; F2(1, 29) = 298.129, p < .001, partial η2 = .911. Task performances across conditions are displayed in Figure 14 for latency and Table 2 for accuracy.
Figure 14. Mean response latencies in Experiment 1A. SOA: stimulus-onset-asynchrony.
Table 2. Mean response accuracies in Experiment 1A

                                          150-ms SOA        950-ms SOA
  Task                  Distracter        Accuracy (SD)     Accuracy (SD)
  Picture naming        Phonological      0.978 (0.033)     0.964 (0.034)
                        Semantic          0.949 (0.056)     0.947 (0.056)
                        Unrelated         0.980 (0.035)     0.991 (0.015)
  Tone identification   Phonological      0.873 (0.079)     0.904 (0.09)
                        Semantic          0.787 (0.133)     0.867 (0.11)
                        Unrelated         0.842 (0.112)     0.900 (0.081)

Task 1 (picture-naming task). The picture-naming task served as the primary task in the PRP paradigm, and the expected patterns in the picture-naming responses were the classic semantic interference and phonological facilitation effects; that is, slower responses in the semantically-related than unrelated condition and faster responses in the phonologically-related than unrelated condition. Mean response latencies in the picture-naming task are illustrated in solid lines in Figure 14.
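The by-subject (F1) and by-item (F2) analyses reported below follow the repeated-measures design described under Data analysis. Purely as a sketch of how such an analysis could be run for a single task (the statistical software used in the study is not specified here, and the column names are hypothetical):

    # Hedged sketch of the F1 (by-subject) and F2 (by-item) repeated-measures ANOVAs
    # on z-transformed latencies for one task; file and column names are hypothetical.
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    trials = pd.read_csv("exp1a_picture_naming_trials.csv")  # trial-level data after exclusions

    # F1: average over items so that each participant contributes one mean per cell.
    by_subject = trials.groupby(["subject", "soa", "distracter"], as_index=False)["rt_z"].mean()
    f1 = AnovaRM(by_subject, depvar="rt_z", subject="subject", within=["soa", "distracter"]).fit()

    # F2: average over participants so that each item contributes one mean per cell.
    by_item = trials.groupby(["item", "soa", "distracter"], as_index=False)["rt_z"].mean()
    f2 = AnovaRM(by_item, depvar="rt_z", subject="item", within=["soa", "distracter"]).fit()

    print(f1.anova_table)
    print(f2.anova_table)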
Analyses of response latency showed a statistically significant main effect of distracter type, F1(2, 28) = 13.88, p < .001, partial η2 = .498; F2(2, 58) = 13.168, p < .001, partial η2 = .312, reflecting the pattern of slowest responses in the semantically-related condition and fastest responses in the phonologically-related condition. Analyses of latency for semantic encoding showed a significant main effect of distracter type, F1(1, 14) = 14.793, p < .01, partial η2 = .514; F2(1, 29) = 14.062, p < .01, partial η2 = .327, with no interaction with SOA (ps > .1), reflecting the expected semantic interference effect across SOAs. Analysis of accuracy also showed a significant interaction between SOA and distracter type by subject, F1(1, 14) = 4.924, p < .05, partial η2 = .26, reflecting a trend of speed-accuracy tradeoff, particularly in long SOA. There was a positive yet weak correlation between latency and accuracy in the semantically-related condition in the long SOA, but not in any other condition (r2 = 0.05). Analyses of latency for phonological encoding showed a significant main effect of distracter type, F1(1, 14) = 18.443, p < .01, partial η2 = .568; F2(1, 29) = 17.537, p < .001, partial η2 = .377, with no interaction (ps > .5), reflecting the expected phonological facilitation effect across SOAs. Analyses of accuracy showed no interaction between distracter type and SOA (ps > .05), reflecting consistent patterns of accuracy among conditions.
Task 2 (tone identification task). The tone identification task served as the critical task for evaluating the PRP patterns, based on the presence or absence of PWI effect propagation. The expected patterns included the observation of the semantic interference effect in tone identification latency in short but not long SOA (the propagation of the semantic effect) and the absence of the phonological facilitation effect in tone identification latency in both short and long SOA (the lack of phonological effect propagation). Mean response latencies of the tone identification task are illustrated in dashed lines in Figure 14. Analyses of tone identification latency showed a significant interaction between SOA and distracter type by item, F2(2, 58) = 3.367, p < .05, partial η2 = .104, and marginally by subject (p = .082), reflecting that the distracter types in the picture-naming task had differential effects on tone identification latency across SOAs. Analyses of latency for the semantic effect showed a significant difference between the semantically-related and unrelated conditions (i.e., a semantic interference effect) in short SOA, t1(14) = 3.212, p < .01; t2(29) = 2.848, p < .01, but not in long SOA (ps > .1), reflecting a semantic interference effect in tone identification latency in short SOA, but not in long SOA (the propagation of the semantic interference effect). Analyses of latency for the phonological effect showed no effect of distracter type (no phonological facilitation effect) and no interaction between distracter type and SOA (ps > .1), reflecting the absence of a phonological facilitation effect in tone identification latency in both short and long SOAs (a lack of phonological facilitation propagation).
Analyses of accuracy showed a main effect of SOA, F1(1, 14) = 14.455, p < .01, partial η2 = .508; F2(1, 29) = 8.122, p < .01, partial η2 = .219, a main effect of distracter type, F1(1, 14) = 6.702, p < .01, partial η2 = .324; F2(1, 29) = 8.122, p < .01, partial η2 = .219, and, importantly, no interaction between SOA and distracter type (ps > .1), reflecting that the accuracy pattern was relatively consistent among conditions. To summarize Experiment 1A, picture-naming responses (Task 1) showed the expected semantic interference and phonological facilitation effects in both latency and accuracy. This was critical to our use of the task in testing AWS. Importantly, as expected, tone identification responses (Task 2) were modulated by semantic relatedness under short but not under long SOA, suggesting propagation of the semantic interference effect and that semantic encoding was either a pre-central or central-stage process. In contrast, phonological relatedness did not modulate tone identification responses in short or long SOA, suggesting the "lack" of phonological facilitation effect propagation. The pattern suggests that phonological encoding is either a pre-central or central-stage process followed by self-monitoring, as discussed in the introduction of this chapter (Cook & Meyer, 2008).
Experiment 1B
Following exclusion of invalid responses, including trials in which participants did not respond correctly to both of the dual tasks (14.7% of data), accurate responses with latencies over 2000 ms (0.4%), and inappropriate activation of the voice key (0.7%), a total of 16% of data were excluded from response latency analysis. Overall responses in Experiment 1B again showed the expected PRP effect: as SOA decreased from long to short, tone identification latencies (Task 1) remained relatively similar across SOAs (an average increase of 34 ms in latency and a 1% decrease in accuracy), while picture-naming latencies (Task 2) increased (an average increase of 224 ms and no change in accuracy). The pattern was supported by a statistically significant Task X SOA interaction, F1(1,14) = 115.583, p < .001, partial η2 = .892; F2(1,29) = 389.139, p < .001, partial η2 = .931. Task performances across conditions are displayed in Figure 15 for latency and Table 3 for accuracy.
Figure 15. Mean response latencies in Experiment 1B. SOA: stimulus-onset-asynchrony.
Table 3. Mean response accuracies in Experiment 1B

                                          150-ms SOA        950-ms SOA
  Task                  Distracter        Accuracy (SD)     Accuracy (SD)
  Picture naming        Phonological      0.996 (0.012)     0.998 (0.009)
                        Semantic          0.938 (0.05)      0.94 (0.084)
                        Unrelated         0.989 (0.016)     0.993 (0.019)
  Tone identification   Phonological      0.916 (0.109)     0.884 (0.09)
                        Semantic          0.84 (0.134)      0.844 (0.11)
                        Unrelated         0.869 (0.099)     0.873 (0.081)

Task 1 (tone identification task). In the current experiment, the tone identification task served as the first task presumed to pass through the bottleneck, generating the "slack" in the second task immediately behind it (the picture-naming task). It was expected that tone identification responses would remain consistent across conditions without interacting with distracter type. Mean response latencies in the tone identification task are illustrated in dashed lines in Figure 15. Analyses of tone identification response latency showed an effect of SOA by item only, F2(1,29) = 5.549, p < .05, partial η2 = .161, reflecting slower responses in short than long SOA, and an effect of distracter by subject only, F1(2,28) = 4.106, p < .05, partial η2 = .227, reflecting slower responses in the semantically-related than the phonologically-related condition. Importantly, the above patterns remained independent of each other, showing no interaction between SOA and distracter type (ps > .1). Response accuracy did not vary among conditions (ps > .1).
Task 2 (picture-naming task). The picture-naming task served as the critical task for evaluating the PRP patterns in the current experiment, based on the observation of additive or under-additive PWI effects. The expected patterns included the observation of a reduced or absent semantic interference effect in short SOA compared to long SOA (i.e., an under-additive semantic interference effect), and similar phonological facilitation effects in both short and long SOA (i.e., an additive phonological facilitation effect). Analyses of picture-naming latency showed a significant main effect of distracter type, F1(2, 28) = 19.686, p < .001, partial η2 = .584; F2(2, 58) = 20.1, p < .001, partial η2 = .409, reflecting the slowest responses in the semantically-related condition and the fastest in the phonologically-related condition, with no patterns suggesting a speed-accuracy tradeoff. Analyses of latency for semantic encoding showed that responses were significantly slower in the semantically-related than unrelated condition (i.e., a semantic interference effect) in long SOA by subject, t1(14) = 2.533, p < .025, but not in short SOA (ps > .1), reflecting an absence of the semantic interference effect in short SOA compared to long SOA (an under-additive semantic interference effect). Analyses of latency for phonological encoding showed a significant main effect of distracter type (the phonological facilitation effect), F1(1,14) = 21.707, p < .001, partial η2 = .608; F2(1,29) = 14.912, p < .01, partial η2 = .34, with no interaction between SOA and distracter type (ps > .1), reflecting similar phonological facilitation effects across short and long SOAs (an additive phonological facilitation effect). Analyses of picture-naming response accuracy showed an effect of distracter type, F1(2, 28) = 14.278, p < .001, partial η2 = .505; F2(2, 58) = 14.491, p < .001, partial η2 = .333, and no other effect or interaction (ps > .5). To summarize Experiment 1B, responses in the tone identification task (Task 1) showed a fairly consistent pattern across conditions and no interactions with distracter type, which is what would be required in order to validate the adaptation of this task to a clinical population. Importantly, picture-naming latency (Task 2) showed a semantic interference effect in long SOA, but not in short SOA, suggesting an under-additive semantic effect and that semantic encoding was likely a pre-central process.
In contrast, the phonologically interference effect was significant in both SOAs, suggesting an additive phonological effect and that phonological encoding was either a central-stage or post-central process. Discussion Overall, results in Experiment 1A and 1B replicated the main patterns of semantic and phonological effects in the PRP paradigm in prior studies (Cook & Meyer, 2008; Dell?Acqua et al., 2007; Ferreira & Pashler, 2002; Roelofs, 2008). Specifically, propagation of the semantic interference effect but not the phonological facilitation effect was observed in the tone identification task (Task 2) of Experiment 1A, and an under-additive semantic interference effect as well as an additive phonological facilitation effect were observed in the picture-naming task (Task 2) of Experiment 1B. Semantic encoding. Manipulation in Experiment 1A isolated the target linguistic effects in Task 1 (picture naming). When the dual tasks were in close 65 temporal succession (150-ms SOA), a substantial and reliable semantic interference effect was observed in Task 2 (tone identification), while no effect propagation was observed when the two tasks were temporally apart (950-ms SOA), replicating findings in previous research (Ferreira & Pashler, 2002). Although Ferreira and Pashler (2002) suggested that this effect propagation in short SOA indicated a central- stage process of semantic encoding, it is also possible that semantic encoding is a pre- central process, according to the bottleneck model and the PRP predictions. A PRP experiment with the reversed task order would be necessary to rule out the possibility that semantic encoding occurs at the pre-central stage of word production (Dell?Acqua et al., 2007). In Experiment 1B, the target effects were isolated in Task 2 (picture naming). There was a reliable semantic interference effect in Task 2 (picture naming) in long SOA, but the effect was no longer observed in short SOA, a replication of results in previous research (Dell?Acqua et al., 2007). This under-additive effect is taken to suggest that semantic encoding is a pre-central process, rather than a central-stage process. Phonological encoding. In contrast, in Experiment 1A, there was a lack of effect propagation, suggesting that phonological encoding is a post-central-stage processing (Ferreira & Pashler, 2002). However, as other studies have shown, it could also reflect a central-stage processing plus effects from either strategic or increased self-monitoring process by task demand in the experiment; that is the phonological facilitation effect was propagated but was then cancelled by another central-stage process immediately following phonological encoding and prior to tone 66 identification (Cook & Meyer, 2008; Roelofs, 2008a). If the effect propagated but was canceled by additional effects, it would then suggest that phonological encoding is either a pre-central or central-stage process. In Experiment 1B, a substantial phonological facilitation effect was reliably observed and showed no difference under short versus long SOAs in Task 2 (picture naming), indicating an additive effect. This additive effect suggests that phonological encoding is either a central or post-central processing. Critically, the findings from this experiment and prior research together support the implication that phonological encoding is a central-stage process. 
In Experiment 1A, the semantic effect observed in Task 2 (tone identification) was inflated compared to that in Task 1 (picture naming) (84 ms and 26 ms, respectively). This pattern has been reported in prior research but has yet to be explained (Ferreira & Pashler, 2002). It could be a scaling effect resulting from the slower responses in tone identification. Alternatively, the self-monitoring mechanism proposed by Cook and Meyer (2008) could also account for this inflated semantic interference effect, except that the relationship between self-monitoring and semantic relatedness requires further research and is not yet clear from the available evidence. Since accuracy rates were in general lowest in the semantically-related condition among all three distracter conditions, it is likely that the semantically-related condition draws on greater post-lexical monitoring than does the unrelated condition. Overall, the magnitudes of the observed effects and the PRP results in Experiment 1A and 1B were similar to prior studies in the typical literature, and together they indicate that the semantic encoding process is highly automatic, not involving central cognitive resources, whereas the phonological encoding process is capacity demanding. These findings enable us to use this paradigm to evaluate linguistic processing in PWS.

Chapter 4: Experiment 2 – Automaticity of Word Encoding in Adults Who Do and Do not Stutter

The implicated overall inefficiency in linguistic processing in AWS, particularly in semantic processing (see Bosshardt, 2006), would predict a fundamental difference in the PRP patterns of the semantic effect between AWS and NS. Findings from Experiment 1 and prior research suggest that semantic encoding is highly automatic and efficient in young NS. With the assumption that NS would show the same patterns as in the typical literature, AWS were predicted to show different PRP patterns, suggesting that semantic encoding is a capacity-demanding process. In contrast, because phonological encoding has already been suggested to be capacity demanding, the PRP patterns should be similar for both AWS and NS. The predicted patterns are schematically illustrated for NS (Figures 16 and 17) and for AWS (Figures 18 and 19). Additionally, if stuttering reflects an underlying disorder of linguistic encoding efficiency, it would be predicted that the overall dual-task interference effect in word production could correlate with measures from stuttering assessments (SSI-4 scores).
Figure 16. Predicted NS latency patterns of Experiment 2A with picture naming as the first task. Solid lines: picture naming task; dashed line: tone identification task.
Figure 17. Predicted NS latency patterns of Experiment 2B with tone identification as the first task. Solid lines: picture naming task; dashed line: tone identification task.
Figure 18. Predicted AWS latency patterns of Experiment 2A with picture naming as the first task. Solid lines: picture naming task; dashed line: tone identification task.
Figure 19. Predicted AWS latency patterns of Experiment 2B with tone identification as the first task. Solid lines: picture naming task; dashed line: tone identification task.
Two theories explicitly propose that stuttering results from a specific deficiency in phonological processing: the Covert Repair Hypothesis (CRH; Postma & Kolk, 1993) and EXPLAN (Howell, 2004). Under the hypothesis that semantic and phonological encoding processes are equivalently capacity demanding in AWS, these processes could be modulated by cognitive demand and examined for fundamental or subtle encoding deficiencies. The study manipulated cognitive demand through short versus long SOA in the PRP paradigm. Short SOA is assumed to impose high cognitive demand because of the concurrent processing involving two tasks in close succession, whereas long SOA is assumed to impose low cognitive demand because of the relatively serial processing when the two tasks are temporally further apart. If AWS are selectively deficient in phonological encoding, it is predicted that AWS would differ from NS in phonological but not semantic encoding. However, if AWS have problems in semantic encoding, AWS would differ from NS in semantic but not phonological encoding tasks. If AWS are fundamentally deficient in either encoding skill, AWS would differ from NS under low demand (long SOA), whereas if AWS are subtly deficient, AWS would differ from NS under high demand (short SOA) only (Table 4). Further, if stuttering is the result of any linguistic encoding deficiency, it would be predicted that the semantic and/or phonological effect(s) would correlate with stuttering measures on the SSI-4.

Table 4. Predictions of semantic and phonological encoding deficiencies

  Cognitive demand    Semantic encoding      Phonological encoding
  High                Subtle deficit         Subtle deficit
  Low                 Fundamental deficit    Fundamental deficit

Method
Participants
Twenty AWS and 20 adults who do not stutter (NS) participated in the study. The groups were matched in age (AWS, M = 34 years, range: 19-60, S.D. = 12.8; NS, M = 33.2 years, range: 19-64, S.D. = 14.1), gender (14 male, 6 female) and the level and years of education (AWS, M = 17.1 years, range: 13-21, S.D. = 2; NS, M = 17.3 years, range: 14-21, S.D. = 2.3).
All participants were native English speakers, reported no known additional speech/language disorders, had normal or corrected vision and normal hearing acuity, and passed a hearing screening at 500, 1000, 2000 and 4000 Hz at 25 dB HL. All participants were recruited from local campus communities and regional professional organizations, and gave formal consent prior to participation. To screen for the inclusion criteria and rule out individuals with a late onset of stuttering, all participants completed a questionnaire on language and stuttering history (Appendix B). All AWS had been diagnosed with developmental persistent stuttering. Stuttering severity was assessed with the Stuttering Severity Instrument for Children and Adults, Fourth Edition (SSI-4; Riley, 2009). The distribution of stuttering severity is illustrated in Figure 20, and detailed SSI-4 scores for each AWS are included in Appendix C.

Figure 20. Distribution of stuttering severity based on the SSI-4.

To assess potential group differences in vocabulary and short-term memory skills, all participants were administered the Peabody Picture Vocabulary Test, Fourth Edition (PPVT-4; Dunn & Dunn, 2007) and nonverbal short-term memory span tests (digit pointing test and figure pointing test; De Renzi & Nichelli, 1975). The PPVT-4 measures receptive vocabulary and requires the participant to identify the one picture out of four that best corresponds to the target word spoken by the test administrator. The nonverbal digit and figure pointing tests assess short-term memory span by having the participant serially point to visual items (digits or pictures) following a string of verbally presented test items (digits or object names). Short-term memory was measured through the nonverbal modality to detect the influence of any short-term memory difference between the groups. No difference was observed between AWS and NS in vocabulary, as measured by the PPVT-4 standard scores, or in short-term memory skills, as measured by the digit span and figure span scores (all ps > 0.05) (Table 5).

Table 5. Demographic information for the participant groups

                          AWS           NS
                          M (SD)        M (SD)
  Age                     34 (12.8)     33 (14.1)
  Education (years)       17 (2.0)      17 (2.3)
  Digit Point Span        8 (1.6)       8 (1.4)
  Figure Point Span       6 (0.4)       6 (0.6)
  PPVT-4                  109 (9.2)     111 (9.8)

Apparatus and Stimuli

All equipment setup and stimuli were identical to Experiment 1. In addition, language samples for stuttering assessment were collected using a Flip Video camera.

Design and Analyses

Automaticity of word encoding in AWS and NS. The experimental design for investigating the automaticity of semantic and phonological encoding was identical to that of Experiment 1, except that the dependent variables now included stuttering rate for the AWS and the dual-task interference effect. The randomization procedures described in Experiment 1 are a particularly important part of the design, controlling for potential practice effects across the experiment. Stuttered responses were marked separately and included any part-word repetitions, blocks and prolongations. All AWS were monitored for stuttering during the experiment by the experimenter, and coding was verified by review of the audio recording. Stuttering rate was measured for the picture-naming task only.
The dual-task interference effect size was calculated as the total reaction time difference between the long and short SOA conditions divided by the total reaction time in the long SOA condition: [(RT1,long + RT2,long) - (RT1,short + RT2,short)] / (RT1,long + RT2,long), where RT1 and RT2 are the reaction times for Task 1 and Task 2 at the long or short SOA (Jiang, Saxe, & Kanwisher, 2004). Trial exclusion criteria were as follows: responses were excluded from the reaction time analysis when either the picture-naming or the tone identification response of the same trial was inaccurate (11% of data), when correct responses were slower than 2000 ms (4% of data), when stuttering was present (1% of data), and when the voice key was not accurately activated by the verbal response (1% of data). Exclusion rates did not differ between groups (ps > .5), with the exception of the stuttered trials, which occurred only in the AWS group. Data analyses were identical to Experiment 1, with additional analyses described below. The dual-task interference effect size in Experiment 2A, in which picture naming was the primary Task 1, was submitted to correlation analysis with gender, age, SSI-4 scores and stuttering rates (for AWS only), PPVT-4 scores, and digit and figure pointing spans.

Criteria for potential word encoding deficiency in AWS. The investigation of deficiency included three independent variables: one between-subject variable, group (AWS and NS), and two within-subject variables, distracter (semantically- or phonologically-related and unrelated) and SOA (150 ms and 950 ms). Response latency, accuracy rate and stuttering rate were measured for the picture-naming task. All other design variables remained the same as described in the previous sections. Following the response exclusion described in the previous section, reaction times, accuracy rates and stuttering rates for the picture-naming task were analyzed using a 2 × 2 (Group × Distracter) mixed-model factorial ANOVA with participant and item as random variables. Separate analyses were conducted to evaluate semantic (semantically-related vs. unrelated) and phonological (phonologically-related vs. unrelated) encoding skills in each SOA condition. To investigate the relationship between stuttering and encoding skills, semantic and phonological effects were submitted to correlation analysis with SSI-4 scores, including 1) stuttering rates measured from conversation and 2) the total score.

Reliability

Accuracy and stuttering in both experiments were coded by one licensed speech-language pathologist, and 30% of randomly selected data were coded separately, based on the session recordings, by a second licensed speech-language pathologist blind to the purpose of the study. Inter-rater reliability was defined as the percentage of trial-to-trial agreement over the total number of trials coded by both raters. Inter-rater reliability was high, with an agreement rate of 97%.

Procedure

Signed consents were obtained from all participants prior to participation. The test session agenda included a hearing screening, two nonverbal short-term memory tests, the first experiment, the vocabulary assessment, the second experiment, the stuttering severity assessment and debriefing. Experimental procedures for Experiments 2A and 2B were identical to those of Experiments 1A and 1B, respectively. All testing was conducted in a quiet room. Each visit lasted approximately an hour.
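To make the interference measure described under Design and Analyses concrete, the sketch below implements the effect-size formula given above. It is an illustration only; the function name and the example values are hypothetical and are not data from the study.

```python
def interference_effect_size(rt_task1_long, rt_task2_long,
                             rt_task1_short, rt_task2_short):
    """Dual-task interference effect size, following the formula in the text
    (Jiang, Saxe, & Kanwisher, 2004): the difference in total reaction time
    between the long- and short-SOA conditions, scaled by the total reaction
    time at the long SOA."""
    total_long = rt_task1_long + rt_task2_long
    total_short = rt_task1_short + rt_task2_short
    return (total_long - total_short) / total_long


# Illustrative values only (ms): picture naming changes little with SOA,
# while tone identification slows at the short SOA (the PRP pattern).
effect = interference_effect_size(rt_task1_long=700, rt_task2_long=650,
                                  rt_task1_short=710, rt_task2_short=900)
print(round(effect, 3))
```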
Results

Automaticity

Following exclusion of invalid responses, including trials in which participants did not respond correctly to both of the dual tasks (9% and 14% of data in 2A and 2B, respectively), accurate responses with latencies over 2000 ms (0.2% and 0.4%), inappropriate activation of the voice key (1% and 1%), and trials on which stuttering was present (1% and 1%), a total of 11% and 16% of the data in Experiments 2A and 2B, respectively, were excluded from the response latency analysis.

Group NS Experiment 2A. Overall response latency showed the expected PRP effect: as SOA decreased, picture-naming latency (Task 1) remained relatively similar while tone identification latency (Task 2) became slower. This pattern was supported by a statistically significant Task × SOA interaction in latency, F1(1,19) = 86.429, p < .001, partial η2 = .82; F2(1,29) = 633.568, p < .001, partial η2 = .956, suggesting the effect of the slack in the short SOA. Performance of NS in Experiment 2A is illustrated in Figure 21 for latency and Table 6 for accuracy.

Figure 21. Group NS Experiment 2A mean response latencies. SOA: stimulus-onset-asynchrony.

Table 6. Group NS Experiment 2A mean response accuracies

                                          150-ms SOA       950-ms SOA
  Task                  Distracter        Accuracy (SD)    Accuracy (SD)
  Picture naming        Phonological      0.982 (0.028)    0.99 (0.019)
                        Semantic          0.938 (0.062)    0.953 (0.042)
                        Unrelated         0.98 (0.038)     0.993 (0.014)
  Tone identification   Phonological      0.825 (0.189)    0.875 (0.21)
                        Semantic          0.788 (0.161)    0.87 (0.174)
                        Unrelated         0.837 (0.172)    0.857 (0.204)

In the picture-naming task (Task 1), the expected response patterns were the classic semantic interference and phonological facilitation effects; that is, slower responses in the semantically-related than the unrelated condition and faster responses in the phonologically-related than the unrelated condition. Analyses of picture-naming latency showed a main effect of distracter type, F1(2,38) = 26.091, p < .001, partial η2 = .579; F2(2,58) = 16.009, p < .001, partial η2 = .356, reflecting the slowest responses in the semantically-related condition and the fastest in the phonologically-related condition. Analyses of latency for semantic encoding showed a main effect of distracter type by subject (i.e., a semantic interference effect), F1(1,19) = 9.213, p < .01, partial η2 = .327, without an interaction with SOA (ps > .5), reflecting the expected semantic interference effect across SOAs. Analyses of latency for phonological encoding showed a main effect of distracter type, F1(1,19) = 27.019, p < .001, partial η2 = .587; F2(1, 29) = 20.386, p < .001, partial η2 = .413, without an interaction with SOA (ps > .1), reflecting the expected phonological facilitation effect across SOAs.
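The F1 (by-participant) tests reported throughout this chapter come from repeated-measures analyses with distracter type and SOA as within-subject factors, and the F2 tests repeat the analysis with items as the random factor. As a minimal sketch of the by-participant analysis only, under assumed column names (the original analyses were not run with this code):

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical trial-level data: one row per trial, with participant ID,
# distracter condition, SOA condition, and picture-naming latency (ms).
trials = pd.read_csv("experiment2a_trials.csv")

# Collapse to one cell mean per participant x distracter x SOA, as in a
# standard F1 (by-participant) analysis.
cell_means = (trials
              .groupby(["participant", "distracter", "soa"], as_index=False)
              ["naming_rt"].mean())

# 2 (distracter: semantically related vs. unrelated) x 2 (SOA: 150 vs. 950 ms)
# repeated-measures ANOVA on the cell means.
f1 = AnovaRM(cell_means, depvar="naming_rt", subject="participant",
             within=["distracter", "soa"]).fit()
print(f1)
```

An analogous F2 analysis would aggregate over participants instead and treat the item identifier as the random ("subject") factor.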
In the tone identification task (Task 2), the expected patterns included the presence of the semantic interference effect in tone identification latency in the short but not the long SOA (the propagation of the semantic effect) and the absence of a phonological facilitation effect in tone identification latency in both short and long SOA (the lack of phonological effect propagation). Analyses of tone identification latency showed a main effect of distracter type, F1(2, 38) = 21.108, p < .001, partial η2 = .526; F2(2, 58) = 8.819, p < .001, partial η2 = .233, and no interaction between SOA and distracter type (ps > .1), reflecting that tone identification latency was modulated by distracter type, particularly by the semantically-related distracters.

Analyses of latency for the semantic effect showed that responses were significantly slower in the semantically-related than the unrelated condition (i.e., a semantic interference effect) in the long SOA by subject, t1(19) = 4.994, p < .001, marginally by item (p = .04), but not in the short SOA (ps > .1), reflecting no semantic interference effect in tone identification latency in the short SOA (i.e., the unexpected lack of semantic interference effect propagation). However, analyses of both accuracy rate and transformed accuracy showed a marginal effect of semantic relatedness in the short SOA (lower accuracy in the semantically-related than the unrelated condition) (p = .04), but not in the long SOA (ps > .1), suggesting a pattern of speed-accuracy tradeoff in the short SOA; this is further supported by a positive and moderate correlation between latency and accuracy under the short but not the long SOA (r2 = .475).

Analyses of latency for the phonological effect showed that responses were significantly faster in the phonologically-related than the unrelated condition (the phonological facilitation effect) in the short SOA by subject, t1(19) = -3.123, p < .01, marginally by item (p = .04), but not in the long SOA (ps > .5), reflecting a phonological facilitation effect in tone identification latency in the short SOA (an unexpected phonological facilitation effect propagation). However, analyses of accuracy rate, but not transformed accuracy, showed an interaction between SOA and phonological relatedness by subject, F1(2, 38) = 3.404, p < .05, partial η2 = .152, with relatively low accuracy in the phonologically-related condition in the short SOA, suggesting a pattern of speed-accuracy tradeoff in the short SOA trials; there was a positive yet weak correlation between latency and accuracy (r2 = .072) in the short but not the long SOA.

In sum, group NS in Experiment 2A showed a lack of semantic interference effect propagation and an unexpected phonological facilitation effect propagation in tone identification latency (Task 2), but both patterns were confounded by a potential speed-accuracy tradeoff and thus cannot be taken to indicate bottleneck processes with confidence.

Group NS Experiment 2B. The expected PRP effect was again observed in picture naming (Task 2) but not in tone identification (Task 1), and was supported by a statistically significant Task × SOA interaction in response latencies, F1(1,19) = 156.148, p < .001, partial η2 = .892; F2(1,29) = 433.4, p < .001, partial η2 = .937. The overall performance of NS in Experiment 2B is illustrated in Figure 22 for latency and Table 7 for accuracy.

Figure 22. Group NS Experiment 2B mean response latencies. SOA: stimulus-onset-asynchrony.
Table 7. Group NS Experiment 2B mean response accuracies

                                          150-ms SOA       950-ms SOA
  Task                  Distracter        Accuracy (SD)    Accuracy (SD)
  Picture naming        Phonological      0.987 (0.017)    0.992 (0.015)
                        Semantic          0.957 (0.048)    0.945 (0.058)
                        Unrelated         0.99 (0.024)     0.993 (0.014)
  Tone identification   Phonological      0.887 (0.18)     0.887 (0.181)
                        Semantic          0.882 (0.162)    0.882 (0.156)
                        Unrelated         0.88 (0.185)     0.895 (0.16)

In the tone identification task (Task 1), it was expected that tone identification responses would remain consistent across conditions, without interacting with distracter type. Analyses of tone identification latency showed an effect of distracter by subject only, F1(2,28) = 3.622, p < .05, partial η2 = .160, and, importantly, no interaction between SOA and distracter type (ps > .1).

In the picture-naming task (Task 2), the expected patterns included a reduced or absent semantic interference effect in the short SOA compared to the long SOA (an under-additive semantic interference effect), and similar phonological facilitation effects in both short and long SOA (an additive phonological facilitation effect). Analyses of latency showed a significant effect of distracter type, F1(2,38) = 29.735, p < .001, partial η2 = .61; F2(2, 58) = 13.339, p < .001, partial η2 = .315, reflecting the slowest responses in the semantically-related condition and the fastest in the phonologically-related condition. Analyses of latency for semantic encoding showed a significant main effect of distracter type (i.e., a semantic interference effect), F1(1,19) = 17.847, p < .001, partial η2 = .484; F2(1, 29) = 4.299, p < .05, partial η2 = .129, without an interaction with SOA (ps > .05), reflecting similar semantic interference effects across SOAs (an unexpected additive semantic effect). Similarly, for phonological encoding there was a main effect of distracter type (a phonological facilitation effect), F1(1,19) = 12.409, p < .01, partial η2 = .395; F2(1, 29) = 9.84, p < .01, partial η2 = .253, without an interaction with SOA (ps > .1), reflecting similar phonological facilitation effects across SOAs (the expected additive phonological effect).

Responses of group NS in Experiment 2B thus showed an additive semantic interference effect as well as an additive phonological facilitation effect in picture-naming latency (Task 2), suggesting that both semantic and phonological encoding are likely either central-stage or post-central processes. This is an unexpected pattern and will be discussed later in this chapter.

Group AWS Experiment 2A. The expected PRP effect was observed in Task 2 (tone identification) but not Task 1 (picture naming).
This pattern was supported by a significant Task × SOA interaction in response latencies, F1(1,19) = 84.082, p < .001, partial η2 = .816; F2(1,29) = 477.987, p < .001, partial η2 = .943, reflecting that overall tone response latency increased as SOA decreased while picture-naming latency remained similar across SOA, suggesting the effect of "slack" in the short SOA. The overall performance of AWS in Experiment 2A is illustrated in Figure 23 for latency and Table 8 for accuracy. Stuttering rate measured within the experiments was relatively low (1%) and showed no difference among conditions (ps > .1).

Figure 23. Group AWS Experiment 2A mean response latencies. SOA: stimulus-onset-asynchrony.

Table 8. Group AWS Experiment 2A mean response accuracies

                                          150-ms SOA       950-ms SOA
  Task                  Distracter        Accuracy (SD)    Accuracy (SD)
  Picture naming        Phonological      0.975 (0.021)    0.983 (0.033)
                        Semantic          0.923 (0.068)    0.937 (0.077)
                        Unrelated         0.965 (0.049)    0.972 (0.062)
  Tone identification   Phonological      0.868 (0.119)    0.903 (0.089)
                        Semantic          0.77 (0.154)     0.898 (0.089)
                        Unrelated         0.847 (0.134)    0.902 (0.109)

In the picture-naming task (Task 1), the expected patterns were again the classic semantic interference and phonological facilitation effects; that is, slower responses in the semantically-related than the unrelated condition and faster responses in the phonologically-related than the unrelated condition. Analyses of picture-naming latency showed a significant main effect of distracter type, F1(2,38) = 19.849, p < .001, partial η2 = .511; F2(2,58) = 7.75, p < .01, partial η2 = .211, and no other effect or interaction (ps > .1), reflecting the slowest responses in the semantically-related condition and the fastest in the phonologically-related condition. Analyses for semantic encoding showed a significant main effect of distracter type in accuracy, F1(1,19) = 8.657, p < .01, partial η2 = .313; F2(1, 29) = 13.157, p < .01, partial η2 = .312, but not in latency (ps > .5), and a marginal interaction between SOA and distracter type in latency (p = .07), reflecting the expected semantic interference effect in accuracy rather than latency. The same results were obtained with analysis of the transformed accuracy. Analyses of latency for phonological encoding also showed a significant main effect of distracter, F1(1,19) = 19.34, p < .001, partial η2 = .504; F2(1, 29) = 13.27, p < .01, partial η2 = .314, without an interaction with SOA (ps > .1), reflecting the expected phonological facilitation effect.

In the tone identification task (Task 2), the expected patterns included the presence of the semantic interference effect in tone identification latency in the short but not the long SOA (the propagation of the semantic effect) and the absence of a phonological facilitation effect in tone identification latency in both short and long SOA (the lack of phonological effect propagation).
Analyses of tone identification latency showed a significant interaction between distracter type and SOA by subject, F1(2, 38) = 4.388, p < .05, partial η2 = .188, and marginally by item (p = .054). Analyses of latency for the semantic effect showed that responses were significantly slower in the semantically-related than the unrelated condition (i.e., a semantic interference effect) only in the short SOA by subject, t1(19) = 3.718, p < .01, marginally by item (p = .03), but not in the long SOA (ps > .1), reflecting a semantic interference effect in tone identification latency in the short but not the long SOA (the expected propagation of the semantic interference effect). Analyses of latency for the phonological effect showed no effect of distracter type and no interaction with SOA (ps > .1), reflecting the absence of a phonological facilitation effect in tone identification latency in both short and long SOAs (the expected lack of phonological facilitation propagation).

Taken together, responses from AWS in Experiment 2A showed the expected semantic effect propagation, suggesting that semantic encoding is either a pre-central or a central-stage process. There was also the expected lack of phonological effect propagation, suggesting that phonological encoding is either a post-central process, or a pre-central or central-stage process followed by a central-stage self-monitoring process.

Group AWS Experiment 2B. The expected PRP effect was again observed in picture naming (Task 2) and not in tone identification (Task 1). This was supported by a significant Task × SOA interaction in response latencies, F1(1, 19) = 92.994, p < .001, partial η2 = .830; F2(2,58) = 382.175, p < .001, partial η2 = .929, reflecting the effect of slack in the short SOA. The overall performance of AWS in Experiment 2B is illustrated in Figure 24 for latency and Table 9 for accuracy.

Figure 24. Group AWS Experiment 2B mean response latencies. SOA: stimulus-onset-asynchrony.

Table 9. Group AWS Experiment 2B mean response accuracies

                                          150-ms SOA       950-ms SOA
  Task                  Distracter        Accuracy (SD)    Accuracy (SD)
  Picture naming        Phonological      0.995 (0.025)    0.982 (0.028)
                        Semantic          0.942 (0.061)    0.947 (0.056)
                        Unrelated         0.985 (0.041)    0.98 (0.036)
  Tone identification   Phonological      0.878 (0.14)     0.882 (0.133)
                        Semantic          0.862 (0.147)    0.882 (0.133)
                        Unrelated         0.885 (0.139)    0.887 (0.144)

In the tone identification task (Task 1), it was expected that tone identification responses would remain consistent across conditions, without interacting with distracter type. Analyses of tone identification latency showed an effect of SOA by item only, F2(1, 29) = 9.143, p < .01, partial η2 = .24, an effect of distracter by subject only, F1(2, 38) = 5.841, p < .01, partial η2 = .235, and no interaction between SOA and distracter (ps > .2).

In the picture-naming task (Task 2), the expected patterns included similar semantic interference effects in both short and long SOAs (an additive semantic interference effect) and, likewise, similar phonological facilitation effects in both short and long SOA (an additive phonological facilitation effect).
Analyses of picture-naming latency showed a significant main effect of distracter type, F1(2,38) = 40.401, p < .001, partial η2 = .68; F2(2,58) = 25.765, p < .001, partial η2 = .47, reflecting the slowest responses in the semantically-related condition and the fastest in the phonologically-related condition, with no patterns suggesting a speed-accuracy tradeoff. Analyses of latency for semantic encoding showed a significant main effect of distracter type (i.e., a semantic interference effect), F1(1,19) = 26.226, p < .001, partial η2 = .58; F2(1, 29) = 11.542, p < .01, partial η2 = .285, without an interaction with SOA (ps > .5), reflecting similar semantic interference effects across short and long SOAs (the expected additive semantic interference effect). Similarly, analyses of latency for phonological encoding showed a significant main effect of distracter type (i.e., a phonological facilitation effect), F1(1,19) = 14.675, p < .01, partial η2 = .436; F2(1, 29) = 14.594, p < .01, partial η2 = .335, without an interaction with SOA (ps > .05), reflecting similar phonological facilitation effects across short and long SOAs (an additive phonological facilitation effect). Responses of AWS in Experiment 2B thus showed the expected patterns of additive semantic and phonological effects in picture-naming responses (Task 2), suggesting that both semantic and phonological encoding are likely either central-stage or post-central processes.

In summary, the combined results from Experiments 2A and 2B showed the following. Performance of NS showed a pattern of speed-accuracy tradeoff in Experiment 2A, while in Experiment 2B NS showed an unexpected additive effect of semantic interference, suggesting that semantic encoding is either a central-stage or a post-central process. In terms of phonological encoding, NS showed the expected additive effect of phonological facilitation, suggesting that phonological encoding is either a central-stage or a post-central process. These patterns together suggest that semantic and phonological encoding are both likely central-stage processes in NS, which will be discussed further in later sections.

Performance of AWS showed the expected effect propagation of semantic interference, suggesting that semantic encoding is either a pre-central or a central-stage process; AWS also showed the expected additive effect of semantic interference, suggesting that semantic encoding is either a central-stage or a post-central process. In terms of phonological encoding, AWS showed no phonological effect propagation, suggesting that phonological encoding is likely a post-central process, or a pre-central/central-stage process plus a self-monitoring process. Additionally, AWS showed the expected additive effect of phonological facilitation, suggesting that phonological processing is either a central-stage or a post-central process. These patterns together suggest that both semantic and phonological encoding processes are likely central-stage processes in AWS, similar to NS.
Automaticity and Stuttering

The relationship between stuttering and the automaticity of word production was examined with a correlation analysis between stuttering measures and the dual-task interference effect size. The dual-task interference effect size did not differ across distracter conditions (p > .05), so an overall interference effect size was calculated for each AWS and analyzed for bivariate correlation with SSI-4 scores and stuttering rates. The analysis showed a positive and moderate relationship between interference effect size and SSI-4 total score (r2 = .155) in Experiment 2A but not 2B (Figures 25 and 26), and there was no correlation between stuttering rate and interference effect in either Experiment 2A or 2B (Figures 27 and 28). Furthermore, stuttering rates measured during the experiments did not vary across SOA conditions either. This suggests that there is no reliable relationship between stuttering and the automaticity of word production on the experimental task.

Figure 25. Correlation between SSI-4 total score and interference effect size in Experiment 2A (r2 = .155).

Figure 26. Correlation between SSI-4 total score and interference effect size in Experiment 2B (r2 = .088).

Figure 27. Correlation between stuttering rate (SSI-4 speaking task) and interference effect size in Experiment 2A (r2 = .09).

Figure 28. Correlation between stuttering rate (SSI-4 speaking task) and interference effect size in Experiment 2B (r2 = .037).

Deficiency of Processing Skill in AWS

Semantic encoding. Across all conditions, there was a significant main effect of distracter type by subject, F1(1,38) = 7.416, p < .01, partial η2 = .163, reflecting the expected semantic interference effect. There was a main effect of group by item, F2(1,58) = 29.491, p < .001, partial η2 = .337, reflecting significantly slower responses in AWS than NS. Analyses for semantic encoding in the short versus long SOA showed a significant interaction between group and distracter type by subject in the long SOA, F1(1,38) = 4.658, p < .05, partial η2 = .109, but not in the short SOA (ps > .1), reflecting a relatively reduced semantic interference effect in AWS, as compared to NS, in the long but not the short SOA. This is taken to suggest that AWS differ from NS in semantic encoding under low but not high cognitive demand, indicative of the involvement of additional, undefined processing in AWS in the less demanding condition. Response accuracy analyses showed no interaction between group and distracter type (ps > .5). Semantic interference effects are illustrated in Figure 29.

Figure 29. Semantic interference effects in AWS and NS across SOAs.

Phonological encoding. Analysis of responses showed a significant effect of distracter type, F1(1,38) = 46.775, p < .001, partial η2 = .552; F2(1, 58) = 15.102, p <
Analyses for phonological encoding in the short versus long SOA condition showed that there was a marginal interaction between group and distracter type by subject in short SOA, (p = .07), but not long SOA (ps > .5), reflecting the pattern that AWS and NS responded to phonological relatedness differently in short but not long SOA conditions. After excluding outliers with response latency faster than 250 ms (Damian & Martin, 1999), the interaction between group and distracter type in the short SOA became significant by subject, F1(1, 36) = 5.748, p < .05, partial ?2 = .138. This confirmed the pattern that the phonological facilitation effect was relatively reduced in AWS, as compared to NS, under high demand, but not under low demand; this is taken to suggest a subtle phonological encoding deficiency in AWS. Response accuracy analyses showed no interaction between group and distracter type (ps > .1). Phonological facilitation effects are illustrated in Figure 30. Figure 30. Phonological facilitation effects in AWS and NS across SOAs !"#$% !"#&% !"#'% !"#(% "#"% "#(% "#'% "#&% "#$% ()"!*+% ,)"!*+% !" #$ %# &% '() %* +& ', '$ -*) .) /" * !01* -./% 0/% 12232%452+%678695:;%+:578528%;2232% 101 Linguistic Encoding Demand and Stuttering Pearson correlation analysis was conducted to evaluate the relationship between stuttering and semantic versus phonological encoding skills. Stuttering rates measured from language samples and SSI-4 scores were correlated with the relative semantic interference effect and phonological facilitation effects shown in the picture- naming task. Results showed a moderate correlation between stuttering rate and phonological facilitation effect in the short SOA (r2 = .198) but not long SOA (r2 = .018) (Figure 31). No other correlation was observed between stuttering measures and picture-word interference effects. 102 Figure 31. Correlation between stuttering rate and phonological facilitation effect in the short SOA. Discussion Experiment 2 aimed to determine whether semantic and phonological encoding at the single word level differed in the degree to which they are automatic or capacity demanding processes in AWS and NS, and, if so, whether these differences in semantic/phonological encoding strategies might relate to stuttering in 0 5 10 1 5 20 25 30 35 -1 .0 -0 .5 0.0 0. 5 1. 0 Stuttering rate (SSI-4 Speaking Task) S ta nd ar di ze d ph on ol og ic al e ffe ct r2 = 0.198 103 AWS. Findings suggest that 1) for both AWS and matched NS adults, semantic and phonological encoding are both capacity demanding processes, despite the fact that the typical literature has suggested semantic encoding to be an automatic process (Dell?Acqua et al., 2007), 2) for AWS, no observable relationship exists between stuttering severity and the automaticity of word production, as measured by the dual- task interference effect, 3) the subtle phonological encoding deficiency (determined by differences in the phonological facilitation effects between AWS and NS under high demand, but not under low demand) that was observed in AWS could potentially be an underlying factor in the etiology and persistence of stuttering because of its correlation with stuttering rate, and 4) for AWS, semantic encoding is not deficient, as measured by the semantic interference effect under high demand, although its efficiency may be hampered by unknown processing strategies in AWS, such as a tendency to strategically allocate attention towards monitoring task performance in the experiment under low cognitive demand. 
Automaticity of semantic and phonological encoding in NS. Combined results from Experiments 2A and 2B suggest that the nature of semantic and phonological encoding is similar in AWS and NS in the age range that was tested; both encoding processes were shown to be capacity demanding in both populations, based on the PRP predictions. In Experiment 2A there was a "lack" of semantic effect propagation in latency in the responses of NS. This could suggest that semantic encoding was a post-central process for NS; however, the latency result was accompanied by a pattern of speed-accuracy tradeoff, which argues against viewing such encoding as a post-central process. This interpretation is consistent with the existing literature on the time course of semantic processing. Ferreira and Pashler (2002) observed the propagation of the semantic interference effect in response latency (once confounding effects of accuracy were controlled), suggesting that semantic encoding was either a pre-central or a central-stage process. Further, the semantic interference effect is typically observed when the semantically-related distracter is presented early (e.g., 150 ms or 0 ms before the onset of the target picture) but not late (e.g., 150 ms after picture onset) (Damian & Martin, 1999), suggesting that lexical-semantic selection is a relatively early process, something that is presumed in most common models of word production (Caramazza, 1997; Dell, 1986; Garrett, 1988; Levelt et al., 1999). Therefore, it is highly possible that the semantic interference effect would have propagated to tone identification latency (Task 2) in NS were it not for the interference effect in tone identification accuracy (that is, a substantial speed-accuracy tradeoff); this would imply that semantic encoding is either a pre-central or a central-stage process, rather than a post-central process. In support of this view, NS in Experiment 2B showed a straightforward pattern of additive effects, suggesting that semantic encoding is either a central-stage or a post-central process. Taken together, these observations support the conclusion that semantic encoding is most likely a central-stage, capacity-demanding process in NS.

However, the premise that semantic encoding is a central-stage process in the NS cohort is not supported by the findings of Experiment 1, in which typically fluent young adults showed highly automatic semantic encoding at the pre-central stage of word production. While Experiments 1 and 2 were identical in procedures and stimuli, a closer examination of the participants in the two experiments showed a difference in age: the NS participants, chosen to match the demographics of our AWS sample, were statistically significantly older than the typically fluent young adults, by an average of 11 years (p < .01). Thus, differences in the automaticity of semantic encoding between Experiments 1 and 2 could potentially be related to an age effect. There is evidence suggesting poorer semantic processing in healthy older adults compared to young adults (e.g., Burke, White, & Diaz, 1987; Laver & Burke, 1993; Taylor & Burke, 2002). Yet the NS in the current study were substantially younger than the "older adult" populations tested in the aging literature (e.g., between 60-85 years old), and were only approximately 10 years older than the young adults in Experiment 1.
The PRP patterns used to determine the central bottleneck are group patterns across distracter types and experiments rather than a dependent variable measured for each participant; therefore, running a statistical analysis relating age to PRP profiles in the current experiment is not possible. However, the question can be explored by splitting the 20 NS by age and plotting the latency performance of the two subgroups to obtain PRP profiles. The two subgroups (mean ages of 22 and 44 years; SD = 3.16 and 11.47, respectively) showed very different PRP profiles: only the younger group resembled the patterns in prior research and in Experiments 1A and 1B of this study (Figures 32 and 33). A sample size of 10 subjects is too small for any further analysis or interpretation of the patterns. The potential effect of age on the automaticity of semantic encoding across adulthood would require more research; if our results were replicated, they would suggest caution against building psycholinguistic models purely on data from college-aged adults (Henrich, Heine, & Norenzayan, 2010).

Figure 32. PRP profiles by age group when picture naming was the first task. From top to bottom, left and right: undergraduate group in Experiment 1A, younger subgroup of NS in Experiment 2A, older subgroup of NS in Experiment 2A.

Figure 33. PRP profiles by age group when tone identification was the first task. From top to bottom, left and right: undergraduate group in Experiment 1B, younger subgroup of NS in Experiment 2B, older subgroup of NS in Experiment 2B.
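The exploratory age split described above can be illustrated with a median split on age and a simple mean-latency plot per subgroup. The sketch below assumes hypothetical per-trial data and column names; it is not the plotting code used to produce Figures 32 and 33.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical per-trial data for the 20 NS participants: age, task
# ("naming" or "tone"), SOA condition (150 or 950 ms), and latency (ms).
ns = pd.read_csv("ns_trials.csv")

# Median split on age separates the younger and older subgroups.
ns["age_group"] = (ns["age"] > ns["age"].median()).map(
    {False: "younger", True: "older"})

fig, axes = plt.subplots(1, 2, sharey=True, figsize=(8, 3))
for ax, (group, sub) in zip(axes, ns.groupby("age_group")):
    # Mean latency per task and SOA gives a schematic PRP profile:
    # a central bottleneck shows up as Task 2 slowing at the short SOA.
    profile = sub.groupby(["task", "soa"])["rt"].mean().unstack("soa")
    profile.T.plot(ax=ax, marker="o", title=f"{group} NS subgroup")
    ax.set_xlabel("SOA (ms)")
    ax.set_ylabel("Mean response latency (ms)")
plt.tight_layout()
plt.show()
```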
Automaticity of semantic and phonological encoding in AWS. Combined results from Experiments 2A and 2B suggest that both semantic and phonological encoding processes are capacity demanding. If viewed only in light of the performance of the AWS, this finding is consistent with Bosshardt and colleagues' view that linguistic processing lacks modularity in AWS. However, both AWS and NS showed the involvement of the central bottleneck in both encoding processes, which does not demonstrate any difference in modularity specific to the stuttering population. Therefore, we evaluated the relationship between stuttering measures and the automaticity of word production.

The relationship between stuttering and automaticity was examined by comparing SSI-4 stuttering rates and total scores against the dual-task interference effect size in Experiment 2A, in which word production was the first task and in theory received primary attention (the prioritized task) (e.g., Navon & Miller, 2002; Tombu & Jolicœur, 2003). There was a trend for the SSI-4 score to increase as the interference effect increased, but this relationship was not statistically significant. In contrast, when tone identification was the first task receiving primary attention (Experiment 2B), the automaticity of tone identification showed no pattern of correlation at all with SSI-4 scores. It is not straightforward to link this set of findings with research suggesting auditory processing deficits in AWS, because of task design differences (e.g., Hall & Jerger, 1978; Hampton & Weber-Fox, 2008), but it points to the continued unmet need for supporting evidence for the relationship between observed stuttering measurements and measures of any implicated deficits.

An important implication of the current findings regarding the automaticity of semantic and phonological encoding in AWS (that both are capacity demanding) is that these encoding processes are vulnerable to concurrent processing demands that compete for shared cognitive resources; that is, increased non-linguistic cognitive demand could potentially hamper semantic and phonological encoding if the underlying cognitive resource is limited. We were able to manipulate processing demand without utilizing different or complex language tasks. Experiment 2A was conceptually based on the DCM, testing for potential breakdowns in semantic and/or phonological processes at different levels of cognitive demand, and examined the role of encoding skills in stuttering (i.e., the phonological deficit proposed by the CRH versus the semantic inefficiency proposed by Bosshardt and colleagues).

Semantic encoding skill in AWS. Results of the picture-naming task in Experiment 2A showed that the two groups were comparable in semantic encoding under high demand and differed only under low demand, with AWS showing a relatively reduced semantic interference effect under low demand compared to NS. This cannot be taken to suggest a semantic encoding deficit in AWS, as the groups were comparable under high demand. Rather, it is argued that the altered semantic processing in AWS under low demand reflects strategic processing. Prior research on semantic encoding in AWS has shown a pattern of strategic cognitive processing, indexed by neural activity that differed from that seen in NS (Maxfield, Huffman, Frisch, & Hinckley, 2010). Using ERP measures, the authors found an electrophysiological component distributed over the posterior region when AWS processed semantically-related but not unrelated distracters during delayed single word production; this ERP pattern is indicative of an influence from strategic, inhibitory processing in AWS, but not NS, in semantic encoding.
In the current experiment, it is possible that AWS were able to inhibit competing tasks under low demand, and thus showed slower responses under low demand as well as relatively reduced semantic interference compared to NS. In addition, the fact that the semantic interference effect showed no correlation with stuttering further weakens the likelihood that semantic encoding deficits are a primary factor in stuttering. It is true that some prior research has found evidence of a selective deficiency in semantic processing in PWS (Bosshardt & Fransen, 1996; Bosshardt et al., 2002). In those studies, task demands involved reading or generating sentences while simultaneously monitoring for semantically-related words. The depressed performance of AWS in the semantically-related conditions could be attributable to some unknown strategy use in these judgment tasks. Monitoring, inhibition and decision making are often explicitly required in metalinguistic judgment tasks (e.g., judging whether two words rhyme, or belong to the same semantic category). Even though the demand was high, judgment tasks do not allow attention to be allocated away from explicit monitoring and high-level cognitive processing, unlike the current experiment, in which the measurement of linguistic processing depended upon picture naming, without any explicit decision making in the task demand.

If AWS employed additional or different strategies in processing semantic relatedness, then it is necessary to explain the absence of this strategy use under high demand in the current experiment, as AWS performed similarly to NS in semantic encoding under high cognitive demand. It could be that under high demand, primary cognitive resources were prioritized towards word encoding and removed from optional strategic processing, and hence the more typical semantic interference effect was observed. Given AWS' life-long experience with stuttering, AWS may have developed many strategies over the years, and certain cognitive strategies might interfere with semantic encoding for speech-language production. It would be necessary to investigate further the cognitive interferences to speech-language production to better understand how strategy use might hinder or help AWS in coping with stuttering, an important implication for developing and selecting therapeutic approaches in stuttering therapy.

Phonological encoding skill in AWS. In contrast to the ambiguous results obtained for semantic encoding, phonological encoding skill appears to be subtly deficient in AWS on the basis of two findings in Experiment 2A. First, there was a marginal group difference in processing phonological relatedness under high, but not low, demand. Specifically, the expected phonological facilitation effect was relatively reduced in AWS compared to that in NS under high demand, while the two groups were comparable under low demand. It should be noted that the phonological facilitation effect was observed in both AWS and NS; the two groups differed in the relative magnitude of the facilitation. In general, a greater PWI effect suggests a processing deficit, under the assumption that, if a group of speakers is less efficient in encoding responses, they will be disrupted by related distracters to a greater extent than are typical speakers (for example, healthy older adults show a greater semantic interference effect than do younger adults; see Taylor & Burke, 2002).
However, the current experiment showed reduced, rather than increased, phonological facilitation in AWS; this pattern of deficiency has been observed in prior research on CWS as well (Byrd et al., 2007). This might be expected if AWS have atypical phonological/phonemic representations (as suggested by Byrd et al., 2007) or less efficient access routes. It has been suggested that PWS have abnormal phonemic representations for target words, subserved by less distinct neural substrates that organize and facilitate access to those representations (Corbera, Corral, Escera, & Idiazabal, 2005; Sato et al., 2010).

Further support for a subtle phonological encoding deficiency in AWS is provided by the finding of a moderate correlation between stuttering rates measured on the SSI-4 and the phonological facilitation effect demonstrated in the high demand context. As stuttering rate increased, the phonological facilitation effect under high demand decreased, suggesting that AWS who stutter more severely received less of the expected facilitation from phonological distracters. In contrast, no significant correlation was observed between stuttering rates and the semantic interference effect in either demand condition in AWS. In sum, findings from this experiment suggest that a subtle deficiency in phonological encoding skill in AWS is likely to play some role in stuttering, based on the pattern of group differences in phonological encoding profiles under high demand and on the relationship between stuttering rate and the phonological facilitation effect observed under high demand. Such findings are highly consistent with findings from prior research using very different methodology (e.g., Sasisekaran & de Nil, 2006; Sasisekaran et al., 2006), and support stuttering theories proposing specific deficits in phonological encoding in PWS (such as the CRH and EXPLAN).

Taken together, the current experiments support the existence of a subtle phonological encoding deficiency in AWS; the findings do not support a selective semantic encoding deficiency, with the exception of the one finding that semantic encoding was altered under low but not high demand in AWS, suggesting a potential role for strategic processes that may modulate or depress semantic encoding in selected circumstances. This could account for the particularly depressed performances in semantically-related conditions when metalinguistic tasks were used to assess semantic processing skills (e.g., Bosshardt & Fransen, 1996; Bosshardt & de Nil, 2002).

A potential limitation of the current study is the lack of continuity in the distribution of stuttering severity in AWS, with most AWS in the very mild to mild stuttering categories, a small group clustered in the very severe category, and no participants who could be categorized as moderate or severe. This pattern is often observed in research with AWS; groups are often not well distributed in terms of stuttering profile. Therefore, until a full range of stuttering severity is better represented within a group, the correlation patterns between stuttering rate and the phonological processing task need to be treated with caution.

Chapter 5: General Discussion and Conclusion

The current study examined word-encoding processes in relation to cognitive demand in AWS and demographically matched NS and presented three primary findings.
Overall, the study suggests that planning for word production is a demanding task for both AWS and NS, specifically at both early-stage semantic encoding (lemma selection) and late-stage phonological encoding (phoneme selection). The implicated involvement of shared cognitive resources in semantic and phonological encoding suggests that each of these encoding processes is vulnerable to interference from concurrent processing. While a lack of modularity in linguistic processing has been proposed to play a primary role in stuttering (see Bosshardt, 2006 for review; Bosshardt, 1993; Bosshardt, 1999; Bosshardt & Fransen, 1996; Bosshardt et al., 2002; de Nil & Bosshardt, 2000), the current study failed to find any evidence to support a relationship between stuttering and a lack of modularity in word encoding. In contrast, there appears to be a subtle phonological encoding deficit in AWS, which correlated with stuttering measures, suggesting that a phonological deficit could potentially play a role in the etiology/persistence of stuttering.

These findings do not support an account in which word-encoding automaticity (modularity) or semantic deficiency is an underlying factor in stuttering, but a subtle deficit in phonological encoding appears to be characteristic of the AWS we observed in this study. Findings from the current study join the growing body of evidence arguing against the view that stuttering results from a difficulty in lexical retrieval/access (Hennessey et al., 2008; Newman & Bernstein Ratner, 2007; Onslow & Packman, 2002; Packman, Onslow, Coombes, & Goodwin, 2001), and contribute to the increasing support for the view that stuttering is likely attributable to some deficit at the sublexical level (Sasisekaran & de Nil, 2006; Sasisekaran et al., 2006; Byrd et al., 2007; Bosshardt, 1999; Anderson et al., 2009; Anderson et al., 2006; Hakim & Bernstein Ratner, 2004).

The current study examined only semantic and phonological processes in AWS, in order to contrast levels of linguistic processing within the same methodology; thus, we cannot rule out other potential deficits in the speech-language production system of PWS that contribute somehow to stuttering. There have been many inconclusive findings suggesting various altered linguistic processes in PWS, and the field awaits further research to clarify the relationship between stuttering and the full scope of the speech production system in PWS. Typical speech-language production obviously recruits many other processes reflected in most language production models, such as grammatical/syntactic processing, internal self-monitoring, post-articulatory monitoring, stress/metrical encoding and incremental phrasal encoding (Garrett, 1988; Levelt et al., 1999; Wheeldon & Smith, 2003). Some of these proposed processes have been implicated as potentially relevant to stuttering, such as self-monitoring (Vasić & Wijnen, 2005; Bernstein Ratner & Wijnen, 2007) and syntactic processing (Bernstein Ratner, 1997; Cuadrado & Weber-Fox, 2003; Kleinow & Smith; Tsiamtsiouris & Cairns, 2009). If we accept the limited-capacity framework, the current findings minimally imply that components other than semantic processing within interactive, multi-factorial models of stuttering might play a more primary role in the etiology and persistence of stuttering.
Findings from this study also highlight the importance of taking cognitive demand into account in stuttering research, for a demand that is either too high or too low could yield different patterns of results, given that AWS showed a subtle deficiency in processing and that speech-language production appears to be cognitively demanding, consistent with the proposed "inefficiency" or "variability" of processing in PWS (Bosshardt, 2006; Smith, 1999).

Additionally, the different findings between the typically fluent young adults and the NS in the current study have particular implications for the clinical as well as the typical speech/language literature. Theoretical frameworks of typical speech-language processing are frequently used when examining clinical populations. However, theoretical models are often supported by research generated from examining young adult college students, or WEIRD (Western, Educated, Industrialized, Rich, Democratic) societies (Henrich, Heine, & Norenzayan, 2010), and thus might lack explanations for processing characteristics in the slightly older, non-geriatric population, as found in the current study. The performance patterns of NS could thus contribute to the typical literature, suggesting that there is perhaps a shift in semantic encoding from highly automatic to capacity demanding across the early to middle adult years.

In conclusion, this study examined the automaticity of semantic and phonological encoding in AWS, the relationship between stuttering and encoding automaticity, the presence or absence of a subtle or fundamental semantic and/or phonological deficiency in this population, and the relationship between stuttering and the observed deficiency. It can be concluded that semantic and phonological encoding in AWS are capacity demanding, just as in age-matched NS, but impairment in the automaticity of word encoding does not appear to underlie stuttering. Further, AWS show a subtle deficit in phonological but not semantic encoding, and thus phonological encoding skill could potentially be an underlying factor in the etiology/persistence of stuttering. The findings warrant future research examining the interactions among linguistic, cognitive and motor-speech components, to better understand the dynamics of the production system in PWS and to provide further and stronger evidence on the relationship between phonological encoding deficits and stuttering. The finding of altered semantic encoding under low cognitive demand warrants further research in the area of learned/adaptive processing strategies in AWS, and into the potential influence of stuttering therapy addressing the use of maladaptive strategies on speech-language encoding and fluency.
Appendix A

Visual-word distractors for each picture-naming target. Each target was paired with a semantically related, a phonologically related, and an unrelated distractor word.

List A
Target    Semantic    Phonological    Unrelated
heart     square      chart           slide
cake      pie         cave            deer
pencil    ruler       parcel          ginger
duck      swan        duct            vine
horse     bull        horn            pipe
lamp      torch       champ           spice
sun       moon        son             map
dress     pants       stress          blast
belt      scarf       bowel           crutch
tiger     leopard     titan           pebble
foot      leg         fool            song
snail     worm        sail            glue
bear      lion        fare            soap
bottle    pitcher     beetle          journey
apple     cherry      maple           collar
pear      grape       stair           slope
cat       bird        cash            fire
chair     stool       cheer           goose
box       tray        boss            coin
ear       nose        gear            coat
baby      adult       body            item
basket    hamper      casket          marlin
lock      key         luck            firm
truck     plane       trap            camp
corn      bean        cone            lace
train     bus         brain           shop
sink      tub         silk            hog
saw       axe         salt            herb
candle    burner      candy           pepper
crab      shrimp      crib            stamp

List B
Target    Semantic    Phonological    Unrelated
arrow     target      narrow          ticket
gun       knife       gown            cup
chain     rope        chin            leaf
camel     lizard      cannon          tennis
peanut    almond      peacock         clergy
rabbit    beaver      rabbi           timber
button    zipper      bucket          archer
dog       fox         dust            fist
lemon     orange      lesson          hockey
hat       shoe        mat             fin
sock      boot        rock            ship
bone      meat        bolt            junk
car       bike        core            math
carrot    radish      carol           dimple
spoon     fork        spine           trail
boy       girl        boil            ash
tree      bush        treat           block
fly       moth        flu             rake
bed       couch       bend            steam
bag       sack        bat             rap
bell      chime       yell            roof
table     desk        taste           cream
plug      wire        slug            cart
seal      whale       seam            dorm
snake     frog        snack           wool
hand      arm         hint            sky
drum      guitar      drug            glass
bowl      dish        blow            king
clock     watch       cloth           stage
hammer    chisel      grammar         giggle

Appendix B

Participant ID: ____________    Date: _____________

ADULT FLUENCY & LANGUAGE QUESTIONNAIRE
Please complete Sections I, II and III (questions 1-6).

I. PERSONAL INFORMATION
1. Date of Birth: ____________ (MM/DD/YYYY)
2. Gender: ______
3.1 Education: __________ (HIGHEST DEGREE)
3.2 Approximate total years of education: ________

II. FLUENCY BACKGROUND
4. Were you diagnosed with, or have you noticed, stuttering in your speech? YES / NO
   If YES, please complete the following questions. If NO, please skip to Section III.
4-1. Age at which the stuttering was first diagnosed or noticed: ______
4-2. Would you rate your stuttering as: Very mild / Mild / Moderate / Moderately severe / Severe
4-3. If you speak another language, do you appear to stutter equally in all languages? Please explain.
     I do not speak another language.
4-4. History of stuttering therapy: Please describe, including age of treatment, duration, approaches (e.g., techniques or devices such as SpeechEasy), and effectiveness.
4-5. Family history of stuttering: Does any family member have a history of stuttering?

III. LANGUAGE BACKGROUND
5. Do you speak any language other than English? YES / NO
   If YES, please complete the following. If NO, you may stop here. Thank you!
5-1. Which other language(s) do you speak? ____________________________
5-2. At what age did you learn it? ____________________________________
5-3. On a scale from 1 to 7, please rate yourself on the following:
     1-Very poor, 2-Poor, 3-Fair, 4-Functional, 5-Good, 6-Very good, 7-Native-like
     Speaking ability:  English __   Language 2 __   Other __
     Comprehension:     English __   Language 2 __   Other __
     Writing ability:   English __   Language 2 __   Other __
     Reading ability:   English __   Language 2 __   Other __
     Pronunciation:     English __   Language 2 __   Other __
5-4. On any given day, what percent of your time is spent using: English ______   Language 2 ______   Other ______
5-5. Language(s) spoken by the parents: ______________________________
6. Any additional information you would like to provide:

Appendix C

Participant   Duration Score   Frequency Score   Physical Score   Total Score   Severity      Stuttering rate (Speaking Task)
1             8                5                 5                18            Mild          2
2             12               18                13               43            Very severe   30
3             8                8                 5                21            Mild          7
4             4                4                 1                9             Very mild     1
5             6                5                 3                14            Very mild     2
6             14               16                9                39            Very severe   13
7             4                4                 3                11            Very mild     1
8             4                4                 2                10            Very mild     1
9             6                6                 4                16            Very mild     1
10            8                10                5                23            Mild          6
11            6                6                 2                14            Very mild     3
12            12               14                11               37            Very severe   10
13            4                4                 3                11            Very mild     1
14            8                11                3                22            Mild          11
15            12               15                11               38            Very severe   13
16            8                8                 3                19            Very mild     2
17            6                6                 5                17            Very mild     3
18            6                8                 5                19            Mild          2
19            10               10                4                24            Mild          15
20            4                4                 0                8             Very mild     1

Bibliography

Adams, M. R. (1990). The demands and capacities model I: Theoretical elaborations. Journal of Fluency Disorders, 15, 135-141.
Anderson, J. D., Wagovich, S. A., & Hall, N. E. (2006). Nonword repetition skills in young children who do and do not stutter. Journal of Fluency Disorders, 31, 177-199.
Anderson, J. D., Wagovich, S. A., & Hendricks, R. (2009). Linguistic processing speed and nonword repetition in children who stutter. Poster session presented at the American Speech-Language-Hearing Association Annual Convention, New Orleans.
Andrews, S. (1992). Frequency and neighborhood effects on lexical access: Lexical similarity or orthographic redundancy? Journal of Experimental Psychology: Learning, Memory, & Cognition, 18, 234-254.
Arnstein, D., Lakey, B., Compton, R. J., & Kleinow, J. (2011). Preverbal error-monitoring in stutterers and fluent speakers. Brain & Language, 116, 105-115.
Au-Yeung, J., Howell, P., & Pilgrim, L. (1998). Phonological words and stuttering on function words. Journal of Speech, Language, & Hearing Research, 41, 1019-1030.
Ayora, P., Janssen, N., Dell'Acqua, R., & Alario, F.-X. (2009). Attentional requirements for the selection of words from different grammatical categories. Journal of Experimental Psychology: Learning, Memory, & Cognition, 35, 1344-1351.
Ayora, P., Peressotti, F., Alario, F.-X., Mulatti, C., Pluchino, P., Job, R., & Dell'Acqua, R. (2011). What phonological facilitation tells about semantic interference: A dual-task study. Frontiers in Psychology, 2:57. doi: 10.3389/fpsyg.2011.00057
Baddeley, A. D., & Hitch, G. J. (1974). Working memory. In G. Bower (Ed.), Recent advances in the psychology of learning and motivation (Vol. 8). New York: Academic Press.
Balota, D. A., Yap, M. J., Cortese, M. J., Hutchison, K. A., Kessler, B., Loftis, B., … Treiman, R. (2007). The English Lexicon Project. Behavior Research Methods, 39, 445-459.
Bernstein Ratner, N. (1997). Stuttering: A psycholinguistic perspective. In R. Curlee & G. Siegel (Eds.), Nature and treatment of stuttering: New directions (2nd ed., pp. 99-127). Needham, MA: Allyn & Bacon.
Bernstein Ratner, N. (2000). Performance or capacity, the DCM model still requires definitions and boundaries it doesn't have. Journal of Fluency Disorders, 25, 337-346.
Bernstein Ratner, N., & Wijnen, F. (2007). The vicious cycle: Linguistic encoding, self-monitoring and stuttering. In J. Au-Yeung (Ed.), Proceedings of the Fifth World Congress on Fluency Disorders (pp. 84-90).
Blomgren, M., Nagarajan, S. S., Lee, J. N., Li, L., & Alvord, L. (2003). Preliminary results of a functional MRI study of brain activation patterns in stuttering and nonstuttering speakers during a lexical access task. Journal of Fluency Disorders, 28, 337-356.
Bloodstein, O., & Bernstein Ratner, N. (2008). A handbook on stuttering (6th ed.). Boston, MA: Thomson Delmar Learning.
Bosshardt, H.-G. (1993). Differences between stutterers' and nonstutterers' short-term recall and recognition performance. Journal of Speech & Hearing Research, 36, 286-293.
Bosshardt, H.-G. (1999). Effects of concurrent mental calculation on stuttering, inhalation and speech timing. Journal of Fluency Disorders, 24, 43-72.
Bosshardt, H.-G. (2006). Cognitive processing load as a determinant of stuttering: Summary of a research programme. Clinical Linguistics & Phonetics, 20, 371-385.
Bosshardt, H.-G., Ballmer, W., & de Nil, L. F. (2002). Effects of category and rhyme decisions on sentence production. Journal of Speech, Language, & Hearing Research, 45, 844-857.
Bosshardt, H.-G., & Fransen (1996). Online sentence processing in adults who stutter and adults who do not stutter. Journal of Speech & Hearing Research, 39, 785-797.
Bourassa, D. C., & Besner, D. (1994). Beyond the articulatory loop: A semantic contribution to serial order recall of subspan lists. Psychonomic Bulletin & Review, 1, 122-125.
Brown, S. F. (1945). The loci of stutterings in the speech sequence. Journal of Speech Disorders, 10, 181-192.
Brundage, S. B., & Bernstein Ratner, N. (1989). Measurement of stuttering frequency in children's speech. Journal of Fluency Disorders, 14, 351-358.
Burger, R., & Wijnen, F. (1999). Phonological encoding and word stress in stuttering and nonstuttering subjects. Journal of Fluency Disorders, 24, 91-106.
Burke, D. M., White, H., & Diaz, D. L. (1987). Semantic priming in young and older adults: Evidence for age constancy in automatic and attentional processes. Journal of Experimental Psychology: Human Perception & Performance, 13, 79-88.
Byrd, C. T., Conture, E. G., & Ohde, R. N. (2007). Phonological priming in young children who stutter: Holistic versus incremental processing. American Journal of Speech-Language Pathology, 16, 43-53.
Caramazza, A. (1997). How many levels of processing are there in lexical access? Cognitive Neuropsychology, 14, 177-208.
Cohen, J. D., MacWhinney, B., Flatt, M., & Provost, J. (1993). PsyScope: An interactive graphic system for designing and controlling experiments in the psychology laboratory using Macintosh computers. Behavior Research Methods, Instruments, & Computers, 25, 257-271.
Cook, A. E., & Meyer, A. S. (2008). Capacity demands of phoneme selection in word production: New evidence from dual-task experiments. Journal of Experimental Psychology: Learning, Memory, & Cognition, 34, 886-899.
Cooper, M. H., & Allen, G. D. (1977). Timing control and accuracy in normal speakers and stutterers. Journal of Speech & Hearing Research, 20, 55-71.
Corbera, S., Corral, M. J., Escera, C., & Idiazábal, M. A. (2005). Abnormal speech sound representation in developmental stuttering. Neurology, 65, 1246-1252.
Cross, D. E., & Luper, H. L. (1979). Voice reaction time of stuttering and nonstuttering children and adults. Journal of Fluency Disorders, 4, 59-77.
Cuadrado, E. M., & Weber-Fox, C. M. (2003). Atypical syntactic processing in individuals who stutter: Evidence from event-related brain potentials and behavioral measures. Journal of Speech, Language, & Hearing Research, 46, 960-976.
Cutting, J. C., & Ferreira, V. S. (1999). Semantic and phonological information flow in the production lexicon. Journal of Experimental Psychology: Learning, Memory, & Cognition, 25, 318-344.
Damian, M. F., & Bowers, J. S. (2003). Locus of semantic interference in picture-word interference tasks. Psychonomic Bulletin & Review, 10, 111-117.
Damian, M. F., & Martin, R. C. (1999). Semantic and phonological codes interact in single word production. Journal of Experimental Psychology: Learning, Memory, & Cognition, 25, 345-361.
de Nil, L. F., & Bosshardt, H.-G. (2000). Studying stuttering from a neurological and cognitive information processing perspective. In H.-G. Bosshardt, J. S. Yaruss, & H. F. M. Peters (Eds.), Fluency disorders: Theory, research, treatment and self-help. Proceedings of the Third World Congress of Fluency Disorders (pp. 53-58). Nijmegen: Nijmegen University Press.
Denny, M., & Smith, A. (1992). Gradations in a pattern of neuromuscular activity associated with stuttering. Journal of Speech & Hearing Research, 35, 1216-1229.
De Renzi, E., & Nichelli, P. (1975). Verbal and non-verbal short-term memory impairment following hemispheric damage. Cortex, 11, 341-354.
Dell, G. S. (1986). A spreading activation theory of retrieval in sentence production. Psychological Review, 93, 283-321.
Dell'Acqua, R., Job, R., Peressotti, F., & Pascali, A. (2007). The picture-word interference effect is not a Stroop effect. Psychonomic Bulletin & Review, 14, 717-722.
Dent, K., Johnston, R. A., & Humphreys, G. W. (2008). Age of acquisition and word frequency effects in picture naming: A dual-task investigation. Journal of Experimental Psychology: Learning, Memory, & Cognition, 34, 282-301.
Dunn, L. M., & Dunn, D. M. (2007). Peabody Picture Vocabulary Test (4th ed.). Minneapolis, MN: NCS Pearson.
Ferrand, L., Segui, J., & Grainger, J. (1996). Masked priming of word and picture naming: The role of syllabic units. Journal of Memory & Language, 35, 708-723.
Ferreira, V. S., & Pashler, H. (2002). Central bottleneck influences on the processing stages of word production. Journal of Experimental Psychology: Learning, Memory, & Cognition, 28, 1187-1199.
Garrett, M. F. (1988). Processes in language production. In F. J. Newmeyer (Ed.), Language: Psychological and biological aspects (pp. 69-96). New York: Cambridge University Press.
Garrett, M. F. (1992). Disorders of lexical selection. Cognition, 42, 143-180.
Gaskell, M. G., Quinlan, P. T., Tamminen, J., & Cleland, A. A. (2008). The nature of phoneme representation in spoken word recognition. Journal of Experimental Psychology: General, 137, 282-302.
Gilhooly, K. J., & Logie, R. H. (1980). Age of acquisition, imagery, concreteness, familiarity and ambiguity measures for 1944 words. Behaviour Research Methods & Instrumentation, 12, 395-427.
Grainger, J. (1990). Word frequency and neighborhood effects in lexical decision and naming. Journal of Memory & Language, 29, 228-244.
Hakim, H. B., & Bernstein Ratner, N. (2004). Nonword repetition abilities of children who stutter: An exploratory study. Journal of Fluency Disorders, 29, 179-199.
Hall, J., & Jerger, J. (1978). Central auditory function in stutterers. Journal of Speech & Hearing Research, 21, 324-337.
Hampton, A., & Weber-Fox, C. (2008). Non-linguistic auditory processing in stuttering: Evidence from behavior and event-related brain potentials. Journal of Fluency Disorders, 33, 253-273.
Hartfield, K. N., & Conture, E. G. (2006). Effects of perceptual and conceptual similarity in lexical priming of young children who stutter: Preliminary findings. Journal of Fluency Disorders, 31, 303-324.
Hennessey, N. W., Nang, C. Y., & Beilby, J. M. (2008). Speeded verbal responding in adults who stutter: Are there deficits in linguistic encoding? Journal of Fluency Disorders, 33, 180-202.
Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral & Brain Sciences, 33, 61-83.
Howell, P. (2004). Assessment of some contemporary theories of stuttering that apply to spontaneous speech. Contemporary Issues in Communication Science & Disorders, 31, 122-139.
Hudson, P. T., & Bergman, M. W. (1985). Lexical knowledge in word recognition: Word length and word frequency in naming and lexical decision tasks. Journal of Memory & Language, 24, 46-58.
Jiang, Y., Saxe, R., & Kanwisher, N. (2004). Functional magnetic resonance imaging provides new constraints on theories of the psychological refractory period. Psychological Science, 15, 390-396.
Karniol, R. (1995). Stuttering, language, and cognition: A review and a model of stuttering as suprasegmental sentence plan alignment (SPA). Psychological Bulletin, 117, 104-124.
Karrass, J., Walden, T. A., Conture, E. G., Graham, C. G., Arnold, H. S., Hartfield, K. N., & Schwenk, K. A. (2006). Relation of emotional reactivity and regulation to childhood stuttering. Journal of Communication Disorders, 39, 402-423.
Kemper, S., Schmalzried, R., Herman, R., Leedahl, S., & Mohankumar, D. (2009). The effects of aging and dual task demands on language production. Aging, Neuropsychology, & Cognition, 16, 241-259.
Kleinow, J., & Smith, A. (2000). Influences of length and syntactic complexity on the speech motor stability of the fluent speech of adults who stutter. Journal of Speech, Language, & Hearing Research, 43, 548-559.
Kubose, T. T., Bock, K., Dell, G. S., Garnsey, S. M., Kramer, A. F., & Mayhugh, J. (2006). The effects of speech production and speech comprehension on simulated driving performance. Applied Cognitive Psychology, 20, 43-63.
Laver, G. D., & Burke, D. M. (1993). Why do semantic priming effects increase in old age? A meta-analysis. Psychology & Aging, 8, 34-43.
Levelt, W. J. M., Roelofs, A., & Meyer, A. S. (1999). A theory of lexical access in speech production. Behavioral & Brain Sciences, 22, 1-75.
Levy, J., Pashler, H., & Boer, E. (2006). Central interference in driving: Is there any stopping the psychological refractory period? Psychological Science, 17, 228-235.
Lund, K., & Burgess, C. (1996). Producing high-dimensional semantic spaces from lexical co-occurrence. Behavior Research Methods, 28, 203-208.
Mahon, B. Z., Costa, A., Peterson, R., Vargas, K. A., & Caramazza, A. (2007). Lexical selection is not by competition: A reinterpretation of semantic interference and facilitation effects in the picture-word interference paradigm. Journal of Experimental Psychology: Learning, Memory, & Cognition, 33, 503-535.
Max, L., Caruso, A. J., & Gracco, V. L. (2003). Kinematic analyses of speech, orofacial nonspeech, and finger movements in stuttering and non-stuttering adults. Journal of Speech, Language, & Hearing Research, 46, 215-232.
Max, L., Guenther, F. H., Gracco, V. L., Ghosh, S. S., & Wallace, M. E. (2004). Unstable or insufficiently activated internal models and feedback-biased motor control as sources of dysfluency: A theoretical model of stuttering. Contemporary Issues in Communication Science & Disorders, 31, 105-122.
Maxfield, N. D., Huffman, J. L., Frisch, S. A., & Hinckley, J. J. (2010). Neural correlates of semantic activation spreading on the path to picture naming in adults who stutter. Clinical Neurophysiology, 121, 1447-1463.
Meyer, A. S. (1990). The time course of phonological encoding in language production: The encoding of successive syllables of a word. Journal of Memory & Language, 29, 524-545.
Meyer, A. S., & van der Meulen, F. F. (2000). Phonological priming effects on speech onset latencies and viewing times in object naming. Psychonomic Bulletin & Review, 7, 314-319.
Navon, D., & Miller, J. (2002). Queuing or sharing? A critical evaluation of the single-bottleneck notion. Cognitive Psychology, 44, 193-251.
Newman, R. S., & Bernstein Ratner, N. (2007). The role of selected lexical factors on confrontation naming accuracy, speech and fluency in adults who do and do not stutter. Journal of Speech, Language, & Hearing Research, 50, 196-213.
Onslow, M., & Packman, A. (2002). Stuttering and lexical retrieval: Inconsistencies between theory and data. Clinical Linguistics & Phonetics, 16, 295-298.
Packman, A., Onslow, M., Coombes, T., & Goodwin, A. (2001). Stuttering and lexical retrieval. Clinical Linguistics & Phonetics, 15, 487-498.
Pashler, H. (1994). Dual-task interference in simple tasks: Data and theory. Psychological Bulletin, 116, 220-244.
Pellowski, M. W., & Conture, E. G. (2005). Lexical priming in picture naming of young children who do and do not stutter. Journal of Speech, Language, & Hearing Research, 48, 278-294.
Perkins, W. H., Kent, R. D., & Curlee, R. F. (1991). A theory of neuropsycholinguistic function in stuttering. Journal of Speech & Hearing Research, 34, 734-752.
Peters, H. F. M., & Starkweather, C. W. (1990). The interaction between speech motor coordination and language processes in the development of stuttering: Hypotheses and suggestions for research. Journal of Fluency Disorders, 15, 115-125.
Pinker, S. (1994). The language instinct. New York: Morrow.
Postma, A., & Kolk, H. (1993). The covert repair hypothesis: Prearticulatory repair processes in normal and stuttered disfluencies. Journal of Speech & Hearing Research, 36, 472-487.
Rabovsky, M., Álvarez, C. J., Hohlfeld, A., & Sommer, W. (2008). Is lexical access autonomous? Evidence from combining overlapping tasks with recording event-related brain potentials. Brain Research, 1222, 156-165.
Riley, G. (2009). Stuttering Severity Instrument for Children and Adults (4th ed.). Austin, TX: Pro-Ed.
Roelofs, A. (2008a). Attention, gaze shifting, and dual-task interference from phonological encoding in spoken word planning. Journal of Experimental Psychology: Human Perception & Performance, 34, 1580-1598.
Roelofs, A. (2008b). Attention to spoken word planning: Chronometric and neuroimaging evidence. Language & Linguistics Compass, 2, 389-405.
Roodenrys, S., & Quinlan, P. T. (2000). The effects of stimulus set size and word frequency on verbal serial recall. Memory, 8, 71-78.
Sasisekaran, J., & de Nil, L. F. (2006). Phoneme monitoring in silent naming and perception in adults who stutter. Journal of Fluency Disorders, 31, 284-302.
Sasisekaran, J., de Nil, L. F., Smyth, R., & Johnson, C. (2006). Phonological encoding in the silent speech of persons who stutter. Journal of Fluency Disorders, 31, 1-21.
Sato, M., Tremblay, P., & Gracco, V. L. (2009). A mediating role of the premotor cortex in phoneme segmentation. Brain & Language, 111, 1-7.
Schiller, N. O. (1998). The effect of visually masked syllable primes on the naming latencies of words and pictures. Journal of Memory & Language, 39, 484-507.
Schiller, N. O. (1999). Masked syllable priming of English nouns. Brain & Language, 68, 300-305.
Schriefers, H., Meyer, A. S., & Levelt, W. J. M. (1990). Exploring the time course of lexical access in language production: Picture-word interference studies. Journal of Memory & Language, 29, 86-102.
Siegel, G. (2000). Demands and capacities or demands and performance? Journal of Fluency Disorders, 25, 321-327.
Smith, A. (1999). Stuttering: A unified approach to a multifactorial, dynamic disorder. In N. Bernstein Ratner & C. Healey (Eds.), Research and treatment of fluency disorders: Bridging the gap (pp. 27-44). Mahwah, NJ: Erlbaum.
Smith, A., & Kelly, E. (1997). Stuttering: A dynamic, multifactorial model. In R. F. Curlee & G. M. Siegel (Eds.), Nature and treatment of stuttering: New directions (2nd ed., pp. 204-217). Needham Heights, MA: Allyn & Bacon.
Smith, A., Sadagopan, N., Walsh, B., & Weber-Fox, C. (2010). Increasing phonological complexity reveals heightened instability in inter-articulatory coordination in adults who stutter. Journal of Fluency Disorders, 35, 1-18.
Smits-Bandstra, S., de Nil, L., & Saint-Cyr, J. A. (2006). Speech and nonspeech sequence skill learning in adults who stutter. Journal of Fluency Disorders, 31, 116-136.
Snodgrass, J. G., & Vanderwart, M. (1980). A standardized set of 260 pictures: Norms for name agreement, image agreement, familiarity, and visual complexity. Journal of Experimental Psychology: Human Learning & Memory, 6, 174-215.
Starkweather, C. W., & Gottwald, S. R. (1990). The demands and capacities model II: Clinical applications. Journal of Fluency Disorders, 15, 143-157.
Szekely, A., Jacobsen, T., D'Amico, S., Devescovi, A., Andonova, E., Herron, D., … Bates, E. (2004). A new on-line resource for psycholinguistic studies. Journal of Memory & Language, 51, 247-250.
Taylor, J. K., & Burke, D. M. (2002). Asymmetric aging effects on semantic and phonological processes: Naming in the picture-word interference task. Psychology & Aging, 17, 662-676.
Telford, C. W. (1931). The refractory phase of voluntary and associative responses. Journal of Experimental Psychology, 14, 1-36.
Toglia, M. P., & Battig, W. R. (1978). Handbook of semantic word norms. New York: Erlbaum.
Tombu, M., & Jolicœur, P. (2003). A central capacity sharing model of dual-task performance. Journal of Experimental Psychology: Human Perception & Performance, 29, 3-18.
Tsiamtsiouris, J., & Cairns, H. S. (2009). Effects of syntactic complexity and sentence-structure priming on speech initiation time in adults who stutter. Journal of Speech, Language, & Hearing Research, 52, 1623-1639.
van Lieshout, P. H. H. M., Hulstijn, W., & Peters, H. F. M. (1996). Speech production in people who stutter: Testing the motor plan assembly hypothesis. Journal of Speech & Hearing Research, 39, 76-92.
Vasić, N., & Wijnen, F. N. K. (2005). Stuttering as a monitoring deficit. In R. J. Hartsuiker, R. Bastiaanse, A. Postma, & F. Wijnen (Eds.), Phonological encoding and monitoring in normal and pathological speech (pp. 226-247). Hove, East Sussex: Psychology Press.
Walker, I., & Hulme, C. (1999). Concrete words are easier to recall than abstract words: Evidence for a semantic contribution to short-term serial recall. Journal of Experimental Psychology: Learning, Memory, & Cognition, 25, 1256-1271.
Weber-Fox, C. M. (2001). Neural systems for sentence processing in stuttering. Journal of Speech, Language, & Hearing Research, 44, 814-825.
Weber-Fox, C. M., & Hampton, A. (2008). Stuttering and natural speech processing of semantic and syntactic constraints on verbs. Journal of Speech, Language, & Hearing Research, 51, 1058-1071.
Weber-Fox, C. M., Spencer, R. M. C., Spruill, J. E., & Smith, A. (2004). Phonological processing in adults who stutter: Electrophysiological and behavioral evidence. Journal of Speech, Language, & Hearing Research, 47, 1244-1258.
Weber-Fox, C., Spruill, J. E., III, Spencer, R., & Smith, A. (2008). Atypical neural functions underlying phonological processing and silent rehearsal in children who stutter. Developmental Science, 11, 321-337.
Wheeldon, L. R., & Levelt, W. J. M. (1995). Monitoring the time course of phonological encoding. Journal of Memory & Language, 34, 311-334.
Wheeldon, L. R., & Morgan, J. L. (2002). Phoneme monitoring in internal and external speech. Language & Cognitive Processes, 17, 503-535.
Wheeldon, L. R., & Smith, M. C. (2003). Phrase structure priming: A short-lived effect. Language & Cognitive Processes, 18, 431-442.
Wijnen, F., & Boers, I. (1994). Phonological priming effects in stutterers. Journal of Fluency Disorders, 19, 1-20.
Wilson, M. D. (1988). The MRC Psycholinguistic Database: Machine readable dictionary, version 2. Behavior Research Methods, Instruments, & Computers, 20, 6-11.
Winer, B. J., Brown, D. R., & Michels, K. M. (1991). Statistical principles in experimental design (3rd ed.). New York: McGraw-Hill.