Abstract: |
PURPOSE: Relatively recent research documents that visual choral speech, an externally generated form of synchronous visual speech feedback, significantly enhances fluency in those who stutter. As a consequence, it was hypothesized that self-generated synchronous and asynchronous visual speech feedback would likewise enhance fluency. Therefore, the purpose of this study was to investigate the effects of self-generated visual feedback (i.e., synchronous speech feedback via a mirror and asynchronous speech feedback via delayed visual feedback) on overt stuttering frequency in those who stutter. METHOD: Eight people who stutter (4 males, 4 females), ranging from 18 to 42 years of age, participated in this study. Due to the nature of visual speech feedback, the speaking task required that participants recite memorized phrases in control and experimental speaking conditions, so that visual attention could be focused on the speech feedback rather than on a written passage. During experimental conditions, participants recited memorized phrases while simultaneously focusing on the movement of their lips, mouth, and jaw within their own synchronous (i.e., mirror) and asynchronous (i.e., delayed video signal) visual speech feedback. RESULTS: Results indicated that the self-generated visual feedback speaking conditions significantly decreased stuttering frequency (Greenhouse-Geisser p < .001); post hoc orthogonal comparisons revealed no significant difference in stuttering frequency reduction between the synchronous and asynchronous visual feedback speaking conditions (p = .2554). CONCLUSIONS: These data suggest that synchronous and asynchronous self-generated visual speech feedback is associated with significant reductions in overt stuttering frequency. Study results are discussed relative to existing theoretical models of fluency enhancement via speech feedback, such as the engagement of mirror neuron networks, the EXPLAN model, and the Dual Premotor System Hypothesis. Further research in the area of self-generated visual speech feedback, as well as into theoretical constructs accounting for how exposure to multi-sensory speech feedback enhances fluency, is warranted. Learning outcomes: Readers will be able to (1) discuss the multi-sensory nature of fluency-enhancing speech feedback, (2) compare and contrast synchronous and asynchronous self-generated and externally generated visual speech feedback, and (3) compare and contrast self-generated and externally generated visual speech feedback. |