EuroSLA30 | The 30th Conference of the European Second Language Association
Session 5C, Virtual Room, 2021/07/02, 16:15-17:15 (Europe/Madrid), eurosla2021@ub.edu
Actively Producing Hand Gestures Mimicking Articulatory Features Favours the Long-Term Acquisition of Novel Words Containing Those Features
Paper presentation, Topic 1, Regular paper, 04:15 PM - 04:45 PM (Europe/Madrid), 2021/07/02 14:15:00 - 14:45:00 UTC
This study investigates the role of producing a fist-to-open hand gesture depicting the air burst of aspiration in the learning of L2 aspiration contrasts. In a between-subjects experiment with a pretest/posttest design, 67 Catalan participants with no knowledge of Mandarin Chinese were assigned to one of three conditions: (a) producing speech while performing gestures (gesture group), (b) producing speech only (no gesture group), or (c) a control condition in which no training was provided. All three groups completed an identification task and an imitation task administered before, immediately after, and three days after the training. The two experimental groups additionally learned to memorize Mandarin words contrasting in aspiration, with or without gestures, and their learning outcomes were measured by a word-meaning association task. Performance on the identification and word-meaning association tasks was measured by means of identification and word-meaning recognition scores, while the imitation task was perceptually rated for the overall pronunciation of the target words and acoustically analyzed for the VOT ratio of the target consonants (the VOT of the target plosive relative to the duration of the first syllable containing that plosive). In addition, the gesture group was divided into a well-performed gesture group and a poorly-performed gesture group according to the appropriateness of their gesture performance during the pronunciation training. Accordingly, the results were analyzed across four groups: no gesture, well-performed gesture, poorly-performed gesture, and control.
The results revealed that (a) the well-performed gesture group significantly improved the VOT ratio of the aspirated plosives from pretest to posttest and maintained the size of the training effect at delayed posttest, and (b) only the well-performed gesture group showed continuous improvement in overall pronunciation accuracy from pretest to delayed posttest. These results suggest that well-performed gestures can lead to better pronunciation than the other conditions. However, the appropriateness of gesture performance did not play a positive role in identification scores. As for vocabulary learning, although participants' word-meaning recognition scores decayed from posttest to delayed posttest, only the no gesture group showed a significant decrease, indicating that gestural training helped strengthen and maintain the link between phonological form and semantic meaning. These results suggest that visuospatial hand gestures facilitate L2 segmental pronunciation and lexical learning, and that learners' gesture performance plays a crucial role in L2 pronunciation. It is therefore important to monitor learners' gesture performance in the context of embodied learning.
First exposure to sign language: Evidence of learning after just four minutes of naturalistic sign language input
Paper presentation, Topic 1, Regular paper, 04:45 PM - 05:15 PM (Europe/Madrid), 2021/07/02 14:45:00 - 15:15:00 UTC
The adult capacity to learn implicitly from language input at first exposure remains poorly understood despite work on artificial and natural languages in the written and spoken modalities, both inside and outside of classrooms. Specifically, it remains unclear what can be learned from continuous (spoken) language input in the absence of pre-training of individual items, and in a non-didactic setting. Some evidence suggests that adults can extract word-form information and map form to meaning from spoken language even after only a few minutes of contact, depending on properties of the input such as word-form frequency and syllable structure (Gullberg et al., 2010, 2012). Given that input properties matter, it becomes relevant to ask what the adult learning mechanism can do if linguistic input is only visual, as in the case of sign languages.

This study therefore tested what (hearing) adults with no previous exposure to sign languages can learn implicitly during first exposure to naturalistic, continuous sign language production. We adapted Gullberg and colleagues' materials to Swedish Sign Language (SSL). We tested whether, after just 4 minutes of viewing a naturalistic weather forecast presented in SSL, sign-naïve adults (n=47, age range=18-40 years, L1=English) were able to recognise which signs they had and had not seen in that forecast. We manipulated the frequency of the target signs and the phonological plausibility of the distractor items, resulting in 3 sets of 22 stimuli:
• Target signs: 11 high-frequency items presented 8 times and 11 low-frequency items presented 3 times in the forecast. High- and low-frequency signs were matched for iconicity (as rated on a 7-point scale from low to high iconicity by a separate group of sign-naïve adults).
• Phonologically plausible distractors: signs not present in the forecast, but phonologically similar to target signs.
• Phonologically implausible distractors: signs not present in the forecast, and involving phonologically dispreferred patterns across the world's sign languages.

After viewing the weather forecast, participants were presented with each of the 66 individual signs (in a different random order for each participant), and they had to register via a button press whether or not they had seen the sign in the forecast. The sign recognition task results revealed that participants' response accuracy was not random. Recognition accuracy was greater for high- than for low-frequency signs (60% versus 44%, t(46)=5.517, p<.001). Although iconicity correlated moderately with accuracy (r(22)=.420, p=.052), the frequency effect was constant across both high- and low-iconicity items (frequency*iconicity interaction: F(1,46)=0.075, p=.785). Implausible distractor signs were rejected more often than plausible distractors (65% versus 60% accuracy, t(46)=2.188, p=.034).

We conclude that during just 4 minutes of naturalistic exposure, sign novices were able to segment signs from the continuous sign stream, to create memory traces for signs (aided by frequency and iconicity), and to extract information about what makes a sign more or less phonologically plausible. Importantly, the results suggest that the adult mechanism for language learning operates similarly on sign and spoken languages as regards frequency, but also exploits modality-salient properties such as iconicity for sign languages.

References:
Gullberg, M., Roberts, L., & Dimroth, C. (2012). What word-level knowledge can adult learners acquire after minimal exposure to a new language? International Review of Applied Linguistics in Language Teaching, 50, 239-276.
Gullberg, M., Roberts, L., Dimroth, C., Veroude, K., & Indefrey, P. (2010). Adult language learning after minimal exposure to an unknown natural language. Language Learning, 60, 5-24.
Presenter: Chloe Marshall, UCL Institute of Education