
The Effect of Asynchronous Computer-Mediated Condition on Iranian Undergraduate Learners' Speaking Ability and Behavioral Engagement

[1]Matin Ramak

[2]Hossein Siahpoosh*

[3]Mehran Davaribina

Research Paper                                                   IJEAP- 2109-1786        DOR: 20.1001.1.24763187.2021.10.4.4.4                                                      

Received: 2021-09-26                                   Accepted: 2021-12-04                                  Published: 2021-12-11

Abstract

The objective of this study was to examine the effect of an asynchronous computer-mediated communication condition on intermediate English language learners' engagement with speaking tasks and their speaking accuracy, fluency, and complexity. The participants were 40 intermediate undergraduate students who were assigned to a face-to-face group and a computer-mediated group. In the face-to-face group, the participants completed their speaking tasks in class, while in the computer-mediated condition, the learners used an asynchronous computer-mediated tool (Edmodo) to accomplish the speaking tasks. During each task, the participants watched a monologue video, analyzed it, presented a similar monologue, and exchanged comments on their peers' oral products. The findings indicated the superiority of the computer-mediated condition over the face-to-face one in improving the foreign language learners' speaking accuracy and fluency; however, the complexity scores were not significantly different across the two conditions. The findings also showed that the learners in the computer-mediated condition provided a higher number of form language-related episodes and produced longer monologues.

Keywords: computer-mediated communication, behavioral engagement, speaking accuracy, speaking fluency, speaking complexity

  1. Introduction

Speaking is one of the main language skills that one should master to communicate orally with other speakers of a language. The advent of new technologies and the development of communication across the globe have motivated people in different countries to communicate for commercial and non-commercial purposes. However, while most people learn to speak their mother tongue easily, most L2 (referring to both second and foreign language) learners have a hard time developing their L2 speaking ability (Bahari, 2021; Lin, 2020). As a result, practitioners have experimented with different teaching options to improve the efficacy of their second language teaching practices and boost their learners' speaking ability (Derakhshan et al., 2016; Zeinali Nejad et al., 2021).

In recent years, teachers have integrated computerized technologies into their classes to help their learners improve their second language ability (Hoomanfard & Rahimi, 2020; Tsai, 2019). They have benefited from a host of learner-computer and computer-mediated technologies to provide their learners with optimum conditions for acquiring a new language (Liu et al., 2019). Moreover, the promising results of prior studies have motivated them to examine different technologies in new areas to identify the affordances of each technology in a given context. One area of second language teaching and learning that has remained under-explored is the possible effect of asynchronous tools on learners' speaking ability. Litman et al. (2018) argue that, due to the difficulties of researching L2 learners' speaking ability, there are a large number of gaps in the L2 speaking literature which need to be filled through further empirical studies.

One of the niches in the literature addressed in this study is the examination of L2 learners' behavioral engagement with asynchronous speaking tasks and their English language speaking ability. Engagement with instructional tasks, defined as the extent to which learners spend time on activities (Fredricks et al., 2004), has been reported to be a determining factor in the success of instruction (Razmjoo & Hoomanfard, 2012; Lee & Drajati, 2019); however, to the best of the researchers' knowledge, no previous study has investigated L2 students' behavioral engagement with asynchronous speaking tasks together with their English language speaking ability (accuracy, fluency, and complexity development). This study aims to fill this gap and thereby contribute to the literature on both second language pedagogy and computer-assisted language learning (CALL). It examines the extent to which an asynchronous computer-mediated condition can facilitate intermediate EFL (English as a Foreign Language) learners' engagement with speaking tasks and improve their speaking ability. The following two research questions guided the present study:

Research Question One: Is there any significant difference between the effects of computer-mediated and face-to-face conditions on L2 learners' speaking fluency, accuracy, and complexity?

Research Question Two: Is there any significant difference between the behavioral engagement levels of the participants in computer-mediated and face-to-face groups?

  2. Literature Review

2.1. Computer-Assisted Language Learning

Computerized technologies have been extensively used in the teaching and learning of different subject matters. Language pedagogy is not an exception, and the review of the literature on second language instruction shows the growing use of technologies to facilitate the internalization process of different items (Hajimaghsoodi & Maftoon, 2020). In recent years, advances in the production of portable devices such as smartphones, tablets, and laptops have increased the integration of computer applications into second/foreign language programs (Parmaxi & Demetriou, 2020).

The use of computers in L2 teaching and learning contexts has been supported both theoretically and empirically. As the systematic review conducted by Blake (2017) reflects, social constructivism is the main theory supporting the use of technologies in L2 teaching and learning. Based on social constructivism, learners move from other-regulation to self-regulation (Lantolf & Poehner, 2011), and mediational tools such as dialogues or computerized technologies (Kim, 2020) can shape the quality and quantity of learners' cognitive development. The examination of the literature on computer-assisted L2 pedagogy shows that several prior empirical studies have reported a remarkable effect of the learning condition on learners' L2 development (e.g., Zhang & Zou, 2020; Zou & Thomas, 2019). Based on social constructivism and the ecological perspective on language learning, several studies have been conducted either to compare traditional (non-computer-assisted) conditions with computer-mediated ones or to compare synchronous and asynchronous conditions in order to identify the affordances and challenges of each instructional option. In line with the topic of the present study, the next section provides a review of the main studies conducted on computer-mediated speaking instruction.

2.2. Computer-Mediated Communication in Speaking Instruction

When it comes to the speaking skill, computer-assisted language learning can be practiced either in a computer-learner interaction form or in a learner-learner fashion with a computer as the mediational tool. In the former case, also known as tutorial CALL (Guillén, 2015), learners speak with a computer to practice pronunciation or elementary speaking interactions (Warschauer & Healey, 1998). Learner-computer interaction was once associated with the behavioristic approach to L2 acquisition, in which drill and practice were the main techniques, and positive and negative feedback provided by computers was employed to form desirable habits (Warschauer & Ware, 2006). However, in recent years, artificial intelligence has advanced so remarkably that computers can function as active interactants in complex conversations (Blake, 2017).

In the latter case, computers serve as mediational tools which enable interactants to communicate. This use of computers in second language pedagogy is known as computer-mediated language learning, or social CALL (Guillen, 2015). Computer-mediated communication can be either synchronous, carried out in real-time, or asynchronous, done with delay in exchanging data. These alternatives bring about their own affordances and challenges (Skehan, 2003) and can benefit learners in developing their different language skills or components. Therefore, the selection of the right computer-mediated option depends on the tasks which are supposed to be practiced in a program.

A wide range of applications and platforms have been used to improve learners' L2 speaking (see Blake, 2017 for a review of these tools), but computer-mediated communication platforms have become the most common option because of the benefits they bring to L2 speaking instruction contexts. Initially, due to technological limitations, researchers examined the effect of written asynchronous communication on learners' speaking ability (Payne & Whitney, 2002) and found positive effects of written interactions on learners' L2 oral performance. With the introduction of VoIP (Voice over IP) in the first years of the third millennium, researchers started to examine the affordances of two-way synchronous and asynchronous speaking platforms.

Guillén and Blake (2017) found that asynchronous speaking postings helped students produce more complex sentences than when they interacted synchronously. Alshahrani (2016), who examined the effect of videoconferencing on learners' oral performance, did not find any significant difference between the pre-test and post-test mean scores. Hsu (2016) also investigated the impact of voice blogging on learners' oral performance, and the findings of that study revealed a significant improvement in the participants' complexity scores, but no significant improvement in accuracy or fluency was observed. In a more recent study, Chen and Tseng (2019) employed Facebook to examine the effect of asynchronous out-of-class speaking practice on learners' speaking ability and found that the learners improved their oral performance. They argued that the use of asynchronous computer-mediated practice enabled the students to experiment with a wide range of structures and lexical items, which could explain their better performance in the post-test. Furthermore, receiving feedback from their peers and teachers helped learners notice the gaps in their linguistic repertoire.

In an Iranian context, Farangi et al. (2015) found that podcasting improved intermediate students' speaking ability during a term, and the mean score of the experimental group, in which podcasting was practiced, was significantly higher than that of the control group, in which the interactions were face-to-face. A few years later, Ebadi and Asakereh (2018) did not find asynchronous speaking practice significantly more efficacious than the traditional face-to-face condition in improving learners' speaking accuracy. Although these studies have deepened our understanding of the issue, they suffered from some methodological deficiencies. The study conducted by Guillén and Blake (2017), for example, compared asynchronous with synchronous conditions, but there was no face-to-face group. The research conducted by Ebadi and Asakereh (2018) focused only on accuracy, and the other well-established criteria of second language performance, i.e., fluency and complexity (Ellis & Yuan, 2004), were not included in their study. Furthermore, in the studies conducted by Alshahrani (2016), Chen and Tseng (2019), and Farangi et al. (2015), the researchers did not provide separate scores for the different measurement criteria (accuracy, fluency, and complexity), and a single score was reported to reflect each participant's speaking ability. This deprives us of a thorough understanding of the details of the learners' improvement. In addition, the studies conducted by Alshahrani (2016) and Hsu (2016) used a single-group pre-test post-test design, which might jeopardize the internal validity of the findings (Ary et al., 2010).

2.3. Learner Engagement

An ample body of evidence in general education has shown that learners' engagement with tasks can result in desirable educational outcomes (Amiryousefi, 2017; McGuinness & Fulton, 2019). Both educationalists and educational psychologists have emphasized the importance of learners' engagement with tasks (Christenson et al., 2012; Cornelius et al., 2019). The examination of studies conducted in different disciplines reveals that high levels of learner engagement are linked to "critical thinking skills, persistence, and satisfaction with learning experiences" (Zhang & Zou, 2020, p. 3).

One of the first conceptualizations of learner engagement was provided by Natriello (1984), who defined it as the active participation of a learner in a given task. In a more elaborate conceptualization, Fredricks et al. (2004) included three aspects: behavioral engagement (learners' time spent on task or participation), cognitive engagement (attention and mental effort), and emotional engagement (learners' interest, enthusiasm, and enjoyment). In recent years, however, the concept has been expanded, and dimensions such as social engagement, referring to interactions and collaborations (Mohammadi, 2017; Philp & Duchesne, 2016), and agentic engagement, "students' intentional, proactive, and constructive contribution into the flow of the instruction they receive" (Reeve, 2012, p. 161), have been added to the literature of learner engagement, and all or some of these dimensions are included in different studies.

Learner engagement is a relatively new concept in the language learning literature; however, it is emerging as a significant variable in second language development. Recently, Ellis (2019) has argued that learners' engagement during task performance is crucial for them to notice and establish form-meaning links. In the L2 learning literature, learner engagement has been defined as a condition of "heightened attention and involvement" in a learning task (Philp & Duchesne, 2016, p. 51). It seems that Ellis (2019) and Philp and Duchesne (2016) mainly focus on the cognitive dimension of engagement, which is tightly related to the other dimensions but cannot be taken as the only aspect of engagement. Similarly, Amiryousefi (2017) has used the general term learner engagement to refer to behavioral engagement. While in practice most studies focus on one dimension of engagement, overlooking the multidimensionality of learner engagement when defining it seems misleading.

Aubrey et al. (2020) have listed a host of factors which can cause learners' engagement or disengagement with an educational activity. The learner-level set was the first category and included learners' attitudes toward learning a second language, learners' perceptions of language skills, and their affective, cognitive, and physical states. The second category was lesson-level and included preparation for the lesson. The third category was task-related and included task design, opportunity to speak, focus on performance, confidence/anxiety, social collective factors, commitment to the task, desire to speak, and enjoyment. The post-task level was the last category and included learners' evaluation of and reflection on their performance.

2.4. Theoretical Framework

This study is mainly based on sociocultural theory, which assumes that human cognition is developed through interaction and that, in the process of learning, one moves from other-regulation to self-regulation within one's Zone of Proximal Development. This transition is mediated by mediational tools, which can affect the quality and quantity of the product (Lantolf, 2011). Prior L2 studies have supported the effect of mediational tools on the instructional product (e.g., Ajabshir, 2019; Baralt & Gurzynski-Weiss, 2011; Côté & Gaffney, 2021; Hoomanfard & Rahimi, 2020). In this study, the mediational effect of the computer-mediated condition on learners' L2 speaking development and engagement with the tasks was examined.

Furthermore, to examine the participants' engagement with the provided tasks, the notion of languaging, taken from sociocultural theory, was used in this study. Languaging refers to learners' use of language as a mediator to objectify their knowledge, which can then be assessed, negated, added to, or modified. Languaging is an integral aspect of human thinking and the meaning-making self, and the basis of higher mental processes such as consciousness or rehearsing information to be learnt (Swain et al., 2010). To analyze learners' engagement in languaging, the language-related episode (LRE) has been suggested as the unit of analysis. An LRE is defined by Swain and Lapkin (2000, p. 63) as "any part of a dialogue where the students talk about the language they are producing, question their language use or correct themselves and others" and, consequently, helps them improve their language ability. In this study, the effect of face-to-face versus computer-mediated speaking conditions on the number of LREs provided by the participants was studied.

  3. Method

3.1. Design of the Study

This research project was conducted to examine the extent to which intermediate L2 learners in face-to-face and computer-mediated groups engaged with speaking tasks and improved their speaking accuracy, fluency, and complexity over a 20-session term. The study employed a quasi-experimental design to examine the effects of the different instructional conditions and, unlike most prior studies, examined learners' fluency, accuracy, and complexity separately to provide a detailed analysis of learners' L2 development. Another gap addressed in this study was the trajectory of learners' engagement with tasks in computer-mediated and face-to-face conditions throughout a term. The participants' engagement indices were recorded three times during the study to shed light on possible changes in the learners' engagement with the tasks.

3.2. Participants

The participants of this study were 40 intermediate-level students who were studying English at a university in Iran. They were all female, and their ages ranged between 18 and 35 years (M = 25.7, SD = 2.8). The participants were all native speakers of Farsi who were selected based on convenience sampling and were randomly assigned to the face-to-face and computer-mediated conditions. To check their English language proficiency, the participants took the Oxford Proficiency Test two weeks before the term commenced. Their scores ranged between 49 and 67, reflecting an intermediate level (M = 58.4, SD = 4.7), and there was no significant difference between the mean scores of the two groups (t(38) = .592, p = .558).
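This homogeneity check is an independent-samples t-test on the proficiency scores. As a minimal illustration, the sketch below runs such a test with scipy on placeholder arrays; the data are fabricated, and the study does not report which software was used.

```python
# Minimal sketch of the proficiency homogeneity check: an independent-samples
# t-test comparing the two groups' Oxford Proficiency Test scores.
# The score arrays below are placeholders, not the study's actual data.

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)

f2f_opt = rng.normal(58, 5, 20)   # 20 face-to-face learners (placeholder scores)
cmc_opt = rng.normal(59, 5, 20)   # 20 computer-mediated learners (placeholder scores)

t_stat, p_value = ttest_ind(f2f_opt, cmc_opt)   # df = 20 + 20 - 2 = 38
print(f"t(38) = {t_stat:.3f}, p = {p_value:.3f}")
```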

3.3. Instruments and Materials

3.3.1. Oxford Proficiency Test

To test the English language proficiency of the participants, the researchers employed the Oxford Proficiency Test, which is a well-established English language proficiency test. The test includes reading, structure, vocabulary, and writing sections. Each correct answer was given one point. The learners were allotted 45 minutes for the first part of the test (the reading, structure, and vocabulary sections) and 20 minutes for the writing section.

3.3.2. Speaking Tasks

Two tasks were employed to uncover the effects of the different learning conditions on learners' speaking development. These tasks were extracted from IELTS Academic 14: With Answers (2019), published by Cambridge University Press; the tasks included in this book are claimed to be authentic and taken from original IELTS tasks. The IELTS Speaking Part 2 task type was employed in this study since it requires the participants to read information provided on a card, prepare for one minute, and then talk for one to two minutes. The students were familiar with both topics (co-education and studying abroad) since they had been covered in four sessions of their instruction. The two tasks elicit the same cognitive processes, and they were not significantly different in terms of difficulty. The learners' performance on these tasks was recorded for further analysis.

3.3.3. Asynchronous Computer-Mediated Medium

Edmodo was employed as the asynchronous computer-mediated communication tool. Edmodo is an e-learning platform that can be used in educational settings to improve teaching and learning (Embi, 2011). It provides a free site for connecting students, teachers, parents, and administrators in a digital environment and is a popular global educational network offering communication, collaboration, and training features intended to help students reach their full learning potential. Moreover, a captivating feature of Edmodo is that the free Edmodo mobile app allows learners to access any recorded materials anytime and anywhere, and the platform is regarded as a convenient course management tool that helps teachers manage their online classes easily (Mokhtar & Dzakiria, 2015; Wallace, 2013).

3.4. Experimental Design

This study used a quasi-experimental research design. After the participants were assigned to the experimental (computer-mediated) and conventional (face-to-face) groups, the 20-session term started. The same books and units were covered in both classes, and the same teacher moderated them. In both classes, the participants had to complete one speaking task in each session. In the conventional group, the last 45 minutes of each session were allocated to speaking practice, whereas in the computer-mediated group, the students completed their speaking tasks and exchanged questions and answers online. Each session, the students were randomly put in groups of four. The groups changed every session to let learners become familiar with different speaking styles and be exposed to different structural and lexical items. The students in both groups practiced how to analyze a monologue in the first three sessions. The learners were invited to pay attention to the function of different sections, the grammatical structures, the vocabulary, and the discourse markers. The teacher provided a model, asked some students to analyze the monologue, and then asked all students to examine it. Then, she provided her students with feedback on the accuracy and comprehensiveness of their analyses.

 

Figure 1: A Sample of the Asynchronous Meetings

Each task began with the presentation of a video clip that provided the learners with a sample monologue. The videos were around five minutes long and were suitable for intermediate English language learners. The participants in the conventional group then had 10 minutes to analyze the monologue with regard to its grammatical structures, vocabulary, and organization (the function of different sections) and to raise their questions about the video with their peers. After a three-minute preparation time, they had to produce monologues of at most five minutes. The participants were allowed to use their notes while presenting their monologues, but they were not allowed to read their texts aloud. Finally, they provided feedback on their classmates' performance. The participants had to present their revised monologues in the next session.

The same steps were followed in the asynchronous computer-mediated class. The same videos were used in this group, but the analysis time was 24 hours. The videos were uploaded to the class webpage, and the participants had one day to watch each video and share their own analyses with their peers. During this period, the participants could analyze the monologue and ask and answer questions about the meaning of the lexical items, the grammatical structures, or the organization of the monologue. Then, they had one day to record their own monologues and post them online. The participants had one chance to upload their videos; a second attempt was not allowed. Next, they had one day to give feedback on their peers' performance. Finally, the speakers had to upload a second recording of their monologues (task repetition) before the next session. All these interactions were carried out asynchronously.

3.5. Data Collection and Analysis

The data required to answer the research questions were collected over a four-month period. Data collection started with administering the English language proficiency test two weeks before the term. The learners' speaking pre-test scores were collected in the first session. To examine the participants' engagement with the tasks provided in the different conditions, their engagement with the speaking tasks in three sessions (sessions 5, 12, and 20) was recorded. In the face-to-face group, the interactions were recorded using a cellphone, and in the computer-mediated group, the learners' interactions were automatically recorded on the website. The last data collection stage was the speaking post-test, which was administered in the 21st session.

Table 1: Data Collection Steps

English language proficiency test: Two weeks before the term
Speaking pre-test: Session 1
Collecting task performance data: Sessions 5, 12, and 20
Speaking post-test: Session 21

The learners' English language proficiency scores were computed straightforwardly using the answer key provided by the test publisher. To answer the first research question, the participants' speaking scores on the pre- and post-tests were analyzed using six measures (Table 2). To answer the second research question, the number of content language-related episodes, the number of form language-related episodes, and monologue length were used as indices of the participants' behavioral engagement with the speaking tasks; a minimal sketch of how these counts could be tallied is given below. Mixed ANOVAs were employed to examine the collected data across time and condition.
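The sketch below is a minimal, hypothetical illustration of how the three behavioral engagement counts could be tallied from coded transcripts; the record structure and field names are assumptions, and the paper does not state whether monologue length was counted in words or seconds (a word count is assumed here).

```python
# Hypothetical tally of the three behavioral engagement measures from manually
# coded transcripts: content LREs, form LREs, and monologue length.
# The data structure and field names are illustrative, not the study's coding scheme.

from dataclasses import dataclass
from typing import List


@dataclass
class SessionRecord:
    learner_id: str
    session: int               # e.g., 5, 12, or 20
    lre_codes: List[str]       # one entry per coded episode: "form" or "content"
    monologue: str             # transcript of the learner's recorded monologue


def engagement_counts(record: SessionRecord) -> dict:
    """Return the engagement measures for one learner in one session."""
    return {
        "learner_id": record.learner_id,
        "session": record.session,
        "content_LREs": sum(code == "content" for code in record.lre_codes),
        "form_LREs": sum(code == "form" for code in record.lre_codes),
        "monologue_length": len(record.monologue.split()),  # word count (assumed unit)
    }


# Example with a fabricated record:
print(engagement_counts(SessionRecord("S01", 5, ["form", "content", "form"],
                                      "I would like to talk about studying abroad ...")))
```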

Table 2: Assessment Measures

Accuracy
a. Error-free clauses: The ratio of clauses that contain no errors. All syntactic, morphological, and lexical errors were taken into consideration; any error excludes a clause from being error-free. (Ellis & Yuan, 2004; Yuan & Ellis, 2003)
b. Correct verb forms: The ratio of all verbs used correctly in terms of tense, aspect, modality, and subject-verb agreement. (Ellis & Yuan, 2004; Yuan & Ellis, 2003)

Fluency
a. Mean length of run: The mean number of syllables produced between pauses longer than 0.28 seconds. (Ellis & Yuan, 2004; Yuan & Ellis, 2003)
b. Speech rate: The number of syllables produced per minute. (Ellis & Yuan, 2004; Yuan & Ellis, 2003)

Complexity
a. Syntactic complexity, mean length of clause (MLC): The ratio of the number of words to the number of clauses in the participants' production. (Wolfe-Quintero et al., 1998)
b. Lexical complexity, number of different words (expected random 50; NDW-ER50): The mean number of different words across 10 random 50-word samples of the participants' production. (Ellis & Yuan, 2004)
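The fluency and lexical complexity measures above are computable once pauses and syllables have been annotated. The sketch below is one possible implementation under those assumptions (syllable onset times from an annotation tool, a 0.28-second pause threshold as in Table 2); the clause- and error-based measures require manual coding and are not sketched. The function names and input format are illustrative only.

```python
# Hypothetical implementation of the temporal and lexical measures in Table 2.
# Input: syllable onset times (seconds) for one monologue and its word list.

import random
from typing import List

PAUSE_THRESHOLD = 0.28  # seconds, the pause cut-off used for mean length of run


def mean_length_of_run(onsets: List[float]) -> float:
    """Mean number of syllables produced between pauses longer than 0.28 s."""
    runs, current = [], 1
    for prev_t, t in zip(onsets, onsets[1:]):
        if t - prev_t > PAUSE_THRESHOLD:
            runs.append(current)   # a pause closes the current run
            current = 1
        else:
            current += 1
    runs.append(current)
    return sum(runs) / len(runs)


def speech_rate(n_syllables: int, duration_seconds: float) -> float:
    """Syllables per minute over the whole monologue."""
    return n_syllables / (duration_seconds / 60)


def ndw_er50(words: List[str], samples: int = 10, sample_size: int = 50) -> float:
    """NDW-ER50: mean number of different words over 10 random 50-word samples."""
    counts = [len(set(random.sample(words, min(sample_size, len(words)))))
              for _ in range(samples)]
    return sum(counts) / samples
```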

 

To compare the speaking performance of the participants across time and groups, mixed ANOVAs were employed. The speaking samples were rated by two raters (one of the authors of this paper and an experienced associate professor of applied linguistics), and an inter-rater reliability of .92 was achieved. Discrepancies were discussed in an extensive meeting until unanimous decisions were reached.
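As a rough illustration of this analysis, the sketch below runs a 2 (time: pre/post) x 2 (group: face-to-face/CMC) mixed ANOVA on one measure using the pingouin library, together with a Pearson correlation as an inter-rater check. The data frame, column names, and choice of library are assumptions; the paper does not report which software was used.

```python
# Hypothetical illustration of the 2 (time) x 2 (condition) mixed ANOVA and an
# inter-rater Pearson correlation. All data below are fabricated placeholders.

import numpy as np
import pandas as pd
import pingouin as pg
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Long format: 40 learners x 2 testing times, with a between-subject group factor.
df = pd.DataFrame({
    "subject": np.repeat(np.arange(1, 41), 2),
    "group": np.repeat(["F2F"] * 20 + ["CMC"] * 20, 2),
    "time": ["pre", "post"] * 40,
    "error_free_clauses": rng.normal(0.6, 0.03, 80),   # placeholder scores
})

# Mixed ANOVA: within-subject factor = time, between-subject factor = group.
aov = pg.mixed_anova(data=df, dv="error_free_clauses",
                     within="time", between="group", subject="subject")
print(aov[["Source", "DF1", "DF2", "F", "p-unc", "np2"]])

# Inter-rater agreement for the speaking scores (two raters, fabricated values).
rater1 = rng.normal(0.6, 0.05, 40)
rater2 = rater1 + rng.normal(0, 0.02, 40)
r, _ = pearsonr(rater1, rater2)
print(f"Inter-rater correlation: r = {r:.2f}")
```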

  4. Results

4.1. Effect of Face-to-face Condition versus Computer-mediated Condition on Speaking Performance

To answer the first research question, the participants' performance in the pre-test was recorded. The scores obtained for different measures were compared across the two groups to ensure the absence of significant differences at the beginning of the study (Table 3).

Table 3: Mean Scores and Standard Deviations for Learners' Performance in the Pre-test and Post-test (standard deviations in parentheses)

Measure: Pre-test F2F; Pre-test CMC; Post-test F2F; Post-test CMC
Error-free clauses: .552 (.021); .5625 (.019); .667 (.029); .714 (.029)
Correct verb forms: .569 (.023); .5730 (.012); .672 (.019); .70 (.02)
Mean length of run: 4.42 (.23); 4.32 (.246); 4.91 (.18); 5.26 (.17)
Speech rate: 107.78 (5.6); 110.2 (3.80); 116.5 (5.14); 121.3 (2.9)
Mean length of clauses: 5.79 (.20); 5.64 (.34); 6.6 (.21); 6.52 (.34)
Number of different words: 34.5 (1.93); 34.25 (1.37); 1.98 (.44); 2.27 (.5)

As shown in Table 3, the mean scores of the participants in the face-to-face and computer-mediated groups were not significantly different at the outset (error-free clauses: t = 1.61, p = .114; correct verb forms: t = .674, p = .506; mean length of run: t = 1.384, p = .174; speech rate: t = 1.59, p = .12; mean length of clauses: t = 1.691, p = .101; number of different words: t = .472, p = .64). To identify the significance of the effects of time and condition on the learners' scores, mixed ANOVAs were run. The results of the mixed ANOVAs are provided in Table 4.

Table 4: Mixed ANOVA for Different Measures of Speaking Ability

Error-free clauses
Time (within-group): df = 1, MS = .356, F = 623.88, p = .001, η² = .943
Time*Group (within-group): df = 1, MS = .007, F = 11.59, p = .002, η² = .234
Group (between-groups): df = 1, MS = .017, F = 23.2, p = .001, η² = .379

Correct verb forms
Time (within-group): df = 1, MS = .264, F = 1590.3, p = .001, η² = .97
Time*Group (within-group): df = 1, MS = .003, F = 17.31, p = .001, η² = .313
Group (between-groups): df = 1, MS = .005, F = 9.007, p = .001, η² = .192

Mean length of run
Time (within-group): df = 1, MS = 10.34, F = 415.502, p = .001, η² = .916
Time*Group (within-group): df = 1, MS = 10.56, F = 42.39, p = .001, η² = .527
Group (between-groups): df = 1, MS = .311, F = 4.76, p = .001, η² = .111

Speech rate
Time (within-group): df = 1, MS = 1974.08, F = 1483.9, p = .001, η² = .975
Time*Group (within-group): df = 1, MS = 27.14, F = 20.45, p = .001, η² = .349
Group (between-groups): df = 1, MS = 257.04, F = 6.57, p = .014, η² = .147

Mean length of clauses
Time (within-group): df = 1, MS = 14.39, F = 2362.23, p = .000, η² = .98
Time*Group (within-group): df = 1, MS = .021, F = 3.46, p = .070, η² = .084
Group (between-groups): df = 1, MS = .276, F = 1.80, p = .187, η² = .045

Number of different words
Time (within-group): df = 1, MS = 1513.8, F = 530.6, p = .001, η² = .93
Time*Group (within-group): df = 1, MS = 1.8, F = .631, p = .432, η² = .016
Group (between-groups): df = 1, MS = 6.05, F = 1.33, p = .255, η² = .034

As shown in Table 4, there were significant main effects of time, F(1, 38) = 623.88, p < .01, η² = .943 for error-free clauses and F(1, 38) = 1590.3, p < .01, η² = .97 for correct verb forms, and of condition, F(1, 38) = 23.2, p < .01, η² = .379 for error-free clauses and F(1, 38) = 9.007, p < .01, η² = .192 for correct verb forms, on the learners' accuracy measures. The interaction of time and condition also had a significant effect on the accuracy measures, F(1, 38) = 11.59, p < .01, η² = .234 for error-free clauses and F(1, 38) = 17.31, p < .01, η² = .313 for correct verb forms. Considering the error-free clause mean scores (MF2F = .667, SD = .029 and MCMC = .71, SD = .030) and those for correct verb forms (MF2F = .672, SD = .018 and MCMC = .700, SD = .020), both groups improved significantly during the treatment, and the computer-mediated condition was significantly more successful than the face-to-face condition in improving the participants' speaking accuracy.

The second set of measures addressed the students' speaking fluency. The results of the mixed ANOVAs showed significant main effects of time, F(1, 38) = 415.5, p < .01, η² = .91 for mean length of run and F(1, 38) = 1484, p < .01, η² = .97 for speech rate, and of condition, F(1, 38) = 4.76, p < .01, η² = .111 for mean length of run and F(1, 38) = 6.57, p = .014, η² = .147 for speech rate, on the learners' fluency measures. Significant time * condition interactions were also observed, F(1, 38) = 42.39, p < .01, η² = .527 for mean length of run and F(1, 38) = 20.45, p < .01, η² = .349 for speech rate. Considering the mean length of run scores (MF2F = 4.91, SD = .18 and MCMC = 5.26, SD = .176) and the speech rate scores (MF2F = 116.55, SD = 5.14 and MCMC = 121.3, SD = 2.9), both groups improved their fluency during the term, but the fluency scores of the computer-mediated group were significantly higher than those of the face-to-face group.

Finally, the results showed that the main effects of time on the learners' speaking complexity were significant, F(1, 38) = 2362.2, p < .001, η² = .98 for mean length of clauses and F(1, 38) = 530.6, p < .01, η² = .93 for number of different words. However, the main effects of condition, F(1, 38) = 1.80, p = .187, η² = .045 for mean length of clauses and F(1, 38) = 1.33, p = .255, η² = .034 for number of different words, were non-significant, and so were the interactions of time and condition, F(1, 38) = 3.46, p = .070, η² = .084 for mean length of clauses and F(1, 38) = .631, p = .432, η² = .016 for number of different words. In other words, although the learners' complexity scores improved significantly from pre-test to post-test, neither condition proved superior in improving their speaking complexity.

4.2. Learners' Engagement with Speaking Tasks in Face-to-face and Computer-mediated Conditions

In this section, the participants' engagement with tasks in face-to-face and computer-mediated conditions is examined. To have a better understanding of the learners' engagement, their performance in three sessions (sessions 5, 12, and 20) was recorded and analyzed (Table 5).

Table 5: Means and Standard Deviations of the Engagement Measures (standard deviations in parentheses)

Session 5: Content LREs F2F 2.7 (.57), CMC 3.15 (.68); Form LREs F2F 4.3 (1.45), CMC 5.35 (2.05); Monologue length F2F 73.65 (11.67), CMC 78.35 (12.34)
Session 12: Content LREs F2F 3.9 (1.15), CMC 4.3 (1.41); Form LREs F2F 5.65 (1.81), CMC 6.25 (1.91); Monologue length F2F 81.75 (9.9), CMC 105.9 (13.66)
Session 20: Content LREs F2F 4.7 (1.49), CMC 5.3 (1.47); Form LREs F2F 6.55 (2.16), CMC 8.9 (2.18); Monologue length F2F 111.3 (10.4), CMC 118.7 (15.7)


Figure 2. Content LREs (the left chart) and Form LREs (the right chart)

As Table 5 and Figure 2 reveal, in both classes, the number of content and form language-related episodes and the length of the monologues increased during the term; to examine the significance of the effects of time and condition on the learners' engagement with the tasks, a series of mixed ANOVAs was run (Table 6).

Table 6: Mixed ANOVA for Different Measures of Behavioral Engagement

Content LREs
Time (within-group): df = 2, MS = 43.30, F = 14.79, p = .001, η² = .280
Time*Group (within-group): df = 2, MS = .108, F = .037, p = .848, η² = .001
Group (between-groups): df = 1, MS = 7.008, F = 2.451, p = .126, η² = .061

Form LREs
Time (within-group): df = 2, MS = 85.50, F = 20.53, p = .001, η² = .351
Time*Group (within-group): df = 2, MS = 16.51, F = 6.73, p = .006, η² = .057
Group (between-groups): df = 1, MS = 53.33, F = 24.78, p = .001, η² = .124

Monologue length
Time (within-group): df = 2, MS = 14692.3, F = 103.9, p = .001, η² = .732
Time*Group (within-group): df = 2, MS = 1047.5, F = 7.41, p = .001, η² = .163
Group (between-groups): df = 1, MS = 4750.2, F = 27.40, p = .001, η² = .419

As provided in Table 6, the main effects of time on the three engagement measures were significant, F(2, 76) = 14.79, p < .001, η² = .28 for content LREs, F(2, 76) = 20.53, p < .01, η² = .351 for form LREs, and F(2, 76) = 103.9, p < .001, η² = .732 for monologue length. In addition, there were significant main effects of condition, F(1, 38) = 24.78, p < .01, η² = .124 for form LREs and F(1, 38) = 27.40, p < .01, η² = .419 for monologue length, and significant Time*Group interactions, F(2, 76) = 6.73, p < .01, η² = .057 for form LREs and F(2, 76) = 7.41, p < .01, η² = .163 for monologue length. However, the Time*Group interaction and the main effect of condition for content language-related episodes were not significant, F(2, 76) = .037, p = .848, η² = .001 and F(1, 38) = 2.451, p = .126, η² = .061, respectively. In other words, while the participants' numbers of content and form language-related episodes and their monologue lengths increased during the term, the computer-mediated condition was superior only in yielding more form language-related episodes and longer monologues.

  5. Discussion

The findings of this study indicated that intermediate English language learners in both the face-to-face and computer-mediated conditions improved their speaking accuracy, fluency, and complexity scores during the term. However, the accuracy and fluency scores of those in the asynchronous computer-mediated group were significantly higher than those of the face-to-face group. Furthermore, the findings showed that the EFL learners were more actively engaged in the computer-mediated condition, as they generated significantly more form language-related episodes and presented longer monologues.

The significance of learner engagement with educational tasks for second language development is well established in the literature (e.g., Cornelius et al., 2019; Han & Hyland, 2015; McGuinness & Fulton, 2019; Mohammadi, 2017), and improvements in learner engagement seem to yield higher second language achievement (Ehsanifard et al., 2020). The findings of the present study revealed that the foreign language learners were more engaged in the computer-mediated condition. In line with sociocultural theory, prior studies (Ajabshir, 2019; Hoomanfard & Rahimi, 2020; Hsu, 2016; Mahmoudikia et al., 2014) have shown that computer-mediated communication, as a mediational tool, can significantly affect the quality and quantity of learner engagement and can determine the instructional product (learning). The affordances and limitations of a mediational tool can facilitate or hinder the process of second language development.

The results showed that, in comparison to the face-to-face condition, the computer-mediated condition was more successful in engaging learners in the exchange of language-related episodes in the form of feedback and responses to feedback. One reason for this finding might reside in the asynchronous nature of the technology employed in this study. Prior studies have shown that asynchronous computer-mediated tools provide a condition in which learners feel less cognitive pressure to interact in a second language since they have, in comparison with the face-to-face condition, plenty of time to formulate their thoughts, translate them into the second language using suitable grammatical structures and lexical items, and express them in written or oral form. Based on cognitive load theory, working memory has a limited capacity, and the complexity of the material (intrinsic load), learner characteristics (germane load), and the instructional procedure (extraneous load) impose different levels of cognitive load on a learner performing a specific task. The asynchronous condition in this study seems to have reduced the extraneous load by minimizing the temporal pressure on learners. This can free up part of the cognitive resources for learners to deal with the intrinsic load (Woolfolk, 2016), which in this study is the second language material.

Similarly, the lower extraneous load of the asynchronous computer-mediated condition can help those learners whose individual characteristics deter their active interaction in a second language. The literature on computer-mediated language learning shows that introverted learners (Hoomanfard & Rahimi, 2020), learners with low working memory capacity (Woolfolk, 2016), and L2 low-achievers (Lee, 2017) are more inclined to participate in asynchronous computer-mediated conditions because of their lower extraneous load, which enables learners to focus on the intrinsic load (the L2 material) and to engage more actively in L2 tasks by providing more form language-related episodes, which are known to be significant vehicles for learning a second language (Swain, 2013).

The findings of this study also indicated the superiority of the computer-mediated condition over the face-to-face one in improving learners' accuracy and fluency; however, there was no significant difference between the complexity scores. The higher accuracy and fluency scores can be explained by the well-established role of output in the process of second language learning. As Swain (1988) argues, generating L2 material pushes learners to process the second language more actively and deeply. The greater length of the monologues produced in the computer-mediated condition may have functioned as a mediational tool to improve the learners' use of syntactic structures in their products. The extended production length amounts to more practice time in English, which can result in learners' deeper engagement with syntactic processing and higher accuracy scores (Ehsanifard et al., 2020).

Another benefit of output is the feedback that producers receive on their performance. In the present study, a significantly higher number of form language-related episodes was recorded in the computer-mediated condition. As found in previous studies (Ellis, 2015; Scott & Fuente, 2008; Storch, 2013), a higher number of language-related episodes is correlated with higher second language achievement. The greater amount of form-focused languaging in the form of language-related episodes provided by the learners can also explain the higher accuracy mean score of the computer-mediated condition. Swain (2013) argues that while languaging, learners are exposed to a set of positive and negative evidence, which can refine their second language repertoire.

The higher fluency scores of the participants in the computer-mediated condition can be attributed to the learners' accuracy improvement. Considering the limited syntactic structures required by the tasks during the term and in the pre- and post-tests, and referring to Skehan's (2009) trade-off hypothesis, it can be argued that the learners (especially those in the computer-mediated condition) might have acquired the syntactic structures and were thereby able to speak more fluently. According to Skehan, "learners will have difficulty in focusing on all aspects of production at the same time and thus will prioritize one aspect to the detriment of the other aspects" (Ellis, 2015, p. 326). In this study, the mastered syntactic structures might have left learners with more working memory capacity to present more fluent oral products.

Contrary to the researchers' expectations, the computer-mediated condition did not yield higher complexity scores. Although the learners in both groups improved the complexity of their oral products, there was no significant difference between the two groups. Considering the requirements of the course the learners were taking, a general English course that was part of a 12-term educational program, the participants in both groups might have focused on the structures covered in the course and not found it necessary to go beyond the structures presented in their textbook. Thus, both groups reached high levels of complexity within the requirements of their own course, which can explain their significant improvement during the term and the non-significant differences across the conditions.

  6. Conclusion, Implications, Limitations of the Study, and Further Research

Based on the findings, it is recommended that second language teachers incorporate asynchronous computer-mediated technologies into their lessons so as to provide their learners with a mediational tool which can improve their engagement with speaking tasks and speaking accuracy and fluency. To improve learners' speaking ability, teachers can benefit from computer-mediated technologies to motivate learners' out-of-class oral interactions, which are either absent or highly limited in real-life EFL contexts. The extended practice of a second language in the form of presenting monologues and exchanging language-related episodes beyond the classroom can facilitate learners' second language development.

The findings of this study indicated the superiority of the computer-mediated condition. As Storch (2013) argued, in comparative studies where conventional and computerized conditions are juxtaposed, learners' outperformance in the computer-assisted condition can be attributed to a novelty effect. While part of the difference could be related to novelty, the students had used this application for writing instruction in the semesters preceding this research project and were familiar with the condition; it can therefore be cautiously concluded that the novelty effect had little or no influence on the results of this study.

This study suffered from three limitations which can be addressed in further research. First, the present study focused on learners' production of monologues. Although the production of an extended piece in a second language is a demanding task, it does not let the researcher assess the interactional competence required in dialogues. Therefore, other researchers can either focus on dialogues or both monologues and dialogues to understand the effect of asynchronous computer-mediated condition on their learners' performance.

Another limitation of this study, which could have affected the findings, was the focus of the course. The study was conducted in a general English course within a 12-term educational program, and the participants were not motivated to extend their linguistic ability beyond the requirements of the course; had the study been conducted in a preparation course for a high-stakes test (e.g., TOEFL or IELTS), the participants' scores (especially their complexity scores) might have been different. Thus, other researchers are invited to examine the effect of computer-mediated tools in other research contexts, with different educational goals, to complete the puzzle of computer-assisted language learning.

Finally, this research focused on learners' behavioral engagement with speaking tasks; other researchers can examine the other engagement dimensions to investigate whether and how different engagement aspects facilitate or hinder second language development. Researchers are therefore encouraged to conduct case studies to uncover how learners' attitudinal engagement can affect their use of learning strategies (cognitive engagement) or participation (behavioral engagement). Similarly, microgenetic analysis can be used to trace how learners' behavioral engagement leads to their second language acquisition.

References

Ajabshir, Z. F. (2019). The effect of synchronous and asynchronous computer-mediated communication (CMC) on EFL learners' pragmatic competence. Computers in Human Behavior, 92, 169-177.
Alshahrani, A. (2016). Communicating authentically: Enhancing EFL students' spoken English via videoconferencing. CALL-EJ, 17(2), 1-17.
Amiryousefi, M. (2017). The differential effects of collaborative vs. individual prewriting planning on computer-mediated L2 writing: Transferability of task-based linguistic skills in focus. Computer Assisted Language Learning, 30(8), 766-786.
Ary, D., Jacobs, L. C., Irvine, C. K. S., & Walker, D. (2018). Introduction to research in education. Cengage Learning.
Aubrey, S., King, J., & Almukhaild, H. (2020). Language learner engagement during speaking tasks: A longitudinal study. RELC Journal, 1-15.
Bahari, A. (2021). Computer-mediated feedback for L2 learners: Challenges versus affordances. Journal of Computer Assisted Learning, 37(1), 24-38.
Baralt, M., & Gurzynski-Weiss, L. (2011). Comparing learners' state anxiety during task-based interaction in computer-mediated and face-to-face communication. Language Teaching Research, 15(2), 201-229.
Blake, R. (2017). Technologies for teaching and learning L2 speaking. In C. Chapelle & S. Sauro (Eds.), The handbook of technology and second language teaching and learning (pp. 107-117). John Wiley & Sons.
Chen, S. Y., & Tseng, Y. F. (2019). The impacts of scaffolding e-assessment English learning: A cognitive style perspective. Computer Assisted Language Learning, 32(4), 1-23.
Christenson, S. L., Reschly, A. L., & Wylie, C. (Eds.). (2012). Handbook of research on student engagement. Springer Science & Business Media.
Cornelius, S., Calder, C., & Mtika, P. (2019). Understanding learner engagement on a blended course including a MOOC. Research in Learning Technology, 27, 1-14.
Côté, S., & Gaffney, C. (2021). The effect of synchronous computer-mediated communication on beginner L2 learners' foreign language anxiety and participation. The Language Learning Journal, 49(1), 105-116.
Derakhshan, A., Khalili, A. N., & Beheshti, F. (2016). Developing EFL learner's speaking ability, accuracy and fluency. English Language and Literature Studies, 6(2), 177-186.
Ebadi, S., & Asakereh, A. (2018). Using voice thread to enhance speaking: A case study of Iranian EFL learners. Journal on English Language Teaching, 8(3), 29-42.
Ehsanifard, E., Ghapanchi, Z., & Afsharrad, M. (2020). The impact of blended learning on speaking ability and engagement. Journal of Asia TEFL, 17(1), 253.
Ellis, R. (2015). Understanding second language acquisition. Oxford University Press.
Ellis, R. (2019). Towards a modular language curriculum for using tasks. Language Teaching Research, 23(4), 454-475.
Ellis, R., & Yuan, F. (2004). The effects of planning on fluency, complexity, and accuracy in second language narrative writing. Studies in Second Language Acquisition, 26(1), 59-84.
Embi, M. A. (2011). Web 2.0 social networking tools: A quick guide. Pusat Pembangunan Akademik, Universiti Kebangsaan Malaysia.
Farangi, M. R., Nejadghanbar, H., Askary, F., & Ghorbani, A. (2015). The effects of podcasting on EFL upper-intermediate learners' speaking skills. CALL-EJ Online, 16(2), 16-39.
Fredricks, J. A., Blumenfeld, P. C., & Paris, A. H. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research, 74(1), 59-109.
Guillén, G. (2015). Awareness and corrective feedback in social CALL, tandems, and e-tandems. IALLT Journal of Language Learning Technologies, 44(2), 1-42.
Guillén, G., & Blake, R. (2017). Can you repeat, please? L2 complexity, awareness, and fluency development in the hybrid "classroom". Online Language Teaching Research: Pedagogical, Academic and Institutional Issues, 7(1), 55-77.
Hajimaghsoodi, A., & Maftoon, P. (2020). The effect of activity theory-based computer-assisted language learning on EFL learners' writing achievement. Language Teaching Research Quarterly, 16, 1-21.
Han, Y., & Hyland, F. (2015). Exploring learner engagement with written corrective feedback in a Chinese tertiary EFL classroom. Journal of Second Language Writing, 30, 31-44.
Hoomanfard, M. H., & Rahimi, M. (2020). A comparative study of the efficacy of teacher and peer online written corrective feedback on EFL learners' writing ability. Zabanpazhuhi (Journal of Language Research), 11(33), 327-352.
Hsu, L. (2016). An empirical examination of EFL learners' perceptual learning styles and acceptance of ASR-based computer-assisted pronunciation training. Computer Assisted Language Learning, 29(5), 881-900.
Kim, H. Y. (2020). Multimodal input during technology-assisted teacher instruction and English learner's learning experience. Innovation in Language Learning and Teaching, 1-13.
Lantolf, J. P., & Poehner, M. E. (2011). Dynamic assessment in the classroom: Vygotskian praxis for second language development. Language Teaching Research, 15(1), 11-33.
Lee, I. (2017). Classroom writing assessment and feedback in L2 school contexts. Springer.
Lee, J. S., & Drajati, N. A. (2019). Affective variables and informal digital learning of English: Keys to willingness to communicate in a second language. Australasian Journal of Educational Technology, 35(5), 168-182.
Lin, Y. L. (2020). A helping hand for thinking and speaking: Effects of gesturing and task planning on second language narrative discourse. System, 91, 102243.
Litman, D., Strik, H., & Lim, G. S. (2018). Speech technologies and the assessment of second language speaking: Approaches, challenges, and opportunities. Language Assessment Quarterly, 15(3), 294-309.
Liu, J., Shindo, H., & Matsumoto, Y. (2019). Development of a computer-assisted Japanese functional expression learning system for Chinese-speaking learners. Educational Technology Research and Development, 67(5), 1307-1331.
Mahmoudikia, M., Hoomanfard, M. H., & Izadpanah, M. A. (2014). Teacher factors affecting ICT use in Iranian classes: A literature review. International Journal of Language Learning and Applied Linguistics World, 6(1), 203-214.
McGuinness, C., & Fulton, C. (2019). Digital literacy in higher education: A case study of student engagement with e-tutorials using blended learning. Journal of Information Technology Education: Innovations in Practice, 18(1), 1-28.
Mohammadi, Z. (2017). Task engagement: A potential criterion for quality assessment of language learning tasks. Asian-Pacific Journal of Second and Foreign Language Education, 2(3), 1-25.
Mokhtar, F. A., & Dzakiria, H. (2015). Illuminating the potential of Edmodo as an interactive virtual learning platform for English language learning and teaching. Malaysian Journal of Distance Education, 17(1), 83-98.
Natriello, G. (1984). Problems in the evaluation of students and student disengagement from secondary schools. Journal of Research and Development in Education, 17(4), 14-24.
Parmaxi, A., & Demetriou, A. A. (2020). Augmented reality in language learning: A state-of-the-art review of 2014-2019. Journal of Computer Assisted Learning, 36(6), 861-875.
Payne, S., & Whitney, P. J. (2002). Developing L2 oral proficiency through synchronous CMC: Output, working memory, and interlanguage development. CALICO Journal, 20, 7-32.
Philp, J., & Duchesne, S. (2016). Exploring engagement in tasks in the language classroom. Annual Review of Applied Linguistics, 36, 50-72.
Razmjoo, S. A., & Hoomanfard, M. H. (2012). On the effect of cooperative writing on students' writing ability, WTC, self-efficacy, and apprehension. World Journal of English Language, 2(2), 19-28.
Reeve, J. (2012). A self-determination theory perspective on student engagement. In Handbook of research on student engagement (pp. 149-172). Springer.
Scott, V. M., & Fuente, M. J. D. L. (2008). What's the problem? L2 learners' use of the L1 during consciousness-raising, form-focused tasks. The Modern Language Journal, 92(1), 100-113.
Skehan, P. (2003). Focus on form, tasks, and technology. Computer Assisted Language Learning, 16(5), 391-411.
Skehan, P. (2009). Modelling second language performance: Integrating complexity, accuracy, fluency, and lexis. Applied Linguistics, 30(4), 510-532.
Storch, N. (2013). Collaborative writing in L2 classrooms. Multilingual Matters.
Swain, M. (1988). Manipulating and complementing content teaching to maximize second language learning. TESL Canada Journal, 68-83.
Swain, M. (2013). The inseparability of cognition and emotion in second language learning. Language Teaching, 46(2), 195-207.
Swain, M., & Lapkin, S. (2000). Task-based second language learning: The uses of the first language. Language Teaching Research, 4(3), 251-274.
Tsai, P. H. (2019). Beyond self-directed computer-assisted pronunciation learning: A qualitative investigation of a collaborative approach. Computer Assisted Language Learning, 32(7), 713-744.
Warschauer, M., & Healey, D. (1998). Computers and language learning: An overview. Language Teaching, 31(2), 57-71.
Warschauer, M., & Ware, P. (2006). Automated writing evaluation: Defining the classroom research agenda. Language Teaching Research, 10(2), 157-180.
Wolfe-Quintero, K., Inagaki, S., & Kim, H. Y. (1998). Second language development in writing: Measures of fluency, accuracy, & complexity (No. 17). University of Hawaii Press.
Woolfolk, A. (2016). Educational psychology. Pearson.
Yuan, F., & Ellis, R. (2003). The effects of pre-task planning and on-line planning on fluency, complexity and accuracy in L2 monologic oral production. Applied Linguistics, 24(1), 1-27.
Zeinali Nejad, M., Golshan, M., & Naeimi, A. (2021). Pronunciation achievement in computer-mediated communication (CMC) classrooms. International Journal of Foreign Language Teaching and Research, 9(38), 205-214.
Zhang, R., & Zou, D. (2020). Types, purposes, and effectiveness of state-of-the-art technologies for second and foreign language learning. Computer Assisted Language Learning, 1-47.
Zou, B., & Thomas, M. (Eds.). (2019). Recent developments in technology-enhanced and computer-assisted language learning. IGI Global.

 

 

[1] Ph.D. candidate in TEFL, m.ramak@iauardabil.ac.ir; Department of English, Ardabil Branch, Islamic Azad University, Ardabil, Iran.

[2] Assistant Professor in TEFL (Corresponding Author), siahpoosh_h@iauardabil.ac.ir; Department of English, Ardabil Branch, Islamic Azad University, Ardabil, Iran.

[3] Assistant Professor in TEFL, m.davaribina@iauardabil.ac.ir; Department of English, Ardabil Branch, Islamic Azad University, Ardabil, Iran.
