The Potentiality of Dynamic Assessment in Massive Open Online Courses (MOOCs): The Case of Listening Comprehension MOOCs

[1]Farrokhlagha Heidari*

[2]Mehri Izadi

IJEAP- 1911-1455

Received: 2020-01-25                          Accepted: 2020-03-31                      Published: 2020-04-18

Abstract

Massive Open Online Courses (MOOCs), as a groundbreaking educational development, set the scene for achieving social inclusion and the dissemination of knowledge. However, facilitating networked learning experiences by creating an adaptive learning environment can pave the way for this open and energetic way of learning. The present study aimed to explore the possible role of Dynamic Assessment (DA) as an evaluation tool for providing adaptive learning for the diverse population of online platforms. A total of 453 Second Language (L2) learners participated in the course. Similar to typical MOOCs, learners watched lecture videos, answered quizzes, posted responses to forums, and communicated with others. Dissimilar to typical MOOCs, the quizzes were prepared and assessed based on DA tenets, and the learners were required to take a listening comprehension test at the end of each module. Quantitative analyses were conducted to determine each individual's Zone of Actual Development (ZAD), Zone of Proximal Development (ZPD), and Learning Potential Score (LPS). The findings revealed that DA was promising for giving insight into the development of learners' listening comprehension and the assessment of listening ability in a MOOC. The results pinpointed that scores on what a learner could do alone and without assistance, the ZAD scores, could not provide a clear picture of the learner's listening ability, while mediation helped to reveal his/her potential abilities (ZPD) and diagnose his/her difficulties. It was also shown that DA brought to light the areas in which the learner needed more support and instruction, and consequently provided profound feedback and evaluation on the learner's learning and performance, elements missing in MOOCs.

Keywords: Dynamic Assessment, Massive Open Online Courses, Listening Comprehension, MOOC

1. Introduction

Over the past decades, the rapid development of Information and Communication Technology (ICT) has altered the way education is delivered. Online learning platforms “deliver information in a way that removes time, place, and situational barriers” (Beach, 2017, p. 61). It seems that the number of students who take online classes now outruns that of those who participate in traditional face-to-face classes. Given the limitless sources of information and the wide array of instructional tools available on the internet, e-learning has become an attractive alternative for facilitating learning. Broadbent and Poon (2015, p. 2) state that online learning platforms offer a new range of educational opportunities, including: a) flexibility and accessibility for students, b) additional access to learning resources, and c) synchronous and asynchronous learning.

According to Asiry (2017) and Ahangar and Izadi (2015), e-learning provides more chances to learn and collaborate, and offers varied multimedia that harmonize with different learning styles. It also paves the way for a switch from passive, teacher-centered learning to active, learner-centered learning (Asiry, 2017). Thus, learners can organize their own learning and have access to nearly limitless information to create and sustain personal aims and initiatives. Online learning also has the potential to lead learners to a self-directed learning experience in which “learning occurs with a sense of autonomy, motivation, and learner control” (Beach, 2017, p. 61).

One recent popular trend in online learning is MOOCs. They provide asynchronous, open-access, web-based online courses which teachers and learners can join through free enrollment. According to Daradoumis, Bassi, Xhafa, and Caballé (2013), MOOCs present “a continuation of the trend in innovation, experimentation and the use of technology initiated by distance and online learning to provide learning opportunities for large numbers of learners” (p. 208). The promise of increased access and exponentially expanded education has driven the effective creation, administration, and use of learning materials in MOOCs. MOOCs provide high-quality, educationally oriented video content and learning tasks for individual or group work, and thousands or even hundreds of thousands of participants can enroll in a course platform. The power of a MOOC is thus defined in terms of the active engagement of self-organizing learners. A typical MOOC lasts 4-10 weeks with 2-6 hours of classes per week. Materials are presented based on pedagogical principles of social learning and/or video-lecture content. Participation and completion of the course are recognized through certificates of completion, online badges, or college credits.

The last decade has witnessed an urge and, accordingly, a surge of interest in the diagnostic type of language assessment (Alderson, Brunfaut, & Harding, 2014; Harding, Alderson, & Brunfaut, 2015). Aligning with Alderson et al.'s (2014) theory of diagnostic assessment, Dynamic Assessment (DA), with its great reliance on mediation throughout the testing procedure, makes it possible to diagnose the main sources of problems, shed light on the process of learning, provide purposeful information, and observe language development. DA is a present-to-future model of assessment which provides learners with graduated help, called mediation, to reveal the outer limits of their potential performance. DA is rooted in Vygotsky’s concept of the Zone of Proximal/Potential Development (ZPD), which he applied to contrast static with dynamic approaches to assessment. With its reliance on ZPD and mediation, DA brings language teaching and language testing closer together. In fact, DA is characterized by certain features making it not only a past-oriented instructional enterprise where what one has learnt so far counts, but also a future-oriented initiative where one’s potential for further learning is explored and uncovered. Static assessment, however, concentrates on evaluating learners’ current knowledge and skills and does not provide important information about the learning process. As opposed to standard tests, DA focuses on the amount and types of support the examinee needs rather than on whether s/he succeeds or fails to complete the task. While MOOCs permit openness and scalability in a most energetic way, the question of appropriate feedback-providing tools and procedures highlights issues relevant to the assessment of learners.

2. Review of the Literature  

MOOCs, as a recent development in distance education, have been around since 2006 and emerged as a popular learning mode in 2012. Littlejohn, Hood, Milligan, and Mustain (2016) state that the open nature of MOOCs fosters access and successful learning outcomes, and their high-quality content means they can be reused for other purposes. MOOCs encourage participants to interact with the wider public across multiple countries and institutions. This can expose participants to different points of view and is an excellent way of sharing the best and most fruitful practices. MOOCs also allow learners to self-manage and work on their own (Daradoumis et al., 2013). Thus, more advanced learners can push through the course activities quickly, while learners who are struggling can take longer. Furthermore, learners can customize the content to establish individual goals and a personal trajectory. MOOCs help learning by being “self-directed, meaning you follow the course materials, complete the readings and assessments, and get help from large community of fellow learners through online forums” (Gulati, 2013, p. 38).

While MOOCs promote social inclusion, build on the active engagement of large numbers of learners who self-organize their learning according to their own specific goals, and provide the facilitation of an acknowledged expert, the automated nature of scoring requires educators to seek alternative procedures for providing detailed feedback to students if the full potential of this online innovation is to be realized. There is a growing concern that MOOCs do not provide a profound evaluation of learners’ learning and performance, and this may prevent MOOCs from standing as a complete, stand-alone learning experience. At present, the popular forms of assessment in MOOCs are “computer-scored multiple choice questions, formulaic problems with correct answers, logical proofs, computer codes, and matching items, often with targeted feedback based on the responses given” (Reilly, Stafford, Williams, & Corliss, 2014, p. 84). At the end of each instructional module, students may be asked to complete online quizzes with automated scoring that tells them how well they have done according to their grades. Some instructors also employ graded quizzes, homework, problem sets, and multiple-attempt quizzes with the special aim of counting the highest grade (Glance, Forsey, & Riley, 2013). As Reilly et al. (2014) discuss, the scores reflect students’ mastery over the material as feedback and merely tell those who do not get good marks to restudy the previous module for a better grade. However, such scoring is clearly inappropriate for evaluating course content in which learners are expected to be self-motivated and proactive and to play a vital role in establishing a learning community for developing and generating knowledge.

The downsides of automated scoring, along with the difficulty of assessment in certain disciplines such as essay writing, pave the way for alternative assessment procedures in MOOCs, such as self- or peer-assessment with or without a rubric. These alternative procedures may remove the limitations of automated scoring by being applicable to all contents and assignments. As Suen (2014) discusses, this assessment procedure “allows a MOOC to be a complete stand-alone educational tool without reducing the role of the MOOC to that of a multimedia interactive textbook” (p. 317). However, opponents argue that, due to the scale of MOOCs, it is almost impossible for instructors to mediate, supervise, or guide students. Moreover, because participants are international, peer-assessment is influenced by large variation in students’ First Language (L1), cultural beliefs, and points of view. This problem is exacerbated by the lack of teacher supervision over the process, leaving students little sense of responsibility or incentive to take the self-/peer-assessment process seriously. Jordan (2013) and Suen (2014) state that course platforms which employ self-/peer-assessment appear to have lower completion rates.

A closer review of the role of assessment in educational settings reveals two contrasting views, namely assessment of learning and assessment for learning. The first aims to provide information on learners’ existing levels of achievement, while the second aims to specify what needs to be done next to move learning forward. However, there is an inherent problem with both perspectives: teaching and testing are treated as two separate purposes of education. In response to this problem, recent innovations have tried to integrate instruction and assessment, namely assessment over learning. This has led to the introduction of Dynamic Assessment.

Dynamic Assessment has its theoretical underpinnings in Vygotsky’s writing on the Zone of Proximal/Potential Development. Vygotsky (1998, p. 201) questions “the prevalent view on independent problem solving as the mere valid indication of one’s mental functioning”. By depicting what an individual will be able to do in the future, the ZPD provides an insight into the person’s future development. Fine-tuned assistance lies at the heart of the ZPD concept, which aims at helping the individual transform his/her ZPD into a Zone of Actual Development (ZAD), i.e., the individual moves from other-regulation to self-regulation (Aljaafreh & Lantolf, 1994). The amount and type of assistance the individual may require to perform future tasks is determined by the difference between these two levels. Thus, in order to obtain a clear measure of learners' ZPD, according to Poehner (2005), instructors must first observe what learners can perform independently (i.e., the actual level) and then compare it with what learners are capable of doing through mediation (i.e., the proximal level). As Barkhuizen and Ellis (2005) state, ZPD is not something which exists in the individual himself; it emerges through collaboration with more capable peers. As an important element of DA, mediation is defined by Lantolf (2000) as a systematic way of assisting an individual to complete a task which lies within his/her ZPD but which s/he is not able to perform alone. Trying to integrate assessment and instruction in a dialectical way, DA has gained substantial interest among teachers in ESL/EFL writing classrooms, helping individuals become more efficient in their learning. Lantolf and Poehner (2008) advocate DA as indicating the learner’s current ability while simultaneously promoting development via specific mediations or hints that assist him/her in overcoming learning impediments. Interestingly, unlike static assessment, in which learners’ correct responses are indicative of their current ability, DA focuses on learners’ errors and problems in terms of the individual’s ongoing development, resorting to ZPD-sensitive feedback to promote learning. As for the definition of DA, Poehner (2007) comments that “in DA, the traditional goal of producing generalizations from a snapshot of performance is replaced by ongoing intervention in development” (p. 323). Lantolf and Poehner (2004) sum up:

Dynamic assessment integrates assessment and instruction into seamless, unified activity aimed at promoting learner development through appropriate forms of mediation that are sensitive to the individual’s current abilities. In essence, DA is a procedure for simultaneously assessing and promoting development that takes account of the individual’s zone of proximal development. (p. 50)

Accordingly, DA embraces evaluating rather than scoring, with great emphasis placed on providing support to examinees. In this way, DA shifts the focus from whether learners succeed or not to the amount and types of support they need. Therefore, according to Poehner (2007), the unique features attributed to DA include the mediational role of the instructor, the integration of assessment and instruction, and the treatment of learning as a continuing process rather than simply a product of behavior. Based on these features, researchers (Haywood & Lidz, 2007; Lantolf, 2009; Poehner, 2005, 2007) have concluded that DA is a unique and rich approach to assessment.

The use of DA in online learning has revealed promise in addressing a number of concerns raised with traditional testing. For example, a growing number of studies have shown that the integration of DA into e-learning can promote learners' regulation, reading, and writing skills (Birjandi & Ebadi, 2012; Shabani, 2012; Shrestha & Coffin, 2012; Wang, 2010). Birjandi and Ebadi (2012) investigated the effect of dynamic assessment through Computer Mediated Communication (CMC) on L2 learners' socio-cognitive development. They concluded that CMC could provide clearer insights into the participants' level of regulation and their potential for future socio-cognitive development. In a study of Iranian L2 learners, Shabani (2012) investigated the effect of Computerized Dynamic Assessment (C-DA) in promoting learners' reading skills. The study required learners to read passages accompanied by modified versions presented with highlighting and visual aids. The texts were presented to the learners through computerized software. For any incorrect answer, the C-DA automatically displayed pre-prepared prompts. The hints moved from textual prompts towards visual assistance. Learners’ ZPD scores were calculated by counting the number of hints shown to the students before they answered correctly. According to the study, C-DA mediation significantly enhanced learners’ reading comprehension ability, raised their awareness by directing their attention to the important parts of the passage, and assisted them in comprehending the reading texts better.

Shrestha and Coffin (2012) explored the effect of dynamic tutor mediation on learners’ academic writing ability. Two business students received text-based interaction in line with the DA approach, primarily through e-mails. The mediator asked the students to write about business-related issues and delivered formative feedback, in the form of text mediation, on each assignment. The tutor, who was their instructor, provided implicit and explicit comments in the form of Wiki posts or word-document annotations. Next, the learners were required to write a new draft with respect to the comments they received. Comparison of pre- and post-assessments of writing ability revealed that the DA intervention assisted both teachers and learners in finding and responding to the areas where students needed most support. Wang (2010) compared the effect of a Web-based dynamic assessment system (GPAM-WATA) with a normal Web-based test (N-WBT). If learners failed to answer an item (call it item x) correctly the first time, GPAM-WATA offered a general prompt and delayed re-answering of that item. Participants continued responding to the rest of the items and then returned to item x at a random point. If learners answered incorrectly a second time, GPAM-WATA provided a more specific hint. The process terminated, and item x was excluded from the test, either when the learners still failed to provide a correct answer after a maximum of three prompts or when they answered item x correctly. After learners had answered all items, GPAM-WATA provided information about the items they did not answer correctly. The findings illustrated that the GPAM-WATA group experienced more effective e-learning than the N-WBT group. Wang argued that GPAM-WATA juxtaposed mediation and graded hints so that learners could probe and utilize some essential principles and accordingly solve problems independently and learn more. This, in turn, creates "an assessment-centered e-learning environment that treats assessment as a teaching and learning strategy" (Wang, 2010, p. 1165).
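Wang's description of this flow is essentially algorithmic. The listing below is a schematic Python reconstruction of the prompt-and-delay logic as summarized above, not GPAM-WATA's actual implementation; ask and give_prompt are hypothetical callbacks standing in for the system's presentation layer.

import random

MAX_PROMPTS = 3

def run_session(items, ask, give_prompt):
    """Schematic sketch of the prompt-and-delay flow described above.

    ask(item):            presents an item, returns True if answered correctly
    give_prompt(item, n): shows the n-th prompt; prompts grow more specific
    """
    queue = [(item, 0) for item in items]        # (item, prompts shown so far)
    missed = []                                  # items excluded from the test
    while queue:
        item, shown = queue.pop(0)
        if ask(item):
            continue                             # correct: the item is finished
        if shown == MAX_PROMPTS:
            missed.append(item)                  # still wrong after three prompts
            continue
        give_prompt(item, shown)                 # offer the next, more specific prompt
        # Delayed re-answering: the item is re-inserted at a random later
        # position so the learner first works on other items.
        pos = random.randint(1, len(queue)) if queue else 0
        queue.insert(pos, (item, shown + 1))
    return missed                                # reported to the learner at the end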

However, the role of DA in promoting listening comprehension has received scant attention. In 2010, Ableeva explored the effect of listening DA on intermediate learners of French. A pre-test/enrichment-program/post-test design was adopted, and learners’ performances were compared across DA, Transfer, and Non-Dynamic Assessment (NDA) sessions. In the DA and Transfer sessions, learners were required to listen to a text twice, try to understand it, and then recall it orally. Wherever they faced a problem, the mediator provided hints in the form of dialogic interactions. Learners in the NDA condition did not receive any form of treatment. The results showed that the mediation gave rise to the diagnosis and assessment of the potential level of learners’ listening development and, simultaneously, the promotion of that development. In another study, Shabani (2014) followed interactionist Group Dynamic Assessment (G-DA) and the Mediated Learning Experience (MLE) concept. Learners were required to provide the content of heard segments or the meaning of selected words and phrases. In the NDA group there was no intervention, but the DA group engaged in one-to-one and one-to-group negotiations and dialogues. The analysis revealed that “NDA procedure stops short of fully capturing the learners’ underlying potential and leaves aside the abilities which are in the state of ripening. It was shown that the learners’ ability to recognize an unrecognized word of the pretest transcended beyond the posttest task to the TR session, an improvement signaling their progressive trajectories towards higher levels of ZPD” (p. 1729). In a different study, Hidri (2014) adopted a three-phase testing design: pre-testing, which included wh-, guessing, and matching items; while-testing, which included two wh- and two summarizing items, along with Multiple Choice (MC), true/false, and guessing items; and post-testing, which included MC, picture-reordering, summarizing, and inference items. Mediation and meaning negotiation were provided to the learners in the DA group. The study found that the DA phases provided better insights into examinees’ cognitive and meta-cognitive processes than static assessment.

With respect to the computerized format of DA, Mashhadi Heidar and Afghari (2015) investigated EFL elementary learners’ listening proficiency development through electronic dynamic assessment via Skype, and the effects of this type of assessment on learners’ autonomy. Their study showed that the specific areas where learners needed improvement could be revealed through online DA. They also found that by implementing DA via Skype, both the actual and potential levels of learners’ listening ability could be revealed, and that autonomous and non-autonomous learners benefited similarly from DA via Skype. While this study is a good example of research on the role of electronic DA in listening comprehension, certain factors made it feasible and practical to conduct. It focused on a small sample (n=60), so it was possible to provide individualized mediation to learners. With a larger sample, as is the situation with MOOCs, providing individualized support to each learner may well be impossible. Poehner and Lantolf (2013) and Poehner, Zhang, and Lu (2015) developed online multiple-choice tests of reading and listening comprehension in three languages: Chinese, Russian, and French. If students were not able to answer an item correctly, four prompts were presented, arranged from implicit (listen/read again) to explicit support (providing the correct answer along with an explanation). Computerized DA diagnosed individuals’ independent and mediated performances and tracked their improvement and development through learning potential and transfer scores. The results indicated that C-DA was able to provide an elegant diagnosis of learners’ listening and reading development which was informative for future teaching and learning. Although Poehner and his colleagues successfully examined the effect of computerized DA on listening/reading comprehension, their focus was mainly on the computerized format of DA and its advantages and disadvantages. Izadi (2018) developed a two-phase study to address the shortcomings of the previous studies. The study aimed to develop an Intelligent Dynamic Assessment (I-DA) tool that adapts the difficulty level of the test to test takers’ ability and adjusts mediation to the item construct and item mode. I-DA addresses three issues which were absent from (non)computerized DA, namely ability level (e.g., high- and low-proficiency learners), item construct (e.g., phonetics), and item mode (e.g., comprehension and production). In Phase One of the study, the forms of mediational strategies that best nurtured the development of listening skill were detected through an interventionist approach to DA. Based on these results, I-DA was designed, and the study explored whether I-DA enhanced learners' listening comprehension skill. Moreover, learners' learning potential, the degree of internalization of mediation, and the areas learners had problems with in the L2 listening skill were pursued. The results revealed that I-DA was capable of enhancing listening ability, tailoring the test to learners' proficiency level, and adapting hints to learners’ needs.

Despite the largely confirmed effectiveness of DA with manageable numbers of learners in e-learning and assessment, the lack of research on the applicability of DA as an evaluation tool in massive open online courses, with their unlimited participation and open access via the web, is quite evident. A prominent feature of MOOCs is the unlimited participation of learners due to the open-access nature of these courses, which leads to minimal direct interaction between the instructor and learners. Moreover, it is difficult to keep track of learners’ assignments and involvement and to provide scores or evaluations. In this regard, the present study aimed to explore the possible role of DA in assessing learners' listening and giving them feedback on their listening comprehension ability in a listening comprehension MOOC.

3. Methodology

3.1. Participants

The study drew its sample from a listening comprehension MOOC offered by Amin University, Isfahan, Iran, with an enrollment of 453 students. Participants who completed all four tests were included in the statistical analyses; those who dropped out of the course (n=101) or completed only one, two, or three tests (n=140) were excluded. The study therefore continued with 212 participants (167 males, 45 females) aged 18-29, with a mean age of 21.01 years. The participants were majoring in Translation Studies, Electronics, Civil Engineering, Computer Science, Architecture, Accountancy, Psychology, Business, and Industrial and Insurance Management. For all participants, Persian was the first language and English a second language. It should be mentioned that convenience sampling was applied, allowing the researchers to collect data from population members who were available to participate in the study.

3.2. Instrumentation

3.2.1. Test Preparation 

The study adopted Poehner's (2005) and Ableeva’s (2010) DA-based investigations of L2 learners of French as a model. First, 51 listening items were extracted from the book Real Listening and Speaking 3 (Craven, Thaine, & Logan, 2008). The multiple-choice items tested the participants’ ability to listen for key points and detailed information, and to make inferences about the speaker’s opinion. To better serve the purpose of a DA tool and to reveal learners’ ZPD, one additional distractor was added to each item (i.e., five choices per item in total), giving learners the opportunity to reattempt an item under mediation. Adding a distractor changed the item characteristics from those of the original test, so the test was piloted to re-specify the item characteristics after the changes were made. The test was piloted with 24 university English learners from the same university, selected as a representative sample of the target group. The final test contained 20 listening items, and Cronbach’s alpha revealed a high reliability of .83.
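For readers who wish to reproduce such a reliability estimate, Cronbach's alpha can be computed from an examinees-by-items score matrix as in the sketch below; the pilot matrix shown is randomly generated for illustration, not the study's actual responses.

import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_examinees x n_items) score matrix."""
    k = scores.shape[1]                          # number of items (20 here)
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of examinees' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot matrix: 24 examinees x 20 dichotomously scored items.
rng = np.random.default_rng(0)
pilot = (rng.random((24, 20)) < 0.6).astype(int)
print(round(cronbach_alpha(pilot), 2))           # the study reports .83 on real data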

Upon item preparation, the researchers ran one-on-one tutoring classes to prepare a menu of hints (mediation) for each listening item. To prepare appropriate hints, the researchers held classes with 10 individuals. The DA of the learners in this phase took the form of interactionist DA, in which the mediation was not designed a priori and the feedback was neither excessively implicit nor excessively explicit. Learners listened to the audio and answered the item(s) individually; wherever they failed to answer correctly, the mediator intervened and provided mediation. Although the content of the hints differed across items, there was a fixed pattern of moving from most implicit to most explicit across all individuals. The classes were video- and tape-recorded for qualitative analysis of the mediator-learner interaction, in order to derive and code the type and frequency of the mediating moves and the learners’ responses to them. These moves were explored and analyzed to produce a menu of prompts for each individual item.

Accordingly, standardized menus of mediating moves were prepared, in the form of interventionist DA, for each individual test item: four hints arranged from the most implicit to the most explicit. After the test was equipped with the mediating moves, it was piloted again with 20 EFL university learners representative of the target group to study the effectiveness of the prompts. Following that, the prompts were reanalyzed and some modifications were made to render them more comprehensible and, therefore, better attuned to all students of this MOOC.

3.2.2. Test Analysis

Three types of performance were examined for each learner: independent (unmediated) performance, dependent (mediated) performance, and the learning potential score. The independent performance was reported as the unmediated score: if a learner's response was correct, s/he received the maximum points (4); if not, the minimum (0). The dependent performance was reported as the mediated score, which was weighted: learners who responded correctly scored 4, but the score was reduced as more explicit hints were presented. Therefore, for any given item, an individual’s unmediated score was either 0 or 4, while his/her mediated score ranged from 0 to 4 depending on the amount of mediation (if any) provided. The Learning Potential Score (LPS) was introduced by Kozulin and Garb (2002) to gauge how much progress an individual makes when mediated. As Poehner et al. (2015) explain, “a simple gain score, such as Budoff had proposed, does not adequately capture how learner scores changed, relative to the maximum possible score on the test, when mediation was introduced to the procedure” (p. 10). LPS is calculated as follows:

LPS = (2 × mediated score − actual score) / maximum score
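A minimal sketch of this scoring scheme follows. The one-point deduction per hint is our assumption, consistent with the 0-4 weighting and the four-hint menus described here (the fourth hint reveals the answer); function names are illustrative.

MAX_PER_ITEM = 4   # full credit for an item
N_HINTS = 4        # hints run from most implicit (1st) to most explicit (4th)

def item_scores(correct_unaided, hints_used):
    """Return (unmediated, mediated) scores for a single item.

    Assumption: one point is deducted per hint shown, so a learner who
    needs the fourth, answer-revealing hint scores 0 on that item.
    """
    unmediated = MAX_PER_ITEM if correct_unaided else 0   # always 0 or 4
    mediated = max(0, MAX_PER_ITEM - hints_used)          # 4, 3, 2, 1, or 0
    return unmediated, mediated

def learning_potential_score(actual, mediated, max_score):
    """LPS = (2 * mediated - actual) / max score (Kozulin & Garb, 2002)."""
    return (2 * mediated - actual) / max_score

# Worked example on a 20-item test (maximum score 20 * 4 = 80):
# actual = 12, mediated = 41  ->  LPS = (2 * 41 - 12) / 80 = 0.875
print(learning_potential_score(12, 41, 80))   # 0.875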

3.3. Design and Procedure

This study was quantitative, employing a one-group treatment design. A microgenetic method was applied to concentrate on development rather than one-off experimentation designed to demonstrate cause (Ableeva, 2010). According to Ableeva (2010), the microgenetic method primarily concerns the reorganization and development of mediation over a relatively short span of time. The method also adheres to the principles of active formation and recreation of the very processes of development and seeks ways of influencing developmental processes. As mentioned above, an interactionist DA was employed for designing the tests, providing students with mediation for each item they found difficult in the form of hints sequenced from the most implicit to the most explicit, without concern for any predetermined endpoint (Shabani, 2012). However, a test-train-test design, or interventionist DA, was employed as a feedback-providing tool on learners' test performance, quantifying the amount of support needed to reach a predetermined endpoint (Lantolf & Poehner, 2004).

The learners went through four weeks of the course, with two hours of classes per week. Similar to typical MOOCs, learners watched lecture videos, answered quizzes, posted responses to forums, and communicated with others. Dissimilar to typical MOOCs, the quizzes were prepared and evaluated based on DA principles, and learners were required to complete one listening comprehension test at the end of each lesson. Classes were video-recorded and the materials were uploaded to the platform. The first lesson introduced the course and presented information on the nature of the listening skill, what type of listener one is, and what it means to listen well. The second lesson discussed listening models along with listening skills and strategies. More specifically, the third lesson focused on listening to lectures, helping learners get the most out of a lecture, as well as on note-taking. Finally, lesson four looked at texts/speech on unfamiliar topics and the types of activities that can help improve one's listening skill.

Having gone through the day’s lesson, learners were asked to complete the listening test of the day. Each listening item was accompanied by a menu of mediating hints. While the precise content of the hints differed across items, they all followed the same pattern, moving from the most implicit to the most explicit hint. If a learner’s response was correct, s/he moved on to the next listening item. If a learner’s response was not correct, s/he was provided with the most implicit mediating prompt and allowed to reattempt the item, receiving progressively more explicit hints after further incorrect attempts. Upon test completion, three scores were shown to the learners: the independent score, the dependent score along with the number of hints presented, and the Learning Potential Score (LPS). A sample item along with its hints is shown in Table 1.

Table 1: Sample Listening Text, Item and Hints

Listening passage:
Well, in order to register we’ve got to go to the Law Faculty and get this card stamped and then go back to the Admin building and pay the union fees. That means we’re registered. After that we have to go to the notice board to find out about lectures and then we have to put our names down for tutorial groups and go to the library to …

Accompanying test item:
What must the students do as part of registration at the university?
a) check the notice board in the Law Faculty
b) find out about lectures
c) organize tutorial groups
d) pay the union fees
e) stamp card in the Admin building

Shown hints:
Hint 1: That’s not the correct answer. Listen again.
Hint 2: That’s still not the correct answer. Did you hear “in order to register we’ve got to go to the Law Faculty and get this card stamped and then go back to the Admin building and pay the union fees. That means we’re registered.”?
Hint 3: Let’s try it one more time. In order to register, the students have to go to the Law Faculty and then the Admin building.
Hint 4: Sorry. The correct answer was ‘d’. Click to view an explanation. The students should get their card stamped in the Law Faculty and pay the union fees in the Admin building. That means they are registered.
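Taken together, the delivery flow described above (and illustrated in Table 1) amounts to the loop sketched below. The Item structure and the two callbacks are hypothetical stand-ins for the MOOC platform's interface; the resulting hint counts feed the scoring scheme given in the Test Analysis section.

from dataclasses import dataclass

@dataclass
class Item:
    audio_url: str
    question: str
    choices: list          # five options, a) through e)
    hints: list            # four hints, most implicit first (see Table 1)

def administer(items, present, answer_is_correct):
    """Sketch of the DA delivery loop; `present` and `answer_is_correct`
    are hypothetical callbacks to the course platform."""
    hints_used = {}
    for item in items:
        used = 0
        present(item, hint=None)                  # play the audio, show the item
        while not answer_is_correct(item):
            if used == len(item.hints):           # the last hint already gave the answer
                break
            present(item, hint=item.hints[used])  # reveal the next, more explicit hint
            used += 1                             # the learner then reattempts the item
        hints_used[item.question] = used
    return hints_used  # fed into the scoring functions of the previous sketch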

 

4. Results

Table 2 presents the descriptive statistics of learners’ unmediated and mediated performances. Learners’ performances before and after receiving hints point to interesting findings regarding the development of listening comprehension.

Table 2: Descriptive Statistics of Actual, Mediated, Gain, and LPS Scores (N = 212)

                          Actual performance   Mediated performance   Gain score     LPS
                          M (SD)               M (SD)                 M (SD)         M (SD)
Listening Comprehension   12.00 (8.03)         41.42 (8.35)           29.42 (5.69)   0.88 (.14)

Comparison of the means revealed that the learners performed better after mediation. The learners’ mean score after mediation (M=41.42, SD=8.35) showed a marked improvement in listening comprehension compared with their actual performance before mediation (M=12.00, SD=8.03). A paired-samples t-test also revealed that this difference between the actual and mediated performances was significant, t (211) = -54.72, p<0.01, d=3.59. This gives insight into the effectiveness of mediation in enhancing learners’ listening comprehension ability and is evidence of learners’ internalization of mediation.

The mean gain scores between the unmediated and mediated performances are also presented in Table 2. The learners showed a mean gain score of 29.42 (SD=5.69). The gain scores capture the change between the unmediated and mediated performances, manifesting improvement in learners’ listening comprehension under mediation. Table 2 also reports learners’ mean LPS, which was in the medium range (M=0.88, SD=0.14). It should be borne in mind that actual scores demonstrate an ability already developed at the time of assessment. They do not reveal learners’ ZPD which, as Vygotsky stressed, is vital for diagnosis and for future learning and teaching. Reporting actual and mediated scores together, on the other hand, gives insight into a learner’s emerging and potential abilities. LPS completes the picture by quantifying the observed change, as a gain score does, but expresses the result relative to the maximum possible score. In this way, a learner with a low actual score is not judged harshly and may still prove to have a high LPS.
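As a rough check on these figures, if the maximum possible score is taken to be 80 (the 20 piloted items at 4 points each, an assumption based on the test described in the Instrumentation section), plugging the group means into the LPS formula gives LPS = (2 × 41.42 − 12.00) / 80 ≈ 0.89, closely matching the reported mean of 0.88; the small discrepancy is expected because the mean of per-learner LPS values need not equal the LPS computed from mean scores.

Table 3 shows the descriptive statistics of learners’ unmediated and mediated performances on each test.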

Table 3: Descriptive Statistics of Actual and Mediated Scores on Each Test

                        Test 1        Test 2        Test 3         Test 4
                        M (SD)        M (SD)        M (SD)         M (SD)
Actual performance      1.46 (4.15)   3.92 (4.15)   7.85 (8.01)    9.42 (7.26)
Mediated performance    5.42 (2.98)   8.78 (2.82)   11.07 (2.53)   15.14 (4.66)

The mean scores of learners after mediation reveal a marked improvement in learners’ listening comprehension ability compared with their performance before mediation. For example, on Test 2 the mean of learners’ mediated performance (M=8.78, SD=2.82) was higher than that of their actual performance (M=3.92, SD=4.15). Paired-samples t-tests also revealed that the difference between learners’ actual and mediated performances was significant for Test 1 (t (211) = -8.60, p<0.01, d=1.09), Test 2 (t (211) = -10.67, p<0.01, d=1.36), Test 3 (t (211) = -4.36, p<0.01, d=0.54), and Test 4 (t (211) = -20.00, p<0.01, d=0.93). Furthermore, to reduce the likelihood of a Type I error, i.e., a spuriously significant difference, a Bonferroni adjustment was applied: the desired alpha level (0.05) was divided by the number of comparisons made (4), so the p-value required for significance was .05/4 = .0125. Since the p-values of all four comparisons were lower than this adjusted alpha level (p < .0125), it can be concluded that the actual and mediated performances differed significantly in all four tests. This is evidence of learners’ improvement as a result of mediation.
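These comparisons can be reproduced with standard tools, as in the sketch below; the score arrays here are random placeholders, not the study's data, and in practice each pair would hold the 212 learners' actual and mediated scores on one test.

import numpy as np
from scipy import stats

ALPHA = 0.05
N_COMPARISONS = 4
BONFERRONI_ALPHA = ALPHA / N_COMPARISONS        # .05 / 4 = .0125

# Placeholder data standing in for the 212 paired scores per test.
rng = np.random.default_rng(1)
tests = {f"Test {i}": (rng.normal(5, 2, 212), rng.normal(10, 3, 212))
         for i in range(1, 5)}

for name, (actual, mediated) in tests.items():
    t, p = stats.ttest_rel(actual, mediated)    # paired-samples t-test
    diff = mediated - actual
    d = diff.mean() / diff.std(ddof=1)          # Cohen's d for paired samples
    print(f"{name}: t({len(diff) - 1})={t:.2f}, p={p:.4f}, d={d:.2f}, "
          f"significant at adjusted level: {p < BONFERRONI_ALPHA}")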

Regarding actual performance, Table 3 also shows that learners’ mean scores increased test by test. For example, learners’ performance on Test 3 was at a higher level (M=7.85, SD=8.01) than on Test 2 (M=3.92, SD=4.15) and at a lower level than on Test 4 (M=9.42, SD=7.26). Similarly, learners’ mediated mean scores increased test by test: their mediated performance on Test 3 was at a higher level (M=11.07, SD=2.53) than on Test 2 (M=8.78, SD=2.82) and at a lower level than on Test 4 (M=15.14, SD=4.66). The results of a two-way ANOVA revealed that the Type of Performance (Actual, Mediated) × Type of Test (Test 1, Test 2, Test 3, Test 4) interaction was significant, F (3, 1688) = 2.67, p < .05. With respect to the actual performance, follow-up t-tests revealed significant differences between the mean scores of Tests 1 and 2 (t (422) = -4.43, p<0.01, d=-0.59), Tests 1 and 3 (t (422) = -7.49, p<0.01, d=-1.00), Tests 1 and 4 (t (422) = -10.07, p<0.01, d=-1.34), Tests 2 and 3 (t (422) = -4.60, p<0.01, d=-0.61), and Tests 2 and 4 (t (422) = -6.95, p<0.01, d=-0.93). However, no significant difference was found between the mean scores of Tests 3 and 4 (t (422) = -1.53, ns). With respect to the mediated performance, follow-up t-tests revealed significant differences between the mean scores of Tests 1 and 2 (t (422) = -8.65, p<0.01, d=-1.15), Tests 1 and 3 (t (422) = -15.26, p<0.01, d=-2.04), Tests 1 and 4 (t (422) = -18.57, p<0.01, d=-2.48), Tests 2 and 3 (t (422) = -6.38, p<0.01, d=-0.85), Tests 2 and 4 (t (422) = -12.34, p<0.01, d=-1.65), and Tests 3 and 4 (t (422) = -8.12, p<0.01, d=-1.08).

5. Discussion and Conclusion

This study sought to investigate the potential of DA in a listening comprehension MOOC. In particular, it pursued whether the use of dynamic assessment benefited the listening skill of EFL learners participating in a listening comprehension MOOC. The results illustrated that DA mediation both developed and evaluated the listening comprehension ability of the learners in this MOOC. The results are in line with the findings of previous studies (e.g., Ableeva, 2010; Poehner et al., 2015; Sarani & Izadi, 2016). According to Ableeva (2010), mediation significantly illuminates the areas learners have difficulty with and accordingly assists them in overcoming their problems. Poehner et al. (2015) reported meaningful differences between learners’ mean independent and dependent listening comprehension scores, indicating that students improved markedly when mediated. Sarani and Izadi (2016) argued that “through DA, the mediator is able to identify the abilities that have already developed, those that are developing and those that are yet to develop. When these are discovered, it is then possible to effectively promote learners’ abilities” (p. 178).

Another interesting finding of this study was the significant change and gradual improvement between learners’ unmediated and mediated performance, indicating the gradual effectiveness of mediation. Similarly, Poehner and Lantolf (2013) and Poehner et al. (2015) reported gradual and significant improvement under mediation based on learners’ gain scores. This result is also in line with Wang (2010), Birjandi and Ebadi (2012), and Shrestha and Coffin (2012), who similarly reported significant changes in learners’ mean scores between independent and mediated performances.

A gradual and significant improvement in learners' actual performance, attributable to the effectiveness of mediation, was also observed in this study. The finding is in line with Mashhadi Heidar and Afghari (2015), who reported that DA via Skype enhanced the listening comprehension abilities of autonomous and non-autonomous EFL learners. Similarly, according to Poehner et al. (2015), data from C-DA provide a fine-grained diagnosis of learners' L2 listening comprehension development. Unique to this approach are the role mediation plays, the integration of assessment and instruction, and the focus on processes rather than products of behavior. The results pinpoint that independent scores could not provide a clear picture of learners’ abilities and could not even indicate learners’ areas of difficulty. This supports Vygotsky’s (1978) claim that it is the size of the individual’s ZPD, not the learner’s actual ability level (ZAD), that indicates his/her learning development. According to the results, irrespective of one’s unmediated performance, an individual can benefit from mediation. Mediation helps to diagnose learners’ listening comprehension difficulties and to promote their development.

Regarding MOOC assessment, the findings of the study are promising in removing the present limitations. Due to the platform nature and vast number of learners in MOOCs, instructor involvement is minimal or limited to critical tasks. As a result, Daradoumis et al. (2013, p. 209) argued, “tutoring is usually and consequently poor, since minimal feedback is received by the participants and peer-based evaluation is valuable but often unprofessional and lacking the necessary expertise, both didactical and on the specific subject”. The integration of DA into MOOCs can thus open a new window to better implementation of online course platforms. On the one hand, similar to current assessment tools in MOOCs, DA evaluates learners’ understanding and progress and indicates learners’ independent (i.e., actual) performance. On the other hand, dissimilar to current assessment tools in MOOCs, DA gives insights into learners’ dependent (i.e., mediated) performance. While learners may demonstrate similar levels of performance based on present MOOC assessment, they have different potential abilities, instructional needs, and supports. The learner’s needs are the source of growth and enhancement, and mediation accelerates the learning process through gentle guidance in the form of hints. As Vygotsky (1998) stated, one’s potential abilities can be uncovered through mediation. By prompting individuals to reexamine their choices, mediation gives learners the chance to overcome the problem, and in this way the learner’s potential future development is brought to light.

What lends even more importance to the integration of DA in MOOCs is the way the mediation was developed and provided. In the current study, the mediation presented to the learners was based on both interactionist and interventionist DA and aimed to develop individual learning plans according to learners’ needs. The individualized sessions held with the learners (see the Test Preparation section) helped the researchers to identify the areas the learners had most problems with and how best to assist them in overcoming difficulties. Based on these mediating moves, the menu of hints was developed and embedded in each individual item. This approach therefore avoids treating learners as a homogeneous mass and allows the instructor to mediate, supervise, and guide students, something that may otherwise seem impossible in a MOOC.

Furthermore, DA helps learners modify their learning; this happens through prompts, hints, and questions that in turn provide insights into the student’s current comprehension of the topic being taught. Concurrently, the student is engaged in constructing his/her individual knowledge in a continuous and active manner. DA can thus promote development in individuals by instructing while assessing. It can be concluded that a) some abilities which play a key role in learning may not be evaluated at all by current tests; b) almost all people perform well below their actual potential; c) instruction joined with assessment in DA sheds much more light on one’s potential as well as one’s performance; and d) reporting newly developing abilities makes much more sense for online learning than reporting old learning (Haywood & Lidz, 2007).

It should be mentioned that the present study is a pioneering exploration of DA in relation to MOOCs and is certainly not without limitations. In this MOOC, the effect of DA was evaluated through multiple-choice questions testing learners' listening comprehension. The researchers see a need to investigate the effectiveness of DA with open-ended and constructed-response test formats in MOOCs. It is also recommended to explore other language skills and sub-skills to see whether similar results would be obtained.

References

Ableeva, R. (2010). Dynamic assessment of listening comprehension in L2 French. (Unpublished Ph.D. Dissertation). The Pennsylvania State University. University Park, PA.

Ahangar, A. A., & Izadi, M. (2015). Online text processing: A study of Iranian EFL learners' vocabulary knowledge. International Review of Research in Open and Distributed Learning, 16(2), 311-326.

Alderson, J. C., Brunfaut, T., & Harding, L. (2014). Towards a theory of diagnosis in second and foreign language assessment: Insights from professional practice across diverse fields. Applied Linguistics, 36(2), 236-260.

Aljaafreh, A., & Lantolf, J. P. (1994). Negative feedback as regulation and second language learning in the zone of proximal development. The Modern Language Journal, 78(4), 465-483.

Asiry, M. A. (2017). Dental students' perceptions of an online learning. The Saudi Dental Journal, 29(4), 167-170.

Barkhuizen, G., & Ellis, R. (2005). Analyzing learner language. Oxford: Oxford University Press.

Beach, P. (2017). Self-directed online learning: A theoretical model for understanding elementary teachers' online learning experiences. Teaching and Teacher Education, 61, 60-72.

Birjandi, P., & Ebadi, S. (2012). Microgenesis in dynamic assessment of L2 learners’ socio-cognitive development via Web 2.0. Procedia-Social and Behavioral Sciences, 32, 34-39.

Broadbent, J., & Poon, W. L. (2015). Self-regulated learning strategies & academic achievement in online higher education learning environments: A systematic review. The Internet and Higher Education, 27, 1-13.

Craven, M., Thaine, C., & Logan, S. (2008). Real Listening and Speaking 3. Cambridge: Cambridge University Press.

Daradoumis, T., Bassi, R., Xhafa, F., & Caballé, S. (2013). A review on massive e-learning (MOOC) design, delivery and assessment. Paper presented at Eighth International Conference on P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC), 10 October.

Glance, D. G., Forsey, M., & Riley, M. (2013). The pedagogical foundations of massive open online courses. First Monday, 18(5&6). Retrieved July 17, 2019 from http://firstmonday.org/index.php/fm/artice/view/4350/3673#author

Gulati, A. (2013). An overview of Massive Open Online Courses (MOOCs): Some reflections. International Journal of Digital Library Services, 3(4), 37-46.

Harding, L., Alderson, J. C., & Brunfaut, T. (2015). Diagnostic assessment of reading and listening in a second or foreign language: Elaborating on diagnostic principles. Language Testing, 32(3), 317-336.

Haywood, H. C., & Lidz, C. S. (2007). Dynamic assessment in practice: Clinical and educational applications. Cambridge: Cambridge University Press.

Hidri, S. (2014). Developing and evaluating a dynamic assessment of listening comprehension in an EFL context. Language Testing in Asia, 4(4), 1-19.

Izadi, M. (2018). Applying Dynamic Assessment to Develop an Intelligent Language Tutor to Diagnose and Assess Iranian EFL Learners' Listening Development. (Unpublished Ph.D. dissertation). Chabahar Maritime University, Chabahar: Iran.

Jordan, K. (2013). MOOC completion rates: The data. Retrieved from http://www.katyjordan.com/MOOCproject

Kozulin, A., & Garb, E. (2002). Dynamic assessment of EFL text comprehension of at-risk students. School Psychology International, 23(1), 112-127.

Lantolf, J. P. (Ed.). (2000). Sociocultural theory and second language learning. Oxford: Oxford University Press.

Lantolf, J.P., & Poehner, M.E. (2008). Dynamic assessment. In E. Shohamy (Ed.), The Encyclopedia of Language and Education (pp. 273-285). Cambridge: Cambridge University Press.

Lantolf, J. P. (2009). Dynamic assessment: The dialectic integration of instruction and assessment. Language Teaching, 42(3), 355-368.

Lantolf, J. P., & Poehner, M. E. (2004). Dynamic assessment of L2 development: Bringing the past into the future. Journal of Applied Linguistics, 1(1), 49-72.

Littlejohn, A., Hood, N., Milligan, C., & Mustain, P. (2016). Learning in MOOCs: Motivations and self-regulated learning in MOOCs. The Internet and Higher Education, 29, 40-48.

Mashhadi Heidar, D., & Afghari, A. (2015). The impact of dynamic assessment via Skype on Iranian autonomous/non-autonomous EFL learners’ listening comprehension ability at elementary level. International Journal of Language Learning Applied Linguistics World, 8(1), 13-25.

Poehner, M. E. (2005). Dynamic assessment of oral proficiency among advanced L2 learners of French (Unpublished doctoral dissertation). The Pennsylvania State University. University Park, PA.

Poehner, M. E. (2007). Beyond the test: L2 dynamic assessment and the transcendence of mediated learning. The Modern Language Journal, 91(3), 323-340.

Poehner, M. E., & Lantolf, J. P. (2013). Bringing the ZPD into the equation: Capturing L2 development during computerized dynamic assessment. Language Teaching Research, 17(3), 323–342.

Poehner, M. E., Zhang, J., & Lu, X. (2015). Computerized dynamic assessment (C-DA): Diagnosing L2 development according to learner responsiveness to mediation. Language Testing, 32(3), 337-357.  

Reilly, E. D., Stafford, R. E., Williams, K. M., & Corliss, S. B. (2014). Evaluating the validity and applicability of automated essay scoring in two massive open online courses. The International Review of Research in Open and Distributed Learning, 15(5), 83-98.

Sarani, A., & Izadi, M. (2016). Diagnosing L2 receptive vocabulary development using dynamic assessment: A micro-genetic study. Journal of Teaching Language Skills, 35(2), 161-189.

Shabani, K. (2012). Dynamic assessment of L2 learners’ reading comprehension processes: A Vygotskian perspective. Procedia-Social and Behavioral Sciences, 32, 321-328.

Shabani, K. (2014). Dynamic assessment of L2 listening comprehension in transcendence tasks. Procedia-Social and Behavioral Sciences, 98, 1729-1737.

Shrestha, P., & Coffin, C. (2012). Dynamic assessment, tutor mediation and academic writing development. Assessing Writing, 17(1), 55-70.

Suen, H. K. (2014). Peer assessment for massive open online courses (MOOCs). The International Review of Research in Open and Distributed Learning, 15(3), 312-327.

Vygotsky, L. S. (1978). Interaction between learning and development. Readings on the development of children, 23(3), 34-41.

Vygotsky, L. S. (1998). The problem of age. In R. W. Rieber & A. S. Carton (Eds.), The collected works of L. S. Vygotsky, Child psychology (pp. 187–206). New York: Plenum.

Wang, T.-H. (2010). Web-based dynamic assessment: Taking assessment as teaching and learning strategy for improving students’ e-learning effectiveness. Computers and Education, 54(4), 1157-1166.

 



[1]Assistant Professor of TEFL (Corresponding Author), Heidari.f@english.usb.ac.ir; Department of English Language and Literature, University of Sistan and Baluchestan, Zahedan, Iran.

[2] PhD in TEFL, izadimi@yahoo.com; Department of English Language and Literature, University of Sistan and Baluchestan, Zahedan, Iran.