
Teaching ESP Students Through Learner-Centered Approach: Zooming in on Student Question Generation

[1]Majid Asgari

[2]Mansoor Ganji*

Research Paper | IJEAP-2010-1635 | DOR: 20.1001.1.24763187.2020.9.4.8.1

Received: 2020-09-05 | Accepted: 2020-11-11 | Published: 2020-12-09

Abstract

Student Question Generation (SQG) is a learner-centered, constructive learning technique which might be a helpful tool for engaging students in the learning and assessment processes and for increasing their interest in the learning materials. This study investigated the effect of this technique on ESP learners’ achievement in an English course and explored the learners' views on the efficacy of using this technique in teaching English. To this end, a mixed-methods research design was employed. The research was conducted at Islamic Azad University in Hidaj with 60 participants (male and female) who were majoring in accounting and civil engineering and were divided into experimental and control groups. Data were collected through an achievement test administered at the end of the course and a focus-group interview held with the experimental group. An independent samples t-test was used to analyze the quantitative data, while the learners' responses were analyzed through content analysis. The results revealed that using SQG significantly helped the students in learning the course materials. The analysis of the qualitative data revealed seven themes related to the effectiveness of using SQG, the most important of which were the usefulness of the technique, the creation of a relaxed and cooperative atmosphere, and increased motivation. The learners further reported that their interest in the course grew and that the quality of their learning improved. Based on the results, some instructional implications are provided for teachers, syllabus designers, and researchers.

Keywords: Student Question Generation, Student-Constructed Test (SCT), Teacher-Constructed Test (TCT), Promoting Learning

  1. Introduction

EFL learners' progress in learning the course materials is mostly examined using traditional teacher-constructed tests. Learners do not usually play any direct role in preparing the tests developed to evaluate their level of understanding of the materials. Different aspects of tests such as content, relevance, significance, and difficulty level are mostly decided by teachers or field experts, whereas such factors may be viewed by professionals, teachers, and students in markedly different ways (Aikenhead, 2008). In fact, there has been much criticism of the traditional way of testing, in which tests are solely conceived, planned, and constructed by teachers.

Newer assessment methods are employed in ways that exert a more diverse and lasting effect on learners throughout the learning process. Jacobs and Farrell (2003) contend that alternative assessments can be time-consuming and less reliable in terms of scoring consistency; however, they can be more meaningful than traditional assessment techniques, which often do not aim at capturing learners' real competence. This idea has spread largely because recent research findings tend to emphasize students' prominent role in nearly all aspects of education. The idea of learners' contribution to different areas of educational programs, such as syllabus design, course delivery, or learning evaluation, has been underscored in many educational theories such as peer learning, constructivism, self-reflection, and multiple assessment theory. The benefits and positive effects of test construction by learners on their learning have been accentuated by many researchers (Brown & Walter, 2005; Lan & Lin, 2011; Lavy & Atara, 2010; Wilson, 2004).

Learners have traditionally not been much involved in different areas of pedagogy, perhaps because they have always been considered passive recipients of knowledge who are neither expected nor able to take part in the decision-making process. With the immense development of online connectivity, learners, teachers, and experts can experience easier ways of promoting different areas of language teaching, particularly by benefiting from question generation or test construction (Chin & Osborne, 2008; Offerdahl & Montplaisir, 2014; Yu, 2012). Question generation, sometimes referred to as test construction by learners, usually pushes them to engage in different cognitive activities, which in turn lead them to better monitor their own learning and to understand the materials more deeply (Gulikers, Bastiaens, & Kirschner, 2004; Papinczak, Peterson, & Babri, 2012; Yu, 2005). Test construction can help learners rehearse, process, and organize the learning materials, all of which can be explained by metacognition theory and information processing theory (Yu & Liu, 2008). It by nature makes learners more attentive to their learning, helping them cover the learning materials in an EFL class more thoroughly. However, only a limited number of studies have investigated the effects of question generation on achievement and motivation (Aflalo, 2018; Brown & Walter, 2005; Chang, 2005; Chang & Ho, 2009; Chin, Brown, & Bruce, 2002; Drake & Barlow, 2008; Sanchez, 2004). Through constructing questions and tests, students increase their involvement in cognitive processing, and this activity stimulates them to strive harder to learn. It can also raise their interest level by increasing their involvement and motivating them to learn. Given the benefits of test construction for students and educational programs, this research focuses on the possible benefits of question generation by ESP learners for their learning of the course materials.

Despite the reviewed literature on the benefits of SQG, and given the opposing results of a few studies indicating the inefficacy of this technique, there are far fewer studies investigating the effects of this technique in language learning (Aflalo, 2018; Yu & Lai, 2014; Yu, Chang, & Wu, 2015). To be more exact, previous studies have mostly looked at the effects of this technique on thinking flexibly (Brown & Walter, 2005), learning motivation (Chin et al., 2002), positive attitudes toward the course (Perez, 1985), language comprehension (Brown & Walter, 2005; Drake & Barlow, 2008), cognitive and metacognitive strategy development (Yu & Liu, 2008), and communication skill (Yu & Liu, 2005). On the other hand, overlooking the potential benefits of this technique, students and teachers do not seem to use it very often in their classes (Chin et al., 2002; Middlecamp & Nickel, 2005). To top it all, while there is a growing body of research substantiating the positive effects of SQG, few research projects have been conducted in the Iranian context regarding the impact of learners' involvement in test construction on learning (Shakurnia, Aslami, & Bijanzadeh, 2018). Moreover, this single study focused on the efficacy of using this technique in medical education.

  2. Review of Literature

In the past decades, traditional paper-and-pencil tests tended to be the dominant way of testing, aiming to collect information about the learners' achievement and learning. Quite often, such tests were decided on and constructed by teachers. However, the wider concept of assessment nowadays assumes that assessment can also be done in other ways, such as observing learners' performance, interviewing them, and comparing their present and past performances. In fact, with this new concept, the way in which learners' progress is evaluated can be redefined, so that other methods and procedures like student-constructed, peer-constructed, or online tests are employed for examining and evaluating learners. Besides, given the problems and limitations of traditional teacher-made testing, a growing interest can be seen among teachers, testing experts, and practitioners in the adoption of new and diverse assessment techniques at different levels of education (Yu & Su, 2015).

SQG is defined by education experts (Lam, 2014; Yu & Wu, 2013) as the process of learners constructing questions on the course materials or related areas of instructional importance, as a tool for encouraging student participation and promoting learning, assessment, motivation, and interest. SCT is one form of alternative assessment in which learners work individually or in groups to share in generating questions and constructing tests. That is, they get involved in evaluating and assessing their own learning progress by taking a more direct and effective role in test construction (Kaufman, 2000). In fact, while the traditional teacher-centered way of test construction holds no place for students in the procedures of testing, SCT involves them directly in the different stages of test construction. Learners engage with all aspects of the instructional materials, structuring them and determining the importance and difficulty level of different concepts. Such tests are referred to as student-made tests by Murphy (1994), who believes that they are an effective way of increasing student collaboration in learning course materials and a useful way of controlling and lowering learners' anxiety in testing settings and the classroom. Bray and Brown (2004) contend that when learners are expected to take more responsibility for their learning journey, assessment is the area where they should be involved, as it can help them realize the importance and priority of materials and promote cooperation between learners and teachers.

Proponents of alternative testing procedures accentuate the benefits of using alternative assessment techniques for learners. Robinson (1995) and Offerdahl and Montplaisir (2014) believe that they encourage better critical thinking, more authentic learning, a deeper level of knowledge, and more cooperation among students. Such techniques promote learners' engagement in various cognitive strategies, helping them attend to important and relevant materials in the learning content, control comprehension, monitor their understanding of the materials, and put forward solutions to the problems that pop up (Dikli, 2003; Quenemoen, 2008; Yu & Hung, 2006).

Alternative assessment generally promotes student-centered learning in all areas and levels of education because it is based on the idea that learners can evaluate their own learning and that such evaluation can contribute to their learning of the materials. Working together to construct tests, students experience a good level of cooperation, which can lead to higher motivation and confidence in their learning journey (Sadeghi & Ganji, 2020). Cooperative activities are usually followed by greater engagement of the learners in completing the tasks, which often facilitates the learning process, provides them with an attractive, supportive learning environment, and improves their achievement (Gheith, 2002). When learners work in groups, they learn more of the course materials, which are mostly retained longer compared to other ways of presenting instructional materials. In the meantime, group learning brings about more satisfying learning contexts for learners (Beckman, 1990; Johnson & Johnson, 2000; Johnson & Johnson, 2008; Kohn, 1986). Many researchers (Aflalo, 2018; Helmericks, 1993; Muir & Tracy, 1999; Tripathy, 2004) also assert that collaboration can reduce learners' anxiety in testing, which is typically a hindrance to learning materials or recalling information.

Odafe (1998) studied the effect of question generation on learning mathematics and found that students felt more relaxed and interested when taking a test constructed with their own help. In a similar study, Kaufman (2000) concluded that after assisting teachers in test generation, ESL learners were more enthusiastic in their learning and found that the strategy helped them share knowledge with other learners, which was very beneficial to their learning. He also found that learners had a better comprehension of the materials and a clearer idea of the course expectations. This is because SQG involves the learners in locating and understanding the hard parts of the materials and helps them increase their share and involvement in identifying the materials most useful for learning. The strategy can also be used as homework through which students assign topics and materials for their learning and evaluate its efficiency in improving their learning and test performance (Yu & Chen, 2014). The results of the mentioned studies mostly revealed that question generation by learners has an improving effect on their learning and knowledge retention.

On the other hand, some research findings show no effect of SQG on learning. Bekkink, Donders, Kooloos, De Waal, and Ruiter (2015) reported that they did not find SQG to be an effective way to promote students' learning process. Murphy (1991) argues that tests constructed by students cannot be taken as a very accurate way to evaluate and assess students, but they can be employed for purposes like stimulating students or collecting data about testing content or procedures. Similarly, Bottomley and Denny (2011), in a study on the use of multiple-choice question generation for learning, concluded that the task excited the students but was not of great value in boosting their achievement.

On account of the discussed dissatisfaction with the dominant traditional assessment, the scarcity of research conducted on this topic especially in Iran (Shakurnia, Aslami, & Bijanzadeh, 2018), and the proliferation of online teaching due to the spread of the coronavirus, which demands much more involvement on the part of language learners, the researchers decided to study the effects of test construction on Iranian ESP students’ learning and their attitudes toward this technique. In fact, this study sought to see whether SQG could provide an effective way of learning or assessment for EFL teachers, testing experts, and learners. Thus, the following research questions were raised.

Research Question One: Does question generation by learners promote their learning in an ESP class?

Research Question Two: What do Iranian ESP learners think of the probable effects of question generation on their learning?

  3. Method
    • Design and Participants

A total of 60 male and female students were recruited for this study through convenience sampling. They were students at Islamic Azad University, Hidaj Branch (IAU-Hidaj), Iran, all taking a General English course. They were non-native speakers of English, their ages ranged from 19 to 25, and they were majoring in accounting and civil engineering. These students were randomly divided into two groups of 30 students each: one group served as the experimental group (EG) and the other as the control group (CG).

  • Data Collection Procedure

The participants, who had been randomly divided into the EG and CG, were placed in two different classes. They were given a homogeneity test (the Nelson Test) to ascertain that they did not differ significantly in English language proficiency. To this end, Nelson Test 200C, which is commonly used for determining participants' proficiency level, was administered in 60 minutes. It is a generic, standardized test frequently used in L2 teaching research for such purposes. The test contained 50 multiple-choice items, yielding scores in a range of 0 to 50. Once the results showed that the participants were at nearly the same level of proficiency, the two groups were taught in separate classes over five days. Each class met for 10 sessions (two sessions a day). Each session took 45 minutes, and in each session one passage was taught. The passages were the same for both the experimental and control groups. However, the experimental group was taught from 9 a.m. to 9:45 a.m., while the control group was taught from 10 a.m. to 10:45 a.m. The instructor was the same for both groups. The materials consisted of 10 passages selected from the book Intermediate Comprehension Passages by Byrne (1977). The readability of the passages had been estimated in a previous study conducted in Iran using the Flesch readability index, which yielded a score of 75.1.
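For context, the Flesch Reading Ease score cited above is a deterministic formula over average sentence length and average syllables per word. The Python sketch below is a minimal illustration of how such a score is computed; the naive vowel-group syllable counter is an assumption for illustration, not the tool used in the original readability study.

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: each run of consecutive vowels counts as one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    # Flesch Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

# Higher scores mean easier text; the 75.1 reported for the course passages
# falls in the "fairly easy" band of the Flesch scale.
print(round(flesch_reading_ease("The cat sat on the mat. It was a sunny day."), 1))
```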

As regards the treatment, at the end of each session, the experimental group students were asked to generate five open-ended questions and to answer them in class. The control group, on the other hand, did not generate any tests or questions based on the materials they studied; they only had to answer the comprehension questions following the passages. Open-ended rather than multiple-choice questions were chosen because the participants did not seem to be good at generating the latter; it was easier for them to generate open-ended questions, and this type of question demanded longer answers and a fuller understanding of the passage. However, since the university students had not learned how to construct tests, the researcher explained some general rules of question generation in Persian in the first session of the course, clarifying vague points through examples. The only difference between the procedures followed in the two groups was that in the experimental group the teacher asked the students to construct several questions and answer them, whereas in the control group the students only answered the questions that followed each passage. It should be mentioned that although the students prepared the questions at home and by themselves, they were allowed to discuss, analyze, and compare their questions with those of their classmates at the beginning of the next session. Finally, they answered the questions and submitted the answers to the teacher. After the tenth session, the learners were given an achievement test based on the course materials taught. As regards the qualitative part, a focus-group interview was held, in which students discussed the strengths and weaknesses of question generation as a learning technique. The researcher chaired the meeting and asked the students several questions about the treatment to keep them on track.

  • Data Analysis Procedure

The results of the achievement test for both groups were collected and analyzed through descriptive statistics, a normality test, and an independent samples t-test in order to detect any probable differences between the two groups in their learning. The learners' views and responses to the open-ended questions in the interview session were then analyzed through content analysis. First, the interviews were transcribed. Second, the researchers read the text twice to identify the main themes and categories of the responses. The text was analyzed sentence by sentence and, in some cases, paragraph by paragraph. The names of the codes were mostly based on the words and phrases used by the students. After the first reading, five of the main themes were easily recognized, named, and finalized. However, identifying the last two categories, separating them from the rest of the themes, and drawing distinctions among some similar categories took two more readings of the texts. To ensure interrater reliability, the codes decided on by the two researchers were compared and contrasted, and the final decision was made after discussing the differences, which concerned about 15 percent of the categories.
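As a sketch of the quantitative pipeline just described (normality check followed by an independent samples t-test), the snippet below uses randomly generated placeholder scores, not the study's raw data; only the group sizes and the summary statistics reported later in Table 2 are taken from the paper, and Shapiro-Wilk is an assumed choice of normality test, since the paper does not name one.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder scores standing in for the raw data (n = 30 per group); the
# means and SDs mirror those reported in Table 2.
eg_scores = rng.normal(loc=34.90, scale=6.69, size=30)
cg_scores = rng.normal(loc=28.96, scale=7.81, size=30)

# Normality check for each group.
for name, scores in (("EG", eg_scores), ("CG", cg_scores)):
    w, p = stats.shapiro(scores)
    print(f"{name}: Shapiro-Wilk W = {w:.3f}, p = {p:.3f}")

# Independent samples t-test comparing the two groups' achievement scores.
t, p = stats.ttest_ind(eg_scores, cg_scores, equal_var=True)
print(f"t({eg_scores.size + cg_scores.size - 2}) = {t:.2f}, p = {p:.4f}")
```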

  4. Results
    • Research Question One

In order to see whether there was any difference between the two groups regarding their general proficiency in English, the Nelson Test (200C), which is used to determine examinees' English proficiency level, was administered. The test results showed that the two groups had nearly the same level of English knowledge; that is, the students in the two groups appeared to be at the same level of general English. This is shown in Table 1, which presents the descriptive statistics for the proficiency test. It should be noted that the score scale is 0-50; that is, the maximum score is 50.

Table 1: Descriptive Statistics Related to the Proficiency Test (EG and CG)

Group                     N    Mean    Std.   Variance   Range   Min   Max
Experimental group (EG)   30   28.96   4.88   23.81      18      21    39
Control group (CG)        30   29.86   5.27   27.77      20      20    40

As shown in Table 1, the mean scores of the two groups were very similar: 28.96 for the EG and 29.86 for the CG. The proficiency test was administered, and the related data evaluated, to find out whether the students in the two groups were similar with regard to their general English knowledge before receiving the treatment. The results showed no significant difference between the two groups in this respect.

As mentioned, 10 teaching sessions were held for both groups. After the final session, all students in the experimental and control groups took the achievement test, which covered the materials taught in the course. This test consisted of four sections containing multiple-choice, synonym-matching, fill-in-the-blank, and comprehension questions based on four passages taken from the book. The test contained 36 items and lasted 50 minutes. It was evaluated by four experts in the field of TEFL to ensure its validity and was then piloted with 16 students similar to the target sample; the reliability of the test was 0.71. The results of the test for both groups were collected, analyzed, and compared in search of any probable differences in achievement. Table 2 below presents the descriptive statistics for the achievement test.
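The paper does not state how the 0.71 reliability coefficient was estimated. Assuming an internal-consistency index such as Cronbach's alpha (equivalent to KR-20 for dichotomously scored items), a minimal sketch of the computation is given below; the pilot matrix is a random placeholder, so it will not reproduce the 0.71 obtained from the real pilot responses.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = scores.shape[1]                          # number of items
    item_var = scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

# Placeholder pilot data: 16 respondents x 36 dichotomously scored items,
# mirroring the pilot group size and item count reported above. Random
# data yields an alpha near zero; the real pilot responses produced 0.71.
rng = np.random.default_rng(1)
pilot = rng.integers(0, 2, size=(16, 36)).astype(float)
print(round(cronbach_alpha(pilot), 2))
```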

Table 2: Descriptive Statistics for the Achievement Test (EG and CG)

Group                     N    Mean    Std.   Variance   Range   Min   Max
Experimental group (EG)   30   34.90   6.69   44.75      26      21    47
Control group (CG)        30   28.96   7.81   60.99      28      17    45

As seen in Table 2, the mean scores of the two groups were quite different: 34.90 for the experimental group (EG) and 28.96 for the control group (CG). The other measures also showed differences between the scores of the students in the two groups. The achievement test was administered, and the related data evaluated, to find out whether the two groups differed in their level of achievement after the treatment. In order to determine whether the difference between the two groups on the achievement test was statistically significant, an independent samples t-test was run in SPSS. At the 0.05 level of significance with 58 degrees of freedom, the critical value of t is 2.021. As shown in Table 3, the observed t-value is higher than the critical value, so it can be concluded that the difference between the groups' scores is statistically significant and that the participants in the EG outperformed those in the CG.

Table 3: T-test Results for the Scores of the Groups on the Achievement Test

t-critical   two-tailed   DF   t-observed
2.021        0.05         58   3.17

Thus, it is safe to claim that question generation by ESP learners improved their achievement in their English classes. In fact, the two groups differed significantly in their learning of the course materials, which supports the claim that question generation by students can be helpful in learning the materials.
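The reported t-value can also be recovered directly from the descriptive statistics in Table 2, which serves as a sanity check on Tables 2 and 3. A minimal sketch using scipy's summary-statistics t-test follows; it applies the standard pooled-variance computation, assuming equal variances as the reported df = 58 implies.

```python
from scipy import stats

# Summary statistics from Table 2 (achievement test, n = 30 per group).
result = stats.ttest_ind_from_stats(
    mean1=34.90, std1=6.69, nobs1=30,  # experimental group (EG)
    mean2=28.96, std2=7.81, nobs2=30,  # control group (CG)
    equal_var=True,
)
# Yields t ~= 3.16 and p < 0.05, matching the t-observed of 3.17 reported
# in Table 3 against the critical value 2.021 at df = 58.
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```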

  • Research Question Two

The second research question sought to find out what the participants in the experimental group, who had experienced question generation in their learning, thought of the possible benefits of SQG for their learning. To answer this question, the learners were invited to a focus-group interview session lasting 45 minutes. They were asked to state briefly their perceptions of the use of question generation. Most of them expressed their ideas during the session and, following the researcher's guidance, participated actively. The responses were transcribed by the researchers for later content analysis, then read three times, categorized, and coded. The analysis of the data revealed the following seven categories: usefulness of the strategy, increased interest, improved motivation in learning, increased cooperation between teacher and learners, deeper learning, increased study time, and respect for the learners. Table 4 summarizes these perceptions, and the quotes below exemplify each category, extracted from the statements provided.

Table 4: Descriptive Statistics on Learners' Perceptions of SQG

Learners' Perception                                N    Percentage   Total
Usefulness of Using SQG                             28   93.3         30
Increasing Interest                                 21   70.0         30
Improving Motivation in Learning                    23   76.6         30
Increasing Cooperation among Teacher and Learners   26   86.6         30
Growing Deeper Learning                             21   70.0         30
Increasing Study Time                               19   63.3         30
Increasing Respect for Learners                     22   73.3         30

The analysis of the data showed that 93.3% of the learners (28 out of 30) acknowledged that the opportunity to generate questions while studying the course materials was very useful and helped them learn better. For instance, one response was, "It sure helps us learn better by making us review the materials". Another learner said, "I was trying more to learn because I had to find useful and important questions, and generally found question generation helpful". Although worded in diverse ways, the idea of usefulness could be seen and acknowledged in most of the statements. The learners mostly believed that question generation provided them with the opportunity to review the course materials more regularly, which made the learned points more long-lasting. The remaining learners (6.6%) did not have a clear view on the usefulness of the strategy. One response, for example, was, "This way of teaching might have made some changes in our learning; however, I didn't notice any specific benefit".

The second category was an increase in the students’ interest level. Twenty-one students, accounting for 70% of the participants, mentioned that question generation was interesting and acknowledged that their interest level increased. A typical comment was, “I liked the experience, and it was interesting”. Another student responded, “This way, the class was not boring because we could do something challenging”. Twelve (40%) of the participants also reported that the question generation technique helped them feel less stressed in their learning, which made them more interested in the new learning experience.

According to the reported statements, the applied change also improved the learners' motivation. Some 76.6% of the respondents (23 out of 30) indicated that this approach, in which they were given a chance to generate questions, was more motivating than the traditional approach in which the test was constructed solely by the teacher. A participant said, "There was an internal force inside us to go back to the sections and parts of the unit, which was sort of reviving the information in the mind." The participants also reported that question generation helped them grasp the materials more comprehensively, as they were closely attending to all parts of the instructional materials and doing their best to compete with their peers. One of them said, "In this way of learning, no point is left out of the mind because the attention is on its top". Two respondents made similar statements: "The new method is better as it encourages more struggle for us and our classmates".

Another insightful finding from the analysis concerned cooperation. Twenty-six (86.6%) of the learners mentioned that question generation was a great help to the teaching process and increased their sense of cooperation, because they preferred to work in teams and were persistent in gaining better scores. As some of them put it, with this method learners tend to help each other more, and they had a supportive and encouraging atmosphere in their class. In short, most of the learners were delighted with having the chance to generate questions and agreed that the approach created an interesting and cooperative learning context in which their learning and achievement improved.

Still another concept that attracted attention during the analysis was the quality of learning. The participants mostly pointed out the positive effects of the method on deepening their learning. One of them said, "It seems we can learn easier with less trying, and we learn the points for long”. Another learner restated this effect as, "I used to forget the points after a while, but now I think I can keep them in mind longer”. It seems that such deep learning occurs because learners are more attentive to and interested in the learning process, which has repeatedly been reported as an assisting factor in FL/SL learning (Asgari, Ketabi, & Amirian, 2017; Dornyei, 2014; Mazer, 2013).

“The more you try, the better you will learn. With the new method, some improvement is expected to occur, while previously the learners did not try that much”, one of the learners said. “It is the learners' efforts that matter, not the method”, said another. These and similar statements, found in the responses of 19 participants, reveal that the learners increased the total effort and time they devoted to learning. They spent more time on their learning, which in turn is expected to improve it, since finding and posing questions is demanding, especially when the teacher assigns it as homework. It could be understood that the learners were pleased with the changes to their learning, and so their study time increased.

About 73.3% (22 out of 30) of the participants regarded having the chance to generate questions during the sessions as a sign of respect, which led them to gain a feeling of value in their own learning process. Some of them said they were proud because they were playing a role beyond that of a traditional learner. One student mentioned, "I had a feeling of self-importance that I had never had before." Another said, "You can have some kind of worthiness when you, too, can help with your learning." The participants, in fact, are used to a customary trend in their learning and hardly ever see changes or new trends, for reasons such as the lack of freedom for teachers or the policies of the educational system (Guya & Izadi, 2002). Therefore, as language learners do not frequently encounter new techniques, methods, or strategies in terms of tasks, materials, syllabuses, and teachers, they found it respectful to be given a part in the process.

  5. Discussion

The aim of this study was to examine the role of question generation by learners in an English course for ESP students. Using a quantitative method, the first aim was to determine the effect of the question generation technique on the learners' achievement of the course materials in an EFL learning context. In the qualitative phase, the study sought to uncover the learners' perceptions of the role of SQG in the learning process.

It was revealed that learners perform differently as a result of using the SQG technique. In fact, generating questions from the instructional materials promoted the learners' achievement in learning the course materials and their level of understanding. The results are in line with some previous studies in this regard (e.g., Gheith, 2002; Shakurnia, 2015; Yu & Chen, 2014; Yu & Hung, 2006). The results corroborate information processing theory, cognitive processing theory, and the effectiveness of cooperative learning. One reason for the improvement in learning seems to be that learners notice or highlight specific materials as the ones they will attend to when generating questions. This makes them review the materials repeatedly, helping them retain the materials better. However, the findings did not coincide with the results of some similar studies (Bekkink et al., 2015; Bottomley & Denny, 2011; Murphy, 1991), which did not support using alternative assessments as an effective way of advancing learning. For one thing, the students in this study worked in groups only in the last stage of question generation, the comparison with their peers, while the students in Bekkink et al.'s (2015) study did the whole task in groups. Secondly, the fact that the Iranian students were using this technique for the first time in their educational lives might have boosted their interest and motivation, hence exerting a more profound effect on their learning.

With respect to the second research question, the results probably imply that alternative testing methods and new strategies like SQG often bring about change for students in their learning. SQG seems attractive to students, and this increased attraction to the materials will possibly lead to a greater effort to learn on the part of learners (Kaufman, 2000; Odafe, 1998). The enthusiasm resulting from the alternative assessment encourages students to attend classes with boosted motivation. It is much easier for teachers to face and teach happy, attentive students, as the teaching task can then be performed easily and efficiently. From the responses given to the second research question, it can further be inferred that giving learners a share in the teaching process increases their participation and attention in class, which in turn helps them become more satisfied with their learning process and outcome. This can be explained by the respect learners receive in their learning experience, which distinguishes the current learning trend from the previous one. The respect can lead to a sense of significance that probably encourages learners to try harder to learn. The idea is supported by Hidi and Renninger (2006), who argued that a learning context that fosters a feeling of self-efficacy helps make learning more effective. The improvement can also originate from the feeling of novelty or pioneering that learners have as a result of question generation. That is, in EFL teaching, learners seem to grow accustomed to the usual, often boring, methods and strategies of teaching, in which limited changes and opportunities can be seen. This is mostly due to the lack of free space for teachers to adopt innovative ways of teaching or alternative assessments, or to the policies of the educational system (Guya & Izadi, 2002). In Iran, where the study was conducted, EFL learners rarely have the chance to experience diverse teaching methods or strategies, since teachers and students are accustomed to and content with the traditional methods of teaching, which do not take much time for correction or involvement and rely on old, readily available texts and exams from previous semesters.

The responses to the second research question indicated that question generation by learners facilitates learning because it increases their interest in the class and materials while decreasing their stress. This can be explained by the suggestion that learning improves when a new technique (here, question generation by learners) is added to the teaching procedure; that is, learning is enhanced when a task that learners consider helpful and attractive is added to the learning cycle. The findings also support the view that the learning process proceeds more effectively when the learner's role is treated as important. Student-centered learning assigns learners an active role that positively influences the processes of learning and teaching in EFL classes (Kaufman, 2000; Lam, 2014). When students are asked to generate questions, they are in fact required to participate more in the learning process, which in turn boosts their cooperative feelings. The findings are also in agreement with the idea that cooperation among learners helps them learn more easily (Brown & Walter, 2005; Murphy, 1994; Robinson, 1995; Sadeghi & Ganji, 2020). The results may further be explained by the fact that, in the new learning context, a high level of collaboration was seen, the learners' interest level was enhanced, and their anxiety level dropped, all of which can improve learners' attention in the learning process. This finding has been echoed by many previous researchers (e.g., Beckman, 1990; Helmericks, 1993; Kohn, 1986). According to some education experts (e.g., Lee & Pulido, 2017; Walkington, 2013), one way to facilitate learning is to keep learners more attentive in class, which has recently been seen as a vital responsibility for teachers. Similarly, Chastain (1988) believes that teachers who keep learners attentive and reasonably content are the most effective ones. Considering the findings and views discussed above, SQG serves as an assisting factor in EFL learning by keeping learners more interested and less stressed in the learning process.

  6. Conclusion and Implications

This study examined the probable efficacy of SQG for the achievement of Iranian ESP learners in learning their course materials. The analysis of the achievement scores showed that SQG significantly improved the learners' achievement and helped them to a great extent in learning. The learners' responses also revealed that they had a very positive view of the experience and considered it a helpful technique in their learning. The study further explored the participants' views on the effect of SQG on their interest and stress in their English class. The responses indicated that the learners' interest increased and their stress level decreased.

Generally, it appears that SQG has the potential to improve EFL learning by raising learners' achievement and interest, because it provides them with a more cooperative and relaxed learning opportunity. It proved to be both useful and practical. Based on the results, it is suggested that alternative assessment techniques like SQG or SCT be employed to promote EFL learning. The results clearly endorse the idea that teachers' attempts to use SQG as an aid for learning course materials are useful and can improve learners' achievement. Teachers are strongly advised to use the questions generated by learners as a source for their own test construction, which will enrich their tests and encourage the learners to take their test construction more seriously. Finally, teachers, syllabus designers, and experts in EFL teaching should focus on techniques and strategies in which question generation is a central activity in instruction.

Further investigation can help establish the usefulness of SQG in more specific areas of EFL skills and components. More studies can be done to compare the effects of the current method with other alternative assessment techniques. It also seems necessary to examine more closely its effects on learners' affective attitudes, such as interest or stress, separately, as such constructs are complex and multi-dimensional (Schraw & Lehman, 2001). Still another option is to compare the attitudes of both control and experimental groups toward the efficacy of the treatments used in their classes. Finally, the effects of this technique with different question types, or its impact on higher-order and lower-order thinking skills, can be investigated. Research into such constructs can assist teachers and syllabus designers in the field in improving students’ achievement in EFL learning.

References

Aflalo, E. (2018). Students generating questions as a way of learning. Active Learning in Higher Education. https://doi.org/10.1177/1469787418769120

Aikenhead, G. S. (2008). Importation of science programs from Euro-American countries into Asian countries and regions: A recipe for colonization? Paper presented at the Conference of Asian Science Education, Kaohsiung, Taiwan.

Asgari, M., Ketabi, S., & Amirian, Z. (2017). The effect of using interest-based materials on EFL learners' performance in reading: Focusing on gender differences. Iranian Journal of English for Academic Purposes, 6(2), 1-12.

Beckman, M. (1990). Collaborative learning: Preparation for the workplace and democracy. College Teaching, 38(4), 128-133.

Bekkink, M., Donders, A., Kooloos, J., De Waal, R., & Ruiter, D.  (2015). Challenging students to formulate written questions: A randomized controlled trial to assess learning effects. BMC Medical Education, 15(1). https://doi.org/10.1186/s12909-015-0336-z

Bottomley, S., & Denny, P. (2011). A participatory learning approach to biochemistry using student authored and evaluated multiple-choice questions. Biochemistry and Molecular Biology Education, 39(5), 352–361.

Bray, G. B., & Brown, S. (2004). Assessing reading comprehension: The effects of text-based interest, gender, and ability. Instructional Assessment, 9, 107-128.

Brown, S. I., & Walter, M. I. (2005). The art of problem posing (3rd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.

Byrne, D. (1977). Intermediate comprehension passages: With recall exercises and aural comprehension tests. Longman.

Chang, M.-M. (2005). Apply self-regulated learning strategies in a web-based instruction—an investigation of motivation perception. Computer Assisted Language Learning, 18(3), 217–230.

Chang, M. M., & Ho, C. M. (2009). Effects of locus of control and learner-control on web-based language learning. Computer Assisted Language Learning, 22(3), 189–206.

Chastain, K. (1988). Developing second language skills: Theory and practice. Orlando, FL: Harcourt Brace Jovanovich.

Chin, C., & Osborne, J. (2008). Students’ questions: A potential resource for teaching and learning science. Studies in Science Education, 44(1), 1–39.

Chin, C., Brown, D. E., & Bruce, B. C. (2002). Student-generated questions: A meaningful aspect of learning in science. International Journal of Science Education, 24(5), 521–549.

Dikli, S. (2003). Assessment at a distance: Traditional vs. alternative assessments. The Turkish Online Journal of Educational Technology, 2(3), 13–19.

Dornyei, Z. (2014). Researching complex dynamic systems: ‘Retrodictive qualitative modeling’ in the language classroom. Language Teaching, 47(1), 80-91.

Drake, J. M., & Barlow, A. T. (2008). Assessing students’ levels of understanding multiplication through problem writing. Teaching Children Mathematics, 14(5), 272–277.

Fowler, W. S., & Coe, N. (2002). Nelson English language tests. Frome and London: Butler and Tanner Ltd.

Gheith, G. (2002). Using cooperative learning to facilitate alternative assessment. Retrieved from http://www.exchanges.state.gov/forum/vols/vol40/no3/p26/htm

Gulikers, J., Bastiaens, T., & Kirschner, P. (2004). A five-dimensional framework for authentic assessment. Educational Technology Research and Development, 52(3), 67–76.

Guya, Z., & Izadi, S. (2002). The role of teachers in decision-making on curriculum development. Journal of Humanities of Alzahra University, 42, 147–173.

Helmericks, S. (1993). Collaborative testing in social statistics: Toward Gemeinstat. Teaching Sociology, 21, 287-297.

Hidi, S., & Renninger, K. (2006). The four-phase model of interest development. Educational Psychologist, 41(2), 111–127.

Jacobs, G. M., & Farrell, T. S. C. (2003). Understanding and implementing CLT paradigm. RELC Journal, 34(1), 5-25.

Johnson, D. W., & Johnson, R. T. (2008). Social interdependence theory and cooperative learning: The teacher's role. In R. M. Gillies, A. F. Ashman, & J. Terwel (Eds.), The teacher’s role in implementing cooperative learning in the classroom (pp. 9–37). Boston, MA: Springer. https://doi.org/10.1007/978-0-387-70892-8_1

Johnson, D. W. & Johnson, R. (2000). Teaching students to be peacemakers: Results of twelve years of research. http://www.clcrc.com/pages/Meta-Analysis Of Peacemaker Studies.html 

Kaufman, L. M. (2000). Student-written tests: An effective twist in teaching language. Retrieved from http:// www. Lionel/ Kaufman.Com/studentswritingowntests.htm

Kohn, A. (1986). The case against competition. Boston: Houghton Mifflin.

Lam, R. (2014). Can student-generated test materials support learning? Studies in Educational Evaluation, 43, 95–108.

Lan, Y.F., & Lin, P.C. (2011). Evaluation and improvement of student's question-posing ability in a web-based learning environment. Australasian Journal of Educational Technology, 27(4), 581-599.

Lavy, I., & Atara, S. (2010). Engaging in problem posing activities in a dynamic geometry setting and the development of prospective teachers' mathematical knowledge. The Journal of Mathematical Behavior, 29(1), 11-24.

Lee, S., & Pulido, D. (2017). The impact of topic interest, L2 proficiency, and gender on EFL incidental vocabulary acquisition through reading. Language Teaching Research, 21(1), 118–135.

Mazer, J. P. (2013). Validity of the student interest and engagement scales: Associations with student learning outcomes. Communication Studies, 64(2), 125–140. https://doi.org/10.1080/10510974.2012.727943

Middlecamp, C. H., & Nickel, A. L. (2005). Doing science and asking questions II: An exercise that generates questions. Journal of Chemical Education, 82(8), 1181–1186.

Muir, S., & Tracy, D. (1999). Collaborative essay testing. College Teaching, 47, 33-36.

Murphy, T. (1991). Student-made tests. Modern English Teacher, 17(2), 28-29.

Murphy, T. (1994). Tests: Learning through negotiated interaction. TESOL Journal, 4(2), 2-16.

Odafe, V. U. (1998). Students generating test items: a teaching and assessment strategy. Mathematics Teacher, 91(3), 198-202.

Offerdahl, E. G., & Montplaisir, L. (2014). Student-generated reading questions: Diagnosing student thinking with diverse formative assessments. Biochemistry and Molecular Biology Education, 42(1), 29–38.

Papinczak, T. R., Peterson, A. S., & Babri, K. (2012). Using student-generated questions for student-centered assessment. Assessment & Evaluation in Higher Education, 37(4), 439–452.

Quenemoen, R. (2008). A brief history of alternative assessments based on alternative achievement standards. Minneapolis, MN: National Center on Educational Outcomes. Retrieved from http://www.cehd.umn.edu/NCEO/onlinepubs/Synthesis68/Synthesis68.pdf

Robinson, J. M. (1995). Alternative assessment techniques for teachers. Music Education Journal, 81(5), 28-34.

Sadeghi, E., & Ganji, M. (2020). The effects of cooperative learning on Iranian university students’ class-engagement, self-esteem, and self-confidence. Journal of Modern Research in English Language Studies, 7(4), 89-109.

Sanchez, A. (2004). The task-based approach in language teaching. International Journal of English Studies, 4(1), 39–71.

Schraw, G., & Lehman, S. (2001). Situational interest: A review of the literature and directions for future research. Educational Psychology Review, 13, 23–52.

Shakurnia, A. (2015). The effects of student-generated MCQs on their academic achievement. Iranian Journal of Medical Education, 1, 521-529.

Shakurnia, A., Aslami, M., & Bijanzadeh, M. (2018). The effect of question generation activity on students’ learning and perception. Journal of Advances in Medical Education and Professionalism, 6(2), 70-77.

Tripathy, H. H. (2004). Cooperative learning: A strategy for teaching science. Indian Journal of Psychometry and Education, 35(1), 3–8.

Walkington, C. (2013). Using adaptive learning technologies to personalize instruction: The impact of relevant context on performance and learning outcome. Journal of Educational Psychology, 105(4), 932–945.

Wilson, E. V. (2004). Exam Net asynchronous learning network: Augmenting face-to-face courses with student-developed exam questions. Computers & Education, 42(1), 87-107.

Yu, F. Y. (2005). Promoting metacognitive strategy development through online question-generation instructional approach. Proceedings of International Conference on Computers in Education, Nanyang Technological University, Singapore, 564-571.

Yu, F. Y. (2012, November 26-30). Learner-centered pedagogy adaptable and scaffolded learning space design-online student question-generation [Paper Presentation]. International conference on computers in education, Singapore.

Yu, F. Y., Chang, Y. L., & Wu, H. L. (2015). The effects of an online student question-generation strategy on elementary school student English learning. Research and Practice in Technology Enhanced Learning, 10(24), 1–16. https://doi.org/10.1186/s41039-015-0023-z

Yu, F. Y., & Chen, Y. J. (2014). Effects of student-generated questions as the source of online drill-and-practice on learning. British Journal of Educational Technology, 45(2), 316-329.

Yu, F. Y., & Hung, C. (2006). An empirical analysis of online multiple-choice question-generation learning activity for the enhancement of students’ cognitive strategy development while learning science. Lecture Series on Computer and Computational Sciences, Crete, Greece, 585-588.

Yu, F. Y., & Liu, Y. H. (2005). Potential values of incorporating multiple-choice question-construction for physics experimentation instruction. International Journal of Science Education, 27(11), 1319–1335.

Yu, F. Y., & Liu, Y. H. (2008). The comparative effects of student question-posing and question-answering strategies on promoting college students’ academic achievement, cognitive and metacognitive strategies use. Journal of Education & Psychology, 31(3), 25-52.

Yu, F. Y., & Su, Ch. L. (2015). A student-constructed test learning system: The design, development and evaluation of its pedagogical potential. Australasian Journal of Educational Technology, 31(6), 685–698.

Yu, F. Y., & Wu, C. P. (2013). Predictive effects of online peer feedback types on performance quality. Educational Technology and Society, 16(1), 332-341.

 

 

[1]Assistant Professor in TEFL, asgarimaj@gmail.com; Department of English, Hidaj-Branch, Islamic Azad University, Hidaj, Iran.

[2]Assistant Professor in TEFL (Corresponding Author), ganji@cmu.ac.ir; English Department, Chabahar Maritime University, Chabahar, Iran.

Murphy, T. (1994). Tests: Learning through negotiated interaction. TESOL Journal, 4(2), 2-16.
Odafe, V. U. (1998). Students generating test items: a teaching and assessment strategy. Mathematics Teacher, 91(3), 198-202.
Offerdahl, E. G., Montplaisir, L. (2014). Student generated reading questions: Diagnosing student thinking with diverse formative assessments. Biochemistry and Molecular Biology Education 42(1), 29–38.
Papinczak, T. R., Peterson, A. S., Babri, K. (2012). Using student-generated questions for student-centered assessment. Assessment & Evaluation in Higher Education 37(4), 439–52.
Quenemoen, R. (2008). A brief history of alternative assessments based on alternative achievement standards. Minneapolis, MN: National Center on Educational Outcomes. Retrieved from http://www.cehd.umn.edu/NCEO/onlinepubs/Synthesis68/Synthesis68.pdf
Robinson, J. M. (1995). Alternative assessment techniques for teachers. Music Education Journal, 81(5), 28-34.
Sadeghi, E., & Ganji, M. (2020). The effects of cooperative learning on Iranian university students’ class-engagement, self-esteem, and self-confidence. Journal of Modern Research in English Language Studies, 7(4),89-109.
Sanchez, A. (2004). The task-based approach in language teaching. International Journal of English Studies, 4(1), 39–71.
Schraw, G., & Lehman, S. (2001). Situational interest: A review of the literature and directions for future research. Educational Psychology Review, 13, 23–52.
Shakurnia, A. (2015). The effects of student-generated MCQs on their academic achievement. Iranian Journal of Medical Education, 1, 521-529.
Shakurnia, A., Aslami, M., & Bijanzadeh, M. (2018). The effect of question generation activity on students’ learning and perception. Journal of Advances in Medical Education and Professionalism, 6(2), 70-77.
Tripathy, H. H. (2004). Cooperative learning: A strategy for teaching science. Indian Journal of Psychometry and Education, 35(1), 3-8
Walkington, C. (2013). Using adaptive learning technologies to personalize instruction: The impact of relevant context on performance and learning outcome. Journal of Educational Psychology, 105(4), 932–945.
Wilson, E. V. (2004). Exam Net asynchronous learning network: Augmenting face-to-face courses with student-developed exam questions. Computers & Education, 42(1), 87-107.
Yu, F. Y. (2005). Promoting metacognitive strategy development through online question-generation instructional approach. Proceedings of International Conference on Computers in Education, Nanyang Technological University, Singapore, 564-571.
Yu, F. Y. (2012, November 26-30). Learner-centered pedagogy adaptable and scaffolded learning space design-online student question-generation [Paper Presentation]. International conference on computers in education, Singapore.
Yu, F. Y., Chang, Y. L., & Wu, H. L. (2015). The effects of an online student question-generation strategy on elementary school student English learning. Research and Practice in Technology Enhanced Learning, 10(24), 1-16. Doi: https://doi.org/10.1186/s41039-015-0023-z
Yu, F. Y., & Chen, Y. J. (2014). Effects of student-generated questions as the source of online drill-and-practice on learning. British Journal of Educational Technology, 45(2), 316-329.
Yu, F. Y., & Hung, C. (2006). An empirical analysis of online multiple-choice question-generation learning activity for the enhancement of students’ cognitive strategy development while learning science. Lecture Series on Computer and Computational Sciences, Crete, Greece, 585-588.
Yu, F. Y., & Liu, Y. H. (2005). Potential values of incorporating multiple-choice question-construction for physics experimentation instruction. International Journal of Science Education, 27(11), 1319–1335.
Yu, F. Y., & Liu, Y. H. (2008). The comparative effects of student question-posing and question-answering strategies on promoting college students’ academic achievement, cognitive and metacognitive strategies use. Journal of Education & Psychology, 31(3), 25-52.
Yu, F. Y., & Su, Ch. L. (2015). A student-constructed test learning system: The design, development and evaluation of its pedagogical potential. Australian Journal of Educational Technology, 31(6), 685-698.
Yu, F. Y., & Wu, C. P. (2013). Predictive effects of online peer feedback types on performance quality. Educational Technology and Society, 16(1), 332-341.