
Iranian Medical ESP Practitioners' Reading Comprehension Assessment Literacy: Perceptions and Practices

[1] Mozhdeh Shahzamani*

[2] Mohammad Hassan Tahririan

Research Paper | IJEAP-2009-1614 | DOR: 20.1001.1.24763187.2021.10.1.1.5

Received: 2020-09-13 | Accepted: 2020-12-29 | Published: 2021-01-08

Abstract

The current study investigated the language assessment literacy of two groups of Iranian medical ESP practitioners teaching reading comprehension, viewed through a formative assessment lens. To this end, 21 ESP instructors, 8 content and 13 ELT instructors, from five universities in Iran were recruited to participate in the study. A 40-item questionnaire, semi-structured interviews, and non-participatory observations were employed to collect the required data. The descriptive results of both the quantitative and qualitative data indicated that the ELT and content teachers' reading assessment performances were almost the same. The results also revealed that the content teachers had limited knowledge about formative classroom assessment (FCA). Although the ELT instructors were in broader agreement on the knowledge category, little trace of it was found in their practice. In terms of principles, both the ELT and content instructors agreed that assessment principles were necessary for measuring students' achievement. For the skill category, contrary to the content teachers, the ELT instructors believed that this category was crucial; however, they did not implement it in their assessment activities.

Keywords: Language Assessment Literacy, ESP Practitioners, Formative Assessment, Content Teachers, ELT Teachers, Assessment Knowledge

  1. Introduction

Language assessment literacy is a fundamental requirement for language teachers (Stoynoff & Coombe, 2012) if they are to provide students with optimum learning conditions and classroom effectiveness. It is considered part of teachers' knowledge and a fundamental requirement for language teaching professionalism (Abell & Siegel, 2011; Engelsen & Smith, 2014). As such, language teachers' assessment literacy can influence their assessment practices in encouraging students' learning. To improve the quality of students' learning, classroom assessment, which is at "the heart of a teacher's assessment literacy practices" (Fox, 2013, p. 4), can provide teachers with the conditions necessary to reach this goal; classroom assessment is a formative approach to assessment that raises the quality of students' learning.

Hence, what has gained ground is using assessment that contributes to the development of effective teaching and learning rather than assessment of learning, which happens at the end of the course and is incapable of providing teachers with an optimal opportunity for remedial instruction. However, despite the emphasis placed on classroom assessment (CA) for years (DeLuca, Coombs, MacGregor & Rasooli, 2019; Rahimi Rad, 2019), its contribution to every other teacher function, and the formative role of assessment, few teachers are prepared enough to meet classroom assessment challenges (Malone, 2013), and teachers generally consider assessment a latecomer in the academic year (Rahimi Rad, 2019), which could suggest a lack of assessment literacy (AL) among teachers.

This could be because of the little attention paid to such concepts as assessment of learning and assessment for learning. Assessment for learning, also known as formative assessment, is an ongoing process (Inbar-Lourie, 2008a) which shapes students' learning and seeks and interprets "evidence for use by learners and their teachers to decide where the learners are in their learning, where they need to go and how best to get there" (Assessment Reform Group, 2002, as cited in Inbar-Lourie, 2008a, p. 39). On the contrary, assessment of learning, identified as summative assessment, refers to "the application of external measurement like standardized tests" (Gardner, 2006, p. 10).

Paying little attention to the concepts of assessment of learning and assessment for learning, teachers in higher education are expected to give final exams and report the results as the learning outcomes (Schmitt & Hamp-Lyons, 2015). Such summative assessments do not accurately reflect students' learning as measured at a given time, but rather indicate how faithfully students adhere to the established examination system, social expectations, and the expectations of school administrators. This is also true in Iranian academic medical ESP contexts, in which dissatisfaction with the outcomes of the courses has been repeatedly documented (Atai & Tahririan, 2003; Iranmehr, Atai, & Babaii, 2018); therefore, more research is still needed in this area to identify the sources of the discontent.

  2. Literature Review

Language Assessment Literacy (LAL), derived from the generic term assessment literacy (AL), is "an individual's understanding of the fundamental assessment concepts and procedures deemed likely to influence educational decisions" (Popham, 2011, p. 267). It is distinct from AL and defined as the knowledge that stakeholders engaged in assessment practices need in order to direct language assessment practices (Fulcher, 2012; Taylor, 2013). In other words, as Inbar-Lourie (2008b, 2013) also mentions, what makes LAL distinct from general AL is its language-focused practices. She relates LAL to layers of knowledge, with the layers at the bottom comprising general assessment literacy and forming a foundation for the language-oriented issues (2017). She believes LAL is an additional component: not merely acquaintance with tools and procedures for evaluating students' language abilities, but the ability to give constructive feedback that leads learners to set and reach their future learning aims (2008a).

Thus, LAL is teachers' professional knowledge in the language assessment domain, acknowledged as essential for decision making on language-related issues. It can also be considered a gateway or threshold to further learning, as it provides individuals with both "the necessary skills and knowledge about good assessment practices and facilitates the evaluation of educational situations and decision making regarding which assessment-related skills should be deployed when and for what purpose" (Price, Rust, O'Donovan & Handley, 2012, p. 9). All in all, the definitions of language assessment literacy offered by different scholars vary; however, the common thread among them is teachers' recognition and understanding of the different purposes of assessment and their use of assessments accordingly. Such teachers can be considered assessment literate.

The characteristics of assessment-literate individuals, whether teachers or other stakeholders, have been described by several scholars. Stiggins (2005) believes that assessment-literate educators are clear about why and what they are assessing, aware of the problems that could accompany the assessment procedure, and know how best to assess various targets, develop a sound sample of performance, and deal with specific strategies.

In any case, the need to define LAL has stemmed from acknowledging the increasingly central role teachers play in the assessment process (Inbar-Lourie, 2017; Scarino, 2013). Moreover, the emergence of the concept can also be seen as the result of social and contextual changes in language testing and developments in language teaching, which demand new assessment modes. Importantly, the application of these changes to the individuals affected by decisions and assessment methodology, for example, second language learners in the school context (Barni, 2015), made the shift more meaningful. Hence, AL should be regarded as a social activity that can be better understood in specific contexts. Xu and Brown (2016) suggest a framework of teacher assessment literacy that includes six factors, among them the knowledge base, teachers' assessment perceptions, teacher learning, teacher identity, and the socio-cultural and institutional context. In other words, AL is a dynamic process, not a fixed set of knowledge or skills, in which the interaction of various factors at various levels impacts teachers' assessment practices (Jiang, 2020).

These changes, along with the growing emphasis on the formative role of assessment (Black & Wiliam, 1998), particularly on the need to provide constructive feedback to advance learning, have drawn attention to major players in the language teaching and assessment scene other than language testing experts. In other words, a gradual transition in LAL discourse towards a conceptually and practically more expanded repertoire (Inbar-Lourie, 2017) places LAL in a new, more comprehensive, and more practical assessment framework. Therefore, as both independent assessors and consumers of assessment results, teachers are required to have knowledge about "means of assessing what students know and can do, how to interpret assessment results, and how to apply the results to improve the students' learning and program effectiveness" (Webb, 2002, p. 1).

The necessity and importance of LAL for language teachers have been accentuated in numerous studies (e.g., Giraldo, 2018; Rahimi Rad, 2019; Scarino, 2013). Fulcher (2012) notes the need for teacher assessment literacy: "If language teachers are to understand the forces that impact upon the institutions for which they work and their daily teaching practices and to have a measure of control over the effects that these have, they need to develop their assessment literacy" (pp. 114-115). The importance of including AL in teacher training programs stems from the fact that one of the most critical responsibilities of classroom teachers is assessing student performance, as it influences everything that teachers do (Mertler, 2009). As Rahimi Rad (2019) suggested, AL has a meaningful impact on assessment efficiency in the classroom.

However, along with training programs, Scarino (2013) stated that there is a need to broaden our comprehension of LAL for teachers. She believed that teachers need to explore and evaluate their perceptions of LAL to develop an in-depth understanding of the nature of assessment, because a good deal of the variability in assessment practices can emerge from differences in teachers' beliefs. In this respect, Brown (2008) stated that teachers' perception of assessment indicates their learning and teaching perspective as well as their epistemological beliefs; it plays a crucial role in teachers' interpretation of assessment knowledge and their actual assessment activities. A few studies have been conducted in this respect. For example, Sahinkarakas (2012) explored the role of teaching experience in teachers' perception of language assessment and arrived at four themes: assessment as a formative tool, assessment as a summative tool, assessment as something agitating, and assessment as a sign of self-efficacy. Teachers' perceptions and practices of classroom-based English language assessment were investigated by Shim (2009), who concluded that L2 teachers were assessment literate and aware of the principles of assessment but did not remain loyal to those principles. Muñoz, Palacio, and Escobar (2012) investigated sixty-two Colombian teachers' beliefs about language assessment and practices and found a gap between the teachers' perceptions and practices. In another study, Firoozi, Razavipour, and Ahmadi (2019) noted that teachers' perceptions of language assessment needed to change. Moreover, in a recent study on language teachers' perceptions of classroom-based assessment, Zulaiha, Mulyono, and Ambarsari (2020) found that the teachers' level of LAL was appropriate.

Despite the importance of AL for ELTs, some teachers, unfortunately, do not have an adequate level of assessment knowledge (Crusan, Plakans, & Gebril, 2016; Malone, 2013). Some studies have documented teachers' weak assessment skills in different geographical locations (Brookhart, 2011; Campbell, Holt, & Murphy, 2002), attempting to dispel the LAL mist by recommending or designing professionalization initiatives. In the Iranian context, Esfandiari and Nouri (2016) suggest that enhancing teachers' awareness of assessment literacy would enable them to evaluate the performance of learners. Similarly, Bayat and Rezaei (2015) conclude that teachers must develop language assessment literacy to prevent serious consequences. Moreover, Rezaei Fard and Tabatabaei (2018) found that Iranian ELTs' level of LAL was low. Shahahmadi and Ketabi (2019), too, concluded that the current level of Iranian EFL teachers' LAL was not ideal, though LAL was among their concerns. However, Jannati (2015) argues that ELT teachers are assessment literate, but this literacy is not reflected in their practices.

Although the LAL studies above have directly or indirectly suggested the importance of LAL for language teachers, who directly receive and engage with assessment results, they also indicate that assessment literacy was, for a long time, examined specifically through the lens of educational measurement knowledge, while beliefs about assessment practices gained little attention. Therefore, to partly fill this gap and address the dearth of research on Iranian ESP practitioners' assessment literacy, this study explores the perceptions and actual classroom assessment practices of two groups of Iranian medical ESP practitioners, content and ELT teachers, who take on the responsibility of teaching medical ESP courses and deciding about their students' academic progress. The aim is to gain a more vivid picture of their assessment practices and a better understanding of the status quo of medical ESP assessment activities. The discrepancies in medical ESP teachers' assessment procedures can also be delineated, which can eventually help medical ESP teachers become better teachers who encourage their students to learn how to be better ESP learners. To this end, the following research questions were formulated.

Research Question One: What are language instructors' and content teachers' perceptions about assessing the students' reading skills in medical ESP courses?

Research Question Two: What are language instructors' and content teachers' techniques for assessing the students' reading skills in medical ESP courses?

  3. Methodology

3.1. Design

This is a descriptive-analytical study that attempts to elicit ESP teachers' perceptions and practices in reading skills assessment. Both qualitative (observations and semi-structured interviews) and quantitative (questionnaire) procedures were employed in the hope of enhancing the credibility of the study.

3.2. Context and Participants

A total of 35 medical ESP practitioners, teaching medical ESP at Isfahan, Tehran, Kerman, Shiraz, and Iran University of Medical Sciences, were recruited to participate in this study. However, of this total, only 21 medical ESP teachers, 13 ELT and 8 content teachers, volunteered to cooperate and provided the researchers with usable data for the two phases of data collection, the observation and interview sessions. The participants were selected through purposeful sampling, as this method of selection helped elicit a wide and comparatively comprehensive repertoire of the two groups' beliefs and practices regarding reading assessment literacy.

The ELT teachers had formal education in teaching English, while the content teachers were specialists in their field, medicine, and had no formal education in English teaching. The participants included both males and females, holding Ph.D. and M.A. degrees in the case of the ELT teachers and M.D. degrees in the case of the content instructors. Both groups were native speakers of Persian and interested in teaching English, with English teaching experience ranging from 7 to 40 years and ESP teaching experience from 4 to 40 years. A brief account of the participating instructors' demographic information appears in Table 1.

Table 1: The Demographic Information About the Participants

No. | Group | Gender | Age | Field/Degree | English/ESP Teaching Experience (years)
1   | ELT   | M | 72 | TEFL/Ph.D. | 40/40
2   | ELT   | M | 53 | TEFL/Ph.D. | 35/35
3   | ELT   | M | 32 | TEFL/M.A.  | 8/4
4   | ELT   | M | 37 | TEFL/Ph.D. | 12/5
5   | ELT   | M | 32 | TEFL/M.A.  | 7/5
6   | ELT   | M | 31 | TEFL/M.A.  | 9/6
7   | ELT   | F | 37 | TEFL/Ph.D. | 18/8
8   | ELT   | F | 45 | TEFL/Ph.D. | 25/18
9   | ELT   | M | 70 | TEFL/Ph.D. | 37/30
10  | ELT   | M | 32 | TEFL/M.A.  | 10/8
11  | ELT   | F | 39 | TEFL/M.A.  | 12/10
12  | ELT   | M | 32 | TEFL/Ph.D. | 13/5
13  | ELT   | M | 66 | TEFL/M.A.  | 27/20
14  | Cont. | M | 31 | Med/M.D.   | 10/7
15  | Cont. | F | 38 | Med/M.D.   | 19/10
16  | Cont. | M | 40 | Med/M.D.   | 15/10
17  | Cont. | M | 55 | Med/M.D.   | 22/8
18  | Cont. | F | 48 | Med/M.D.   | 20/10
19  | Cont. | F | 51 | Med/M.D.   | 18/13
20  | Cont. | M | 62 | Med/M.D.   | 25/15
21  | Cont. | M | 54 | Med/M.D.   | 17/12

3.3. Materials and Instruments

A questionnaire, semi-structured interviews, and observations were used for data collection.

3.3.1. The Questionnaire

The survey instrument was developed by the researchers of the current study in a series of steps. After reviewing the related literature and examining the questionnaires available in the field of AL (e.g., Fulcher, 2012), it was decided to prepare an ad hoc questionnaire. Drawing on the three components of knowledge, skills, and principles discussed by Davies (2008) and on Inbar-Lourie's (2008a) elaboration on assessment culture, the researchers developed a 40-item questionnaire comprising four parts to gauge the participants' agreement or disagreement with each item. It included 1) items on knowledge of and background in assessment (n=19), 2) the participants' assessment practices and perceptions while teaching reading comprehension to medical ESP students (n=11), 3) the techniques they applied in teaching medical ESP courses (n=7), and 4) the local culture of assessment (n=3).

The questionnaire comprised closed and open-ended questions written in the participants' native language, Farsi, to ensure that misunderstanding or misinterpretation of the related terms and concepts would not interfere with the data collection process. The purpose of the open-ended questions was to provide the researchers with short answers in the participants' own words. The questionnaire was then judged by three experts to determine whether it served the purposes of the study. Afterwards, it was piloted to detect any problems with the items, timing, or administration. It was then analyzed in terms of the internal consistency of the items; the Cronbach's alpha value proved to be .88, which was satisfactory.
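For readers who wish to replicate the reliability check, the sketch below shows how a Cronbach's alpha of this kind can be computed from a respondents-by-items matrix. It is a minimal illustration in Python, not the authors' actual procedure, and the pilot sample generated here is hypothetical.

```python
# Minimal sketch (not the authors' code): Cronbach's alpha for a
# (respondents x items) matrix of Likert-type questionnaire answers.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items (40 here)
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' totals
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical pilot sample: 15 respondents answering 40 five-point items.
# (Random answers will yield a low alpha; the real pilot data produced .88.)
rng = np.random.default_rng(42)
pilot = rng.integers(1, 6, size=(15, 40))
print(f"Cronbach's alpha = {cronbach_alpha(pilot):.2f}")
```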

3.3.2. Observations

To follow the teachers' actual assessment activities closely, the researchers observed the medical ESP classes. The researchers attended at least four class sessions as non-participant observers from the beginning of the semester. A checklist based on the questionnaire's categories was developed to gain a general picture of the teachers' assessment activities.

3.3.3. Field Notes

As a complementary data collection technique, the researchers took field notes to keep records of classroom events as a quick source of reference for potential questions emerging later in the interview sessions with the teachers.

3.3.4. Interview

To grasp the teachers' underlying beliefs and elicit confirmatory information regarding their practices, a semi-structured interview was conducted with the participants (n=21). To have a smooth and fluent interview with the instructors and to encourage them to share their in-depth comments and ideas, the researcher conducted the process mostly in an interactive manner.

3.4. Procedure

The researchers first gave the questionnaire to the participants and instructed them to complete it in Persian. The estimated time for completing the questionnaire was 20 minutes; however, it took the participants about 10 minutes to complete it. Then, a semi-structured interview was conducted with the 21 teachers. The interview sessions lasted about 45 minutes to 1 hour per teacher. The entire interview process was conducted in the participants' native language, Farsi. Further, to validate the process, the researchers avoided using any specific expressions regarding assessment. The interview process contained two sections. The first part was conducted before the class observations to probe the teachers' beliefs about formative assessment. The second phase was conducted after the observation sessions to clarify the teachers' reasons behind their assessment activities.

Third, to scrutinize the medical ESP practitioners' assessment practices, the researchers observed the medical ESP classes. Before that, the ethical issues were taken into account: the participants' consent for class observation was obtained, and the researchers assured the instructors that the intention of the study was not to evaluate their performance and that the results would be reported anonymously. Each class was observed and audio-recorded for four full sessions. To keep the observation process as natural as possible, to check that the teachers' assessment practices did not change, and to make sure that the presence of the observing researcher did not affect the ESP teachers' assessment practices, the researchers made the class observations at irregular intervals. Then, to capture the teachers' actual assessment practices, the recorded data were transcribed and summarized.

Fourth, the observing researcher also took notes during class sessions to jot down any questions raised in her mind and to keep track of what was happening during the observation for further inspection and for potential questions emerging in the interview sessions. The recorded notes were categorized and summarized immediately after each class observation. The reason behind this was to have a quick reference for comparing each teacher's assessment practices with those of the following sessions as well as with those of their coworkers.

3.5. Data Analysis

Several steps were taken to analyze the data. The data from the interviews and the field notes were transcribed, categorized, and analyzed. Minor and major themes were extracted and classified into independent units. To check the reliability of the data analysis and the validity of the data, the researcher rechecked the data at appropriate time intervals for at least two weeks. Random cross-checking was also done to verify the accuracy and interpretation of the analyzed data. The data collected through the survey instrument were coded and categorized to extract the similarities and differences within and across the two groups of teachers; descriptive statistics were then used to describe them. Also, in the interview phase, the reasons behind the teachers' practices were traced to help the researcher identify the differences more vividly.
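As an illustration of the descriptive step, the sketch below computes category-level agreement percentages of the kind reported in Table 2 from coded questionnaire answers. It is a minimal sketch under stated assumptions: the item-to-category mapping, the binary agree/disagree coding, and the item ordering are hypothetical simplifications for illustration, not the authors' coding scheme.

```python
# Minimal sketch (hypothetical coding, not the authors' scheme): percentage
# of 'agree' answers per assessment category for one group of teachers.
import numpy as np

# Item counts per part follow Section 3.3.1; mapping the four parts onto the
# four Table 2 categories is an assumption made for illustration only.
CATEGORIES = [("Knowledge", 19), ("Skill", 11), ("Principles", 7), ("Culture", 3)]

def agreement_percentages(responses: np.ndarray) -> dict[str, float]:
    """responses: (teachers x 40) matrix, 1 = agree, 0 = disagree,
    with items ordered category by category as in CATEGORIES."""
    out, start = {}, 0
    for name, n_items in CATEGORIES:
        block = responses[:, start:start + n_items]
        out[name] = 100.0 * block.mean()     # share of 'agree' answers
        start += n_items
    return out

# Hypothetical coded answers for the 13 ELT teachers.
rng = np.random.default_rng(7)
elt_answers = rng.integers(0, 2, size=(13, 40))
for category, pct in agreement_percentages(elt_answers).items():
    print(f"{category}: {pct:.2f}%")
```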

  4. Results

The findings of the study are presented in three main sections: the participants' responses to the questionnaire, their perceptions about FCA, and their actual assessment practices.

4.1. The Participants' Responses to the Questionnaire

The results of the questionnaire are summarized in Table 2, which gives a general overview of the assessment categories for each group of teachers.

Table 2: Descriptive Statistics of the Teachers' Responses

Group   | Principles | Skill  | Knowledge | Culture
ELT     | 94.73%     | 90.51% | 68.32%    | 89.85%
Content | 80.26%     | 19.69% | 36.90%    | 100%

Table 2 indicates the percentage of agreement with each assessment category for the two groups of ELT and content teachers teaching medical ESP courses.

As the table shows, the majority of the participants in the ELT and content groups agreed that assessment principles are essential (94.73% and 80.26%, respectively). For the skill category, the majority of the ELT instructors (90.51%) agreed with teaching and assessing the skills, while the content instructors showed low agreement (19.69%) in this regard. This point indicates the participants' inclinations and their prioritization of the skills they preferred to assess in their medical ESP courses.

The third category, knowledge, received greater agreement from the ELTs (68.32%), while only a minority of the content teachers (36.90%) agreed with it. This shows the teachers' awareness or unawareness of what is being assessed in their classes. Compared with the other categories, local culture and assessment practices received the content teachers' complete agreement (100%), which reveals that these teachers care about the role the social context plays in the assessment process.

4.1.1. Open-Ended Questions

For part three, the technical skill section, which contained the open-ended questions, most of the participants did not provide the researchers with clear answers. More specifically, the content teachers neither replied to nor commented on the questions in this part, whereas 9 ELT teachers put much emphasis on it, commenting that the techniques were very important to them and that they applied them while teaching reading skills. They did not, however, build up a vivid picture of "how" they applied the technical reading skills.

4.2. The Interview Phases

4.2.1. Phase One

Several common themes emerged from the interviews. First, the participants (8 content and 13 ELT instructors) believed that formative assessment lets the students know how to get ready for the midterm and final exams, as it is mostly indicative of their strengths and weaknesses. "CA is a synopsis of the most important points we are teaching in classes…" and "it stops students' cramming for mid and final tests…", the ELT and content teachers reported, respectively.

The second theme was "causing the students to read". Both groups of ESP practitioners (6 content and 10 ELT) commented that FCA made the students study: "FCA is a motivating factor making students study… it makes the students study more… it decreases the burden of studying for the final exam…". However, the content instructors explained that despite their inclination toward FCA, they were not able to implement it because of a lack of time and the large number of students per class.

For the third theme, the majority of the participants (7 content and 10 ELT) believed that the final score was representative of the students' learning to a large extent: "it shows their advancement and learning… it is the sign that they have learnt…", the teachers commented. However, 7 ELT instructors out of the 21 participants opposed this view: "no… human factors interfere… it is not the complete factor… of course not, many factors are in between, like stress…".

The techniques used by the medical ESP practitioners to assess their students' learning fall into the next category. The ELT teachers' (n=13) reading assessment techniques, as they claimed, were class quizzes, asking questions about the reading passage, asking for the main idea, and asking for a summary of the reading passage: "these are the techniques we always use for reading assessment…". Translation, lecture presentation, and asking vocabulary questions about the reading passage were the methods the content teachers (n=8) used for FCA: "presentation lets students speak… translation of technical words lets students find out the nuances between the words… technical words are important and grow them academically…". Moreover, the instructors claimed that experience and students' feedback let them know which methods to use for assessing their students' reading comprehension.

Fifth, the participants in the ELT group (n=11) believed that their assessment practices were constrained by university regulations, in contrast to the content teachers (n=8), who felt free in this respect. "There is not much room for FCA… we are limited to the grade the English department has predetermined for class activity…", the ELTs stated.

The next category concerned "the necessary background knowledge for implementing FCA". Most ELT instructors (n=11) believed that "having backgrounds in testing, teaching and assessment knowledge enhances the quality of FCA… without these backgrounds, how can we decide on the final score?…". The content teachers (n=7), in contrast, did not consider such backgrounds necessary at all; most of them adhered exclusively to the belief that "nothing can enhance the quality of CA but content knowledge…".

The seventh theme was that "formative assessment and the learning context are interrelated factors". Most of the participants (6 content and 12 ELT) believed that the context of teaching and assessment are inseparable. "The ESP teachers should take into account human factors as well", they suggested.

Last, for most of the medical ESP teachers, the paper-and-pencil test had a valuable role to play in the classroom. Interestingly, some of the ESP instructors even pointed to the cost of paper as the main reason for not implementing FCA: "papers are expensive to take FCA tests…".

4.2.2. Phase Two

This phase was held immediately after the four class observation sessions to recount the assessment actions and practices the teachers had performed. What follows are the common themes derived from the class observations of the participants' reading comprehension assessment practices.

First, in the ESP classes, the students were supposed to study the reading passage beforehand: "they should come to class with readiness…" and "the students' readiness saves class time…", the ELT and content teachers argued, respectively. Reading aloud was a technique of both the content and ELT teachers; clearly, they connected reading comprehension with correct pronunciation.

Asking the students the meaning of new words from the text was the second category. The ELT teachers reasoned that "it is the focus of their final English tests…". In the content teachers' classes, the meanings of technical words were translated and finally elaborated on by the teachers: "they should learn these words… these are the basics of their field…", they reported.

Paragraph-by-paragraph translation constituted the third category. Some ELT but most content teachers wanted the students to read the passage aloud and translate it. A translation assignment with a bonus was a common assessment practice of theirs as well. "Translation is a skill… it is the fifth skill… it improves their reading skill", the ELT teachers contended. The content teachers reported that "the students have to pass the course… these grades help them pass the course".

The fourth common theme among some ELT teachers was asking questions about the reading passages; however, the teachers themselves supplied the correct answer if the students did not respond: "if… I involve students more in a lesson… the students will be afraid of me…", they reported. The content teachers did not have such assessment activities in their classes.

Fifth, in most classes, the teachers were the main respondents to their own questions. "The students have different levels of ability in English… they are not motivated… they usually are apprehensive of class questions…", the teachers mentioned. Moreover, they related the students' low class participation to the low score allocated to CA.

Pronunciation and oral presentation, as subcategories, were underlined in medical ESP classes. Six ELT teachers, just like the content teachers, preferred to have the students present a lecture in English.

  5. Discussion

A close analysis of the findings demonstrated that the participants of the study considered FCA a classroom management tool to encourage the students to study. The reason may be that they neglected the fact that the purpose of FA is to provide learners with learning opportunities, not to keep students in line through the power of scores. This could also be due to their traditional view of FCA.

Next, the teachers applied a limited range of assessment techniques to assess the students' reading comprehension, although a wide range of assessment options is available for CA (Popham, 1992). Applying a limited range of FCA techniques may relate to the instructors' unfamiliarity with the options available for FCA. Moreover, their unpreparedness to apply different FCA techniques can be another reason, which could indicate the teachers' insufficient training in FCA as part of their professional preparation. This finding is compatible with Firoozi, Razavipour, and Ahmadi (2019), who suggested that teachers need training in assessment. Another reason may be the teachers' unclear vision of the purpose of the assessment practices they applied in class. Having a clear perception of a particular assessment purpose, of what assessment methods can be used, and of how they relate to the learning goals is essential to sound CA. In other words, the medical ESP practitioners' clear understanding of the purpose of assessment helps them employ proper assessment-related activities to ensure that the learners are meeting the instructional goals. Just as every good mechanic or carpenter needs the right tool to do the job right, ESP practitioners need proper FCA techniques to answer the question "are the students' language expectations and needs satisfied?".

Further, their limited understanding of FCA purposes led the medical ESP teachers, wittingly or unwittingly, to teach to the test. For example, they emphasized vocabulary teaching because the students would have vocabulary questions in the final test.

Misinterpretation of the course objective constitutes the third reason. The first step in sound classroom assessment is having a vivid picture of the course objectives and the targets the students have to hit. However, the participants of the study seemed to have forgotten the purpose of the medical ESP course in Iran, namely developing reading comprehension skills. Thus, the assessment activities they applied in their classes did not match the course objectives. Compared to the ELT teachers, the content teachers seemed much more consistent in their theory and practice than their ELT coworkers, because they did not know much about developing reading comprehension skills as the main aim of the ESP course.

Moreover, while the participants of the study believed that practice makes perfect and that they had learned about FCA by trial and error, they had not developed LAL through their assessment practices. The evidence came from the common pattern emerging from their actual CA activities. The first reason may be the teachers' insufficient training in how to develop their LAL for implementing appropriate language assessment practices and conducting classroom assessment (Lam, 2015), as well as their not receiving enough encouragement to extend their LAL (Scarino, 2013). Scarino (2013) and Vogt and Tsagari (2014) also reported that teachers could learn about assessment on the job, but this was not the case in this study.

The second reason relates to the social and local context in which they are involved. As assessment does not function in isolation, the teachers' assessment practices and training needs can be affected by the local instructional context, institutional regulations about teaching and assessment, educational policies, and socio-cultural values regarding language teaching and assessment. Most of the instructors were bound to educational policies and the limitations their educational context imposed on them; they had to observe their institutional regulations about teaching and assessment. For example, they had to cover many units of a book.

Regardless of these variations, however, some similarities hold true for the teachers of both camps. First, the medical ESP teachers in both groups did not differentiate between FCA and scored tests, although the two mean different things. CA includes any formal or informal evidence-eliciting techniques employed by teachers to make inferences about their students' learning, whereas a test is instead a final product of a course (Popham, 2009). In other words, formative assessment is a process-oriented activity supplying evidence that will enhance students' learning, but final tests do not give teachers such an opportunity. Placing assessment and grades on the same level is indicative of the teachers' limited approach to language assessment. López and Bernal (2009) believed that teachers with no language assessment training equate assessment with grades.

Next, the traditional method of assessment, the paper-and-pencil test, especially with a numerical value attached to it, held a particular position in the teachers' view. This indicates that new language assessment techniques are still lagging behind.

Third, it seems that the majority of the ESP practitioners passed the responsibility for not implementing FCA to various obstacles such as the contradictory demands of the educational system, class size, lack of time, the bulk of chapters to be covered, unmotivated students, and their workload. Even for teachers who bought into the principles of formative assessment and tried to implement them, these were serious impediments. However, these roadblocks to the implementation of FCA are not insurmountable.

Fourth, students' classroom responses can be regarded as a good source of evidence of their current state of learning. But if questions repeatedly go unanswered, how much can the ELT and content teachers running medical ESP courses understand about student learning, let alone what instructional changes could be introduced? One reason lies in the dominant culture in the Iranian academic context, which encourages teachers to answer their own questions. Another reason could be the teachers' negligence of the fact that superabundant teacher self-answering fails to create further learning opportunities for students and may result in teacher-dependent learners.

Fifth, as the findings suggest, both groups of participants attended to summative assessment, when it is too late to improve and reform the instruction or meet the learners' language needs. The reason lies in the vested interest in the summative approach to assessment and the simultaneous neglect of the vital role of FA. Unfortunately, it appears that, as we move forward in time, even more interest is being invested in this approach. Environmental factors and the prevailing testing culture dominating the ESP environment can be responsible for such an approach to assessment.

For years, the ELT instructors have been offered testing courses with much of the focus on technical skills, such as item writing and statistical skills, standardized tests, and formal training in testing theories, but less on classroom assessment theories and practices. In addition, they are not even offered enough opportunities to practice the learned theories. As Pastore and Andrade (2019) believe, "it is now necessary to translate assessment literacy into practice" (p. 9). To that end, it seems that medical ESP practitioners are urged to go beyond the existing shortcomings and equip themselves with a theory of practice. Hence, medical ESP teachers need to theorize from their practice and practice what they theorize in order to find out how to integrate their assessment practices with FCA theories to support their teaching and students' learning within a standards-based framework in the context of their workplace. Thereby, in response to scholars who claim teachers could develop their LAL on the job, it can be said that "if" teachers are enabled to make connections between theory and practice, they will learn about assessment. That is, if LAL is to emerge from teachers' daily assessment practices, they must be supplied with a theory of practice.

Hence, it appears that if the ESP teachers equip themselves with thinking in action, they can connect assessment theories to their assessment practices. As Edge (2001) states, "the thinking teacher is no longer perceived as someone who applies theories, but someone who theorizes practice" (p. 6). Therefore, reflection is a critical dimension of pedagogical development and could be a rich source of teacher-generated information that encourages the ESP teachers to find out where they are right now and then decide where they want to go hereafter.

Last, the results of the study indicated that the teachers in both groups attended to the learners' pronunciation and oral presentation. This can be attributed to the ELT teachers' institutional experience, which they liked to put into practice, and, as they commented, to the content teachers' interest in practicing pronunciation.

  6. Conclusion and Implications

The findings of the study revealed that the participants had developed a traditional view of FCA and placed assessment and grades on the same level; they considered FCA a classroom management tool, and the paper-and-pencil test held a particular position in their view. Therefore, medical ESP practitioners have limited knowledge about FCA despite the emphasis placed on FCA for years.

Further, the data analysis indicated that there was a wide gap between the perceptions and the actual FCA practices of the ELTs teaching medical ESP courses. The ELTs did not practice what they preached, and their classroom assessment practices were almost similar to those of their counterparts, the content teachers.

Although teacher training courses are commonly suggested in various studies to bridge the gap formally and engage teachers in assessment for learning (AFL), teachers should not rely exclusively on a teacher education program to advance their AL. The medical ESP teachers can improve their LAL by applying a theory of practice and engaging in dialogue with colleagues and communities pursuing the same aims. They can also advance their LAL by reflecting on and revising their conceptions of FA, compromising with various contextual tensions, and seeking support from different stakeholders and the ESP community.

Regarding the limitations of this study, the researchers admit that the size of the participant sample may limit the strength of the findings. In fact, due to the difficult time and the intermittent breaks imposed by Covid-19 from February 2020, when the data collection procedure started, many ESP classes were cancelled and, in spite of the great effort made, several ESP practitioners avoided the interview sessions. However, the findings of the study can deepen our current understanding of medical ESP practitioners' reading comprehension assessment activities and contribute to assessment training, not only for in-service medical ESP teachers but also for prospective ones who are likely to become members of the ESP community, if they are to maintain course quality and meet the students' language needs. To achieve these long-term goals, further studies could investigate the LAL of medical ESP teachers with a larger sample and gather data on the aspects of language assessment literacy in which the teachers need training.

References

Abell, S. K., & Siegel, M. A. (2011). Assessment literacy: What science teachers need to know and be able to do. In D. Corrigan, J. Dillon, & R. Gunstone (Eds.), The professional knowledge base of science teaching (pp. 205-221). Dordrecht: Springer.

Atai, M. R., & Tahririan, M. H. (2003). Assessment of the status of ESP in the current Iranian higher educational system. Proceedings of LSP: Communication, culture and knowledge conference. Guildford, England: University of Surrey.

Barni, M. (2015). In the name of the CEFR: Individuals and standards. In Spolsky, B., Inbar-Lourie, O. & Tannenbaum, M. (Eds.), Challenges for language education and policy: Making space for people (pp. 40–51). New York: Routledge.

Bayat, K., & Rezaei, A. (2015). Importance of teachers’ assessment literacy. International Journal of English Language Education, 3(1), 139-146.

Black, P. J., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in education: Principles, Policy and Practice, 5(1), 7–74.

Brookhart, S. M. (2002). What will teachers know about assessment, and how will that improve instruction. In R. W. Lizzitz, & W. D. Schafer (Eds.), Assessment in educational reform: Both means and ends (pp. 2-17). Boston, MA: Allyn & Bacon.

Brookhart, S. M. (2011). Educational assessment knowledge and skills for teachers. Educational Measurement: Issues and Practice, 30(1), 3–12.

Campbell, C., Murphy, J. A., & Holt, J. K. (2002). Psychometric analysis of an assessment literacy instrument: Applicability to preservice teachers. In Annual meeting of the mid-western educational research association, Columbus, OH.

Carless, D. (2011). From testing to productive student learning: Implementing formative assessment in Confucian-Heritage Settings. New York, NY: Routledge.

Crusan, D., Plakans, L., & Gebril, A. (2016). Writing assessment literacy: Surveying second language teachers' knowledge, beliefs, and practices. Assessing Writing, 28, 43-56.

Davidheiser, S. A. (2013). Identifying areas for high school teacher development: A study of assessment literacy in the Central Bucks School District. Unpublished Doctoral dissertation. Drexel University, USA.

Davies, A. (2008). Textbook trends in teaching language testing. Language Testing, 25(3), 327–347.

DeLuca, C., Coombs, A., MacGregor, S., & Rasooli, A. (2019). Toward a differential and situated view of assessment literacy: Studying teachers' responses to classroom assessment scenarios. Frontiers in Education, 4, 110-119.

Edge, J. (2001). Action research. Washington, DC: TESOL.

Engelsen, K. S., & Smith, K. (2014). Assessment literacy. In C. Wyatt-Smith, V. Klenowski, & P. Colbert (Eds.), The enabling power of assessment: Designing assessment for quality learning (pp. 140-162). New York: Springer.

Esfandiari, R., & Nouri, R. (2016). A mixed-methods, cross-sectional study of assessment literacy of Iranian University instructors: implications for teachers’ professional development. Iranian Journal of Applied Linguistics, 19(2), 115–154.

Firoozi, T., Razavipour, K., & Ahmadi, A. (2019). The language assessment literacy needs of Iranian EFL teachers with a focus on reformed assessment policies. Language Testing in Asia, 9(2), 1-14.

Fox, J., (2013). Book review: The Cambridge guide to second language assessment. Language Testing, 30(3), 413-415.

Fulcher, G. (2012). Assessment literacy for the language classroom. Language Assessment Quarterly, 9(2), 113–132.

Gardner, J. (2006). Assessment for learning: A compelling conceptualization. In J. Gardner (Ed.), Assessment and learning (pp. 279-286). London: Sage.

Inbar-Lourie, O. (2008a). Language assessment culture. In E. Shohamy (Ed.), Language testing and assessment (Vol. 7). N. Hornberger (General Ed.), Encyclopedia of language and education. (pp. 285–300). New York: Springer.

Inbar-Lourie, O. (2008b). Constructing a language assessment knowledge base: A focus on language assessment courses. Language Testing, 25(3), 385–402.

Inbar-Lourie, O. (2013). Guest editorial to the special issue on language assessment literacy. Language Testing, 30(3), 301-307.

Inbar-Lourie, O. (2017). Language assessment literacy. In E. Shohamy et al. (Eds.), Language testing and assessment (pp. 257-270). New York: Springer.

Iranmehr, A., Atai, M. R., & Babaii, E. (2018). Evaluation of EAP programs in Iran: Document analysis and expert perspectives. Applied Research on English Language, 7(2), 171-194.

Jannati, S. (2015). ELT teachers’ language assessment literacy: perceptions and practices. International Journal of Research in Teacher Education, 6(2), 26-37.

Jiang, Y. (2020). Teacher classroom questioning practice and assessment literacy: Case studies of four English language teachers in Chinese universities. Frontiers in Education, 5(23), 1-17.

Lam, R. (2015). Language assessment training in Hong Kong: Implications for language assessment literacy. Language Testing, 32(2), 169–197.

López Mendoza, A. A., & Bernal Arandia, R. (2009). Language testing in Colombia: A call for more teacher education and teacher training in language assessment. PROFILE, 11(2), 55–70.

Malone, M. E. (2013). The essentials of assessment literacy: Contrasts between testers and users. Language Testing, 30(3), 329–344.

Mertler, C. A. (2009). Teachers’ assessment knowledge and their perceptions of the impact of classroom assessment professional development. Improving Schools, 12(2), 101–113.

Muñoz, A., Palacio, M., & Escobar, L. (2012). Teachers’ beliefs about assessment in an EFL context in Colombia. Profile: Issues in Teachers’ Professional Development, 14(1), 143-158.

Popham, W. J. (2009). Assessment literacy for teachers: Faddish or fundamental? Theory Into Practice, 48(1), 4–11.

Popham, W. J. (2011). Assessment literacy overlooked: A teacher educator’s confession. The Teacher Educator, 46(4), 265–273.

Price, M., Rust, C., O'Donovan, B., & Handley, K. (2012). Assessment literacy: The foundation for improving student learning. Oxford: Oxford Brookes University.

Rahimi Rad, M. (2019). The impact of EFL teachers' assessment literacy on their assessment efficiency in classroom. Journal of Britain International of Linguistics Arts and Education (BIoLAE), 1(1), 9-17.

Rezaei Fard, Z., & Tabatabaei, O. (2018). Investigating assessment literacy of EFL teachers in Iran. Journal of Applied Linguistics and Language Research, 5(3), 91-100.

Shahahmadi, M. R., & Ketabi, S. (2019). Features of language assessment literacy in Iranian English language teachers' perceptions and practices. Journal of Teaching Language Skills (JTLS), 38(1), 191-223.

Sahinkarakas, S. (2012). The role of teaching experience on teachers' perceptions of language assessment. Procedia - Social and Behavioral Sciences, 47, 1787-1792.

Pastore, S., & Andrade, H. L. (2019). Teacher assessment literacy: A three-dimensional model. Teaching and Teacher Education, 84, 128-138.

Scarino, A. (2013). Language assessment literacy as self-awareness: Understanding the role of interpretation in assessment and in teacher learning. Language Testing, 30(3), 309-327.

Shim, K. N. (2009). An investigation into teachers' perceptions of classroom-based assessment of English as a foreign language in Korean primary education. Unpublished doctoral dissertation. University of Exeter, Exeter.

Schmitt, D., & Hamp-Lyons, L. (2015). The need for EAP teacher knowledge in assessment. Journal of English for Academic Purposes, 18, 1-5.

Stiggins, R. J. (1992). High quality classroom assessment: What does it really mean? Educational Measurement: Issues and Practice, 11(1), 35-39.

Stiggins, R. J. (2005). From formative assessment to assessment for learning: A path to success in standards-based schools. Phi Delta Kappan, 87(4), 324-328.

Stoynoff, S., & Coombe, C., (2012). The Cambridge guide to second language assessment. Cambridge University Press.

Taylor, L. (2013). Communicating the theory, practice and principles of language testing to test stakeholders: Some reflections. Language Testing, 30(3), 403–412.

Tsagari, D., & Vogt, K. (2017). Assessment literacy of foreign language teachers around Europe: Research, challenges and future prospects. Papers in Language Testing and Assessment, 6(1), 41-61.

Vogt, K., & Tsagari, D. (2014). Assessment literacy of foreign language teachers: Findings of a European study. Language Assessment Quarterly, 11(4), 374–402.

Webb, N. (2002). Assessment literacy in a standards-based education setting. A paper presented at the annual meeting of the American Educational Research Association, New Orleans, Louisiana, 1-5.

Xu, Y., & Brown, G. T. (2016). Teacher assessment literacy in practice: A reconceptualization. Teaching and Teacher Education, 58, 149-162.

Zolfaghari, S., & Ashraf, H. (2015). The relationship between EFL teachers’ assessment literacy, their teaching experience, and their Age: A case of Iranian EFL teachers. Theory and Practice in Language Studies, 5(12), 2550-2556.

Zulaiha, S., Mulyono, H., & Ambarsari, L. (2020). An investigation into EFL teachers' assessment literacy: Indonesian teachers' perceptions and classroom practice. European Journal of Contemporary Education, 9(1), 178-190.

 

 

[1] PhD student of TEFL, shahzamani.m@gmail.com; Department of English Language, Sheikhbahaee University, Isfahan, Iran.

[2] Professor, M.H.Tahririan@shbu.ac.ir; Department of English Language, Sheikhbahaee University, Isfahan, Iran.

Jiang, Y. (2020). Teacher classroom questioning practice and assessment literacy: Case studies of four English language teachers in Chinese universities. Frontiers in Education5(23), 1-17.
Lam (2015), Language assessment training in Hong Kong: Implications for language assessment literacy. Language Testing30(2), 32(2) 169–197.
López Mendoza, A. A., & Bernal Arandia, R. (2009). Language testing in Colombia: A call for more teacher education and teacher training in language assessment. PROFILE, 11(2), 55–70.
Melone, M. E. (2013). The essentials of assessment literacy: contrasts between testers and users. Language Testing, 30(3), 329–344.
Mertler, C. A. (2009). Teachers’ assessment knowledge and their perceptions of the impact of classroom assessment professional development. Improving Schools12(2), 101–113.
Muñoz, A., Palacio, M., & Escobar, L. (2012). Teachers’ beliefs about assessment in an EFL context in Colombia. Profile: Issues in Teachers’ Professional Development, 14(1), 143-158.
Popham, W. J. (2009). Assessment literacy for teachers: Faddish or fundamental? Theory into practice48(4), 4–11.
Popham, W. J. (2011). Assessment literacy overlooked: A teacher educator’s confession. The Teacher Educator46(4), 265–273.
Price, M., Rust, C., O'Donovan, B., & Handley, K. (2012). Assessment literacy: The foundation for improving student learning. Oxford: Oxford Brookes University.
Rahimi Rad, M. (2019). The impact of EFL teachers' assessment literacy on their assessment efficiency in classroom. Journal of Britain International of Linguistics Arts and Education (BIoLAE), 1(1), 9-17.
Rezaei Fard, Z., & Tabatabaei, O. (2018). Investigating assessment literacy of EFL teachers in Iran. Journal of Applied Linguistics and Language Research, 5(3), 91-100.
Shahahmadi, M.R., Ketabi, S. (2019). Features of language assessment literacy in Iranian English language teachers' perceptions and practices. Journal of Teaching Language Skills (JTLS), 38 (1), 191-223.
Sahinkarakas, S. (2012). The role of teaching experience on teachers’ perceptions of language assessment. Social and Behavioral Sciences 47(3), 1787 – 1792.
Serafina P., & Heidi L. A. (2019). Teacher assessment literacy: A three-dimensional model. Teaching and Teacher Education, 84, 128-138.
Scarino, A. (2013). Language assessment literacy as self-awareness: Understanding the role of interpretation in assessment and in teacher learning. Language Testing30(3), 309-327.
Shim, K. N. (2009). An investigation into teachers' perceptions of classroom-based assessment of English as a foreign language in Korean primary education. Unpublished doctoral dissertation. University of Exeter, Exeter.
Schmitt, D. & Hapm-Lyons, L. (2015). The need for EAP teacher knowledge in assessment. Journal of English for Academic Purposes, 18, 1-5.
Stiggins (1992). High quality classroom assessment: What does it really mean? onlinelibrary.wiley.com
Stiggins (2005). From formative assessment to assessment for learning: A path to success in standards-based schools. Phi Delta Kappan, 87, 324-328.
Stoynoff, S., & Coombe, C., (2012). The Cambridge guide to second language assessment. Cambridge University Press.
Taylor, L. (2013). Communicating the theory, practice and principles of language testing to test stakeholders: Some reflections. Language Testing, 30(3), 403–412.
Tsagari & Vogt (2017). Assessment literacy of foreign language teachers around Europe: research, challenges and future prospects. Language Testing and Assessment, 6(1), 41-61.
Vogt, K., & Tsagari, D. (2014). Assessment literacy of foreign language teachers: Findings of a European study. Language Assessment Quarterly, 11(4), 374–402.
Webb, N. (2002). Assessment literacy in a standards-based education setting. A paper presented at the annual meeting of the American Educational Research Association, New Orleans, Louisiana, 1-5.
Xu, Y., & Brown, G. T. (2016). Teacher assessment literacy in practice: A reconceptualization. Teaching and Teacher Education, 58, 149-162.
Zolfaghari, S., & Ashraf, H. (2015). The relationship between EFL teachers’ assessment literacy, their teaching experience, and their Age: A case of Iranian EFL teachers. Theory and Practice in Language Studies, 5(12), 2550-2556.
Zulaiha, S., Mulyono, H., Ambarsari, L. (2020). An investigation into EFL teachers’ assessment literacy: Indonesian teachers’ perceptions and classroom practiceEuropean Journal of Contemporary Education, 9(1), 178-190.