
The Impact of Alternative Assessment Knowledge, Teaching Experience, Gender, and Academic Degree on EAP Teachers’ Assessment Literacy

[1] Hassan Soodmand Afshar *

[2] Somaye Tofighi

[3] Maryam Asoudeh

[4] Naser Ranjbar

 

Abstract

This study investigated the impact of Iranian EAP teachers’ familiarity with or knowledge of alternative assessment (AA), teaching experience, academic degree, and gender on their assessment literacy. To this end, 106 EAP teachers (74 males and 32 females) holding a PhD (N=77) or an MA/MSc (N=29) in different fields of study, selected through convenience sampling, participated in the study. The participants completed the Classroom Assessment Literacy Inventory (CALI) developed by Mertler and Campbell (2005). The results of descriptive statistics indicated that observation was the form of AA used most frequently by the EAP teachers. The results of an Independent Samples t-test showed that EAP teachers’ knowledge of AA significantly differentiated them in terms of assessment literacy. Further Independent Samples t-tests showed that there was no significant difference between male and female EAP teachers regarding their assessment literacy, while EAP teachers holding a PhD possessed significantly higher levels of assessment literacy than their MA/MSc-holding counterparts. Moreover, the results of a one-way ANOVA indicated that EAP teachers’ years of teaching experience had no significant effect on their assessment literacy. The interview results also indicated that half of the participants were either unfamiliar with (33%) or had low familiarity with (17%) AA. It was also revealed that lack of knowledge of AA and overcrowded classes were the two main reasons for the EAP teachers’ lack of use of AA. Furthermore, the majority of the participants who did not use AA reported that they mainly used summative assessment instead.

Keywords: Alternative Assessment, Assessment Literacy, EAP Teachers, Gender, Teaching Experience, Academic Degree

 

  1. Introduction

As researchers such as Brookhart (2011) and Plake and Impara (1993) have indicated over the years, nearly 50 percent of class time is devoted to testing and assessing students’ learning. Corroborating this, Bachman (2014) maintains that teachers spend a minimum of one-third of their instructional time on assessment-related activities. Assessment literacy is thus an essential prerequisite for teachers who intend to be successful in their career. To help students reach their full potential, teachers need to be aware of the various ways of assessing learners and, as far as possible, of the process of assessment itself. As Stiggins (2013) puts it, “without the ability to effectively assess student attainment of those achievement targets at the classroom, building, and district level, we will remain unable to help students attain higher levels of academic achievement” (p. 238). According to Paterno (2001, as cited in Mertler, 2003b, p. 9), assessment literacy is defined as:

The possession of knowledge about the basic principles of sound assessment practice, including terminology, the development and use of assessment methodologies and techniques, familiarity with standards of quality in assessment and familiarity with alternatives to traditional measurement of learning.

Traditional forms of assessment mainly focused on learners’ passive skills and paid little attention to students’ needs and wants. It is now believed, however, that an assessment-literate teacher should also be aware of alternative forms of assessment, which highlight students’ needs, focus on active skills, take place in authentic environments, and require real-world activities. Alternative forms of assessment treat the relationship between assessment and teaching or learning as an important link, address multiple aspects of learners’ learning, and evaluate not only the learners but also the instruction. They include such instruments as checklists, journals, videotapes and audiotapes, portfolios, conferences, diaries, self- and peer-assessments, and observations.

To the best of the researchers’ knowledge and based on an extensive review of the related literature, the causal relationship between EAP teachers’ knowledge of alternative forms of assessment and their assessment literacy levels has seemingly not been investigated. Although teachers should be assessment-literate to be successful, and although they need to be aware of new forms of assessment so as to address all aspects of students’ learning, according to Stiggins (2001, p. 6) “we are seeing unacceptably low levels of assessment literacy among practicing teachers and administrators in our schools”. Some experts in the field of assessment literacy, such as Mertler (1999) and Stiggins (1995), emphasize the importance of teachers being assessment-literate, one manifestation of which is familiarity with and adoption of alternative forms of assessment during instruction. There thus seems to be some kind of relationship between teachers’ knowledge of alternative forms of assessment and their assessment literacy levels. However, this relationship, especially the causal association, has seemingly not been examined with regard to EAP teachers so far, particularly in the Iranian context. This study therefore aimed at investigating the effect of EAP teachers’ knowledge of AA, teaching experience, gender, and academic degree on their assessment literacy levels.

 

  2. Literature Review

2.1. Assessment Literacy

Assessment literacy has its roots in general education, from which it was carried into language education, and it has undergone many changes over time. Although assessing students’ learning is considered one of teachers’ most important responsibilities, some teachers are not well prepared to perform this task. As Lam (2015) asserts, many assessment-related decisions are being made by teachers who do not have a sufficient background or training in assessment. Because a great amount of teachers’ class time is dedicated to assessment-related activities, assessment literacy should be taken as a serious responsibility by teachers, who are required to carry out assessment in one way or another. Popham (2011) defines assessment literacy as “… an individual’s understanding of the fundamental assessment concepts and procedures deemed likely to influence educational decisions” (p. 265).

In distinguishing the assessment-literate from the assessment-illiterate, Stiggins (1991, p. 535) maintains that “assessment-illiterates cannot evaluate critically the data they use”; they accept or reject achievement data “at face value” and “lack the tools to be critical consumers of assessment data”. Assessment literates, by contrast, have “a basic understanding of the meaning of high- and low-quality assessment” and are able to put this knowledge to use in their own assessment practice (Stiggins, 1991, p. 535). Xu and Brown (2016) also note that constant reflection on assessment practice, participation in assessment-related professional activities, engagement in assessment-related professional conversations, interrogating one’s own conceptions of assessment, and seeking “resources to gain a renewed understanding of assessment and their own roles as assessors” are among the characteristics of assessment-literate teachers (p. 159).

As Stiggins (1995) suggests, all assessment-literate educators start assessment with specific and clear reasons, focus on the different achievement goals of every learner, sample learner achievement, select appropriate assessment methods, and avoid bias and distortion. This means that if teachers observe these five standards of quality, they can become effective assessors, because appropriate assessments originate from clear purposes, well-defined achievement targets, proper assessment methods, effective sampling, and bias avoidance.

Brookhart (2011) maintains that, to be effective assessors, teachers should, among other things, be able to analyze tests, questions, and assessment tasks so as to be sure about the knowledge and skills students need to complete them; provide effective feedback on students’ work; construct scoring schemes that measure students’ performance on classroom assessments and thus yield useful information for decision-making about students; interpret assessment results appropriately for the specific context; help students use assessment results to make sound decisions; and understand and observe their ethical responsibilities in the process of assessment. All of these abilities help shape teachers’ assessment literacy.

Given the advantages of assessment literacy mentioned above, it seems vital to guide teachers toward assessment knowledge and skills (DeLuca, Chavez, Bellara, & Cao, 2013). Mertler (2003b), drawing on a joint effort between the American Federation of Teachers, the National Council on Measurement in Education, and the National Education Association, proposes a set of standards for teacher competence in the educational assessment of students. These standards consist of the following principles:

Choosing sound methods of assessment for instructional decision-making, developing sound methods of assessment for instructional decision-making, administering, scoring, and interpreting the results of assessment methods, using assessment results for decision-making about students, developing valid grading procedures, discussing assessment results with students, parents and other stakeholders, and recognizing illegal and inappropriate methods (p. 8).

Another important point is that the development of assessment literacy needs to be situated within the demands of various educational contexts, which may result in different priorities at different times and places (Vogt & Tsagari, 2014).

2.2. Alternative Assessment in ESP

Hamayan (1995) presents two reasons for changing assessment practices and for the felt need for assessment reform: the first is “the increasing importance of the relationship between assessment and both teaching and learning and the second one is the evolving nature of educational goals which are currently directed to more sophisticated and higher standards” (p. 213). Alternatives to standardized assessment have been given several names in the literature, such as AA, performance assessment, descriptive assessment, and direct assessment.

Navarrete, Wilde, Nelson, Martinez, and Hargett (1990, as cited in Hamayan, 1995) define AA as the procedures and techniques which are employed in the context of teaching and can be enriched with authentic and meaningful activities while not comparing a student with a larger group beyond a classroom. Huerta-Macias (1995) listed such AA procedures as checklists, journals, videotapes and audiotapes, portfolios, conferences, diaries, self- and peer-assessments, and observations.

AA stands in contrast with traditional assessment. Traditionally, the purpose of assessment was only to evaluate the student, but a second purpose was later added: evaluating the instruction. According to Hamayan (1995, p. 216), “assessment has two purposes and uses: (a) evaluating the learner; (b) evaluating the instruction”. Moreover, McMillan (2014) states that traditional or objective assessment tends to be associated with measuring lower-order thinking skills, while innovative or alternative assessment is associated with measuring higher-order thinking skills. In AA, as Brown and Hudson (1998) maintain, authentic contexts (i.e., real-world activities) and both the process and the product of tasks are considered important. Herman, Aschbacher, and Winters (1992) hold that alternative forms of assessment require higher levels of thinking and professional skills, use meaningful tasks and activities, involve scoring and judging by human beings rather than machines, and make teachers take on new responsibilities in assessment and teaching.

Decision-making, on the other hand, is a crucial element in all kinds of assessment (i.e., both AA and traditional assessment). As in all other forms of assessment, the designers of AA try to plan, pilot, analyze, and administer their procedures as carefully as possible to reach the best results, such as the desired reliability and validity.

2.3. Previous Research Findings

Over the past few years, the meaning of assessment and the process of testing have changed dramatically. Accordingly, many studies have been conducted on teachers’ assessment literacy and AA. For one, DeLuca and Klinger (2010) investigated the assessment literacy of 700 teachers. They concluded that “even teachers with little formal instruction in assessment felt generally confident in several aspects of assessment practice and theory” (p. 435).

In another study, DeLuca et al. (2013) found that communicating with peers about reading, the process of assessment, and dilemmas of practice, together with “analytic scaffolding and scale for these conversations” (p. 133), were critical in broadening 97 teacher candidates’ understanding of educational assessment. Similarly, Smith, Worsfold, Davies, Fisher, and McPhail (2013) investigated the relationship between assessment literacy and the learning of 369 undergraduate students. They indicated that helping learners judge and assess their own learning could enhance their learning outcomes and achievement. By the same token, Leighton, Gokiert, Cor, and Heffernan (2010), investigating 600 participants, found a gap in teachers’ assessment literacy. They also held that teachers might wrongly argue that classroom tests provide more cognitive diagnostic information than large-scale tests regarding the processes of student learning, learning outcomes, and the use of learning strategies.

Similarly, Siegel and Wissehr (2011), investigating the assessment perceptions of 11 pre-service teachers enrolled in a secondary science methods course, found that the participants saw both advantages and disadvantages in various assessment practices. They found more disadvantages than advantages in presentations, performance events, and tests, and recognized that assessment should help both the teacher and the students learn. These pre-service teachers also showed varying degrees of knowledge of assessment for the purposes of student learning, and the results indicated that they thought quite differently about assessment.

In another study, Demir, Ozturk, and Dokme (2011) examined the perspectives of 99 primary school teachers taking in-service training courses on alternative measurement and evaluation techniques. The findings indicated that the teachers with in-service training were more knowledgeable about AA and used it more than the others. Similarly, Mertler and Campbell (2005), investigating 152 pre-service teachers’ knowledge and application of classroom assessment concepts, found that “the role of teaching experience may be too important to overlook” (p. 14). In another study, Mertler (2003b) examined pre-service and in-service teachers’ assessment literacy. Sixty-seven undergraduate students majoring in secondary education participated in the study. The results revealed that “in-service teachers showed the highest performance on Administering, Scoring and Interpreting” the assessment results and the lowest on “Developing valid grading procedures” (p. 2).

Investigating 69 teacher candidates’ assessment literacy, Volante and Fazio (2007) revealed that “the pre-service program needs to devote more careful attention to a broader array of classroom assessment techniques that are noted within the course outline” (p. 760). Herman, Osmundson, Dai, Ringstaff, and Timms (2011) also investigated the association between teacher knowledge, assessment practice, and student learning. The results indicated that students who received more feedback on their work from their teachers showed higher levels of learning outcomes and achievement than those who did not.

Some studies have also been conducted on EAP assessment issues in the Iranian context. For one, Soodmand Afshar and Movassagh (2016), in a large-scale (881 student and teacher participants from various universities throughout the country), multi-informant (teachers, students, and syllabus designers), and multi-method study (questionnaire surveys, interviews, and observation), investigated the perception of needs in EAP from the viewpoints of these three groups of stakeholders. Their results indicated significant differences among the perceptions of the three groups in terms of needs. Their findings further indicated that EAP education in Iran suffered from such serious problems as uniform, one-size-fits-all, out-of-date course books; shortage of time and lack of priority for EAP; overcrowded classes; lack of pair/group work and interaction in class; overuse of traditional assessment methods (e.g., summative translation and multiple-choice reading tests); and little use of alternative forms of assessment (e.g., portfolios, journals, tasks, self- and peer-assessment). Moreover, only 38 percent of the participants thought the assessment methods adopted by the teacher were satisfactory. This finding is fully supported by Tavakoli and Tavakol (2018) who, problematizing EAP education in Iran, maintain that EAP assessment in Iran is confined to mid-term and final-exam summative evaluations, usually in the form of translation, vocabulary, and grammar tests. Corroborating this, the teacher participants in their study asserted that overcrowded classes left them neither the time nor the energy to devote to formative evaluation. Tavakoli and Tavakol’s (2018) findings in this respect also confirm those of Soodmand Afshar and Movassagh (2016), who found overcrowded classes to be one of the major barriers to EAP education and assessment in Iran.

2.4. Significance of the Study

As stated earlier, overall, “evaluation has been neglected in ESP” (Robinson, 1991, p. 65), although Douglas (2013) suggests that assessment in ESP is not very different from other areas of language assessment. Given the crucial role that assessment in general and AA in particular can play in educational and decision-making processes, and given that, to the best of our knowledge, little research has examined the causal relationship between EAP teachers’ knowledge of alternative forms of assessment and their assessment literacy levels, the topic merits systematic study. The roles of teaching experience, gender, and academic degree in EAP teachers’ assessment literacy in the context of Iran have also rarely been investigated in a single study. Moreover, the bulk of previous studies on the topic in the Iranian context has been confined to EFL rather than EAP teachers. The present study thus set out to delve into the topic and fill the research gap felt in this regard by answering the following research questions.

  1. What type of AA is used more frequently by the Iranian EAP teachers?
  2. Does Iranian EAP teachers’ familiarity with or knowledge of alternative assessment significantly differentiate them in terms of their assessment literacy levels?
  3. Does teaching experience have any statistically significant effect on Iranian EAP teachers’ assessment literacy?
  4. Is there any statistically significant difference between male and female Iranian EAP teachers regarding their assessment literacy?
  5. Does academic degree significantly differentiate Iranian EAP teachers on their assessment literacy?

 

 

  3. Methodology

3.1. Participants

The participants of the study included 106 Iranian EAP teachers (74 males and 32 females) from universities in Hamedan and Kermanshah provinces in Iran, selected based on convenience sampling. Of the participants, 77 held PhD degrees and 29 held MA/MSc degrees, and their ages ranged from 34 to 52. Participants with 1-5 years of teaching experience were considered low-experienced (N=31), those with 6-10 years mid-experienced (N=26), and those with more than 10 years of teaching experience high-experienced (N=49).

3.2. Instruments

In the present study, a mixed methods design including a semi-structured interview and a questionnaire survey was adopted to investigate the impact of Iranian EAP teachers’ knowledge of alternative forms of assessment, teaching experience, gender, and academic degree on their assessment literacy.

The Classroom Assessment Literacy Inventory (CALI) developed by Mertler and Campbell (2005) (see Appendix A) was utilized in the present study. The questionnaire includes 35 Likert-scale items plus seven additional items covering demographic information. Each item is worth one point, so the questionnaire yields a total score of 35. Mertler and Campbell (2005) reported the reliability of the questionnaire to be 0.74, which is acceptable, and CALI has been widely used and extensively validated throughout the world.

To investigate the EAP teachers’ use of alternative forms of assessment and to triangulate the data, a semi-structured interview comprising five questions, judged by two experts in the field for validity purposes, was also conducted individually with the interview participants by one of the researchers (see Appendix B). So that the EAP teachers could better understand and discuss the issues, the interviews were conducted in Persian; the interview questions have been translated into English in Appendix B.

3.3. Procedure

The data of the study were gathered from 106 EAP teachers teaching at the universities of Hamedan and Kermanshah. First, the questionnaire (i.e., CALI) was administered to the participants individually by one of the researchers; completing it took each participant about half an hour. Second, 32 EAP teachers were selected to be interviewed, with each interview taking 10 to 20 minutes. The participants’ informed consent was obtained at both stages.

3.4. Data Analysis

To answer the first research question, descriptive statistics and frequency analysis of the responses to the third interview question were employed to determine the types of alternative assessment adopted by the Iranian EAP teachers. To answer research questions two, four, and five, three Independent Samples t-tests were run to investigate the effect of EAP teachers’ knowledge of alternative forms of assessment on their assessment literacy levels, the possible difference between male and female EAP teachers regarding their assessment literacy, and the impact of academic degree on the assessment literacy of the EAP teachers. Furthermore, to answer research question three, a one-way ANOVA was run to see whether the EAP teachers’ teaching experience had any significant effect on their assessment literacy. A minimal sketch of these inferential procedures follows.
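For readers who may wish to replicate this kind of analysis, the sketch below shows how the tests described above (Levene’s test for equality of variances, an Independent Samples t-test, and a one-way ANOVA) could be run in Python with SciPy. It is an illustration only: the scores are randomly simulated placeholders whose group sizes, means, and spreads merely mirror the figures reported in this study, not the actual CALI data.

```python
# Minimal sketch of the inferential analyses (simulated data, not the study's).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical CALI totals for the AA-familiar (n=72) and
# AA-unfamiliar (n=34) groups of research question two.
familiar = rng.normal(12.9, 2.0, 72)
unfamiliar = rng.normal(10.5, 2.2, 34)

# Levene's test first checks the equal-variances assumption.
_, levene_p = stats.levene(familiar, unfamiliar)

# Independent Samples t-test (the same procedure serves questions four and five).
t, p = stats.ttest_ind(familiar, unfamiliar, equal_var=levene_p > .05)
print(f"t = {t:.2f}, p = {p:.3f}")

# One-way ANOVA across the three experience groups of research question three.
low = rng.normal(12.3, 2.2, 31)
mid = rng.normal(12.0, 2.7, 26)
high = rng.normal(12.1, 2.3, 49)
F, p_anova = stats.f_oneway(low, mid, high)
print(f"F = {F:.2f}, p = {p_anova:.3f}")
```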

After the interviews were recorded, the template organizing style developed by Crabtree and Miller (1999) was employed for data analysis, and the results obtained from the interviews were coded drawing on the template. Frequency analysis was then used to “quantitize” the data (Dörnyei, 2007), that is, to calculate the frequencies and percentages of the EAP teachers’ responses to the interview questions, as illustrated below.
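As a small illustration of this quantitizing step, the following sketch tallies coded interview responses and converts the counts to percentages with pandas. The code labels here are invented for the example and are not taken from the actual transcripts.

```python
# Sketch of "quantitizing" coded interview data: frequencies and percentages.
import pandas as pd

# Invented placeholder codes, one per interviewee response.
coded_responses = [
    "observation", "none", "observation", "journal writing",
    "portfolio", "none", "conferences and interviews", "observation",
]

counts = pd.Series(coded_responses).value_counts()
percent = (counts / counts.sum() * 100).round(1)
print(pd.DataFrame({"Frequency": counts, "Per cent": percent}))
```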

 

 

  4. Results

4.1. Research Question 1

In order to answer the first research question of the study, descriptive statistics were calculated, the results of which are shown in Table 1.

Table 1: Descriptive Statistics of the Alternative Types of Assessment Used by Iranian EAP Teachers

Type of AA                   Frequency   Per cent   Valid Percent   Cumulative Percent
None                         34          32.1       32.1            32.1
Journal writing              8           7.5        7.5             39.6
Portfolio                    7           6.6        6.6             46.2
Observation                  42          39.6       39.6            85.8
Conferences and interviews   11          10.4       10.4            96.2
Self- and peer-assessment    4           3.8        3.8             100.0
Total                        106         100.0      100.0

As the results show, about 68 per cent of the participants employed some form of alternative assessment. Journals were used by 7.5%, portfolios by 6.6%, conferences and interviews by 10.4%, and self- and peer-assessments by 3.8% of the participants. The most frequently used form of AA was observation, adopted by nearly 40% of the participants. Observation was thus the most frequently used mode of AA among the participants of the study.

4.2. Research Question 2

To answer the second research question, namely whether Iranian EAP teachers’ familiarity with or knowledge of AA significantly differentiated them in terms of assessment literacy levels, an Independent Samples t-test was run. It should be noted that the participants’ familiarity with alternative forms of assessment was measured through the data obtained from the first interview question, which yielded a categorical/nominal variable of “no familiarity at all” (the “No” group in Table 2) versus “some familiarity”, comprising ‘low’, ‘mid’, and ‘high’ familiarity (the “Yes” group in Table 2). The participants’ assessment literacy was measured by the (interval) scores obtained from CALI, as explained earlier. The descriptive statistics are displayed first in Table 2.

Table 2: Descriptive Statistics of the Assessment Literacy of EAP Teachers with and without Familiarity with AA

Familiarity with AA   N    Mean    Std. Deviation   Std. Error Mean
Yes                   72   12.90   2.02             .23
No                    34   10.47   2.20             .37

The results of the Independent Samples t-test comparing the two groups are presented in Table 3.

Table 3: Independent Samples t-test Comparing the Two Groups on the Impact of EAP Teachers’ Familiarity with AA on Their Assessment Literacy Level

                              Levene’s F   Levene’s Sig.   t      df      Sig. (2-tailed)   Mean Diff.   Std. Error Diff.   95% CI (Lower, Upper)
Equal variances assumed       .21          .64             5.61   104     .000              2.43         .43                (1.57, 3.29)
Equal variances not assumed                                5.44   60.01   .000              2.43         .44                (1.53, 3.32)

 

As shown in Table 3, EAP teachers’ familiarity with AA had a statistically significant effect on their assessment literacy levels (p = .000 < .05). That is, the participants who were familiar with or possessed some knowledge of AA were also found to possess higher assessment literacy levels.

4.3. Research Question 3

In order to answer the third research question as to the impact of teaching experience on Iranian EAP teachers’ assessment literacy, a one-way ANOVA was run. Table 4 first presents the descriptive statistics.

Table 4: Descriptive Statistics of Teachers’ Years of Teaching Experience

Group              N     Mean    Std. Deviation   Std. Error   95% CI for Mean (Lower, Upper)   Minimum   Maximum
Low-experienced    31    12.26   2.23             .40          (11.44, 13.08)                   7         16
Mid-experienced    26    11.96   2.67             .52          (10.88, 13.04)                   7         15
High-experienced   49    12.12   2.31             .33          (11.46, 12.79)                   8         16
Total              106   12.12   2.36             .23          (11.67, 12.58)                   7         16

 

As the descriptive statistics in Table 4 show, the mean and standard deviation for the high-experienced group were 12.12 and 2.31 respectively. The mid-experienced teachers had a mean of 11.96 and a standard deviation of 2.67, while the low-experienced group had a mean of 12.26 and a standard deviation of 2.23. The results of the one-way ANOVA are summarized in Table 5.

Table 5: ANOVA Investigating the Effect of Teaching Experience on EAP Teachers’ Assessment Literacy Level

                 Sum of Squares   df    Mean Square   F     Sig.
Between Groups   1.24             2     .62           .10   .89
Within Groups    586.16           103   5.69
Total            587.40           105

As Table 5 shows, teaching experience had no statistically significant effect on EAP teachers’ assessment literacy (p = .89 > .05).

4.4. Research Question 4

To answer the fourth research question, i.e., to see whether there was any significant difference between male and female EAP teachers in terms of their assessment literacy, an Independent Samples t-test was run. The descriptive statistics are shown first in Table 6.

Table 6: Descriptive Statistics of Male and Female EAP Teachers Regarding Their Assessment Literacy

Gender   N    Mean    Std. Deviation   Std. Error Mean
Male     74   12.01   2.43             .28
Female   32   12.38   2.22             .39

As indicated in Table 6, the mean and standard deviation were 12.01 and 2.43 for the males and 12.38 and 2.22 for the females respectively. The results of the Independent Samples t-test are shown in Table 7.

Table 7: Independent Samples t-test Results of the Difference between Male and Female EAP Teachers in Their Assessment Literacy

                              Levene’s F   Levene’s Sig.   t      df      Sig. (2-tailed)   Mean Diff.   Std. Error Diff.   95% CI (Lower, Upper)
Equal variances assumed       2.19         .14             -.72   104     .47               -.36         .50                (-1.35, .63)
Equal variances not assumed                                -.74   63.98   .45               -.36         .48                (-1.32, .60)

As Table 7 indicates, there was no significant difference between male and female EAP teachers regarding their assessment literacy (t = -.72, p = .47 > .05).

4.5. Research Question 5

To answer the last research question as to whether academic degree significantly differentiated Iranian EAP teachers on their assessment literacy, an Independent Samples t-test was run. First, Table 8 shows the descriptive statistics.

Table 8: Descriptive Statistics of the Teachers’ Academic Degree

Degree   N    Mean    Std. Deviation   Std. Error Mean
MA/MSc   29   11.14   2.58             .48
PhD      77   12.49   2.18             .24

As shown in Table 8, the mean and standard deviation of the EAP teachers holding a PhD were 12.49 and 2.18 respectively, and those of the MA/MSc holders were 11.14 and 2.58, regarding their assessment literacy. The results of the Independent Samples t-test are shown in Table 9.

Table 9: Independent Samples t-test Investigating the Difference between Teachers with PhD and MA/MSc Degrees Regarding Their Assessment Literacy

                              Levene’s F   Levene’s Sig.   t       df      Sig. (2-tailed)   Mean Diff.   Std. Error Diff.   95% CI (Lower, Upper)
Equal variances assumed       .934         .33             -2.70   104     .00               -1.35        .50                (-2.34, -.36)
Equal variances not assumed                                -2.50   43.82   .01               -1.35        .54                (-2.44, -.26)

 

As shown in Table 9, there was a significant difference between the teachers holding a PhD and those holding an MA/MSc regarding their assessment literacy (t = -2.70, p = .00 < .05). That is, the teachers who held a PhD (M = 12.49, SD = 2.18) had significantly higher levels of assessment literacy than their counterparts who held an MA/MSc (M = 11.14, SD = 2.58).

4.6. Results of the Interview

As mentioned earlier, a semi-structured interview was conducted with 32 of the participants. The interview data were transcribed, the template organizing style developed by Crabtree and Miller (1999) was employed for content analysis of the interviews, and frequency analyses were then applied to determine how often each template or code occurred.

The results of the first interview question indicated that nearly 32% of the participants were quite unfamiliar with, or totally lacked knowledge of, AA, while 17%, 31%, and 19% had ‘low’, ‘mid’, and ‘high’ familiarity with AA respectively. In response to the second question, more than half of the participants (53%) stated that they never used AA, while about one third (29%) and less than one fifth (18%) reported that they sometimes and usually used AA respectively. In response to the third question, of those who reported using AA, almost eight per cent stated that they adopted journals and nearly 40% observations; nearly seven per cent reported using portfolios, about ten per cent conferences and interviews, and nearly four per cent self- and peer-assessment. In response to the fourth question, 38 per cent attributed their lack of use of AA to their lack of knowledge of the subject, 39 per cent to overcrowded classes, and 23 per cent to time constraints. In response to the fifth and last question, the majority of the participants who did not adopt AA (73%) stated that they used summative paper-and-pencil evaluation instead, and less than one third (27%) reported using formative paper-and-pencil assessment instead.

  5. Discussion

The results of the first research question (i.e., what type of AA was used most frequently by the Iranian EAP teachers) showed that observation was the most frequently used form of AA, and that the EAP teachers seldom used journals, portfolios, or conferences and interviews. The interview findings implied that the EAP teachers’ problem with forms of AA other than observation resulted from the implicitness, ambiguity, and difficulty of scoring of those methods. At times, the EAP teachers did not even know the meaning of these forms of assessment (i.e., journals, conferences and interviews, and portfolios), especially the first (i.e., journals); that is, they were not aware of the procedures and processes that should be observed in writing a journal. The most plausible reason for observation being the most frequently used form of AA might therefore lie in its relative explicitness and ease of application.

As the results of the second research question indicated, EAP teachers’ familiarity with or knowledge of AA significantly differentiated them in terms of assessment literacy. It could thus be argued that training in the area of assessment is one of the most necessary requirements for EAP teachers. In this regard, the authorities can help most by reshaping curriculum development, syllabus design, and teacher selection and education programs to incorporate an awareness of assessment literacy in general and of various forms of AA in particular.

Therefore, it might be argued that one of the most important responsibilities of EAP teacher educators in particular is empowering teachers to use the type of AA that suits the given context, based on the students’ needs, wants, and interests. This responsibility requires both assessment literacy and experience on the part of the teachers, two important factors influencing their use and choice of AA.

The results of the third research question (i.e., whether EAP teachers’ years of teaching experience have any statistically significant effect on their assessment literacy level) revealed that teaching experience had no significant effect; that is, there was no significant difference among the high-experienced, mid-experienced, and low-experienced EAP teachers regarding their assessment literacy levels. The results here are not in line with the findings of Mertler (2003b), who showed that experience was an important factor in developing assessment literacy, or with Mertler and Campbell’s (2005) assertion that “the role of teaching experience may be too important to overlook” (p. 14). The findings of the present study might instead imply that a preliminary knowledge base of assessment must exist in teachers’ repertoires before they can develop it through experience; without such knowledge, teachers cannot develop assessment literacy in a vacuum. This calls on the authorities to hold training courses on assessment in general and AA in particular both before teachers start their careers and while they are teaching (i.e., both pre-service and in-service training courses on assessment and especially on AA).

The results of the fourth research question, as to whether there was any statistically significant difference between male and female EAP teachers regarding their assessment literacy, showed no significant difference between the two groups in this respect. To the best of the researchers’ knowledge, no research has been conducted in this regard on Iranian EAP teachers so far. It might thus be stated that gender is not a determining factor in the assessment literacy of EAP teachers. Rather, based on the results of related studies such as those of DeLuca and Klinger (2010) and Siegel and Wissehr (2011), the important factors in becoming assessment-literate include training, experience, knowledge of the existing literature, and higher academic education levels, but perhaps not gender.

The results of the fifth research question, as to whether there was any statistically significant difference between teachers holding PhD and those holding MA/MSc degrees regarding their assessment literacy, showed a significant difference between the two groups. The results show an implicit parallelism with the findings of Mertler and Campbell (2005), Volante and Fazio (2007), and DeLuca et al. (2013), which implicitly showed the role of education level/academic degree in assessment literacy. Indeed, every piece of research that has demonstrated the value of training in developing assessment literacy also speaks to the role played by education in this regard. If we consider assessment literacy as the knowledge of the basic assumptions, principles, methods, processes, and procedures of assessment, together with knowledge of how to move beyond traditional to alternative forms of assessment (Paterno, 2001), it could then be argued that education, training, practice, and knowledge are of paramount importance in developing assessment literacy, and that teachers’ level of education might play an important role in gaining this knowledge. This naturally requires teachers to raise their educational levels to become successful teachers and assessors.

To sum up, it could thus be suggested that, to be successful, teachers should try to gain more experience and progress to higher educational levels so as to become more assessment-literate. As a result, they can become aware of alternative forms of assessment, move beyond one-size-fits-all traditional forms of assessment, and adopt modern alternative forms of assessment suited to the type of students and their needs and wants, the given context, and the existing facilities.

Also, the results of the interview showed that half of the participants were either unfamiliar with AA or had low familiarity with it. The interview findings further indicated that the EAP teachers’ lack of use of AA could be attributed to their lack of knowledge of AA and to overcrowded classes. Moreover, the majority of the participants who did not adopt AA reportedly used mainly summative assessment instead. These findings and similar ones (e.g., Soodmand Afshar & Movassagh, 2016; Tavakoli & Tavakol, 2018, to name only a few) indicate that EAP education in Iran in general, and EAP assessment in particular, need to be revisited and improved in light of more modern approaches to education and assessment. That is, alternative forms of assessment such as collective journal writing, conferences, peer assessment, and portfolios are more congruent with such recent theories of foreign/second language learning and teaching as constructivism and interactionism, in which learning, teaching, and consequently assessment practices and activities are constructed and organized interactively through the participants’ joint efforts, and they are therefore worth the attention, time, and resources they demand. Iranian EAP teachers thus need to become aware of these paradigm shifts and practices if they intend to ameliorate the EAP teaching and assessment situation in the country.

  6. Conclusion and Implications

In the present study, the findings showed that observation was the most frequently used form of AA among the EAP teachers. In addition, knowledge of AA was found to significantly affect the assessment literacy levels of the Iranian EAP teachers, while gender and experience had no significant effect. However, teachers’ level of education/academic degree made a significant difference, in that the teachers with higher educational levels were found to be more assessment-literate and to use more alternative forms of assessment than their counterparts with lower educational levels. The findings might be fruitful for the EAP educational system, EAP education policy and decision makers, EAP curriculum and syllabus designers and materials developers, EAP teacher educators, and finally EAP teachers themselves. The authorities and EAP education and assessment policy and decision makers should pay sufficient attention to the design and development of EAP curricula, syllabi, materials, and courses so that EAP teachers become familiar with, and are explicitly trained in, EAP assessment in general and alternative forms of assessment in particular. EAP teachers themselves are recommended to enhance their knowledge of assessment, especially of alternative forms of assessment, by studying the findings of recent research on the topic and browsing the existing literature on assessment. They should also foster their inner motivation to become ever more assessment-literate because, as mentioned previously, whether one likes it or not, a great deal of class time is spent on assessment and evaluation. The teachers and the authorities involved should thus try to overcome the obstacles reported in the current study as blocking EAP teachers’ adoption of alternative forms of assessment (e.g., lack of sufficient knowledge, overcrowded classes, time constraints, cost, etc.) through appropriate planning and a proper management system. Therefore, as Soodmand Afshar and Movassagh (2016, p. 144) rightly maintain, AA (e.g., portfolios, tasks, journals, etc.) “should be judiciously incorporated in EAP courses”, which can create a more positive washback effect and help EAP learners master their knowledge of the content (Hung, 2012).

This study, like many others, might suffer from some limitations. Further studies could investigate the issue (i.e., EAP teachers’ assessment literacy and knowledge of AA) further so as to offer hands-on techniques and practical ways of developing Iranian EAP teachers’ assessment literacy in general, and their knowledge and use of AA in particular, within EAP teacher training programs, a link of crucial importance that seems to be largely missing from the country’s EAP education and development programs at the moment. This became empirically evident in the present study in the participants’ responses to question 40 of CALI, where almost none reported having taken and passed any systematic course on assessment literacy in general, or AA in particular, as part of their EAP teacher preparation.
References

Airasian, P. W. (1997). Classroom assessment (3rd ed.). New York: McGraw-Hill.

Aschbacher, P. R. (1991). Performance assessment: State activity, interest, and concerns. Applied Measurement in Education, 4, 275-288.

Bachman, L. F. (2014). Ongoing challenges in language assessment. In A. J. Kunnan (Ed.), The companion to language assessment (1st ed., Vol. 3), (pp. 1586-1603). Oxford: John Wiley and Sons.

Brookhart, S. M. (2011). Educational assessment knowledge and skills for teachers. Education Measurement: Issues and practice, 30(1), 3-12.

Brown, J. D., & Hudson, T. (1998). The alternatives in language assessment. TESOL Quarterly, 32, 653-675.

Crabtree, B.F., & Miller, W.L. (1999). Using codes and code manuals. In B.F. Crabtree & W.L. Miller (Eds.), Doing qualitative research. London: Sage.

DeLuca, C., Chavez, T., Bellara, A., & Cao, C. (2013). Pedagogies for pre-service assessment Education: Supporting teacher candidates’ assessment literacy development. The Teacher Educator, 48(2), 128-142.

DeLuca, C., & Klinger, D. A. (2010). Assessment literacy development: Identifying gaps in teacher candidates’ learning. Assessment in Education: Principles, Policy & Practice, 17(4), 419-438.

Demir, R., Ozturk, N., & Dokme, I. (2011). The views of the teachers taking in-service training about alternative measurement and evaluation techniques: The samples of primary school teachers. Procedia-Social and Behavioral Sciences, 15, 2347-2352.

Dörnyei, Z. (2007). Research methods in applied linguistics. Oxford: Oxford University Press.

Douglas, D. (2013). ESP and assessment. In B. Paltridge & S. Starfield (Eds.), The handbook of English for specific purposes (pp. 367-383). John Wiley & Sons.

Dudley-Evans, T., & St John, M. J. (1998). Developments in English for specific purposes: A multi-disciplinary approach. Cambridge: Cambridge University Press.

Grabin, L. A. (2007). Alternative assessment in the teaching of English as a foreign language in Israel (Doctoral dissertation). Retrieved from http://uir.unisa.ac.za/handle/10500/2286

Hamayan, E. V. (1995). Approaches to alternative assessment. Annual Review of Applied Linguistics, 15, 212-226.

Herman, J., Aschbacher, P. R., & Winters, L. (1992). A practical guide to alternative assessment. Alexandria, VA: Association for Supervision and Curriculum Development.

Herman, J., Osmundson, E., Dai, Y., Ringstaff, C., & Timms, M. (2011). Relationships between teacher knowledge, assessment practice, and learning: Chicken, egg, or omelet? Los Angeles, CA: University of California, National Center for Research on Evaluation, Standards, and Student Testing.

Hills, J. R. (1991). Apathy concerning grading and testing. Phi Delta Kappan, 72, 540-545.

Huerta-Macias, A. (1995). Alternative assessment: Responses to commonly asked questions. TESOL Journal, 5(1), 8-11.

Lam, R. (2015). Language assessment training in Hong Kong: Implications for language assessment literacy. Language Testing, 32(2), 169-197.

Leighton, J. P., Gokiert, R. J., Cor, M. K., & Heffernan, C. (2010). Teacher beliefs about the cognitive diagnostic information of classroom-versus large-scale tests: Implications for assessment literacy. Assessment in Education: Principles, Policy & Practice, 17(1), 7-21.

McMillan, J. H. (2014). Classroom assessment: Principles and practice for effective standards-based instruction (6th ed.). Boston: Pearson Education.

 Mertler, C. A. (1999). Assessing student performance: A descriptive study of the classroom assessment practices of Ohio teachers. Education, 120(2), 285-296.

Mertler, C. A. (2000). Teacher-centered fallacies of classroom assessment validity and reliability. Mid-Western Educational Researcher, 13(4), 29-35.

Mertler, C. A. (2003b, October). Preservice versus inservice teachers’ assessment literacy: Does classroom experience make a difference? Paper presented at the annual meeting of the Mid-Western Educational Research Association, Columbus, Ohio.

Mertler, C. A., & Campbell, C. (2005). Measuring teachers’ knowledge and application of classroom assessment concepts: Development of the Assessment Literacy Inventory. Paper presented at the annual meeting of the American Educational Research Association, Montreal, QC.

Navarrete, C. J., Wilde, C., Nelson, R., Martinez, & Hargett, G. (1990). Informal assessment in education evaluation: Implications for bilingual education programs. Washington, DC: National Clearinghouse for Bilingual Education.

Paterno, J. (2001). Measuring success: A glossary of assessment terms. Building cathedrals: comparison for the 21st century.

Plake, B. S., & Impara, J. C. (1993). Teacher assessment literacy: Development of training modules. Paper presented at the annual meeting of the National Council on Measurement in Education, Atlanta, GA.

Popham, W. J. (2011). Classroom assessment: What teachers need to know (6th Ed.). Boston, MA: Pearson.

Robinson, P. (1991). ESP Today: A practitioners’ guide. Hemel Hampstead, UK: Prentice-Hall.

Shohamy, E. (2008). Introduction. In Shohamy, E., & Hornberger, N. (Eds.), Encyclopedia of language and education (2nd ed.): Vol. 7. Language testing and assessment. New York: Springer Science Business Media.

Siegel, M. A., & Wissehr, C. (2011). Preparing for the plunge: Pre-service teachers’ assessment literacy. Journal of Science Teacher Education, 22(4), 371-391.

Smith, C. D., Worsfold, K., Davies, L., Fisher, R., & McPhail, R. (2013). Assessment literacy and student learning: The case for explicitly developing students’ assessment literacy. Assessment & Evaluation in Higher Education, 38(1), 44-60.

Soodmand Afshar, H., & Movassagh, H. (2016). EAP education in Iran: Where does the problem lie? Where are we heading? Journal of English for Academic Purposes, 22, 132-151.

Stiggins, R. J. (1991). Assessment literacy. Phi Delta Kappan, 72, 534-539.

Stiggins, R. J. (1995). Assessment literacy for the 21st century. Phi Delta Kappan, 77(3), 238-245.

Stiggins, R. J. (2001). Student-involved classroom assessment (3rd ed.). Upper Saddle River, NJ: Prentice Hall.

Stiggins, R. J. (2013). Productive classroom assessment in college courses. Retrieved from https://www.amazon.com/gp/product/0615827799/ref=dbs_a_def_rwt_bibl_vppi_i0

Strevens, P. (1998). ESP after twenty years: A reappraisal. In M. Tickoo (Ed.), ESP: State of the art. Singapore: Regional Language Centre.

Tavakoli, M., & Tavakol, M. (2018). Problematizing EAP education in Iran: A critical ethnographic study of educational, political, and sociocultural roots. Journal of English for Academic Purposes, 31, 28-43.

Vogt, K., & Tsagari, D. (2014). Assessment literacy of foreign language teachers: Findings of a European study. Language Assessment Quarterly, 11(4), 374-402.

Volante, L., & Fazio, X. (2007). Exploring teacher candidates’ assessment literacy: Implications for teacher education reform and professional development. Canadian Journal of Education, 30(3), 749-770.

Xu, Y., & Brown, G. T. L. (2016). Teacher assessment literacy in practice: A reconceptualization. Teaching and Teacher Education, 58, 149-162.

 



[1] (Corresponding Author), Associate professor, soodmand@basu.ac.ir; Department of English Language, Bu-Ali Sina University, Hamedan, Iran

[2] Somaye Tofighi, PhD student of TEFL, tofighi.somaye@gmail.com; Department of English Language, Bu-Ali Sina University, Hamedan, Iran

[3] Maryam Asoudeh, MA of TEFL, ma.hm1327@yahoo.com; Department of English Language, Bu-Ali Sina University, Hamedan, Iran

[4] Naser Ranjbar, PhD student of TEFL, ranjbar.nasser@gmail.com; Department of English Language, Bu-Ali Sina University, Hamedan, Iran

 
