Assessment of ESP Students' Writing Performance: A Translation-Based Approach

Document Type: Original Article

Author

University of Zabol, Iran

Abstract

 
Emerging in the 1960s as an area of English language teaching (ELT), English for Specific Purposes (ESP) is designed to meet students' academic needs. ESP is characterized by its focus on learners' needs, content knowledge, appropriateness of the language used, single-skill instruction (usually reading), and its difference from English for General Purposes. The purpose of the present study was to determine the reliability and validity of a translation test as an assessment of writing ability, in comparison with the direct method of writing assessment. To answer the research questions, 100 Iranian ESP students were selected and tested. The results indicated that the translation test was not only a reliable but also a valid method for assessing the writing ability of ESP students (r = 0.85 and r = 0.82, respectively). Regarding the second research question, the effects of proficiency level (a between-subject factor) and test method (a within-subject factor), the results clearly indicated that ESP students' writing performance was influenced by both factors: translation compositions were scored higher than direct compositions across the low, intermediate, and high proficiency levels (66.70, 79.86, and 89.12, respectively). Concerning the interactional effects of the two factors on the students' writing ability, the ANOVA and Scheffe test results showed F values of 322.01 for test method and 61.89 for proficiency level. In sum, all proficiency levels showed an increase on the new test (the translation composition test). Regarding the third research question, the raters' perception of the usefulness of the testing types, the raters strongly believed that higher-level students benefited more from direct writing, while low-level subjects benefited more from the translation test type. Implications are discussed.

1. Introduction
Universally, English continues to be regarded as the lingua franca of many occupational domains, and many learners study it in line with their specific fields and needs. As English continues to dominate technology, business, education, media, medicine, and research, the demand for English for Specific Purposes (ESP) is growing quickly to meet individuals' instrumental purposes (Tsao, 2011; Xu, 2008). ESP has been practiced since the early 1960s, and ESP courses are offered to learners to fulfill their specific needs, responding to the significant demand for English in vocational and academic settings (Chang, 2009; Tsao, 2011).

Hutchinson and Waters (1987) indicated that "ESP is an approach to language teaching in which all decisions as to content and method are based on the learner's reason for learning" (p. 19). Strevens (1988) described ESP as English language teaching designed to meet the specified needs of a learner. Lorenzo (2005) stated that ESP students are usually adults who already have some acquaintance with English and are learning the language in order to communicate a set of professional skills and to perform particular job-related functions.

When it comes to assessment, the specificity of ESP represents a general assumption that in ESP "the test content and test method are derived from a particular language use context rather than more general language use situations" (Alderson & Banerjee, 2001, p. 222).

ESP tests are developed based on specific language contexts and typically fall along a continuum between general purpose tests and those for highly specialized contexts and include tests for academic purposes (e.g., International English Language Testing System, IELTS) and for occupational or professional purposes, for instance, Occupational English Test, OET (Alderson & Banerjee, 2001).

Dudley-Evans and St. John (1998) believed that as ESP tests should gauge the objectives of the course, constructing ESP tests is much more time-consuming and difficult than general-purpose tests because features of target language use situation should be carefully examined. These scholars also noted that like English for General Purposes (EGP) tests, tests of English for Specific Purposes (ESP) need to have clarity, explicitness, reliability, objectivity and validity.

As the educational aim of English for Academic Purposes (EAP) is to help learners gain the English language skills they need in their academic courses (James, 2010), and because the purpose of ESP and EAP is to prepare learners to be autonomous members of the target professional community, test tasks should be as similar as possible to real-life tasks in order to prepare learners for the target situation. Therefore, the ESP approach to testing is based on the analysis of learners' target language use situations and on specialized knowledge of using English for actual communication. One such assessment task is translation from the native language into the target language.

According to Widdowson (1983), translation is a communicative activity: learners translate in class for peers, decode signs in the environment, translate instructions and letters for friends and relatives, and so on. Translation appears to be a frequently used strategy and a preferred language practice technique for many students in EFL settings. As such, it undoubtedly has a place in the language classroom. It can be invaluable in provoking discussion and in raising teachers' and students' awareness of the inevitable interaction between the mother tongue and the target language in the process of language acquisition.

However, a review of the related literature reveals that the role of translation in language learning especially in ESP contexts has not received sufficient attention. Therefore, to bridge this lacuna, the present study sought to explore the role of translation in assessing one language skill, writing ability, among a sample of ESP learners. As such, the study aimed at answering the following research questions:

  1.  What is the relationship between translation ability, language proficiency and writing ability of ESP students?
  2.  What is the effect of writing test type (translation composition vs. direct composition) and proficiency level (low, intermediate, and high) on the ESP students' writing performance?
  3.  What is the perception of raters on the usefulness of testing type (translation composition vs. direct composition)?

2. Literature Review

2.1 Writing Assessment

Testing writing has drawn considerable attention from teachers, educators, administrators, and experts in the field because students' writing in all disciplines is crucial for conveying writers' intentions, not only for EFL/ESL students but also for ESP and EGP students (Hyland, 2002; Nemati, 2000; Uzawa, 1995; Weir, 2004).

Two main ways of measuring writing are reported in the literature, referred to as direct and indirect assessment (Badger & White, 2000; Cumming & Riazi, 2000).

Until a few decades ago, many specialists in the field of testing writing believed that writing was most validly tested by an indirect test. Prior to the psychometric-structuralist era of the 1950s and 1960s, it was the direct method of writing assessment that was practiced; however, under the influence of language testing researchers in North America, objective tests of writing replaced essay tests during those two decades (Hamp-Lyons, 1995; Hyland, 2002).

In the 1970s, however, researchers turned their attention back to direct tests of writing under the influence of humanistic and communicative task-based approaches to learning (Hill & Parry, 1994; Imhoof & Majure, 1994). Consequently, direct writing assessment has received more approval from researchers than indirect assessment, and this approval has steadily increased as the approach has been adjusted in line with research results (Nemati, 2000).

2.2 L1 and L2 Writing

Over the past decades, the role of the first language (L1) in second language (L2) writing has been an active area of research. Although the use of the L1 by L2 learners has long been criticized, primarily on the grounds of L1 interference, a more positive role for the L1 in L2 writing has begun to be acknowledged.

Lay (1982) was one of the pioneering researchers interested in the use of the L1 and its role in L2 writing. Lay found that Chinese subjects tended to switch to their first language when writing about a topic and relied heavily on their first language background. She also reported that the students' first language served as an aid rather than a hindrance to writing; for instance, the subjects used Chinese to find a key word when they were stuck in English.

Inspired by Lay's findings, researchers conducted many studies attempting to show when and how the L1 was used by writers at different levels in L2 writing (Cumming, 1989; Wang & Wen, 2002). Cumming (1989) reported that inexpert French ESL writers used their first language to generate content, whereas expert writers used translation not just to generate content but also to verify appropriate word choice; these expert writers seemed to know that their first language would enhance their writing in English. Uzawa (1995) conducted a similar study comparing second language learners' L1 writing, L2 writing, and translation from L1 into L2, and noted that it was students with lower proficiency who benefited most from the translation task. However, research is still needed to shed further light on the effect of the L1 on L2 writing, particularly in ESP contexts.

3.  Methodology

3.1 Participants

The participants consisted of 100 ESP graduate students (55 males & 45 females) studying in different majors and different branches of MS (Master of Science) and MA (Master of Arts) programs. All the students who took part in this study had already passed their General English and ESP courses, and some had attended extra general English classes (evening classes).

Because of practical limitations, the participants were not individually randomly selected; instead, intact classes were randomly selected. Table 1 gives the specifications of the participants in terms of number, field of study, and gender.

Table 1

Students' Specification

No   Field of Study   Number   Male   Female
1    Humanities         20      12       8
2    Agriculture        35      20      15
3    Science            25      10      15
4    Engineering        20      13       7
     Total             100      55      45

3.2 Instruments

To answer the research questions of this study, the following data collection instruments were used.

3.2.1 The TOEFL

The purpose of the 2004 paper-based TOEFL (PBT) used in this study was to divide the students into three proficiency levels (mean scores: low = 36.48; intermediate = 64.75; high = 85.69). The TOEFL has three sections: Listening Comprehension, Structure and Written Expression, and Reading Comprehension. Because of administration restrictions, only two parts of the test were used, namely Structure and Written Expression, and Reading Comprehension.
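As a minimal sketch of this kind of score-based grouping, the snippet below bins proficiency scores into three levels. The cut-off values and the scores themselves are illustrative assumptions, since the study reports only the group means.

```python
import pandas as pd

# Hypothetical TOEFL scores for illustration; the study reports only
# the group means (low = 36.48, intermediate = 64.75, high = 85.69).
scores = pd.Series([34, 52, 61, 70, 88, 91, 45, 67])

# Assumed cut-off points: NOT reported in the study.
levels = pd.cut(scores, bins=[0, 50, 75, 100],
                labels=["low", "intermediate", "high"])
print(pd.concat([scores.rename("score"), levels.rename("level")], axis=1))
```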

3.2.2 Translation Ability Test

Two reading passages from the TOEFL were used to measure the ESP students' translation ability. To evaluate the participants' translations, two criteria proposed by Nida and Taber (1969) were used: 'naturalness' and 'accuracy'. In their view, naturalness refers to the extent to which a translation sounds clear and unambiguous in the target language, while accuracy refers to whether a translation exactly conveys the information in the source text. Table 2 reports the inter-rater reliability (Pearson product-moment correlation) for the scores on the translation production test. The correlation between the two raters was estimated at r = 0.80, which can be considered substantial and significant.

Table 2

Translation Production Test: Inter-Rater Reliability

Rater   Mean Score   SD     Correlation
1       5.25         1.35   0.80**
2       5.28         1.48

**p < 0.0001
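An inter-rater reliability estimate of this kind can be computed, for example, with SciPy's Pearson product-moment correlation; the rater scores below are illustrative, since the study's raw data are not published.

```python
from scipy.stats import pearsonr

# Illustrative scores from two raters for the same eight translations.
rater1 = [5.0, 4.5, 6.0, 5.5, 4.0, 6.5, 5.0, 5.5]
rater2 = [5.5, 4.0, 6.0, 5.0, 4.5, 7.0, 5.0, 6.0]

r, p = pearsonr(rater1, rater2)  # Pearson product-moment correlation
print(f"inter-rater reliability: r = {r:.2f}, p = {p:.4f}")
```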

3.2.3 Composing Measuring Instruments

Because the main purpose of the present study was to determine the degree of relationship between translation ability and writing ability, and to see whether translation can serve as a reliable and valid test of ESP students' writing ability in comparison with direct composition, the following instruments were used.

3.2.3.1 Topic-based composition test

This test is also known as the Direct Composition Test (DCT). The subjects were given a topic and asked to write a composition directly in English to demonstrate their writing ability.

3.2.3.2 Composition test based on translation

To check the effectiveness of the two composing methods, the Direct Composition Test and the Translation Composition Test, on the students' writing performance, and to see whether the Translation Composition Test is reliable and valid, the participants were given a Persian composition with the same content and on the same topic. They were asked to translate it into English so that they could exhibit their writing ability in English by means of translation.

3.2.4 An interview

An interview was conducted with the two raters, both experts in TEFL. The raters were asked how they perceived the usefulness and effectiveness of the two methods of testing writing, Direct Composition versus Translation Composition, in relation to the participants' general English proficiency levels.

3.3 Procedure

As stated before, the subjects were selected from among MA and MS students in different fields of study. All of them had already passed several English courses, namely prerequisite and general English courses during their BA and BS programs and TOEFL preparation courses during their MA and MS education.

The two raters were selected from among the academic members of the English department; they were colleagues of the researcher who taught various courses to English translation students and were therefore proficient enough to evaluate the students' translation ability.

After sample selection, a TOEFL test was administered to specify the students' proficiency levels; based on their scores, they were divided into low, intermediate, and high levels. The time allotted was 80 minutes.

To evaluate the students' writing ability by means of both Direct Composition and Translation Composition, some steps were taken as follows:

First, the participants were instructed to write a five-paragraph composition under each of the two methods. Second, to test the students' writing ability through direct composition, they were given a topic on which to write an essay directly in English within 100 minutes. Third, after a two-week interval, the participants were asked to write a translation composition on the same topic; here, again, the aim was to evaluate the students' writing ability by means of translation.

3.4 Scoring

For the scoring procedures, two scoring schemes were used.

The first was used for estimating the translation ability of the ESP students through a Translation Production Test (TPT). Applying Nida and Taber's (1969) scoring scheme, the researcher arrived at the scoring model presented in Table 3.

Table 3

The TPT Scoring Scheme

     Scoring Criteria   Percent
1    Accuracy           50
2    Naturalness        50

Based on this framework, an attempt was made to score the TPTs objectively. Albeit time-consuming, this model proved efficient: a grammatically correct sentence that did not preserve the content received no score; if the target version conveyed the message in a structure that distorted the meaning, the translation likewise received no score; and if the message was carried, albeit in a grammatically unnatural form, the TPT received half a score.
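The decision rule can be summarized in a short sketch. This is a simplified reading of the rubric above, not the study's own instrument, and the function name and judgements are illustrative.

```python
def score_tpt_sentence(accurate: bool, natural: bool) -> float:
    """Score one translated sentence under a simplified reading of the
    Nida-and-Taber-based scheme described above:
      - message not carried (inaccurate): 0, however grammatical
      - message carried but unnatural:    0.5
      - message carried and natural:      1.0
    """
    if not accurate:   # content distorted or lost -> no score
        return 0.0
    if not natural:    # meaning preserved, form unnatural -> half score
        return 0.5
    return 1.0         # accurate and natural -> full score

# Example: rater judgements (accurate, natural) for a three-sentence translation.
judgements = [(True, True), (True, False), (False, True)]
total = sum(score_tpt_sentence(a, n) for a, n in judgements)
print(total)  # 1.5 out of 3
```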

The second scheme was used for scoring the compositions. To score both the direct compositions and the translation compositions, Jacobs et al.'s (1981) composition scoring scheme was used, as shown in Table 4.

Table 4

The Scoring Model (According to Jacobs et al.'s Scoring Scheme)

     Scoring Criteria   Points
1    Content            5
2    Organization       5
3    Language           5

According to this scoring framework, evaluations were made on a three-part scale covering 16 analytical subcomponents grouped into three main elements: (1) content: knowledge of the topic, substance, development of the thesis, and relevance; (2) organization: fluency of expression, clear statement and full support of ideas, succinctness, organization, logical sequencing, and cohesion; and (3) language: range of vocabulary, effectiveness of word/idiom choice and usage, word form, appropriateness of register, type of construction, and number of errors of agreement, tense, number, word order, functions, articles, pronouns, and prepositions.
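A minimal sketch of an analytic score under this three-component scheme follows; the field names and the assertion bounds are illustrative assumptions, not the study's own notation.

```python
from dataclasses import dataclass

@dataclass
class CompositionScore:
    """Analytic scores under the adapted Jacobs et al. scheme in Table 4,
    with each component rated on a 5-point scale (illustrative names)."""
    content: int       # knowledge of topic, substance, relevance
    organization: int  # fluency, logical sequencing, cohesion
    language: int      # vocabulary range, word choice, grammar

    def total(self) -> int:
        scores = (self.content, self.organization, self.language)
        assert all(0 <= s <= 5 for s in scores)  # assumed 0-5 range per component
        return sum(scores)

print(CompositionScore(content=4, organization=3, language=4).total())  # 11
```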

4.  Results

As stated before, the study was carried out (1) to clarify the relationship between translation ability and such variables as language proficiency and writing ability, (2) to specify the reliability and validity of translation tests as methods of testing writing in comparison with so-called topic-based tests of writing, (3) to investigate the effects of test type and proficiency level on the subjects' writing performance, and (4) to examine the raters' perceptions of the effectiveness of the test types.

4.1 Results Related to the First Research Question

To examine the possible correlations between the ESP students' translation ability and such variables as language proficiency and writing ability, several statistical analyses were conducted; the results and the relevant discussion are given below.

Regarding the first research question, Table 5 shows the correlational pattern between translation ability, language proficiency, and writing ability of the ESP students.

Table 5

The Correlational Pattern between Translation Ability (TA), Language Proficiency (LP), and Writing Ability (WA)

Variables   TA   LP       WA
TA          1    0.71**   0.72**
LP               1        0.76**
WA                        1

**p < 0.0001

The correlation matrix for the scores on the translation test, the proficiency test, and the writing test indicates that language proficiency and writing ability correlated with translation ability at almost the same level (r = 0.71 and r = 0.72, respectively). It also shows a significant relationship between language proficiency and writing ability (r = 0.76) for these ESP students.
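A correlation matrix of this kind can be produced directly with pandas; the simulated scores below merely stand in for the study's unpublished raw data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Simulated scores standing in for the study's raw data (not published):
# TA = translation ability, LP = language proficiency, WA = writing ability.
lp = rng.normal(65, 18, 100)
df = pd.DataFrame({
    "TA": lp * 0.4 + rng.normal(0, 8, 100),
    "LP": lp,
    "WA": lp * 0.5 + rng.normal(0, 9, 100),
})
print(df.corr(method="pearson").round(2))  # Pearson correlation matrix
```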

4.2 Reliability and Validity of Translation Tests Used to Measure Writing Ability of ESP Students

Because a new test method, the translation composition test (TCT), was developed, it is necessary to examine its reliability and validity as a measure of ESP students' writing ability. To address these issues, the study employed the following statistical analyses.

4.2.1 Reliability

To check the reliability of this newly developed testing method, an inter-rater reliability estimate was computed. To this end, all participants were given a Persian composition and asked to translate it into an English composition to test their writing ability; their translations were then rated by the two raters.

The results presented in Table 6 indicate that the inter-rater reliability of the TCT (r = 0.85) is well above the acceptable level, suggesting that the TCT has sufficient potential as a reliable method of measuring ESP students' writing ability.

Table 6

Inter-Rater Reliability of Translation Composition Test

Rater   M       SD     Correlation
1       11.75   1.67   0.85**
2       11.62   1.49

**p < 0.0001

 

4.2.2 Validity

To examine the validity of the new test (TCT), a criterion-related validity estimate was used to see whether it really measured the writing ability of ESP students.

Table 7 presents the correlation between the direct composition test (DCT) and the translation composition test (TCT). As shown in Table 7, there is a significant correlation between the two testing methods (r = 0.82, p < 0.0001), suggesting that the newly developed translation composition test measured essentially the same construct as the more commonly used direct composition test.

Table 7

Correlation Analysis between TCT and DCT

Test Method                           M       SD      Correlation
Translation Composition Test (TCT)   72.46   11.59    0.82**
Direct Composition Test (DCT)        61.56   13.78

**p < 0.0001

4.3 Results Related to the Second Research Question

As stated before, the present study examined two main aspects: first, the degree of correlation between translation ability and such variables as language proficiency and writing ability, together with the reliability and validity of the translation test; and second, the effects of proficiency level (a between-subject factor) and test method (a within-subject factor) on the participants' writing test performance. The following tables present the results related to the second research question.

4.3.1 Participants' proficiency level

To determine the participants' proficiency levels, a proficiency test (TOEFL) was used to divide them into three levels, namely low, intermediate, and high, as shown in Table 8.

Table 8

Descriptive Statistics of Proficiency Level

Proficiency Level   M       SD      N
Low                 37.15    2.37    31
Intermediate        68.12   11.83    44
High                88.85    3.41    25
Total               64.70   18.13   100

4.3.2 Effects of proficiency level and test method

To explore the effects of the two factors, (1) proficiency level and (2) test method (composing process), a two-way mixed design ANOVA was used; a sketch of such an analysis is given below, and Table 9 summarizes the participants' writing scores in relation to both factors.
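For illustration, a two-way mixed design ANOVA of this kind can be run with the pingouin package. The long-format layout, column names, and simulated scores below are assumptions, not the study's own data.

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
# Long-format data: one row per participant per test method.
rows = []
for sid in range(100):
    level = ["low", "intermediate", "high"][sid % 3]   # between-subject factor
    base = {"low": 46, "intermediate": 67, "high": 66}[level]
    gain = {"low": 20, "intermediate": 12, "high": 23}[level]
    rows.append({"subject": sid, "level": level, "method": "DC",
                 "score": base + rng.normal(0, 10)})
    rows.append({"subject": sid, "level": level, "method": "TC",
                 "score": base + gain + rng.normal(0, 8)})
df = pd.DataFrame(rows)

# Mixed ANOVA: 'method' varies within subjects, 'level' between subjects.
print(pg.mixed_anova(data=df, dv="score", within="method",
                     subject="subject", between="level"))
```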

Table 9

Participants' Writing Ability

                    Direct Composition (DC)    Translation Composition (TC)
Proficiency Level   M       SD                 M       SD
Low                 46.15    8.75              66.70   5.45
Intermediate        67.38   10.65              79.86   7.85
High                66.12   14.71              89.12   8.98

As indicated in Table 9, both factors, composing process and proficiency level, were found to influence the ESP students' writing quality. Overall, translation compositions were scored higher than direct compositions (66.70, 79.86, and 89.12 for the low, intermediate, and high groups, respectively). This tendency was most pronounced for the low-level group, which showed a marked gain in translation composition scores. Regarding proficiency level, the high-level subjects outperformed the other two levels in the translation compositions, but the difference between the high and intermediate groups was small in the direct compositions (66.12 vs. 67.38).

4.3.3 Effects of proficiency level (between-subject factor) and test method (within-subject-factor) on writing

As presented in Table 10, the ANOVA results revealed significant effects for both factors, proficiency level [F(2, 111) = 61.89, p < 0.0001] and composing process (test method) [F(1, 111) = 322.01, p < 0.0001], as well as for the interaction of the two factors [F(2, 111) = 9.74, p < 0.01].

Table 10

Two-Way Mixed Design ANOVA: Writing (Content, Organization, and Language)

Factor                          Sum of Squares   df    Mean Square   F
A: Proficiency Level                17897.36       2       8992.87    61.89**
Error                               16111.67     111        141.59
B: Composing Process                 9112.05       1       9112.05   322.01**
A × B: Proficiency × Process          553.69       2        279.58     9.74**
Error                                2988.75     111         25.93

**p < 0.01

As given in Table 10, the two factors (test method and proficiency level) yielded F values of 322.01 and 61.89, respectively, showing that the two test types differed. All three proficiency levels benefited from the newly developed test (translation composition); however, the low group benefited significantly more than the other two groups (Figure 1).

Figure 1. Writing performance: both factors

 

Figure 1 indicates that all three proficiency levels benefited from the translation composition (TC) in comparison with the direct composition (DC); however, the low group gained the most from the translation composition.

4.3.4 Differences among groups and interactional effects

To show where the differences lay, to clarify the influence of each factor, and to specify the interactional effect of both factors on the students' writing ability, a Scheffe test was applied; a sketch of such a post-hoc analysis is given below, and the results are presented in Table 11.
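Pairwise Scheffe comparisons of this kind can be obtained, for example, with the scikit-posthocs package; the grouping labels and toy scores below are illustrative assumptions, not the study's data.

```python
import pandas as pd
import scikit_posthocs as sp

# Illustrative long-format scores for the six method-by-level cells
# (the study's raw data are not published).
df = pd.DataFrame({
    "score": [46, 48, 44, 67, 69, 66, 65, 68, 64,
              66, 68, 65, 79, 81, 78, 88, 90, 89],
    "group": (["DC-low"] * 3 + ["DC-int"] * 3 + ["DC-high"] * 3 +
              ["TC-low"] * 3 + ["TC-int"] * 3 + ["TC-high"] * 3),
})

# Pairwise Scheffe comparisons across all six groups, analogous to Table 11.
print(sp.posthoc_scheffe(df, val_col="score", group_col="group").round(3))
```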

Table 11

Scheffe Test: Differences among Groups (Writing Ability)

Group                                  Compared Group                          Mean Difference   Sig.
Direct Composition Low                 Translation Composition Low             -17.88            0.001
                                       Direct Composition Intermediate         -19.25            0.001
                                       Translation Composition Intermediate    -31.12            0.001
                                       Direct Composition High                 -28.79            0.001
                                       Translation Composition High            -38.69            0.001
Translation Composition Low            Direct Composition Intermediate          -1.012           0.99
                                       Translation Composition Intermediate    -13.10            0.0001
                                       Direct Composition High                 -11.15            0.004
                                       Translation Composition High            -22.13            0.001
Direct Composition Intermediate        Translation Composition Intermediate    -12.12            0.001
                                       Direct Composition High                 -10.17            0.001
                                       Translation Composition High            -19.59            0.001
Translation Composition Intermediate   Direct Composition High                  11.9             0.95
                                       Translation Composition High             -7.64            0.02
Direct Composition High                Translation Composition High            -10.08            0.01

Table 11 indicates that the low group did better on the translation composition, an increase of 17.88 points (p < 0.001). The intermediate group showed a gain of 12.12 points on the newly developed test (TC) (p < 0.001), and the high group likewise performed better on the new test (10.08 points, p = 0.01). In sum, an increase can be observed across all proficiency levels on the new test (TC), although this increase was greatest for the low-group students. The mean score of the low participants on the new test did not differ significantly from that of the intermediate group on the direct composition (DC); likewise, the mean of the intermediate group on the translation composition (TC) did not differ significantly from that of the high group on the direct composition.

4.4 Results related to the Third Research Question (Raters' Perception)

Here, the raters were asked to give their perceptions of the usefulness and significance of the test methods. They claimed that higher-level students benefited more from direct writing, while low-level subjects benefited more from the translation composition. The main reasons they stated for the high group were better organization; more natural, English-like expressions; and better grammar. The reasons for the low group were that more ideas were available in the translation composition and that there was enough time for writing.

They were also asked about the preference for and ease of each test method for different participants. They stated firmly that direct writing was preferable and easier for the higher-level participants, whereas the lower-level students preferred the translation composition. The reasons the raters suggested for the preference for direct writing were the difficulty of translation, especially in carrying over subtle aspects of meaning; the use of known words and structures; and the simpler development of ideas. The reasons given for the preference for and ease of the translation composition were that ideas were readily available and easier to develop, that the thoughts were clear, and that needed words were accessible through a bilingual dictionary.

When asked about the effect of the composing method on the overall language development of ESP students, one rater believed that, to increase students' linguistic ability, direct writing should be used at all three proficiency levels to enable students to think directly in English. The other rater focused on the use of translation composition, advocating translation in composing as a new language learning technique in writing.

The raters also commented on the extent to which students thought in Persian while writing directly in English; they even noted that nearly half of the higher-level students thought in Persian while writing directly in English.

5. Discussion

This study was an attempt to investigate the relationship between translation ability and such variables as language proficiency and writing ability among ESP students, to determine the reliability and validity of a translation test as a measure of writing ability, to examine the interactional effects of the two factors, language proficiency and composing method, on the writing ability of ESP students, and to explore the raters' perceptions of the usefulness of the test methods in writing.

Regarding the various aspects of this study, it was found that translation ability was not only highly correlated with language proficiency and writing ability but could also serve as a reliable and valid test of writing. The interactional effect of the within-subject factor (test method) and the between-subject factor (proficiency level) on the ESP students' writing performance was found to be strong, especially for the lower-level participants. In other words, the findings suggest that the writing performance of both the intermediate and high groups showed an acceptable increase in translation composition; however, this increase was not as large as that of the low group, and it appeared across the three scoring components of language, content, and organization.

Regarding the raters' perceptions, they strongly believed that high-proficiency students could benefit much more from direct writing because such students do not depend heavily on their mother tongue (first language) and can better manifest their abilities in direct writing. For the intermediate and low-proficiency students, however, they believed that the use of translation composition would be better, since these students' linguistic skills are not fully developed and they tend to rely on their first language.

The results of the current study confirm those of previous research (Cumming, 1989; Uzawa, 1996; Wang & Wen, 2002). It seems that translation can support the writing process, especially at lower levels: learners have more access to information in their L1, which they can then translate.

6. Conclusion and Pedagogical Implications

1) Since translation ability was highly correlated with such variables as language proficiency and writing ability, syllabus designers can develop syllabi that improve the skills involved in translation (such as reading, writing, and language proficiency). In addition, the inclusion of translation practice in course books may increase students' writing ability.

2) Based on the integrative view of language, translation can be a very beneficial classroom activity in TEFL. It is especially valuable in monolingual classrooms and can be tailored to be highly practical, learner-focused, and process-based. Some researchers believe that translation can be an effective way of familiarizing students with the linguistic, semantic, and pragmatic features of the target language (Popovic, 1999; Stoddart, 2000; Urgese).

3) The findings of the present study revealed that translation is not only a reliable but also a valid test method for measuring ESP students' writing ability, so EFL teachers and test developers can use this method alongside other academically accepted methods of testing the writing ability of both EFL and ESP students.

References

Alderson, J. C., & Banerjee, J. (2001). Language testing and assessment (Part I). Language Teaching, 34, 213-236.
Badger, R., & White, G. (2000). A process genre approach to teaching writing. ELT Journal, 54, 153-160.
Chang, W. Y. (2009). A needs analysis of applying an ESP program for hotel employees. Yu Da Academic Journal, 21, 1-16.
Cumming, A. (1989). Writing expertise and second language proficiency. Language Learning, 39, 81-141.
Cumming, A., & Riazi, A. (2000). Building models of adult second language writing instruction. Learning and Instruction, 10, 55-71.
Dudley-Evans, T., & St. John, M. J. (1998). Developments in English for specific purposes: A multi-disciplinary approach. Cambridge: Cambridge University Press.
Hamp-Lyons, L. (1995). Rating non-native writing: The trouble with holistic scoring. TESOL Quarterly, 29, 759-762.
Hill, C., & Parry, K. (1994). From testing to assessment: English as an international language. London: Longman.
Hutchinson, T., & Waters, A. (1987). English for specific purposes: A learning-centered approach. Cambridge: Cambridge University Press.
Hyland, K. (2002). Teaching and researching writing. London: Longman.
Imhoof, M., & Majure, R. (1994). Writing as an integrative activity. Forum, 21, 19-22.
Jacobs, H. L., Zingraf, D. R., Wormuth, D. R., Hartfiel, V. F., & Hughey, J. B. (1981). Testing ESL composition: A practical approach. Rowley, MA: Newbury House.
James, M. K. (2010). Transfer climate and EAP education: Students' perceptions of challenges to learning transfer. English for Specific Purposes, 29, 133-147.
Lay, N. (1982). Composing process of adult ESL learners: A case study. TESOL Quarterly, 16, 406-407.
Lorenzo, F. (2005). Teaching English for specific purposes (ESP). Retrieved from http://usingenglish.com/teachers/articles/teaching-english-for-specific-purposes-esp.html
Nemati, M. (2000). The role of mode of discourse on EFL writing performance. Unpublished PhD dissertation, University of Leicester, UK.
Nida, E. A., & Taber, C. R. (1969). The theory and practice of translation. Leiden, Netherlands: E. J. Brill.
Popovic, R. (1999). The place of translation in language teaching. Bridges, 5. Greece: Teachers of English Union.
Stoddart, J. (2000). Teaching through translation. British Council Journal, Lisbon, 11. Available at: http://britishcouncil.org/portugal-inenglish-2000apr-teaching-through-translation.pdf
Strevens, P. (1988). ESP after twenty years: A re-appraisal. In M. Tickoo (Ed.), ESP: State of the art (pp. 1-13). SEAMEO Regional Language Centre.
Tsao, C. H. (2011). English for specific purposes in the EFL context: A survey of student and faculty perceptions. Asian ESP Journal, 7(2), 126-149.
Uzawa, K. (1995). Translation, L1 and L2 writing of Japanese ESL learners. Journal of the Canadian Association of Applied Linguistics, 160-175.
Uzawa, K. (1996). Second language learners' processes of L1 writing, L2 writing and translation from L1 into L2. Journal of Second Language Writing, 5, 271-294.
Wang, W., & Wen, Q. F. (2002). L1 use in the L2 composing process: An exploratory study of 16 Chinese EFL writers. Journal of Second Language Writing, 11(1), 225-246.
Weir, C. J. (2004). Language testing and validation: An evidence-based approach. Basingstoke: Palgrave Macmillan.
Widdowson, H. G. (1983). Learning purpose and language use. Oxford: Oxford University Press.
Xu, X. (2008). Influence of instrumental motivation on EFL learners in China and its implication on TEFL instructional design. Educational Communications and Technology, University of Saskatchewan. Retrieved from http://usask.ca/education/coursework/802papers/xu/index.htm