Teaching requestive downgraders in L2: Can learners’ MI modify the effects of focused tasks?

Document Type: Original Article

Authors

1 Islamic Azad University, Larestan Branch

2 Tarbiat Modares University

10.22132/tel.2014.53819

Abstract

As a follow-up to our previous study (Ahmadi, Ghafar Samar, & Yazdani Moghaddam, 2011), we utilized dictogloss (DIG) as an output-based task and consciousness raising (CR) as an input-based task to explore the interaction between the effects of these tasks and EFL learners' multiple intelligences (MI) on the development of requestive downgraders. Prior to the experiment, 110 Iranian EFL learners participated in the study to help the researchers develop the instruments, i.e., a recognition and a production test. In addition, 43 American native speakers of English provided the baseline data for the development of the recognition test and the instructional materials. To carry out the study, the researchers matched 60 subjects in two groups based on their scores on the Oxford Placement Test (2004) and their inclination toward interpersonal or linguistic intelligence. The groups were then randomly assigned to the instructional conditions, namely the DIG and CR tasks. These tasks were used to implement the instructional treatment in eight sessions. The results revealed no significant differences between the participants in the DIG and CR tasks on the production and recognition measures. Given the initial differences on the recognition measure, the pretest-posttest mean differences revealed that the DIG task was more effective in enhancing learners' recognition ability. While the effects of MI were not significant on the pragmatic measures, a significant interaction in favor of learners with an inclination toward interpersonal and linguistic intelligences was observed for the participants in the DIG and CR tasks, respectively.



1. Introduction

Attaching the same significance to pragmatics as to other SLA components, Kasper and Rose (1999) state that, from the interlanguage pragmatics (ILP) perspective, pragmatics is akin to morphosyntax, lexis and phonology, with the same constraints on learners' knowledge, use and acquisition of second language pragmatics. Although Rose and Kasper (2001) attest to the paucity of research in L2 pragmatics, ILP researchers have recently shown interest in applying the principles of instructed SLA to the realm of pragmatics.

In this regard, Alcon-Soler and Martinez-Flor (2005) claim that pragmatic learning should be subject to the same three conditions as any other type of knowledge in the target language, namely appropriate input, opportunities for output and the provision of feedback. Contrary to what Alcon-Soler and Martinez-Flor propose, the bulk of the studies have focused on the implicit and explicit teaching of pragmatic features (e.g., Alcon-Soler, 2005; House, 1996; Takahashi, 2001; Tateyama, 2001; Yoshimi, 2001) or examined the effects of input-based tasks on learners' pragmatic competence (e.g., Takimoto, 2006, 2009). Instructed ILP studies have rarely compared the effects of input-based and output-based tasks on learners' development of pragmatic features.

As another principle of SLA, Ellis (2005) also reminds educators of the role of individual differences (ID) as essential ingredients in the process of SLA. In pragmatic studies, however, the significance of learner variables has largely been overlooked. In this regard, Robinson (1997, 2005) argues for investigating the interaction of ID variables with specific learning processes such as attention and noticing. He states that, as in studies focusing on morphosyntactic features, it would be advisable to investigate the possible effect of ID variables on the processing of L2 pragmatic input.

In line with the foregoing discussions and as a follow-up to Ahmadi, Ghafar Samar, and Yazdani Moghaddam’s (2011) study, this study explores the role of MI as an individual factor in modifying the effects of the focused tasks on the EFL learners’ acquisition of L2 pragmatic features.

To fulfill the goal of the study, the researchers utilized consciousness raising (CR) as an input-based task and dictogloss (DIG) as an output-based task to consider the interaction effects of these tasks and EFL learners' MI on the development of requestive downgraders. Ahmadi et al. (2011) explain why they delimited the scope of their study to requests and, more specifically, to internal requestive downgraders. Inspired by Gardner's (1993) definitions of the interpersonal and linguistic intelligences and their close relation to Leech's (1983) division of pragmatics into sociopragmatics and pragmalinguistics, the researchers included these two intelligences to examine how they modify the effects of the CR and DIG tasks on the learning of requestive downgraders.

 

2. Literature Review

2.1 ILP Studies and Individual Differences

In addition to the theoretical support for the role of ID in language models (e.g., Bachman & Palmer, 1996), different educators have more directly addressed their role in pragmatic studies. Robinson (2005) proposes that research into the acquisition of L2 pragmatics and its susceptibility to instruction is beginning to address the role of noticing and awareness and the extent to which individuals vary in this respect. Similarly, Simard and Wong (2001) argue that the relation between attention and awareness provides a link both to the study of ID in language learning and to the role of instructional tasks in making formal features of the target language more salient. Robinson (2005) also stipulates the necessity of investigating the influence of ID in studies implemented under different conditions of exposure or task types.

Despite its significance, the role of ID in instructed ILP studies has not been deeply researched. In this regard, Takahashi (2005) unveiled a relationship between learners' motivation and the noticing of pragmalinguistic features under an implicit input condition. In the current study, the researchers sought to examine the awareness of pragmalinguistic features under explicit conditions. Since the instructional treatments were implemented explicitly through an input-based and an output-based task, the researchers were interested in exploring whether learners' inclination toward linguistic or interpersonal intelligence can modify the results to the advantage of learners with a particular intelligence profile.

2.2 Form Focused Instruction (FFI)

Due to EFL learners' need for more formal instruction on internal requestive downgraders (Faerch & Kasper, 1989; Hassall, 2001), this pragmatic feature was subjected to FFI. Ellis (2001) defines FFI as "any planned or incidental instructional activity that is intended to induce language learners to pay attention to a linguistic form" (pp. 1-2). Nassaji and Fotos (2004) state that FFI can be designed through focused tasks to raise learners' awareness of a particular target feature. In the current research, the effects of CR and DIG as two types of focused task were considered in tandem with the role of MI in the improvement of learners' pragmatic ability.

2.2.1 Consciousness Raising Task      

CR tasks, classified as a type of FFI, aim to make learners aware of how linguistic features work (Ellis, 2003). Unlike tasks organized around content of a general nature, Skehan (2004, p. 7) holds that "CR tasks are those where a specific feature of language itself is part of the task, and the focus is on explicit learning". In CR tasks, learners are provided with instances of the targeted linguistic features and are expected to induce rules about the forms (Loewen, 2011). In this regard, past morphosyntactic studies have attested to the effectiveness of CR tasks in learners' development of L2 explicit knowledge (Fotos, 1993, 1994; Fotos & Ellis, 1991).

Since CR procedures were originally devised for morphosyntactic instruction, Ahmadi et al. (2011) revised the procedures cited in Ellis (2002, 2003) to meet pragmatic teaching purposes. As stated in Ahmadi et al., the revisions included a focus on specific pragmalinguistic features, providing data on the target features and expecting learners to induce the underlying rules and, finally, learners' verbalization of the pragmalinguistic and sociopragmatic features of the target structures.

2.2.2 Dictogloss Task           

To explore the relationship between production tasks and pre-selected language structures, Loschky and Bley-Vroman (1993, as cited in Skehan, 2004) argue that, while incorporating a particular target feature in a task design might enhance the naturalness and efficiency of the task, some tasks necessitate the use of a particular structure. This means that without the use of a particular structure, these tasks cannot be completed (Ellis, 2003). Ellis claims that dictogloss (DIG), as a structure-based production task, not only meets 'the essential requirement' of a task but also results in the very explicit attention that is characteristic of CR tasks.

Swain and Lapkin (2001) define the DIG task as an activity in which students hear a passage twice and then work in pairs or groups to reconstruct it. As students try to reconstruct the text, they may not be able to accurately produce the forms realizing the intended meanings of the text; therefore, focus on form occurs (Loewen, 2011). Past studies such as Swain (1998) and Swain and Lapkin (2001) reported the effectiveness of the DIG task in the production of the target features.

According to Ahmadi et al. (2011), ILP researchers had not utilized this task in pragmatic research. Therefore, they revised the phases of the DIG task to meet the needs of their study. Following Ahmadi et al., the researchers implemented the task as follows: (a) reading a request letter with the specific contextual variables in mind and (b) students' reconstruction of the same or a similar text (i.e., the lesson phase), (c) modeling the students' production, and (d) students' reflections on their own and peer productions together with metapragmatic discussion of the pragmalinguistic and sociopragmatic features (i.e., the reflection phase).

2.3 Empirical Interventional Studies

Relevant to this study, empirical studies on the instructed ILP can be reviewed from three perspectives: instructed ILP studies and ID, input-based and output-based tasks, and the implicit and explicit teaching of pragmatic features.

 

2.3.1 Instructed ILP Studies and ID

Takahashi (2005) examined the role of ID in constraining learners' awareness of pragmatic features. The study explored the relationship between motivation and proficiency as two ID variables and Japanese EFL learners' awareness of L2 pragmalinguistic features under an implicit input condition. The analysis of a retrospective awareness questionnaire administered after the treatment indicated that awareness of pragmalinguistic features differed among learners and that awareness of the target features was correlated with intrinsic motivation, but not with proficiency.

 

2.3.2 Input-Based and Output-Based Tasks

Providing an opportunity for learners' production of requests through role-play tasks, Fukuya and Hill (2006) utilized recasts to give an implicit focus to inaccurate and inappropriate requests. Discourse completion tests showed that the pragmatic recast group performed better than the control group.

Takimoto (2006) examined the effects of CR and CR with feedback, as two types of input-based instruction, on learners' learning of requestive downgraders. The results revealed that both groups outperformed the control group on planned discourse completion and role-play tests. Takimoto (2009) also compared the effectiveness of structured input tasks with and without explicit information and problem-solving tasks in teaching English polite request forms to Japanese intermediate learners of English. The results revealed that the treatment groups significantly outperformed the control group on different pragmatic measures.

Ahmadi et al. (2011) compared the effects of CR as an input-based task and DIG as an output-based task on learners' enhancement of pragmatic competence. They revealed that the effects of treatment type and time were not significant on the pragmatic measures. Findings demonstrated that both groups maintained the positive effects of the treatment in the delayed posttest on the production and perception measures. The participants in the DIG task, however, fell to a significantly lower level in the delayed posttest on the recognition measure.

 

2.3.3 Implicit and Explicit Teaching of Pragmatic Features

Although the majority of instructed ILP studies have been conducted from the explicit/implicit perspective, the results are not yet conclusive. Reporting a slight advantage for explicit instruction in the development of L2 pragmatics in their meta-analysis, Jeon and Kaya (2006, as cited in Takimoto, 2009) noted that, due to the limited available data, the seemingly superior effects of explicit pragmatic instruction should be explored in greater detail in future studies. In this regard, House (1996), Rose and Ng Kwai-fun (2001), and Takahashi (2001) all attested to the effectiveness of explicit instruction in the development of pragmatic competence in their studies. In contrast, Yoshimi (2001) revealed that the effects of explicit instruction were not significant on the comprehension and production of Japanese discourse markers.

Other studies add to the complexity of the results from this perspective. Alcon-Soler (2005) and Martinez-Flor and Fukuya (2005) revealed no significant differences between the implicit and explicit teaching of pragmatic features. Encouraging more studies from this perspective, Koike and Pearson (2005) and Tateyama (2001) illustrate that the interaction of test methods and the implicit/explicit teaching of pragmatic features can be the focus of future research. In their studies, they found that explicit instruction might be more effective for the kind of analysis required to complete multiple-choice tasks.

Although more instructed ILP studies still need to be conducted from the implicit/explicit perspective, this review specifically indicates the scarcity of research both on the effects of focused tasks and on the interaction effects of different ID variables and task types on the development of EFL learners' pragmatic competence.

 

2.4 The Present Study

In the light of the above discussions, studies exploring the role of ID in pragmatics are clearly needed. To this end, this study examines the interaction between the effects of focused tasks and learners' MI on the development of pragmatic competence. As an extension of our previous research (i.e., Ahmadi et al., 2011), we investigated how EFL learners' MI could modify the effects of CR as an input-based task and DIG as an output-based task on the development of learners' production and recognition of requestive downgraders. To achieve this goal, the researchers needed to consider the main effects of instruction, the main effects of MI, and their interaction effects on the pragmatic measures. Therefore, the following research questions are investigated in this study:

  1. Are there any significant differences between the effects of CR as an input-based task and DIG as an output-based task on the development of EFL learners' recognition and production of requestive downgraders?
  2. Are the main effects of linguistic and interpersonal intelligences significant on EFL learners’ recognition and production of requestive downgraders?
  3. Are there any significant interactions between the effects of the focused tasks and EFL learners’ MI on the recognition and production of the requestive downgraders?

 

3. Method

3.1 Participants

Prior to the experimental phase of the study, 110 Iranian EFL learners studying at Islamic Azad University, Larestan Branch, took part in the study over a semester to help prepare the instruments. These students, seniors majoring in English language and literature at the B.A. level, comprised 20 males and 90 females ranging in age from 21 to 26. The participants had never lived in a second-language environment, and their exposure to English was only through formal education in high school and university.

To provide the baseline data for different phases of the study, 43 American native speakers of English were asked to take part. The participants in this phase were 34 males and 9 females ranging in age from 20 to 63. The native speakers were from different fields of study (e.g., physics, history, linguistics), and their educational backgrounds ranged from B.A. to Ph.D. To obtain the most representative and natural answers, we did not restrict the native speakers in terms of their age or education.

For the experimental phase of the study, 150 Iranian EFL learners (who had not taken part in the earlier phases of the study) sat for the Oxford Placement Test (2004) and a multiple intelligences survey. Based on their OPT scores and intelligence profiles, the researchers matched 60 students whose scores fell within two standard deviations (SD = 15) of the mean (M = 120) in the two experimental groups. More specifically, there were 5 upper-intermediate, 10 intermediate and 15 elementary learners in each group. In one group (i.e., CR), there were 13 learners with a stronger linguistic intelligence profile and 17 with a stronger interpersonal intelligence. In the other group (i.e., DIG), 12 learners had a stronger interpersonal intelligence and 18 had a stronger linguistic intelligence. Eight males and 22 females ranging in age from 18 to 26 comprised the participants in each group. They were mainly juniors and seniors, but some freshmen and sophomores were also included.
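For readers who want to see the selection logic concretely, the Python sketch below filters candidates whose OPT scores fall within two standard deviations of the mean and then balances the dominant intelligence profiles across the two task conditions. The column names, toy scores, and alternating assignment rule are illustrative assumptions, not the authors' actual procedure or data.

```python
# Illustrative sketch only: the scores, column names, and assignment rule are
# assumptions; the study's matching was done from real OPT scores and
# checklist profiles.
import pandas as pd

def select_within_2sd(df, mean=120, sd=15):
    """Keep learners whose OPT score lies within mean ± 2*SD."""
    lower, upper = mean - 2 * sd, mean + 2 * sd
    return df[(df["opt"] >= lower) & (df["opt"] <= upper)]

candidates = pd.DataFrame({
    "id": range(1, 151),
    "opt": [118, 95, 142, 127, 103] * 30,        # placeholder OPT scores
    "mi": ["interpersonal", "linguistic"] * 75,  # placeholder dominant profiles
})

eligible = select_within_2sd(candidates)
# Sorting by profile and score, then alternating assignment, keeps the MI
# profiles and proficiency roughly balanced across the CR and DIG conditions.
eligible = eligible.sort_values(["mi", "opt"]).reset_index(drop=True)
eligible["group"] = ["CR", "DIG"] * (len(eligible) // 2) + ["CR"] * (len(eligible) % 2)
print(eligible.groupby(["group", "mi"]).size())
```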

3.2 Instruments

3.2.1 Oxford Placement Test

To ensure the participants' homogeneity, the researchers administered the Oxford Placement Test (OPT) to the participants. Allan (2004), the test constructor, claims that the OPT has been calibrated against proficiency levels based on the Common European Framework of Reference for Languages (CEF), the Cambridge ESOL Examinations, and other major international examinations such as the TOEFL.

The test comprises listening and grammar sections, each with 100 items. Yamini and Tahriri (2010) propose that performance on the listening section depends on applying knowledge of the sound and writing systems at a speed well within native speakers' competence. For the grammar section, they note that the test measures grammar, vocabulary and reading skills together in contextualized items. Having utilized the OPT to determine participants' proficiency levels, Birjandi and Sayyari (2010) also established the concurrent validity of the OPT against scores on a retired paper-based TOEFL. The results revealed a very high correlation between the OPT and the TOEFL subskill and total scores.

3.2.2 The Multiple Intelligences Checklist

A multiple intelligences survey (Armstrong, 1993) was utilized to collect information concerning the intelligence profiles of the participants. The checklist consists of eight sections representing the eight types of intelligence in Gardner's (1993) classification. Learners were required to read each section, comprising 10 items, very carefully and check the statements that applied to them. The whole checklist was translated into Persian by the researchers to facilitate the EFL learners' understanding of the items. To ensure the reliability of the translation, two academics back-translated the whole checklist into English. In this way, some ambiguous items were identified, reviewed and modified.
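A minimal sketch of how such a checklist can be scored is given below: each section's score is simply the number of checked statements, and a learner's dominant profile is whichever of the linguistic and interpersonal totals is higher. The data layout and tie-breaking rule are assumptions for illustration; in the study itself, ties were resolved in a follow-up interview (see the Procedure section).

```python
# Assumed layout: each intelligence section maps to a list of 10 booleans
# (checked or not). Only the linguistic and interpersonal totals mattered for
# grouping in this study.
SECTIONS = [
    "linguistic", "logical-mathematical", "spatial", "bodily-kinesthetic",
    "musical", "interpersonal", "intrapersonal", "naturalist",
]

def score_checklist(responses):
    scores = {section: sum(checked) for section, checked in responses.items()}
    dominant = max(["linguistic", "interpersonal"], key=lambda s: scores[s])
    return scores, dominant

# Example learner: 7 interpersonal and 5 linguistic statements checked
example = {s: [False] * 10 for s in SECTIONS}
example["interpersonal"] = [True] * 7 + [False] * 3
example["linguistic"] = [True] * 5 + [False] * 5
print(score_checklist(example)[1])   # -> "interpersonal"
```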

The MI checklist has been widely used in numerous studies dealing with multiple intelligences theory. For example, Han (2006) reported a Cronbach's alpha coefficient of 0.97 for the overall MI inventory. Based on the significant correlations between students' Chinese academic scores and the related intelligences (e.g., gregariousness score and interpersonal intelligence, or logical-mathematical intelligence and mathematics score), Han also reported acceptable validity for this inventory.

 

3.2.3 Construction of the Scenarios

As stated in Ahmadi et al. (2011), we followed Liu's (2007) procedure in constructing the scenarios. That is, we constructed scenarios in three phases: 'the exemplar generation stage', 'the likelihood situation' and 'the metapragmatic assessment'. These procedures helped the researchers construct scenarios that were typical of Iranian EFL learners' daily lives and included the sociolinguistic variables of power, social distance and size of imposition in different combinations. In one combination, [±/−power, +social distance, +imposition], the scenario involved a request to a person with equal or greater power than the speaker, who was unknown to the speaker, for a relatively big favor. In the other combination, [±/+power, −social distance, −imposition], the scenario involved a request to a person with equal or less power than the speaker, who was known to the speaker, for a relatively small favor. These scenarios served as the production and recognition tests.

 

3.2.4 Production Test

To explore the interaction between the effects of the focused tasks and learners' MI on the learners' production of requestive downgraders, the researchers utilized the production test developed for Ahmadi et al.'s (2011) study. This test required EFL learners to write their requests for each situation. Prior to the study, two native speakers were asked to review the linguistic accuracy and appropriacy of the situations. The instrument was piloted with a group of 20 EFL learners, yielding a reliability (Cronbach's alpha) of 0.80.
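The reliability figure reported above can be reproduced with the standard Cronbach's alpha formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The sketch below shows the calculation on invented pilot scores; it is not the study's pilot data.

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = respondents, columns = items (scenarios)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)        # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data: 5 pilot respondents scored on 4 request scenarios
pilot = [[2, 3, 2, 3],
         [1, 1, 2, 1],
         [3, 3, 3, 2],
         [2, 2, 1, 2],
         [3, 3, 3, 3]]
print(round(cronbach_alpha(pilot), 2))   # ~0.88 for this toy matrix
```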

Following Fukuya and Hill (2002), Koike and Pearson (2005) and Martinez-Flor and Fukuya (2005), the researchers used an analytic scheme to score the learners' responses to the production test. When the participants internally modified the target head acts in the appropriate context with accurate language, they received full points. If they used an accurate linguistic form in an inappropriate context, they received no points. If the form employed was pragmatically appropriate but linguistically inaccurate, half of the points were awarded.
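The analytic scoring rule just described can be summarized as a small function: full credit for an appropriate and accurate request, half credit when the request is appropriate but linguistically flawed, and no credit when an accurate form is used in the wrong context. The point value per item is an assumption for illustration; the paper does not report the exact scale.

```python
def score_response(appropriate: bool, accurate: bool, full_points: float = 2.0) -> float:
    """Analytic scoring of one production-test response (point value assumed)."""
    if not appropriate:
        return 0.0                 # accurate form in an inappropriate context earns nothing
    if accurate:
        return full_points         # pragmatically appropriate and linguistically accurate
    return full_points / 2         # appropriate but linguistically inaccurate

print(score_response(appropriate=True, accurate=False))   # -> 1.0 (half credit)
```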

3.2.5 Recognition Test

To fulfill the goal of the study, and similar to the production test, the researchers took advantage of the recognition test used in Ahmadi et al.'s (2011) study. Since the test construction procedure is described there, the details are not repeated here. Following Farhady, Jafarpour, and Birjandi's (1994) argument for a functional approach to language testing, there were four options for each request scenario: linguistically accurate and pragmatically appropriate utterances; pragmatically appropriate but linguistically inaccurate utterances; pragmatically inappropriate but linguistically accurate utterances; and linguistically inaccurate and pragmatically inappropriate utterances. The design of the test required learners not only to select the best choice for each provided scenario but also to explain the shortcomings of the other choices. Prior to the study, the recognition test was piloted (with the same group employed to pilot the production test), and its reliability was 0.92.

3.3 Target Requestive Downgraders

In line with earlier studies such as Fukuya and Hill (2002) and Takahashi (2005), and with the data collected from the American native speakers of English, requestive downgraders such as I would be grateful ..., Is there any way if ..., and would it be possible to ... were considered appropriate for the scenarios with the contextual variables [±/−power, +social distance, +imposition]. For this combination, the participants were required to make a request to a person with equal or greater power than the speaker and for a relatively big favor, and the interlocutors were socially distant from each other. Requestive downgraders such as would you mind ... or do you think ... were used for the scenarios with the sociolinguistic variables [±/+power, −social distance, −imposition]. For this combination, a request was made to a person with equal or less power than the speaker and for a relatively small favor, and there was no social distance between the interlocutors.
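The pairing of contextual variables and target forms described above can be represented as a simple lookup, as in the sketch below. The key format and the exact wording of the forms are illustrative; the full instructional materials are not reproduced here.

```python
# Illustrative mapping between the two combinations of contextual variables
# and the downgraders treated as appropriate for them in this study.
TARGET_DOWNGRADERS = {
    # speaker has equal/lower power, hearer is socially distant, big favor
    ("±/-power", "+social distance", "+imposition"): [
        "I would be grateful ...",
        "Is there any way if ...",
        "Would it be possible to ...",
    ],
    # speaker has equal/higher power, hearer is familiar, small favor
    ("±/+power", "-social distance", "-imposition"): [
        "Would you mind ...",
        "Do you think ...",
    ],
}

for context, forms in TARGET_DOWNGRADERS.items():
    print(context, "->", ", ".join(forms))
```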

3.4 Instructional Treatments

As a follow-up to Ahmadi et al.'s (2011) study, we employed the same instructional tasks (i.e., CR as an input-based task and DIG as an output-based task) to explore their interaction with learners' MI in the development of requestive downgraders. Based on Takimoto (2009), the researchers operationalized CR in four stages: a pragmalinguistic activity, a sociopragmatic-focused activity, a pragmalinguistic-sociopragmatic connection activity and a metapragmatic assessment. While learners compared the forms of requestive downgraders in the pragmalinguistic activity, in the second stage they rated both the interlocutors' power and social distance relationship and the size of the imposition of the request on a five-point scale. In the pragmalinguistic-sociopragmatic connection activity, the learners considered how the sociolinguistic variables could affect the use of more or less polite forms of requestive downgraders. Finally, the participants and the teacher discussed the features of the target structures.

Since the DIG task provides opportunities for collaborative learning and production on the part of the learners (Loewen, 2011; Doughty & Williams, 1998), students with a lower proficiency level were paired with more proficient ones for this study. As stated in Doughty and Williams (1998), this task unfolded in three phases: lesson, modeling and reflection. In the lesson phase, the teacher read a request letter written in the light of the sociolinguistic variables of power, social distance and size of imposition. Each pair took notes, shared their ideas and reconstructed a similar request letter. In the modeling phase, some of the pairs were required to read their letters. As they were reading, the teacher provided feedback on the linguistic accuracy of the students' production and wrote different requestive forms on the board so that the students could compare them. In the reflection phase, the learners were given some time to reflect on their own and their peers' productions. At this stage, the teacher and the students explicitly discussed the pragmalinguistic and sociopragmatic features inherent in the requests.

 

3.5 Procedure

Following Ary, Jacobs, and Razavieh (1996), this research employed a factorial design to fulfill the purposes of the study. To collect data on the Iranian EFL learners' intelligence profiles as naturally as possible, the researchers allowed them to go through the whole checklist, although the linguistic and interpersonal intelligences were the only concern of this study. Having reviewed the completed checklists, the researchers found that a few students had checked an equal number of statements in the linguistic and interpersonal sections. Therefore, in an interview session, the researchers asked these students to review the checklist again and decide which intelligence type they were more strongly connected to.

Prior to the experimental study, the OPT (2004) was administered to the Iranian EFL learners to ensure the homogeneity of the participants. Based on the learners' OPT scores and their intelligence profiles, the researchers matched the participants in two groups. They were matched so that, within each group, learners with a stronger interpersonal intelligence were not significantly different from those with a stronger linguistic intelligence in their mean OPT scores. The groups were then randomly assigned to the different experimental conditions. About six weeks later, the participants took the pretests, which were administered over two days: the production test, lasting about 60 minutes, on the first day, and the recognition test, taking about 70 minutes, on the second day. This order of administration prevented learners from carrying any clues to the second test. Furthermore, the participants were not informed in advance about the follow-up tests. A week after the pretests, the instructional treatments started. The treatments were offered in eight sessions, each lasting 60 minutes. Immediately after the treatment, the posttests were administered to the learners with the same procedure and order of presentation as the pretests.

4. Results

Prior to the study, the researchers matched the participants in the experimental groups based on their OPT scores. Levene's test for equality of variances, F = 0.625, p = 0.423, and an independent-samples t-test, t(58) = 0.50, p = 0.96, showed no significant initial differences between the groups at the specified .05 level. Besides the OPT scores, the researchers also took into consideration the learners' intelligence profiles when matching subjects in the experimental groups. Table 1 shows no significant differences between participants with stronger linguistic and interpersonal intelligences on the OPT.
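A sketch of this homogeneity check is given below, using scipy's levene and ttest_ind. The score vectors are placeholders, not the study's OPT data; only the procedure mirrors the analysis reported above.

```python
from scipy import stats

opt_cr  = [121, 118, 130, 109, 125, 117]   # placeholder OPT scores
opt_dig = [119, 123, 127, 112, 120, 116]

lev_stat, lev_p = stats.levene(opt_cr, opt_dig)                  # equality of variances
t_stat, t_p = stats.ttest_ind(opt_cr, opt_dig, equal_var=True)   # independent-samples t-test
print(f"Levene p = {lev_p:.3f}, t-test p = {t_p:.3f}")
```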

Table 1: OPT mean score comparison of learners with different intelligence profiles

Tasks        MI      N    OPT M     MD      p
Both tasks   Inter   29   122.24
             Ling    31   118.61    -3.62   0.36
CR           Inter   17   119.70
             Ling    13   121.00     1.29   0.82
DIG          Inter   12   125.83
             Ling    18   116.88    -8.94   0.11

Note. * p < .05; M = mean; MD = mean difference; MI = multiple intelligences; Ling = linguistic intelligence; Inter = interpersonal intelligence.

The descriptive statistics in Table 2 are presented to compare the effects of the instructional treatments on the recognition and production measures. According to the post-instructional results in this table, the participants in the DIG as an output-based task scored higher on both measures than those in the CR as an input-based task.

Table 2: Descriptive statistics on the pragmatic measures

Measure       Inst. task   N    Pretest M   Posttest M
Production    CR           30   12.06       25.58
              DIG          30   12.30       26.90
Recognition   CR           30   32.83       40.76
              DIG          30   27.63       44.03

Note. Inst. = instruction; N = number of subjects; M = mean.

To examine whether the mean differences between the participants in the two tasks were significant on the production and recognition measures, the researchers utilized the univariate tests yielded by a multivariate analysis of variance (MANOVA). Findings in Table 3 demonstrate no significant differences between the participants in the CR and DIG tasks on the production measure. Since the pretest mean differences were not significant between the participants in the two tasks either, the findings suggest that the participants in both tasks performed almost the same on this measure, although the pretest-posttest mean difference for the DIG task (14.6) was slightly larger than that for the CR task (13.52).
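For readers who want to see how such univariate follow-ups to a MANOVA can be run, the sketch below uses statsmodels on a toy data frame; the column names and values are assumptions, and the original analysis software is not reported in the paper.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.multivariate.manova import MANOVA

# Toy data frame standing in for the 60 learners' four pragmatic scores
df = pd.DataFrame({
    "task":      ["CR"] * 4 + ["DIG"] * 4,
    "pre_prod":  [12, 11, 13, 12, 12, 13, 12, 12],
    "post_prod": [25, 26, 24, 27, 27, 26, 28, 26],
    "pre_rec":   [33, 32, 34, 31, 28, 27, 28, 27],
    "post_rec":  [41, 40, 42, 40, 44, 45, 43, 44],
})

# Multivariate test across the four measures with task as the factor
print(MANOVA.from_formula("pre_prod + post_prod + pre_rec + post_rec ~ task", data=df).mv_test())

# Univariate follow-up per measure, analogous to Table 3
for measure in ["pre_prod", "post_prod", "pre_rec", "post_rec"]:
    model = smf.ols(f"{measure} ~ C(task)", data=df).fit()
    print(measure)
    print(sm.stats.anova_lm(model, typ=2))
```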

Table 3: Univariate tests showing the effects of treatment on pragmatic measures

Source      Measure            df   MS      F       p      Par. Eta Sq.
Treatment   Pre-production     1    2.72    0.11    0.74   0.00
            Post-production    1    25.30   0.45    0.50   0.00
            Pre-recognition    1    378.2   4.25*   0.04   0.07
            Post-recognition   1    213.5   1.21    0.27   0.02

Note. * p < .05; MS = mean square; df = degrees of freedom; Par. Eta Sq. = partial eta squared.

    

For the recognition measure, findings in Table 3 similarly reveal that the mean differences between the participants in the two tasks were not significant in the posttest, which suggests that the participants in the CR and DIG tasks performed at about the same level on this measure. Given the initial differences on the recognition measure, however, the pretest-posttest mean differences demonstrate that the participants in the DIG task (44.03 − 27.63 = 16.4) outperformed those in the CR task (40.76 − 32.83 = 7.93). This implies that the DIG task was more effective than the CR task in the development of learners' recognition ability.

Although the discussion in the preceding paragraph implies that the DIG as an output-based task was more effective than the CR as an input-based task, the repeated measures analysis in Table 4 shows that both tasks were effective in enhancing learners' performance on the measures of pragmatic competence from the pretest to the posttest. Findings in this table also demonstrate a significant interaction between the effects of the treatment and the instructional tasks on the recognition measure.
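One way to reproduce this kind of pretest-posttest ('instruction') by task-type analysis is a mixed ANOVA, sketched below with pingouin. The long-format layout, column names and toy scores are assumptions; the paper does not state which software produced Table 4.

```python
import pandas as pd
import pingouin as pg

# Long format: one row per learner per test time (toy values)
long = pd.DataFrame({
    "subject":     list(range(1, 7)) * 2,
    "time":        ["pre"] * 6 + ["post"] * 6,
    "task":        (["CR"] * 3 + ["DIG"] * 3) * 2,
    "recognition": [33, 32, 34, 28, 27, 29,    # pretest scores
                    41, 40, 42, 45, 44, 43],   # posttest scores
})

# Within-subject factor: time (instruction); between-subject factor: task
aov = pg.mixed_anova(data=long, dv="recognition", within="time",
                     between="task", subject="subject", effsize="np2")
print(aov.round(3))
```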

Table 4: Repeated measures analysis of the effects of treatment on pragmatic measures

Source                    Measure       SS        df   MS       F        p      Par. Eta Sq.
Instruction               Recognition   4380.20   1    4380.2   60.3*    0.00   0.51
                          Production    5929.10   1    5929.1   211.1*   0.00   0.78
Instruction × Task Type   Recognition   516.67    1    516.67   7.12*    0.01   0.10
                          Production    8.80      1    8.80     0.31     0.50   0.00

Note. * p < .05; SS = sum of squares; MS = mean square; Par. Eta Sq. = partial eta squared.

Since the repeated measures analysis did not display the results for each task separately, pairwise comparisons were carried out to investigate the effects of the treatment within each experimental group on the different pragmatic measures. Table 5 reveals that both instructional tasks were effective in improving learners' pragmatic competence from the pretest to the posttest.
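The within-group comparisons in Table 5 amount to a paired pretest-posttest test per task and measure with an adjustment for multiple comparisons. The sketch below shows this with paired t-tests and a Bonferroni correction on placeholder scores; it mirrors the logic, not the exact procedure or data.

```python
from scipy import stats

# (task, measure) -> (pretest scores, posttest scores); placeholder values
comparisons = {
    ("CR", "recognition"):  ([33, 32, 34, 31], [41, 40, 42, 40]),
    ("CR", "production"):   ([12, 11, 13, 12], [25, 26, 24, 27]),
    ("DIG", "recognition"): ([28, 27, 28, 27], [44, 45, 43, 44]),
    ("DIG", "production"):  ([12, 13, 12, 12], [27, 26, 28, 26]),
}

n_tests = len(comparisons)
for (task, measure), (pre, post) in comparisons.items():
    t, p = stats.ttest_rel(pre, post)
    p_adj = min(p * n_tests, 1.0)   # Bonferroni adjustment
    print(f"{task:3s} {measure:11s} t = {t:6.2f}, adjusted p = {p_adj:.3f}")
```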

Table 5: Bonferroni pairwise comparisons on pragmatic measures

Task   Measure       Pretest   Posttest   MD       p
CR     Recognition   32.83     40.76      7.93*    .011
       Production    12.06     25.58      13.52*   .000
DIG    Recognition   27.63     44.03      16.4*    .000
       Production    12.30     26.90      14.6*    .000

Note. * p < .05; MD = mean difference.

    

Mean plots in Figures 1 and 2 further illustrate the improvement of the participants in both tasks from the pretest to the posttest on the recognition and production measures. Figure 1 also testifies to the significant ‘task × instruction’ interaction in Table 4 for the recognition measure. While the learners in the DIG task had a lower mean in the pretest on the recognition measure, they showed a better performance in the posttest.

 

                       

Figure 1: Estimated marginal means of the recognition measure
Figure 2: Estimated marginal means of the production measure

                                                                                                          

                                                                      

The second research question explored the main effects of learners' MI on the pragmatic measures. Motivated by Gardner's (1993) definitions of the interpersonal and linguistic intelligences, the researchers delimited the focus of the study to these two intelligences. In this regard, the descriptive statistics in Table 6 demonstrate how learners with stronger linguistic and interpersonal intelligences performed on the pragmatic measures.

Table 6: Descriptive statistics for the effects of 'MI' and 'MI × instructional task' on pragmatic measures

Measure       Inst. task   MI      N    Pretest M   Posttest M
Recognition   CR           Ling    13   33.76       43.61
                           Inter   17   32.11       38.58
              DIG          Ling    18   26.77       40.44
                           Inter   12   28.91       49.41
              Total        Ling    31   29.87       41.77
                           Inter   29   30.79       43.06
Production    CR           Ling    13   12.30       28.19
                           Inter   17   11.88       23.58
              DIG          Ling    18   11.38       25.66
                           Inter   12   13.66       28.75
              Total        Ling    31   11.77       26.72
                           Inter   29   12.66       25.72

Note. Inter = interpersonal; Ling = linguistic; Inst. = instructional; M = mean.

     

Despite the slight differences in the means of learners with different intelligence profiles, findings in Table 7 reveal no significant differences between the participants with stronger interpersonal and linguistic intelligences on the pragmatic measures. Therefore, it might be postulated that MI by itself cannot account for the improvement of learners' pragmatic competence.

 

Table 7: Univariate tests showing the effects of 'MI' and 'MI × instructional task' on pragmatic measures

Variable           Measure            df   MS       F      p      Par. Eta Sq.
MI                 Pre-recognition    1    .8650    .010   .922   .000
                   Post-recognition   1    56.67    .323   .572   .006
                   Pre-production     1    12.49    .502   .482   .009
                   Post-production    1    8.42     .153   .697   .003
MI × Instruction   Pre-recognition    1    52.31    .589   .466   .010
                   Post-recognition   1    713.60   4.1*   .049   .068
                   Pre-production     1    26.60    1.06   .306   .019
                   Post-production    1    215.18   3.9    .053   .065

Note. * p < .05; MS = mean square; df = degrees of freedom; Par. Eta Sq. = partial eta squared.

Findings in Table 6 also descriptively illustrate the interaction between the effects of the instructional treatments and learners' MI on the pragmatic measures. In contrast to the pretest means of learners with different intelligence profiles, learners with stronger interpersonal and linguistic intelligences performed better on the pragmatic measures in the posttest in the DIG and CR tasks, respectively. In this regard, the researchers utilized the univariate tests generated by MANOVA to examine whether these mean differences were significant. As observed in Table 7, the interaction between the instructional treatments and learners' MI was significant on the posttest recognition measure and marginally significant on the posttest production measure. Figures 3 and 4 illustrate that learners with stronger interpersonal and linguistic intelligences performed better in the DIG task as an output-based task and the CR task as an input-based task, respectively.
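The 'MI × instructional task' test just described corresponds to a two-way ANOVA on each posttest measure with task and dominant intelligence as factors. The sketch below shows one way to set this up in statsmodels; the data frame is a toy stand-in, not the study's data.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "task": ["CR"] * 4 + ["DIG"] * 4,
    "mi":   ["ling", "ling", "inter", "inter"] * 2,
    "post_recognition": [44, 43, 39, 38, 41, 40, 49, 50],
})

# Main effects of task and MI plus their interaction, as in Table 7
model = smf.ols("post_recognition ~ C(task) * C(mi)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```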

                                   

Figure 3: The interaction of MI and instructional tasks on the production measure
Figure 4: The interaction of MI and instructional tasks on the recognition measure

 

5. Discussion

The goals of the study subsumed investigating the main effects of instruction and MI and their interaction on the pragmatic measures. This enabled the researchers to consider which factor or factors (i.e., instruction, MI, or their interaction) could better explain learners' variance on the pragmatic measures. In this section, the results for each research question are discussed first, and then the contributions of the different factors to the development of learners' pragmatic competence are considered.

Regarding the first research question, findings showed that the participants in the DIG task performed better, although not significantly so, than those in the CR task on the production measure. Based on this finding, the explicitness of the task might be regarded as an overriding factor: when learners' attention is explicitly drawn to the target features, both input-based and output-based tasks can work successfully to improve learners' pragmatic competence.

Findings also revealed that the DIG as an output-based task was much better than the CR task in improving learners' recognition of pragmatically appropriate and linguistically accurate utterances. Logically, one would expect the DIG as an output-based task to improve learners' production rather than recognition ability. In this regard, the better performance of the participants in the DIG task on the recognition measure can be explained in terms of the design of the test. The test required learners not only to select the best choice for each provided scenario but also to explain why they had not selected the other choices. This means that, beyond simple recognition, learners had to analyze, judge and identify the appropriacy and accuracy of each choice. Therefore, the participants in the DIG task, who both had an opportunity for production and had their attention explicitly drawn to the target features, gave a better performance on this measure than those in the CR as an explicit input-based task.

Closely related to the above-mentioned findings, past studies (e.g., Rose & Ng Kwai-fun, 2001; Takahashi, 2001; Takimoto, 2006, 2009; Tateyama, 2001) also attested to the merits of the explicit teaching of pragmatic features.

Findings for the second research question revealed that the main effects of learners' MI were not significant on the pragmatic measures. Although the non-significant effect of MI might imply that this learner variable was not a determining factor in the enhancement of learners' pragmatic ability, the results need to be interpreted cautiously. First, the lack of a significant effect of MI can be explained in light of the sample size. Since this study was part of a larger project in which the researchers compared the performance of participants in four experimental groups, the number of learners with different intelligence profiles within each group was not entirely satisfactory. Therefore, studies with larger samples may lead to more revealing findings. Second, the non-significant effect of MI can also be attributed to the interactive nature of different intelligences. Due to the small number of participants, the researchers were solely concerned with the learners' inclination toward the linguistic and interpersonal intelligences, and other intelligence types were left out of consideration. This means that learners' strengths in other intelligences might have affected the outcome of the study.

Contrary to the above finding, the interaction effects of learners' MI and the focused tasks were marginally significant on the pragmatic measures (the third research question). In this regard, the related mean plots illustrated that learners with stronger interpersonal and linguistic intelligences were at an advantage in the DIG and CR tasks, respectively, on the pragmatic measures. Since learners with a strong interpersonal intelligence have the capacity to discern and respond appropriately to the moods, motivations and desires of other people (Gardner, 1993), the relative merit of learners with interpersonal intelligence in the production task might be justifiable. Gardner also states that learners with the linguistic intelligence are more sensitive to the sound, structure, meaning and function of words and language; therefore, it seems logical that such learners gave a better performance in the CR as an input-based task.

To sum up, the results revealed that, unlike the main effects of instruction and MI, their interaction effect was marginally significant on the pragmatic measures. Therefore, it seems that the interaction effect overrode the main effects of instruction and MI on the pragmatic measures. In this regard, the partial eta squared values might help us interpret how influential this interaction effect was on these measures.

For the recognition measure, the partial eta squared value of 6.8% shows that the interaction of instruction and MI did not account for a large share of the learners' variance; therefore, other factors might also explain learners' variance on this measure. In the current study, the researchers also carried out a repeated measures analysis to consider the effect of instruction on the pragmatic measures from the pretest to the posttest. The partial eta squared value of 51% implies that both instructional conditions were highly effective in developing learners' recognition ability from the pretest to the posttest. This is why the main effect of instruction was not significant on the recognition measure and the related partial eta squared value in Table 3 was low.

Similarly, for the production measure, the larger partial eta squared of 6.5% might suggest that the interaction of MI and the instructional task was greater than the main effect of instruction (.08%) in accounting for learners' variance. The partial eta squared value of 78.5% obtained through the repeated measures analysis, however, reveals that both instructional conditions improved the learners' production ability almost equally from the pretest to the posttest. This explains why the main effect of instruction was not strong and the related partial eta squared value was low on this measure. As a result, it might be postulated that the instructional tasks had a more profound impact on the pragmatic measures than the interaction of MI and the instructional tasks.
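The partial eta squared values discussed in these two paragraphs follow the standard formula eta_p^2 = SS_effect / (SS_effect + SS_error). The error sums of squares are not reported in the tables, so the numbers in the sketch below are hypothetical and serve only to illustrate the calculation.

```python
def partial_eta_squared(ss_effect: float, ss_error: float) -> float:
    """eta_p^2 = SS_effect / (SS_effect + SS_error)."""
    return ss_effect / (ss_effect + ss_error)

# e.g. the interaction SS of 516.67 from Table 4 against a hypothetical
# error SS of 4700 gives a value close to the reported .10
print(round(partial_eta_squared(516.67, 4700.0), 3))
```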

 

6. Conclusion

Due to the rarity of studies exploring the interaction between the effects of focused tasks and learners' MI on the development of L2 pragmatic ability, the interpretations and conclusions reached here need to be validated against the findings of future studies.

The findings of the study reveal that the explicit provision of pragmatic instruction can be highly beneficial in EFL contexts, where learners are deprived of direct contact with native speakers and the related social norms. The findings also suggest that adding output to explicit instruction can further boost learners' performance. This conclusion provides some grounds for further research; future studies, for example, can compare the effectiveness of two output-based tasks when one draws learners' attention to the target features explicitly and the other does so implicitly.

Although the analyses demonstrated that MI did not have any main effect on the pragmatic measures after the treatments, future studies need to investigate both MI and other learner variables to discuss their roles in pragmatic studies more objectively. Based on the results, it may also be concluded that learners with a stronger tendency toward interpersonal intelligence might be at an advantage in output-based tasks. Although this result is worthy of attention, the interaction effect was not as great as the effect of instruction in accounting for the learners' variance on the pragmatic measures. Since this phase of the study is exploratory in nature, it is strongly suggested that these findings be validated by future studies with larger samples.

 

Ahmadi, A. H., Ghafar Samar, R., & Yazdani Moghaddam, M. (2011). Teaching requestive downgraders in L2: How effective are input-based and output-based tasks? Iranian Journal of Applied Linguistics (IJAL), 14(2), 1-30.
Alcon-Soler, E. (2005). Does instruction work for learning pragmatics in the EFL context? System, 33, 417-435.
Alcon-Soler, E., & Martinez-Flor, A. (2005). Editors’ introduction to pragmatics in instructed language learning. System, 33, 381-384.
Allan, D. (2004). Oxford placement test 1. Oxford: Oxford University Press.
Armstrong, T. (1993). 7 kinds of smart: Identifying and developing your many intelligences. New York: Plume, Penguin Group.
Ary, D., Jacobs, L. C., & Razavieh, A. (1996). Introduction to research in education (5th ed.). Orlando, FL: Harcourt Brace College.
Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice. Oxford: Oxford University Press.
Birjandi, P., & Sayyari, M. (2010). Self-assessment and peer-assessment: A comparative study of their effects on writing performance and rating accuracy. Iranian Journal of Applied Linguistics, 13(1), 23-45.
 
 
Doughty, C., & Williams, J. (1998). Pedagogical choices in focus on form. In C. Doughty & J. Williams (Eds.), Focus on form in second language acquisition (pp. 114-139). Cambridge: Cambridge University Press.
Ellis, R. (1994). Understanding second language acquisition. Oxford: Oxford University Press.
Ellis, R. (1997). SLA research and language teaching. Oxford: Oxford University Press.
Ellis, R. (2001). Introduction: Investigating form-focused instruction. Language Learning, 51(Supplement 1), 1-46.
Ellis, R. (2002). Grammar teaching: Practice or consciousness-raising? In J. C. Richards & W. A. Renandya (Eds.), Methodology in language teaching: An anthology of current practice (pp. 167-175). Cambridge: Cambridge University Press.
Ellis, R. (2003). Task-based language learning and teaching. Oxford: Oxford University Press.
Ellis, R. (2005). Principles of instructed language learning. System, 33, 209-224.
Faerch, C., & Kasper, G. (1989). Internal and external modification in interlanguage request realization. In S. Blum-Kulka, J. House & G. Kasper (Eds.), Cross cultural pragmatics: Requests and apologies (pp. 221-247). Norwood, NJ: Ablex.
Farhady, H., Jafarpour, A., & Birjandi, P. (1994). Testing language skills: From theory to practice. Tehran: SAMT Publication.
Fotos, S. (1993). Consciousness raising and noticing through focus on form: Grammar task performance vs. formal instruction. Applied Linguistics, 14(4), 385-407.
Fotos, S. (1994). Integrating grammar instruction and communicative language use through grammar consciousness-raising tasks. TESOL Quarterly, 28(2), 323-351.
Fotos, S., & Ellis, R. (1991). Communicating about grammar: A task-based approach. TESOL Quarterly, 25(4), 605-628.
Fukuya, Y. J., & Hill, Y. Z. (2002). Effects of recasts on EFL learners’ acquisition of pragmalinguistic conventions of request. Second Language Studies, 21(1), 1-47.
Fukuya, Y. J., & Hill, Y. Z. (2006). The effect of recasting on the production of pragmalinguistic conventions of request by Chinese learners of English. Issues in Applied Linguistics, 15(1), 59-91.
Gardner, H. (1993). Multiple intelligences: The theory in practice. New York: Basic books.
Han, W. J. (2006). The correlation between elementary school students' multiple intelligences and English reading proficiency. Retrieved from http://nccur.lib.nccu.edu.tw/bitstream/140.119/33438/7/95101307.pdf
Hassall, T. J. (2001). Modifying requests in a second language. International Review of Applied Linguistics, 39, 259-283.
House, J. (1996). Developing pragmatic fluency in English as a foreign language. Studies in Second Language Acquisition, 14, 1-23.
Kasper, G. (2001). Classroom research on interlanguage pragmatics. In K. R. Rose & G. Kasper (Eds.), Pragmatics in language teaching (pp. 33-60). Cambridge: Cambridge University Press.
Kasper, G., & Rose, K. (1999). Pragmatics and SLA. Annual Review of Applied Linguistics, 19, 81-104.
Kasper, G., & Rose, K. (2001). Pragmatics in language teaching. In K. R. Rose & G. Kasper (Eds.), Pragmatics in language teaching (pp. 1-11). Cambridge: Cambridge University Press.
Koike, D. A., & Pearson, L. (2005). The effect of instruction and feedback in the development of pragmatic competence. System, 33, 481-501.
Leech, G. (1983). Principles of pragmatics. London: Longman.
Liu, J. (2007). Developing a pragmatic test for Chinese EFL learners. Language Testing, 24(3), 391-415.
Loewen, S. (2011). Focus on form. In E. Hinkel (Ed.), Handbook of research in second language teaching and learning (pp. 576-593). New York: Routledge.
Martinez-Flor, A., & Fukuya, Y. J. (2005). The effects of instruction on learners' production of appropriate and accurate suggestions. System, 33, 463-480.
Nassaji, H., & Fotos, S. (2004). Current developments in research on the teaching of grammar. Annual Review of Applied Linguistics, 24, 126-145.
Robinson, P. (1997). Individual differences and the fundamental similarity of implicit and explicit adult second language learning. Language Learning, 47(1), 45-99.
Robinson, P. (2005). Aptitude and second language acquisition. Annual Review of Applied Linguistics, 25, 46-73.
Rose, K. R. (2005). On the effects of instruction in second language pragmatics. System, 33, 385-399.
Rose, K. R., & Kasper, G. (Eds.). (2001). Pragmatics in language teaching. Cambridge: Cambridge University Press.
 
Rose, K. R., & Ng Kwai-fun, C. (2001). Inductive and deductive teaching of compliment and compliment responses. In K. R. Rose & G. Kasper (Eds.), Pragmatics in language teaching (pp. 145-171). Cambridge: Cambridge University Press.
Simard, D., & Wong, W. (2001). Alertness, orientation, and detection: The conceptualization of attentional functions in SLA. Studies in Second Language Acquisition, 23, 103-124.
Skehan, P. (2004). Task based instruction. Language Teaching, 36, 1-14.
Swain, M. (1998). Focus on form through conscious reflection. In C. Doughty & J. Williams (Eds.), Focus on form in second language acquisition (pp. 64-81). Cambridge: Cambridge University Press.
Swain, M., & Lapkin, S., (2001). Focus on form through collaborative dialogue: Exploring task effects. In M. Bygate, P. Skehan & M. Swain (Eds.), Researching pedagogic tasks: Second language learning, teaching and testing (pp. 95-99). Harlow: Longman.
Takahashi, S. (2001). The role of input enhancement in developing pragmatic competence. In K. R. Rose & G. Kasper (Eds.), Pragmatics in language teaching (pp. 171-200). Cambridge: Cambridge University Press.
Takahashi, S. (2005). Pragmalinguistic awareness: Is it related to motivation or proficiency? Applied Linguistics, 26(1), 90-120.
Takimoto, M. (2006). The effect of explicit feedback and form-meaning processing on the development of pragmatic proficiency in consciousness-raising tasks. System, 34, 601-614.
Takimoto, M. (2009). The effect of input-based tasks on the development of learners’ pragmatic proficiency. Applied Linguistics, 30, 1-25.  
Tateyama, Y. (2001). Explicit and implicit teaching of pragmatic routines. In K. R. Rose & G. Kasper (Eds.), Pragmatics in language teaching (pp. 200- 223). Cambridge: Cambridge University Press.
Yamini, M., & Tahriri, A. (2010). On teaching to diversity: The effectiveness of MI-inspired instruction in an EFL context. The Journal of Teaching Language Skills (JTLS), 2(1), 1-17.
Yoshimi, D. R. (2001). Explicit instruction and JFL learners' use of interactional discourse markers. In K. R. Rose & G. Kasper (Eds.), Pragmatics in language teaching (pp. 223-245). Cambridge: Cambridge University Press.