Keywords
Peer Review, Weblogs, Writing Skills
This article is included in the Research Synergy Foundation gateway.
The involvement of peers in language learning is indispensable, as language learning has become an act of interacting and collaborating among learners who work and negotiate to form meaning or make sense of what is being taught (Gupta et al., 2019; Richards, 2006). At present, learning activities in class are no longer drills but pair or group work, role play, or project-based assignments. Learners are expected to produce work in the second language and make all the mistakes they “need to make” to learn the rules of the language “inductively” (Richards, 2006). Errors are thus perceived from a different point of view, and learners are expected to learn the language rules by receiving feedback on their trials and errors. This feedback also no longer comes only from their language teachers, as learners shoulder more responsibility for their own learning and rely on their peers for support (Gupta et al., 2019; Blackstone et al., 2007). Peer feedback, also referred to as peer review, peer editing, peer evaluation, peer response, or peer assessment (Gedera, 2012; Zhang et al., 2014), has effectively supplemented teacher feedback and self-feedback, and is deemed one of the most important forms of feedback in the context of English as a Second Language (ESL) (Gupta et al., 2019).
Blogging has the greatest pedagogical potential for academic purposes compared to other Web 2.0 tools (Blackstone et al., 2007) and can be a technology tool to be exploited for peer involvement, such as for Written Corrective Feedback (WCF) (Lee, 2020). In Malaysia, Abdullah (2011) indicated that only a few empirical studies have investigated the effects of Web-based Asynchronous Peer Feedback (WAPF) on enhancing writing proficiency. Hence, by using weblogs, this study aims to examine the effects of peer review on the writing performance of second language learners of English in a Malaysian university.
This study qualifies as a qualitative case study as it was a direct and in-depth analysis of a small sample of participants. The data were also analysed inductively to form a bigger picture from the information collected (Fraenkel et al., 2012; Ary et al., 2006). Moreover, continuous data were collected over a period of time to ascertain whether improvement could be observed in the participants’ writing performance. A purposive, non-random sampling method was used to access a specific subset of participants fitting a particular profile, i.e., Generation Y second language (L2) learners of English who were geographically convenient to approach.
An advertisement was posted on the researcher’s social media page inviting volunteers to join a special English writing class as part of a research project. A total of 36 individuals showed interest, but because a common time could not be found for all, many had to withdraw. In the end, eight students (six Malaysians and two Namibians) agreed to volunteer for the project, which lasted 28 weeks. Ethical approval (EA0772021) for this study was obtained from Multimedia University, and all eight participants who volunteered gave their written consent to be involved in the research. The participants were trained for two hours a week for a total of five weeks. In those preparatory weeks, they were briefed on the structure and requirements of an essay. The students were provided a scoring guide adapted from the Northwest Regional Educational Laboratory and Reid’s guide (Education Northwest, 2014; Reid, 1993) (see underlying data) (Chua, 2021). Using the scoring guide, the students were trained through hands-on practice to assess sample essays on content, organisation, vocabulary, language, and mechanics. The students scored the essays and provided feedback at the end of each essay. The researcher then discussed and moderated the marks given so that the students could provide fair, consistent, and accurate feedback and scores. Lee (2020) encouraged teachers to ensure that “opportunities are provided for students to interact with peers and the teacher so that they can engage with Written Corrective Feedback (WCF) at a ‘deeper level’” (p. 5).
After completing the training, the peers were randomly paired. They proceeded to create personal blogging accounts on www.blogger.com and wrote introductory posts, which the researcher evaluated to gauge their proficiency levels. As a result, four pairs of blogging buddies were formed for the entire duration of the study: Pair 1 (Weak + Average), Pair 2 (Good + Good), Pair 3 (Average + Average), and Pair 4 (Weak + Average). It should be noted that there were two pairs of Weak and Average peers.
The students were asked to post their first essay on their personal blogs, and their respective blogging buddies left feedback and a score based on the scoring guide used during training. The student would then revise and repost the final version of that essay on their blogs. The revised version would reflect the adoption and/or rejection of their peers’ reviews. The researcher gave the bloggers and their buddies a week to complete the entire cycle. There was a total of three cycles as three essays were assigned.
The researcher, who has more than nine years of teaching experience, privately assessed and graded the essays twice (the original essay before peer review and the revised essay after peer review). The researcher used the same scoring guide but did not leave any comments or feedback on the students’ weblogs, so as not to overshadow the feedback provided by their peers.
Table 1 shows the overall scores given to the students’ posts. The English levels indicate the students’ proficiency, which was determined from the self-introductory essay posted as the first entry on their personal blogs. The researcher graded the essays with the following grading system: A+ (90-100), A (80-<90), A- (75-<80), B+ (70-<75), B (65-<70), B- (60-<65), C+ (55-<60), C (50-<55), C- (47-<50), D+ (44-<47), and F (0-<44). Students with a score above 75 were noted as having “Good” language proficiency, students who scored 60 to 74 were categorised as “Average”, and those below 60 were considered “Weak” (see underlying data) (Chua, 2021).
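The grading system and proficiency bands described above can be expressed as a small lookup. The sketch below is illustrative only; the function names are ours, not part of the study, and the band boundaries simply mirror the thresholds stated in the text (lower bound inclusive).

```python
def grade(score):
    """Map a percentage score to the letter grade used in the study.

    Each letter applies from its threshold up to the next one,
    e.g. A covers 80 <= score < 90.
    """
    bands = [(90, "A+"), (80, "A"), (75, "A-"), (70, "B+"), (65, "B"),
             (60, "B-"), (55, "C+"), (50, "C"), (47, "C-"), (44, "D+")]
    for floor, letter in bands:
        if score >= floor:
            return letter
    return "F"


def proficiency(score):
    """Map a score to the study's three proficiency labels,
    following the wording: above 75 = Good, 60-74 = Average,
    below 60 = Weak."""
    if score > 75:
        return "Good"
    if score >= 60:
        return "Average"
    return "Weak"
```

For example, a score of 66.7 maps to grade B and an Average proficiency label under this scheme.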
After ascertaining their proficiency levels, marks were given for the essays before and after peer review. After the peer review process had been carried out for three rounds, it was apparent that peer review worked differently for different pairs. Blackstone et al. (2007) explained that a blogging buddy can play a dual role: a “good conscience” buddy who serves a motivational purpose, and a “proof-reader” buddy who acts as a surrogate teacher that teaches and edits the peer’s work.
Weaker students who were paired with buddies of higher proficiency benefitted from peers who played the role of proof-readers, as shown by the scores of Pair 1 (Table 1). The student with weak proficiency improved from 45% to 55% and from 58.3% to 66.7% after her Average peer reviewed her first and second essay, respectively. The same can be observed for the weak student in Pair 4: the scores of all three reposted essays increased after her Average peer reviewed her work, from 43.3% to 65%, 60% to 61.7%, and 65% to 66.7%, respectively. Zhang et al. (2014) mentioned that peer feedback helps students notice the cognitive gap that exists, especially if L2 learners are unable to identify their own mistakes. Once their weaknesses are identified, these students can improve their work. Lundstrom and Baker (2009) commented that peer review can be very effective, especially for students with lower proficiency. Ge (2011) also highlighted that “the most low-ability students had made good use of peer feedback”.
However, the improvement in the writing scores of average students depended on the proficiency of their buddies. In Pair 1, the average student showed no score difference between his original and revised posts after his work was reviewed by his peer with weak proficiency; he maintained a score of 68.3% for the second essay and 70% for the third. The same could be observed in Pair 4, where the average student paired with a weak-proficiency blogging buddy maintained the same scores for all three essays. This pattern could perhaps be explained by Nassaji and Swain (as cited in Lundstrom & Baker, 2009), who highlighted that if the writer’s Zone of Proximal Development (ZPD) differs from the reviewer’s, feedback that scaffolds learning may not fully benefit the writer. Hence, these average students who had been paired with a weaker peer might not have seen their peers’ feedback as significant enough to adjust or improve their subsequent posts.
In Pair 3, where both peers were considered students with average proficiency, Student 6 decided not to write essays 1 and 3, yet he provided feedback on the first and second essays of Student 5, which resulted in improved scores for those essays. The last essay received no feedback, but Student 5 took the initiative to approach another peer with average proficiency to review her essay, and she showed an improved score (60% to 71.7%) in her final post. Student 6 completed just one essay, which showed an increase from 70% to 71.7% after peer review, but he did not post the rest of the essays despite many reminders.
For students with good proficiency who were paired together, these peers played the role of a good conscience. This form of good conscience motivates good students to be more accountable and to impress or keep pace with their peers. Hence, in Pair 2, an increase can be seen in the scores for Student 3’s first essay (66.7% to 83.3%) and in Student 4’s first and third essays, which increased from 80% to 88.3% and from 71.7% to 86.7%, respectively. However, these students may also maintain the same scores pre- and post-peer review. For instance, Student 3 recorded the same scores before and after peer editing for the second and last essays, and Student 4’s second essay also recorded the same score. This could be because their first drafts were of good quality, and their peers may have provided more commendations than recommendations. Ge (2011) found a similar trend in his advanced students, whose writing was almost always favourably commented on by others and who therefore saw no need to rewrite or make further improvements to their subsequent posts. Hence, the role of their peers was to support and monitor them to ensure that they performed at the expected level.
These findings show that blogging and peer review are engaging and useful collaborative writing activities that can help students enhance their writing skills (Yu & Hu, 2017). In this case study, good-proficiency students who were paired with similar peers made sure that their work was revised to obtain a better score, or at least that their standard of writing was maintained, although their peers might not have contributed much to their subsequent posts (Ge, 2011). Hence, it is suggested that high-proficiency students be paired with someone of similar proficiency (Yu & Hu, 2017).
Average students who were paired with students of a similar level were able to improve, as shown by Student 5’s performance. Nevertheless, average students who were paired with a weak student may have benefited less from the peer review process in terms of scores. Yet average students paired with lower-proficiency students should not feel that they have been taken advantage of, as the most beneficial aspect of peer review is providing feedback, or being a reviewer, rather than receiving feedback, since reviewers learn to scrutinise their own work better (Yu & Hu, 2017; Lundstrom & Baker, 2009).
Students with weak proficiency showed encouraging score improvements after being assisted by another peer, especially if their peers had high English proficiency (Allen & Mills, 2016). Ge (2011) also mentioned that the lower-proficiency students were the ones who “made the greatest progress” and were “the most prominent beneficiaries”. However, it is suggested that low-proficiency students not be paired together or with high-proficiency students, as the help they render each other may not be bilateral, compared to pairing them with an average-proficiency student (Yu & Hu, 2017).
It can be concluded that students, regardless of their English proficiency levels, can be positively impacted by having blogging buddies who also serve as their peer reviewers. However, the writing instructor has to ensure that proper groundwork is laid before autonomy is given to the students to carry out this writing activity. Adequate training needs to be provided, with sample essays examined and scoring guides used, to help students evaluate the different elements of an essay. During the training, the instructor should also hold open discussions and go through the moderation process with the students while providing constructive feedback and support.
In addition, proper pairing of peers must be done to ensure bilateral benefit for the blogging pairs. A pre-test that ascertains the students’ proficiency levels ensures that the pair’s Zones of Proximal Development are not too far apart, so that feedback that scaffolds learning can be given to each other. Students with similar proficiency levels should be paired together, but students with lower proficiency levels ought to be paired with students of average proficiency. The instructor should avoid pairing students with low proficiency levels together.
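The pairing recommendations above can be sketched as a simple matching routine. This is only an illustration of the stated rules (same-level pairing, Weak with Average, no Weak + Weak); the function name and greedy strategy are our own assumptions, not part of the study.

```python
def pair_students(levels):
    """Pair students by proficiency band following the study's advice:
    Good with Good, Average with Average, Weak with Average,
    and never Weak with Weak.

    `levels` maps student name -> band ("Good", "Average", "Weak").
    Returns (pairs, leftover) where leftover holds any unmatched students.
    """
    good = [s for s, b in levels.items() if b == "Good"]
    average = [s for s, b in levels.items() if b == "Average"]
    weak = [s for s, b in levels.items() if b == "Weak"]

    pairs = []
    # Match each Weak student with an Average buddy first.
    while weak and average:
        pairs.append((weak.pop(), average.pop()))
    # Pair remaining same-level students together.
    for group in (good, average):
        while len(group) >= 2:
            pairs.append((group.pop(), group.pop()))
    leftover = good + average + weak
    return pairs, leftover
```

With the eight participants of this study (two Weak, four Average, two Good), such a routine would reproduce the pairing pattern reported above: two Weak + Average pairs, one Average + Average pair, and one Good + Good pair.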
With sufficient training as well as proper pairing, students will be able to benefit from this writing activity. In addition, the public nature of weblogs will further augment the benefits, as students are aware that they are not just writing for their teachers or their peers but are in fact producing a piece of writing that will be presented to everyone in the blogosphere. However, since this was a case study involving only eight students, the findings from this small sample cannot be generalised to a bigger population of L2 learners. Moreover, since only three essays were given to the students throughout the study, more rounds of essay writing could be conducted over a longer period to track and observe further patterns in writing performance supported by a blogging buddy.
Data Archiving and Networked Services (DANS): A close-up of the use of weblogs and blogging buddies: A case study in Malaysia.
DOI: https://doi.org/10.17026/dans-xvp-kdz2 (Chua, 2021)
This project contains the following underlying data:
Data file 1. (Data Set and Metadata for A Close-up of the Use of Weblogs and Blogging Buddies)
Data file 2. (Data Set containing the Writing Scores for Bloggers Before and After Receiving Feedback from Blogging Buddies)
Data are available under the terms of the Creative Commons Zero “No rights reserved” data waiver (CC0 1.0 Public domain dedication).
Chua Yong Eng: Conceptualization, investigation, formal analysis, writing.
Sareen Kaur Bhar: Resources, validation, writing (review and editing).
The author would like to thank the eight students from Multimedia University for their contribution towards the smooth completion of the project.