Simon D Angus  •  Monday, 12 Oct 2015

Journeys in Educational Innovation and Summative Peer Review, Iteration 2: Getting Back on the Podium

Earlier in the year, I shared my bruising experiences of introducing ‘flipped assessment’ (summative peer review of essays) into my 2nd year Economics class during semester one, 2014.
 
The experience caused a lot of soul-searching and reflection on the unique nature of educational innovation (as opposed to scientific innovation), and later, musings on what academic circumstances would lead to the effective incubation of classroom innovation.
 
At the time of writing those reflections, I was in the middle of implementing a round of much-needed updates to the peer-review system and I promised to provide a report on the outcomes. Well, here it is.
 
First, a quick re-cap on what I changed:
  1. Because some students were unhappy with the lack of faculty involvement in their marking and feedback, I added a layer of faculty marking to all ‘essay outline’ (one page) assignments, so that students received three peer assessments (50% of the mark in total) together with one faculty assessment (the remaining 50%);
  2. Further, I developed software to pre-identify the roughly 15% of students who received the most incoherent set of assessments (i.e., across the 19 binary rubric assertions) on the major (2,000 word) essay, pre-enrolling them in a faculty assessment layer, again worth 50% of these students’ marks (see the first sketch after this list);
  3. Next, I added an online topic opt-in stage, so that the peer allocation system could ensure that assessors didn’t mark the same topic as they submitted (see the second sketch after this list). This addressed a particular gripe of the high-fliers who, in 2014, took the rational strategy of submitting vague or incomplete minor assignments so that other students wouldn’t ‘steal’ their ideas for the main essay!
  4. Finally, we tweaked the rubric to make it clearer and further reduce any ambiguity.
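 
For the curious, here is a minimal sketch of the incoherence screen described in item 2. The names and the scoring rule are my own illustration rather than the actual course software: I take ‘incoherence’ to be the fraction of the 19 binary rubric items on which an essay’s peer assessors fail to agree, with the top ~15% by that score pre-enrolled for the extra faculty layer.

```python
from typing import Dict, List

N_RUBRIC_ITEMS = 19            # binary assertions on the marking rubric
FACULTY_LAYER_FRACTION = 0.15  # share of students given a faculty marking layer

def incoherence_score(assessments: List[List[int]]) -> float:
    """Fraction of rubric items on which the peer assessors are not unanimous.

    `assessments` holds one list of 0/1 judgements per assessor,
    each of length N_RUBRIC_ITEMS.
    """
    disagreements = sum(1 for item in zip(*assessments) if len(set(item)) > 1)
    return disagreements / N_RUBRIC_ITEMS

def flag_for_faculty_marking(peer_marks: Dict[str, List[List[int]]]) -> List[str]:
    """Return the ~15% of students whose peer assessments disagree the most."""
    ranked = sorted(peer_marks, key=lambda s: incoherence_score(peer_marks[s]),
                    reverse=True)
    n_flagged = max(1, round(len(ranked) * FACULTY_LAYER_FRACTION))
    return ranked[:n_flagged]
```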
 
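Similarly, here is a hypothetical sketch of the topic-exclusion constraint in item 3, assuming each student opts into exactly one topic. A greedy pass assigns each essay to the least-loaded assessors drawn from other topics; it is meant only to illustrate the constraint, not the production allocator.

```python
import random
from collections import defaultdict
from typing import Dict, List

def allocate_reviewers(topic_by_student: Dict[str, str],
                       reviews_per_essay: int = 3,
                       seed: int = 2015) -> Dict[str, List[str]]:
    """Map each author to assessors who submitted on a *different* topic."""
    rng = random.Random(seed)
    students = list(topic_by_student)
    load = defaultdict(int)  # reviews already assigned to each assessor
    allocation: Dict[str, List[str]] = {}
    for author in students:
        eligible = [s for s in students
                    if s != author
                    and topic_by_student[s] != topic_by_student[author]]
        if len(eligible) < reviews_per_essay:
            raise ValueError(f"not enough cross-topic assessors for {author}")
        rng.shuffle(eligible)                 # break ties randomly...
        eligible.sort(key=lambda s: load[s])  # ...then prefer the lightest load
        chosen = eligible[:reviews_per_essay]
        for s in chosen:
            load[s] += 1
        allocation[author] = chosen
    return allocation
```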
And importantly, I wrote at-length about these updates in FAQ posts pre-loaded onto forums as the students started the new semester in 2015. I wanted the students to know that Id learned, listened, and changed how things were done. I told them in lectures that they were part of an innovation cycle, and that I was constantly monitoring all aspects of the system to their benefit. I asked them for their heightened scrutiny and feedback.
 
I told them, above all, that I wanted the system to deliver educationally. If it didn’t, I said, I’d junk it.
 
So what were the results?
 
First, the value of the system. I re-ran a targeted, anonymous student survey on the peer review system, obtaining a response rate of nearly 50%. Figure 1 below shows the inter-year comparison of students’ overall satisfaction with the system.
 
Figure 1: Inter-year comparison of overall student satisfaction with the peer review system.
As you can see, the key difference is in the tails. The updates appear to have had the biggest impact on reducing the fraction of very unhappy students, whilst increasing the fraction of very happy students. The right-hand side of Figure 1 groups ‘Strongly Agree’ and ‘Agree’ together into a ‘Positive’ sentiment column, and compares it to a similarly grouped ‘Negative’ column. Iteration 2 of the system sees almost two-thirds of students happy with how it was done. About one in five are still not that pleased; this is down from the one in three of 2014, but still, there’s obviously some work to do.
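In case it is useful to anyone replicating the figure, the Positive/Negative grouping is nothing more exotic than collapsing the five-point Likert scale. A pandas sketch, with hypothetical frame and column names:

```python
import pandas as pd

def sentiment_shares(responses: pd.Series) -> pd.Series:
    """Collapse five-point Likert responses into Positive/Negative shares."""
    positive = responses.isin(["Strongly Agree", "Agree"]).mean()
    negative = responses.isin(["Strongly Disagree", "Disagree"]).mean()
    return pd.Series({"Positive": positive, "Negative": negative})

# e.g., for one survey item (hypothetical names):
# sentiment_shares(survey_2015["overall_satisfaction"])
```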
 
Figure 2: Inter-year comparison of student evaluation of the impact of the peer review system on critical writing skills.
What about critical writing skills, the thing I was particularly keen to improve using the system? Figure 2 gives a similar analysis to Figure 1, based on the same anonymous survey. Here, the shift from Iteration 1 to Iteration 2 has been less about the tails and more about a general shift up-field: on average, students moved from a negative disposition towards a neutral or positive disposition.
 
The right-hand side of Fig. 2 shows this well: whereas one in five 2014 students were negative about the benefits of the system for their critical writing skills, the number is now just one in ten, a fraction I’m comfortable with. On the other hand, almost three in five students are now positive about the value the system brings to their critical writing skills. This is a good number. But again, it could be better.
 
Taken together, it was encouraging to see that my changes had a measurably positive impact on student perceptions.
 
Of course, this is not the same as saying that the system has actually improved the students’ writing skills, but presumably we can assume a good degree of correlation.
 
Second, what about perceptions of the overall quality of the unit? This matters to me since, during 2014, the unit’s standardised evaluations took a big hair-cut. Indeed, the hair-cut was across the board: despite my innovation concerning only feedback (plus the few normal content updates), every one of the five standard areas in our unit evaluation system (‘learning objectives’, ‘intellectually stimulating’, ‘learning resources’, ‘feedback’, and ‘overall satisfaction’) took a dive of between 0.2 and 0.4 (out of 5).
 
I’m pleased to report that the evaluations for 2015 bounced back strongly: I received my highest ever overall satisfaction rating, with the ‘feedback’ segment leading the way, also at the highest point I’ve ever had (and miles above the faculty average for this dimension). Similarly, across the board, even in those seemingly uncorrelated dimensions such as ‘intellectually stimulating’, the responses all went back up to their 2013 high-points.
 
Oh, and since I’ve previously compared my experiences to Chris Froome’s Tour de France results, it was interesting to note that, like my fortunes, Froome bounced back from a poor showing in 2014 to be back on the top step in 2015. Let’s see how long the synchrony lasts!
 
As my thoughts start to turn towards semester 1, 2016, I’m in a better place than in spring 2014, but there’s still work to do. For one, I’ve committed to keeping the peer review system: the educational benefits now seem to be finding their way to the majority of students; the systems and software I now have allow me to monitor what is happening in far greater detail and to intervene as I need to; and the system is scalable, ensuring a degree of future-proofing as my enrolments likely float north of the 100 or so I have now.
 
The challenge now is to take another forensic look at the student comments, to seek further wisdom from my own peers, and to look carefully at any remaining pressure points in the educational experience from a student perspective.
 
... Seems that my educational innovation has more in common with Chris Froome’s Team Sky than just the results: they’ve been pursuing marginal performance gains for years!