ComPAIR Pilot

The ComPAIR pilot took place at UBC during the 2014/15 academic year. The LT Hub gathered feedback from instructors, teaching assistants (TAs), and students to evaluate the tool’s perceived strengths and weaknesses and to develop implementation recommendations. A second round of student feedback was collected during the 2018/19 academic year.


ComPAIR is a peer assessment and feedback application in which students first answer an assignment and then compare and respond to pairs of peer answers. For each pair, students pick the answer they think better meets instructor-set criteria (e.g., “Which is better articulated?”, “Which is more accurate?”) and write feedback to each peer.

Because ComPAIR shows work in pairs, students can draw on the natural human capacity for comparative judgement (Thurstone, 1927) to identify strengths and weaknesses, offer constructive criticism of others’ work, and reflect back on their own.
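ComPAIR’s production scoring is not detailed in this report, but the core mechanic of comparative judgement is easy to illustrate: each “which is better?” pick is a pairwise win, and repeated wins across a class can be aggregated into a ranking. The Python sketch below shows one common way to do this, an Elo-style rating update; the function, parameters, and sample answer IDs are hypothetical and do not represent ComPAIR’s actual algorithm.

    from collections import defaultdict

    def elo_rank(comparisons, k=32, base=1400):
        """Rank answer IDs from pairwise comparisons via an Elo-style update."""
        rating = defaultdict(lambda: base)  # every answer starts at the base rating
        for winner, loser in comparisons:
            # Standard logistic Elo expectation that the winner beats the loser.
            expected = 1 / (1 + 10 ** ((rating[loser] - rating[winner]) / 400))
            rating[winner] += k * (1 - expected)
            rating[loser] -= k * (1 - expected)
        return sorted(rating, key=rating.get, reverse=True)

    # Hypothetical picks: each tuple is (preferred answer, other answer).
    picks = [("a", "b"), ("a", "c"), ("b", "c"), ("a", "b")]
    print(elo_rank(picks))  # ['a', 'b', 'c']

The more comparisons each answer receives, the more stable such ratings become, which is one reason the recommendations later in this report suggest requiring multiple comparisons per assignment.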

ComPAIR was developed at UBC as a collaboration between instructors, researchers, and technology experts, with ample feedback from students and TAs. A beta version of ComPAIR launched in 2014/15 in three courses: first-year English, third-year Math, and third-year Physics. Two of the courses were blended and one was fully online. All three instructors participated in interviews, and six TAs and 168 students responded to separate surveys. The student evaluation is further detailed in a Teaching & Learning Inquiry article.

In 2018/19, 407 students responded to a revised survey across eight courses representing Applied Science, English, Integrated Sciences, Physics, and Visual Arts at the first-year or third-year level.

Instructors rated ease of use highly, and all gave their experience a “very positive” rating. TAs mostly rated ease of use highly, and most gave the experience a “somewhat positive” rating.

Perceived strengths

Exposure to peer work: In ComPAIR, students see peer work during the review process and afterward, when all answers become available for the class to review. Instructors felt students benefited from this exposure and from the opportunity to informally assess how their work compared to peers’. TAs also used the pool of answers to select anonymous examples for further discussion in tutorial groups.

Training for evaluation: Instructors could productively reverse roles, asking students to assess work in ComPAIR using customizable criteria that also trained them in how to evaluate. The paired context also more closely simulated how instructors and TAs themselves mark.

High degree of flexibility: Instructors found that ComPAIR streamlined a complex, often paper-based process in a way that enabled cross-discipline use with a variety of assignment types.

Levelling the playing field: The online and anonymous nature of ComPAIR assignments meant instructors and TAs believed more students could contribute equally and comfortably, both in answering and exchanging feedback.

Preparing students for class: TAs noted students seemed better prepared for class, as the ComPAIR assignments set the stage for discussing the material as well as engaging in constructive criticism.

Perceived weaknesses

Varying feedback quality: When the feedback in ComPAIR was not marked or monitored to create accountability, instructors and TAs noticed variable-quality responses from students.

Questionable ranking accuracy: The ranking generated by ComPAIR did not always correlate strongly with traditional grading, making it less reliable for instructors wishing to crowdsource part or all of the student marks.
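For instructors who still want to use the ranking for marks, a lightweight sanity check is to grade a sample of answers traditionally and measure rank correlation against ComPAIR’s ordering. A minimal sketch, assuming hypothetical scores exported from ComPAIR and instructor marks for the same five answers (all numbers invented for illustration):

    from scipy.stats import spearmanr

    # Hypothetical data: instructor marks and ComPAIR-derived scores for
    # the same five student answers, listed in the same order.
    instructor_marks = [88, 74, 91, 62, 79]
    compair_scores = [1510, 1385, 1450, 1290, 1472]

    rho, p_value = spearmanr(instructor_marks, compair_scores)
    print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")  # rho = 0.70 here

A strong correlation on such a sample would support crowdsourcing part of the marks; a weak one suggests keeping the ranking formative only.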

Missing features: Instructors and TAs would have liked features that were missing at the time, namely a student view (since added), the ability to handle late assignments, and an easier interface or downloadable report overview of all student submissions (since updated).

Lack of integration: During the pilot, ComPAIR worked as a standalone application, so class lists had to be maintained manually by the development team. (ComPAIR is now integrated with Canvas.)

Student experience in the initial pilot varied by course, with English and Physics students reporting a more positive experience (66% and 88%, respectively) than Math students (41%). In the broader 2018/19 survey, students across courses reported a more consistently positive experience (83%). Overall results from this survey are shown below.

Very negative: 0.2%
Somewhat negative: 3.2%
Neutral: 14.8%
Somewhat positive: 45.2%
Very positive: 37.5%

Perceived strengths

Strong ease-of-use: In both the initial pilot and the later survey, students reported high usability for ComPAIR. The latter data are displayed below.

Initially, ComPAIR was…
Very confusing: 1.0%
More confusing than easy: 9.3%
Neither confusing nor easy: 20.1%
More easy than confusing: 39.8%
Very easy: 29.7%

Later, ComPAIR was…
Very confusing: 0%
More confusing than easy: 0%
Neither confusing nor easy: 7.6%
More easy than confusing: 27.0%
Very easy: 65.4%

Anonymity promotes honesty: Students in both rounds of evaluation appreciated the online, anonymous nature of the application, which lowered the social risk of exchanging genuine feedback with peers.

“You are able to anonymously compare the works of two anonymous people…it allows for a great amount of honesty.”
“I liked how it gave me the opportunity to receive anonymous feedback, outside of class time, which people might otherwise have been uncomfortable giving in person.”

Encourages self-reflection: In both sets of feedback, students also echoed the benefits of informally comparing their work with peers’ and identifying areas for self-improvement.

“Nice to see others’ [assignments] while also critically analyzing and critiquing others’ work. It helped me understand what I could do better.”
“By making comparisons based on a set of criteria, it allows you to think objectively about your own work.”
“Lets you think about your own work as well as how you could improve.”

Increases understanding and skills: Particularly in the later survey, students emphasized how ComPAIR assignments helped broaden their understanding of how to do and evaluate the work.

“Gives you the ability to view examples of other students’ ideas…to better understand the topic of the assignment.”
“Helps you judge what a good answer is by comparing two so you can find what qualities makes one better.”
“I learned how to skim through people’s assignments quickly and…tell what was a ‘good’ assignment and what was a ‘bad’ one.”

Perceived weaknesses

Comparisons can create confusion: Both rounds of feedback highlighted how choosing between the answers in some pairs was more challenging than in others, particularly when students had fewer criteria to consider. This left students unsure how to decide and increased the time commitment for completing the comparison phase.

“What should we be focused on?”
“[Need] more guidance on how to choose which one is better”

Some unhelpful peer feedback: As with any peer review process, sometimes peers did not put sufficient effort into writing thoughtful, detailed feedback. At other times, different peers wrote contradictory feedback statements for a single student answer, particularly in courses with more subjective assignments.

“The way some people gave feedback was not helpful at all or really difficult to understand.”
“People gave very opposing opinions; sometimes I didn’t know which direction I should move in.”

Strict deadlines caused stress: Since ComPAIR does not allow late submissions for any phase, students felt additional pressure to complete their answers, comparisons, and self-evaluations on time.

“Ideally submissions would be time stamped and marks would simply be deducted for late submissions but assignments could still be submitted, within reason.”

Comparison interface tedious to navigate: Especially in the more recent survey, students noted the inefficiency of the comparison screen, particularly for reviewing file uploads.

“Made me scroll between the question box and the submission back and forth, making it very time consuming to write out a good comparison.”

The following recommendations describe how ComPAIR can best be implemented to maximize its perceived benefits and minimize its perceived shortcomings as a pedagogical tool.

  1. Build to or from ComPAIR assignments: Assignments felt more beneficial to students when presented as part of a larger process. It may be better for students to use ComPAIR in the context of a bigger course goal (e.g., prep for writing term paper, evaluation of a project draft), rather than for standalone assignments.
  2. Clearly explain the objective: Students who better understood what underlying skill they were practicing in the application felt they learned more from the assignments. It is key for students to know the end goal of using ComPAIR—how comparing will concretely help their learning—not only how it works.
  3. Provide a low-stakes training round: Students felt more confident when they could practice comparing answer pairs, especially with a review of the outcomes, before completing marked assignments in ComPAIR.
  4. Use detailed, multiple criteria/rubrics: Students given more guidance said they learned more from comparing. Explaining precisely what to look for during comparisons—with multiple descriptive criteria in the application and/or detailed external rubrics—may result in stronger learning and possibly richer feedback.
  5. Require 2+ comparisons per assignment: Despite low reported confidence giving peer feedback, many students said they learned simply from practicing this with answer pairs. A minimum of two comparisons per assignment is suggested to give students repeated practice to learn from, though most instructors use and find success with the default of three or more.
  6. Use ComPAIR’s ranking for marking only with validation: Our internal research so far indicates that ComPAIR may not reliably map to traditional grading, as it ultimately relies on the skills and training of novices (students) to provide accurate, well-informed rankings of their peers’ answers. Many instructors have found it effective to mark students with some combination of their participation in the process and the quality of work they submit.
  7. Mark the peer feedback: Attaching weight to how earnestly students respond to one another’s work has helped instructors increase the quality of peer feedback given.

For help getting set up with ComPAIR at UBC, contact the LT Hub or your Instructional Support Unit.

For more information on this pilot and its outcomes, contact Tiffany Potter, James Charbonneau, or Letitia Englund.

To learn more about the pilot process at UBC, visit the page on how pilots work.