FRAMING FEEDBACK

Choosing Review Environment Features that Support High Quality Peer Assessment.
The framing of a peer review task can significantly shape the reviews it produces. In this three-part study, subtle changes to rubrics, task structure, and artifact representation resulted in reviews that differed significantly in both the quality and focus of reviewer feedback.

Numeric ratings prompt more explanation

Structured tasks prompt goal-oriented feedback

Drafts encourage reviewers to focus on process

Quick summary

Numeric scales elicit explanatory but lower-quality reviews
When peer reviewers were given objective scales, such as numeric or letter ratings, for critical evaluation, they produced significantly more explanatory feedback, although that feedback became less focused on improvement for the reviewee.
Structuring review tasks elicits more diverse feedback
A structured review environment elicited feedback that focused more on the goal of an artifact. In this experiment, reviewers in the structured environment were prompted to attend not only to the aesthetic design of an artifact but also to the underlying message of the work. As a result, their comments were more diverse, more positive, and higher in quality.
Showing drafts elicits goal-oriented feedback
Peer reviewers who were shown sketches of a website design provided longer feedback that placed more emphasis on the user experience and the global goals of the webpage. Hand-drawn representations may have cued reviewers to consider the dynamic process behind a static outcome.

Meet the team


Catherine Hicks
Postdoctoral Fellow in the UC San Diego Design Lab

Ailie Fraser
Second year Computer Science PhD student

Vineet Pandey
Third year Computer Science PhD student

Crystal Kwok
UI/UX Designer at CaseStack
UC San Diego alumna

Rachel Chen
Fourth year Psychology undergraduate student

Scott Klemmer
Associate Professor of Cognitive Science and Computer Science & Engineering