Thursday, February 16, 2012

How Can You Minimize Confusion Caused By Rater Disagreements In 360-Degree Feedback?

Current research suggests that rater agreement will indeed vary in 360-degree feedback assessments, largely because feedback research shows that different rater sources provide unique, performance-relevant information (Nowack, 2009 [1]; Lance, Hoffman, Gentry & Baranik, 2008 [2]).

Given these findings, vendors who do not provide a way for participants to evaluate within-rater agreement increase the probability that the average scores used in reports will be misinterpreted, particularly when coaches use those scores to help participants focus on specific competencies and behaviors for developmental planning.

Participants should reflect on why specific behaviors might be perceived and experienced positively by some raters and negatively by others. A large discrepancy among raters often suggests a polarized perspective, one that might require additional information gathering to truly understand the meaning of within-rater differences on the 360-degree feedback behavior in question.

Coach's Critique:

Imagine the confusion that can result when a participant's boss, peers, and direct reports all rate them differently… Imagine the challenge that participant faces when trying to interpret such discrepant views… Imagine the assumptions and misinterpretations that occur as a result of these differences…

When interpreting 360-degree feedback results, it seems necessary to have a way to gauge the different ratings, not only between rater groups but within them. For one thing, the 360 tool should be able to display these differences so that a participant can determine how much weight to give each set of scores. Since the interpretation process can be somewhat subjective, it seems important to use a tool that presents results as objectively as possible.
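As a minimal illustration of what such a gauge might look like, the hypothetical Python sketch below summarizes ratings for a single behavior by rater group, reporting each group's mean alongside its within-group spread. The ratings data, the 1-5 scale, and the disagreement threshold are all assumptions chosen for illustration, not features of any particular 360 tool.

```python
from statistics import mean, stdev

# Hypothetical ratings for one behavior on a 1-5 scale, grouped by rater source.
ratings = {
    "boss":           [4],
    "peers":          [2, 5, 2, 5],   # polarized: high within-group spread
    "direct_reports": [4, 4, 3, 4],   # consistent: low within-group spread
}

DISAGREEMENT_THRESHOLD = 1.0  # assumed cutoff; tune to your scale and item count

for source, scores in ratings.items():
    avg = mean(scores)
    # stdev needs at least two ratings; a single rater has no within-group spread.
    spread = stdev(scores) if len(scores) > 1 else 0.0
    flag = "  <-- investigate: raters disagree" if spread > DISAGREEMENT_THRESHOLD else ""
    print(f"{source:15s} mean={avg:.2f}  within-group SD={spread:.2f}{flag}")
```

Note how the peer average (3.50) looks unremarkable on its own, but the within-group spread reveals a polarized split that the mean alone would hide, which is exactly the misinterpretation risk described above.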

Another recommended strategy for minimizing the challenges of rater differences is to work with a coach or consultant who can help participants properly interpret the results by focusing on patterns of ratings rather than individual scores, and who can facilitate a dialogue that explores the reasons for variability in raters' opinions.

What are your suggestions about how to handle rater disagreements in 360-degree feedback reports?

  1. Nowack, K. (2009). Leveraging Multirater Feedback to Facilitate Successful Behavioral Change. Consulting Psychology Journal: Practice and Research, 61, 280-297.
  2. Lance, C.E., Hoffman, B.J., Gentry, W. & Baranik, L.E. (2008). Rater source factors represent important subcomponents of the criterion construct space, not rater bias. Human Resource Management Review, 18, 223-232.
Thanks to Sandra Mashihi / Results, Envisia Learning
http://results.envisialearning.com/
