Procedural justice and interactional justice: assessing employees' perceptions of the fairness of performance appraisal; an empirical study of a small manufacturing company.
This research examines the influence of procedural justice and interactional justice on the perceived fairness of performance appraisal at a small private manufacturing company located in Newcastle, UK. Procedural justice refers to the perceived fairness of the procedures used to determine the appraisal ratings. Interactional justice refers to the perceived fairness of the rater's interpersonal treatment of the ratee during the appraisal process. A combined qualitative and quantitative case study method was used to obtain an understanding of employee perceptions of the fairness of their performance appraisal. Data were collected from interviews with nine employees and from questionnaires completed by the same participants. Two hypotheses were developed, and both were supported by the research data.
The researcher will assess the relationship between employees' perceived fairness of the performance appraisal system and their satisfaction with it. The aim of this research is to show that the level of employees' satisfaction with the appraisal system is influenced by their perceived fairness of the procedural and interactional justice of the performance appraisal session.
Performance appraisal is a process designed to evaluate, manage and ultimately improve employee performance. It should allow the employer and employee to openly discuss the expectations of the organisation and the achievements of the employee. That is, the primary emphasis is on future development of the employee within the objectives of the organisation.
There is no universally accepted model of performance appraisal. However, more often than not the process is designed around the following elements: setting performance goals and objectives; measurement of performance against those goals and objectives; feedback of results; and amendments to goals and objectives. Performance appraisal systems can provide organisations with valuable information to assist in the development of organisational strategies and planning. The information gained from this process can assist: in identifying and developing future management potential; in increasing performance and overall productivity; in identifying strengths and managing weaknesses; in providing clarity to employees about an organisation's expectations regarding performance levels; and in providing an opportunity to audit and evaluate current human resources and identify areas for future development.
Managers may conduct appraisals primarily to affect employee input through the feedback process, or to justify some sort of human resource action (termination, transfer, promotion, etc.). Jaworski and Kohli (1991) identify other benefits that can be obtained from performance appraisals, among them increases in role clarity, performance, and job satisfaction. Given the positive returns obtained from performance appraisals, one could reasonably expect that organisations would devote considerable resources to the appraisal process. Correspondingly, it may be anticipated that managers try to make certain that the dimensions of the appraisal process are known, understood, and supported by the participants.
“There's probably no management process that has been the subject of more research than the performance appraisal. At the best managed companies, the performance appraisal is no joke - it is a serious business that powers the success of the organization.” (Montague, 2007)
It has been suggested that to enhance satisfaction, managers should consider expanding the evaluation criteria to include those criteria which are important to the employee, perhaps creating a participatory performance appraisal system (Thomas and Bretz 1994). In fact, employee input into the process has been described as having an impact on the perceived fairness of the evaluation (Latham et al. 1993). It has been stated that the opinions of employees, as they pertain to the appraisal system, may be a greater determinant of the system's effectiveness than the validity or reliability of the system itself (Wanguri 1995). As stated by Thomas and Bretz (1994), without a sense of ownership both managers and employees may view the process with fear and loathing. Thus, they contend that a major concern in the evaluation process is acceptance of the system by those employees being evaluated. To this end, if employees believe they are evaluated based upon inappropriate criteria, it would follow that their commitment to, and satisfaction with, the organisation supporting this particular evaluation system would be correspondingly reduced.
Levy and Williams (2002) argue that identifying, measuring, and defining the organizational context in which appraisal takes place is integral to truly understanding and developing effective performance appraisals. Further, it is believed that this has been the framework driving performance appraisal research since about 1990 and into the beginning of the 21st century. Whether it is discussed as the social-psychological process of performance appraisal (Murphy & Cleveland, 1991), the social context of performance appraisal (Ferris, Judge, Rowland, & Fitzgibbons, 1994), the social milieu of performance appraisal (Ilgen et al. 1993), performance appraisal from the organisational side (Levy & Steelman, 1997), the games that raters and ratees play (Kozlowski, Chao & Morrison, 1998), or the due process approach to performance appraisal (Folger, Konovsky & Cropanzano, 1998), it is argued along with other scholars that performance appraisal takes place in a social context and that this context plays a major role in the effectiveness of the appraisal process and how participants react to the process (Farr & Levy, 2004).
It has been suggested elsewhere that research over the last 10 years has moved noticeably away from a limited psychometric scope and toward an emphasis on variables that compose the social context (Fletcher, 2001).
Levy and Williams' (2002) definition of distal variables is generally consistent with Murphy and Cleveland's (1995). Specifically, distal variables are broadly construed as contextual factors that affect many human resource systems, including performance appraisal. In other words, distal variables are not necessarily related to performance appraisal, but they may have unique effects on the performance appraisal process that are useful to understand and consider.
Distal factors include, but are not limited to, organisational climate and culture, organisational goals, human resource strategies, external factors, technological advances, and workforce composition. Levy and Williams (2002) believe these factors have an effect on rater and ratee behaviour, although not directly. For instance, an organisation that espouses a continuous learning culture may structure and implement a very different type of performance appraisal system than an organisation without such a culture.
A review of the performance appraisal literature over the last 7-10 years reveals little systematic empirical work on the distal variables other than some studies on culture, climate and technology issues (see, e.g. Hebert & Vorauer, 2003). While this is at some levels disappointing, it is rather understandable. First, there is little theory specific to performance appraisal to methodically guide this level of research. Second, the breadth of the constructs Levy and Williams (2002) construe as distal makes them difficult to measure and implement within the research setting. Third, given the distal nature of these factors, their direct effects on performance appraisal behaviour may be small. Perhaps closer examination of the relationships between distal and proximal variables would prove more fruitful. Even with the difficulties regarding this type of research, however, it is believed it will be important to continue examining these factors to fully understand the social context in which performance appraisal operates (Levy & Williams, 2002).
The next two sections of the paper will underscore those proximal variables (both process and structural) receiving attention in the recent appraisal literature. Some researchers choose to categorize the proximal variables as either process variables (i.e. having a direct impact on how the appraisal process is conducted, including things such as accountability or supervisor-subordinate relationships) or structural variables (i.e. dealing with the configuration or makeup of the appraisal itself, including areas such as appraisal dimensions or frequency of appraisal).
Rater affect is one of the most studied rater variables. Although the literature has not been consistent regarding a formal definition of affect in performance appraisal (Lefkowitz, 2000), a good general definition linked to most of this research involves liking or positive regard for one's subordinate. The Affect Infusion Model (Forgas & George, 2001) suggests that affective states impact on judgements and behaviours and, in particular, that affect or mood plays a large role when tasks require a degree of constructive processing. For instance, in performance appraisal, raters in good moods tend to recall more positive information from memory and appraise performance positively. Consistent with the Affect Infusion Model, a few recent studies have examined the role of mood or affect in performance appraisal. Lefkowitz (2000) reported that affective regard is frequently related to higher appraisal ratings, less inclination to punish subordinates, better supervisor-subordinate relationships, greater halo effect, and less accuracy. A couple of recent studies have looked at the role of similarity in personality and similarity in affect levels between raters and ratees, finding that similarity is related to appraisal ratings. Antonioni and Park (2001) found that affect was more strongly related to rating leniency in upward and peer ratings than it was in traditional top-down ratings, and that this effect was stronger when raters had more observational time with their subordinates. They concluded from this that raters pay so much attention to their positive regard for subordinates that increased observation results in noticing more specific behaviours that fit their affect-driven schema. It was also found that although affect is positively related to appraisal ratings, it is more strongly related to more subjective trait-like ratings than to ostensibly more objective task-based ratings. Further, keeping performance diaries tended to increase the strength of the relationship between affect and performance ratings, leading the authors to conclude that perhaps affect follows from subordinate performance level rather than the other way around.
A second broad area related to raters that has received considerable research attention has to do with the motivation of raters. Traditionally, research seemed to assume that raters were motivated to rate accurately. More recently, however, researchers have begun to question whether all or even most raters are truly motivated to rate accurately. One line of research related to raters' motivation has focused on the role of individual differences and rating purpose on rating leniency. Most practitioners report overwhelming leniency on the part of their raters, and this rating elevation has been found in empirical papers as well as surveys of organisations (Murphy & Cleveland, 1995).
The role of attributions in the performance appraisal process has also attracted some recent research attention. In some of these studies investigators have examined how the attributions that raters make for ratees' behaviours affect their motivation to rate or their actual ratings. For instance, using a traditional social psychological framework, researchers found that whether individuals opted for consoling, reprimanding, transferring, demoting, or firing a hypothetical employee depended in large part on the extent to which the rater believed that the exhibited behaviour was due to ability or effort. It was found that both liking and attributions mediated relationships between reputation and reward decisions. More specifically, raters consider ratees' behaviour and their reputations when drawing attributional inferences and deciding on appropriate rewards. The implications of this line of research are clear: attributional processing is an important element of the rating process, and these attributions, in part, determine raters' reactions and ratings (Murphy & Cleveland, 1995).
A second line of research related to rater motivation has to do with rater accountability, which is the perceived potential to be evaluated by someone and to be held responsible for one's decisions or behaviours (Frink & Ferris, 1998). With respect to performance appraisal, accountability is typically thought of as the extent to which a rater is held answerable to someone else for his or her ratings of another employee. These researchers concluded that accountability can result in distortions of performance ratings. It has been demonstrated that raters told that ratees had been rated too low in the past responded by inflating ratings, while others told that they would have to defend their ratings in writing provided more accurate ratings. In a follow-up to this study it was hypothesized that the accountability pressure on raters to justify ratings may operate through an increased motivation to better prepare themselves for the rating task. This was manifested in raters paying more attention to performance and recording better performance-related notes. A related study looking at accountability forces in performance appraisal found that raters inflated ratings when they were motivated to avoid a negative confrontation with poor performers, but did not adjust ratings downward when good performers rated themselves unfavourably (Levy & Williams 1998).
A second major focus of performance appraisal research consists of research centred on the performance appraisal ratee. Two areas are covered in particular: the links between performance ratings and rewards, and those elements of the performance appraisal process which increase ratees' motivation, such as participation. A related article argues that while pay is an important motivator along with recognition, work enjoyment, and self-motivation, very few organisations actually link the performance appraisal system to pay or compensation in a clear, tangible way (Mani, 2002). Both traditional academic research (Roberts & Reed, 1996) and more practitioner-focused research (Shah & Murphy, 1995) have identified the significance of participation in the appraisal process as an antecedent of ratees' work motivation. This suggests that participation is simply essential to any fair and ethical appraisal system. Participation and perceptions of fairness have been identified as integral to employees' perceptions of job satisfaction and organisational commitment. Roberts and Reed (1996) take a somewhat similar track in proposing that participation, goals, and feedback impact on appraisal acceptance, which affects appraisal satisfaction and finally employee motivation and productivity. Performance appraisals are no longer just about accuracy, but are about much more, including development, ownership, input, perceptions of being valued, and being a part of an organisational team. Focusing on reactions to the appraisal process, Cardy and Dobbins (1994) argue that perhaps the best criterion to use in evaluating performance appraisal systems is the reactions of ratees. The claim is that even the most psychometrically sound appraisal system would be ineffective if ratees (and raters) did not see it as fair, useful, valid, accurate, etc. Good psychometrics cannot make up for negative perceptions on the part of those involved in the system. Folger et al. (1992) define three elements that must be present to achieve higher perceptions of fairness: adequate notice, fair hearing, and judgement based on evidence. Although they identified specific interventions that should be implemented to increase due process, they cautioned that “due process mechanisms must be implemented in terms of guiding principles (i.e. designed with process goals in mind) rather than in a legalistic, mechanical, rote fashion”.
In general, studies have found that both ratees and raters respond more favourably to fair performance appraisal systems (e.g. less emotional exhaustion, more acceptance of the feedback, more favourable reactions toward the supervisor, more favourable reactions toward the organisation, and more satisfaction with the appraisal system and the job on the part of both rater and ratee) (Taylor et al. 1995, 1998).
Researchers have posited that trust is the key element in managing the supervisor-employee relationship. According to Mayer and Davis (1999), trust is made up of three components: ability, benevolence, and integrity. In other words, if an employee believes a supervisor has the skills to appraise properly, has the interests of the employee at heart, and upholds standards and values, the employee is likely to trust that supervisor. Interest in understanding the processes related to trust is the result of research that supports both direct and indirect effects of trust on important organisational and individual outcomes. For instance, research supports the relationship between trust and outcomes such as employee attitudes, cooperation, communication, and organizational citizenship behaviours. As with appraisal perceptions and reactions, it is also believed that trust issues can limit the effectiveness of performance appraisal. If ratees have low levels of trust in their supervisor, they may be less satisfied with the appraisal and may not as readily accept feedback from that source.
Behaviourally Anchored Rating Scales (BARS) are a relatively new approach to performance evaluation. They are in effect a combination of the graphic rating scales and the critical incident method. An actual description of important job behaviour is developed and “anchored” alongside the scale. The evaluator is then asked to select the description of behaviour which best matches actual employee behaviour for the rating period.
In a controlled field study, Silverman and Wexley (1984) used BARS to test the effect of ratee participation on the appraisal process. BARS were developed for each of the following job classifications: clerical, non-clerical staff, technical and professional, nursing, and management/supervisory. Those employees who participated in creating, and were evaluated by, the behaviourally based scales had a more positive reaction to the entire performance appraisal process. Specifically, they felt that the performance appraisal interviews were more useful, that their supervisors were more supportive, and that the process produced more motivation to improve job performance.
BARS address many of the problems often found in traditional evaluation approaches such as the halo effect, leniency, and the central tendency error. In addition, research suggests that many employees prefer this evaluation method (Rarick & Baxter, 1986). BARS are, however, not a panacea for management and possess both advantages and disadvantages. According to Rarick and Baxter (1986), the advantages of BARS are: clearer standards - both subordinate and superior have a clearer idea of what constitutes good job performance, and ambiguity concerning expectations is reduced; more accurate measurement - because individuals involved with the particular job develop the BARS instrument, they have a good understanding of the requirements for good performance; better performance feedback - since BARS are based on specific behaviour, they provide a guideline for improving future work performance; and better consistency - BARS have been shown to be more consistent in terms of reliability than more traditional evaluation methods; in other words, when more than one supervisor rates the same employee, the results are more similar when BARS is the evaluation method. Behaviourally Anchored Rating Scales are, however, not without drawbacks. The disadvantages of BARS are: more costly - more time and effort, and eventually more expense, is involved in the construction and implementation of BARS; a possible activity trap - since BARS are more activity oriented, they may cause both supervisor and subordinate to become more concerned with activity performance than with accomplishing actual results; and a non-exhaustive behaviour scale - even if the rater possesses a lengthy listing of behaviour examples, he or she may not be able to match the observed behaviour with the stipulated anchor.
As Rarick and Baxter (1986) note, Behaviourally Anchored Rating Scales have the potential to increase both the accuracy of employee appraisal and ultimately the effectiveness of the organization. BARS are equally useful as a judgemental instrument and as an employee development device. They are designed to make performance appraisal more accurate by minimising ambiguity and focusing on specific behaviour. BARS move employee performance appraisal away from the subjective opinions of the evaluator and closer towards an objective measure of true performance.
The advantages and disadvantages of using performance assessment in making employment decisions are well documented (e.g. Murphy & Cleveland, 1995). The limitations of performance assessment, such as inflated ratings, lack of consistency, and the politics of assessment (Tziner, Latham, Price & Haccoun, 1996), often lead to their abandonment. Managers responsible for delivering performance reviews who are uncomfortable with the performance rating system may give uniformly high ratings that do not discriminate between ratees. Poor ratings detract from organisational uses and increase employee mistrust in the performance appraisal system (Tziner & Murphy, 1999). Employees on the receiving end of the appraisal often express dissatisfaction with both the decisions made as a result of performance assessment and the process of performance assessment (Milliman, Nason, Zhu & De Cieri, 2002), which may have longitudinal effects on overall job satisfaction (Blau, 1999) and commitment (Cawley, Keeping & Levy, 1998). The extensive research on performance appraisal (Arvey & Murphy, 1998; Fletcher, 2001; Fletcher & Perry, 2001; Murphy & Cleveland, 1995) has not addressed the fundamental problems of the performance appraisal process: that the performance appraisal is influenced by a variety of relevant non-performance factors such as cultural context (Latham & Mann, 2006), that it does not provide either valid performance data or useful feedback to individuals (Fletcher, 2001), or that performance appraisal instruments often measure the “wrong things” (Latham & Mann, 2006).
Murphy and Cleveland (1995) state that “a system that did nothing more than allow the making of correct promotion decisions would be a good system, even if the indices used to measure performance were inaccurate or measure the wrong set of constructs.” No assessment system, however, would meet with success if it did not have the support of those it assessed. In developing a new performance appraisal system it is important to draw on past research on performance appraisals, which has identified a number of factors that lead to greater acceptance of appraisals by employees. Firstly, legally sound performance appraisals should be objective and based on a job analysis; they should also be based on behaviours that relate to specific functions that are controllable by the ratee, and the results of the appraisal should be communicated to the employee (Malos, 1998). Secondly, the appraisals must be perceived as fair. Procedural fairness is improved when employees participate in all aspects of the process, when there is consistency in all processes, when the assessments are free of supervisor bias, and when there is a formal channel for the employees to challenge or rebut their evaluations (Gilliland & Langdon, 1998). In addition to perceptions of fairness, participation by employees in the appraisal process is related to motivation to improve job performance, satisfaction with the appraisal process, increased organisational commitment, and the utility or value that the employees place on the appraisal (Cawley et al. 1998).
To overcome the problem of job-specific performance dimensions, the performance assessment system was based on behaviourally defined core competencies (Dubois 1993; Klein 1996). The core competencies had been previously identified through an extensive process as being common to all positions; these competencies were to become the basis for training new recruits and for the continuous development of existing members (Himelfarb, 1996). Fletcher and Perry (2001) stated that “the elements constituting what we normally think of as performance appraisal will increasingly be properly integrated into the human resources policies of the organisation - using the same competency framework for all HR processes, linking individual objectives with team and business unit objectives, framing the input of appraisal to promotion assessment in an appropriate manner, and so on”, making it a “more effective mechanism and less of an annual ritual that appears to exist in a vacuum”. Along the same lines, Smither (1998) went on to note that the same competency model should guide “numerous human resource initiatives”.
The competency development process used for this study followed the suggestions of Fletcher and Perry (2001) and Smither (1998) and included a review of functional job analysis data for general police constables that covered a majority of the different job positions. In this sense, the competencies were blended by incorporating the values and specific attributes (Schippmann et al., 2000). A blended approach is one that couples an organisation's strategy, in the derivation of the broad competencies, with the methodological rigour of task analysis. As Lievens, Sanchez, and De Corte (2004) note, a blended approach is likely to improve the accuracy and quality of inferences made from the resulting competency model because it capitalizes on the strengths of each method. Strategy is used as a frame of reference to guide subject matter experts to identify those worker attributes or competencies that are aligned with the organisation's strategy, and then the task statements are used to provide more concrete referents for the associated job behaviours (Lievens et al., 2004).
The study of justice or fairness has been a topic of philosophical interest that extends back at least as far as Plato and Socrates (Ryan, 1993). In research in the organizational sciences, justice is considered to be socially constructed. That is, an act is defined as just if most individuals perceive it to be so, on the basis of empirical research (Cropanzano & Greenberg 1997). Each approach proposes a different way of conceptualizing justice, from the provision of process control (Thibaut & Walker, 1975) to a focus on consistency control (Leventhal et al. 1980) and an examination of interpersonal treatment (Bies & Moag, 1986).
Performance appraisal systems are among the most important human resource systems in organizations insofar as they yield decisions integral to various human resource actions and outcomes (Murphy and Cleveland 1995). Reactions to appraisal and the appraisal process are believed to significantly influence the effectiveness and the overall viability of appraisal systems (Bernardin and Beatty 1984; Cardy and Dobbins 1994; Carroll and Schneier 1982; Lewer 1994). For instance, Murphy and Cleveland (1995: 314) contended that “reaction criteria are almost always relevant and an unfavourable reaction may doom the most carefully constructed appraisal system”. Perceptions of fairness are important to all human resource processes, e.g. selection, performance appraisal, and compensation, and particularly so to the performance appraisal process. Indeed, a decade ago Cardy and Dobbins (1994: 54) asserted that “with dissatisfaction and feelings of unfairness in process and inequity in evaluations, any appraisal system will be doomed to failure.” Other researchers have also acknowledged the importance of fairness to the success or failure of appraisal systems (Taylor et al. 1995).
Procedural justice refers to the perceived fairness of the procedures used to determine appraisal outcomes (Greenberg 1986a), independent of the favourability or fairness of the performance rating or its administrative consequences (Skarlicki, Ellard and Kelln 1998). Folger et al. (1992) have developed a procedural justice model for performance appraisal, rooted in the due process of law and possessing three basic factors: adequate notice, a fair hearing and judgement based on evidence. Adequate notice involves giving employees knowledge of the appraisal system and how it affects them well ahead of any formal appraisal. More specifically, it entails developing performance standards and objectives before the appraisal period commences. These standards and objectives must be well documented, clearly explained, fully understood and preferably set by mutual agreement, with employees only held accountable for standards and objectives properly communicated to them. Adequate notice also involves high appraisal frequency and giving employees constant feedback on a timely basis throughout the performance evaluation period, so that employees can rectify any performance deficiencies before the appraisal is conducted (Folger et al. 1992). Studies show that adequate notice is important to employee perceptions of procedural fairness. Williams and Levy's (2000) study of 128 employees from three US banks revealed that system knowledge significantly predicts appraisal satisfaction and procedural fairness, controlling for the much smaller effect of organizational level. The second factor that affects employee perceptions of procedural fairness is a fair hearing. A fair hearing means several things in a performance appraisal context. These include: an opportunity to influence the evaluation decision through evidence and argument, access to the evaluation decision, and an opportunity to challenge the evaluation decision (Folger et al. 1992). Fundamentally, a fair hearing entails two-way communication, with employee input or voice in all aspects of the appraisal decision-making process.
Several researchers have consistently found that ‘voice' affects perceptions of procedural justice in a variety of work contexts (Greenberg, 1986; Korsgaard and Robertson, 1995). In a study of 128 food service employees and their 23 supervisors at a large US university, Dulebohn and Ferris (1991) found that the informal voice provided by influence tactics affected employee perceptions of fairness in the appraisal process. Two types of influence tactics were differentiated: the first focused on the supervisor and the second on the job. Influence of the supervisor involved, for example, efforts at ingratiation. Influence on the job involved, for example, manipulating performance data. Use of supervisor-focused influence tactics was positively associated with employees' perceptions of procedural justice, but use of job-focused influence tactics was negatively associated. The authors argue that this negative association may result from reverse causation: perceptions of unjust appraisal procedures may encourage employees to adopt job-focused influence tactics.
The third procedural justice factor is judgement based on evidence. This means convincing employees that ratings do accurately reflect performance by justifying evaluation decisions in terms of performance-related evidence (Erdogan, 2001). It also requires that judgements be made on job-relevant criteria by a supervisor who has received adequate training to conduct such assessments. Ratings overtly based on performance records and notes appear objective and unbiased; those not so obviously based on such evidence appear subjective and judgemental. If a judgement is based on the evidence, it necessarily means that it is not based on external pressure, personal bias or dishonesty (Folger et al. 1992). A performance rating must therefore withstand scrutiny and reflect principles of sincerity and fairness (Narcisse and Harcourt, 2008).
Leventhal et al.'s (1980) theory of procedural justice judgements focuses on six criteria that a procedure should meet if it is to be perceived as fair. Procedures should (a) be applied consistently across people and across time, (b) be free from bias (i.e. ensure that no third party has a vested interest in a particular settlement), (c) ensure that accurate information is collected and used in making decisions, (d) provide mechanisms, that is, opportunities to ask for the modification of decisions, (e) conform to personal or prevailing standards of ethics and morality, and (f) ensure that the opinions of the various groups affected by the decision have been taken into account.
The term interactional justice refers to ‘people's concerns about the quality of interpersonal treatment they receive during the enactment of organizational procedures' (Bies 2001). In performance appraisals, interactional justice focuses on the quality of the interpersonal treatment employees receive during the appraisal process (Bies 2001).
The relationship between procedural and interactional justice has been a contentious issue (Bies 2001). Rahim, Magner and Shapiro (2000) note that some justice researchers view interactional justice as an interpersonal subcomponent of procedural justice, but others argue that they are conceptually distinct. Nevertheless, Bies (2001) concludes that ‘it makes theoretical and analytical sense to maintain the distinction between interactional justice and procedural justice', and research substantiates this view. For instance, in a study of 107 employees and supervisors at a large university, Cropanzano, Prehar and Chen (2002) found that procedural justice was associated with trust in top management and satisfaction with the performance appraisal system, whereas interactional justice was associated with the perceived quality of treatment received from supervisors. Similarly, Masterson, Lewis, Goldman and Taylor (2000) found that interactional justice perceptions were directly related to employees' assessment of their supervisor, whereas procedural justice perceptions were related to employees' assessment of the organisation's system.
Bies (2001) has identified four factors that influence how fairly employees feel they have been treated by supervisors: deception, invasion of the employee's privacy, disrespectful treatment and derogatory judgements. Deception occurs if a supervisor's words and actions are inconsistent, for example when a supervisor promises a pay increase if performance improves, but later refuses to honour that promise. People obviously expect trust in the employment relationship, and naturally feel a sense of grievance when that trust is violated. Invasion of privacy occurs if the supervisor gossips, spreads rumours, or unnecessarily discloses confidential information about an employee. Individuals obviously do not want certain information about themselves revealed to others who have no right or reason to know it. Disrespect is demonstrated if supervisors are abusive or inconsiderate in their words or actions. Abuse includes every conceivable kind of insult, from racist remarks to “name-calling” to public humiliation. No one likes to be belittled. Derogatory judgements refer to wrongful and unfair statements and judgements about the employee's performance, for example when a supervisor accuses a subordinate of not having satisfactorily completed a task, even though the supervisor had failed to supply adequate resources to do so. No one enjoys being accused of something that they have not actually done or are not responsible for having done. Two studies have examined interactional injustice in other situations. Mikula, Petri and Tanzer (1990) looked at 280 descriptions of unjust incidents from the everyday lives of students. In this study 22 categories of incidents were identified. Incidents involving reproach and accusation (12.1%), lack of consideration for others (12.1%), and the breaking of promises (9.6%) accounted for the highest proportions of incidents people regarded as unfair. Other incidents involving betrayal of confidence (1.8%), talking behind someone's back (1.4%), and abusive or aggressive treatment (5%), though less common, were also deemed to be unfair. In another study, Bies and Tripp (1996) obtained data from 90 MBA students about work incidents that triggered thoughts and feelings of revenge. For instance, a supervisor lost a subordinate's trust by not fulfilling a promise to support an application for a promotion at a management meeting. In another incident, an individual was wrongly accused of stealing ideas from a boss when it was the boss who had stolen the ideas.
The case study is a research strategy which focuses on understanding the dynamics present within single settings. Examples of case study research include Selznick's (1949) description of TVA, Allison's (1971) study of the Cuban missile crisis, and Pettigrew's research on decision making at a British retailer. Case studies can involve either single or multiple cases, and numerous levels of analysis (Yin, 1984). For example, Harris and Sutton (1986) studied eight dying organisations, Bettenhausen and Murnighan (1986) focused on the emergence of norms in ten laboratory groups, and Leonard-Barton (1988) tracked the progress of ten innovation projects. Moreover, case studies can employ an embedded design, that is, multiple levels of analysis within a single study (Yin, 1984). For example, the Warwick study of competitiveness and strategic change within major UK corporations is conducted at two levels of analysis, industry and firm (Pettigrew, 1988), and the Mintzberg and Waters (1982) study of Steinberg's grocery empire examines multiple strategic changes within a single firm. Case studies typically combine data collection methods such as archives, interviews, questionnaires, and observations. The evidence may be qualitative (e.g. words), quantitative (e.g. numbers), or both. For example, Sutton and Callahan (1987) rely exclusively on qualitative data in their study of bankruptcy in Silicon Valley, Mintzberg and McHugh (1985) use qualitative data supplemented by frequency counts in their work on the National Film Board of Canada, and Eisenhardt and Bourgeois (1988) combine quantitative data from questionnaires with qualitative evidence from interviews and observations. Case studies can be used to accomplish various aims: to provide description (Kidder, 1982), test theory (Pinfield, 1986; Anderson, 1983) or generate theory (e.g. Gersick, 1988; Harris & Sutton, 1986). A well-known example of case study research is Biggart's (1977) classic study of change at the U.S. post office. Another example is Heracleous and Barrett's (2001) well-executed case study of the implementation of electronic trading on the London Insurance Market, which was published in AMJ.
The case study is frequently used in many social science disciplines, especially in political science, anthropology, comparative sociology and education. There are many ways that a case study can be conducted, and each discipline uses its own variants. From a general methodological point of view there is much confusion about the case study as a way of conducting social science research. Ambiguity and lack of clarity concern the object of research and the way the object is defined (Verschuren, 2003).
At first sight there is some consensus about the object of a case study. Following several influential authors in this field, the object of a case study is one single case: temporally, physically or socially limited in size, complex in nature, unique and thus not comparable with other cases (Merriam 1988, Yin 1989, Ragin and Becker 1994, Creswell 1994, Stake 1995). A confusing aspect concerns the research methods to be used in a case study. Some definitions of the case study are hazy at this point, others specify very little, while still others advocate the use of any research method, as long as it contributes to our knowledge of the case to be studied. Several authors agree that in a case study a variety of labour-intensive methods are to be used (Campbell 1975, Creswell 1994). Others implicitly or explicitly say that only qualitative methods should be used (Ragin 1989, Creswell 1994, Stake 1995), while still others also advocate the use of quantitative methods (Yin 1989, Lee 1999). A further ambiguity concerning case study design relates to the scientific quality of the results to be obtained. Some question the researcher's independence from these results, because in some variants of the case study the researcher plays an interactive role instead of acting at a distance, and because methods are used that may be easily linked to the personality of the researcher, such as participant observation and unstructured, open-ended, in-depth interviews. Others doubt the internal validity. The most frequently heard objection to the case study, however, is its low generalizability as a consequence of the fact that only one or two cases are studied (Merriam 1988, Ragin 1989, Yin 1989, Creswell 1994).
There is little consensus as to the methodological status of the case study as a type of empirical research. As Mitchell puts it, “The term ‘case study' may refer to several very different epistemological entities...” (Mitchell 1983). Divergence occurs with respect to: the empirical object of a case study and the way we look at it; the research methods that are used; and the adequacy of the results to be obtained.
Creswell (1994), for instance, describes “case studies in which the researcher explores a single entity or phenomenon (the case), bounded by time and activity (a programme, event, process, institution, or social group) and collects detailed information by using a variety of data-collecting procedures during a sustained period of time”. Another definition is formulated by Yin (1989). In his opinion, “A case study is an empirical inquiry that: investigates a contemporary phenomenon within its real-life context; when the boundaries between phenomenon and context are not clearly evident; and in which multiple sources of evidence are used” (Yin 1989). Ragin (1989), for instance, states that: “Case-oriented studies, by their nature, are sensitive to complexity and historical specificity... This strategy highlights complexity, diversity, and uniqueness, and it provides a powerful basis for interpreting cases historically.” Closely related to this definition, Stake (1995) argues that “...case study is the study of the particularity and complexity of a single case...” Mitchell too stresses the unique character of a case study, as he states: “In ordinary English usage there is a strong connotation that the word ‘case' implies a chance or haphazard occurrence. This connotation is carried over into more technical and sociological language in the form of implying that a case history or case material refers to one ‘case' and is therefore unique or is a particularity.” (Mitchell, 1983). Campbell, in discussing his strong rejection of the single case study strategy, seems to stress the gain in analytical power and pervasiveness making possible more, and more thorough, knowledge. He states: “How much more valuable the study would be if the one set of observations were reduced by half and the saved effort directed to the study in equal detail of an appropriate comparison instance.” (Campbell, 1975).
The second ambiguity concerning the object of a case study concerns the limits of the object. For instance, Creswell (1994) and Yin (1989) hold opposite opinions about the boundaries between the phenomenon to be studied (the case) and its context. For Creswell this boundary is clear, while for Yin it is not.
The third ambiguity to be mentioned here concerns the way the object of a case study is observed: either as a whole or as a conglomeration of different parts and aspects. For instance, Stoecker (1991) formulates: “Case studies are those research projects which attempt to explain (w)holistically the dynamics of a certain historical period of a particular social unit.”
As to the discussion about the research methods to be used in a case study, here too there is a rather confusing state of affairs. Many authors, implicitly or explicitly, promote the use of qualitative methods such as participant observation, qualitative content analysis of written and audio-visual documents, and in-depth interviews with key informants. But some other authors advocate the use of quantitative methods as well. According to Lee (1999), a case study may also include quantitative analysis such as log-linear modelling and logistic regression. Yin (1989) too recommends the use of quantitative methods in a case study whenever appropriate, as does Stake, who mentions the importance of quantitative methods in medicine and education (Stake 1995).
Qualitative research is multi-method research that uses an interpretive, naturalistic approach to its subject matter (Denzin & Lincoln, 1994). Qualitative research emphasizes the qualities of entities - the processes and meanings that occur naturally (Denzin & Lincoln, 2000). Qualitative research often studies phenomena in the environments in which they naturally occur and uses social actors' meanings to understand the phenomena (Denzin & Lincoln, 1991). Qualitative research addresses questions about how social experience is created and given meaning, and produces representations of the world that make the world visible (Denzin & Lincoln, 2000). Beyond this, qualitative research is “particularly difficult to pin down” because of its “flexibility and emergent character” (Van Maanen, 1998). Qualitative research is often designed at the same time it is being done; it requires “highly contextualized individual judgments” (Van Maanen, 1998); moreover, it is open to unanticipated events, and it offers holistic depictions of realities that cannot be reduced to a few variables (Gephart, 2004). Clarity can be gained by contrasting qualitative research with quantitative research, which “emphasizes measurement and analysis of causal relations among variables” (Denzin & Lincoln, 2000). Although the two research genres overlap, qualitative research can be conceived of as inductive and interpretive (Van Maanen, 1998). It provides a narrative of people's view(s) of reality and it relies on words and talk to create texts. Qualitative work is highly descriptive and often recounts who said what to whom as well as how, when, and why. An emphasis on situational details unfolding over time allows qualitative research to describe processes (Gephart, 2004).
As Bryman and Bell (2007) note, quantitative research has two distinct advantages. The first (if it is designed and conducted properly) is that the results are statistically reliable; that is, quantitative research can reliably determine whether one concept or idea is better than its alternatives. The second distinct advantage is that the results are projectable to the population. Quantitative research is essentially systematic and evaluative, not generative.
Survey research cannot capture the richness, complexity, and depth of value questions. It pays no attention to levels of meaning, nuances in language, or lived values. Quantitative researchers often treat numerical attitude scales as if they were providing interval data. In fact, it is often possible to treat attitudinal data in this way, but it is important to bear in mind that the findings are at best an approximation to the actual underlying values. It is not justifiable to treat such scales as ratio data.
The researcher will assess the relationship between employees' perceived fairness of the performance appraisal system and their satisfaction with it. The aim of this research is to show that the level of employee satisfaction with the appraisal system is influenced by the employee's perceived fairness of the procedural and interactional justice of the performance appraisal session.
This research is rooted in an interpretive paradigm, which maintains that the social world is mostly what individuals perceive it to be, and that reality is socially constructed as individuals attach meaning to their experiences (Neuman, 2003). This approach and the Folger et al. (1992) model are particularly appropriate for this research because of their potential for providing in-depth insights into the underlying phenomenon. (See also data collection - interviews.)
The essence of the Leventhal et al. (1980) model is that perceptions of procedural justice are likely to influence reactions towards the organization or organizational systems. Satisfaction with the performance appraisal system is an organization- or system-referenced outcome. Therefore, perceptions of procedural justice are likely to be related to satisfaction with the appraisal system. (See also data collection - questionnaire.)
Hypothesis 1: The higher the levels of a) applied consistency, b) freedom from bias, c) accurate information, d) correction mechanisms, e) standards of ethics, and f) consideration of the opinions of affected groups during performance appraisal sessions, the higher the perceived procedural justice of the performance appraisal session will be.
As previously noted, interactional justice refers to the sensitivity of treatment and the adequacy of explanations offered by one's supervisor. Bies's (2001) model proposes that interactional justice likely influences reactions toward the supervisor, whereas perceptions of procedural justice likely influence reactions toward the organization or organizational systems. Employees are likely to be satisfied with an evaluator who treats them with sensitivity, adequately explains the performance appraisal procedure and the rationale for the performance evaluation, and communicates all of this information truthfully and honestly. (See data collection & questionnaire.)
Hypothesis 2: The higher the levels of a) transparency (i.e. no deception), b) respect for the employee's privacy, c) respectful treatment, and d) non-derogatory judgements during performance appraisal sessions, the higher the perceived interactional justice of the performance appraisal session will be.
To date, there are few studies in which procedural justice and interactional justice are assessed in combined form, using the Folger et al. (1992) and Leventhal et al. (1980) models together with the Bies (2001) model, and in which their influence on employees' satisfaction with the performance appraisal system is examined. The researcher did not find any study in which a small private engineering company operating in the United Kingdom was the participant. Thus, it is assumed that a high level of perceived fairness of procedural justice and interactional justice will result in a high level of employee satisfaction with the performance appraisal system.
This research involved employees from a small private engineering company operating in Newcastle, United Kingdom. The company has around 50 staff and 10 management members. Participants were required to have at least three years' service with the company, which leads to a sample of 32. Participation in this research was voluntary, and 16 employees initially showed interest in taking part.
Data for this study were collected from two main sources: face-to-face interviews and questionnaires.
A semi-structured interview format was used with open-ended questions. Patton (2002) notes that ‘open-ended responses permit one to understand the world as seen by the respondents...without predetermining those points of view through prior selection of questionnaire categories'. Therefore, the use of open-ended questions provided opportunities for respondents to articulate their perceptions of fairness in their performance appraisal. The interviews were conducted on site in a separate meeting room to encourage employees to openly express their views. The researcher's main interest was in the participants' perceptions of events during the appraisal period (i.e. before receiving an appraisal score), during the appraisal meeting, and after the appraisal meeting, in terms of the administrative action taken.
Respondents were required to complete a questionnaire using multiple-indicator measures, as the concepts in which the researcher is interested (i.e. procedural justice and interactional justice) comprise different dimensions.
Some variables of this model will be measured with only one item of the questionnaire. In most cases, however, variables will be measured using scale scores: the scores on the items measuring a variable will be summed and divided by the number of items.
The researcher aims to calculate mean scores, standard deviations, Cronbach's alpha, and Pearson's r correlations between variables. The mean of a sample or a population is computed by adding all of the observations and dividing by the number of observations. The standard deviation is the square root of the variance; in a population, the variance is the average squared deviation from the population mean. Cronbach's alpha is a commonly used test of internal reliability. A computed alpha coefficient varies between 0 and 1: a coefficient of 1 refers to perfect internal reliability, 0 refers to no internal reliability, and 0.80 is employed as a rule of thumb for an acceptable level of internal reliability. Pearson's correlation coefficient measures the strength of the linear association between variables. The value of a correlation coefficient ranges between -1 and 1. The greater the absolute value of a correlation coefficient, the stronger the linear relationship: the strongest linear relationship is indicated by a correlation coefficient of -1 or 1, and the weakest by a coefficient equal to 0. A positive correlation means that if one variable gets bigger, the other variable tends to get bigger; a negative correlation means that if one variable gets bigger, the other variable tends to get smaller.
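To make the planned calculations concrete, the following is a minimal sketch, not the study's actual analysis script, of how these statistics could be computed in Python with pandas and NumPy; the column names and response values are purely hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical 1-5 Likert responses from nine respondents for a two-item
# procedural justice scale and a single overall outcome item.
df = pd.DataFrame({
    "consistency_1": [4, 5, 3, 4, 2, 5, 4, 3, 4],
    "consistency_2": [4, 4, 3, 5, 2, 5, 4, 3, 5],
    "perceived_procedural_justice": [4, 5, 3, 4, 2, 5, 4, 3, 4],
})

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

scale_items = df[["consistency_1", "consistency_2"]]
scale_score = scale_items.mean(axis=1)  # sum of item scores divided by the number of items

print("Mean:", scale_score.mean())
print("Standard deviation:", scale_score.std(ddof=1))
print("Cronbach's alpha:", cronbach_alpha(scale_items))
print("Pearson's r:", np.corrcoef(scale_score, df["perceived_procedural_justice"])[0, 1])
```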
“Applied consistency” was measured by two questions. A five-point Likert scale format was used (1 = very unfair to 5 = very fair). A sample question reads as follows: “How fair is the way your supervisor is administering the performance appraisal session for all subordinates involved?”
“Free from bias” was measured by one question. A five-point Likert scale format was used (1 = very unfair to 5 = very fair). The question reads as follows: “How fair is it to say that the performance appraisal session is free from bias?”
“Accurate information” was measured by two questions. A five-point Likert scale format was used (1 = very unfair to 5 = very fair). A sample question reads as follows: “How fair is it to say that the information collected for performance appraisal is accurate?”
“Mechanism” was measured by one question. A five-point Likert scale format was used (1 = very unfair to 5 = very fair). The question reads as follows: “How fair is it to say that the mechanism used to correct flawed or inaccurate decisions is effectively considered during performance appraisal?”
“Standards of ethics” was measured by two questions. A five-point Likert scale format was used (1 = very unfair to 5 = very fair). A sample question reads as follows: “How fair is it to say that the prevailing standards of ethics form a part of the performance appraisal?”
“Consideration of opinions” was measured by one question. A five-point Likert scale format was used (1 = very unfair to 5 = very fair). The question reads as follows: “How fair is it to say that the performance appraisal considers the opinions of various groups within the company?”
“Deception” was measured by one question. A five-point Likert scale format was used (1 = very unfair to 5 = very fair). The question reads as follows: “How fair is it to say that the supervisor's words and actions are consistent?”
“Employee's privacy” was measured by one question. A five-point Likert scale format was used (1 = very unfair to 5 = very fair). The question reads as follows: “How fair is it to say that your supervisor does not gossip or spread rumours?”
“Treatment” was measured by one question. A five-point Likert scale format was used (1 = very unfair to 5 = very fair). The question reads as follows: “How fair is it to say that you have never experienced your supervisor humiliating you during performance appraisal?”
“Judgement” was measured by two questions. A five-point Likert scale format was used (1 = very unfair to 5 = very fair). A sample question reads as follows: “How fair is it to say that your supervisor has never communicated any unfair statement about your performance?”
“Perceived procedural justice” was measured by one question. A five-point Likert scale format was used (1 = very unfair to 5 = very fair). The question reads as follows: “How fair were the procedures used to administer the performance appraisal session?”
“Perceived interactional justice” was measured by one question. A five-point Likert scale format was used (1 = very unfair to 5 = very fair). The question reads as follows: “How fair were the interactions between you and your supervisor during the performance appraisal session?”
Although 16 employees indicated interest in this research, due to internal and external factors only nine were ultimately able to take part in the interviews, giving a 28% response rate relative to the total sample. Four participants were female and five were male.
Results are outlined for procedural and interactional justice factors identified in the existing literature.
Folger et al.'s (1992) due process model of performance appraisal stipulates three essential factors for procedural justice: adequate notice, fair hearing and judgement based on evidence. Company policy requires supervisors to establish performance standards for each employee at the start of the appraisal period. All of the interviewed participants had been given performance standards or objectives in advance.
“I have received objectives...in theory they were very well defined, but in practice I am occasionally confused by conflicting goals / targets and bad organisation / communication at times.”
“We have very specific objectives. My objectives are very well defined they are measurable and achievable but not necessary in a steady progressive manner; however most of the time the time scale agreed is realistic.”
Employee participation in setting standards (goals) varied among respondents.
“We discuss our goals together and review it at least on a monthly base.”
“I work closely alongside my manager to set achievable goals for myself and other work colleagues within the business unit.”
“I was not really involved in setting standards, some of these standards appear in company policies but those are quite general, but my specific goals are defined by my supervisor. I wish to be more involved in it.”
Results revealed that a lack of feedback could produce negative perceptions of appraisal fairness.
“Unfortunately, I do not regularly receive feedback on my performance from my supervisor between two appraisal sessions. I think it would help me to keep myself focused on requirements and improve my performance instantly and not wait until the next appraisal meeting. I think a responsible supervisor should give feedback often.”
Only four participants (44%) indicated that they had received supervisory feedback during the appraisal period. This feedback was strongly appreciated.
“We discuss our performance on weekly or bi-weekly base and agree what is important to keep up as it is and what needs to be improved.”
Comments like these provide evidence that some employees care about appraisal discussions and think they are very important.
Folger et al. (1992) argue that empowering employees to challenge appraisal ratings is important. Specifically, empowered employees can provide vital information on performance constraints like inadequate technology of which the appraiser may not be aware. One respondent said: “It was difficult to explain to my supervisor why his ratings are not correct. As a matter of fact he did not change the ratings at our meeting. It took me a lot of effort after the appraisal meeting to prove that his ratings were incorrect. But at the end he corrected my scores and we both agreed on it.”
Another employee praised his supervisor for being approachable with concerns. “ I was able to discuss my concerns with my supervisor because he is very supportive, though the outcome of the appraisal was not changed at least I had the opportunity to voice my concerns”
In general, the respondents supported Korsgaard and Roberson's (1995) non-instrumental perspective on voice, valuing it intrinsically “regardless whether the input influences the decision.”
Overall, participants indicated that Folger et al.'s (1992) fair hearing criterion was important in providing a sense of procedural fairness.
The third essential feature of a due process performance appraisal system is judgement based on evidence. This has three components: data accuracy, rater bias, and consistent application of performance standards. Leventhal, Karuza and Fry (1980) argue that employees' fairness perceptions of their appraisal ratings are enhanced by the use of accurate records and information. Some participants said they were dissatisfied with their appraisal partly because it was based on inaccurate information. For example:
“I believe the skills matrix is important however, this can just be of one person's assumption and not a true reflection with enough evidence.”
On the other hand, some participants said they were satisfied with their appraisal because it was based on accurate information.
“They are rational; they are based on tasks given to me. There seems to be nothing subjective in making judgement on my performance.” Or another one said: “The ratings of my appraisal are informal but documentation is what is needed. The objective support is a comparison between what progress has been made and what agreed at the previous appraisal.”
Rater bias is the second dimension of judgement based on evidence. In earlier research, rater bias was usually attributed to flaws in the appraisal rating instrument; more recently, the focus has shifted to appraisal policies and procedures that promote or allow bias (Greenberg 1986). In this study, participants similarly complained about rater bias allowed by policies and procedures rather than anything inherent in the appraisal instrument. Two out of nine participants expressed concerns about biased appraisal decisions. In particular, some participants reported receiving low scores after openly expressing dissatisfaction with organisational decisions or policies.
“One appraisal was not good to me. I do not know what else I am supposed to do when it comes to my job. I never thought that speaking your mind about things you think are wrong means that you get judged negatively.”
In one example a participant felt that an earlier disagreement with the supervisor over mobile phone usage was reflected in a low rating in his/her appraisal. “No crime is committed when someone has a different view on issues. One of incident should not influence the whole outcome of the appraisal, when I was always responsible in performing my duties.”
Both participants mentioned that after these incidents they “had a quiet talk with their supervisor and things were discussed and cleared up.”
Consistent application of performance standards is the third dimension of judgement based on evidence, and is widely considered to be one of the most important determinants of procedural justice (Leventhal et al. 1980; Greenberg 1986; Erdogan et al. 2001). Most participants said the performance standards were well defined and communicated by the supervisor. When participants were asked to rate how well job-relevant performance standards and job-relevant behaviour standards were communicated and applied during appraisal, three participants answered that the standards were communicated absolutely clearly and six answered that they were communicated and applied very well.
Interactional justice refers to the quality of the employee's interpersonal treatment during the enactment of performance appraisal procedures (Bies 2001). Bies (2001) identifies four factors which affect employees' fairness perceptions of the interpersonal treatment received from the supervisor: the extent to which the supervisor is deceptive, invades employees' privacy, is disrespectful, and makes derogatory judgements about employees. Results revealed only one or a few instances of injustice for each of these factors. Given the few cases reported, interactional injustice does not appear to be a common problem; however, the incidents that were reported confirm that these factors are sometimes relevant to at least some people.
The first interactional justice factor is deception. Deception occurs when a supervisor's words and actions are incongruous (Bies 2001). In this study one incident of deception was described. An employee claimed that the supervisor had not had a one-to-one discussion with him while others had. The reason he did not have a discussion was that he was a new employee at the time, and the supervisor intended to involve him in the appraisal process at a later date. The employee felt overlooked, although he later understood why it was reasonable to exclude him from that round of appraisal.
Invasion of privacy refers to the supervisor's disclosure of personal information about an employee to another person, while disrespect is shown when, for example, a supervisor speaks to a subordinate in a harsh manner in the presence of other employees (Bies 2001). No incidents of either kind were mentioned in this study; participants stated definitively that their privacy had never been violated.
The third interactional justice factor is treatment during the appraisal session. Participants were very comfortable with the treatment they received. Some examples:
“The communication is open and honest. Occasionally, there can be some slight misunderstanding but we always manage to discuss it honestly.”
“I think I was treated very well during my appraisal session. The communication we had was very good never had problems with it.”
“Very good communication with questions asked from both sides.”
“Good. He listens to what I say, he listens to my opinions and gives me advice and some good ideas.”
The fourth interactional justice factor is derogatory judgements. Derogatory judgements refer to untrue statements and inaccurate judgements made by the supervisor about an employee's performance (Bies 2001). The results of this study revealed a single incident of this kind. The affected participant stated: “I was wrongly accused by my supervisor; however I had a chance soon after to clarify the matter and change my supervisor's opinion on it and we both came to the same understanding on it.”
Mean scores and standard deviations for all variables are presented in Table 1. For the multiple-item additive scales, Cronbach's alpha reliability coefficients were computed; these are presented in the fifth column of Table 1. Cronbach's alpha is the most widely used reliability coefficient and is a measure of internal consistency. Table 1 shows that all Cronbach's alpha coefficients were higher than 0.80, implying that all additive scales were highly reliable measures. Table 1 also shows that the mean scores of all variables were rather positive: all means are above the theoretical midpoint of the scale.
Table 1: Mean scores, standard deviations and Cronbach's alpha reliability coefficients

Variable                   N items   Mean score   Standard deviation   Cronbach's alpha
Applied consistency        2         3.78         0.82                 0.83
Free from bias             1         3.67         0.91                 -
Accurate information       2         3.61         0.80                 0.82
Mechanism                  1         3.32         0.92                 -
Standards of ethics        2         3.83         0.83                 0.84
Consideration of options   1         3.70         0.80                 -
Deception                  1         3.33         0.91                 -
Employee's privacy         1         4.00         0.78                 -
Treatment                  1         4.67         0.80                 -
Judgements                 2         4.22         0.79                 0.89
Procedural justice         1         3.89         0.76                 -
Interactional justice      1         4.33         0.91                 -

(Cronbach's alpha is reported only for the multiple-item scales.)
Table 2 summarises the Pearson correlation coefficients between all variables of the model of procedural justice of performance appraisal sessions. All correlations are positive, and almost all reach the level of statistical significance.
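As an illustration of how the significance of such correlations could be checked, the sketch below uses SciPy on hypothetical ratings from nine respondents. The data and variable names are assumptions for demonstration only, not the study's actual figures.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical five-point ratings from nine respondents
accurate_information = np.array([4, 3, 4, 4, 3, 4, 5, 3, 4])  # procedural criterion
procedural_justice = np.array([4, 3, 4, 5, 3, 4, 5, 4, 4])    # perceived procedural justice

r, p_value = pearsonr(accurate_information, procedural_justice)
print(f"r = {r:.2f}, two-tailed p = {p_value:.3f}")

# With n = 9 (7 degrees of freedom) only correlations of roughly |r| > 0.67
# reach significance at the conventional 0.05 level, so small samples demand caution.
```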
Hypothesis 1 was strongly supported by the research data. The level of perceived procedural justice of the performance appraisal session correlated positively with the six procedural criteria of applied consistency, free from bias, accurate information, mechanism, standards of ethics and consideration of options.
Hypothesis 2 was also supported by the research data. The level of perceived interactional justice of the performance appraisal session correlated positively with the four criteria of deception, employee's privacy, treatment and judgements.
With respect to the procedural justice factors, the results provide support for Folger et al.'s (1992) due process model of performance appraisal and are generally consistent with empirical findings from other studies (see for example Korsgaard and Roberson 1995; Williams and Levy 2000). According to the model, procedural justice depends on three essential criteria: adequate notice, fair hearing and judgement based on evidence. Adequate notice involves the establishment of performance objectives and standards at the start of the appraisal period and requires employee input in setting those standards; employees should also receive continuous and timely feedback throughout the appraisal period. The results fully support the adequate notice criterion: participants felt that their input in setting performance objectives and the amount of feedback they obtained from their supervisor were fair. It is likely that participants' fairness perceptions regarding supervisory feedback were influenced by the regular management meetings held weekly on Mondays.
The second essential factor, a fair hearing, was also substantiated by the results of this study. Fair hearing refers to the formal provisions made by the rater, such as an appraisal meeting, to discuss the ratee's performance.
The final factor of due process appraisals, judgement based on evidence, was partially supported by the findings. Fulfilment of this factor requires unbiased and accurate appraisal ratings and consistency in the application of performance standards. Constant communication, including frequent meetings with the supervisor, encouraged participants to feel that the supervisor was aware of their performance. The findings revealed that consistent application of performance standards did not influence participants' fairness perceptions, most likely because job descriptions and company policies already provided some information on performance guidelines. Frequent appraisal provides more opportunities for clarifying objectives and seeking feedback, giving more adequate notice of, and less chance of a surprise from, a poor appraisal rating. An appraisal based on job-relevant criteria is a natural complement to Folger et al.'s (1992) judgement based on evidence, reducing the potential for biased decisions.
Results from this study confirm that interactional justice perceptions reflect how employees have been treated by their supervisor. In addition, participants evaluated their supervisor's treatment throughout the entire appraisal period, not just during the appraisal interview. Only a few cases of interactional injustice were reported, indicating that participants generally enjoy good relations with their supervisor.
Both hypotheses were confirmed. The model developed from the procedural and interactional justice literature therefore appears to be a useful one. In particular, criteria from the relational model of procedural justice may contribute to participants' positive perceptions of performance appraisal sessions. Organisations may profit from these findings by designing performance appraisal systems and sessions that meet the criteria of procedural and interactional justice and that offer desirable outcomes. The results of such fair performance appraisal systems, and of appraisal sessions guided by supervisors with strong interpersonal skills, are very positive.
This research employed a qualitative and quantitative case study methodology to gain an in-depth understanding of the factors that influence employees' fairness perceptions of their performance appraisals. The study was limited by its focus on a single employer and nine participants; as a result, its findings may not be representative of other organisations. This highlights a need for research involving larger samples and a wider range of quantitative data collection methods, which could establish whether the findings of this research generalise.
All information (interview transcripts and questionnaires) received from participants will remain the property of the researcher and will not be used for any purpose other than the execution of this research.
Antonioni, D. and Park, H. (2001) “The relationship between rate effect and three sources of 360-degree feedback ratings”, Journal of Management, Vol. 27, pp.479-495.
Arvey, R.D. and Murphy, K.R. (1998) “Performance evaluation in work settings”, Annual Review of Psychology, Vol. 49. pp. 141-168.
Bettenhausen, K., and Murnighan, J.K. (1986) "The emergence of norms in competitive decision-making groups", Administrative Science Quarterly, Vol. 30, pp. 350-372.
Bies, R.J. and Moag, J.F. (1986) "Interactional justice: Communication criteria of fairness." In R.J. Lewicki, B.H. Sheppard and M.H. Bazerman (Eds.) Research on negotiations in organizations, Vol. 1, pp. 43-55.
Bies, R.J. (2001) "Interactional Justice: The Sacred and the Profane." In J. Greenberg and R. Cropanzano (Eds.) Advances in Organizational Justice, Stanford, CA: Stanford University Press, pp. 89-118.
Blau G. (1999) “Testing the longitudinal impact of work variables and performance appraisal satisfaction on subsequent overall job satisfaction”, Human Relations, Vol. 52, pp. 1099-1113.
Boswell, W.R., and Boudreau, J.W. (2000) “Employee satisfaction with performance appraisal and appraisers: The role of perceived appraisal use.” Human Resource Management Review, Vol.7, pp. 270-299.
Bretz, R.D., Milkovich, G.T., Read, W. (1992) “The current state of performance appraisal research and practice: Concerns, directions and implications.” Journal of Management, Vol. 18, pp. 309-321.
Bryman, A. and Bell, E. (2007) Business Research Methods. Oxford University Press, New York.
Cardy, R.L. and Dobbins, G.H. (1994) Performance appraisal: Alternative perspectives. Cincinnati, OH: South-Western Publishing.
Catano & Associates (1997). “The RCMP NCO Promotion System: Report of the external review team.” Ottawa, ON: RCMP
Catano, V.M., Darr, W., Campbell C.A. (2007) “Performance Appraisal of Behaviour-Based Competencies: A Reliable and Valid Procedure.” Personnel Psychology, Vol. 60, pp. 201-230.
Cawley, B.D., Keeping, L.M., Levy, P.E. (1998). “Participation in the performance appraisal process and employee reactions: A meta-analytical review of field investigations.” Journal of Applied Psychology, Vol. 83, pp. 615-633.
Cleveland, J.N. and Murphy, K.R. (1992). “Analyzing performance appraisal as goal-directed behaviour.” Research in Personnel and Human Resources Management, Vol. 10, pp. 121-185.
Cropanzano, R., Ambrose, M.L. (2001) Procedural and Distributive Justice are More Similar than you think: A Monistic Perspective and a Research Agenda. In Advances in Organizational Justice, (Eds.) J. Greenberg and R. Cropanzano, Stanford, CA: Stanford University Press, 119-151.
Cropanzano, R., Prehar, C.A., Chen, P.Y. (2002) “Using Social Exchange Theory to Distinguish Procedural from Interactional Justice.” Group and Organization Management, Vol. 27, pp. 324-351.
DeNisi, A.S., and Peters, L.H. (1996) “Organization of information in memory and the performance appraisal process: Evidence from the field.” Journal of Applied Psychology, Vol. 81, pp. 717-737.
Denzin, N.K. and Lincoln, Y.S. (1994) "Introduction: Entering the field of qualitative research." In N.K. Denzin and Y.S. Lincoln (Eds.) Handbook of qualitative research, pp. 1-17. Thousand Oaks, CA: Sage.
Denzin, N.K. and Lincoln, Y.S. (2000) "Introduction: The discipline and practice of qualitative research." In N.K. Denzin and Y.S. Lincoln (Eds.) Handbook of qualitative research, pp. 1-17. Thousand Oaks, CA: Sage.
Dubois, D. (1993). “Competency-based performance: A strategy for organisational change.” Boston: HRD Press
Eisenhardt, K. (1989) “Building Theories from Case Study research”, Academy of Management Review, Vol. 14, pp. 532-550.
Erdogan, B., Kraimer, M.L., Liden, R. (2001) “Procedural justice as a two-dimensional construct: An examination in the performance appraisal account.” Journal of Applied Behavioural Science, Vol. 37, pp. 205-222.
Farr, J.L., Levy, P.E. (2004) Performance appraisal. In L.L. Koppes (Eds.), The Science and practice of industrial-organizational psychology: The first hundred years. Erlbaum.
Feldman, J.M. (1981) “Beyond attribution theory: Cognitive processes in performance appraisal.” Journal of Applied Psychology, Vol. 66 pp. 127-148.
Ferris, G.R., Judge, T.A., Rowland, K.M., Fitzgibbons, D.E. (1994) “Subordinate influence and the performance evaluation process: test of model.” Organizational Behaviour & Human Decision Processes, Vol. 58, pp. 101.
Fletcher, C. (2001). “Performance appraisal and management: The developing research agenda.” Journal of Occupational and Organisational Psychology, Vol. 74, pp. 473-487.
Fletcher, C. and Baldry, C. (2000) “A study of individual differences and self-awareness in the context of multi-source feedback.” Journal of Occupational & Organizational Psychology, Vol. 73, pp. 303-319.
Fletcher, C. and Perry, E.L. (2001) "Performance appraisal and feedback: A consideration of national culture and a review of contemporary research and future trends." In Anderson, N., Ones, D.S., Sinangil, H.K., Viswesvaran, C. (Eds.) Handbook of industrial, work and organisational psychology. London: Sage.
Folger, R., and Konovsky, M.A. (1989) “Effects of Procedural and Distributive Justice on reactions to Pay Raise Decisions.” Academy of Management Journal, Vol. 32, pp. 115-130.
Folger, R., Konovsky, M.A. and Cropanzano, R. (1992) "A due process metaphor for performance appraisal." Research in Organizational Behaviour, Vol. 14, pp. 129-177.
Forgas, J.P., George, J.M. (2001) "Affective influences on judgements and behaviour in organizations: An information processing perspective." Organizational Behaviour & Human Decision Processes, Vol. 86, pp. 3-34.
Frink, D.D., Ferris, G.R. (1998) “Accountability, impression management, and goal setting in the performance evaluation process.” Human Relations, Vol. 51, pp. 1259.
Gilliland, S.W., Langdon, J.C. (1998) "Creating performance management systems that promote perceptions of fairness." In Smither, J.W. (Ed.) Performance appraisal: State of the art in practice. San Francisco: Jossey-Bass.
Giles, R. and Landauer, C. (1984) "Setting Specific Standards for Appraising Creative Staffs." Personnel Administrator, Vol. 3, pp. 41.
Harris, M.M. (1994) “Rater motivation in the performance appraisal context: A theoretical framework.” Journal of Management, Vol. 20, pp. 737.
Harris, M.M., Sutton, R. (1986) "Functions of parting ceremonies in dying organisations." Academy of Management Journal, Vol. 29, pp. 5-30.
Heathfield, S. (2007) "Performance appraisals don't work - what does?", The Journal for Quality & Participation, Spring 2007.
Hebert, B.G., Vorauer, J.D. (2003) "Seeing through the screen: Is evaluative feedback communicated more effectively in face-to-face or computer-mediated exchanges?" Computers in Human Behaviour, Vol. 19, pp. 25-38.
Himelfarb, F. (1996) Training and executive development in the RCMP, Ottawa; RCMP.
Holbrook, L. (2002) “Contact Points and Flash Points: Conceptualizing the Use of Justice Mechanism in the Performance Appraisal Interview.” Human Resource Management Review, Vol. 12, pp. 101-123.
Ilgen, D.R., Barnes-Farrel, J.L., McKellin, D.B. (1993) “Performance appraisal process research in the 1980s: What has it contributed to appraisals in use?” Organizational Behaviour & Human Decision Processes, Vol. 54, pp. 321-368.
Klein, A.L. (1996). “Validity and reliability for competency-based systems: Reducing litigation risks.” Compensation and Benefits Review, Vol. 28, pp. 31-37.
Korsgaard, M.A., Roberson, L. (1995) "Procedural Justice in Performance Evaluation: The Role of Instrumental and Non-Instrumental Voice in Performance Appraisal Discussions." Journal of Management, Vol. 21, pp. 657-669.
Korsgaard, M.A., Roberson, L., Rymph, R.D. (1998) "What Motivates Fairness? The Role of Subordinate Assertive Behaviour on Managers' Interactional Fairness." Journal of Applied Psychology, Vol. 83, pp. 731-744.
Kozlowski, S.W.J., Chao, G.T., Morrison, R.F. (1998) Games raters play: Politics, strategies, and impression management in performance appraisal. In J. W. Smither (Eds.) Performance appraisal: State of the art in practice: 163-205. San Francisco: Jossey-Bass.
Latham, G.P., Almost, J., Mann, S., Moore, C. (2005). “New developments in performance management.” Group Dynamics, Vol. 34, pp. 77-87.
Latham, G.P., Mann, S. (2006). “Advances in the science of performance appraisal: Implications for practice.” International Review of Industrial and Organisational Psychology, Vol. 21, pp. 295-337.
Lefkowitz, J. (2000) "The role of interpersonal affective regard in supervisory performance ratings: A literature review and proposed causal model." Journal of Occupational & Organizational Psychology, Vol. 73, pp. 67-85.
Leonard-Barton, D. (1988) “Synergistic design for case studies: Longitudinal single-site and replicated multiple-site.” Paper presented at the NCF Conference on Longitudinal Research Methods in Organizations, Austin.
Leventhal, G.S., Karuza, J. and Fry, W.R. (1980) "Beyond fairness: A theory of allocation preferences." In G. Mikula (Ed.) Justice and Social Interaction, pp. 167-218. New York: Springer-Verlag.
Levy, P.E., Cawley, B.D., Foti, R.J. (1998) “Reactions to appraisal discrepancies: Performance ratings and attributions.” Journal of Business & Psychology, Vol. 12, pp. 437.
Levy, P.E., Steelman, L.A. (1997) Performance appraisal for team-based organizations: A prototypical multiple rater system. In M. Beyerlein, D. Johnson, S. Beyerlein (Eds.) Advances in interdisciplinary studies of work teams: Team implementation issues. 141-165. Greenwich, CT: JAI.
Levy, P.E., Williams, J.R. (1998) “The role of perceived system knowledge in predicting appraisal reactions, job satisfaction, and organizational commitment.” Journal of Organizational Behaviour, Vol. 19, pp. 53-65.
Lievens, F., Sanchez, J.I., De Corte, W. (2004). “Easing the inferential leap in competency modelling: The effects of task-related information and subject matter expertise.” Personnel Psychology, Vol. 57, pp. 881-905.
Malos, S.B. (1998). “Current legal issues in performance appraisal.” In Smither, J.W. (Eds.) Performance appraisal: State of the art in practice. San Francisco: Jossey-Bass.
Mani, B.G. (2002) “Performance appraisal systems, productivity and motivation: A case study.” Public Personnel Management, Vol. 31, pp. 141-159.
Mayer, R.C., Davis, J.H. (1999) “The effect of the performance appraisal system on trust for management: A field quasi-experiment.” Journal of Applied Psychology, Vol. 84, pp. 123-136.
Miles, M. (1979) “Qualitative data as an attractive nuisance: The problem of analysis.” Administrative Science Quarterly, Vol. 24, pp. 590-601.
Miles, M., Huberman, A.M. (1984) Qualitative data analysis. Beverly Hills, CA: Sage Publications.
Milliman, J., Nason, S., Zhu, C., De Cieri, H. (2002) “An exploratory assessment of the purposes of performance appraisal in North and Central America and the Pacific Rim.” Asia Pacific Journal of Human Resources, Vol. 40, pp. 105-122.
Mintzberg, H. (1979) “An emerging strategy of “direct” research.” Administrative Science Quarterly, Vol. 24, pp. 580-589.
Mintzberg, H. and McHugh, A. (1985) "Strategy formation in an adhocracy." Administrative Science Quarterly, Vol. 30, pp. 160-197.
Montague, N., (2007), “The performance appraisal: a powerful management tool”, Management Quarterly, Summer 2007
Murphy, K. R. and Cleveland, J. N. (1991) Performance appraisal: An organizational perspective. Boston: Allyn & Bacon.
Murphy, K.R. and Cleveland, J.N. (1995) Understanding performance appraisal: Social, organizational and goal-based perspectives. Thousand Oaks, CA: Sage.
Patton, F. (1999) “Oops, the future is past and we almost missed it! - Integrating quality and behavioural management methodologies.” Journal of Workplace Learning, Vol. 11, pp. 266-277.
Pettigrew, A. (1988) Longitudinal field research on change: Theory and practice. Paper presented at NSF Conference on Longitudinal Research Methods in Organizations, Austin.
Plachy, R.J. (1983) "Appraisal Scales that Measure Performance Outcomes and Job Results." Personnel, Vol. 5-6.
Rarick, C.A., Baxter, G. (1986). “Behaviourally Anchored Rating Scales (BARS): An Effective Performance Appraisal Approach.” Society for Advanced Management Journal, Winter.
Roberts, G.E., Reed, T. (1996) “Performance appraisal participation, goal setting and feedback.” Review of Public Personnel Administration, Vol. 16, pp. 4- 29.
Roberts, G.E. (1998) “Perspectives on Enduring and Emerging Issues in Performance Appraisal.” Public Personnel Management, Vol. 27, pp. 301-320.
Schippmann, J.S., Ash, R.A., Battista, M., Carr, L., Eyde, L.D., Hesketh, B. et al. (2000) "The practice of competency modelling." Personnel Psychology, Vol. 53, pp. 703-740.
Shah, J.B., Murphy, J. (1995) “Performance appraisals for improved productivity.” Journal of Management in Engineering, Vol. 11, pp. 26.
Skarlicki, D.P., Folger, R. (1997) “Retaliation in the Workplace: The Roles of Distributive, Procedural and Interactional Justice.” Journal of Applied Psychology, Vol. 82, pp.434-443.
Smither, J.W. (1998). “Lessons learned: Research implications for performance appraisal and management practice.” In Smither, J.W. (Eds.) Performance appraisal: State of the art in practice. San Francisco: Jossey-Bass.
Silverman, S.B. and Wexley, K.N. (1984) "Reaction of Employees to Performance Appraisal Interviews as a Function of their Participation in Rating Scales Development." Personnel Psychology, pp. 706-707.
Strauss, A. (1987) Qualitative analysis for social scientists. Cambridge, UK: Cambridge University Press.
Stoecker, R. (1991) "Evaluating and rethinking the case study", The Sociological Review, Vol. 39, pp. 88-112.
Sutton, R., Callahan, A. (1987) “The stigma of bankruptcy Spoiled organizational image and its management.” Academy of Management Journal, Vol. 30, pp. 405-436.
Taylor, M.S., Masterson, S.S., Renard, M.K., Tracy, K.B. (1998) “Managers' reactions to procedurally just performance management systems.” Academy of Management Journal, Vol. 41, pp.568-579.
Taylor, M.S., Tracy, K.B., Renard, M.K., Harrison, J.K. and Carroll, S.J. (1995) "Due process in performance appraisal: A quasi-experiment in procedural justice." Administrative Science Quarterly, Vol. 40, pp. 495-523.
Taylor, E.K., Wherry, R.J. (1951) “A study of leniency in two rating systems.” Personnel Psychology, Vol. 4, pp. 39-47.
Thibaut, J. and Walker, L. (1975) Procedural justice: A psychological analysis. Hillsdale, NJ: Lawrence Erlbaum Associates.
Tziner, A., Kopelman, R.E., Joanis, C. (1997) "Investigation of raters' and ratees' reactions to three methods of performance appraisal: BOS, BARS and GRS." Canadian Journal of Administrative Sciences, Vol. 14, pp. 396.
Tziner, A., Latham, G.P., Price, B.S., Haccoun, R. (1996) "Development and validation of a questionnaire for measuring perceived political considerations in performance appraisal." Journal of Organizational Behaviour, Vol. 17, pp. 179-190.
Tziner, A., Murphy, K.R. (1999) “Additional evidence of attitudinal influences in performance appraisal.” Journal of Business and Psychology, Vol. 13, pp. 407-419.
Van Maanen, J. (1988) Tales of the field: On writing Ethnography. Chicago: University of Chicago Press.
Verschuren, P.J.M. (2003) "Case study as a research strategy: some ambiguities and opportunities." International Journal of Social Research Methodology, Vol. 6, pp. 121-139.
Williams, J.R. and Levy, P.E. (2000) "Investigating Some Neglected Criteria: The Influence of Organizational Level and Perceived System Knowledge on Appraisal Reactions." Journal of Business and Psychology, Vol. 14, No. 3, pp. 501-513.
Yin, R. (1981) “The case study crisis: Some answers.” Administrative Science Quarterly, Vol. 26, pp. 58-65.
Yin, R. (1984) Case Study Research. Beverly Hills, CA: Sage Publications.