34. Psychosocial and Organizational Factors
Chapter Editors: Steven L. Sauter, Lawrence R. Murphy, Joseph J. Hurrell and Lennart Levi
Psychosocial and Organizational Factors
Steven L. Sauter, Joseph J. Hurrell Jr., Lawrence R. Murphy and Lennart Levi
Psychosocial Factors, Stress and Health
Social Support: An Interactive Stress Model
Person-Environment Fit
Robert D. Caplan
Hours of Work
Timothy H. Monk
Michael J. Smith
Autonomy and Control
Electronic Work Monitoring
Lawrence M. Schleifer
Role Clarity and Role Overload
Steve M. Jex
Chaya S. Piotrkowski
Job Future Ambiguity
John M. Ivancevich
Amiram D. Vinokur
Total Quality Management
Cary L. Cooper and Mike Smith
Lois E. Tetrick
Organizational Climate and Culture
Denise M. Rousseau
Performance Measures and Compensation
Richard L. Shell
Marilyn K. Gowing
Debra L. Nelson and James Campbell Quick
Type A/B Behaviour Pattern
C. David Jenkins
Suzanne C. Ouellette
John M. Schaubroeck
Locus of Control
Lawrence R. Murphy and Joseph J. Hurrell, Jr.
Ronald J. Burke
D. Wayne Corneil
Gender, Job Stress and Illness
Rosalind C. Barnett
Gwendolyn Puryear Keita
Selected Acute Physiological Outcomes
Andrew Steptoe and Tessa M. Pollard
Töres Theorell and Jeffrey V. Johnson
Bernard H. Fox
Soo-Yee Lim, Steven L. Sauter and Naomi G. Swanson
Carles Muntaner and William W. Eaton
Summary of Generic Prevention and Control Strategies
Cary L. Cooper and Sue Cartwright
The nature, prevalence, predictors and possible consequences of workplace violence have begun to attract the attention of labour and management practitioners and researchers, largely because of the increasing occurrence of highly visible workplace murders. Once the focus is placed on workplace violence, it becomes clear that several issues are involved, including the nature (or definition), prevalence, predictors, consequences and, ultimately, prevention of workplace violence.
Definition and Prevalence of Workplace Violence
The definition and prevalence of workplace violence are integrally related.
Consistent with the relative recency with which workplace violence has attracted attention, there is no uniform definition. This is an important issue for several reasons. First, until a uniform definition exists, any estimates of prevalence remain incomparable across studies and sites. Second, the nature of the violence is linked to strategies for prevention and intervention. For example, focusing on all instances of shootings within the workplace includes incidents that reflect the continuation of family conflicts, as well as those that reflect work-related stressors and conflicts. While employees would no doubt be affected in both situations, the control the organization has over the former is more limited, and hence the implications for interventions are different from those situations in which workplace shootings are a direct function of workplace stressors and conflicts.
Some statistics suggest that workplace murders are the fastest growing form of murder in the United States (for example, Anfuso 1994). In some jurisdictions (for example, New York State), murder is the modal cause of death in the workplace. Because of statistics such as these, workplace violence has attracted considerable attention recently. However, early indications suggest that those acts of workplace violence with the highest visibility (for example, murder, shootings) attract the greatest research scrutiny, but also occur with the least frequency. In contrast, verbal and psychological aggression against supervisors, subordinates and co-workers are far more common, but gather less attention. Supporting the notion of a close integration between definitional and prevalence issues, this would suggest that what is being studied in most cases is aggression rather than violence in the workplace.
Predictors of Workplace Violence
A reading of the literature on the predictors of workplace violence reveals that most of the attention has been focused on the development of a “profile” of the potentially violent or “disgruntled” employee (for example, Mantell and Albrecht 1994; Slora, Joy and Terris 1991). Most such profiles identify the following as the salient personal characteristics of a disgruntled employee: white, male, aged 20 to 35, a “loner”, a probable alcohol problem and a fascination with guns. Aside from the number of false-positive identifications this would lead to, this strategy is also based on identifying individuals predisposed to the most extreme forms of violence, and it ignores the larger group involved in most of the aggressive and less violent workplace incidents.
Going beyond “demographic” characteristics, there are suggestions that some of the personal factors implicated in violence outside of the workplace would extend to the workplace itself. Thus, inappropriate use of alcohol, general history of aggression in one’s current life or family of origin, and low self-esteem have been implicated in workplace violence.
A more recent strategy has been to identify the workplace conditions under which workplace violence is most likely to occur, that is, the physical and psychosocial conditions of the workplace. While the research on psychosocial factors is still in its infancy, it appears that feelings of job insecurity, perceptions that organizational policies and their implementation are unjust, harsh management and supervision styles, and electronic monitoring are associated with workplace aggression and violence (United States House of Representatives 1992; Fox and Levin 1994).
Cox and Leather (1994) look to the predictors of aggression and violence in general in their attempt to understand the physical factors that predict workplace violence. In this respect, they suggest that workplace violence may be associated with perceived crowding, and extreme heat and noise. However, these suggestions about the causes of workplace violence await empirical scrutiny.
Consequences of Workplace Violence
The research to date suggests that there are primary and secondary victims of workplace violence, both of which are worthy of research attention. Bank tellers or store clerks who are held up and employees who are assaulted at work by current or former co-workers are the obvious or direct victims of violence at work. However, consistent with the literature showing that much human behaviour is learned from observing others, witnesses to workplace violence are secondary victims. Both groups might be expected to suffer negative effects, and more research is needed to focus on the way in which both aggression and violence at work affect primary and secondary victims.
Prevention of Workplace Violence
Most of the literature on the prevention of workplace violence focuses at this stage on selection, that is, the prior identification of potentially violent individuals for the purpose of excluding them from employment in the first instance (for example, Mantell and Albrecht 1994). Such strategies are of dubious utility for ethical and legal reasons. From a scientific perspective, it is equally doubtful whether potentially violent employees could be identified with sufficient precision (i.e., without an unacceptably high number of false-positive identifications). Clearly, a preventive approach needs to focus on workplace issues and job design. Following Fox and Levin’s (1994) reasoning, ensuring that organizational policies and procedures are characterized by perceived justice will probably constitute an effective prevention technique.
Research on workplace violence is in its infancy, but gaining increasing attention. This bodes well for the further understanding, prediction and control of workplace aggression and violence.
Downsizing, layoffs, re-engineering, reshaping, reduction in force (RIF), mergers, early retirement, and outplacement—the description of these increasingly familiar changes has become a matter of commonplace jargon around the world in the past two decades. As companies have fallen on hard times, workers at all organizational levels have been let go and many remaining jobs have been altered. The job loss count in a single year (1992–93) includes Eastman Kodak, 2,000; Siemens, 13,000; Daimler-Benz, 27,000; Philips, 40,000; and IBM, 65,000 (The Economist 1993). Job cuts have occurred at companies earning healthy profits as well as at firms faced with the need to cut costs. The trend of cutting jobs and changing the way remaining jobs are performed is expected to continue even after worldwide economic growth returns.
Why has losing and changing jobs become so widespread? There is no simple answer that fits every organization or situation. However, one or more of a number of factors is usually implicated, including lost market share, increasing international and domestic competition, increasing labour costs, obsolete plant and technologies and poor managerial practices. These factors have resulted in managerial decisions to slim down, re-engineer jobs and alter the psychological contract between the employer and the worker.
A work situation in which an employee could count on job security or the opportunity to hold multiple positions via career-enhancing promotions in a single firm has changed drastically. Similarly, the binding power of the traditional employer-worker psychological contract has weakened as millions of managers and non-managers have been let go. Japan was once famous for providing “lifetime” employment to individuals. Today, even in Japan, a growing number of workers, especially in large firms, are not assured of lifetime employment. The Japanese, like their counterparts across the world, are facing what can be referred to as increased job insecurity and an ambiguous picture of what the future holds.
Job Insecurity: An Interpretation
Maslow (1954), Herzberg, Mausner and Snyderman (1959) and Super (1957) have proposed that individuals have a need for safety or security. That is, individual workers sense security when holding a permanent job or when able to control the tasks performed on the job. Unfortunately, only a limited number of empirical studies have thoroughly examined the job security needs of workers (Kuhnert and Palmer 1991; Kuhnert, Sims and Lahey 1989).
On the other hand, with the increased attention being paid to downsizing, layoffs and mergers, more researchers have begun to investigate the notion of job insecurity. The nature, causes and consequences of job insecurity have been considered by Greenhalgh and Rosenblatt (1984), who offer a definition of job insecurity as “perceived powerlessness to maintain desired continuity in a threatened job situation”. In Greenhalgh and Rosenblatt’s framework, job insecurity is considered a part of a person’s environment. In the stress literature, job insecurity is considered a stressor that introduces a threat which is interpreted and responded to by the individual. The individual’s interpretation and response could include decreased effort to perform well, feeling ill or below par, seeking employment elsewhere, increased coping to deal with the threat, or seeking more interaction with colleagues to buffer the feelings of insecurity.
Lazarus’ theory of psychological stress (Lazarus 1966; Lazarus and Folkman 1984) is centred on the concept of cognitive appraisal. Regardless of the actual severity of the danger facing a person, the occurrence of psychological stress depends upon the individual’s own evaluation of the threatening situation (here, job insecurity).
Selected Research on Job Insecurity
Unfortunately, like the research on job security, there is a paucity of well-designed studies of job insecurity. Furthermore, the majority of job insecurity studies rely on unitary measurement methods. Few researchers examining stressors in general, or job insecurity specifically, have adopted a multiple-level approach to assessment. This is understandable given the limitations of resources. However, the problems created by unitary assessments of job insecurity have resulted in a limited understanding of the construct. Four basic methods of measuring job insecurity are available to researchers: self-report, performance, psychophysiological and biochemical. It is still debatable whether these four types of measure assess different aspects of the consequences of job insecurity (Baum, Grunberg and Singer 1982). Each type of measure has limitations that must be recognized.
In addition to measurement problems, it must be noted that job insecurity research has concentrated predominantly on imminent or actual job loss. As noted by researchers (Greenhalgh and Rosenblatt 1984; Roskies and Louis-Guerin 1990), more attention should be paid to “concern about a significant deterioration in terms and conditions of employment”. The deterioration of working conditions would logically seem to play a role in a person’s attitudes and behaviours.
Brenner (1987) has discussed the relationship between a job insecurity factor, unemployment, and mortality. He proposed that uncertainty, or the threat of instability, rather than unemployment itself causes higher mortality. The threat of being unemployed or losing control of one’s job activities can be powerful enough to contribute to psychiatric problems.
In a study of 1,291 managers, Roskies and Louis-Guerin (1990) examined the perceptions of workers facing layoffs, as well as those of managers working in stable, growth-oriented firms. A minority of managers were stressed about imminent job loss. However, a substantial number of managers were more stressed about a deterioration in working conditions and long-term job security.
Roskies, Louis-Guerin and Fournier (1993) proposed that job insecurity may be a major psychological stressor. In their study of airline industry personnel, the researchers determined that personality disposition (positive and negative) plays a role in the impact of job insecurity on the mental health of workers.
Addressing the Problem of Job Insecurity
Organizations have numerous alternatives to downsizing, layoffs and reduction in force. Displaying compassion that clearly shows that management realizes the hardships that job loss and future job ambiguity pose is an important step. Alternatives such as reduced work weeks, across-the-board salary cuts, attractive early retirement packages, retraining existing employees and voluntary layoff programmes can be implemented (Wexley and Silverman 1993).
The global marketplace has increased job demands and job skill requirements. For some people, the effect of increased job demands and job skill requirements will provide career opportunities. For others, these changes could exacerbate the feelings of job insecurity. It is difficult to pinpoint exactly how individual workers will respond. However, managers must be aware of how job insecurity can result in negative consequences. Furthermore, managers need to acknowledge and respond to job insecurity. But possessing a better understanding of the notion of job insecurity and its potential negative impact on the performance, behaviour and attitudes of workers is a step in the right direction for managers.
It will obviously require more rigorous research to better understand the full range of consequences of job insecurity among selected workers. As additional information becomes available, managers need to be open-minded about attempting to help workers cope with job insecurity. Redefining the way work is organized and executed should become a useful alternative to traditional job design methods.
Since job insecurity is likely to remain a perceived threat for many, but not all, workers, managers need to develop and implement strategies to address this factor. The institutional costs of ignoring job insecurity are too great for any firm to accept. Whether managers can efficiently deal with workers who feel insecure about their jobs and working conditions is fast becoming a measure of managerial competency.
The term unemployment describes the situation of individuals who desire to work but are unable to trade their skills and labour for pay. It is used to indicate either an individual’s personal experience of failure to find gainful work, or the experience of an aggregate in a community, a geographic region or a country. The collective phenomenon of unemployment is often expressed as the unemployment rate, that is, the number of people who are seeking work divided by the total number of people in the labour force, which in turn consists of both the employed and the unemployed. Individuals who desire to work for pay but have given up their efforts to find work are termed discouraged workers. These persons are not listed in official reports as members of the group of unemployed workers, for they are no longer considered to be part of the labour force.
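The rate calculation described above can be sketched in a few lines of code. This is an illustrative sketch only; the figures in the usage example are invented, not taken from the article, and the function simply restates the definition given in the text (job seekers divided by the labour force, with discouraged workers excluded from both counts).

```python
def unemployment_rate(employed: int, unemployed: int) -> float:
    """Unemployment rate as a percentage: the number of people seeking
    work divided by the labour force (employed plus unemployed).

    Discouraged workers are excluded from both counts, since they are
    no longer considered part of the labour force.
    """
    labour_force = employed + unemployed
    return 100.0 * unemployed / labour_force


# Hypothetical labour-market figures for illustration only:
rate = unemployment_rate(employed=92_000_000, unemployed=8_000_000)
print(round(rate, 1))  # 8.0
```

Note that the denominator is the labour force, not the total population, which is why discouraged workers leaving the labour force can lower the reported rate without anyone finding work.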
The Organisation for Economic Co-operation and Development (OECD) provides statistical information on the magnitude of unemployment in 25 countries around the world (OECD 1995). These consist mostly of the economically developed countries of Europe and North America, as well as Japan, New Zealand and Australia. According to the report for the year 1994, the total unemployment rate in these countries was 8.1% (or 34.3 million individuals). In the developed countries of central and western Europe, the unemployment rate was 9.9% (11 million), in the southern European countries 13.7% (9.2 million), and in the United States 6.1% (8 million). Of the 25 countries studied, only six (Austria, Iceland, Japan, Mexico, Luxembourg and Switzerland) had an unemployment rate below 5%. The report projected only a slight overall decrease (less than one-half of 1%) in unemployment for the years 1995 and 1996. These figures suggest that millions of individuals will continue to be vulnerable to the harmful effects of unemployment in the foreseeable future (Reich 1991).
A large number of people become unemployed at various periods during their lives. Depending on the structure of the economy and on its cycles of expansion and contraction, unemployment may strike students who drop out of school; those who have graduated from a high school, trade school or college but find it difficult to enter the labour market for the first time; women seeking to return to gainful employment after raising their children; veterans of the armed services; and older persons who want to supplement their income after retirement. However, at any given time, the largest segment of the unemployed population, usually between 50 and 65%, consists of displaced workers who have lost their jobs. The problems associated with unemployment are most visible in this segment of the unemployed, partly because of its size. Unemployment is also a serious problem for minorities and younger persons, whose unemployment rates are often two to three times that of the general population (USDOL 1995).
The fundamental causes of unemployment are rooted in demographic, economic and technological changes. The restructuring of local and national economies usually gives rise to at least temporary periods of high unemployment rates. The trend towards the globalization of markets, coupled with accelerated technological changes, results in greater economic competition and the transfer of industries and services to new places that supply more advantageous economic conditions in terms of taxation, a cheaper labour force and more accommodating labour and environmental laws. Inevitably, these changes exacerbate the problems of unemployment in areas that are economically depressed.
Most people depend on the income from a job to provide themselves and their families with the necessities of life and to sustain their accustomed standard of living. When they lose a job, they experience a substantial reduction in their income. Mean duration of unemployment, in the United States for example, varies between 16 and 20 weeks, with a median between eight and ten weeks (USDOL 1995). If the period of unemployment that follows the job loss persists so that unemployment benefits are exhausted, the displaced worker faces a financial crisis. That crisis plays itself out as a cascading series of stressful events that may include loss of a car through repossession, foreclosure on a house, loss of medical care, and food shortages. Indeed, an abundance of research in Europe and the United States shows that economic hardship is the most consistent outcome of unemployment (Fryer and Payne 1986), and that economic hardship mediates the adverse impact of unemployment on various other outcomes, in particular, on mental health (Kessler, Turner and House 1988).
There is a great deal of evidence that job loss and unemployment produce significant deterioration in mental health (Fryer and Payne 1986). The most common outcomes of job loss and unemployment are increases in anxiety, somatic symptoms and depression symptomatology (Dooley, Catalano and Wilson 1994; Hamilton et al. 1990; Kessler, House and Turner 1987; Warr, Jackson and Banks 1988). Furthermore, there is some evidence that unemployment increases by over twofold the risk of onset of clinical depression (Dooley, Catalano and Wilson 1994). In addition to the well-documented adverse effects of unemployment on mental health, there is research that implicates unemployment as a contributing factor to other outcomes (see Catalano 1991 for a review). These outcomes include suicide (Brenner 1976), separation and divorce (Stack 1981; Liem and Liem 1988), child neglect and abuse (Steinberg, Catalano and Dooley 1981), alcohol abuse (Dooley, Catalano and Hough 1992; Catalano et al. 1993a), violence in the workplace (Catalano et al. 1993b), criminal behaviour (Allan and Steffensmeier 1989), and highway fatalities (Leigh and Waldon 1991). Finally, there is also some evidence, based primarily on self-report, that unemployment contributes to physical illness (Kessler, House and Turner 1987).
The adverse effects of unemployment on displaced workers are not limited to the period during which they have no jobs. In most instances, when workers become re-employed, their new jobs are significantly worse than the jobs they lost. Even after four years in their new positions, their earnings are substantially lower than those of similar workers who were not laid off (Ruhm 1991).
Because the fundamental causes of job loss and unemployment are rooted in societal and economic processes, remedies for their adverse social effects must be sought in comprehensive economic and social policies (Blinder 1987). At the same time, various community-based programmes can be undertaken to reduce the negative social and psychological impact of unemployment at the local level. There is overwhelming evidence that re-employment reduces distress and depression symptoms and restores psychosocial functioning to pre-unemployment levels (Kessler, Turner and House 1989; Vinokur, Caplan and Williams 1987). Therefore, programmes for displaced workers or others who wish to become employed should be aimed primarily at promoting and facilitating their re-employment or new entry into the labour force. A variety of such programmes have been tried successfully. Among these are special community-based intervention programmes for creating new ventures that in turn generate job opportunities (e.g., Last et al. 1995), and others that focus on retraining (e.g., Wolf et al. 1995).
Of the various programmes that attempt to promote re-employment, the most common are job search programmes organized as job clubs, which attempt to intensify job search efforts (Azrin and Besalel 1982), or workshops that focus more broadly on enhancing job search skills and facilitating the transition into re-employment in high-quality jobs (e.g., Caplan et al. 1989). Cost/benefit analyses have demonstrated that these job search programmes are cost effective (Meyer 1995; Vinokur et al. 1991). Furthermore, there is also evidence that they could prevent deterioration in mental health and possibly the onset of clinical depression (Price, van Ryn and Vinokur 1992).
Similarly, in the case of organizational downsizing, industries can reduce the scope of unemployment by devising ways to involve workers in the decision-making process regarding the management of the downsizing programme (Kozlowski et al. 1993; London 1995; Price 1990). Workers may choose to pool their resources and buy out the industry, thus avoiding layoffs; to reduce working hours to spread and even out the reduction in force; to agree to a reduction in wages to minimize layoffs; to retrain and/or relocate to take new jobs; or to participate in outplacement programmes. Employers can facilitate the process by timely implementation of a strategic plan that offers the above-mentioned programmes and services to workers at risk of being laid off. As has been indicated already, unemployment leads to pernicious outcomes at both the personal and societal level. A combination of comprehensive government policies, flexible downsizing strategies by business and industry, and community-based programmes can help to mitigate the adverse consequences of a problem that will continue to affect the lives of millions of people for years to come.
One of the more remarkable social transformations of this century was the emergence of a powerful Japanese economy from the debris of the Second World War. Fundamental to this climb to global competitiveness were a commitment to quality and a determination to prove false the then-common belief that Japanese goods were shoddy and worthless. Guided by the innovative teachings of Deming (1993), Juran (1988) and others, Japanese managers and engineers adopted practices that have ultimately evolved into a comprehensive management system rooted in the basic concept of quality. Fundamentally, this system represents a shift in thinking. The traditional view was that quality had to be balanced against the cost of attaining it. The view that Deming and Juran urged was that higher quality led to lower total cost and that a systems approach to improving work processes would help in attaining both of these objectives. Japanese managers adopted this management philosophy, engineers learned and practised statistical quality control, workers were trained and involved in process improvement, and the outcome was dramatic (Ishikawa 1985; Imai 1986).
By 1980, alarmed at the erosion of their markets and seeking to broaden their reach in the global economy, European and American managers began to search for ways to regain a competitive position. In the ensuing 15 years, more and more companies came to understand the principles underlying quality management and to apply them, initially in industrial production and later in the service sector as well. While there are a variety of names for this management system, the most commonly used is total quality management or TQM; an exception is the health care sector, which more frequently uses the term continuous quality improvement, or CQI. Recently, the term business process reengineering (BPR) has also come into use, but this tends to mean an emphasis on specific techniques for process improvement rather than on the adoption of a comprehensive management system or philosophy.
TQM is available in many “flavours”, but it is important to understand it as a system that includes both a management philosophy and a powerful set of tools for improving the efficiency of work processes. Some of the common elements of TQM include a focus on meeting customer requirements, employee involvement in improving work processes, and the use of data to guide decisions (Feigenbaum 1991; Mann 1989; Senge 1991).
Typically, organizations successfully adopting TQM find they must make changes on three fronts.
One is transformation. This involves such actions as defining and communicating a vision of the organization’s future, changing the management culture from top-down oversight to one of employee involvement, fostering collaboration instead of competition and refocusing the purpose of all work on meeting customer requirements. Seeing the organization as a system of interrelated processes is at the core of TQM, and is an essential means of securing a totally integrated effort towards improving performance at all levels. All employees must know the vision and the aim of the organization (the system) and understand where their work fits in it, or no amount of training in applying TQM process improvement tools can do much good. However, lack of genuine change of organizational culture, particularly among lower echelons of managers, is frequently the downfall of many nascent TQM efforts; Heilpern (1989) observes, “We have come to the conclusion that the major barriers to quality superiority are not technical, they are behavioural.” Unlike earlier, flawed “quality circle” programmes, in which improvement was expected to “convect” upward, TQM demands top management leadership and the firm expectation that middle management will facilitate employee participation (Hill 1991).
A second basis for successful TQM is strategic planning. The achievement of an organization’s vision and goals is tied to the development and deployment of a strategic quality plan. One corporation defined this as “a customer-driven plan for the application of quality principles to key business objectives and the continuous improvement of work processes” (Yarborough 1994). It is senior management’s responsibility—indeed, its obligation to workers, stockholders and beneficiaries alike—to link its quality philosophy to sound and feasible goals that can reasonably be attained. Deming (1993) called this “constancy of purpose” and saw its absence as a source of insecurity for the workforce of the organization. The fundamental intent of strategic planning is to align the activities of all of the people throughout the company or organization so that it can achieve its core goals and can react with agility to a changing environment. It is evident that it both requires and reinforces the need for widespread participation of supervisors and workers at all levels in shaping the goal-directed work of the company (Shiba, Graham and Walden 1994).
Only when these two changes are adequately carried out can one hope for success in the third: the implementation of continuous quality improvement. Quality outcomes, and with them customer satisfaction and improved competitive position, ultimately rest on widespread deployment of process improvement skills. Often, TQM programmes accomplish this through increased investments in training and through assignment of workers (frequently volunteers) to teams charged with addressing a problem. A basic concept of TQM is that the person most likely to know how a job can be done better is the person who is doing it at a given moment. Empowering these workers to make useful changes in their work processes is a part of the cultural transformation underlying TQM; equipping them with knowledge, skills and tools to do so is part of continuous quality improvement.
The collection of statistical data is a typical and basic step taken by workers and teams to understand how to improve work processes. Deming and others adapted their techniques from the seminal work of Shewhart in the 1920s (Schmidt and Finnigan 1992). Among the most useful TQM tools are: (a) the Pareto Chart, a graphical device for identifying the more frequently occurring problems, and hence the ones to be addressed first; (b) the statistical control chart, an analytic tool for ascertaining the degree of variability in the unimproved process; and (c) flow charting, a means to document exactly how the process is carried out at present. Possibly the most ubiquitous and important tool is the Ishikawa Diagram (or “fishbone” diagram), whose invention is credited to Kaoru Ishikawa (1985). This instrument is a simple but effective way by which team members can collaborate on identifying the root causes of the process problem under study, and thus point the path to process improvement.
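Two of the tools named above are easy to illustrate. The sketch below, with hypothetical defect tallies and process measurements invented for the example, orders problem categories by frequency as a Pareto chart would, and computes simplified Shewhart-style control limits (process mean plus or minus three standard deviations) for a series of measurements. This is a minimal illustration of the ideas, not a full statistical process control implementation.

```python
from statistics import mean, stdev


def pareto_order(defect_counts: dict) -> list:
    """Order problem categories by frequency, most common first,
    as a Pareto chart would display them."""
    return sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True)


def control_limits(samples: list) -> tuple:
    """Simplified Shewhart-style limits for an individuals chart:
    (lower limit, centre line, upper limit) at mean +/- 3 sigma.
    Points outside these limits suggest special-cause variation."""
    m = mean(samples)
    s = stdev(samples)  # sample standard deviation
    return m - 3 * s, m, m + 3 * s


# Hypothetical defect tallies gathered by a process-improvement team:
counts = {"scratches": 57, "misalignment": 12, "cracks": 4, "stains": 27}
print(pareto_order(counts))  # scratches first, then stains, ...

# Hypothetical daily measurements of a process characteristic:
lower, centre, upper = control_limits([10, 12, 11, 13, 14])
print(centre)  # 12
```

In practice the Pareto ordering tells a team which problem to attack first, while the control limits help distinguish routine variability in the unimproved process from genuine process changes.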
TQM, effectively implemented, may be important to workers and worker health in many ways. For example, the adoption of TQM can have an indirect influence. In a very basic sense, an organization that makes a quality transformation has arguably improved its chances of economic survival and success, and hence those of its employees. Moreover, it is likely to be one where respect for people is a basic tenet. Indeed, TQM experts often speak of “shared values”, those things that must be exemplified in the behaviour of both management and workers. These are often publicized throughout the organization as formal values statements or aspiration statements, and typically include such emotive language as “trust”, “respecting each other”, “open communications”, and “valuing our diversity” (Howard 1990).
Thus, it is tempting to suppose that quality workplaces will be “worker-friendly”—where worker-improved processes become less hazardous and where the climate is less stressful. The logic of quality is to build quality into a product or service, not to detect failures after the fact. It can be summed up in a word—prevention (Widfeldt and Widfeldt 1992). Such a logic is clearly compatible with the public health logic of emphasizing prevention in occupational health. As Williams (1993) points out in a hypothetical example, “If the quality and design of castings in the foundry industry were improved there would be reduced exposure ... to vibration as less finishing of castings would be needed.” Some anecdotal support for this supposition comes from satisfied employers who cite trend data on job health measures, climate surveys that show better employee satisfaction, and more numerous safety and health awards in facilities using TQM. Williams further presents two case studies in UK settings that exemplify such employer reports (Williams 1993).
Unfortunately, virtually no published studies offer firm evidence on the matter. What is lacking is a research base of controlled studies that document health outcomes, consider the possibility of detrimental as well as positive health influences, and link all of this causally to measurable factors of business philosophy and TQM practice. Given the significant prevalence of TQM enterprises in the global economy of the 1990s, this is a research agenda with genuine potential to define whether TQM is in fact a supportive tool in the prevention armamentarium of occupational safety and health.
We are on somewhat firmer ground to suggest that TQM can have a direct influence on worker health when it explicitly focuses quality improvement efforts on safety and health. Obviously, like all other work in an enterprise, occupational and environmental health activity is made up of interrelated processes, and the tools of process improvement are readily applied to them. One of the criteria against which candidates are examined for the Baldrige Award, the most important competitive honour granted to US organizations, is the competitor’s improvements in occupational health and safety. Yarborough has described how the occupational and environmental health (OEH) employees of a major corporation were instructed by senior management to adopt TQM with the rest of the company and how OEH was integrated into the company’s strategic quality plan (Yarborough 1994). The chief executive of a US utility that was the first non-Japanese company ever to win Japan’s coveted Deming Prize notes that safety was accorded a high priority in the TQM effort: “Of all the company’s major quality indicators, the only one that addresses the internal customer is employee safety.” By defining safety as a process, subjecting it to continuous improvement, and tracking lost-time injuries per 100 employees as a quality indicator, the utility reduced its injury rate by half, reaching the lowest point in the history of the company (Hudiberg 1991).
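The "lost-time injuries per 100 employees" indicator mentioned above is commonly normalized (in US practice) by a 200,000-hour denominator, since 200,000 hours is roughly 100 full-time workers over one year. The sketch below assumes that conventional formula; the utility's exact calculation is not given in the text, and the figures are hypothetical.

```python
def lost_time_injury_rate(injuries, hours_worked):
    """Lost-time injuries per 100 full-time-equivalent employees
    per year. 200,000 = 100 employees x 40 hours x 50 weeks, the
    conventional US normalization (assumed here)."""
    return injuries * 200_000 / hours_worked

# Hypothetical figures: halving the count of lost-time injuries
# halves the indicator, as in the utility's reported result.
before = lost_time_injury_rate(48, 2_000_000)   # 4.8 per 100 employees
after = lost_time_injury_rate(24, 2_000_000)    # 2.4 per 100 employees
print(before, after)
```

Expressing safety as a rate of this kind is what allows it to be tracked as a quality indicator and subjected to continuous improvement like any other process measure.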
In summary, TQM is a comprehensive management system grounded in a management philosophy that emphasizes the human dimensions of work. It is supported by a powerful set of technologies that use data derived from work processes to document, analyse and continuously improve these processes.
Selye (1974) suggested that having to live with other people is one of the most stressful aspects of life. Good relations between members of a work group are considered a central factor in individual and organizational health (Cooper and Payne 1988), particularly in terms of the boss–subordinate relationship. Poor relationships at work are defined as having “low trust, low levels of supportiveness and low interest in problem solving within the organization” (Cooper and Payne 1988). Mistrust is positively correlated with high role ambiguity, which leads to inadequate interpersonal communications between individuals and psychological strain in the form of low job satisfaction, decreased well-being and a feeling of being threatened by one’s superior and colleagues (Kahn et al. 1964; French and Caplan 1973).
Supportive social relationships at work are less likely to create the interpersonal pressures associated with rivalry, office politics and unconstructive competition (Cooper and Payne 1991). McLean (1979) suggests that social support in the form of group cohesion, interpersonal trust and liking for a superior is associated with decreased levels of perceived job stress and better health. Inconsiderate behaviour on the part of a supervisor appears to contribute significantly to feelings of job pressure (McLean 1979). Close supervision and rigid performance monitoring also have stressful consequences. A great deal of research indicates that a managerial style characterized by lack of effective consultation and communication, unjustified restrictions on employee behaviour and lack of control over one’s job is associated with negative psychological moods and behavioural responses (for example, escapist drinking and heavy smoking) (Caplan et al. 1975), increased cardiovascular risk (Karasek 1979) and other stress-related manifestations. Conversely, offering employees broader opportunities to participate in decision making at work can result in improved performance, lower staff turnover and improved levels of mental and physical well-being. A participatory style of management should also extend to worker involvement in the improvement of safety in the workplace; this could help to overcome apathy among blue-collar workers, which is acknowledged as a significant factor in the cause of accidents (Robens 1972; Sutherland and Cooper 1986).
Early work on the relationship between managerial style and stress was carried out by Lewin (for example, Lewin, Lippitt and White 1939), who documented the stressful and unproductive effects of authoritarian management styles. More recently, Karasek’s (1979) work highlights the importance of managers providing workers with greater control at work, or adopting a more participative management style. In a six-year prospective study he demonstrated that job control (i.e., the freedom to use one’s intellectual discretion) and work schedule freedom were significant predictors of risk of coronary heart disease. Restriction of opportunities for participation and autonomy results in increased depression, exhaustion, illness rates and pill consumption. Feelings of being unable to make changes concerning a job and lack of consultation are commonly reported stressors among blue-collar workers in the steel industry (Kelly and Cooper 1981), oil and gas workers on rigs and platforms in the North Sea (Sutherland and Cooper 1986) and many other blue-collar workers (Cooper and Smith 1985). On the other hand, as Gowler and Legge (1975) indicate, a participatory management style can create its own potentially stressful situations: for example, a mismatch of formal and actual power, resentment of the erosion of formal power, conflicting pressures both to be participative and to meet high production standards, and subordinates’ refusal to participate.
Although there has been a substantial research focus on the effects of authoritarian versus participatory management styles on employee performance and health, there have also been other, more idiosyncratic approaches to managerial style (Jennings, Cox and Cooper 1994). For example, Levinson (1978) focused on the impact of the “abrasive” manager. Abrasive managers are usually achievement-oriented, hard-driving and intelligent (similar to the type A personality), but function less well at the emotional level. As Quick and Quick (1984) point out, the need for perfection, the preoccupation with self and the condescending, critical style of the abrasive manager induce feelings of inadequacy among subordinates. As Levinson suggests, the abrasive personality is difficult and stressful to deal with as a peer; as a superior, the consequences are potentially very damaging to interpersonal relationships and highly stressful for subordinates in the organization.
In addition, there are theories and research which suggest that the effect of managerial style and personality on employee health and safety can only be understood in the context of the nature of the task and the power of the manager or leader. For example, Fiedler’s (1967) contingency theory suggests that there are eight main group situations based upon combinations of three dichotomies: (a) the warmth of the relations between the leader and follower; (b) the degree of structure imposed by the task; and (c) the power of the leader. The eight combinations can be arranged in a continuum with, at one end (octant one), a leader who has good relations with members, facing a highly structured task and possessing strong power; and, at the other end (octant eight), a leader who has poor relations with members, facing a loosely structured task and having low power. In terms of stress, it could be argued that the octants form a continuum from low stress to high stress. Fiedler also examined two types of leader: the leader who would value negatively most of the characteristics of the member he liked least (the low LPC, or “least preferred co-worker”, leader) and the leader who would see many positive qualities even in the members whom he disliked (the high LPC leader). Fiedler made specific predictions about the performance of the leader. He suggested that the low LPC leader (who had difficulty in seeing merits in subordinates he disliked) would be most effective in octants one and eight, where there would be very low and very high levels of stress, respectively. On the other hand, a high LPC leader (who is able to see merits even in those he disliked) would be more effective in the middle octants, where moderate stress levels could be expected. In general, subsequent research (for example, Strube and Garcia 1981) has supported Fiedler’s ideas.
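The eight octants follow mechanically from the three dichotomies (2 × 2 × 2 combinations). The brief sketch below enumerates them in continuum order; the labels are paraphrases of the dichotomies described above, not Fiedler's own terminology.

```python
from itertools import product

# Each dichotomy is ordered from its favourable pole to its
# unfavourable pole, so octant 1 combines good relations, a
# structured task and strong power, while octant 8 combines the
# three unfavourable poles -- matching the continuum in the text.
relations = ["good relations", "poor relations"]
structure = ["structured task", "unstructured task"]
power = ["strong power", "weak power"]

octants = list(product(relations, structure, power))
for i, combo in enumerate(octants, start=1):
    print(f"octant {i}: " + ", ".join(combo))
```

Under Fiedler's predictions, the low LPC leader would be most effective at the two extremes of this enumeration (octants one and eight), and the high LPC leader in the middle octants.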
Additional leadership theories suggest that task-oriented managers or leaders create stress. Seltzer, Numerof and Bass (1989) found that intellectually stimulating leaders increased perceived stress and “burnout” among their subordinates. Misumi (1985) found that production-oriented leaders generated physiological symptoms of stress. Bass (1992) found that in laboratory experiments, production-oriented leadership caused higher levels of anxiety and hostility. On the other hand, transformational and charismatic leadership theories (Burns 1978) focus upon the effect which such leaders have upon their subordinates, who are generally more self-assured and perceive more meaning in their work. These types of leader or manager have been found to reduce the stress levels of their subordinates.
On balance, therefore, managers who tend to demonstrate “considerate” behaviour, to have a participative management style, to be less production- or task-oriented and to provide subordinates with control over their jobs are likely to reduce the incidence of ill health and accidents at work.
Most of the articles in this chapter deal with aspects of the work environment that are proximal to the individual employee. The focus of this article, however, is to examine the impact of more distal, macrolevel characteristics of organizations as a whole that may affect employees’ health and well-being. That is, are there ways in which organizations structure their internal environments that promote health among the employees of that organization or, conversely, place employees at greater risk of experiencing stress? Most theoretical models of occupational or job stress incorporate organizational structural variables such as organizational size, lack of participation in decision making, and formalization (Beehr and Newman 1978; Kahn and Byosiere 1992).
Organizational structure refers to the formal distribution of work roles and functions within an organization, coordinating its various functions or subsystems so that the organization’s goals are attained efficiently (Porras and Robertson 1992). As such, structure represents a coordinated set of subsystems that facilitate the accomplishment of the organization’s goals and mission, and it defines the division of labour, the authority relationships, the formal lines of communication, the roles of each organizational subsystem and the interrelationships among these subsystems. Organizational structure can therefore be viewed as a system of formal mechanisms that enhance the understandability of events, the predictability of events and control over events within the organization, which Sutton and Kahn (1987) proposed as the three work-relevant antidotes against the stress-strain effect in organizational life.
One of the earliest organizational characteristics examined as a potential risk factor was organizational size. Contrary to the literature on risk of exposure to hazardous agents in the work environment, which suggests that larger organizations or plants are safer, being less hazardous and better equipped to handle potential hazards (Emmett 1991), larger organizations originally were hypothesized to put employees at greater risk of occupational stress. It was proposed that larger organizations tend to adopt a bureaucratic organizational structure to coordinate the increased complexity. This bureaucratic structure would be characterized by a division of labour based on functional specialization, a well-defined hierarchy of authority, a system of rules covering the rights and duties of job incumbents, impersonal treatment of workers and a system of procedures for dealing with work situations (Bennis 1969). On the surface, it would appear that many of these dimensions of bureaucracy would actually improve or maintain the predictability and understandability of events in the work environment and thus serve to reduce stress. However, it also appears that these dimensions can reduce employees’ control over events in the work environment through a rigid hierarchy of authority.
Given these characteristics of bureaucratic structure, it is not surprising that organizational size, per se, has received no consistent support as a macro-organization risk factor (Kahn and Byosiere 1992). Payne and Pugh’s (1976) review, however, provides some evidence that organizational size indirectly increases the risk of stress. They report that larger organizations suffered a reduction in the amount of communication, an increase in the amount of job and task specifications and a decrease in coordination. These effects could lead to less understanding and predictability of events in the work environment as well as a decrease in control over work events, thus increasing experienced stress (Tetrick and LaRocco 1987).
These findings on organizational size have led to the supposition that the two aspects of organizational structure that pose the most risk for employees are formalization and centralization. Formalization refers to the written procedures and rules governing employees’ activities; centralization refers to the extent to which decision-making power in the organization is narrowly distributed to higher levels. Pines (1982) pointed out that it is not formalization within a bureaucracy that results in experienced stress or burnout but the unnecessary red tape, paperwork and communication problems that can result from formalization. Rules and regulations can be vague, creating ambiguity or contradiction and resulting in conflict or a lack of understanding concerning the appropriate actions to be taken in specific situations. If the rules and regulations are too detailed, employees may feel frustrated in their ability to achieve their goals, especially in customer- or client-oriented organizations. Inadequate communication can leave employees feeling isolated and alienated, given the resulting lack of predictability and understanding of events in the work environment.
While these aspects of the work environment appear to be accepted as potential risk factors, the empirical literature on formalization and centralization is far from consistent. The lack of consistent evidence may stem from at least two sources. First, in many of the studies, there is an assumption of a single organizational structure having a consistent level of formalization and centralization throughout the entire organization. Hall (1969) concluded that organizations can be meaningfully studied as totalities; however, he demonstrated that the degree of formalization as well as decision-making authority can differ within organizational units. Therefore, if one is looking at an individual-level phenomenon such as occupational stress, it may be more meaningful to look at the structure of smaller organizational units than that of the whole organization. Secondly, there is some evidence suggesting that there are individual differences in response to structural variables. For example, Marino and White (1985) found that formalization was positively related to job stress among individuals with an internal locus of control and negatively related to stress among individuals who generally believe that they have little control over their environments. Lack of participation, on the other hand, was not moderated by locus of control and resulted in increased levels of job stress. There also appear to be some cultural differences affecting individual responses to structural variables, which would be important for multinational organizations having to operate across national boundaries (Peterson et al. 1995). These cultural differences also may explain the difficulty in adopting organizational structures and procedures from other nations.
Despite the rather limited empirical evidence implicating structural variables as psychosocial risk factors, it has been recommended that organizations change their structures to be flatter, with fewer levels of hierarchy and fewer communication channels; more decentralized, with more decision-making authority at lower levels in the organization; and more integrated, with less job specialization (Newman and Beehr 1979). These recommendations are consistent with those of organizational theorists who have suggested that the traditional bureaucratic structure may not be the most efficient or healthiest form of organizational structure (Bennis 1969). This may be especially true in light of the technological advances in production and communication that characterize the postindustrial workplace (Hirschhorn 1991).
The past two decades have seen considerable interest in the redesign of organizations to deal with external environmental threats resulting from increased globalization and international competition in North America and Western Europe (Whitaker 1991). Straw, Sandelands and Dutton (1988) proposed that organizations react to environmental threats by restricting information and constricting control. This can be expected to reduce the predictability, understandability and control of work events, thereby increasing the stress experienced by the employees of the organization. Therefore, structural changes that prevent these threat-rigidity effects would appear to be beneficial to both the organization’s and employees’ health and well-being.
The use of a matrix organizational structure is one approach for organizations to structure their internal environments in response to greater environmental instability. Baber (1983) describes the ideal type of matrix organization as one in which there are two or more intersecting lines of authority, organizational goals are achieved through the use of task-oriented work groups which are cross-functional and temporary, and functional departments continue to exist as mechanisms for routine personnel functions and professional development. Therefore, the matrix organization provides the organization with the needed flexibility to be responsive to environmental instability if the personnel have sufficient flexibility gained from the diversification of their skills and an ability to learn quickly.
While empirical research has yet to establish the effects of this organizational structure, several authors have suggested that the matrix organization may increase the stress experienced by employees. For example, Quick and Quick (1984) point out that the multiple lines of authority (task and functional supervisors) found in matrix organizations increase the potential for role conflict. Also, Hirschhorn (1991) suggests that with postindustrial work organizations, workers frequently face new challenges requiring them to take a learning role. This results in employees having to acknowledge their own temporary incompetencies and loss of control which can lead to increased stress. Therefore, it appears that new organizational structures such as the matrix organization also have potential risk factors associated with them.
Attempts to change or redesign organizations, regardless of the particular structure that an organization chooses to adopt, can have stress-inducing properties by disrupting security and stability, generating uncertainty for people’s position, role and status, and exposing conflict which must be confronted and resolved (Golembiewski 1982). These stress-inducing properties can be offset, however, by the stress-reducing properties of organizational development which incorporate greater empowerment and decision making across all levels in the organization, enhanced openness in communication, collaboration and training in team building and conflict resolution (Golembiewski 1982; Porras and Robertson 1992).
While the literature suggests that there are occupational risk factors associated with various organizational structures, the impact of these macrolevel aspects of organizations appears to be indirect. Organizational structure can provide a framework to enhance the predictability, understandability and control of events in the work environment; however, the effect of structure on employees’ health and well-being is mediated by more proximal work-environment characteristics such as role characteristics and interpersonal relations. Structuring organizations for healthy employees as well as healthy organizations requires organizational flexibility, worker flexibility and attention to the sociotechnical systems that coordinate the technological demands and the social structure within the organization.
The organizational context in which people work is characterized by numerous features (e.g., leadership, structure, rewards, communication) subsumed under the general concepts of organizational climate and culture. Climate refers to perceptions of organizational practices reported by people who work there (Rousseau 1988). Studies of climate include many of the most central concepts in organizational research. Common features of climate include communication (as describable, say, by openness), conflict (constructive or dysfunctional), leadership (as it involves support or focus) and reward emphasis (i.e., whether an organization is characterized by positive versus negative feedback, or reward- or punishment-orientation). When studied together, we observe that organizational features are highly interrelated (e.g., leadership and rewards). Climate characterizes practices at several levels in organizations (e.g., work unit climate and organizational climate). Studies of climate vary in the activities they focus upon, for example, climates for safety or climates for service. Climate is essentially a description of the work setting by those directly involved with it.
The relationship of climate to employee well-being (e.g., satisfaction, job stress and strain) has been widely studied. Since climate measures subsume the major organizational characteristics workers experience, virtually any study of employee perceptions of their work setting can be thought of as a climate study. Studies link climate features (particularly leadership, communication openness, participative management and conflict resolution) with employee satisfaction and (inversely) stress levels (Schneider 1985). Stressful organizational climates are characterized by limited participation in decisions, use of punishment and negative feedback (rather than rewards and positive feedback), conflict avoidance or confrontation (rather than problem solving), and nonsupportive group and leader relations. Socially supportive climates benefit employee mental health, with lower rates of anxiety and depression in supportive settings (Repetti 1987). When collective climates exist (where members who interact with each other share common perceptions of the organization) research observes that shared perceptions of undesirable organizational features are linked with low morale and instances of psychogenic illness (Colligan, Pennebaker and Murphy 1982). When climate research adopts a specific focus, as in the study of climate for safety in an organization, evidence is provided that lack of openness in communication regarding safety issues, few rewards for reporting occupational hazards, and other negative climate features increase the incidence of work-related accidents and injury (Zohar 1980).
Since climates exist at many levels in organizations and can encompass a variety of practices, assessment of employee risk factors needs to systematically span the relationships (whether in the work unit, the department or the entire organization) and activities (e.g., safety, communication or rewards) in which employees are involved. Climate-based risk factors can differ from one part of the organization to another.
Culture constitutes the values, norms and ways of behaving which organization members share. Researchers identify five basic elements of culture in organizations: fundamental assumptions (unconscious beliefs that shape members’ interpretations, e.g., views regarding time, environmental hostility or stability), values (preferences for certain outcomes over others, e.g., service or profit), behavioural norms (beliefs regarding appropriate and inappropriate behaviours, e.g., dress codes and teamwork), patterns of behaviours (observable recurrent practices, e.g., structured performance feedback and upward referral of decisions) and artefacts (symbols and objects used to express cultural messages, e.g., mission statements and logos). Cultural elements which are more subjective (i.e., assumptions, values and norms) reflect the way members think about and interpret their work setting. These subjective features shape the meaning that patterns of behaviours and artefacts take on within the organization. Culture, like climate, can exist at many levels.
Cultures can be strong (widely shared by members), weak (not widely shared), or in transition (characterized by gradual replacement of one culture by another).
In contrast with climate, culture is less frequently studied as a contributing factor to employee well-being or occupational risk. The absence of such research is due both to the relatively recent emergence of culture as a concept in organizational studies and to ideological debates regarding the nature of culture, its measurement (quantitative versus qualitative), and the appropriateness of the concept for cross-sectional study (Rousseau 1990). According to quantitative culture research focusing on behavioural norms and values, team-oriented norms are associated with higher member satisfaction and lower strain than are control- or bureaucratically-oriented norms (Rousseau 1989). Furthermore, the extent to which the worker’s values are consistent with those of the organization affects stress and satisfaction (O’Reilly and Chatman 1991). Weak cultures and cultures fragmented by role conflict and member disagreement are found to provoke stress reactions and crises in professional identities (Meyerson 1990). The fragmentation or breakdown of organizational cultures due to economic or political upheavals affects the well-being of members psychologically and physically, particularly in the wake of downsizings, plant closings and other effects of concurrent organizational restructurings (Hirsch 1987). The appropriateness of particular cultural forms (e.g., hierarchic or militaristic) for modern society has been challenged by several culture studies (e.g., Hirschhorn 1984; Rousseau 1989) concerned with the stress and health-related outcomes of operators (e.g., nuclear power technicians and air traffic controllers) and subsequent risks for the general public.
Assessing risk factors in the light of information about organizational culture requires first attention to the extent to which organization members share or differ in basic beliefs, values and norms. Differences in function, location and education create subcultures within organizations and mean that culture-based risk factors can vary within the same organization. Since cultures tend to be stable and resistant to change, organizational history can aid assessment of risk factors both in terms of stable and ongoing cultural features as well as recent changes that can create stressors associated with turbulence (Hirsch 1987).
Climate and culture overlap to a certain extent, with perceptions of culture’s patterns of behaviour being a large part of what climate research addresses. However, organization members may describe organizational features (climate) in the same way but interpret them differently due to cultural and subcultural influences (Rosen, Greenlagh and Anderson 1981). For example, structured leadership and limited participation in decision making may be viewed as negative and controlling from one perspective or as positive and legitimate from another. Social influence reflecting the organization’s culture shapes the interpretation members make of organizational features and activities. Thus, it would seem appropriate to assess both climate and culture simultaneously in investigating the impact of the organization on the well-being of members.
There are many forms of compensation used in business and government organizations throughout the world to pay workers for their physical and mental contribution. Compensation provides money for human effort and is necessary for individual and family existence in most societies. Trading work for money is a long-established practice.
The health-stressor aspect of compensation is most closely linked with compensation plans that offer incentives for extra or sustained human effort. Job stress can certainly exist in any work setting where compensation is not based on incentives. However, physical and mental performance levels that are well above normal and that could lead to physical injury or injurious mental stress are more likely to be found in environments with certain kinds of incentive compensation.
Performance Measures and Stress
Performance measurements in one form or another are used by most organizations, and are essential for incentive programmes. Performance measures (standards) can be established for output, quality, throughput time, or any other productivity measure. In 1883, Lord Kelvin had this to say about measurement: “I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science, whatever the matter may be.”
Performance measures should be carefully linked to the fundamental goals of the organization. Inappropriate performance measures have often had little or no effect on goal attainment. Some common criticisms of performance measures include unclear purpose, vagueness, lack of connection (or even opposition) to the business strategy, unfairness or inconsistency, and their liability to be used chiefly for “punishing” people. But measurements can serve as indispensable benchmarks: remember the saying, “If you don’t know where you are, you can’t get to where you want to be”. The bottom line is that workers at all levels in an organization demonstrate more of the behaviours for which they are measured and rewarded. What gets measured and rewarded gets done.
Performance measures must be fair and consistent to minimize stress among the workforce. There are several methods utilised to establish performance measures ranging from judgement estimation (guessing) to engineered work measurement techniques. Under the work measurement approach to setting performance measures, 100% performance is defined as a “fair day’s work pace”. This is the work effort and skill at which an average well-trained employee can work without undue fatigue while producing an acceptable quality of work over the course of a work shift. A 100% performance is not maximum performance; it is the normal or average effort and skill for a group of workers. By way of comparison, the 70% benchmark is generally regarded as the minimum tolerable level of performance, while the 120% benchmark is the incentive effort and skill that the average worker should be able to attain when provided with a bonus of at least 20% above the base rate of pay. While a number of incentive plans have been established using the 120% benchmark, this value varies among plans. The general design criteria recommended for wage incentive plans provide workers the opportunity to earn approximately 20 to 35% above base rate if they are normally skilled and execute high effort continuously.
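The arithmetic behind these benchmarks can be sketched in a few lines. The following is a minimal illustration only, assuming a simple plan in which the base rate is guaranteed below the 100% benchmark and pay rises in proportion to measured performance above it; the function name and pay figures are hypothetical, not taken from any specific plan:

```python
def incentive_pay(base_rate, performance):
    """Hourly pay under a simple proportional incentive plan (illustrative).

    base_rate:   guaranteed hourly wage.
    performance: measured output as a fraction of the 100% benchmark
                 (1.0 = "fair day's work pace", 0.7 = minimum tolerable,
                 1.2 = the typical incentive benchmark).
    Below the 100% benchmark the worker still receives the base rate;
    above it, pay rises in proportion to output, so 120% performance
    yields a bonus of 20% above the base rate.
    """
    if performance <= 1.0:
        return base_rate           # base rate is guaranteed
    return base_rate * performance # e.g., 1.2 -> 20% above base

# A worker at the 120% incentive benchmark earns 20% above a 10.00 base:
assert incentive_pay(10.0, 1.2) == 12.0
# A worker at the 70% minimum tolerable level still receives the base rate:
assert incentive_pay(10.0, 0.7) == 10.0
```

Under such a plan, sustained performance in the 120 to 135% range corresponds to the 20 to 35% earnings opportunity above base rate that the design criteria recommend.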
Despite the inherent appeal of a “fair day’s work for a fair day’s pay”, some possible stress problems exist with a work measurement approach to setting performance measures. Performance measures are fixed in reference to the normal or average performance of a given work group (i.e., work standards are based on group as opposed to individual performance). Thus, by definition, a large segment of those working at a task will fall below average (i.e., below the 100% performance benchmark), which can generate a demand–resource imbalance that exceeds physical or mental stress limits. Workers who have difficulty meeting performance measures are likely to experience stress through work overload, negative supervisor feedback and the threat of job loss if they consistently perform below the 100% performance benchmark.
In one form or another, incentives have been used for many years. For example, in the New Testament (II Timothy 2:6) Saint Paul declares, “It is the hard-working farmer who ought to have the first share of the crops”. Today, most organizations are striving to improve productivity and quality in order to maintain or improve their position in the business world. Most often workers will not give extra or sustained effort without some form of incentive. Properly designed and implemented financial incentive programmes can help. Before any incentive programme is implemented, some measure of performance must be established. All incentive programmes can be categorized as follows: direct financial, indirect financial, and intangible (non-financial).
Direct financial programmes may be applied to individuals or groups of workers. For individuals, each employee’s incentive is governed by his or her performance relative to a standard for a given time period. Group plans are applicable to two or more individuals working as a team on tasks that are usually interdependent. Each employee’s group incentive is usually based on his or her base rate and the group performance during the incentive period.
The motivation to sustain higher output levels is usually greater for individual incentives because of the opportunity for the high-performing worker to earn a greater incentive. However, as organizations move toward participative management and empowered work groups and teams, group incentives usually provide the best overall results. The group effort makes overall improvements to the total system, as compared with optimizing individual outputs. Gainsharing (a group incentive system in which teams work for continuous improvement and receive a share, usually 50%, of all productivity gains above a benchmark standard) is one form of direct group incentive programme that is well suited to the continuous improvement organization.
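The gainsharing split described above reduces to simple arithmetic. The sketch below is a hedged illustration, assuming the benchmark and output are expressed in the same monetary units and a 50% employee share; the function name and figures are hypothetical:

```python
def gainsharing_bonus_pool(output_value, benchmark_value, employee_share=0.5):
    """Value of the team bonus pool under a simple gainsharing plan.

    Gains are measured as the value of output above the benchmark
    standard; the team receives employee_share (typically 50%) of those
    gains, and nothing when output is at or below the benchmark.
    """
    gain = max(0.0, output_value - benchmark_value)
    return gain * employee_share

# Output worth 120,000 against a 100,000 benchmark yields a 10,000
# team bonus pool at the typical 50% share:
assert gainsharing_bonus_pool(120_000, 100_000) == 10_000
# At or below the benchmark, no bonus pool is generated:
assert gainsharing_bonus_pool(90_000, 100_000) == 0.0
```

Because the pool is funded only by gains above the benchmark, the plan rewards system-level improvement rather than pressing any individual worker to exceed safe effort levels.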
Indirect financial programmes are usually less effective than direct financial programmes because direct financial incentives are stronger motivators. The principal advantage of indirect plans is that they require less detailed and accurate performance measures. Organizational policies that favourably affect morale, result in increased productivity and provide some financial benefit to employees are considered to be indirect incentive programmes. It is important to note that for indirect financial programmes no exact relationship exists between employee output and financial incentives. Examples of indirect incentive programmes include relatively high base rates, generous fringe benefits, awards programmes, year-end bonuses and profit-sharing.
Intangible incentive programmes include rewards that do not have any (or very little) financial impact on employees. These programmes, however, when viewed as desirable by the employees, can improve productivity. Examples of intangible incentive programmes include job enrichment (adding challenge and intrinsic satisfaction to the specific task assignments), job enlargement (adding tasks to complete a “whole” piece or unit of work output), nonfinancial suggestion plans, employee involvement groups and time off without any reduction in pay.
Summary and Conclusions
Incentives in some form are an integral part of many compensation plans. In general, incentive plans should be carefully evaluated to make sure that workers are not exceeding safe ergonomic or mental stress limits. This is particularly important for individual direct financial plans. It is usually a lesser problem in group direct, indirect or intangible plans.
Incentives are desirable because they enhance productivity and provide workers an opportunity to earn extra income or other benefits. Gainsharing is today one of the best forms of incentive compensation for any work group or team organization that wishes to offer bonus earnings and to achieve improvement in the workplace without risking the imposition of negative health-stressors by the incentive plan itself.
The nations of the world vary dramatically in both their use and treatment of employees in their contingent workforce. Contingent workers include temporary workers hired through temporary help agencies, temporary workers hired directly, voluntary and “non-voluntary” part-timers (the non-voluntary would prefer full-time work) and the self-employed. International comparisons are difficult due to differences in the definitions of each of these categories of worker.
Overman (1993) stated that the temporary help industry in Western Europe is about 50% larger than it is in the United States, where about 1% of the workforce is made up of temporary workers. Temporary workers are almost non-existent in Italy and Spain.
While the subgroups of contingent workers vary considerably, the majority of part-time workers in all European countries are women at low salary levels. In the United States, contingent workers also tend to be young, female and members of minority groups. Countries vary considerably in the degree to which they protect contingent workers with laws and regulations covering their working conditions, health and other benefits. The United Kingdom, the United States, Korea, Hong Kong, Mexico and Chile are the least regulated, with France, Germany, Argentina and Japan having fairly rigid requirements (Overman 1993). A new emphasis on providing contingent workers with greater benefits through increased legal and regulatory requirements will help to alleviate occupational stress among those workers. However, those increased regulatory requirements may result in employers’ hiring fewer workers overall due to increased benefit costs.
An alternative to contingent work is “job sharing,” which can take three forms: two employees share the responsibilities for one full-time job; two employees share one full-time position and divide the responsibilities, usually by project or client group; or two employees perform completely separate and unrelated tasks but are matched for purposes of headcount (Mattis 1990). Research has indicated that most job sharing, like contingent work, is done by women. However, unlike contingent work, job sharing positions are often subject to the protection of wage and hour laws and may involve professional and even managerial responsibilities. Within the European Community, job sharing is best known in Britain, where it was first introduced in the public sector (Lewis, Izraeli and Hootsmans 1992). The United States Federal Government, in the early 1990s, implemented a nationwide job sharing programme for its employees; in contrast, many state governments have been establishing job sharing networks since 1983 (Lee 1983). Job sharing is viewed as one way to balance work and family responsibilities.
Flexiplace and Home Work
Many alternative terms are used to denote flexiplace and home work: telecommuting, the alternative worksite, the electronic cottage, location-independent work, the remote workplace and work-at-home. For our purposes, this category of work includes “work performed at one or more ‘predetermined locations’ such as the home or a satellite work space away from the conventional office where at least some of the communications maintained with the employer occur through the use of telecommunications equipment such as computers, telephones and fax machines” (Pitt-Catsouphes and Marchetta 1991).
LINK Resources, Inc., a private-sector firm monitoring worldwide telecommuting activity, has estimated that there were 7.6 million telecommuters in 1993 in the United States out of the more than 41.1 million work-at-home households. Of these telecommuters, 81% worked part-time for employers with fewer than 100 employees in a wide array of industries across many geographical locations. Fifty-three per cent were male, in contrast to figures showing a majority of females in contingent and job-sharing work. Research with fifty US companies also showed that the majority of telecommuters were male, with successful flexible work arrangements including supervisory positions (both line and staff), client-centred work and jobs that included travel (Mattis 1990). In 1992, 1.5 million Canadian households had at least one person who operated a business from home.
Lewis, Izraeli and Hootsmans (1992) reported that, despite earlier predictions, telecommuting has not taken over Europe. They added that it is best established in the United Kingdom and Germany for professional jobs including computer specialists, accountants and insurance agents.
In contrast, some home-based work in both the United States and Europe pays by the piece and involves short deadlines. Typically, while telecommuters tend to be male, homeworkers in low-paid, piece-work jobs with no benefits tend to be female (Hall 1990).
Recent research has concentrated on identifying: (a) the type of person best suited for home work; (b) the type of work best accomplished at home; (c) procedures to ensure successful home work experiences; and (d) reasons for organizational support (Hall 1990; Christensen 1992).
The general approach to social welfare issues and programmes varies throughout the world depending upon the culture and values of the nation studied. Some of the differences in welfare facilities in the United States, Canada and Western Europe are documented by Ferber, O’Farrell and Allen (1991).
Recent proposals for welfare reform in the United States suggest overhauling traditional public assistance in order to make recipients work for their benefits. Cost estimates for welfare reform range from US$15 billion to $20 billion over the next five years, with considerable cost savings projected for the long term. Welfare administration costs in the United States for such programmes as food stamps, Medicaid and Aid to Families with Dependent Children have risen 19% from 1987 to 1991, the same percentage as the increase in the number of beneficiaries.
Canada has instituted a “work sharing” programme as an alternative to layoffs and welfare. The Canada Employment and Immigration Commission (CEIC) programme enables employers to face cutbacks by shortening the work week by one to three days and paying reduced wages accordingly. For the days not worked, the CEIC arranges for the workers to draw normal unemployment insurance benefits, an arrangement that helps to compensate them for the lower wages received from their employer and to relieve the hardships of being laid off. The duration of the programme is 26 weeks, with a 12-week extension. Workers can use work-sharing days for training and the federal Canadian government may reimburse the employer for a major portion of the direct training costs through the “Canadian Jobs Strategy”.
The degree of child-care support is dependent upon the sociological underpinnings of the nation’s culture (Scharlach, Lowe and Schneider 1991). Cultures that:
will devote greater resources to supporting those programmes. Thus, international comparisons are complicated by these four factors and “high quality care” may be dependent on the needs of children and families in specific cultures.
Within the European Community, France provides the most comprehensive child-care programme. The Netherlands and the United Kingdom were late in addressing this issue. Only 3% of British employers provided some form of child care in 1989. Lamb et al. (1992) present nonparental child-care case studies from Sweden, the Netherlands, Italy, the United Kingdom, the United States, Canada, Israel, Japan, the People’s Republic of China, Cameroon, East Africa and Brazil. In the United States, approximately 3,500 private companies of the 17 million firms nationwide offer some type of child-care assistance to their employees. Of those firms, approximately 1,100 offer flexible spending accounts, 1,000 offer information and referral services and fewer than 350 have onsite or near-site child-care centres (Bureau of National Affairs 1991).
In a research study in the United States, 44% of men and 76% of women with children under six missed work in the previous three months for a family-related reason. The researchers estimated that the organizations they studied paid over $4 million in salary and benefits to employees who were absent because of child-care problems (see study by Galinsky and Hughes in Fernandez 1990). A study by the United States General Accounting Office in 1981 showed that American companies lose over $700 million a year because of inadequate parental leave policies.
It will take only 30 years (from the time of this writing, 1994) for the proportion of elderly in Japan to climb from 7% to 14%, while in France it took over 115 years and in Sweden 90 years. Before the end of the century, one out of every four persons in many member States of the Commission of the European Communities will be over 60 years old. Yet, until recently in Japan, there were few institutions for the elderly, and the issue of eldercare has received scant attention in Britain and other European countries (Lewis, Izraeli and Hootsmans 1992). In the United States, approximately five million older Americans require assistance with day-to-day tasks in order to remain in the community, and 30 million are currently age 65 or older. Family members provide more than 80% of the assistance that these elderly people need (Scharlach, Lowe and Schneider 1991).
Research has shown that those employees who have elder-care responsibilities report significantly greater overall job stress than do other employees (Scharlach, Lowe and Schneider 1991). These caretakers often experience emotional stress and physical and financial strain. Fortunately, global corporations have begun to recognize that difficult family situations can result in absenteeism, decreased productivity and lower morale, and they are beginning to provide an array of “cafeteria benefits” to assist their employees. (The name “cafeteria” is intended to suggest that employees may select the benefits that would be most helpful to them from an array of benefits.) Benefits might include flexible work hours, paid “family illness” hours, referral services for family assistance, or a dependent-care salary-reduction account that allows employees to pay for elder care or day care with pre-tax dollars.
The author wishes to acknowledge the assistance of Charles Anderson of the Personnel Resources and Development Center of the United States Office of Personnel Management, Tony Kiers of the C.A.L.L. Canadian Work and Family Service, and Ellen Bankert and Bradley Googins of the Center on Work and Family of Boston University in acquiring and researching many of the references cited in this article.
The process by which outsiders become organizational insiders is known as organizational socialization. While early research on socialization focused on indicators of adjustment such as job satisfaction and performance, recent research has emphasized the links between organizational socialization and work stress.
Socialization as a Moderator of Job Stress
Entering a new organization is an inherently stressful experience. Newcomers encounter a myriad of stressors, including role ambiguity, role conflict, work and home conflicts, politics, time pressure and work overload. These stressors can lead to distress symptoms. Studies in the 1980s, however, suggest that a properly managed socialization process has the potential for moderating the stressor-strain connection.
Two particular themes have emerged in the contemporary research on socialization:
Information acquired by newcomers during socialization helps alleviate the considerable uncertainty in their efforts to master their new tasks, roles and interpersonal relationships. Often, this information is provided via formal orientation-cum-socialization programmes. In the absence of formal programmes, or (where they exist) in addition to them, socialization occurs informally. Recent studies have indicated that newcomers who proactively seek out information adjust more effectively (Morrison 1993). In addition, newcomers who underestimate the stressors in their new job report higher distress symptoms (Nelson and Sutton 1991).
Supervisory support during the socialization process is of special value. Newcomers who receive support from their supervisors report less stress from unmet expectations (Fisher 1985) and fewer psychological symptoms of distress (Nelson and Quick 1991). Supervisory support can help newcomers cope with stressors in at least three ways. First, supervisors may provide instrumental support (such as flexible work hours) that helps alleviate a particular stressor. Secondly, they may provide emotional support that leads a newcomer to feel more efficacy in coping with a stressor. Thirdly, supervisors play an important role in helping newcomers make sense of their new environment (Louis 1980). For example, they can frame situations for newcomers in a way that helps them appraise situations as threatening or nonthreatening.
In summary, socialization efforts that provide necessary information to newcomers and support from supervisors can prevent the stressful experience from becoming distressful.
Evaluating Organizational Socialization
The organizational socialization process is dynamic, interactive and communicative, and it unfolds over time. In this complexity lies the challenge of evaluating socialization efforts. Two broad approaches to measuring socialization have been proposed. One approach consists of the stage models of socialization (Feldman 1976; Nelson 1987). These models portray socialization as a multistage transition process with key variables at each of the stages. Another approach highlights the various socialization tactics that organizations use to help newcomers become insiders (Van Maanen and Schein 1979).
With both approaches, it is contended that there are certain outcomes that mark successful socialization. These outcomes include performance, job satisfaction, organizational commitment, job involvement and intent to remain with the organization. If socialization is a stress moderator, then distress symptoms (specifically, low levels of distress symptoms) should be included as an indicator of successful socialization.
Health Outcomes of Socialization
Because the relationship between socialization and stress has only recently received attention, few studies have included health outcomes. The evidence indicates, however, that the socialization process is linked to distress symptoms. Newcomers who found interactions with their supervisors and other newcomers helpful reported lower levels of psychological distress symptoms such as depression and inability to concentrate (Nelson and Quick 1991). Further, newcomers with more accurate expectations of the stressors in their new jobs reported lower levels of both psychological symptoms (e.g., irritability) and physiological symptoms (e.g., nausea and headaches).
Because socialization is a stressful experience, health outcomes are appropriate variables to study. Studies are needed that focus on a broad range of health outcomes and that combine self-reports of distress symptoms with objective health measures.
Organizational Socialization as Stress Intervention
The contemporary research on organizational socialization suggests that it is a stressful process that, if not managed well, can lead to distress symptoms and other health problems. Organizations can take at least three actions to ease the transition by way of intervening to ensure positive outcomes from socialization.
First, organizations should encourage realistic expectations among newcomers of the stressors inherent in the new job. One way of accomplishing this is to provide a realistic job preview that details the most commonly experienced stressors and effective ways of coping (Wanous 1992). Newcomers who have an accurate view of what they will encounter can preplan coping strategies and will experience less reality shock from those stressors about which they have been forewarned.
Secondly, organizations should make numerous sources of accurate information available to newcomers in the form of booklets, interactive information systems or hotlines (or all of these). The uncertainty of the transition into a new organization can be overwhelming, and multiple sources of informational support can aid newcomers in coping with the uncertainty of their new jobs. In addition, newcomers should be encouraged to seek out information during their socialization experiences.
Thirdly, emotional support should be explicitly planned for in designing socialization programmes. The supervisor is a key player in the provision of such support and may be most helpful by being emotionally and psychologically available to newcomers (Hirshhorn 1990). Other avenues for emotional support include mentoring, activities with more senior and experienced co-workers, and contact with other newcomers.
The career stage approach is one way to look at career development. The way in which a researcher approaches the issue of career stages is frequently based on Levinson’s life stage development model (Levinson 1986). According to this model, people grow through specific stages separated by transition periods. At each stage a new and crucial activity and psychological adjustment may be completed (Ornstein, Cron and Slocum 1989). In this way, defined career stages can be, and usually are, based on chronological age. The age ranges assigned for each stage have varied considerably between empirical studies, but usually the early career stage is considered to range from the ages of 20 to 34 years, the mid-career from 35 to 50 years and the late career from 50 to 65 years.
According to Super’s career development model (Super 1957; Ornstein, Cron and Slocum 1989) the four career stages are based on the qualitatively different psychological task of each stage. They can be based either on age or on organizational, positional or professional tenure. The same people can recycle several times through these stages in their work career. For example, according to the Career Concerns Inventory Adult Form, the actual career stage can be defined at an individual or group level. This instrument assesses an individual’s awareness of and concerns with various tasks of career development (Super, Zelkowitz and Thompson 1981). When tenure measures are used, the first two years are seen as a trial period. The establishment period from two to ten years means career advancement and growth. After ten years comes the maintenance period, which means holding on to the accomplishments achieved. The decline stage implies the development of one’s self-image independently of one’s career.
Because the theoretical bases of the definition of the career stages and the sorts of measure used in practice differ from one study to another, it is apparent that the results concerning the health- and job-relatedness of career development vary, too.
Career Stage as a Moderator of Work-Related Health and Well-Being
Most studies of career stage as a moderator between job characteristics and the health or well-being of employees deal with organizational commitment and its relation to job satisfaction or to behavioural outcomes such as performance, turnover and absenteeism (Cohen 1991). The relationship between job characteristics and strain has also been studied. The moderating effect of career stage means statistically that the average correlation between measures of job characteristics and well-being varies from one career stage to another.
Work commitment usually increases from early career stages to later stages, although among salaried male professionals, job involvement was found to be lowest in the middle stage. In the early career stage, employees had a stronger need to leave the organization and to be relocated (Morrow and McElroy 1987). Among hospital staff, nurses’ measures of well-being were most strongly associated with career and affective-organizational commitment (i.e., emotional attachment to the organization). Continuance commitment (this is a function of perceived number of alternatives and degree of sacrifice) and normative commitment (loyalty to organization) increased with career stage (Reilly and Orsak 1991).
A meta-analysis was carried out of 41 samples dealing with the relationship between organizational commitment and outcomes indicating well-being. The samples were divided into different career stage groups according to two measures of career stage: age and tenure. Age as a career stage indicator significantly affected turnover and turnover intentions, while organizational tenure was related to job performance and absenteeism. Low organizational commitment was related to high turnover, especially in the early career stage, whereas low organizational commitment was related to high absenteeism and low job performance in the late career stage (Cohen 1991).
The relationship between work attitudes, for instance job satisfaction and work behaviour, has been found to be moderated by career stage to a considerable degree (e.g., Stumpf and Rabinowitz 1981). Among employees of public agencies, career stage measured with reference to organizational tenure was found to moderate the relationship between job satisfaction and job performance. Their relation was strongest in the first career stage. This was supported also in a study among sales personnel. Among academic teachers, the relationship between satisfaction and performance was found to be negative during the first two years of tenure.
Most studies of career stage have dealt with men. Even in many early studies from the 1970s in which the sex of the respondents was not reported, it is apparent that most of the subjects were men. Ornstein and Lynn (1990) tested how the career stage models of Levinson and Super described differences in the career attitudes and intentions of professional women. The results suggest that career stages based on age were related to organizational commitment, intention to leave the organization and a desire for promotion. These findings were, in general, similar to those found among men (Ornstein, Cron and Slocum 1989). However, no support was found for the predictive value of career stages defined on a psychological basis.
Studies of stress have generally either ignored age, and consequently career stage, in their study designs or treated it as a confounding factor and controlled for its effects. Hurrell, McLaney and Murphy (1990) contrasted the effects of stress in mid-career with its effects in early and late career, using age as the basis for their grouping of US postal workers. Perceived ill health was not related to job stressors in mid-career, but work pressure and underutilization of skills predicted it in early and late career. Work pressure was also related to somatic complaints in the early and late career groups. Underutilization of abilities was more strongly related to job satisfaction and somatic complaints among mid-career workers. Social support had more influence on mental health than on physical health, and this effect was more pronounced in mid-career than in the early or late career stages. Because the data came from a cross-sectional study, the authors note that a cohort explanation of the results is also possible (Hurrell, McLaney and Murphy 1990).
When adult male and female workers were grouped according to age, the older workers more frequently reported overload and responsibility as stressors at work, whereas the younger workers cited insufficiency (e.g., work that is not challenging), boundary-spanning roles and physical environment stressors (Osipow, Doty and Spokane 1985). The older workers reported fewer strain symptoms of all kinds. One reason may be that older people used more rational-cognitive, self-care and recreational coping skills, evidently learned during their careers; alternatively, the differences may reflect symptom-based self-selection, as people leave jobs that stress them excessively over time.
Among Finnish and US male managers, the relationship between job demands and control, on the one hand, and psychosomatic symptoms, on the other, was found to vary according to career stage (defined on the basis of age) (Hurrell and Lindström 1992; Lindström and Hurrell 1992). Among US managers, job demands and control had a significant effect on symptom reporting in the middle career stage, but not in the early and late stages, while among Finnish managers long weekly working hours and low job control increased stress symptoms in the early career stage, but not in the later stages. Differences between the two groups might be due to differences in the two samples studied. The Finnish managers, being in the construction trades, had high workloads already in their early career stage, whereas the US managers, who were public-sector workers, had their highest workloads in the middle career stage.
To sum up the results of research on the moderating effects of career stage: early career stage means low organizational commitment related to turnover as well as job stressors related to perceived ill health and somatic complaints. In mid-career the results are conflicting: sometimes job satisfaction and performance are positively related, sometimes negatively. In mid-career, job demands and low control are related to frequent symptom reporting among some occupational groups. In late career, organizational commitment is correlated to low absenteeism and good performance. Findings on relations between job stressors and strain are inconsistent for the late career stage. There are some indications that more effective coping decreases work-related strain symptoms in late career.
Practical interventions to help people to cope better with the specific demands of each career stage would be beneficial. Vocational counselling at the entry stage of one’s work life would be especially useful. Interventions for minimizing the negative impact of career plateauing are suggested because this can be either a time of frustration or an opportunity to face new challenges or to reappraise one’s life goals (Weiner, Remer and Remer 1992). Results of age-based health examinations in occupational health services have shown that job-related problems lowering working ability gradually increase and qualitatively change with age. In early and mid-career they are related to coping with work overload, but in later middle and late career they are gradually accompanied by declining psychological condition and physical health, facts that indicate the importance of early institutional intervention at an individual level (Lindström, Kaihilahti and Torstila 1988). Both in research and in practical interventions, mobility and turnover pattern should be taken into account, as well as the role played by one’s occupation (and situation within that occupation) in one’s career development.
The Type A behaviour pattern is an observable set of behaviours or style of living characterized by extremes of hostility, competitiveness, hurry, impatience, restlessness, aggressiveness (sometimes stringently suppressed), explosiveness of speech, and a high state of alertness accompanied by muscular tension. People with strong Type A behaviour struggle against the pressure of time and the challenge of responsibility (Jenkins 1979). Type A is neither an external stressor nor a response of strain or discomfort. It is more like a style of coping. At the other end of this bipolar continuum, Type B persons are more relaxed, cooperative, steady in their pace of activity, and appear more satisfied with their daily lives and the people around them.
The Type A/B behavioural continuum was first conceptualized and labelled in 1959 by the cardiologists Dr. Meyer Friedman and Dr. Ray H. Rosenman. They identified Type A as being typical of their younger male patients with ischaemic heart disease (IHD).
The intensity and frequency of Type A behaviour increases as societies become more industrialized, competitive and hurried. Type A behaviour is more frequent in urban than rural areas, in managerial and sales occupations than among technical workers, skilled craftsmen or artists, and in businesswomen than in housewives.
Areas of Research
Type A behaviour has been studied as part of the fields of personality and social psychology, organizational and industrial psychology, psychophysiology, cardiovascular disease and occupational health.
Research relating to personality and social psychology has yielded considerable understanding of the Type A pattern as an important psychological construct. Persons scoring high on Type A measures behave in ways predicted by Type A theory. They are more impatient and aggressive in social situations and spend more time working and less in leisure. They react more strongly to frustration.
Research that incorporates the Type A concept into organizational and industrial psychology includes comparisons of different occupations as well as employees’ responses to job stress. Under conditions of equivalent external stress, Type A employees tend to report more physical and emotional strain than Type B employees. They also tend to move into high-demand jobs (Type A behavior 1990).
Pronounced increases in blood pressure, serum cholesterol and catecholamines in Type A persons were first reported by Rosenman et al. (1975) and have since been confirmed by many other investigators. The tenor of these findings is that Type A and Type B persons are usually quite similar in chronic or baseline levels of these physiological variables, but that environmental demands, challenges or frustrations create far larger reactions in Type A than in Type B persons. The literature has been somewhat inconsistent, partly because the same challenge may not be equally activating physiologically for men and women, or for people of different backgrounds. A preponderance of positive findings continues to be published (Contrada and Krantz 1988).
The history of Type A/B behaviour as a risk factor for ischaemic heart disease has followed a common historical trajectory: a trickle then a flow of positive findings, a trickle then a flow of negative findings, and now intense controversy (Review Panel on Coronary-Prone Behavior and Coronary Heart Disease 1981). Broad-scope literature searches now reveal a continuing mixture of positive associations and non-associations between Type A behaviour and IHD. The general trend of the findings is that Type A behaviour is positively associated with a risk of IHD under some conditions but not others.
The Type A pattern is not “dead” as an IHD risk factor, but in the future must be studied with the expectation that it may convey greater IHD risk only in certain sub-populations and in selected social settings. Some studies suggest that hostility may be the most damaging component of Type A.
A newer development has been the study of Type A behaviour as a risk factor for injuries and for mild and moderate illnesses, both in occupational and student groups. It is rational to hypothesize that people who are hurried and aggressive will incur the most accidents at work, in sports and on the highway, and this has been found to be empirically true (Elander, West and French 1993). It is less clear theoretically why mild acute illnesses across a full array of physiologic systems should occur more often in Type A than Type B persons, but this has been found in a few studies (e.g., Suls and Sanders 1988). At least in some groups, Type A was found to be associated with a higher risk of future mild episodes of emotional distress. Future research needs to address both the validity of these associations and the physical and psychological reasons behind them.
Methods of Measurement
The Type A/B behaviour pattern was first measured in research settings by the Structured Interview (SI). The SI is a carefully administered clinical interview in which about 25 questions are asked at different rates of speed and with different degrees of challenge or intrusiveness. Special training is necessary for an interviewer to be certified as competent both to administer and interpret the SI. Typically, interviews are tape-recorded to permit subsequent study by other judges to ensure reliability. In comparative studies among several measures of Type A behaviour, the SI seems to have greater validity for cardiovascular and psychophysiological studies than is found for self-report questionnaires, but little is known about its comparative validity in psychological and occupational studies because the SI is used much less frequently in these settings.
The most common self-report instrument is the Jenkins Activity Survey (JAS), a self-report, computer-scored, multiple-choice questionnaire. It has been validated against the SI and against the criteria of current and future IHD, and has accumulated construct validity. Form C, a 52-item version of the JAS published in 1979 by the Psychological Corporation, is the most widely used. It has been translated into most of the languages of Europe and Asia. The JAS contains four scales: a general Type A scale, and factor-analytically derived scales for speed and impatience, job involvement and hard-driving competitiveness. A short form of the Type A scale (13 items) has been used in epidemiological studies by the World Health Organization.
The Framingham Type A Scale (FTAS) is a ten-item questionnaire shown to be a valid predictor of future IHD for both men and women in the Framingham Heart Study (USA). It has also been used internationally both in cardiovascular and psychological research. Factor analysis divides the FTAS into two factors, one of which correlates with other measures of Type A behaviour while the second correlates with measures of neuroticism and irritability.
The Bortner Rating Scale (BRS) is composed of fourteen items, each in the form of an analogue scale. Subsequent studies have performed item-analysis on the BRS and have achieved greater internal consistency or greater predictability by shortening the scale to 7 or 12 items. The BRS has been widely used in international translations. Additional Type A scales have been developed internationally, but these have mostly been used only for specific nationalities in whose language they were written.
Systematic efforts have been under way for at least two decades to help persons with intense Type A behaviour patterns to change them to more of a Type B style. Perhaps the largest of these efforts was in the Recurrent Coronary Prevention Project conducted in the San Francisco Bay area in the 1980s. Repeated follow-up over several years documented that changes were achieved in many people and also that the rate of recurrent myocardial infarction was reduced in persons receiving the Type A behaviour reduction efforts as opposed to those receiving only cardiovascular counselling (Thoreson and Powell 1992).
Intervention in the Type A behaviour pattern is difficult to accomplish successfully because this behavioural style has so many rewarding features, particularly in terms of career advancement and material gain. The programme itself must be carefully crafted according to effective psychological principles, and a group process approach appears to be more effective than individual counselling.
The characteristic of hardiness is based in an existential theory of personality and is defined as a person’s basic stance towards his or her place in the world that simultaneously expresses commitment, control and readiness to respond to challenge (Kobasa 1979; Kobasa, Maddi and Kahn 1982). Commitment is the tendency to involve oneself in, rather than experience alienation from, whatever one is doing or encounters in life. Committed persons have a generalized sense of purpose that allows them to identify with and find meaningful the persons, events and things of their environment. Control is the tendency to think, feel and act as if one is influential, rather than helpless, in the face of the varied contingencies of life. Persons with control do not naïvely expect to determine all events and outcomes, but rather perceive themselves as being able to make a difference in the world through their exercise of imagination, knowledge, skill and choice. Challenge is the tendency to believe that change rather than stability is normal in life and that changes are interesting incentives to growth rather than threats to security. Far from being reckless adventurers, persons with challenge are individuals with an openness to new experiences and a tolerance of ambiguity that enables them to be flexible in the face of change.
Conceived of as a reaction and corrective to a pessimistic bias in early stress research that emphasized persons’ vulnerability to stress, the basic hardiness hypothesis is that individuals characterized by high levels of the three interrelated orientations of commitment, control and challenge are more likely to remain healthy under stress than those individuals who are low in hardiness. The personality possessing hardiness is marked by a way of perceiving and responding to stressful life events that prevents or minimizes the strain that can follow stress and that, in turn, can lead to mental and physical illness.
The initial evidence for the hardiness construct was provided by retrospective and longitudinal studies of a large group of middle- and upper-level male executives employed by a Midwestern telephone company in the United States during the time of the divestiture of American Telephone and Telegraph (AT&T). Executives were monitored through yearly questionnaires over a five-year period for stressful life experiences at work and at home, physical health changes, personality characteristics, a variety of other work factors, social support and health habits. The primary finding was that under conditions of highly stressful life events, executives scoring high on hardiness were significantly less likely to become physically ill than were executives scoring low on hardiness, an outcome that was documented through self-reports of physical symptoms and illnesses and validated by medical records based on yearly physical examinations. The initial work also demonstrated: (a) the effectiveness of hardiness, combined with social support and exercise, in protecting mental as well as physical health; and (b) the independence of hardiness with respect to the frequency and severity of stressful life events, age, education, marital status and job level. Finally, the body of hardiness research initially assembled as a result of the study led to further research that showed the generalizability of the hardiness effect across a number of occupational groups, including non-executive telephone personnel, lawyers and US Army officers (Kobasa 1982).
Since those basic studies, the hardiness construct has been employed by many investigators working in a variety of occupational and other contexts and with a variety of research strategies ranging from controlled experiments to more qualitative field investigations (for reviews, see Maddi 1990; Orr and Westman 1990; Ouellette 1993). The majority of these studies have basically supported and expanded the original hardiness formulation, but there have also been disconfirmations of the moderating effect of hardiness and criticisms of the strategies selected for the measurement of hardiness (Funk and Houston 1987; Hull, Van Treuren and Virnelli 1987).
Emphasizing individuals’ ability to do well in the face of serious stressors, researchers have confirmed the positive role of hardiness among many groups including, in samples studied in the United States, bus drivers, military air-disaster workers, nurses working in a variety of settings, teachers, candidates in training for a number of different occupations, persons with chronic illness and Asian immigrants. Elsewhere, studies have been carried out among businessmen in Japan and trainees in the Israeli defence forces. Across these groups, one finds an association between hardiness and lower levels of either physical or mental symptoms, and, less frequently, a significant interaction between stress levels and hardiness that provides support for the buffering role of personality. In addition, results establish the effects of hardiness on non-health outcomes such as work performance and job satisfaction as well as on burnout. Another large body of work, most of it conducted with college-student samples, confirms the hypothesized mechanisms through which hardiness has its health-protective effects. These studies demonstrated the influence of hardiness upon the subjects’ appraisal of stress (Wiebe and Williams 1992). Also relevant to construct validity, a smaller number of studies have provided some evidence for the psychophysiological arousal correlates of hardiness and the relationship between hardiness and various preventive health behaviours.
Essentially all of the empirical support for a link between hardiness and health has relied upon data obtained through self-report questionnaires. Appearing most often in publications is the composite questionnaire used in the original prospective test of hardiness, along with abridged derivatives of that measure. Fitting the broad-based definition of hardiness given in the opening words of this article, the composite questionnaire contains items from a number of established personality instruments, including Rotter’s Internal-External Locus of Control Scale (Rotter, Seeman and Liverant 1962), Hahn’s California Life Goals Evaluation Schedules (Hahn 1966), Maddi’s Alienation versus Commitment Test (Maddi, Kobasa and Hoover 1979) and Jackson’s Personality Research Form (Jackson 1974). More recent efforts at questionnaire development have led to the Personal Views Survey, or what Maddi (1990) calls the “Third Generation Hardiness Test”. This new questionnaire addresses many of the criticisms raised with respect to the original measure, such as the preponderance of negative items and the instability of hardiness factor structures. Furthermore, studies of working adults in both the United States and the United Kingdom have yielded promising reports as to the reliability and validity of the hardiness measure. Nonetheless, not all of the problems have been resolved. For example, some reports show low internal reliability for the challenge component of hardiness. Another pushes beyond the measurement issue to raise a conceptual concern about whether hardiness should always be seen as a unitary phenomenon rather than a multidimensional construct made up of separate components that may relate to health independently of each other in certain stressful situations. The challenge for future researchers on hardiness is to retain both the conceptual and human richness of the hardiness notion while increasing its empirical precision.
Although Maddi and Kobasa (1984) describe the childhood and family experiences that support the development of personality hardiness, they and many other hardiness researchers are committed to defining interventions to increase adults’ stress-resistance. From an existential perspective, personality is seen as something that one is constantly constructing, and a person’s social context, including his or her work environment, is seen as either supportive or debilitating as regards the maintenance of hardiness. Maddi (1987, 1990) has provided the most thorough depiction and rationale for hardiness intervention strategies. He outlines a combination of focusing, situational reconstruction and compensatory self-improvement strategies that he has used successfully in small group sessions to enhance hardiness and decrease the negative physical and mental effects of stress in the workplace.
Low self-esteem (SE) has long been studied as a determinant of psychological and physiological disorders (Beck 1967; Rosenberg 1965; Scherwitz, Berton and Leventhal 1978). Since the beginning of the 1980s, organizational researchers have investigated self-esteem’s moderating role in the relationships between work stressors and individual outcomes. This reflects researchers’ growing interest in dispositions that appear either to protect persons from stressors or to make them more vulnerable.
Self-esteem can be defined as “the favorability of individuals’ characteristic self-evaluations” (Brockner 1988). Brockner (1983, 1988) has advanced the hypothesis that persons with low SE (low SEs) are generally more susceptible to environmental events than are high SEs. Brockner (1988) reviewed extensive evidence that this “plasticity hypothesis” explains a number of organizational processes. The most prominent research into this hypothesis has tested self-esteem’s moderating role in the relationship between role stressors (role conflict and role ambiguity) and health and affect. Role conflict (disagreement among one’s received roles) and role ambiguity (lack of clarity concerning the content of one’s role) are generated largely by events that are external to the individual, and therefore, according to the plasticity hypothesis, high SEs would be less vulnerable to them.
In a study of 206 nurses in a large southwestern US hospital, Mossholder, Bedeian and Armenakis (1981) found that self-reports of role ambiguity were negatively related to job satisfaction for low SEs but not for high SEs. Pierce et al. (1993) used an organization-based measure of self-esteem to test the plasticity hypothesis on 186 workers in a US utility company. Role ambiguity and role conflict were negatively related to satisfaction only among low SEs. Similar interactions with organization-based self-esteem were found for role overload, environmental support and supervisory support.
In the studies reviewed above, self-esteem was viewed as a proxy (or alternative measure) for self-appraisals of competence on the job. Ganster and Schaubroeck (1991a) speculated that the moderating role of self-esteem on role stressors’ effects was instead caused by low SEs’ lack of confidence in influencing their social environment, the result being weaker attempts at coping with these stressors. In a study of 157 US fire-fighters, they found that role conflict was positively related to somatic health complaints only among low SEs. There was no such interaction with role ambiguity.
In a separate analysis (1982) of the data on the nurses reported in their earlier study (Mossholder, Bedeian and Armenakis 1981), these authors found that peer-group interaction had a significantly more negative relationship to self-reported tension among low SEs than among high SEs. Likewise, low SEs reporting high peer-group interaction were less likely to wish to leave the organization than were high SEs reporting high peer-group interaction.
Several measures of self-esteem exist in the literature. Possibly the most often used of these is the ten-item instrument developed by Rosenberg (1965). This instrument was used in the Ganster and Schaubroeck (1991a) study. Mossholder and his colleagues (1981, 1982) used the self-confidence scale from Gough and Heilbrun’s (1965) Adjective Check List. The organization-based measure of self-esteem used by Pierce et al. (1993) was a ten-item instrument developed by Pierce et al. (1989).
The research findings suggest that health reports and satisfaction among low SEs can be improved either by reducing their role stressors or increasing their self-esteem. The organization development intervention of role clarification (dyadic supervisor-subordinate exchanges directed at clarifying the subordinate’s role and reconciling incompatible expectations), when combined with responsibility charting (clarifying and negotiating the roles of different departments), proved successful in a randomized field experiment at reducing role conflict and role ambiguity (Schaubroeck et al. 1993). It seems unlikely, however, that many organizations will be able and willing to undertake this rather extensive practice unless role stress is seen as particularly acute.
Brockner (1988) suggested a number of ways organizations can enhance employee self-esteem. Supervision practices are a major area in which organizations can improve. Performance appraisal feedback which focuses on behaviours rather than on traits, providing descriptive information with evaluative summations, and participatively developing plans for continuous improvement, is likely to have fewer adverse effects on employee self-esteem, and it may even enhance the self-esteem of some workers as they discover ways to improve their performance. Positive reinforcement of effective performance events is also critical. Training approaches such as mastery modelling (Wood and Bandura 1989) also ensure that positive efficacy perceptions are developed for each new task; these perceptions are the basis of organization-based self-esteem.