One of the more remarkable social transformations of this century was the emergence of a powerful Japanese economy from the debris of the Second World War. Fundamental to this climb to global competitiveness were a commitment to quality and a determination to prove false the then-common belief that Japanese goods were shoddy and worthless. Guided by the innovative teachings of Deming (1993), Juran (1988) and others, Japanese managers and engineers adopted practices that have ultimately evolved into a comprehensive management system rooted in the basic concept of quality. Fundamentally, this system represents a shift in thinking. The traditional view was that quality had to be balanced against the cost of attaining it. The view that Deming and Juran urged was that higher quality led to lower total cost and that a systems approach to improving work processes would help in attaining both of these objectives. Japanese managers adopted this management philosophy, engineers learned and practised statistical quality control, workers were trained and involved in process improvement, and the outcome was dramatic (Ishikawa 1985; Imai 1986).
By 1980, alarmed at the erosion of their markets and seeking to broaden their reach in the global economy, European and American managers began to search for ways to regain a competitive position. In the ensuing 15 years, more and more companies came to understand the principles underlying quality management and to apply them, initially in industrial production and later in the service sector as well. While there are a variety of names for this management system, the most commonly used is total quality management or TQM; an exception is the health care sector, which more frequently uses the term continuous quality improvement, or CQI. Recently, the term business process reengineering (BPR) has also come into use, but this tends to mean an emphasis on specific techniques for process improvement rather than on the adoption of a comprehensive management system or philosophy.
TQM is available in many “flavours,” but it is important to understand it as a system that includes both a management philosophy and a powerful set of tools for improving the efficiency of work processes. Its common elements have been described by a number of authors (Feigenbaum 1991; Mann 1989; Senge 1991).
Typically, organizations successfully adopting TQM find they must make changes on three fronts.
One is transformation. This involves such actions as defining and communicating a vision of the organization’s future, changing the management culture from top-down oversight to one of employee involvement, fostering collaboration instead of competition and refocusing the purpose of all work on meeting customer requirements. Seeing the organization as a system of interrelated processes is at the core of TQM, and is an essential means of securing a totally integrated effort towards improving performance at all levels. All employees must know the vision and the aim of the organization (the system) and understand where their work fits in it, or no amount of training in applying TQM process improvement tools can do much good. However, failure to change the organizational culture genuinely, particularly among lower echelons of managers, is frequently the downfall of nascent TQM efforts; Heilpern (1989) observes, “We have come to the conclusion that the major barriers to quality superiority are not technical, they are behavioural.” Unlike earlier, flawed “quality circle” programmes, in which improvement was expected to “convect” upward, TQM demands top management leadership and the firm expectation that middle management will facilitate employee participation (Hill 1991).
A second basis for successful TQM is strategic planning. The achievement of an organization’s vision and goals is tied to the development and deployment of a strategic quality plan. One corporation defined this as “a customer-driven plan for the application of quality principles to key business objectives and the continuous improvement of work processes” (Yarborough 1994). It is senior management’s responsibility—indeed, its obligation to workers, stockholders and beneficiaries alike—to link its quality philosophy to sound and feasible goals that can reasonably be attained. Deming (1993) called this “constancy of purpose” and saw its absence as a source of insecurity for the workforce of the organization. The fundamental intent of strategic planning is to align the activities of all of the people throughout the company or organization so that it can achieve its core goals and can react with agility to a changing environment. It is evident that it both requires and reinforces the need for widespread participation of supervisors and workers at all levels in shaping the goal-directed work of the company (Shiba, Graham and Walden 1994).
Only when these two changes are adequately carried out can one hope for success in the third: the implementation of continuous quality improvement. Quality outcomes, and with them customer satisfaction and improved competitive position, ultimately rest on widespread deployment of process improvement skills. Often, TQM programmes accomplish this through increased investments in training and through assignment of workers (frequently volunteers) to teams charged with addressing a problem. A basic concept of TQM is that the person most likely to know how a job can be done better is the person who is doing it at a given moment. Empowering these workers to make useful changes in their work processes is a part of the cultural transformation underlying TQM; equipping them with knowledge, skills and tools to do so is part of continuous quality improvement.
The collection of statistical data is a typical and basic step taken by workers and teams to understand how to improve work processes. Deming and others adapted their techniques from the seminal work of Shewhart in the 1920s (Schmidt and Finnigan 1992). Among the most useful TQM tools are: (a) the Pareto Chart, a graphical device for identifying the more frequently occurring problems, and hence the ones to be addressed first; (b) the statistical control chart, an analytic tool for ascertaining the degree of variability in the unimproved process; and (c) flow charting, a means to document exactly how the process is carried out at present. Possibly the most ubiquitous and important tool is the Ishikawa Diagram (or “fishbone” diagram), whose invention is credited to Kaoru Ishikawa (1985). This instrument is a simple but effective way by which team members can collaborate on identifying the root causes of the process problem under study, and thus point the path to process improvement.
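The arithmetic behind two of these tools can be sketched in a few lines of code. The Python fragment below is an illustrative sketch using invented defect data rather than any standard quality-control library: it produces the Pareto ordering of defect categories (most frequent first) and the three-sigma limits used to draw a basic control chart (here computed from the population standard deviation; real control charts usually estimate the limits from subgroup ranges).

```python
from collections import Counter

def pareto_order(defects):
    """Rank defect categories by frequency, most common first (Pareto ordering)."""
    return Counter(defects).most_common()

def control_limits(samples):
    """Centre line and simple three-sigma control limits for a measurement series."""
    n = len(samples)
    mean = sum(samples) / n
    sd = (sum((x - mean) ** 2 for x in samples) / n) ** 0.5
    return mean - 3 * sd, mean, mean + 3 * sd

# Hypothetical defect log from a work process:
defects = ["scratch", "dent", "scratch", "misalign", "scratch", "dent"]
print(pareto_order(defects))  # [('scratch', 3), ('dent', 2), ('misalign', 1)]
```

Ranking the categories tells a team to tackle “scratch” first; the control limits then show whether an improved process is genuinely less variable or merely looks so by chance.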
TQM, effectively implemented, may be important to workers and worker health in many ways. For example, the adoption of TQM can have an indirect influence. In a very basic sense, an organization that makes a quality transformation has arguably improved its chances of economic survival and success, and hence those of its employees. Moreover, it is likely to be one where respect for people is a basic tenet. Indeed, TQM experts often speak of “shared values”, those things that must be exemplified in the behaviour of both management and workers. These are often publicized throughout the organization as formal values statements or aspiration statements, and typically include such emotive language as “trust”, “respecting each other”, “open communications”, and “valuing our diversity” (Howard 1990).
Thus, it is tempting to suppose that quality workplaces will be “worker-friendly”—where worker-improved processes become less hazardous and where the climate is less stressful. The logic of quality is to build quality into a product or service, not to detect failures after the fact. It can be summed up in a word—prevention (Widfeldt and Widfeldt 1992). Such a logic is clearly compatible with the public health logic of emphasizing prevention in occupational health. As Williams (1993) points out in a hypothetical example, “If the quality and design of castings in the foundry industry were improved there would be reduced exposure ... to vibration as less finishing of castings would be needed.” Some anecdotal support for this supposition comes from satisfied employers who cite trend data on job health measures, climate surveys that show better employee satisfaction, and more numerous safety and health awards in facilities using TQM. Williams further presents two case studies in UK settings that exemplify such employer reports (Williams 1993).
Unfortunately, virtually no published studies offer firm evidence on the matter. What is lacking is a research base of controlled studies that document health outcomes, consider the possibility of detrimental as well as positive health influences, and link all of this causally to measurable factors of business philosophy and TQM practice. Given the significant prevalence of TQM enterprises in the global economy of the 1990s, this is a research agenda with genuine potential to define whether TQM is in fact a supportive tool in the prevention armamentarium of occupational safety and health.
We are on somewhat firmer ground to suggest that TQM can have a direct influence on worker health when it explicitly focuses quality improvement efforts on safety and health. Obviously, like all other work in an enterprise, occupational and environmental health activity is made up of interrelated processes, and the tools of process improvement are readily applied to them. One of the criteria against which candidates are examined for the Baldrige Award, the most important competitive honour granted to US organizations, is the competitor’s improvements in occupational health and safety. Yarborough has described how the occupational and environmental health (OEH) employees of a major corporation were instructed by senior management to adopt TQM with the rest of the company and how OEH was integrated into the company’s strategic quality plan (Yarborough 1994). The chief executive of a US utility that was the first non-Japanese company ever to win Japan’s coveted Deming Prize notes that safety was accorded a high priority in the TQM effort: “Of all the company’s major quality indicators, the only one that addresses the internal customer is employee safety.” By defining safety as a process, subjecting it to continuous improvement, and tracking lost-time injuries per 100 employees as a quality indicator, the utility reduced its injury rate by half, reaching the lowest point in the history of the company (Hudiberg 1991).
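The indicator the utility tracked is simple arithmetic. A minimal Python sketch, using hypothetical figures rather than the company’s actual data:

```python
def lost_time_injury_rate(injuries, employees):
    """Lost-time injuries per 100 employees over a reporting period."""
    return 100.0 * injuries / employees

# Hypothetical figures for a workforce of 20,000:
before = lost_time_injury_rate(360, 20000)  # 1.8 per 100 employees
after = lost_time_injury_rate(180, 20000)   # 0.9 -- the rate cut by half
```

Tracking such a ratio period by period is what allows safety to be treated like any other process measure on a quality chart.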
In summary, TQM is a comprehensive management system grounded in a management philosophy that emphasizes the human dimensions of work. It is supported by a powerful set of technologies that use data derived from work processes to document, analyse and continuously improve these processes.
The term unemployment describes the situation of individuals who desire to work but are unable to trade their skills and labour for pay. It is used to indicate either an individual’s personal experience of failure to find gainful work, or the experience of an aggregate in a community, a geographic region or a country. The collective phenomenon of unemployment is often expressed as the unemployment rate, that is, the number of people who are seeking work divided by the total number of people in the labour force, which in turn consists of both the employed and the unemployed. Individuals who desire to work for pay but have given up their efforts to find work are termed discouraged workers. These persons are not listed in official reports as members of the group of unemployed workers, for they are no longer considered to be part of the labour force.
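The definition above translates directly into arithmetic. The Python sketch below uses the approximate United States figures cited in the following paragraph: 8 million unemployed against an employed workforce of roughly 123 million, the latter an assumed figure chosen only to reproduce the reported 6.1% rate. Discouraged workers appear in neither count.

```python
def unemployment_rate(unemployed, employed):
    """Unemployed job-seekers as a percentage of the labour force.

    Discouraged workers appear in neither argument: they are not counted
    as unemployed because they are no longer part of the labour force.
    """
    labour_force = employed + unemployed
    return 100.0 * unemployed / labour_force

# Approximate US figures for 1994, in millions (employed count is an assumption):
print(round(unemployment_rate(8.0, 123.0), 1))  # 6.1
```

Note that when discouraged workers re-enter the labour force and resume searching, the measured rate can rise even though no one has newly lost a job.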
The Organisation for Economic Co-operation and Development (OECD) provides statistical information on the magnitude of unemployment in 25 countries around the world (OECD 1995). These consist mostly of the economically developed countries of Europe and North America, as well as Japan, New Zealand and Australia. According to the report for the year 1994, the total unemployment rate in these countries was 8.1% (or 34.3 million individuals). In the developed countries of central and western Europe, the unemployment rate was 9.9% (11 million), in the southern European countries 13.7% (9.2 million), and in the United States 6.1% (8 million). Of the 25 countries studied, only six (Austria, Iceland, Japan, Mexico, Luxembourg and Switzerland) had an unemployment rate below 5%. The report projected only a slight overall decrease (less than one-half of 1%) in unemployment for the years 1995 and 1996. These figures suggest that millions of individuals will continue to be vulnerable to the harmful effects of unemployment in the foreseeable future (Reich 1991).
A large number of people become unemployed at various periods during their lives. Depending on the structure of the economy and on its cycles of expansion and contraction, unemployment may strike students who drop out of school; those who have graduated from a high school, trade school or college but find it difficult to enter the labour market for the first time; women seeking to return to gainful employment after raising their children; veterans of the armed services; and older persons who want to supplement their income after retirement. However, at any given time, the largest segment of the unemployed population, usually between 50 and 65%, consists of displaced workers who have lost their jobs. The problems associated with unemployment are most visible in this segment of the unemployed, partly because of its size. Unemployment is also a serious problem for minorities and younger persons, whose unemployment rates are often two to three times higher than that of the general population (USDOL 1995).
The fundamental causes of unemployment are rooted in demographic, economic and technological changes. The restructuring of local and national economies usually gives rise to at least temporary periods of high unemployment rates. The trend towards the globalization of markets, coupled with accelerated technological changes, results in greater economic competition and the transfer of industries and services to new places that supply more advantageous economic conditions in terms of taxation, a cheaper labour force and more accommodating labour and environmental laws. Inevitably, these changes exacerbate the problems of unemployment in areas that are economically depressed.
Most people depend on the income from a job to provide themselves and their families with the necessities of life and to sustain their accustomed standard of living. When they lose a job, they experience a substantial reduction in their income. Mean duration of unemployment, in the United States for example, varies between 16 and 20 weeks, with a median between eight and ten weeks (USDOL 1995). If the period of unemployment that follows the job loss persists so that unemployment benefits are exhausted, the displaced worker faces a financial crisis. That crisis plays itself out as a cascading series of stressful events that may include loss of a car through repossession, foreclosure on a house, loss of medical care, and food shortages. Indeed, an abundance of research in Europe and the United States shows that economic hardship is the most consistent outcome of unemployment (Fryer and Payne 1986), and that economic hardship mediates the adverse impact of unemployment on various other outcomes, in particular, on mental health (Kessler, Turner and House 1988).
There is a great deal of evidence that job loss and unemployment produce significant deterioration in mental health (Fryer and Payne 1986). The most common outcomes of job loss and unemployment are increases in anxiety, somatic symptoms and depression symptomatology (Dooley, Catalano and Wilson 1994; Hamilton et al. 1990; Kessler, House and Turner 1987; Warr, Jackson and Banks 1988). Furthermore, there is some evidence that unemployment increases by over twofold the risk of onset of clinical depression (Dooley, Catalano and Wilson 1994). In addition to the well-documented adverse effects of unemployment on mental health, there is research that implicates unemployment as a contributing factor to other outcomes (see Catalano 1991 for a review). These outcomes include suicide (Brenner 1976), separation and divorce (Stack 1981; Liem and Liem 1988), child neglect and abuse (Steinberg, Catalano and Dooley 1981), alcohol abuse (Dooley, Catalano and Hough 1992; Catalano et al. 1993a), violence in the workplace (Catalano et al. 1993b), criminal behaviour (Allan and Steffensmeier 1989), and highway fatalities (Leigh and Waldon 1991). Finally, there is also some evidence, based primarily on self-report, that unemployment contributes to physical illness (Kessler, House and Turner 1987).
The adverse effects of unemployment on displaced workers are not limited to the period during which they have no jobs. In most instances, when workers become re-employed, their new jobs are significantly worse than the jobs they lost. Even after four years in their new positions, their earnings are substantially lower than those of similar workers who were not laid off (Ruhm 1991).
Because the fundamental causes of job loss and unemployment are rooted in societal and economic processes, remedies for their adverse social effects must be sought in comprehensive economic and social policies (Blinder 1987). At the same time, various community-based programmes can be undertaken to reduce the negative social and psychological impact of unemployment at the local level. There is overwhelming evidence that re-employment reduces distress and depression symptoms and restores psychosocial functioning to pre-unemployment levels (Kessler, Turner and House 1989; Vinokur, Caplan and Williams 1987). Therefore, programmes for displaced workers or others who wish to become employed should be aimed primarily at promoting and facilitating their re-employment or new entry into the labour force. A variety of such programmes have been tried successfully. Among these are special community-based intervention programmes for creating new ventures that in turn generate job opportunities (e.g., Last et al. 1995), and others that focus on retraining (e.g., Wolf et al. 1995).
Of the various programmes that attempt to promote re-employment, the most common are job search programmes organized as job clubs that attempt to intensify job search efforts (Azrin and Besalel 1982), or workshops that focus more broadly on enhancing job search skills and facilitating the transition into re-employment in high-quality jobs (e.g., Caplan et al. 1989). Cost/benefit analyses have demonstrated that these job search programmes are cost effective (Meyer 1995; Vinokur et al. 1991). Furthermore, there is also evidence that they can prevent deterioration in mental health and possibly the onset of clinical depression (Price, van Ryn and Vinokur 1992).
Similarly, in the case of organizational downsizing, industries can reduce the scope of unemployment by devising ways to involve workers in the decision-making process regarding the management of the downsizing programme (Kozlowski et al. 1993; London 1995; Price 1990). Workers may choose to pool their resources and buy out the industry, thus avoiding layoffs; to reduce working hours to spread and even out the reduction in force; to agree to a reduction in wages to minimize layoffs; to retrain and/or relocate to take new jobs; or to participate in outplacement programmes. Employers can facilitate the process by timely implementation of a strategic plan that offers the above-mentioned programmes and services to workers at risk of being laid off. As has been indicated already, unemployment leads to pernicious outcomes at both the personal and societal level. A combination of comprehensive government policies, flexible downsizing strategies by business and industry, and community-based programmes can help to mitigate the adverse consequences of a problem that will continue to affect the lives of millions of people for years to come.
Downsizing, layoffs, re-engineering, reshaping, reduction in force (RIF), mergers, early retirement, and outplacement—the description of these increasingly familiar changes has become a matter of commonplace jargon around the world in the past two decades. As companies have fallen on hard times, workers at all organizational levels have been let go and many remaining jobs have been altered. The job loss count in a single year (1992–93) includes Eastman Kodak, 2,000; Siemens, 13,000; Daimler-Benz, 27,000; Philips, 40,000; and IBM, 65,000 (The Economist 1993, extracted from “Job Future Ambiguity” (John M. Ivancevich)). Job cuts have occurred at companies earning healthy profits as well as at firms faced with the need to cut costs. The trend of cutting jobs and changing the way remaining jobs are performed is expected to continue even after worldwide economic growth returns.
Why has losing and changing jobs become so widespread? There is no simple answer that fits every organization or situation. However, one or more of a number of factors is usually implicated, including lost market share, increasing international and domestic competition, increasing labour costs, obsolete plant and technologies and poor managerial practices. These factors have resulted in managerial decisions to slim down, re-engineer jobs and alter the psychological contract between the employer and the worker.
A work situation in which an employee could count on job security or the opportunity to hold multiple positions via career-enhancing promotions in a single firm has changed drastically. Similarly, the binding power of the traditional employer-worker psychological contract has weakened as millions of managers and non-managers have been let go. Japan was once famous for providing “lifetime” employment to individuals. Today, even in Japan, a growing number of workers, especially in large firms, are not assured of lifetime employment. The Japanese, like their counterparts across the world, are facing what can be referred to as increased job insecurity and an ambiguous picture of what the future holds.
Job Insecurity: An Interpretation
Maslow (1954), Herzberg, Mausner and Snyderman (1959) and Super (1957) have proposed that individuals have a need for safety or security. That is, individual workers sense security when holding a permanent job or when being able to control the tasks performed on the job. Unfortunately, there has been a limited number of empirical studies that have thoroughly examined the job security needs of workers (Kuhnert and Pulmer 1991; Kuhnert, Sims and Lahey 1989).
On the other hand, with the increased attention that is being paid to downsizing, layoffs and mergers, more researchers have begun to investigate the notion of job insecurity. The nature, causes and consequences of job insecurity have been considered by Greenhalgh and Rosenblatt (1984), who define job insecurity as “perceived powerlessness to maintain desired continuity in a threatened job situation”. In Greenhalgh and Rosenblatt’s framework, job insecurity is considered a part of a person’s environment. In the stress literature, job insecurity is considered a stressor that introduces a threat which the individual interprets and responds to. A person’s responses might include decreased effort to perform well, feeling ill or below par, seeking employment elsewhere, increased coping to deal with the threat, or seeking more interaction with colleagues to buffer the feelings of insecurity.
Lazarus’ theory of psychological stress (Lazarus 1966; Lazarus and Folkman 1984) is centred on the concept of cognitive appraisal. Regardless of the actual severity of the danger facing a person, the occurrence of psychological stress depends upon the individual’s own evaluation of the threatening situation (here, job insecurity).
Selected Research on Job Insecurity
Unfortunately, like the research on job security, there is a paucity of well-designed studies of job insecurity. Furthermore, the majority of job insecurity studies incorporate unitary measurement methods. Few researchers examining stressors in general or job insecurity specifically have adopted a multiple-level approach to assessment. This is understandable because of the limitations of resources. However, the problems created by unitary assessments of job insecurity have resulted in a limited understanding of the construct. There are available to researchers four basic methods of measuring job insecurity: self-report, performance, psychophysiological and biochemical. It is still debatable whether these four types of measure assess different aspects of the consequences of job insecurity (Baum, Grunberg and Singer 1982). Each type of measure has limitations that must be recognized.
In addition to measurement problems in job insecurity research, it must be noted that there is a predominant concentration on imminent or actual job loss. As researchers have noted (Greenhalgh and Rosenblatt 1984; Roskies and Louis-Guerin 1990), more attention should be paid to “concern about a significant deterioration in terms and conditions of employment”. The deterioration of working conditions would logically seem to play a role in a person’s attitudes and behaviours.
Brenner (1987) has discussed the relationship between a job insecurity factor, unemployment, and mortality. He proposed that uncertainty, or the threat of instability, rather than unemployment itself causes higher mortality. The threat of being unemployed or losing control of one’s job activities can be powerful enough to contribute to psychiatric problems.
In a study of 1,291 managers, Roskies and Louis-Guerin (1990) examined the perceptions of workers facing layoffs, as well as those of managerial personnel working in stable, growth-oriented firms. Only a minority of managers were stressed about imminent job loss; a substantial number were more stressed about a deterioration in working conditions and long-term job security.
Roskies, Louis-Guerin and Fournier (1993) proposed, on the basis of a research study, that job insecurity may be a major psychological stressor. In this study of personnel in the airline industry, the researchers determined that personality disposition (positive and negative) plays a role in the impact of job insecurity on the mental health of workers.
Addressing the Problem of Job Insecurity
Organizations have numerous alternatives to downsizing, layoffs and reduction in force. Displaying compassion that clearly shows that management realizes the hardships that job loss and future job ambiguity pose is an important step. Alternatives such as reduced work weeks, across-the-board salary cuts, attractive early retirement packages, retraining existing employees and voluntary layoff programmes can be implemented (Wexley and Silverman 1993).
The global marketplace has increased job demands and job skill requirements. For some people, these increased demands and requirements will open up career opportunities; for others, they could exacerbate feelings of job insecurity. It is difficult to pinpoint exactly how individual workers will respond. Managers, however, must be aware that job insecurity can have negative consequences, and they need to acknowledge and respond to it. A better understanding of job insecurity and its potential negative impact on the performance, behaviour and attitudes of workers is a step in the right direction for managers.
It will obviously require more rigorous research to understand the full range of consequences of job insecurity among selected workers. As additional information becomes available, managers need to be open-minded about attempting to help workers cope with job insecurity. Redefining the way work is organized and executed should become a useful alternative to traditional job design methods, and managers have a responsibility to lead that redefinition.
Since job insecurity is likely to remain a perceived threat for many, but not all, workers, managers need to develop and implement strategies to address this factor. The institutional costs of ignoring job insecurity are too great for any firm to accept. Whether managers can efficiently deal with workers who feel insecure about their jobs and working conditions is fast becoming a measure of managerial competency.
The nature, prevalence, predictors and possible consequences of workplace violence have begun to attract the attention of labour and management practitioners and of researchers, largely because of the increasing occurrence of highly visible workplace murders. Once the focus is placed on workplace violence, it becomes clear that several issues are involved: the nature (or definition), prevalence, predictors and consequences, and ultimately the prevention, of workplace violence.
Definition and Prevalence of Workplace Violence
The definition and prevalence of workplace violence are integrally related.
Because workplace violence has attracted attention only relatively recently, there is no uniform definition. This is an important issue for several reasons. First, until a uniform definition exists, estimates of prevalence remain incomparable across studies and sites. Second, the nature of the violence is linked to strategies for prevention and intervention. For example, focusing on all instances of shootings within the workplace includes incidents that reflect the continuation of family conflicts, as well as those that reflect work-related stressors and conflicts. While employees would no doubt be affected in both situations, the organization has more limited control over the former, and hence the implications for intervention differ from those of situations in which workplace shootings are a direct function of workplace stressors and conflicts.
Some statistics suggest that workplace murders are the fastest growing form of murder in the United States (for example, Anfuso 1994). In some jurisdictions (for example, New York State), murder is the modal cause of death in the workplace. Because of statistics such as these, workplace violence has attracted considerable attention recently. However, early indications suggest that those acts of workplace violence with the highest visibility (for example, murder, shootings) attract the greatest research scrutiny, but also occur with the least frequency. In contrast, verbal and psychological aggression against supervisors, subordinates and co-workers are far more common, but gather less attention. Supporting the notion of a close integration between definitional and prevalence issues, this would suggest that what is being studied in most cases is aggression rather than violence in the workplace.
Predictors of Workplace Violence
A reading of the literature on the predictors of workplace violence reveals that most of the attention has been focused on developing a “profile” of the potentially violent or “disgruntled” employee (for example, Mantell and Albrecht 1994; Slora, Joy and Terris 1991). Such profiles typically identify the following salient personal characteristics: white, male, aged 20 to 35, a “loner”, a probable alcohol problem and a fascination with guns. Aside from the number of false-positive identifications this would lead to, this strategy is also based on identifying individuals predisposed to the most extreme forms of violence, and it ignores the larger group involved in most of the aggressive but less violent workplace incidents.
Going beyond “demographic” characteristics, there are suggestions that some of the personal factors implicated in violence outside of the workplace would extend to the workplace itself. Thus, inappropriate use of alcohol, general history of aggression in one’s current life or family of origin, and low self-esteem have been implicated in workplace violence.
A more recent strategy has been to identify the workplace conditions under which workplace violence is most likely to occur: identifying the physical and psychosocial conditions in the workplace. While the research on psychosocial factors is still in its infancy, it would appear as though feelings of job insecurity, perceptions that organizational policies and their implementation are unjust, harsh management and supervision styles, and electronic monitoring are associated with workplace aggression and violence (United States House of Representatives 1992; Fox and Levin 1994).
Cox and Leather (1994) look to the predictors of aggression and violence in general in their attempt to understand the physical factors that predict workplace violence. In this respect, they suggest that workplace violence may be associated with perceived crowding, and extreme heat and noise. However, these suggestions about the causes of workplace violence await empirical scrutiny.
Consequences of Workplace Violence
The research to date suggests that there are primary and secondary victims of workplace violence, both of which are worthy of research attention. Bank tellers or store clerks who are held up and employees who are assaulted at work by current or former co-workers are the obvious or direct victims of violence at work. However, consistent with the literature showing that much human behaviour is learned from observing others, witnesses to workplace violence are secondary victims. Both groups might be expected to suffer negative effects, and more research is needed to focus on the way in which both aggression and violence at work affect primary and secondary victims.
Prevention of Workplace Violence
Most of the literature on the prevention of workplace violence focuses at this stage on selection, i.e., the prior identification of potentially violent individuals for the purpose of excluding them from employment in the first instance (for example, Mantell and Albrecht 1994). Such strategies are of dubious utility for ethical and legal reasons. From a scientific perspective, it is equally doubtful whether we could identify potentially violent employees with sufficient precision (i.e., without an unacceptably high number of false-positive identifications). Clearly, a preventive approach needs to focus instead on workplace issues and job design. Following Fox and Levin’s (1994) reasoning, ensuring that organizational policies and procedures are characterized by perceived justice will probably constitute an effective prevention technique.
Research on workplace violence is in its infancy, but gaining increasing attention. This bodes well for the further understanding, prediction and control of workplace aggression and violence.
Historically, the sexual harassment of female workers has been ignored, denied, made to seem trivial, condoned and even implicitly supported, with women themselves being blamed for it (MacKinnon 1978). Its victims are almost entirely women, and it has been a problem since females first sold their labour outside the home.
Although sexual harassment also exists outside the workplace, here it will be taken to denote harassment in the workplace.
Sexual harassment is not an innocent flirtation nor the mutual expression of attraction between men and women. Rather, sexual harassment is a workplace stressor that poses a threat to a woman’s psychological and physical integrity and security, in a context in which she has little control because of the risk of retaliation and the fear of losing her livelihood. Like other workplace stressors, sexual harassment may have adverse health consequences for women that can be serious and, as such, qualifies as a workplace health and safety issue (Bernstein 1994).
In the United States, sexual harassment is viewed primarily as a discrete case of wrongful conduct to which one may appropriately respond with blame and recourse to legal measures for the individual. In the European Community it tends to be viewed rather as a collective health and safety issue (Bernstein 1994).
Because the manifestations of sexual harassment vary, people may not agree on its defining qualities, even where it has been set forth in law. Still, there are some common features of harassment that are generally accepted by those doing work in this area:
When directed towards a specific woman it can involve sexual comments and seductive behaviours, “propositions” and pressure for dates, touching, sexual coercion through the use of threats or bribery, and even physical assault and rape. In the case of a “hostile environment”, which is probably the more common state of affairs, it can involve jokes, taunts and other sexually charged comments that are threatening and demeaning to women; pornographic or sexually explicit posters; and crude sexual gestures, among other behaviours. To these can be added what is sometimes called “gender harassment”: sexist remarks that demean the dignity of women.
Women themselves may not label unwanted sexual attention or sexual remarks as harassing because they accept it as “normal” on the part of males (Gutek 1985). In general, women (especially if they have been harassed) are more likely to identify a situation as sexual harassment than men, who tend rather to make light of the situation, to disbelieve the woman in question or to blame her for “causing” the harassment (Fitzgerald and Ormerod 1993). People are also more likely to label incidents involving supervisors as sexually harassing than similar behaviour by peers (Fitzgerald and Ormerod 1993). This tendency reveals the significance of the differential power relationship between the harasser and the female employee (MacKinnon 1978). For example, a comment that a male supervisor may believe is complimentary may still be threatening to his female employee, who may fear that it will lead to pressure for sexual favours and that there will be retaliation for a negative response, including the potential loss of her job or negative evaluations.
Even when co-workers are involved, sexual harassment can be difficult for women to control and can be very stressful for them. This situation can occur where there are many more men than women in a work group, a hostile work environment is created and the supervisor is male (Gutek 1985; Fitzgerald and Ormerod 1993).
National data on sexual harassment are not collected, and it is difficult to obtain accurate numbers on its prevalence. In the United States, it has been estimated that 50% of all women will experience some form of sexual harassment during their working lives (Fitzgerald and Ormerod 1993). These numbers are consistent with surveys conducted in Europe (Bustelo 1992), although there is variation from country to country (Kauppinen-Toropainen and Gruber 1993). The extent of sexual harassment is also difficult to determine because women may not label it accurately and because of underreporting. Women may fear that they will be blamed, humiliated and not believed, that nothing will be done and that reporting problems will result in retaliation (Fitzgerald and Ormerod 1993). Instead, they may try to live with the situation or leave their jobs and risk serious financial hardship, a disruption of their work histories and problems with references (Koss et al. 1994).
Sexual harassment reduces job satisfaction and increases turnover, so that it has costs for the employer (Gutek 1985; Fitzgerald and Ormerod 1993; Kauppinen-Toropainen and Gruber 1993). Like other workplace stressors, it also can have negative effects on health that are sometimes quite serious. When the harassment is severe, as with rape or attempted rape, women are seriously traumatized. Even where sexual harassment is less severe, women can have psychological problems: they may become fearful, guilty and ashamed, depressed, nervous and less self-confident. They may have physical symptoms such as stomach-aches, headaches or nausea. They may have behavioural problems such as sleeplessness, over- or undereating, sexual problems and difficulties in their relations with others (Swanson et al. 1997).
Both the formal American and informal European approaches to combating harassment provide illustrative lessons (Bernstein 1994). In Europe, sexual harassment is sometimes dealt with by conflict resolution approaches that bring in third parties to help eliminate the harassment (e.g., England’s “challenge technique”). In the United States, sexual harassment is a legal wrong that provides victims with redress through the courts, although success is difficult to achieve. Victims of harassment also need to be supported through counselling, where needed, and helped to understand that they are not to blame for the harassment.
Prevention is the key to combating sexual harassment. Guidelines encouraging prevention have been promulgated through the European Commission Code of Practice (Rubenstein and DeVries 1993). They include the following: clear anti-harassment policies that are effectively communicated; special training and education for managers and supervisors; a designated ombudsperson to deal with complaints; formal grievance procedures and alternatives to them; and disciplinary treatment of those who violate the policies. Bernstein (1994) has suggested that mandated self-regulation may be a viable approach.
Finally, sexual harassment needs to be openly discussed as a workplace issue of legitimate concern to women and men. Trade unions have a critical role to play in helping place this issue on the public agenda. Ultimately, an end to sexual harassment requires that men and women reach social and economic equality and full integration in all occupations and workplaces.
Roles represent sets of behaviours that are expected of employees. To understand how organizational roles develop, it is particularly informative to see the process through the eyes of a new employee. Starting with the first day on the job, a new employee is presented with considerable information designed to communicate the organization’s role expectations. Some of this information is presented formally through a written job description and regular communications with one’s supervisor. Hackman (1992), however, states that workers also receive a variety of informal communications (termed discretionary stimuli) designed to shape their organizational roles. For example, a junior faculty member who is too vocal during a departmental meeting may receive looks of disapproval from more senior colleagues. Such looks are subtle, but communicate much about what is expected of a junior colleague.
Ideally, the process of defining each employee’s role should proceed such that each employee is clear about his or her role. Unfortunately, this is often not the case and employees experience a lack of role clarity or, as it is commonly called, role ambiguity. According to Breaugh and Colihan (1994), employees are often unclear about how to do their jobs, when certain tasks should be performed and the criteria by which their performance will be judged. In some cases, it is simply difficult to provide an employee with a crystal-clear picture of his or her role. For example, when a job is relatively new, it is still “evolving” within the organization. Furthermore, in many jobs the individual employee has tremendous flexibility regarding how to get the job done. This is particularly true of highly complex jobs. In many other cases, however, role ambiguity is simply due to poor communication between either supervisors and subordinates or among members of work groups.
Another problem that can arise when role-related information is communicated to employees is role overload. That is, the role consists of too many responsibilities for an employee to handle in a reasonable amount of time. Role overload can occur for a number of reasons. In some occupations, role overload is the norm. For example, physicians in training experience tremendous role overload, largely as preparation for the demands of medical practice. In other cases, it is due to temporary circumstances. For example, if someone leaves an organization, the roles of other employees may need to be temporarily expanded to make up for the missing worker’s absence. In other instances, organizations may not anticipate the demands of the roles they create, or the nature of an employee’s role may change over time. Finally, it is also possible that an employee may voluntarily take on too many role responsibilities.
What are the consequences to workers of circumstances characterized by role ambiguity or role overload? Years of research on role ambiguity have shown that it is a noxious state associated with negative psychological, physical and behavioural outcomes (Jackson and Schuler 1985). That is, workers who perceive role ambiguity in their jobs tend to be dissatisfied with their work, anxious and tense, report high numbers of somatic complaints, tend to be absent from work and may leave their jobs. The most common correlates of role overload tend to be physical and emotional exhaustion. In addition, epidemiological research has shown that overloaded individuals (as measured by work hours) may be at greater risk for coronary heart disease. In considering the effects of both role ambiguity and role overload, it must be kept in mind that most studies are cross-sectional (measuring role stressors and outcomes at one point in time) and have examined self-reported outcomes. Thus, inferences about causality must remain somewhat tentative.
Given the negative effects of role ambiguity and role overload, it is important for organizations to minimize, if not eliminate, these stressors. Since role ambiguity, in many cases, is due to poor communication, it is necessary to take steps to communicate role requirements more effectively. French and Bell (1990), in a book entitled Organization Development, describe interventions such as responsibility charting, role analysis and role negotiation. (For a recent example of the application of responsibility charting, see Schaubroeck et al. 1993). Each of these is designed to make employees’ role requirements explicit and well defined. In addition, these interventions allow employees input into the process of defining their roles.
When role requirements are made explicit, it may also be revealed that role responsibilities are not equitably distributed among employees. Thus, the previously mentioned interventions may also prevent role overload. In addition, organizations should keep up to date regarding individuals’ role responsibilities by reviewing job descriptions and carrying out job analyses (Levine 1983). It may also help to encourage employees to be realistic about the number of role responsibilities they can handle. In some cases, employees who are under pressure to take on too much may need to be more assertive when negotiating role responsibilities.
As a final comment, it must be remembered that role ambiguity and role overload are subjective states. Thus, efforts to reduce these stressors must consider individual differences. Some workers may in fact enjoy the challenge of these stressors. Others, however, may find them aversive. If this is the case, organizations have a moral, legal and financial interest in keeping these stressors at manageable levels.
The computerization of work has made possible the development of a new approach to work monitoring called electronic performance monitoring (EPM). EPM has been defined as the “computerized collection, storage, analysis, and reporting of information about employees’ activities on a continuous basis” (USOTA 1987). Although banned in many European countries, electronic performance monitoring is increasing throughout the world on account of intense competitive pressures to improve productivity in a global economy.
EPM has changed the psychosocial work environment. This application of computer technology has significant implications for work supervision, workload demands, performance appraisal, performance feedback, rewards, fairness and privacy. As a result, occupational health researchers, worker representatives, government agencies and the public news media have expressed concern about the stress-health effects of electronic performance monitoring (USOTA 1987).
Traditional approaches to work monitoring include direct observation of work behaviours, examination of work samples, review of progress reports and analysis of performance measures (Larson and Callahan 1990). Historically, employers have always attempted to improve on these methods of monitoring worker performance. Considered as part of a continuing monitoring effort across the years, then, EPM is not a new development. What is new, however, is the use of EPM, particularly in office and service work, to capture employee performance on a second-by-second, keystroke-by-keystroke basis so that work management in the form of corrective action, performance feedback, delivery of incentive pay, or disciplinary measures can be taken at any time (Smith 1988). In effect, the human supervisor is being replaced by an electronic supervisor.
EPM is used in office work such as word processing and data entry to monitor keystroke production and error rates. Airline reservation clerks and directory assistance operators are monitored by computers to determine how long it takes to service customers and to measure the time interval between calls. EPM also is used in more traditional economic sectors. Freight haulers, for example, are using computers to monitor driver speed and fuel consumption, and tire manufacturers are electronically monitoring the productivity of rubber workers. In sum, EPM is used to establish performance standards, track employee performance, compare actual performance with predetermined standards and administer incentive pay programmes based on these standards (USOTA 1987).
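The core logic these systems apply, comparing measured output with a predetermined standard, can be sketched as follows. This is an illustrative outline only: the function, field names and threshold values are hypothetical, not drawn from any actual monitoring product described in the source.

```python
# Hypothetical sketch of the basic EPM comparison step: measured output
# versus predetermined performance standards (all values illustrative).

def evaluate_operator(keystrokes: int, errors: int, minutes_worked: float,
                      standard_kpm: float = 200.0,
                      max_error_rate: float = 0.02) -> dict:
    """Compare one operator's measured output with preset standards."""
    kpm = keystrokes / minutes_worked            # keystrokes per minute
    error_rate = errors / keystrokes if keystrokes else 0.0
    return {
        "keystrokes_per_minute": round(kpm, 1),
        "error_rate": round(error_rate, 4),
        "meets_standard": kpm >= standard_kpm and error_rate <= max_error_rate,
    }

# Example: 84,000 keystrokes and 1,200 errors over a 420-minute shift.
result = evaluate_operator(84_000, 1_200, 420.0)
print(result)
```

In a deployed system this comparison would run continuously and feed corrective action, incentive pay or discipline, which is precisely the second-by-second character of EPM that distinguishes it from traditional monitoring.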
Advocates of EPM assert that continuous electronic work monitoring is essential to high performance and productivity in the contemporary workplace. It is argued that EPM enables managers and supervisors to organize and control human, material and financial resources. Specifically, EPM provides for:
Supporters of electronic monitoring also claim that, from the worker’s perspective, there are several benefits. Electronic monitoring, for example, can provide regular feedback of work performance, which enables workers to take corrective action when necessary. It also satisfies the worker’s need for self-evaluation and reduces performance uncertainty.
Despite the possible benefits of EPM, there is concern that certain monitoring practices are abusive and constitute an invasion of employee privacy (USOTA 1987). Privacy has become an issue particularly when workers do not know when or how often they are being monitored. Since work organizations often do not share performance data with workers, a related privacy issue is whether workers should have access to their own performance records or the right to question possible wrong information.
Workers also have raised objections to the manner in which monitoring systems have been implemented (Smith, Carayon and Miezio 1986; Westin 1986). In some workplaces, monitoring is perceived as an unfair labour practice when it is used to measure individual, as opposed to group, performance. In particular, workers have taken exception to the use of monitoring to enforce compliance with performance standards that impose excessive workload demands. Electronic monitoring also can make the work process more impersonal by replacing a human supervisor with an electronic supervisor. In addition, the overemphasis on increased production may encourage workers to compete instead of cooperate with one another.
Various theoretical paradigms have been postulated to account for the possible stress-health effects of EPM (Amick and Smith 1992; Schleifer and Shell 1992; Smith et al. 1992b). A fundamental assumption made by many of these models is that EPM indirectly influences stress-health outcomes by intensifying workload demands, diminishing job control and reducing social support. In effect, EPM mediates changes in the psychosocial work environment that result in an imbalance between the demands of the job and the worker’s resources to adapt.
The impact of EPM on the psychosocial work environment is felt at three levels of the work system: the organization-technology interface, the job-technology interface and the human-technology interface (Amick and Smith 1992). The extent of work system transformation and the subsequent implications for stress outcomes are contingent upon the inherent characteristics of the EPM process; that is, the type of information gathered, the method of gathering the information and the use of the information (Carayon 1993). These EPM characteristics can interact with various job design factors and increase stress-health risks.
An alternative theoretical perspective views EPM as a stressor that directly results in strain independent of other job-design stress factors (Smith et al. 1992b; Carayon 1994). EPM, for example, can generate fear and tension as a result of workers being constantly watched by “Big Brother”. EPM also may be perceived by workers as an invasion of privacy that is highly threatening.
With respect to the stress effects of EPM, empirical evidence obtained from controlled laboratory experiments indicates that EPM can produce mood disturbances (Aiello and Shao 1993; Schleifer, Galinsky and Pan 1995) and hyperventilatory stress reactions (Schleifer and Ley 1994). Field studies have also reported that EPM alters job-design stress factors (for example, workload), which, in turn, generate tension or anxiety together with depression (Smith, Carayon and Miezio 1986; Ditecco et al. 1992; Smith et al. 1992b; Carayon 1994). In addition, EPM is associated with symptoms of musculoskeletal discomfort among telecommunication workers and data-entry office workers (Smith et al. 1992b; Sauter et al. 1993; Schleifer, Galinsky and Pan 1995).
The use of EPM to enforce compliance with performance standards is perhaps one of the most stressful aspects of this approach to work monitoring (Schleifer and Shell 1992). Under these conditions, it may be useful to adjust performance standards with a stress allowance (Schleifer and Shell 1992): a stress allowance would be applied to the normal cycle time, as is the case with other more conventional work allowances such as rest breaks and machine delays. Particularly among workers who have difficulty meeting EPM performance standards, a stress allowance would optimize workload demands and promote well-being by balancing the productivity benefits of electronic performance monitoring against the stress effects of this approach to work monitoring.
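The allowance arithmetic described above can be sketched in a few lines. The percentage values below are illustrative assumptions; Schleifer and Shell (1992) propose the stress-allowance concept, but specific magnitudes would have to be set per task.

```python
# Illustrative allowance-adjusted cycle time: percentage-based allowances
# (rest breaks, machine delays, and a stress allowance) are added to the
# normal cycle time. All percentages here are hypothetical.

def adjusted_cycle_time(normal_cycle_s: float, allowances: dict) -> float:
    """Inflate a normal cycle time by the sum of percentage allowances."""
    total_allowance = sum(allowances.values())
    return normal_cycle_s * (1.0 + total_allowance)

allowances = {
    "rest_breaks": 0.07,      # 7%  (illustrative)
    "machine_delays": 0.03,   # 3%  (illustrative)
    "stress": 0.05,           # 5%  stress allowance (illustrative)
}

# A 30-second normal cycle becomes a 34.5-second performance standard.
print(adjusted_cycle_time(30.0, allowances))
```

Treating the stress allowance like any other conventional allowance makes the trade-off explicit: the performance standard is relaxed by a fixed, auditable percentage rather than left to ad hoc supervisory judgement.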
Beyond the question of how to minimize or prevent the possible stress-health effects of EPM, a more fundamental issue is whether this “Tayloristic” approach to work monitoring has any utility in the modern workplace. Work organizations are increasingly utilizing sociotechnical work-design methods, “total quality management” practices, participative work groups, and organizational, as opposed to individual, measures of performance. As a result, electronic work monitoring of individual workers on a continuous basis may have no place in high-performance work systems. In this regard, it is interesting to note that those countries (for example, Sweden and Germany) that have banned EPM are the same countries which have most readily embraced the principles and practices associated with high-performance work systems.
This article reviews the reasons machine-pacing is used in the workplace, presents a classification of machine-paced work, summarizes what is known about its impact on well-being, and describes methods by which its adverse effects can be alleviated or reduced.
Benefits of Machine-Paced Work
The effective utilization of machine-paced work has the following benefits for an organization:
Classification of Machine-Paced Work
A classification of paced work is provided in figure 1.
Figure 1. The Job Stress Model of the National Institute for Occupational Safety and Health (NIOSH)
Effect of Machine-Paced Work on Well-Being
Machine-paced research has been carried out in laboratory settings, in industry (by case studies and controlled experiments) and by epidemiological studies (Salvendy 1981).
An analysis was performed of 85 studies dealing with machine-paced and self-paced work, of which 48% were laboratory studies, 30% industrial, 14% review studies, 4% combined laboratory and industrial, and 4% conceptual studies (Burke and Salvendy 1981). Of the 103 variables used in these studies, 41% were physiological, 32% were performance variables and 27% psychological. From this analysis, the following practical implications were derived for the use of machine-paced versus self-paced work arrangements:
In studying industrial workers for an entire year in our experimentally controlled situation, in which over 50 million data points were collected, it was shown that 45% of the labour force prefers self-paced work, 45% prefers machine-paced work, and 10% does not like work of any type (Salvendy 1976).
Table 1. Psychological profiles of operators who prefer self-paced and machine-paced work
Uncertainty is the most significant contributor to stress and can be effectively managed by performance feedback (see figure 2) (Salvendy and Knight 1983).
Figure 2. Effects of performance feedback on reduction of stress
Autonomy and job control are concepts with a long history in the study of work and health. Autonomy—the extent to which workers can exercise discretion in how they perform their work—is most closely associated with theories that are concerned with the challenge of designing work so that it is intrinsically motivating, satisfying and conducive to physical and mental well-being. In virtually all such theories, the concept of autonomy plays a central role. The term control (defined below) is generally understood to have a broader meaning than autonomy. In fact, one could consider autonomy to be a specialized form of the more general concept of control. Because control is the more inclusive term, it will be used throughout the remainder of this article.
Throughout the 1980s, the concept of control formed the core of perhaps the most influential theory of occupational stress (see, for example, the review of the work stress literature by Ganster and Schaubroeck 1991b). This theory, usually known as the Job Decision Latitude Model (Karasek 1979), stimulated many large-scale epidemiological studies that investigated the joint effects of control in conjunction with a variety of demanding work conditions on worker health. Though there has been some controversy regarding the exact way that control might help determine health outcomes, epidemiologists and organizational psychologists have come to regard control as a critical variable that should be given serious consideration in any investigation of psychosocial work stress conditions. Concern for the possible detrimental effects of low worker control was so high, for example, that in 1987 the National Institute for Occupational Safety and Health (NIOSH) of the United States organized a special workshop of authorities from epidemiology, psychophysiology, and industrial and organizational psychology to critically review the evidence concerning the impact of control on worker health and well-being. This workshop eventually culminated in the comprehensive volume Job Control and Worker Health (Sauter, Hurrell and Cooper 1989) that provides a discussion of the global research efforts on control. Such widespread acknowledgement of the role of control in worker well-being also had an impact on governmental policy, with the Swedish Work Environment Act (Ministry of Labour 1987) stating that “the aim must be for work to be arranged in such a way so that the employee himself can influence his work situation”. In the remainder of this article I summarize the research evidence on work control with the goal of providing the occupational health and safety specialist with the following:
First, what exactly is meant by the term control? In its broadest sense it refers to workers’ ability to actually influence what happens in their work environment. Moreover, this ability to influence the work setting should be considered in light of the worker’s goals. The term refers to the ability to influence matters that are relevant to one’s personal goals. This emphasis on being able to influence the work environment distinguishes control from the related concept of predictability. The latter refers to one’s being able to anticipate what demands will be made on oneself, for example, but does not imply any ability to alter those demands. Lack of predictability constitutes a source of stress in its own right, particularly when it produces a high level of ambiguity about what performance strategies one ought to adopt to perform effectively or if one even has a secure future with the employer. Another distinction that should be made is that between control and the more inclusive concept of job complexity. Early conceptualizations of control considered it together with such aspects of work as skill level and availability of social interaction. Our discussion here discriminates control from these other domains of job complexity.
One can consider mechanisms by which workers can exercise control and the domains over which that control can apply. One way that workers can exercise control is by making decisions as individuals. These decisions can be about what tasks to complete, the order of those tasks, and the standards and processes to follow in completing those tasks, to name but a few. The worker might also have some collective control either through representation or by social action with co-workers. In terms of domains, control might apply to such matters as the work pace, the amount and timing of interaction with others, the physical work environment (lighting, noise and privacy), scheduling of vacations or even matters of policy at the worksite. Finally, one can distinguish between objective and subjective control. One might, for example, have the ability to choose one’s work pace but not be aware of it. Similarly, one might believe that one can influence policies in the workplace even though this influence is essentially nil.
How can the occupational health and safety specialist assess the level of control in a work situation? As recorded in the literature, basically two approaches have been taken. One approach has been to make an occupational-level determination of control. In this case every worker in a given occupation would be considered to have the same level of control, as it is assumed to be determined by the nature of the occupation itself. The disadvantage to this approach, of course, is that one cannot obtain much insight as to how workers are faring in a particular worksite, where their control might have been determined as much by their employer’s policies and practices as by their occupational status. The more common approach is to survey workers about their subjective perceptions of control. A number of psychometrically sound measures have been developed for this purpose and are readily available. The NIOSH control scale (McLaney and Hurrell 1988), for example, consists of sixteen questions and provides assessments of control in the domains of task, decision, resources and physical environment. Such scales can easily be incorporated into an assessment of worker safety and health concerns.
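Scoring such a multi-domain instrument is straightforward to sketch. The item-to-domain mapping and 1-5 response range below are assumptions made for illustration; they are not the actual NIOSH instrument (McLaney and Hurrell 1988), which should be consulted directly for real assessments.

```python
# Hypothetical scoring sketch for a sixteen-item, four-domain control
# scale. The domain mapping and the 1-5 response range are illustrative
# assumptions, not the published NIOSH instrument.

from statistics import mean

# Assumed mapping of item numbers to the four control domains.
DOMAINS = {
    "task": [1, 2, 3, 4],
    "decision": [5, 6, 7, 8],
    "resources": [9, 10, 11, 12],
    "physical_environment": [13, 14, 15, 16],
}

def score_control(responses: dict) -> dict:
    """Average the 1-5 responses within each domain into subscale scores."""
    return {domain: round(float(mean(responses[i] for i in items)), 2)
            for domain, items in DOMAINS.items()}

# Example respondent: higher scores indicate greater perceived control.
responses = {i: 4 for i in range(1, 9)} | {i: 2 for i in range(9, 17)}
print(score_control(responses))
```

Domain-level subscales like these are what make such scales useful in practice: a worksite may show adequate task control but low control over the physical environment, pointing interventions at the right target.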
Is control a significant determinant of worker safety and health? This question has driven many large-scale research efforts since at least 1985. Since most of these studies have consisted of non-experimental field surveys in which control was not purposely manipulated, the evidence can only show a systematic correlation between control and health and safety outcome variables. The lack of experimental evidence prevents us from making direct causal assertions, but the correlational evidence is quite consistent in showing that workers with lower levels of control suffer more from mental and physical health complaints. The evidence is strongly suggestive, then, that increasing worker control constitutes a viable strategy for improving the health and welfare of workers. A more controversial question is whether control interacts with other sources of psychosocial stress to determine health outcomes. That is, will high control levels counteract the deleterious effects of other job demands? This is an intriguing question, for, if true, it suggests that the ill effects of high workloads, for example, can be negated by increasing worker control with no corresponding need to lower workload demands. The evidence is clearly mixed on this question, however. About as many investigators have reported such interaction effects as have not. Thus, control should not be considered a panacea that will cure the problems brought on by other psychosocial stressors.
Work by organizational researchers suggests that increasing worker control can significantly improve health and well-being. Moreover, it is relatively easy to diagnose low worker control through the use of brief survey measures. How, then, can the health and safety specialist intervene to increase worker control levels? As there are many domains of control, there are many ways to increase workplace control. These range from providing opportunities for workers to participate in decisions that affect them to the fundamental redesign of jobs. What is clearly important is to target control domains that are relevant to the primary goals of the workers and that fit the situational demands. These domains can probably best be determined by involving workers in joint diagnosis and problem-solving sessions. It should be noted, however, that the kinds of workplace changes that in many cases are necessary to achieve real gains in control involve fundamental changes in management systems and policies. Increasing control might be as simple as providing a switch that allows machine-paced workers to control their pace, but it is just as likely to involve important changes in the decision-making authority of workers. Thus, organizational decision makers must usually be full and active supporters of control-enhancing interventions.
The purpose of this article is to afford the reader an understanding of how ergonomic conditions can affect the psychosocial aspects of working, employee satisfaction with the work environment, and employee health and well-being. The major thesis is that, with respect to physical surroundings, job demands and technological factors, improper design of the work environment and job activities can cause adverse employee perceptions, psychological stress and health problems (Smith and Sainfort 1989; Cooper and Marshall 1976).
Industrial ergonomics is the science of fitting the work environment and job activities to the capabilities, dimensions and needs of people. Ergonomics deals with the physical work environment, tool and technology design, workstation design, job demands and the physiological and biomechanical loading on the body. Its goal is to increase the degree of fit among employees, the environments in which they work, their tools and their job demands. When the fit is poor, stress and health problems can occur. The many relationships between job demands and psychological distress are discussed elsewhere in this chapter, as well as in Smith and Sainfort (1989), which defines the balance theory of job stress and job design. Balance is the use of different aspects of job design to counteract job stressors. The concept of job balance is important in the examination of ergonomic considerations and health. For instance, the discomforts and disorders produced by poor ergonomic conditions can make an individual more susceptible to job stress and psychological disorders, or can intensify the somatic effects of job stress.
As spelled out by Smith and Sainfort (1989), job stress arises from a variety of sources.
Smith (1987) and Cooper and Marshall (1976) discuss the characteristics of the workplace that can cause psychological stress. These include improper workload, heavy work pressure, hostile environment, role ambiguity, lack of challenging tasks, cognitive overload, poor supervisory relations, lack of task control or decision-making authority, poor relationship with other employees and lack of social support from supervisors, fellow employees and family.
Adverse ergonomic characteristics of work can cause visual, muscular and psychological disturbances such as visual fatigue, eye strain, sore eyes, headaches, fatigue, muscle soreness, cumulative trauma disorders, back disorders, psychological tension, anxiety and depression. Sometimes these effects are temporary and may disappear when the individual is removed from work, is given an opportunity to rest at work, or when the design of the work environment is improved. When exposure to poor ergonomic conditions is chronic, however, the effects can become permanent. Visual and muscular disturbances, aches and pains can induce anxiety in employees; the result may be psychological stress or an exacerbation of the stress effects of other adverse working conditions. Visual and musculoskeletal disorders that lead to a loss of function and disability can produce anxiety, depression, anger and melancholy. The disorders caused by ergonomic misfit interact synergistically, creating a circular effect: visual or muscular discomfort generates psychological stress, stress heightens sensitivity to pain in the eyes and muscles, and the heightened pain in turn produces more stress, and so on.
Smith and Sainfort (1989) have defined five elements of the work system that are significant in the design of work and that relate to the causes and control of stress: (1) the person; (2) the physical work environment; (3) tasks; (4) technology; and (5) work organization. All of these except the person are discussed below.
Physical Work Environment
The physical work environment produces sensory demands which affect an employee’s ability to see, hear and touch properly, and includes such features as air quality, temperature and humidity. In addition, noise is one of the most prominent of the ergonomic conditions that produce stress (Cohen and Spacapan 1983). When physical working conditions produce a “poor fit” with employees’ needs and capabilities, generalized fatigue, sensory fatigue and performance frustration are the result. Such conditions can lead to psychological stress (Grandjean 1968).
Technology and Workstation Factors
Various aspects of technology have proved troublesome for employees, including incompatible controls and displays, poor response characteristics of controls, displays with poor sensory sensitivity, technology that is difficult to operate, equipment that impairs employee performance and equipment breakdowns (Sanders and McCormick 1993; Smith et al. 1992a). Research has shown that employees confronting such problems report more physical and psychological stress (Smith and Sainfort 1989; Sauter, Dainoff and Smith 1990).
Task Factors
Two critical ergonomic task factors that have been tied to job stress are heavy workload and work pressure (Cooper and Smith 1985). Too much or too little work produces stress, as does unwanted overtime. When employees must work under time pressure, for example to meet deadlines, or when the workload is unrelentingly high, stress is also high. Other critical task factors that have been tied to stress are machine pacing of the work process, a lack of cognitive content in the job tasks and low task control. From an ergonomic perspective, workloads should be established using scientific methods of time and motion evaluation (ILO 1986), not set by other criteria such as the economic need to recover capital investment or the capacity of the technology.
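The kind of time-and-motion arithmetic behind a scientifically established workload can be sketched as follows. The formulas follow standard work-study practice (observed cycle time adjusted by a performance rating, then inflated by rest and delay allowances); the specific figures are assumptions for illustration, not values from ILO (1986).

```python
# Illustrative sketch of the time-study arithmetic behind a scientifically
# set workload, following standard work-study practice. The numbers below
# are assumptions for illustration, not values from any published study.

def standard_time(observed_time_min, performance_rating, allowance_pct):
    """Standard time per work cycle, in minutes.

    basic time    = observed time x (performance rating / 100)
    standard time = basic time x (1 + allowances for rest, personal
                    needs and unavoidable delay)
    """
    basic = observed_time_min * (performance_rating / 100)
    return basic * (1 + allowance_pct / 100)

def shift_output_target(shift_minutes, std_time_min):
    """Units per shift implied by the standard time."""
    return shift_minutes / std_time_min

st = standard_time(observed_time_min=2.0, performance_rating=110,
                   allowance_pct=15)
print(round(st, 3))                          # 2.53 minutes per cycle
print(round(shift_output_target(450, st)))   # 178 units in a 450-minute shift
```

The point of the allowance term is precisely the one made above: a workload derived this way builds rest and recovery into the standard, rather than letting the capacity of the machinery or the need to recover capital investment dictate the pace.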
Work Organization Factors
Three ergonomic aspects of the management of the work process have been identified as conditions that can lead to employee psychological stress: shift work, machine-paced or assembly-line work, and unwanted overtime (Smith 1987). Shift work has been shown to disrupt biological rhythms and basic physiological functioning (Tepas and Monk 1987; Monk and Tepas 1985). Machine-paced or assembly-line work that produces short-cycle tasks with little cognitive content and low employee control over the process leads to stress (Sauter, Hurrell and Cooper 1989). Unwanted overtime can lead to employee fatigue and to adverse psychological reactions such as anger and mood disturbances (Smith 1987). Machine-paced work, unwanted overtime and perceived lack of control over work activities have also been linked to mass psychogenic illness (Colligan 1985).