Andrew Steptoe and Tessa M. Pollard
The acute physiological adjustments recorded during the performance of problem-solving or psychomotor tasks in the laboratory include: raised heart rate and blood pressure; alterations in cardiac output and peripheral vascular resistance; increased muscle tension and electrodermal (sweat gland) activity; disturbances in breathing pattern; and modifications in gastrointestinal activity and immune function. The best studied neurohormonal responses are those of the catecholamines (adrenaline and noradrenaline) and cortisol. Noradrenaline is the primary transmitter released by the nerves of the sympathetic branch of the autonomic nervous system. Adrenaline is released from the adrenal medulla following stimulation of the sympathetic nervous system, while activation of the pituitary gland by higher centres in the brain results in the release of cortisol from the adrenal cortex. These hormones support autonomic activation during stress and are responsible for other acute changes, such as stimulation of the processes that govern blood clotting, and the release of stored energy supplies from adipose tissue. It is likely that these kinds of response will also be seen during job stress, but studies in which work conditions are simulated, or in which people are tested in their normal jobs, are required to demonstrate such effects.
A variety of methods is available to monitor these responses. Conventional psychophysiological techniques are used to assess autonomic responses to demanding tasks (Cacioppo and Tassinary 1990). Levels of stress hormones can be measured in the blood or urine, or in the case of cortisol, in the saliva. The sympathetic activity associated with challenge has also been documented by measures of noradrenaline spillover from nerve terminals, and by direct recording of sympathetic nervous activity with miniature electrodes. The parasympathetic or vagal branch of the autonomic nervous system typically responds to task performance with reduced activity, and this can, under certain circumstances, be indexed through recording heart rate variability or sinus arrhythmia. In recent years, power spectrum analysis of heart rate and blood pressure signals has revealed wavebands that are characteristically associated with sympathetic and parasympathetic activity. Measures of the power in these wavebands can be used to index autonomic balance, and have shown a shift towards the sympathetic branch at the expense of the parasympathetic branch during task performance.
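These spectral indices are described here only in general terms. As a rough illustration of the computation, the following sketch (in Python) estimates the ratio of low-frequency (LF, conventionally 0.04 to 0.15 Hz) to high-frequency (HF, 0.15 to 0.4 Hz) power from a series of interbeat intervals; the function name, resampling rate and synthetic data are assumptions made for demonstration, not taken from any study cited here.

```python
# Minimal sketch: indexing autonomic balance from heart rate variability.
# Assumes a series of R-R (interbeat) intervals in milliseconds; all names,
# parameters and data here are illustrative assumptions.
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d

def lf_hf_ratio(rr_ms, fs=4.0):
    """Estimate the LF/HF power ratio of an R-R interval series."""
    t = np.cumsum(rr_ms) / 1000.0                 # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)       # uniform 4 Hz time grid
    rr = interp1d(t, rr_ms, kind="cubic")(grid)   # resample the irregular series
    rr -= rr.mean()                               # remove the DC component
    freqs, psd = welch(rr, fs=fs, nperseg=min(256, len(grid)))
    lf_band = (freqs >= 0.04) & (freqs < 0.15)    # low-frequency band
    hf_band = (freqs >= 0.15) & (freqs < 0.40)    # high-frequency (vagal) band
    lf = np.trapz(psd[lf_band], freqs[lf_band])
    hf = np.trapz(psd[hf_band], freqs[hf_band])
    return lf / hf                                # higher values: sympathetic shift

# Synthetic resting R-R series with 0.25 Hz respiratory modulation
rng = np.random.default_rng(0)
beats = np.arange(300) * 0.8                      # approximate beat times (s)
rr_series = 800 + 40 * np.sin(2 * np.pi * 0.25 * beats) + rng.normal(0, 10, 300)
print(f"LF/HF ratio: {lf_hf_ratio(rr_series):.2f}")
```

A rise in this ratio during task performance would correspond to the shift towards the sympathetic branch described above.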
Few laboratory assessments of acute physiological responses have simulated work conditions directly. However, dimensions of task demand and performance that are relevant to work have been investigated. For example, as the demands of externally paced work increase (through faster pace or more complex problem solving), there is a rise in adrenaline level, heart rate and blood pressure, a reduction in heart rate variability and an increase in muscle tension. In comparison with self-paced tasks performed at the same rate, external pacing results in greater blood pressure and heart rate increases (Steptoe et al. 1993). In general, personal control over potentially stressful stimuli reduces autonomic and neuroendocrine activation in comparison with uncontrollable situations, although the effort of maintaining control over the situation itself has its own physiological costs.
Frankenhaeuser (1991) has suggested that adrenaline levels are raised when a person is mentally aroused or performing a demanding task, and that cortisol levels are raised when an individual is distressed or unhappy. Applying these ideas to job stress, Frankenhaeuser has proposed that job demand is likely to lead to increased effort and thus to raise levels of adrenaline, while lack of job control is one of the main causes of distress at work and is therefore likely to stimulate raised cortisol levels. Studies comparing levels of these hormones in people doing their normal work with levels in the same people at leisure have shown that adrenaline is normally raised when people are at work. Effects for noradrenaline are inconsistent and may depend on the amount of physical activity that people carry out during work and leisure time. It has also been shown that adrenaline levels at work correlate positively with levels of job demand. In contrast, cortisol levels have not been shown typically to be raised in people at work, and it is yet to be demonstrated that cortisol levels vary according to the degree of job control. In the “Air Traffic Controller Health Change Study”, only a small proportion of workers produced consistent increases in cortisol as the objective workload became greater (Rose and Fogg 1993).
Thus only adrenaline among the stress hormones has been shown conclusively to rise in people at work, and to do so according to the level of demand they experience. There is evidence that levels of prolactin increase in response to stress while levels of testosterone decrease. However, studies of these hormones in people at work are very limited. Acute changes in the concentration of cholesterol in the blood have also been observed with increased workload, but the results are not consistent (Niaura, Stoney and Herbst 1992).
As far as cardiovascular variables are concerned, it has repeatedly been found that blood pressure is higher in men and women during work than either after work or during equivalent times of day spent at leisure. These effects have been observed both with self-monitored blood pressure and with automated portable (or ambulatory) monitoring instruments. Blood pressure is especially high during periods of increased work demand (Rose and Fogg 1993). It has also been found that blood pressure rises with emotional demands, for example, in studies of paramedics attending the scenes of accidents. However, it is often difficult to determine whether blood pressure fluctuations at work are due to psychological demands or to associated physical activity and changes in posture. The raised blood pressure recorded at work is especially pronounced among people reporting high job strain according to the Demand-Control model (Schnall et al. 1990).
Heart rate has not been shown to be consistently raised during work. Acute elevations of heart rate may nevertheless be elicited by disruption of work, for example with breakdown of equipment. Emergency workers such as fire-fighters exhibit extremely fast heart rates in response to alarm signals at work. On the other hand, high levels of social support at work are associated with reduced heart rates. Abnormalities of cardiac rhythm may also be elicited by stressful working conditions, but the pathological significance of such responses has not been established.
Gastrointestinal problems are commonly reported in studies of job stress (see “Gastrointestinal problems” below). Unfortunately, it is difficult to assess the physiological systems underlying gastrointestinal symptoms in the work setting. Acute mental stress has variable effects on gastric acid secretion, stimulating large increases in some individuals and reduced output in others. Shift workers have a particularly high prevalence of gastrointestinal problems, and it has been suggested that these may arise when diurnal rhythms in the central nervous system’s control of gastric acid secretion are disrupted. Anomalies of small bowel motility have been recorded using radiotelemetry in patients diagnosed with irritable bowel syndrome while they go about their everyday lives. Health complaints, including gastrointestinal symptoms, have been shown to co-vary with perceived workload, but it is not clear whether this reflects objective changes in physiological function or patterns of symptom perception and reporting.
Major changes are taking place within the workforces of many of the world’s leading industrial nations, with members of ethnic minority groups making up increasingly larger proportions. However, little of the occupational stress research has focused on ethnic minority populations. The changing demographics of the world’s workforce give clear notice that these populations can no longer be ignored. This article briefly addresses some of the major issues of occupational stress in ethnic minority populations with a focus on the United States. However, much of the discussion should be generalizable to other nations of the world.
Much of the occupational stress research either excludes ethnic minorities, includes too few to allow meaningful comparisons or generalizations to be made, or does not report enough information about the sample to determine racial or ethnic participation. Many studies fail to make distinctions among ethnic minorities, treating them as one homogeneous group, thus minimizing the differences in demographic characteristics, culture, language and socio-economic status which have been documented both between and within ethnic minority groups (Olmedo and Parron 1981).
In addition to the failure to address issues of ethnicity, by far the greater part of research does not examine class or gender differences, or class by race and gender interactions. Moreover, little is known about the cross-cultural utility of many of the assessment procedures. Documentation used in such procedures is not adequately translated, nor is there demonstrated equivalency between the standardized English and other language versions. Even when the reliabilities appear to indicate equivalence across ethnic or cultural groups, there is uncertainty about which symptoms in the scale are elicited in a reliable fashion, that is, whether the phenomenology of a disorder is similar across groups (Roberts, Vernon and Rhoades 1989).
Many assessment instruments inadequately assess conditions within ethnic minority populations; consequently results are often suspect. For example, many stress scales are based on models of stress as a function of undesirable change or readjustment. However, many minority individuals experience stress in large part as a function of ongoing undesirable situations such as poverty, economic marginality, inadequate housing, unemployment, crime and discrimination. These chronic stressors are not usually reflected in many of the stress scales. Models which conceptualize stress as resulting from the interplay between both chronic and acute stressors, and various internal and external mediating factors, are more appropriate for assessing stress in ethnic minority and poor populations (Watts-Jones 1990).
A major stressor affecting ethnic minorities is the prejudice and discrimination they encounter as a result of their minority status in a given society (Martin 1987; James 1994). It is a well-established fact that minority individuals experience more prejudice and discrimination as a result of their ethnic status than do members of the majority. They also perceive greater discrimination and fewer opportunities for advancement as compared with whites (Galinsky, Bond and Friedman 1993). Workers who feel discriminated against or who feel that there are fewer chances for advancement for people of their ethnic group are more likely to feel “burned out” in their jobs, care less about working hard and doing their jobs well, feel less loyal to their employers, are less satisfied with their jobs, take less initiative, feel less committed to helping their employers succeed and plan to leave their current employers sooner (Galinsky, Bond and Friedman 1993). Moreover, perceived prejudice and discrimination are positively correlated with self-reported health problems and higher blood pressure levels (James 1994).
An important focus of occupational stress research has been the relationship between social support and stress. However, there has been little attention paid to this variable with respect to ethnic minority populations. The available research tends to show conflicting results. For example, Hispanic workers who reported higher levels of social support had less job-related tension and fewer reported health problems (Gutierres, Saenz and Green 1994); ethnic minority workers with lower levels of emotional support were more likely to experience job burn-out, health symptoms, episodic job stress, chronic job stress and frustration; this relationship was strongest for women and for management as opposed to non-management personnel (Ford 1985). James (1994), however, did not find a significant relationship between social support and health outcomes in a sample of African-American workers.
Most models of job satisfaction have been derived and tested using samples of white workers. When ethnic minority groups have been included, they have tended to be African-Americans, and potential effects due to ethnicity were often masked (Tuch and Martin 1991). Research that is available on African-American employees tends to yield significantly lower scores on overall job satisfaction in comparison to whites (Weaver 1978, 1980; Staines and Quinn 1979; Tuch and Martin 1991). Examining this difference, Tuch and Martin (1991) noted that the factors determining job satisfaction were basically the same, but that African-Americans were less likely to be in the situations that led to job satisfaction. More specifically, extrinsic rewards increase African-Americans’ job satisfaction, but African-Americans are disadvantaged relative to whites on these variables. On the other hand, blue-collar incumbency and urban residence decrease job satisfaction for African-Americans, but African-Americans are overrepresented in these areas. Wright, King and Berg (1985) found that organizational variables (i.e., job authority, qualifications for the position and a sense that advancement within the organization is possible) were the best predictors of job satisfaction in their sample of black female managers, in keeping with previous research on primarily white samples.
Ethnic minority workers are more likely than their white counterparts to be in jobs with hazardous work conditions. Bullard and Wright (1986/1987) noted this propensity and indicated that the population differences in injuries are likely to be the result of racial and ethnic disparities in income, education, type of employment and other socio-economic factors correlated with exposure to hazards. One of the most likely reasons, they noted, was that occupational injuries are highly dependent on the job and industry category of the workers and ethnic minorities tend to work in more hazardous occupations.
Foreign workers who have entered the country illegally often experience particular work stress and maltreatment. They often endure substandard and unsafe working conditions and accept less than minimum wages because they fear being reported to the immigration authorities and have few options for better employment. Most health and safety regulations, guidelines for use and warnings are in English, and many immigrants, illegal or otherwise, may not have a good understanding of written or spoken English (Sanchez 1990).
Some areas of research have almost totally ignored ethnic minority populations. For example, hundreds of studies have examined the relationship between Type A behaviour and occupational stress. White males constitute the most frequently studied group, with ethnic minority men and women almost totally excluded. The available research, such as a study by Adams et al. (1986) using a sample of college freshmen and a study by Gamble and Matteson (1992) investigating black workers, indicates the same positive relationship between Type A behaviour and self-reported stress as that found for white samples.
Similarly, little research on issues such as job control and work demands is available for ethnic minority workers, although these are central constructs in occupational stress theory. Available research tends to show that these are important constructs for ethnic minority workers as well. For example, African-American licensed practical nurses (LPNs) report significantly less decision authority and more dead-end jobs (and hazard exposures) than do white LPNs and this difference is not a function of educational differences (Marshall and Barnett 1991); the presence of low decision latitude in the face of high demands tends to be the pattern most characteristic of jobs with low socio-economic status, which are more likely to be held by ethnic minority workers (Waitzman and Smith 1994); and middle- and upper-level white men rate their jobs consistently higher than their ethnic minority (and female) peers on six work design factors (Fernandez 1981).
Thus, it appears that many research questions remain in the occupational stress and health arena as regards ethnic minority populations. These questions will not be answered until ethnic minority workers are included in study samples and in the development and validation of investigatory instruments.
Do job stressors affect men and women differently? This question has only recently been addressed in the job stress–illness literature. In fact, the word gender does not even appear in the index of the first edition of the Handbook of Stress (Goldberger and Breznitz 1982) nor does it appear in the indices of such major reference books as Job Stress and Blue Collar Work (Cooper and Smith 1985) and Job Control and Worker Health (Sauter, Hurrell and Cooper 1989). Moreover, in a 1992 review of moderator variables and interaction effects in the occupational stress literature, gender effects were not even mentioned (Holt 1992). One reason for this state of affairs lies in the history of occupational health and safety psychology, which in turn reflects the pervasive gender stereotyping in our culture. With the exception of reproductive health, when researchers have looked at physical health outcomes and physical injuries, they have generally studied men and variations in their work. When researchers have studied mental health outcomes, they have generally studied women and variations in their social roles.
As a result, the “available evidence” on the physical health impact of work has until recently been almost completely limited to men (Hall 1992). For example, attempts to identify correlates of coronary heart disease have been focused exclusively on men and on aspects of their work; researchers did not even inquire into their male subjects’ marital or parental roles (Rosenman et al. 1975). Indeed, few studies of the job stress–illness relationship in men include assessments of their marital and parental relationships (Caplan et al. 1975).
In contrast, concern about reproductive health, fertility and pregnancy focused primarily on women. Not surprisingly, “the research on reproductive effects of occupational exposures is far more extensive on females than on males” (Walsh and Kelleher 1987). With respect to psychological distress, attempts to specify the psychosocial correlates, in particular the stressors associated with balancing work and family demands, have centred heavily on women.
By reinforcing the notion of “separate spheres” for men and women, these conceptualizations and the research paradigms they generated prevented any examination of gender effects, thereby effectively controlling for the influence of gender. Extensive sex segregation in the workplace (Bergman 1986; Reskin and Hartman 1986) also acts as a control, precluding the study of gender as a moderator. If all men are employed in “men’s jobs” and all women are employed in “women’s jobs”, it would not be reasonable to ask about the moderating effect of gender on the job stress–illness relationship: job conditions and gender would be confounded. It is only when some women are employed in jobs that men occupy and when some men are employed in jobs that women occupy that the question is meaningful.
Controlling is one of three strategies for treating the effects of gender. The other two are ignoring these effects or analysing them (Hall 1991). Most investigations of health have either ignored or controlled for gender, thereby accounting for the dearth of references to gender as discussed above and for a body of research that reinforces stereotyped views about the role of gender in the job stress–illness relationship. These views portray women as essentially different from men in ways that render them less robust in the workplace, and portray men as comparatively unaffected by non-workplace experiences.
In spite of this history, the situation is already changing. Witness the publication in 1987 of Gender and Stress (Barnett, Biener and Baruch 1987), the first edited volume focusing specifically on the impact of gender at all points in the stress reaction, and the inclusion of a chapter on gender effects in the second edition of the Handbook of Stress (Barnett 1992). Indeed, current studies increasingly reflect the third strategy: analysing gender effects. This strategy holds great promise, but also has pitfalls. Operationally, it involves analysing data relating to males and females and estimating both the main and the interaction effects of gender. A significant main effect tells us that, after controlling for the other predictors in the model, men and women differ with respect to the level of the outcome variable. Interaction-effect analyses concern differential reactivity, that is, whether the relationship between a given stressor and a health outcome differs for women and men.
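To make the two kinds of effect concrete, the following sketch (Python, with simulated data; the variable names and effect sizes are invented rather than taken from any study cited here) estimates both the main effect of gender and the gender-by-stressor interaction in a single regression model.

```python
# Hedged sketch: estimating main and interaction effects of gender.
# All data are simulated; only the modelling logic is the point.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "gender": rng.choice(["male", "female"], size=n),
    "stressor": rng.normal(size=n),   # e.g., a standardized job-demand score
})
# Simulate a main effect of gender but no differential reactivity
df["distress"] = (0.5 * df["stressor"]
                  + 0.3 * (df["gender"] == "female")
                  + rng.normal(size=n))

# 'stressor * gender' expands to both main effects plus their product term
model = smf.ols("distress ~ stressor * gender", data=df).fit()
print(model.summary().tables[1])
# A significant 'stressor:gender' coefficient would indicate differential
# reactivity; a significant 'gender' coefficient alone indicates only a
# difference in mean outcome levels.
```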
The main promise of this line of inquiry is to challenge stereotyped views of women and men. The main pitfall is that conclusions about gender difference can still be drawn erroneously. Because gender is confounded with many other variables in our society, these variables have to be taken into account before conclusions about gender can be inferred. For example, samples of employed men and women will undoubtedly differ with respect to a host of work and non-work variables that could reasonably affect health outcomes. Most important among these contextual variables are occupational prestige, salary, part-time versus full-time employment, marital status, education, employment status of spouse, overall work burdens and responsibility for care of younger and older dependants. In addition, evidence suggests the existence of gender differences in several personality, cognitive, behavioural and social system variables that are related to health outcomes. These include: sensation seeking; self-efficacy (feelings of competence); external locus of control; emotion-focused versus problem-focused coping strategies; use of social resources and social support; harmful acquired risks, such as smoking and alcohol abuse; protective behaviours, such as exercise, balanced diets and preventive health regimens; early medical intervention; and social power (Walsh, Sorensen and Leonard, in press). The better one can control these contextual variables, the closer one can get to understanding the effect of gender per se on the relationships of interest, and thereby to understanding whether it is gender or other, gender-related variables that are the effective moderators.
To illustrate, in one study (Karasek 1990) job changes among white-collar workers were less likely to be associated with negative health outcomes if the changes resulted in increased job control. This finding was true for men, not women. Further analyses indicated that job control and gender were confounded. For women, one of “the less aggressive [or powerful] groups in the labour market” (Karasek 1990), white-collar job changes often involved reduced control, whereas for men, such job changes often involved increased control. Thus, power, not gender, accounted for this interaction effect. Such analyses lead us to refine the question about moderator effects. Do men and women react differentially to workplace stressors because of their inherent (i.e., biological) nature or because of their different experiences?
Although only a few studies have examined gender interaction effects, most report that when appropriate controls are utilized, the relationship between job conditions and physical or mental health outcomes is not affected by gender. (Lowe and Northcott 1988 describe one such study). In other words, there is no evidence of an inherent difference in reactivity.
Findings from a random sample of full-time employed men and women in dual-earner couples illustrate this conclusion with respect to psychological distress. In a series of cross-sectional and longitudinal analyses, a matched pairs design was used that controlled for such individual-level variables as age, education, occupational prestige and marital-role quality, and for such couple-level variables as parental status, years married and household income (Barnett et al. 1993; Barnett et al. 1995; Barnett, Brennan and Marshall 1994). Positive experiences on the job were associated with low distress; insufficient skill discretion and overload were associated with high distress; experiences in the roles of partner and parent moderated the relationship between job experiences and distress; and change over time in skill discretion and overload were each associated with change over time in psychological distress. In no case was the effect of gender significant. In other words, the magnitude of these relationships was not affected by gender.
One important exception is tokenism (see, for example, Yoder 1991). Whereas “it is clear and undeniable that there is a considerable advantage in being a member of the male minority in any female profession” (Kadushin 1976), the opposite is not true. Women who are in the minority in a male work situation experience a considerable disadvantage. Such a difference is readily understandable in the context of men’s and women’s relative power and status in our culture.
Overall, studies of physical health outcomes also fail to reveal significant gender interaction effects. It appears, for example, that characteristics of work activity are stronger determinants of safety than are attributes of workers, and that women in traditionally male occupations suffer the same types of injury with approximately the same frequency as their male counterparts. Moreover, poorly designed protective equipment, not any inherent incapacity on the part of women in relation to the work, is often to blame when women in male-dominated jobs experience more injuries (Walsh, Sorensen and Leonard, 1995).
Two caveats are in order. First, no one study controls for all the gender-related covariates. Therefore, any conclusions about “gender” effects must be tentative. Secondly, because controls vary from study to study, comparisons between studies are difficult.
As increasing numbers of women enter the labour force and occupy jobs similar to those occupied by men, both the opportunity and the need for analysing the effect of gender on the job stress–illness relationship also increase. In addition, future research needs to refine the conceptualization and measurement of the stress construct to include job stressors important to women; extend interaction effects analyses to studies previously restricted to male or female samples, for example, studies of reproductive health and of stresses due to non-workplace variables; and examine the interaction effects of race and class as well as the joint interaction effects of gender x race and gender x class.
During the mid-1970s public health practitioners, and in particular epidemiologists, “discovered” the concept of social support in their studies of causal relationships between stress, mortality and morbidity (Cassel 1974; Cobb 1976). In the past decade there has been an explosion in the literature relating the concept of social support to work-related stressors. By contrast, in psychology, social support as a concept had already been well integrated into clinical practice. Rogers’ (1942) client-centred therapy, with its unconditional positive regard, is fundamentally a social support approach. Lindemann’s (1944) pioneering work on grief management identified the critical role of support in moderating the crisis of bereavement. Caplan’s (1964) model of preventive community psychiatry elaborated on the importance of community and support groups.
Cassel (1976) adapted the concept of social support into public health theory as a way of explaining the differences in diseases that were thought to be stress-related. He was interested in understanding why some individuals appeared to be more resistant to stress than others. The idea of social support as a factor in disease causation was reasonable since, he noted, both people and animals who experienced stress in the company of “significant others” seemed to suffer fewer adverse consequences than those who were isolated. Cassel proposed that social support could act as a protective factor buffering an individual from the effects of stress.
Cobb (1976) expanded on the concept by noting that the mere presence of another person is not social support. He suggested that an exchange of “information” was needed, and established three categories for this exchange: information leading people to believe that they are cared for and loved; information leading them to believe that they are esteemed and valued; and information leading them to believe that they belong to a network of communication and mutual obligation.
Cobb reported that those experiencing severe events without such social support were ten times more likely to become depressed, and concluded that intimate relations, or social support, somehow protect against the effects of stress reactions. He also proposed that social support operates throughout the life span, encompassing various life events such as unemployment, severe illness and bereavement. Cobb pointed to the great diversity of studies, samples, methods and outcomes as convincing evidence that social support is a common factor modifying stress, while noting that it is not, in itself, a panacea for avoiding its effects.
According to Cobb, social support increases coping ability (environmental manipulation) and facilitates adaptation (self-change to improve the person-environment fit). He cautioned, however, that most research was focused on acute stressors and did not permit generalizations of the protective nature of social support for coping with the effects of chronic stressors or traumatic stress.
In the years since the publication of these seminal works, investigators have moved away from considering social support as a unitary concept, and have attempted to understand the components of social stress and of social support.
Hirsch (1980) describes five possible elements of social support. House (1981) grouped supportive behaviours into four categories: emotional, instrumental, informational and appraisal support.
House felt that emotional support was the most important form of social support. In the workplace, the supportiveness of the supervisor was the most important element, followed by co-worker support. The structure and organization of the enterprise, as well as the specific jobs within it, could either enhance or inhibit the potential for support. House found that greater task specialization and fragmentation of work lead to more isolated work roles and to decreased opportunities for support.
Pines’ (1983) study of burnout, a phenomenon discussed separately in this chapter, found that the availability of social support at work is negatively correlated with burnout. Pines identifies six different relevant aspects of social support which modify the burnout response, including listening, encouragement, the giving of advice, companionship and tangible aid.
As the foregoing discussion of the models proposed by several researchers makes clear, the field has attempted to specify the concept of social support, but there is no clear consensus on its precise elements, although considerable overlap between the models is evident.
Interaction between Stress and Social Support
Although the literature on stress and social support is quite extensive, there is still considerable debate as to the mechanisms by which stress and social support interact. A long-standing question is whether social support has a direct or indirect effect on health.
Main effect/Direct effect
Social support can have a direct or main effect by serving as a barrier to the effects of the stressor. A social support network may provide the information or feedback needed to overcome the stressor. It may provide a person with the resources he or she needs to minimize the stress. An individual’s self-perception may also be influenced by group membership so as to provide self-confidence, a sense of mastery and skill, and hence a sense of control over the environment. This is relevant to Bandura’s (1986) theories of personal control as the mediator of stress effects. There appears to be a minimum threshold level of social contact required for good health, and increases in social support above the minimum are less important. If one considers social support as having a direct, or main, effect, then one can create an index by which to measure it (Cohen and Syme 1985; Gottlieb 1983).
Cohen and Syme (1985), however, also suggest an alternative to social support acting as a main effect: it may be isolation, or the lack of social support, that causes ill health, rather than the presence of social support that promotes better health. This is an unresolved issue. Gottlieb also raises the question of what happens when stress results in the loss of the social network itself, as might occur during disasters, major accidents or loss of work. This effect has not yet been quantified.
Buffering/Indirect effect
The buffering hypothesis is that social support intervenes between the stressor and the stress response to reduce its effects. Buffering could change one’s perception of the stressor, thus diminishing its potency, or it could increase one’s coping skills. Social support from others may provide tangible aid in a crisis, or it may lead to suggestions that facilitate adaptive responses. Finally, social support may have a stress-modifying effect, calming the neuroendocrine system so that the person is less reactive to the stressor.
Pines (1983) notes that the relevant aspect of social support may be the sharing of a social reality. Gottlieb proposes that social support could offset self-recrimination and dispel notions that the individual is himself or herself responsible for the problems. Interaction with a social support system can encourage the venting of fears and can assist in re-establishing a meaningful social identity.
Additional Theoretical Issues
Research thus far has tended to treat social support as a static, given factor. While the issue of its change over time has been raised, few data exist on the time course of social support (Gottlieb 1983; Cohen and Syme 1985). Social support is, of course, as fluid as the stressors it affects. It varies as the individual passes through the stages of life, and it can change over the short-term course of a particular stressful event (Wilcox 1981).
Such variability probably means that social support fulfils different functions during different developmental stages or during different phases of a crisis. For example, at the onset of a crisis, informational support may be more essential than tangible aid. The source of support, its density and the length of time it is operative will also be in flux. The reciprocal relationship between stress and social support must be recognized: some stressors themselves have a direct impact on available support. Death of a spouse, for example, usually reduces the extent of the network and may have serious consequences for the survivor (Goldberg et al. 1985).
Social support is not a magic bullet that reduces the impact of stress. Under certain conditions it may exacerbate or even cause stress. Wilcox (1981) noted that those with a denser kin network had more difficulty adjusting to divorce because their families were less likely to accept divorce as a solution to marital problems. The literature on addiction and family violence also shows the possible severe negative effects of social networks. Indeed, as Pines and Aronson (1981) point out, many professional mental health interventions are devoted to undoing destructive relationships, teaching interpersonal skills and helping people to recover from social rejection.
There are a large number of studies employing a variety of measures of the functional content of social support, and these measures vary widely in reliability and construct validity. Another methodological problem is that such analyses depend largely on the self-reports of those being studied. The responses are therefore necessarily subjective, which raises the question of whether it is the actual event or level of social support that matters, or whether it is the individual’s perception of support and outcomes that is more critical. If it is the perception that is critical, then some other, third variable, such as personality type, may be affecting both stress and social support (Turner 1983). For example, a third factor, such as age or socio-economic status, may influence change in both social support and outcome, according to Dooley (1985). Solomon (1986) provides some evidence for this idea with a study of women who had been forced by financial constraints into involuntary interdependence on friends and kin. She found that such women opted out of these relationships as quickly as they were financially able to do so.
Thoits (1982) raises concerns about reverse causation. It may be, she points out, that certain disorders chase away friends and lead to loss of support. Studies by Peters-Golden (1982) and Maher (1982) on cancer victims and social support appear to be consistent with this proposition.
Social Support and Work Stress
Studies on the relationship between social support and work stress indicate that successful coping is related to the effective use of support systems (Cohen and Ahearn 1980). Successful coping strategies emphasize the use of both formal and informal social support in dealing with work stress. Laid-off workers, for example, are advised to seek out sources of informational, emotional and tangible support actively. There have been relatively few evaluations of the effectiveness of such interventions. It appears, however, that formal support is effective only in the short term, and that informal systems are necessary for longer-term coping. Attempts to provide institutional formal social support can create negative outcomes, since the anger and rage about a layoff or bankruptcy, for example, may be displaced onto those who provide the social support. Prolonged reliance on social support may also create a sense of dependency and lowered self-esteem.
In some occupations, such as seafaring or fire-fighting, and among staff in remote locations such as oil rigs, there is a consistent, long-term, highly defined social network which can be compared to a family or kin system. Given the necessity for small work groups and joint efforts, it is natural that a strong sense of social cohesion and support develops among workers. The sometimes hazardous nature of the work requires that workers develop mutual respect, trust and confidence. Strong bonds and interdependence are created when people depend on each other for their survival and well-being.
Further research on the nature of social support during routine periods, as well as during downsizing or major organizational change, is necessary to define this factor further. For example, when an employee is promoted to a supervisory position, he or she normally must distance himself or herself from the other members of the work group. Does this make a difference in the day-to-day levels of social support he or she receives or requires? Does the source of support shift to other supervisors, to the family or elsewhere? Do those in positions of responsibility or authority experience different work stressors? Do these individuals require different types, sources or functions of social support?
If group-based interventions also change the functions of social support or the nature of the network, does this provide a preventive effect in future stressful events?
What will be the effect of growing numbers of women in these occupations? Does their presence change the nature and functions of support for all or does each sex require different levels or types of support?
The workplace presents a unique opportunity to study the intricate web of social support. As a closed subculture, it provides a natural experimental setting for research into the role of social support, social networks and their interrelationships with acute, cumulative and traumatic stress.
Coping has been defined as “efforts to reduce the negative impacts of stress on individual well-being” (Edwards 1988). Coping, like the experience of work stress itself, is a complex, dynamic process. Coping efforts are triggered by the appraisal of situations as threatening, harmful or anxiety producing (i.e., by the experience of stress). Coping is an individual difference variable that moderates the stress-outcome relationship.
Coping styles encompass trait-like combinations of thoughts, beliefs and behaviours that result from the experience of stress and may be expressed independently of the type of stressor. A coping style is a dispositional variable. Coping styles are fairly stable over time and situations and are influenced by personality traits, but are different from them. The distinction between the two is one of generality or level of abstraction. Examples of such styles, expressed in broad terms, include: monitor-blunter (Miller 1979) and repressor-sensitizer (Houston and Hodges 1970). Individual differences in personality, age, experience, gender, intellectual ability and cognitive style affect the way an individual copes with stress. Coping styles are the result of both prior experience and previous learning.
Shanan (1967) offered an early perspective on what he termed an adaptive coping style. This “response set” was characterized by four ingredients: the availability of energy directly focused on potential sources of the difficulty; a clear distinction between events internal and external to the person; confronting rather than avoiding external difficulties; and balancing external demands with needs of the self. Antonovsky (1987) similarly suggests that, to be effective, the individual person must be motivated to cope, have clarified the nature and dimensions of the problem and the reality in which it exists, and then selected the most appropriate resources for the problem at hand.
The most common typology of coping style (Lazarus and Folkman 1984) includes problem-focused coping (which includes information seeking and problem solving) and emotion-focused coping (which involves expressing emotion and regulating emotions). These two factors are sometimes complemented by a third factor, appraisal-focused coping (whose components include denial, acceptance, social comparison, redefinition and logical analysis).
Moos and Billings (1982) distinguish among the following coping styles: active-cognitive coping, which involves attempts to manage one’s appraisal of the stressfulness of the event; active-behavioural coping, which covers overt behavioural attempts to deal directly with the problem and its effects; and avoidance coping, which involves avoiding confrontation with the problem or indirectly reducing the tension it produces through behaviours such as overeating or drinking.
Greenglass (1993) has recently proposed a coping style termed social coping, which integrates social and interpersonal factors with cognitive factors. Her research showed significant relationships between various kinds of social support and coping forms (e.g., problem-focused and emotion-focused). Women, generally possessing relatively greater interpersonal competence, were found to make greater use of social coping.
In addition, it may be possible to link another approach to coping, termed preventive coping, with a large body of previously separate writing dealing with healthy lifestyles (Roskies 1991). Wong and Reker (1984) suggest that a preventive coping style is aimed at promoting one’s well-being and reducing the likelihood of future problems. Preventive coping includes such activities as physical exercise and relaxation, as well as the development of appropriate sleeping and eating habits, and planning, time management and social support skills.
Another coping style, which has been described as a broad aspect of personality (Watson and Clark 1984), involves the concepts of negative affectivity (NA) and positive affectivity (PA). People with high NA accentuate the negative in evaluating themselves, other people and their environment in general and reflect higher levels of distress. Those with high PA focus on the positives in evaluating themselves, other people and their world in general. People with high PA report lower levels of distress.
These two dispositions can affect a person’s perceptions of the number and magnitude of potential stressors as well as his or her coping responses (i.e., one’s perceptions of the resources that one has available, as well as the actual coping strategies that are used). Thus, those with high NA will report fewer resources available and are more likely to use ineffective (defeatist) strategies (such as releasing emotions, avoidance and disengagement in coping) and less likely to use more effective strategies (such as direct action and cognitive reframing). Individuals with high PA would be more confident in their coping resources and use more productive coping strategies.
Antonovsky’s (1979; 1987) sense of coherence (SOC) concept overlaps considerably with PA. He defines SOC as a generalized view of the world as meaningful and comprehensible. This orientation allows the person to first focus on the specific situation and then to act on the problem and the emotions associated with the problem. High SOC individuals have the motivation and the cognitive resources to engage in these sorts of behaviours likely to resolve the problem. In addition, high SOC individuals are more likely to realize the importance of emotions, more likely to experience particular emotions and to regulate them, and more likely to take responsibility for their circumstances instead of blaming others or projecting their perceptions upon them. Considerable research has since supplied support for Antonovsky’s thesis.
Coping styles can be described with reference to the dimensions of complexity and flexibility (Lazarus and Folkman 1984). People who use a variety of strategies exhibit a complex style; those preferring a single strategy exhibit a simple style. Those who use the same strategy in all situations exhibit a rigid style; those who use different strategies in the same, or different, situations exhibit a flexible style. A flexible style has been shown to be more effective than a rigid style.
Coping styles are typically measured by using self-reported questionnaires or by asking individuals, in an open-ended way, how they coped with a particular stressor. The questionnaire developed by Lazarus and Folkman (1984), the “Ways of Coping Checklist”, is the most widely used measure of problem-focused and emotion-focused coping. Dewe (1989), on the other hand, has frequently used individuals’ descriptions of their own coping initiatives in his research on coping styles.
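Scoring such questionnaires typically amounts to averaging item ratings within subscales. The sketch below illustrates the general idea with an invented item-to-subscale mapping; it does not reproduce the actual items or scoring key of the Ways of Coping Checklist.

```python
# Illustrative scoring sketch for a coping questionnaire; the item numbers,
# subscale mapping and responses are hypothetical.
from statistics import mean

SUBSCALES = {
    "problem_focused": [1, 4, 7, 10],   # e.g., planning, information seeking
    "emotion_focused": [2, 5, 8, 11],   # e.g., venting, avoidance
}

def score(responses):
    """responses: item number -> rating (0 = not used ... 3 = used a great deal)."""
    return {scale: mean(responses[i] for i in items)
            for scale, items in SUBSCALES.items()}

answers = {1: 3, 4: 2, 7: 3, 10: 2, 2: 1, 5: 0, 8: 1, 11: 2}
print(score(answers))  # {'problem_focused': 2.5, 'emotion_focused': 1.0}
```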
There are a variety of practical interventions that may be implemented with regard to coping styles. Most often, the intervention consists of education and training in which individuals are presented with information, sometimes coupled with self-assessment exercises, that enables them to examine their own preferred coping style as well as other varieties of coping styles and their potential usefulness. Such information is typically well received by the persons to whom the intervention is directed, but evidence that it helps them cope with real-life stressors is lacking. In fact, the few studies that have considered individual coping (Shinn et al. 1984; Ganster et al. 1982) have reported limited practical value in such education, particularly when a follow-up has been undertaken (Murphy 1988).
Matteson and Ivancevich (1987) outline a study dealing with coping styles as part of a longer programme of stress management training. Improvements in three coping skills are addressed: cognitive, interpersonal and problem solving. Coping skills are classified as problem-focused or emotion-focused. Problem-focused skills include problem solving, time management, communication and social skills, assertiveness, lifestyle changes and direct actions to change environmental demands. Emotion-focused skills are designed to relieve distress and foster emotion regulation. These include denial, expressing feelings and relaxation.
The preparation of this article was supported in part by the Faculty of Administrative Studies, York University.
Locus of control (LOC) refers to a personality trait reflecting the generalized belief that events in life are controlled either by one’s own actions (an internal LOC) or by outside influences (an external LOC). Those with an internal LOC believe that they can exert control over life events and circumstances, including the associated reinforcements, that is, those outcomes which are perceived to reward one’s behaviours and attitudes. In contrast, those with an external LOC believe they have little control over life events and circumstances, and attribute reinforcements to powerful others or to luck.
The construct of locus of control emerged from Rotter’s (1954) social learning theory. To measure LOC, Rotter (1966) developed the Internal-External (I-E) scale, which has been the instrument of choice in most research studies. However, research has questioned the unidimensionality of the I-E scale, with some authors suggesting that LOC has two dimensions (e.g., personal control and social system control), and others suggesting that LOC has three dimensions (personal efficacy, control ideology and political control). More recently developed scales to measure LOC are multidimensional, or assess LOC for specific domains, such as health or work (Hurrell and Murphy 1992).
One of the most consistent and widespread findings in the general research literature is the association between an external LOC and poor physical and mental health (Ganster and Fusilier 1989). A number of studies in occupational settings report similar findings: workers with an external LOC tended to report more burnout, job dissatisfaction and stress, and lower self-esteem, than those with an internal LOC (Kasl 1989). Recent evidence suggests that LOC moderates the relationship between role stressors (role ambiguity and role conflict) and symptoms of distress (Cvetanovski and Jex 1994; Spector and O’Connell 1994).
However, research linking LOC beliefs and ill health is difficult to interpret for several reasons (Kasl 1989). First, there may be conceptual overlap between the measures of health and locus of control scales. Secondly, a dispositional factor, like negative affectivity, may be present which is responsible for the relationship. For example, in the study by Spector and O’Connell (1994), LOC beliefs correlated more strongly with negative affectivity than with perceived autonomy at work, and did not correlate with physical health symptoms. Thirdly, the direction of causality is ambiguous; it is possible that the work experience may alter LOC beliefs. Finally, other studies have not found moderating effects of LOC on job stressors or health outcomes (Hurrell and Murphy 1992).
The question of how LOC moderates job stressor-health relationships has not been well researched. One proposed mechanism involves the use of more effective, problem-focused coping behaviour by those with an internal LOC. Those with an external LOC might use fewer problem-solving coping strategies because they believe that events in their lives are outside their control. There is evidence that people with an internal LOC utilize more task-centred coping behaviours and fewer emotion-centred coping behaviours than those with an external LOC (Hurrell and Murphy 1992). Other evidence indicates that in situations viewed as changeable, those with an internal LOC reported high levels of problem-solving coping and low levels of emotional suppression, whereas those with an external LOC showed the reverse pattern. It is important to bear in mind that many workplace stressors are not under the direct control of the worker, and that attempts to change uncontrollable stressors might actually increase stress symptoms (Hurrell and Murphy 1992).
A second mechanism whereby LOC could influence stressor-health relationships is via social support, another moderating factor of stress and health relationships. Fusilier, Ganster and Mays (1987) found that locus of control and social support jointly determined how workers responded to job stressors, and Cummins (1989) found that social support buffered the effects of job stress, but only for those with an internal LOC and only when the support was work-related.
Although the topic of LOC is intriguing and has stimulated a great deal of research, there are serious methodological problems attaching to investigations in this area which need to be addressed. For example, the trait-like (unchanging) nature of LOC beliefs has been questioned by research which showed that people adopt a more external orientation with advancing age and after certain life experiences such as unemployment. Furthermore, LOC may be measuring worker perceptions of job control, instead of an enduring trait of the worker. Still other studies have suggested that LOC scales may not only measure beliefs about control, but also the tendency to use defensive manoeuvres, and to display anxiety or proneness to Type A behaviour (Hurrell and Murphy 1992).
Finally, there has been little research on the influence of LOC on vocational choice, and on the reciprocal effects of LOC and job perceptions. Regarding the former, occupational differences in the proportion of “internals” and “externals” may be evidence that LOC influences vocational choice (Hurrell and Murphy 1992). On the other hand, such differences might reflect exposure to the job environment, just as the work environment is thought to be instrumental in the development of the Type A behaviour pattern. A final alternative is that occupational differences in LOC may be due to “drift”, that is, the movement of workers into or out of certain occupations as a result of job dissatisfaction, health concerns or desire for advancement.
In summary, the research literature does not present a clear picture of the influence of LOC beliefs on job stressor-health relationships. Even where research has produced more or less consistent findings, the meaning of the relationship is obscured by confounding influences (Kasl 1989). Additional research is needed to determine the stability of the LOC construct and to identify the mechanisms or pathways through which LOC influences worker perceptions and mental and physical health. Components of the path should reflect the interaction of LOC with other traits of the worker, and the interaction of LOC beliefs with work environment factors, including reciprocal effects of the work environment and LOC beliefs. Future research should produce less ambiguous results if it incorporates measures of related individual traits (e.g., Type A behaviour or anxiety) and utilizes domain-specific measures of locus of control (e.g., work).
Low self-esteem (SE) has long been studied as a determinant of psychological and physiological disorders (Beck 1967; Rosenberg 1965; Scherwitz, Berton and Leventhal 1978). Beginning in the 1980s, organizational researchers have investigated self-esteem’s moderating role in relationships between work stressors and individual outcomes. This reflects researchers’ growing interest in dispositions that seem either to protect a person from stressors or to make him or her more vulnerable to them.
Self-esteem can be defined as “the favorability of individuals’ characteristic self-evaluations” (Brockner 1988). Brockner (1983, 1988) has advanced the hypothesis that persons with low SE (low SEs) are generally more susceptible to environmental events than are high SEs. Brockner (1988) reviewed extensive evidence that this “plasticity hypothesis” explains a number of organizational processes. The most prominent research into this hypothesis has tested self-esteem’s moderating role in the relationship between role stressors (role conflict and role ambiguity) and health and affect. Role conflict (disagreement among one’s received roles) and role ambiguity (lack of clarity concerning the content of one’s role) are generated largely by events that are external to the individual, and therefore, according to the plasticity hypothesis, high SEs would be less vulnerable to them.
In a study of 206 nurses in a large southwestern US hospital, Mossholder, Bedeian and Armenakis (1981) found that self-reports of role ambiguity were negatively related to job satisfaction for low SEs but not for high SEs. Pierce et al. (1993) used an organization-based measure of self-esteem to test the plasticity hypothesis on 186 workers in a US utility company. Role ambiguity and role conflict were negatively related to satisfaction only among low SEs. Similar interactions with organization-based self-esteem were found for role overload, environmental support and supervisory support.
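The moderating role reported in these studies is usually tested with a regression that includes a stressor-by-self-esteem product term, then probed with simple slopes. The sketch below (Python, entirely simulated data; it does not reproduce the analyses of Mossholder, Bedeian and Armenakis or of Pierce et al.) shows the general procedure.

```python
# Hedged sketch: testing and probing a stressor x self-esteem interaction.
# Data and coefficients are simulated to mimic the 'plasticity' pattern.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
role_ambiguity = rng.normal(size=n)
self_esteem = rng.normal(size=n)
# Ambiguity depresses satisfaction mainly for those low in self-esteem
satisfaction = (-0.4 * role_ambiguity + 0.3 * self_esteem
                + 0.35 * role_ambiguity * self_esteem + rng.normal(size=n))

X = np.column_stack([role_ambiguity, self_esteem,
                     role_ambiguity * self_esteem])
fit = sm.OLS(satisfaction, sm.add_constant(X)).fit()
b = fit.params  # [intercept, ambiguity, self-esteem, interaction]

# Simple slope of ambiguity one SD below and above mean self-esteem
for label, z in [("low SE (-1 SD)", -1.0), ("high SE (+1 SD)", 1.0)]:
    print(f"slope of role ambiguity at {label}: {b[1] + b[3] * z:.2f}")
```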
In the studies reviewed above, self-esteem was viewed as a proxy (or alternative measure) for self-appraisals of competence on the job. Ganster and Schaubroeck (1991a) speculated that the moderating role of self-esteem on role stressors’ effects was instead caused by low SEs’ lack of confidence in influencing their social environment, the result being weaker attempts at coping with these stressors. In a study of 157 US fire-fighters, they found that role conflict was positively related to somatic health complaints only among low SEs. There was no such interaction with role ambiguity.
In a separate analysis (1982) of the data on nurses reported in their earlier study (Mossholder, Bedeian and Armenakis 1981), these authors found that peer-group interaction had a significantly more negative relationship to self-reported tension among low SEs than among high SEs. Likewise, low SEs reporting high peer-group interaction were less likely to wish to leave the organization than were high SEs reporting high peer-group interaction.
Several measures of self-esteem exist in the literature. Possibly the most often used of these is the ten-item instrument developed by Rosenberg (1965). This instrument was used in the Ganster and Schaubroeck (1991a) study. Mossholder and his colleagues (1981, 1982) used the self-confidence scale from Gough and Heilbrun’s (1965) Adjective Check List. The organization-based measure of self-esteem used by Pierce et al. (1993) was a ten-item instrument developed by Pierce et al. (1989).
The research findings suggest that health reports and satisfaction among low SEs can be improved either by reducing their role stressors or by increasing their self-esteem. The organization development intervention of role clarification (dyadic supervisor-subordinate exchanges directed at clarifying the subordinate’s role and reconciling incompatible expectations), combined with responsibility charting (clarifying and negotiating the roles of different departments), proved successful at reducing role conflict and role ambiguity in a randomized field experiment (Schaubroeck et al. 1993). It seems unlikely, however, that many organizations will be able and willing to undertake this rather extensive practice unless role stress is seen as particularly acute.
Brockner (1988) suggested a number of ways in which organizations can enhance employee self-esteem. Supervision practices are a major area in which organizations can improve. Performance appraisal feedback that focuses on behaviours rather than traits, provides descriptive information alongside evaluative summations, and develops plans for continuous improvement participatively is likely to have fewer adverse effects on employee self-esteem, and may even enhance the self-esteem of some workers as they discover ways to improve their performance. Positive reinforcement of effective performance is also critical. Training approaches such as mastery modelling (Wood and Bandura 1989) also ensure that positive efficacy perceptions are developed for each new task; these perceptions are the basis of organization-based self-esteem.
The characteristic of hardiness is based in an existential theory of personality and is defined as a person’s basic stance towards his or her place in the world, one that simultaneously expresses commitment, control and readiness to respond to challenge (Kobasa 1979; Kobasa, Maddi and Kahn 1982). Commitment is the tendency to involve oneself in, rather than experience alienation from, whatever one is doing or encounters in life. Committed persons have a generalized sense of purpose that allows them to identify with and find meaningful the persons, events and things of their environment. Control is the tendency to think, feel and act as if one is influential, rather than helpless, in the face of the varied contingencies of life. Persons with control do not naïvely expect to determine all events and outcomes but rather perceive themselves as being able to make a difference in the world through their exercise of imagination, knowledge, skill and choice. Challenge is the tendency to believe that change rather than stability is normal in life and that changes are interesting incentives to growth rather than threats to security. Far from being reckless adventurers, persons with challenge are individuals with an openness to new experiences and a tolerance of ambiguity that enable them to be flexible in the face of change.
Conceived of as a reaction and corrective to a pessimistic bias in early stress research that emphasized persons’ vulnerability to stress, the basic hardiness hypothesis is that individuals characterized by high levels of the three interrelated orientations of commitment, control and challenge are more likely to remain healthy under stress than those individuals who are low in hardiness. The hardy personality is marked by a way of perceiving and responding to stressful life events that prevents or minimizes the strain that can follow stress and that, in turn, can lead to mental and physical illness.
The initial evidence for the hardiness construct was provided by retrospective and longitudinal studies of a large group of middle- and upper-level male executives employed by a Midwestern telephone company in the United States during the divestiture of American Telephone and Telegraph (AT&T). Executives were monitored through yearly questionnaires over a five-year period for stressful life experiences at work and at home, physical health changes, personality characteristics, a variety of other work factors, social support and health habits. The primary finding was that under conditions of highly stressful life events, executives scoring high on hardiness were significantly less likely to become physically ill than were executives scoring low on hardiness, an outcome documented through self-reports of physical symptoms and illnesses and validated by medical records based on yearly physical examinations. The initial work also demonstrated: (a) the effectiveness of hardiness, combined with social support and exercise, in protecting mental as well as physical health; and (b) the independence of hardiness with respect to the frequency and severity of stressful life events, age, education, marital status and job level. Finally, this initial body of hardiness research led to further studies showing the generalizability of the hardiness effect across a number of occupational groups, including non-executive telephone personnel, lawyers and US Army officers (Kobasa 1982).
Since those basic studies, the hardiness construct has been employed by many investigators working in a variety of occupational and other contexts and with a variety of research strategies ranging from controlled experiments to more qualitative field investigations (for reviews, see Maddi 1990; Orr and Westman 1990; Ouellette 1993). The majority of these studies have basically supported and expanded the original hardiness formulation, but there have also been disconfirmations of the moderating effect of hardiness and criticisms of the strategies selected for the measurement of hardiness (Funk and Houston 1987; Hull, Van Treuren and Virnelli 1987).
Emphasizing individuals’ ability to do well in the face of serious stressors, researchers have confirmed the positive role of hardiness among many groups including, in samples studied in the United States, bus drivers, military air-disaster workers, nurses working in a variety of settings, teachers, candidates in training for a number of different occupations, persons with chronic illness and Asian immigrants. Elsewhere, studies have been carried out among businessmen in Japan and trainees in the Israeli defence forces. Across these groups, one finds an association between hardiness and lower levels of either physical or mental symptoms, and, less frequently, a significant interaction between stress levels and hardiness that provides support for the buffering role of personality. In addition, results establish the effects of hardiness on non-health outcomes such as work performance and job satisfaction as well as on burnout. Another large body of work, most of it conducted with college-student samples, confirms the hypothesized mechanisms through which hardiness has its health-protective effects. These studies demonstrated the influence of hardiness upon the subjects’ appraisal of stress (Wiebe and Williams 1992). Also relevant to construct validity, a smaller number of studies have provided some evidence for the psychophysiological arousal correlates of hardiness and the relationship between hardiness and various preventive health behaviours.
Essentially all of the empirical support for a link between hardiness and health has relied upon data obtained through self-report questionnaires. Appearing most often in publications is the composite questionnaire used in the original prospective test of hardiness, and abridged derivatives of that measure. Fitting the broad-based definition of hardiness given above, the composite questionnaire contains items from a number of established personality instruments, including Rotter’s Internal-External Locus of Control Scale (Rotter, Seeman and Liverant 1962), Hahn’s California Life Goals Evaluation Schedules (Hahn 1966), Maddi’s Alienation versus Commitment Test (Maddi, Kobasa and Hoover 1979) and Jackson’s Personality Research Form (Jackson 1974). More recent efforts at questionnaire development have led to the Personal Views Survey, or what Maddi (1990) calls the “Third Generation Hardiness Test”. This new questionnaire addresses many of the criticisms raised with respect to the original measure, such as the preponderance of negative items and the instability of hardiness factor structures. Furthermore, studies of working adults in both the United States and the United Kingdom have yielded promising reports as to the reliability and validity of the hardiness measure. Nonetheless, not all of the problems have been resolved. For example, some reports show low internal reliability for the challenge component of hardiness. Another concern pushes beyond measurement to raise the conceptual question of whether hardiness should always be seen as a unitary phenomenon rather than a multidimensional construct made up of separate components that may have relationships with health independently of each other in certain stressful situations. The challenge to future researchers on hardiness is to retain both the conceptual and human richness of the hardiness notion while increasing its empirical precision.
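Internal reliability of the kind at issue here is conventionally summarized with Cronbach’s alpha. The short Python sketch below is a minimal illustration using invented item scores rather than any actual hardiness data; it shows how alpha is computed for a small scale, where values below about 0.70 are usually read as low:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented responses: 6 respondents x 4 hypothetical "challenge" items (4-point scale)
scores = [
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 3],
    [1, 2, 1, 2],
    [3, 3, 4, 4],
    [2, 1, 2, 1],
]
print(f"alpha = {cronbach_alpha(scores):.2f}")
```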
Although Maddi and Kobasa (1984) describe the childhood and family experiences that support the development of personality hardiness, they and many other hardiness researchers are committed to defining interventions that increase adults’ stress resistance. From an existential perspective, personality is seen as something that one is constantly constructing, and a person’s social context, including his or her work environment, is seen as either supportive or debilitating as regards the maintenance of hardiness. Maddi (1987, 1990) has provided the most thorough depiction of and rationale for hardiness intervention strategies. He outlines a combination of focusing, situational reconstruction and compensatory self-improvement strategies that he has used successfully in small group sessions to enhance hardiness and decrease the negative physical and mental effects of stress in the workplace.
Definition
The Type A behaviour pattern is an observable set of behaviours or style of living characterized by extremes of hostility, competitiveness, hurry, impatience, restlessness, aggressiveness (sometimes stringently suppressed), explosiveness of speech, and a high state of alertness accompanied by muscular tension. People with strong Type A behaviour struggle against the pressure of time and the challenge of responsibility (Jenkins 1979). Type A is neither an external stressor nor a response of strain or discomfort. It is more like a style of coping. At the other end of this bipolar continuum, Type B persons are more relaxed, cooperative, steady in their pace of activity, and appear more satisfied with their daily lives and the people around them.
The Type A/B behavioural continuum was first conceptualized and labelled in 1959 by the cardiologists Dr. Meyer Friedman and Dr. Ray H. Rosenman. They identified Type A as being typical of their younger male patients with ischaemic heart disease (IHD).
The intensity and frequency of Type A behaviour increase as societies become more industrialized, competitive and hurried. Type A behaviour is more frequent in urban than in rural areas, in managerial and sales occupations than among technical workers, skilled craftsmen or artists, and in businesswomen than in housewives.
Areas of Research
Type A behaviour has been studied as part of the fields of personality and social psychology, organizational and industrial psychology, psychophysiology, cardiovascular disease and occupational health.
Research relating to personality and social psychology has yielded considerable understanding of the Type A pattern as an important psychological construct. Persons scoring high on Type A measures behave in ways predicted by Type A theory. They are more impatient and aggressive in social situations and spend more time working and less in leisure. They react more strongly to frustration.
Research that incorporates the Type A concept into organizational and industrial psychology includes comparisons of different occupations as well as employees’ responses to job stress. Under conditions of equivalent external stress, Type A employees tend to report more physical and emotional strain than Type B employees. They also tend to move into high-demand jobs (Type A behavior 1990).
Pronounced increases in blood pressure, serum cholesterol and catecholamines in Type A persons were first reported by Rosenman et al. (1975) and have since been confirmed by many other investigators. The tenor of these findings is that Type A and Type B persons are usually quite similar in chronic or baseline levels of these physiological variables, but that environmental demands, challenges or frustrations create far larger reactions in Type A than in Type B persons. The literature has been somewhat inconsistent, partly because the same challenge may not be physiologically activating for men or women of different backgrounds. A preponderance of positive findings continues to be published (Contrada and Krantz 1988).
The history of Type A/B behaviour as a risk factor for ischaemic heart disease has followed a common historical trajectory: a trickle then a flow of positive findings, a trickle then a flow of negative findings, and now intense controversy (Review Panel on Coronary-Prone Behavior and Coronary Heart Disease 1981). Broad-scope literature searches now reveal a continuing mixture of positive associations and non-associations between Type A behaviour and IHD. The general trend of the findings is that Type A behaviour is more likely to be positively associated with a risk of IHD in certain sub-populations and social settings than in others.
The Type A pattern is not “dead” as an IHD risk factor, but in the future must be studied with the expectation that it may convey greater IHD risk only in certain sub-populations and in selected social settings. Some studies suggest that hostility may be the most damaging component of Type A.
A newer development has been the study of Type A behaviour as a risk factor for injuries and mild and moderate illnesses in both occupational and student groups. It is rational to hypothesize that people who are hurried and aggressive will incur the most accidents at work, in sports and on the highway, and this has been found to be empirically true (Elander, West and French 1993). It is less clear theoretically why mild acute illnesses across a full array of physiological systems should occur more often in Type A than in Type B persons, but this has been found in a few studies (e.g., Suls and Sanders 1988). At least in some groups, Type A was found to be associated with a higher risk of future mild episodes of emotional distress. Future research needs to address both the validity of these associations and the physical and psychological reasons behind them.
Methods of Measurement
The Type A/B behaviour pattern was first measured in research settings by the Structured Interview (SI). The SI is a carefully administered clinical interview in which about 25 questions are asked at different rates of speed and with different degrees of challenge or intrusiveness. Special training is necessary for an interviewer to be certified as competent both to administer and interpret the SI. Typically, interviews are tape-recorded to permit subsequent study by other judges to ensure reliability. In comparative studies among several measures of Type A behaviour, the SI seems to have greater validity for cardiovascular and psychophysiological studies than is found for self-report questionnaires, but little is known about its comparative validity in psychological and occupational studies because the SI is used much less frequently in these settings.
Self-Report Measures
The most common self-report instrument is the Jenkins Activity Survey (JAS), a computer-scored, multiple-choice questionnaire. It has been validated against the SI and against the criteria of current and future IHD, and has accumulated construct validity. Form C, a 52-item version of the JAS published in 1979 by the Psychological Corporation, is the most widely used. It has been translated into most of the languages of Europe and Asia. The JAS contains four scales: a general Type A scale, and factor-analytically derived scales for speed and impatience, job involvement and hard-driving competitiveness. A short form of the Type A scale (13 items) has been used in epidemiological studies by the World Health Organization.
The Framingham Type A Scale (FTAS) is a ten-item questionnaire shown to be a valid predictor of future IHD for both men and women in the Framingham Heart Study (USA). It has also been used internationally both in cardiovascular and psychological research. Factor analysis divides the FTAS into two factors, one of which correlates with other measures of Type A behaviour while the second correlates with measures of neuroticism and irritability.
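To illustrate the mechanics behind such a two-factor solution, the following Python sketch simulates responses to ten invented questionnaire items driven by two latent traits and then recovers the item-factor loadings. The items, loadings and sample are hypothetical, not the actual FTAS data:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 400  # hypothetical respondents

# Two latent traits (say, a Type A-like factor and an irritability factor)
# expressed through ten invented items, five loading on each trait.
latent = rng.normal(size=(n, 2))
true_loadings = np.zeros((10, 2))
true_loadings[:5, 0] = 0.8   # items 1-5 load on the first factor
true_loadings[5:, 1] = 0.8   # items 6-10 load on the second factor
items = latent @ true_loadings.T + 0.5 * rng.normal(size=(n, 10))

# A two-factor solution with varimax rotation recovers this structure
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
print(np.round(fa.components_.T, 2))  # estimated item-by-factor loadings
```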
The Bortner Rating Scale (BRS) is composed of fourteen items, each in the form of an analogue scale. Subsequent studies have performed item-analysis on the BRS and have achieved greater internal consistency or greater predictability by shortening the scale to 7 or 12 items. The BRS has been widely used in international translations. Additional Type A scales have been developed internationally, but these have mostly been used only for specific nationalities in whose language they were written.
Practical Interventions
Systematic efforts have been under way for at least two decades to help persons with intense Type A behaviour patterns change them to more of a Type B style. Perhaps the largest of these efforts was the Recurrent Coronary Prevention Project conducted in the San Francisco Bay area in the 1980s. Repeated follow-up over several years documented that changes were achieved in many people, and also that the rate of recurrent myocardial infarction was reduced in persons receiving the Type A behaviour reduction efforts as opposed to those receiving only cardiovascular counselling (Thoreson and Powell 1992).
Intervention in the Type A behaviour pattern is difficult to accomplish successfully because this behavioural style has so many rewarding features, particularly in terms of career advancement and material gain. The programme itself must be carefully crafted according to effective psychological principles, and a group process approach appears to be more effective than individual counselling.
Introduction
The career stage approach is one way to look at career development. The way in which a researcher approaches the issue of career stages is frequently based on Levinson’s life stage development model (Levinson 1986). According to this model, people grow through specific stages separated by transition periods, and at each stage a new and crucial activity and psychological adjustment may be completed (Ornstein, Cron and Slocum 1989). Career stages defined in this way can be, and usually are, based on chronological age. The age ranges assigned to each stage have varied considerably between empirical studies, but usually the early career stage is considered to range from the ages of 20 to 34 years, the mid-career stage from 35 to 50 years and the late career stage from 50 to 65 years.
According to Super’s career development model (Super 1957; Ornstein, Cron and Slocum 1989), the four career stages are based on the qualitatively different psychological tasks of each stage. They can be based either on age or on organizational, positional or professional tenure, and the same people can recycle several times through the stages in their work career. The actual career stage can be defined at the individual or group level, for example with the Career Concerns Inventory Adult Form, an instrument that assesses an individual’s awareness of and concerns with various tasks of career development (Super, Zelkowitz and Thompson 1981). When tenure measures are used, the first two years are seen as a trial period. The establishment period, from two to ten years, means career advancement and growth. After ten years comes the maintenance period, which means holding on to the accomplishments achieved. The decline stage implies the development of one’s self-image independently of one’s career.
Because the theoretical bases for defining career stages and the sorts of measure used in practice differ from one study to another, it is not surprising that results concerning the relationship of career development to health and job outcomes vary, too.
Career Stage as a Moderator of Work-Related Health and Well-Being
Most studies of career stage as a moderator between job characteristics and the health or well-being of employees deal with organizational commitment and its relation to job satisfaction or to behavioural outcomes such as performance, turnover and absenteeism (Cohen 1991). The relationship between job characteristics and strain has also been studied. The moderating effect of career stage means statistically that the average correlation between measures of job characteristics and well-being varies from one career stage to another.
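In practice, such moderation is usually tested with moderated regression: the outcome is regressed on the job characteristic, the career stage indicator and their product, and a significant product term indicates moderation. The following Python sketch is a minimal illustration with simulated data and invented variable names (demands, mid_career, well_being), not a reconstruction of any study cited here:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300

# Simulated data: a job characteristic, a career-stage dummy and a well-being score
demands = rng.normal(size=n)             # hypothetical job demands
mid_career = rng.integers(0, 2, size=n)  # 1 = mid-career, 0 = other stages
# Build in a moderated effect: demands depress well-being mainly in mid-career
well_being = -0.1 * demands - 0.5 * mid_career * demands + rng.normal(size=n)

X = sm.add_constant(np.column_stack([demands, mid_career, mid_career * demands]))
model = sm.OLS(well_being, X).fit()
print(model.summary(xname=["const", "demands", "mid_career", "demands_x_mid"]))
# A significant coefficient on demands_x_mid means the demands-well-being
# slope differs between career stage groups, i.e., career stage moderates it.
```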
Work commitment usually increases from the early career stage to later stages, although among salaried male professionals job involvement was found to be lowest in the middle stage. In the early career stage, employees had a stronger need to leave the organization and to be relocated (Morrow and McElroy 1987). Among hospital staff, nurses’ well-being was most strongly associated with career commitment and affective organizational commitment (i.e., emotional attachment to the organization). Continuance commitment (a function of the perceived number of alternatives and the degree of sacrifice) and normative commitment (loyalty to the organization) increased with career stage (Reilly and Orsak 1991).
A meta-analysis was carried out of 41 samples dealing with the relationship between organizational commitment and outcomes indicating well-being. The samples were divided into different career stage groups according to two measures of career stage: age and tenure. Age as a career stage indicator significantly affected turnover and turnover intentions, while organizational tenure was related to job performance and absenteeism. Low organizational commitment was related to high turnover, especially in the early career stage, whereas low organizational commitment was related to high absenteeism and low job performance in the late career stage (Cohen 1991).
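Meta-analyses of correlational findings such as this typically pool sample correlations after Fisher’s z transformation, weighting each sample by its size, and then compare pooled values across subgroups. The sketch below illustrates that generic procedure with invented correlations and sample sizes; it is not the data or the exact method of Cohen (1991):

```python
import numpy as np

def pooled_correlation(rs, ns):
    """Fixed-effect pooling of sample correlations via Fisher's z transform."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    zs = np.arctanh(rs)          # Fisher z for each sample correlation
    weights = ns - 3.0           # inverse-variance weights, since var(z) = 1/(n - 3)
    z_pooled = (weights * zs).sum() / weights.sum()
    return np.tanh(z_pooled)     # back-transform to the correlation metric

# Invented commitment-turnover correlations, grouped by career stage
early = pooled_correlation(rs=[-0.45, -0.38, -0.52], ns=[120, 90, 150])
late = pooled_correlation(rs=[-0.15, -0.22, -0.10], ns=[110, 80, 130])
print(f"pooled r, early career: {early:.2f}; late career: {late:.2f}")
```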
The relationship between work attitudes (for instance, job satisfaction) and work behaviour has been found to be moderated to a considerable degree by career stage (e.g., Stumpf and Rabinowitz 1981). Among employees of public agencies, career stage measured by organizational tenure was found to moderate the relationship between job satisfaction and job performance: the relation was strongest in the first career stage. This was also supported in a study of sales personnel. Among academic teachers, the relationship between satisfaction and performance was found to be negative during the first two years of tenure.
Most studies of career stage have dealt with men. Even in many early studies from the 1970s in which the sex of the respondents was not reported, it is apparent that most of the subjects were men. Ornstein and Lynn (1990) tested how the career stage models of Levinson and Super described differences in career attitudes and intentions among professional women. The results suggest that career stages based on age were related to organizational commitment, intention to leave the organization and a desire for promotion. These findings were, in general, similar to those found among men (Ornstein, Cron and Slocum 1989). However, no support was found for the predictive value of career stages defined on a psychological basis.
Studies of stress have generally either ignored age, and consequently career stage, in their study designs or treated it as a confounding factor and controlled for its effects. Hurrell, McLaney and Murphy (1990) contrasted the effects of stress in mid-career with its effects in early and late career, using age as the basis for their grouping of US postal workers. Perceived ill health was not related to job stressors in mid-career, but work pressure and underutilization of skills predicted it in early and late career. Work pressure was also related to somatic complaints in the early and late career groups. Underutilization of abilities was more strongly related to job satisfaction and somatic complaints among mid-career workers. Social support had more influence on mental health than on physical health, and this effect was more pronounced in mid-career than in the early or late career stages. Because the data came from a cross-sectional study, the authors note that a cohort explanation of the results might also be possible (Hurrell, McLaney and Murphy 1990).
When adult male and female workers were grouped according to age, the older workers more frequently reported overload and responsibility as stressors at work, whereas the younger workers cited insufficiency (e.g., work that is not challenging), boundary-spanning roles and physical environment stressors (Osipow, Doty and Spokane 1985). The older workers reported fewer strain symptoms of all kinds. One reason may be that older people use more rational-cognitive, self-care and recreational coping skills, evidently learned during their careers; alternatively, the differences may reflect symptom-based self-selection, as people leave jobs that stress them excessively over time.
Among Finnish and US male managers, the relationship between job demands and control, on the one hand, and psychosomatic symptoms, on the other, was found to vary according to career stage (defined on the basis of age) (Hurrell and Lindström 1992; Lindström and Hurrell 1992). Among US managers, job demands and control had a significant effect on symptom reporting in the middle career stage but not in the early and late stages, while among Finnish managers long weekly working hours and low job control increased stress symptoms in the early career stage but not in later stages. These differences might be due to differences between the two samples: the Finnish managers, who worked in the construction trades, had high workloads already in their early career stage, whereas the US managers, who were public-sector workers, had their highest workloads in the middle career stage.
To sum up the results of research on the moderating effects of career stage: in the early career stage, low organizational commitment is related to turnover, and job stressors are related to perceived ill health and somatic complaints. In mid-career the results are conflicting: job satisfaction and performance are sometimes positively, sometimes negatively related, and among some occupational groups job demands and low control are related to frequent symptom reporting. In the late career stage, organizational commitment is correlated with low absenteeism and good performance, while findings on the relations between job stressors and strain are inconsistent. There are some indications that more effective coping decreases work-related strain symptoms in late career.
Interventions
Practical interventions to help people cope better with the specific demands of each career stage would be beneficial. Vocational counselling at the entry stage of one’s work life would be especially useful. Interventions to minimize the negative impact of career plateauing are also suggested, because plateauing can be either a time of frustration or an opportunity to face new challenges or reappraise one’s life goals (Weiner, Remer and Remer 1992). Results of age-based health examinations in occupational health services have shown that job-related problems that lower working ability gradually increase and change qualitatively with age. In early and mid-career they are related to coping with work overload, but in later middle and late career they are gradually accompanied by declining psychological condition and physical health, findings that indicate the importance of early institutional intervention at the individual level (Lindström, Kaihilahti and Torstila 1988). Both in research and in practical interventions, mobility and turnover patterns should be taken into account, as well as the role played by one’s occupation (and one’s situation within that occupation) in career development.
" DISCLAIMER: The ILO does not take responsibility for content presented on this web portal that is presented in any language other than English, which is the language used for the initial production and peer-review of original content. Certain statistics have not been updated since the production of the 4th edition of the Encyclopaedia (1998)."