Coping Styles

Coping has been defined as “efforts to reduce the negative impacts of stress on individual well-being” (Edwards 1988). Coping, like the experience of work stress itself, is a complex, dynamic process. Coping efforts are triggered by the appraisal of situations as threatening, harmful or anxiety-producing (i.e., by the experience of stress). Coping is an individual difference variable that moderates the stress-outcome relationship.

Coping styles encompass trait-like combinations of thoughts, beliefs and behaviours that result from the experience of stress and may be expressed independently of the type of stressor. A coping style is a dispositional variable. Coping styles are fairly stable over time and situations and are influenced by personality traits, but are different from them. The distinction between the two is one of generality or level of abstraction. Examples of such styles, expressed in broad terms, include: monitor-blunter (Miller 1979) and repressor-sensitizer (Houston and Hodges 1970). Individual differences in personality, age, experience, gender, intellectual ability and cognitive style affect the way an individual copes with stress. Coping styles are the result of both prior experience and previous learning.

Shanan (1967) offered an early perspective on what he termed an adaptive coping style. This “response set” was characterized by four ingredients: the availability of energy directly focused on potential sources of the difficulty; a clear distinction between events internal and external to the person; confronting rather than avoiding external difficulties; and balancing external demands with needs of the self. Antonovsky (1987) similarly suggests that, to be effective, the individual must be motivated to cope, must clarify the nature and dimensions of the problem and the reality in which it exists, and must then select the most appropriate resources for the problem at hand.

The most common typology of coping style (Lazarus and Folkman 1984) includes problem-focused coping (which includes information seeking and problem solving) and emotion-focused coping (which involves expressing emotion and regulating emotions). These two factors are sometimes complemented by a third factor, appraisal-focused coping (whose components include denial, acceptance, social comparison, redefinition and logical analysis).

Moos and Billings (1982) distinguish among the following coping styles:

  • Active-cognitive. The person tries to manage their appraisal of the stressful situation.
  • Active-behavioural. This style involves behaviour that deals directly with the stressful situation.
  • Avoidance. The person avoids confronting the problem.

 

Greenglass (1993) has recently proposed a coping style termed social coping, which integrates social and interpersonal factors with cognitive factors. Her research showed significant relationships between various kinds of social support and coping forms (e.g., problem-focused and emotion-focused). Women, generally possessing relatively greater interpersonal competence, were found to make greater use of social coping.

In addition, it may be possible to link another approach to coping, termed preventive coping, with a large body of previously separate writing dealing with healthy lifestyles (Roskies 1991). Wong and Reker (1984) suggest that a preventive coping style is aimed at promoting one’s well-being and reducing the likelihood of future problems. Preventive coping includes such activities as physical exercise and relaxation, as well as the development of appropriate sleeping and eating habits, and planning, time management and social support skills.

Another coping style, which has been described as a broad aspect of personality (Watson and Clark 1984), involves the concepts of negative affectivity (NA) and positive affectivity (PA). People with high NA accentuate the negative in evaluating themselves, other people and their environment in general, and report higher levels of distress. Those with high PA focus on the positives in evaluating themselves, other people and their world in general. People with high PA report lower levels of distress.

These two dispositions can affect a person’s perceptions of the number and magnitude of potential stressors as well as his or her coping responses (i.e., one’s perceptions of the resources that one has available, as well as the actual coping strategies that are used). Thus, those with high NA will report fewer resources available and are more likely to use ineffective (defeatist) strategies (such as releasing emotions, avoidance and disengagement in coping) and less likely to use more effective strategies (such as direct action and cognitive reframing). Individuals with high PA would be more confident in their coping resources and use more productive coping strategies.

Antonovsky’s (1979; 1987) sense of coherence (SOC) concept overlaps considerably with PA. He defines SOC as a generalized view of the world as meaningful and comprehensible. This orientation allows the person to first focus on the specific situation and then to act on the problem and the emotions associated with the problem. High SOC individuals have the motivation and the cognitive resources to engage in these sorts of behaviours likely to resolve the problem. In addition, high SOC individuals are more likely to realize the importance of emotions, more likely to experience particular emotions and to regulate them, and more likely to take responsibility for their circumstances instead of blaming others or projecting their perceptions upon them. Considerable research has since supplied support for Antonovsky’s thesis.

Coping styles can be described with reference to dimensions of complexity and flexibility (Lazarus and Folkman 1984). People using a variety of strategies exhibit a complex style; those preferring a single strategy exhibit a simple style. Those who use the same strategy in all situations exhibit a rigid style; those who use different strategies in the same, or different, situations exhibit a flexible style. A flexible style has been shown to be more effective than a rigid style.

Coping styles are typically measured by using self-reported questionnaires or by asking individuals, in an open-ended way, how they coped with a particular stressor. The questionnaire developed by Lazarus and Folkman (1984), the “Ways of Coping Checklist”, is the most widely used measure of problem-focused and emotion-focused coping. Dewe (1989), on the other hand, has frequently used individuals’ descriptions of their own coping initiatives in his research on coping styles.
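To illustrate how such questionnaire responses are usually reduced to coping-style scores, the sketch below averages item ratings into problem-focused and emotion-focused subscale scores. The item numbers and their assignment to subscales are hypothetical and are not the actual scoring key of the Ways of Coping Checklist.

```python
# Minimal sketch: scoring a coping questionnaire into problem-focused and
# emotion-focused subscale means. Item-to-subscale assignments are invented
# for illustration, not the published scoring key.
from statistics import mean

SUBSCALES = {
    "problem_focused": [1, 4, 7, 10],   # hypothetical item numbers
    "emotion_focused": [2, 5, 8, 11],
}

def score_coping(responses: dict[int, int]) -> dict[str, float]:
    """Return the mean item rating (0-3) for each coping subscale."""
    return {
        scale: mean(responses[item] for item in items)
        for scale, items in SUBSCALES.items()
    }

# example respondent: item number -> rating from 0 (not used) to 3 (used a great deal)
respondent = {1: 3, 4: 2, 7: 3, 10: 2, 2: 1, 5: 0, 8: 1, 11: 2}
print(score_coping(respondent))  # {'problem_focused': 2.5, 'emotion_focused': 1.0}
```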

There are a variety of practical interventions that may be implemented with regard to coping styles. Most often, intervention consists of education and training in which individuals are presented with information, sometimes coupled with self-assessment exercises that enable them to examine their own preferred coping style as well as other varieties of coping styles and their potential usefulness. Such information is typically well received by the persons to whom the intervention is directed, but the demonstrated usefulness of such information in helping them cope with real-life stressors is lacking. In fact, the few studies that considered individual coping (Shinn et al. 1984; Ganster et al. 1982) have reported limited practical value in such education, particularly when a follow-up has been undertaken (Murphy 1988).

Matteson and Ivancevich (1987) outline a study dealing with coping styles as part of a longer programme of stress management training. Improvements in three coping skills are addressed: cognitive, interpersonal and problem solving. Coping skills are classified as problem-focused or emotion-focused. Problem-focused skills include problem solving, time management, communication and social skills, assertiveness, lifestyle changes and direct actions to change environmental demands. Emotion-focused skills are designed to relieve distress and foster emotion regulation. These include denial, expressing feelings and relaxation.

The preparation of this article was supported in part by the Faculty of Administrative Studies, York University.



Locus of Control

Locus of control (LOC) refers to a personality trait reflecting the generalized belief that events in life are controlled either by one’s own actions (an internal LOC) or by outside influences (an external LOC). Those with an internal LOC believe that they can exert control over life events and circumstances, including the associated reinforcements, that is, those outcomes which are perceived to reward one’s behaviours and attitudes. In contrast, those with an external LOC believe they have little control over life events and circumstances, and attribute reinforcements to powerful others or to luck.

The construct of locus of control emerged from Rotter’s (1954) social learning theory. To measure LOC, Rotter (1966) developed the Internal-External (I-E) scale, which has been the instrument of choice in most research studies. However, research has questioned the unidimensionality of the I-E scale, with some authors suggesting that LOC has two dimensions (e.g., personal control and social system control), and others suggesting that LOC has three dimensions (personal efficacy, control ideology and political control). More recently developed scales to measure LOC are multidimensional, or assess LOC for specific domains, such as health or work (Hurrell and Murphy 1992).

One of the most consistent and widespread findings in the general research literature is the association between an external LOC and poor physical and mental health (Ganster and Fusilier 1989). A number of studies in occupational settings report similar findings: workers with an external LOC tended to report more burnout, job dissatisfaction, stress and lower self-esteem than those with an internal LOC (Kasl 1989). Recent evidence suggests that LOC moderates the relationship between role stressors (role ambiguity and role conflict) and symptoms of distress (Cvetanovski and Jex 1994; Spector and O’Connell 1994).

However, research linking LOC beliefs and ill health is difficult to interpret for several reasons (Kasl 1989). First, there may be conceptual overlap between the measures of health and locus of control scales. Secondly, a dispositional factor, such as negative affectivity, may be present that is responsible for the relationship. For example, in the study by Spector and O’Connell (1994), LOC beliefs correlated more strongly with negative affectivity than with perceived autonomy at work, and did not correlate with physical health symptoms. Thirdly, the direction of causality is ambiguous; it is possible that the work experience may alter LOC beliefs. Finally, other studies have not found moderating effects of LOC on job stressors or health outcomes (Hurrell and Murphy 1992).

The question of how LOC moderates job stressor-health relationships has not been well researched. One proposed mechanism involves the use of more effective, problem-focused coping behaviour by those with an internal LOC. Those with an external LOC might use fewer problem-solving coping strategies because they believe that events in their lives are outside their control. There is evidence that people with an internal LOC utilize more task-centred coping behaviours and fewer emotion-centred coping behaviours than those with an external LOC (Hurrell and Murphy 1992). Other evidence indicates that in situations viewed as changeable, those with an internal LOC reported high levels of problem-solving coping and low levels of emotional suppression, whereas those with an external LOC showed the reverse pattern. It is important to bear in mind that many workplace stressors are not under the direct control of the worker, and that attempts to change uncontrollable stressors might actually increase stress symptoms (Hurrell and Murphy 1992).

A second mechanism whereby LOC could influence stressor-health relationships is via social support, another moderating factor of stress and health relationships. Fusilier, Ganster and Mays (1987) found that locus of control and social support jointly determined how workers responded to job stressors, and Cummins (1989) found that social support buffered the effects of job stress, but only for those with an internal LOC and only when the support was work-related.

Although the topic of LOC is intriguing and has stimulated a great deal of research, there are serious methodological problems attached to investigations in this area which need to be addressed. For example, the trait-like (unchanging) nature of LOC beliefs has been questioned by research which showed that people adopt a more external orientation with advancing age and after certain life experiences such as unemployment. Furthermore, LOC may be measuring worker perceptions of job control, instead of an enduring trait of the worker. Still other studies have suggested that LOC scales may not only measure beliefs about control, but also the tendency to use defensive manoeuvres, and to display anxiety or proneness to Type A behaviour (Hurrell and Murphy 1992).

Finally, there has been little research on the influence of LOC on vocational choice, and the reciprocal effects of LOC and job perceptions. Regarding the former, occupational differences in the proportion of “internals” and “externals” may be evidence that LOC influences vocational choice (Hurrell and Murphy 1992). On the other hand, such differences might reflect exposure to the job environment, just as the work environment is thought to be instrumental in the development of the Type A behaviour pattern. A final alternative is that occupational differences in LOC may be due to “drift”, that is, the movement of workers into or out of certain occupations as a result of job dissatisfaction, health concerns or desire for advancement.

In summary, the research literature does not present a clear picture of the influence of LOC beliefs on job stressor-health relationships. Even where research has produced more or less consistent findings, the meaning of the relationship is obscured by confounding influences (Kasl 1989). Additional research is needed to determine the stability of the LOC construct and to identify the mechanisms or pathways through which LOC influences worker perceptions and mental and physical health. Components of the path should reflect the interaction of LOC with other traits of the worker, and the interaction of LOC beliefs with work environment factors, including reciprocal effects of the work environment and LOC beliefs. Future research should produce less ambiguous results if it incorporates measures of related individual traits (e.g., Type A behaviour or anxiety) and utilizes domain-specific measures of locus of control (e.g., work).



Self-Esteem

Low self-esteem (SE) has long been studied as a determinant of psychological and physiological disorders (Beck 1967; Rosenberg 1965; Scherwitz, Berton and Leventhal 1978). Beginning in the 1980s, organizational researchers have investigated self-esteem’s moderating role in relationships between work stressors and individual outcomes. This reflects researchers’ growing interest in dispositions that seem either to protect or make a person more vulnerable to stressors.

Self-esteem can be defined as “the favorability of individuals’ characteristic self-evaluations” (Brockner 1988). Brockner (1983, 1988) has advanced the hypothesis that persons with low SE (low SEs) are generally more susceptible to environmental events than are high SEs. Brockner (1988) reviewed extensive evidence that this “plasticity hypothesis” explains a number of organizational processes. The most prominent research into this hypothesis has tested self-esteem’s moderating role in the relationship between role stressors (role conflict and role ambiguity) and health and affect. Role conflict (disagreement among one’s received roles) and role ambiguity (lack of clarity concerning the content of one’s role) are generated largely by events that are external to the individual, and therefore, according to the plasticity hypothesis, high SEs would be less vulnerable to them.

In a study of 206 nurses in a large southwestern US hospital, Mossholder, Bedeian and Armenakis (1981) found that self-reports of role ambiguity were negatively related to job satisfaction for low SEs but not for high SEs. Pierce et al. (1993) used an organization-based measure of self-esteem to test the plasticity hypothesis on 186 workers in a US utility company. Role ambiguity and role conflict were negatively related to satisfaction only among low SEs. Similar interactions with organization-based self-esteem were found for role overload, environmental support and supervisory support.

In the studies reviewed above, self-esteem was viewed as a proxy (or alternative measure) for self-appraisals of competence on the job. Ganster and Schaubroeck (1991a) speculated that the moderating role of self-esteem on role stressors’ effects was instead caused by low SEs’ lack of confidence in influencing their social environment, the result being weaker attempts at coping with these stressors. In a study of 157 US fire-fighters, they found that role conflict was positively related to somatic health complaints only among low SEs. There was no such interaction with role ambiguity.
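The statistical logic behind these tests of the plasticity hypothesis is a moderated regression in which a role stressor, self-esteem and their product term jointly predict the outcome; a significant interaction term indicates moderation. The sketch below illustrates this with simulated data; the variable names and values are invented for illustration and are not taken from the studies cited.

```python
# Minimal sketch of a moderated regression: role conflict x self-esteem
# interaction predicting somatic complaints. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "role_conflict": rng.normal(size=n),
    "self_esteem": rng.normal(size=n),
})
# simulate complaints that rise with role conflict mainly when self-esteem is low
df["somatic_complaints"] = (
    0.5 * df["role_conflict"]
    - 0.3 * df["role_conflict"] * df["self_esteem"]
    + rng.normal(scale=0.5, size=n)
)

# "role_conflict * self_esteem" expands to both main effects plus the interaction
model = smf.ols("somatic_complaints ~ role_conflict * self_esteem", data=df).fit()
print(model.params)  # a negative interaction term reflects the buffering pattern
```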

In a separate analysis of the data on nurses reported in their earlier study (Mossholder, Bedeian and Armenakis 1981), these authors (1982) found that peer-group interaction had a significantly more negative relationship to self-reported tension among low SEs than among high SEs. Likewise, low SEs reporting high peer-group interaction were less likely to wish to leave the organization than were high SEs reporting high peer-group interaction.

Several measures of self-esteem exist in the literature. Possibly the most often used of these is the ten-item instrument developed by Rosenberg (1965). This instrument was used in the Ganster and Schaubroeck (1991a) study. Mossholder and his colleagues (1981, 1982) used the self-confidence scale from Gough and Heilbrun’s (1965) Adjective Check List. The organization-based measure of self-esteem used by Pierce et al. (1993) was a ten-item instrument developed by Pierce et al. (1989).

The research findings suggest that health reports and satisfaction among low SEs can be improved either by reducing their role stressors or increasing their self-esteem. The organization development intervention of role clarification (dyadic supervisor-subordinate exchanges directed at clarifying the subordinate’s role and reconciling incompatible expectations), when combined with responsibility charting (clarifying and negotiating the roles of different departments), proved successful in a randomized field experiment at reducing role conflict and role ambiguity (Schaubroeck et al. 1993). It seems unlikely, however, that many organizations will be able and willing to undertake this rather extensive practice unless role stress is seen as particularly acute.

Brockner (1988) suggested a number of ways organizations can enhance employee self-esteem. Supervision practices are a major area in which organizations can improve. Performance appraisal feedback that focuses on behaviours rather than on traits, provides descriptive information along with evaluative summations, and develops plans for continuous improvement participatively is likely to have fewer adverse effects on employee self-esteem, and it may even enhance the self-esteem of some workers as they discover ways to improve their performance. Positive reinforcement of effective performance events is also critical. Training approaches such as mastery modelling (Wood and Bandura 1989) also ensure that positive efficacy perceptions are developed for each new task; these perceptions are the basis of organization-based self-esteem.



Hardiness

The characteristic of hardiness is based in an existential theory of personality and is defined as a person’s basic stance towards his or her place in the world that simultaneously expresses commitment, control and readiness to respond to challenge (Kobasa 1979; Kobasa, Maddi and Kahn 1982). Commitment is the tendency to involve oneself in, rather than experience alienation from, whatever one is doing or encounters in life. Committed persons have a generalized sense of purpose that allows them to identify with and find meaningful the persons, events and things of their environment. Control is the tendency to think, feel and act as if one is influential, rather than helpless, in the face of the varied contingencies of life. Persons with control do not naïvely expect to determine all events and outcomes but rather perceive themselves as being able to make a difference in the world through their exercise of imagination, knowledge, skill and choice. Challenge is the tendency to believe that change rather than stability is normal in life and that changes are interesting incentives to growth rather than threats to security. Far from being reckless adventurers, persons with challenge are individuals with an openness to new experiences and a tolerance of ambiguity that enables them to be flexible in the face of change.

Conceived of as a reaction and corrective to a pessimistic bias in early stress research that emphasized persons’ vulnerability to stress, the basic hardiness hypothesis is that individuals characterized by high levels of the three interrelated orientations of commitment, control and challenge are more likely to remain healthy under stress than those individuals who are low in hardiness. The personality possessing hardiness is marked by a way of perceiving and responding to stressful life events that prevents or minimizes the strain that can follow stress and that, in turn, can lead to mental and physical illness.

The initial evidence for the hardiness construct was provided by retrospective and longitudinal studies of a large group of middle- and upper-level male executives employed by a Midwestern telephone company in the United States during the time of the divestiture of American Telephone and Telegraph (AT&T). Executives were monitored through yearly questionnaires over a five-year period for stressful life experiences at work and at home, physical health changes, personality characteristics, a variety of other work factors, social support and health habits. The primary finding was that under conditions of highly stressful life events, executives scoring high on hardiness were significantly less likely to become physically ill than were executives scoring low on hardiness, an outcome that was documented through self-reports of physical symptoms and illnesses and validated by medical records based on yearly physical examinations. The initial work also demonstrated: (a) the effectiveness of hardiness combined with social support and exercise to protect mental as well as physical health; and (b) the independence of hardiness with respect to the frequency and severity of stressful life events, age, education, marital status and job level. Finally, the body of hardiness research initially assembled as a result of the study led to further research that showed the generalizability of the hardiness effect across a number of occupational groups, including non-executive telephone personnel, lawyers and US Army officers (Kobasa 1982).

Since those basic studies, the hardiness construct has been employed by many investigators working in a variety of occupational and other contexts and with a variety of research strategies ranging from controlled experiments to more qualitative field investigations (for reviews, see Maddi 1990; Orr and Westman 1990; Ouellette 1993). The majority of these studies have basically supported and expanded the original hardiness formulation, but there have also been disconfirmations of the moderating effect of hardiness and criticisms of the strategies selected for the measurement of hardiness (Funk and Houston 1987; Hull, Van Treuren and Virnelli 1987).

Emphasizing individuals’ ability to do well in the face of serious stressors, researchers have confirmed the positive role of hardiness among many groups including, in samples studied in the United States, bus drivers, military air-disaster workers, nurses working in a variety of settings, teachers, candidates in training for a number of different occupations, persons with chronic illness and Asian immigrants. Elsewhere, studies have been carried out among businessmen in Japan and trainees in the Israeli defence forces. Across these groups, one finds an association between hardiness and lower levels of either physical or mental symptoms, and, less frequently, a significant interaction between stress levels and hardiness that provides support for the buffering role of personality. In addition, results establish the effects of hardiness on non-health outcomes such as work performance and job satisfaction as well as on burnout. Another large body of work, most of it conducted with college-student samples, confirms the hypothesized mechanisms through which hardiness has its health-protective effects. These studies demonstrated the influence of hardiness upon the subjects’ appraisal of stress (Wiebe and Williams 1992). Also relevant to construct validity, a smaller number of studies have provided some evidence for the psychophysiological arousal correlates of hardiness and the relationship between hardiness and various preventive health behaviours.

Essentially all of the empirical support for a link between hardiness and health has relied upon data obtained through self-report questionnaires. Appearing most often in publications is the composite questionnaire used in the original prospective test of hardiness and abridged derivatives of that measure. Fitting the broad-based definition of hardiness given in the opening words of this article, the composite questionnaire contains items from a number of established personality instruments that include Rotter’s Internal-External Locus of Control Scale (Rotter, Seeman and Liverant 1962), Hahn’s California Life Goals Evaluation Schedules (Hahn 1966), Maddi’s Alienation versus Commitment Test (Maddi, Kobasa and Hoover 1979) and Jackson’s Personality Research Form (Jackson 1974). More recent efforts at questionnaire development have produced the Personal Views Survey, or what Maddi (1990) calls the “Third Generation Hardiness Test”. This new questionnaire addresses many of the criticisms raised with respect to the original measure, such as the preponderance of negative items and the instability of hardiness factor structures. Furthermore, studies of working adults in both the United States and the United Kingdom have yielded promising reports as to the reliability and validity of the hardiness measure. Nonetheless, not all of the problems have been resolved. For example, some reports show low internal reliability for the challenge component of hardiness. Another pushes beyond the measurement issue to raise a conceptual concern about whether hardiness should always be seen as a unitary phenomenon rather than a multidimensional construct made up of separate components that may have relationships with health independently of each other in certain stressful situations. The challenge to future researchers on hardiness is to retain both the conceptual and human richness of the hardiness notion while increasing its empirical precision.
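The internal-reliability concerns mentioned above are typically assessed with Cronbach’s alpha, computed from respondents’ item scores on a subscale. The sketch below shows that calculation on invented data for a hypothetical four-item challenge subscale; it illustrates the statistic only, not the actual Personal Views Survey items.

```python
# Minimal sketch: Cronbach's alpha for one hardiness subscale.
# The response matrix is invented for illustration.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores for one subscale."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# hypothetical responses of six people to a four-item "challenge" subscale (0-3 ratings)
challenge_items = np.array([
    [3, 2, 1, 3],
    [2, 2, 0, 1],
    [1, 0, 2, 1],
    [3, 3, 1, 2],
    [0, 1, 2, 0],
    [2, 2, 1, 2],
])
print(round(cronbach_alpha(challenge_items), 2))  # values well below 0.7 indicate low reliability
```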

Although Maddi and Kobasa (1984) describe the childhood and family experiences that support the development of personality hardiness, they and many other hardiness researchers are committed to defining interventions to increase adults’ stress-resistance. From an existential perspective, personality is seen as something that one is constantly constructing, and a person’s social context, including his or her work environment, is seen as either supportive or debilitating as regards the maintenance of hardiness. Maddi (1987, 1990) has provided the most thorough depiction and rationale for hardiness intervention strategies. He outlines a combination of focusing, situational reconstruction and compensatory self-improvement strategies that he has used successfully in small group sessions to enhance hardiness and decrease the negative physical and mental effects of stress in the workplace.



Type A/B Behaviour Pattern

Definition

The Type A behaviour pattern is an observable set of behaviours or style of living characterized by extremes of hostility, competitiveness, hurry, impatience, restlessness, aggressiveness (sometimes stringently suppressed), explosiveness of speech, and a high state of alertness accompanied by muscular tension. People with strong Type A behaviour struggle against the pressure of time and the challenge of responsibility (Jenkins 1979). Type A is neither an external stressor nor a response of strain or discomfort. It is more like a style of coping. At the other end of this bipolar continuum, Type B persons are more relaxed, cooperative, steady in their pace of activity, and appear more satisfied with their daily lives and the people around them.

The Type A/B behavioural continuum was first conceptualized and labelled in 1959 by the cardiologists Dr. Meyer Friedman and Dr. Ray H. Rosenman. They identified Type A as being typical of their younger male patients with ischaemic heart disease (IHD).

The intensity and frequency of Type A behaviour increase as societies become more industrialized, competitive and hurried. Type A behaviour is more frequent in urban than in rural areas, in managerial and sales occupations than among technical workers, skilled craftsmen or artists, and among businesswomen than among housewives.

Areas of Research

Type A behaviour has been studied as part of the fields of personality and social psychology, organizational and industrial psychology, psychophysiology, cardiovascular disease and occupational health.

Research relating to personality and social psychology has yielded considerable understanding of the Type A pattern as an important psychological construct. Persons scoring high on Type A measures behave in ways predicted by Type A theory. They are more impatient and aggressive in social situations and spend more time working and less in leisure. They react more strongly to frustration.

Research that incorporates the Type A concept into organizational and industrial psychology includes comparisons of different occupations as well as employees’ responses to job stress. Under conditions of equivalent external stress, Type A employees tend to report more physical and emotional strain than Type B employees. They also tend to move into high-demand jobs (Type A behavior 1990).

Pronounced increases in blood pressure, serum cholesterol and catecholamines in Type A persons were first reported by Rosenman et al. (1975) and have since been confirmed by many other investigators. The tenor of these findings is that Type A and Type B persons are usually quite similar in chronic or baseline levels of these physiological variables, but that environmental demands, challenges or frustrations create far larger reactions in Type A than in Type B persons. The literature has been somewhat inconsistent, partly because the same challenge may not physiologically activate men or women of different backgrounds. A preponderance of positive findings continues to be published (Contrada and Krantz 1988).

The history of Type A/B behaviour as a risk factor for ischaemic heart disease has followed a common historical trajectory: a trickle and then a flow of positive findings, a trickle and then a flow of negative findings, and now intense controversy (Review Panel on Coronary-Prone Behavior and Coronary Heart Disease 1981). Broad-scope literature searches now reveal a continuing mixture of positive associations and non-associations between Type A behaviour and IHD. The general trend of the findings is that Type A behaviour is more likely to be positively associated with a risk of IHD:

  1. in cross-sectional and case-control studies rather than prospective studies
  2. in studies of general populations and occupational groups rather than studies limited to persons with cardiovascular disease or who score high on other IHD risk factors
  3. in younger study groups (under age 60) rather than older populations
  4. in countries still in the process of industrialization or still at the peak of their economic development.

 

The Type A pattern is not “dead” as an IHD risk factor, but in the future must be studied with the expectation that it may convey greater IHD risk only in certain sub-populations and in selected social settings. Some studies suggest that hostility may be the most damaging component of Type A.

A newer development has been the study of Type A behaviour as a risk factor for injuries and mild and moderate illnesses both in occupational and student groups. It is rational to hypothesize that people who are hurried and aggressive will incur the most accidents at work, in sports and on the highway. This has been found to be empirically true (Elander, West and French 1993). It is less clear theoretically why mild acute illnesses in a full array of physiological systems should occur more often in Type A than in Type B persons, but this has been found in a few studies (e.g., Suls and Sanders 1988). At least in some groups, Type A was found to be associated with a higher risk of future mild episodes of emotional distress. Future research needs to address both the validity of these associations and the physical and psychological reasons behind them.

Methods of Measurement

The Type A/B behaviour pattern was first measured in research settings by the Structured Interview (SI). The SI is a carefully administered clinical interview in which about 25 questions are asked at different rates of speed and with different degrees of challenge or intrusiveness. Special training is necessary for an interviewer to be certified as competent both to administer and interpret the SI. Typically, interviews are tape-recorded to permit subsequent study by other judges to ensure reliability. In comparative studies among several measures of Type A behaviour, the SI seems to have greater validity for cardiovascular and psychophysiological studies than is found for self-report questionnaires, but little is known about its comparative validity in psychological and occupational studies because the SI is used much less frequently in these settings.

Self-Report Measures

The most common self-report instrument is the Jenkins Activity Survey (JAS), a self-report, computer-scored, multiple-choice questionnaire. It has been validated against the SI and against the criteria of current and future IHD, and has accumulated construct validity. Form C, a 52-item version of the JAS published in 1979 by the Psychological Corporation, is the most widely used. It has been translated into most of the languages of Europe and Asia. The JAS contains four scales: a general Type A scale, and factor-analytically derived scales for speed and impatience, job involvement and hard-driving competitiveness. A short form of the Type A scale (13 items) has been used in epidemiological studies by the World Health Organization.

The Framingham Type A Scale (FTAS) is a ten-item questionnaire shown to be a valid predictor of future IHD for both men and women in the Framingham Heart Study (USA). It has also been used internationally both in cardiovascular and psychological research. Factor analysis divides the FTAS into two factors, one of which correlates with other measures of Type A behaviour while the second correlates with measures of neuroticism and irritability.

The Bortner Rating Scale (BRS) is composed of fourteen items, each in the form of an analogue scale. Subsequent studies have performed item-analysis on the BRS and have achieved greater internal consistency or greater predictability by shortening the scale to 7 or 12 items. The BRS has been widely used in international translations. Additional Type A scales have been developed internationally, but these have mostly been used only for specific nationalities in whose language they were written.

Practical Interventions

Systematic efforts have been under way for at least two decades to help persons with intense Type A behaviour patterns to change them to more of a Type B style. Perhaps the largest of these efforts was in the Recurrent Coronary Prevention Project conducted in the San Francisco Bay area in the 1980s. Repeated follow-up over several years documented that changes were achieved in many people and also that the rate of recurrent myocardial infarction was reduced in persons receiving the Type A behaviour reduction efforts as opposed to those receiving only cardiovascular counselling (Thoreson and Powell 1992).

Intervention in the Type A behaviour pattern is difficult to accomplish successfully because this behavioural style has so many rewarding features, particularly in terms of career advancement and material gain. The programme itself must be carefully crafted according to effective psychological principles, and a group process approach appears to be more effective than individual counselling.



Career Stages

Introduction

The career stage approach is one way to look at career development. The way in which a researcher approaches the issue of career stages is frequently based on Levinson’s life stage development model (Levinson 1986). According to this model, people grow through specific stages separated by transition periods. At each stage a new and crucial activity and psychological adjustment may be completed (Ornstein, Cron and Slocum 1989). In this way, defined career stages can be, and usually are, based on chronological age. The age ranges assigned for each stage have varied considerably between empirical studies, but usually the early career stage is considered to range from the ages of 20 to 34 years, the mid-career from 35 to 50 years and the late career from 50 to 65 years.

According to Super’s career development model (Super 1957; Ornstein, Cron and Slocum 1989) the four career stages are based on the qualitatively different psychological task of each stage. They can be based either on age or on organizational, positional or professional tenure. The same people can recycle several times through these stages in their work career. For example, according to the Career Concerns Inventory Adult Form, the actual career stage can be defined at an individual or group level. This instrument assesses an individual’s awareness of and concerns with various tasks of career development (Super, Zelkowitz and Thompson 1981). When tenure measures are used, the first two years are seen as a trial period. The establishment period from two to ten years means career advancement and growth. After ten years comes the maintenance period, which means holding on to the accomplishments achieved. The decline stage implies the development of one’s self-image independently of one’s career.

Because the theoretical bases of the definition of the career stages and the sorts of measure used in practice differ from one study to another, it is apparent that the results concerning the health- and job-relatedness of career development vary, too.

Career Stage as a Moderator of Work-Related Health and Well-Being

Most studies of career stage as a moderator between job characteristics and the health or well-being of employees deal with organizational commitment and its relation to job satisfaction or to behavioural outcomes such as performance, turnover and absenteeism (Cohen 1991). The relationship between job characteristics and strain has also been studied. The moderating effect of career stage means statistically that the average correlation between measures of job characteristics and well-being varies from one career stage to another.
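In practice this is often examined by computing the stressor-well-being correlation separately within each career stage group (or, equivalently, by testing a stage-by-stressor interaction). The sketch below shows the subgroup-correlation version on invented data; none of the numbers come from the studies discussed here.

```python
# Minimal sketch of career stage as a moderator: the job demands <-> well-being
# correlation is computed within each career stage group and compared.
# All data are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "career_stage": ["early"] * 4 + ["mid"] * 4 + ["late"] * 4,
    "job_demands":  [2, 4, 6, 8, 2, 4, 6, 8, 2, 4, 6, 8],
    "well_being":   [9, 7, 5, 3, 6, 6, 5, 6, 7, 8, 6, 7],
})

by_stage = df.groupby("career_stage")[["job_demands", "well_being"]].apply(
    lambda g: g["job_demands"].corr(g["well_being"])
)
print(by_stage)  # a moderating effect appears as correlations that differ by stage
```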

Work commitment usually increases from early career stages to later stages, although among salaried male professionals, job involvement was found to be lowest in the middle stage. In the early career stage, employees had a stronger need to leave the organization and to be relocated (Morrow and McElroy 1987). Among hospital nurses, measures of well-being were most strongly associated with career commitment and affective organizational commitment (i.e., emotional attachment to the organization). Continuance commitment (a function of the perceived number of alternatives and the degree of sacrifice involved) and normative commitment (loyalty to the organization) increased with career stage (Reilly and Orsak 1991).

A meta-analysis was carried out of 41 samples dealing with the relationship between organizational commitment and outcomes indicating well-being. The samples were divided into different career stage groups according to two measures of career stage: age and tenure. Age as a career stage indicator significantly affected turnover and turnover intentions, while organizational tenure was related to job performance and absenteeism. Low organizational commitment was related to high turnover, especially in the early career stage, whereas low organizational commitment was related to high absenteeism and low job performance in the late career stage (Cohen 1991).

The relationship between work attitudes (for instance, job satisfaction) and work behaviour has been found to be moderated by career stage to a considerable degree (e.g., Stumpf and Rabinowitz 1981). Among employees of public agencies, career stage measured with reference to organizational tenure was found to moderate the relationship between job satisfaction and job performance. Their relation was strongest in the first career stage. This was also supported in a study of sales personnel. Among academic teachers, the relationship between satisfaction and performance was found to be negative during the first two years of tenure.

Most studies of career stage have dealt with men. Even in many early studies from the 1970s in which the sex of the respondents was not reported, it is apparent that most of the subjects were men. Ornstein and Lynn (1990) tested how the career stage models of Levinson and Super described differences in career attitudes and intentions among professional women. The results suggest that career stages based on age were related to organizational commitment, intention to leave the organization and a desire for promotion. These findings were, in general, similar to those found among men (Ornstein, Cron and Slocum 1989). However, no support was found for the predictive value of career stages as defined on a psychological basis.

Studies of stress have generally either ignored age, and consequently career stage, in their study designs or treated it as a confounding factor and controlled for its effects. Hurrell, McLaney and Murphy (1990) contrasted the effects of stress in mid-career to its effects in early and late career, using age as a basis for their grouping of US postal workers. Perceived ill health was not related to job stressors in mid-career, but work pressure and underutilization of skills predicted it in early and late career. Work pressure was also related to somatic complaints in the early and late career groups. Underutilization of abilities was more strongly related to job satisfaction and somatic complaints among mid-career workers. Social support had more influence on mental health than on physical health, and this effect was more pronounced in mid-career than in the early or late career stages. Because the data were taken from a cross-sectional study, the authors note that a cohort explanation of the results might also be possible (Hurrell, McLaney and Murphy 1990).

When adult male and female workers were grouped according to age, the older workers more frequently reported overload and responsibility as stressors at work, whereas the younger workers cited insufficiency (e.g., work that is not challenging), boundary-spanning roles and physical environment stressors (Osipow, Doty and Spokane 1985). The older workers reported fewer strain symptoms of all kinds. One reason may be that older people used more rational-cognitive, self-care and recreational coping skills, evidently learned during their careers; selection based on symptoms over the course of a career may also explain these differences, as people tend to leave jobs that stress them excessively over time.

Among Finnish and US male managers, the relationship between job demands and control, on the one hand, and psychosomatic symptoms, on the other, was found to vary according to career stage (defined on the basis of age) (Hurrell and Lindström 1992; Lindström and Hurrell 1992). Among US managers, job demands and control had a significant effect on symptom reporting in the middle career stage, but not in the early and late stages, while among Finnish managers, long weekly working hours and low job control increased stress symptoms in the early career stage, but not in the later stages. Differences between the two groups might be due to the differences in the two samples studied. The Finnish managers, who worked in the construction trades, had high workloads already in their early career stage, whereas the US managers, who were public sector workers, had their highest workloads in the middle career stage.

To sum up the results of research on the moderating effects of career stage: in the early career stage, low organizational commitment is related to turnover, and job stressors are related to perceived ill health and somatic complaints. In mid-career the results are conflicting: sometimes job satisfaction and performance are positively related, sometimes negatively. In mid-career, job demands and low control are related to frequent symptom reporting among some occupational groups. In late career, organizational commitment is correlated with low absenteeism and good performance. Findings on relations between job stressors and strain are inconsistent for the late career stage. There are some indications that more effective coping decreases work-related strain symptoms in late career.

Interventions

Practical interventions to help people cope better with the specific demands of each career stage would be beneficial. Vocational counselling at the entry stage of one’s work life would be especially useful. Interventions for minimizing the negative impact of career plateauing are suggested because this can be either a time of frustration or an opportunity to face new challenges or to reappraise one’s life goals (Weiner, Remer and Remer 1992). Results of age-based health examinations in occupational health services have shown that job-related problems lowering working ability gradually increase and qualitatively change with age. In early and mid-career they are related to coping with work overload, but in later middle and late career they are gradually accompanied by declining psychological condition and physical health, facts that indicate the importance of early institutional intervention at an individual level (Lindström, Kaihilahti and Torstila 1988). Both in research and in practical interventions, mobility and turnover patterns should be taken into account, as well as the role played by one’s occupation (and situation within that occupation) in one’s career development.



Socialization

The process by which outsiders become organizational insiders is known as organizational socialization. While early research on socialization focused on indicators of adjustment such as job satisfaction and performance, recent research has emphasized the links between organizational socialization and work stress.

Socialization as a Moderator of Job Stress

Entering a new organization is an inherently stressful experience. Newcomers encounter a myriad of stressors, including role ambiguity, role conflict, work and home conflicts, politics, time pressure and work overload. These stressors can lead to distress symptoms. Studies in the 1980s, however, suggest that a properly managed socialization process has the potential for moderating the stressor-strain connection.

Two particular themes have emerged in the contemporary research on socialization:

  1. the acquisition of information during socialization,
  2. supervisory support during socialization.

 

Information acquired by newcomers during socialization helps alleviate the considerable uncertainty in their efforts to master their new tasks, roles and interpersonal relationships. Often, this information is provided via formal orientation-cum-socialization programmes. In the absence of formal programmes, or (where they exist) in addition to them, socialization occurs informally. Recent studies have indicated that newcomers who proactively seek out information adjust more effectively (Morrison 1993). In addition, newcomers who underestimate the stressors in their new job report higher distress symptoms (Nelson and Sutton 1991).

Supervisory support during the socialization process is of special value. Newcomers who receive support from their supervisors report less stress from unmet expectations (Fisher 1985) and fewer psychological symptoms of distress (Nelson and Quick 1991). Supervisory support can help newcomers cope with stressors in at least three ways. First, supervisors may provide instrumental support (such as flexible work hours) that helps alleviate a particular stressor. Secondly, they may provide emotional support that leads a newcomer to feel more efficacy in coping with a stressor. Thirdly, supervisors play an important role in helping newcomers make sense of their new environment (Louis 1980). For example, they can frame situations for newcomers in a way that helps them appraise situations as threatening or nonthreatening.

In summary, socialization efforts that provide necessary information to newcomers and support from supervisors can prevent the stressful experience from becoming distressful.

Evaluating Organizational Socialization

The organizational socialization process is dynamic, interactive and communicative, and it unfolds over time. In this complexity lies the challenge of evaluating socialization efforts. Two broad approaches to measuring socialization have been proposed. One approach consists of the stage models of socialization (Feldman 1976; Nelson 1987). These models portray socialization as a multistage transition process with key variables at each of the stages. Another approach highlights the various socialization tactics that organizations use to help newcomers become insiders (Van Maanen and Schein 1979).

With both approaches, it is contended that there are certain outcomes that mark successful socialization. These outcomes include performance, job satisfaction, organizational commitment, job involvement and intent to remain with the organization. If socialization is a stress moderator, then distress symptoms (specifically, low levels of distress symptoms) should be included as an indicator of successful socialization.

Health Outcomes of Socialization

Because the relationship between socialization and stress has only recently received attention, few studies have included health outcomes. The evidence indicates, however, that the socialization process is linked to distress symptoms. Newcomers who found interactions with their supervisors and other newcomers helpful reported lower levels of psychological distress symptoms such as depression and inability to concentrate (Nelson and Quick 1991). Further, newcomers with more accurate expectations of the stressors in their new jobs reported lower levels of both psychological symptoms (e.g., irritability) and physiological symptoms (e.g., nausea and headaches).

Because socialization is a stressful experience, health outcomes are appropriate variables to study. Studies are needed that focus on a broad range of health outcomes and that combine self-reports of distress symptoms with objective health measures.

Organizational Socialization as Stress Intervention

The contemporary research on organizational socialization suggests that it is a stressful process that, if not managed well, can lead to distress symptoms and other health problems. Organizations can take at least three actions to ease the transition by way of intervening to ensure positive outcomes from socialization.

First, organizations should encourage realistic expectations among newcomers of the stressors inherent in the new job. One way of accomplishing this is to provide a realistic job preview that details the most commonly experienced stressors and effective ways of coping (Wanous 1992). Newcomers who have an accurate view of what they will encounter can preplan coping strategies and will experience less reality shock from those stressors about which they have been forewarned.

Secondly, organizations should make numerous sources of accurate information available to newcomers in the form of booklets, interactive information systems or hotlines (or all of these). The uncertainty of the transition into a new organization can be overwhelming, and multiple sources of informational support can aid newcomers in coping with the uncertainty of their new jobs. In addition, newcomers should be encouraged to seek out information during their socialization experiences.

Thirdly, emotional support should be explicitly planned for in designing socialization programmes. The supervisor is a key player in the provision of such support and may be most helpful by being emotionally and psychologically available to newcomers (Hirshhorn 1990). Other avenues for emotional support include mentoring, activities with more senior and experienced co-workers, and contact with other newcomers.



Gravel

Gravel is a loose conglomerate of stones that have been mined from a surface deposit, dredged from a river bottom or obtained from a quarry and crushed into desired sizes. Gravel has a variety of uses, including: for rail beds; in roadways, walkways and roofs; as filler in concrete (often for foundations); in landscaping and gardening; and as a filter medium.

The principal safety and health hazards to those who work with gravel are airborne silica dust, musculoskeletal problems and noise. Free crystalline silicon dioxide occurs naturally in many rocks that are used to make gravel. The silica content of bulk species of stone varies and is not a reliable indicator of the percentage of airborne silica dust in a dust sample. Granite contains about 30% silica by weight. Limestone and marble have less free silica.

Silica can become airborne during quarrying, sawing, crushing, sizing and, to a lesser extent, spreading of gravel. Generation of airborne silica can usually be prevented with water sprays and jets, and sometimes with local exhaust ventilation (LEV). In addition to construction workers, workers exposed to silica dust from gravel include quarry workers, railroad workers and landscape workers. Silicosis is more common among quarry or stone-crushing workers than among construction workers who work with gravel as a finished product. An elevated risk of mortality from pneumoconiosis and other non-malignant respiratory disease has been observed in one cohort of workers in the crushed-stone industry in the United States.

Musculoskeletal problems can occur as a result of manual loading or unloading of gravel or during manual spreading. The larger the individual pieces of stone and the larger the shovel or other tool used, the more difficult it is to manage the material with hand tools. The risk of sprains and strains can be reduced if two or more workers work together on strenuous tasks, and more so if draught animals or powered machines are used. Smaller shovels or rakes carry or push less weight than larger ones and can reduce the risk of musculoskeletal problems.

Noise accompanies mechanical processing or handling of stone or gravel. Stone crushing using a ball mill generates considerable low-frequency noise and vibration. Transporting gravel through metal chutes and mixing it in drums are both noisy processes. Noise can be controlled by using sound-absorbing or -reflecting materials around the ball mill, by using chutes lined with wood or other sound-absorbing (and durable) material or by using noise-insulated mixing drums.

 


Friday, 14 January 2011 16:41

Asphalt

Asphalts can generally be defined as complex mixtures of chemical compounds of high molecular weight, predominantly asphaltenes, cyclic hydrocarbons (aromatic or naphthenic) and a lesser quantity of saturated components of low chemical reactivity. The chemical composition of asphalts depends both on the original crude oil and on the process used during refining. Asphalts are derived predominantly from crude oils, especially from heavier, residue-rich crudes. Asphalt also occurs as a natural deposit, where it is usually the residue resulting from the evaporation and oxidation of liquid petroleum. Such deposits have been found in California, China, the Russian Federation, Switzerland, Trinidad and Tobago and Venezuela. Asphalts are non-volatile at ambient temperatures and soften gradually when heated. Asphalt should not be confused with tar, which is physically and chemically dissimilar.

Applications include paving streets, highways and airfields; making roofing, waterproofing and insulating materials; lining irrigation canals and reservoirs; and facing dams and levees. Asphalt is also a valuable ingredient of some paints and varnishes. It is estimated that the current annual world production of asphalts is over 60 million tonnes, with more than 80% used in road construction and maintenance and more than 15% used in roofing materials.

Asphalt mixes for road construction are produced by first heating and drying mixtures of graded crushed stone (such as granite or limestone), sand and filler and then mixing with penetration bitumen, referred to in the US as straight-run asphalt. This is a hot process. The asphalt is also heated using propane flames during application to a road bed.

Exposures and Hazards

Exposures to particulate polynuclear aromatic hydrocarbons (PAHs) in asphalt fumes have been measured in a variety of settings. Most of the PAHs found were naphthalene derivatives, not the four- to six-ring compounds that are more likely to pose a significant carcinogenic risk. In refinery asphalt processing units, respirable PAH levels ranged from non-detectable to 40 mg/m3. During drum-filling operations, 4-hour breathing-zone samples ranged from 1.0 mg/m3 upwind to 5.3 mg/m3 downwind. At asphalt mixing plants, exposures to benzene-soluble organic compounds ranged from 0.2 to 5.4 mg/m3. During paving operations, exposures to respirable PAH ranged from less than 0.1 mg/m3 to 2.7 mg/m3. Potentially noteworthy worker exposures may also occur during the manufacture and application of asphalt roofing materials. Little information is available regarding exposures to asphalt fumes in other industrial situations and during the application or use of asphalt products.

Handling of hot asphalt can cause severe burns because it is sticky and is not readily removed from the skin. From an industrial toxicological standpoint, the principal concern is irritation of the skin and eyes by fumes of hot asphalt. These fumes may cause dermatitis and acne-like lesions, as well as mild keratoses on prolonged and repeated exposure. The greenish-yellow fumes given off by boiling asphalt can also cause photosensitization and melanosis.

Although all asphaltic materials will combust if heated sufficiently, asphalt cements and oxidized asphalts will not normally burn unless their temperature is raised to about 260°C. The flammability of the liquid asphalts is influenced by the volatility and amount of petroleum solvent added to the base material. Thus, the rapid-curing liquid asphalts present the greatest fire hazard, which becomes progressively lower with the medium- and slow-curing types.

Because of its insolubility in aqueous media and the high molecular weight of its components, asphalt has a low order of toxicity.

The effects on the tracheobronchial tree and lungs of mice inhaling an aerosol of petroleum asphalt and another group inhaling smoke from heated petroleum asphalt included congestion, acute bronchitis, pneumonitis, bronchial dilation, some peribronchiolar round cell infiltration, abscess formation, loss of cilia, epithelial atrophy and necrosis. The pathological changes were patchy, and in some animals were relatively refractory to treatment. It was concluded that these changes were a non-specific reaction to breathing air polluted with aromatic hydrocarbons, and that their extent was dose dependent. Guinea pigs and rats inhaling fumes from heated asphalt showed effects such as chronic fibrosing pneumonitis with peribronchial adenomatosis, and the rats developed squamous cell metaplasia, but none of the animals had malignant lesions.

Steam-refined petroleum asphalts were tested by application to the skin of mice. Skin tumours were produced by undiluted asphalts, dilutions in benzene and a fraction of steam-refined asphalt. When air-refined (oxidized) asphalts were applied to the skin of mice, no tumour was found with undiluted material, but, in one experiment, an air-refined asphalt in solvent (toluene) produced topical skin tumours. Two cracking-residue asphalts produced skin tumours when applied to the skin of mice. A pooled mixture of steam- and air-blown petroleum asphalts in benzene produced tumours at the site of application on the skin of mice. One sample of heated, air-refined asphalt injected subcutaneously into mice produced a few sarcomas at the injection sites. A pooled mixture of steam- and air-blown petroleum asphalts produced sarcomas at the site of subcutaneous injection in mice. Steam-distilled asphalts injected intramuscularly produced local sarcomas in one experiment in rats. Both an extract of road-surfacing asphalt and its emissions were mutagenic to Salmonella typhimurium.

Evidence for carcinogenicity to humans is not conclusive. A cohort of roofers exposed to both asphalts and coal tar pitches showed an excess risk for respiratory cancer. Likewise, two Danish studies of asphalt workers found an excess risk for lung cancer, but some of these workers may also have been exposed to coal tar, and they were more likely to be smokers than the comparison group. Among Minnesota (but not California) highway workers, increases were noted for leukaemia and urological cancers. Even though the epidemiological data to date are inadequate to demonstrate with a reasonable degree of scientific certainty that asphalt presents a cancer risk to humans, general agreement exists, on the basis of experimental studies, that asphalt may pose such a risk.

Safety and Health Measures

Since heated asphalt will cause severe skin burns, those working with it should wear loose clothing in good condition, with the neck closed and the sleeves rolled down. Hand and arm protection should be worn. Safety shoes should be about 15 cm high and laced so that no openings are left through which hot asphalt may reach the skin. Face and eye protection is also recommended when heated asphalt is handled. Changing rooms and proper washing and bathing facilities are desirable. At crushing plants where dust is produced and at boiling pans from which fumes escape, adequate exhaust ventilation should be provided.

Asphalt kettles should be set securely and be levelled to preclude the possibility of their tipping. Workers should stand upwind of a kettle. The temperature of heated asphalt should be checked frequently in order to prevent overheating and possible ignition. If the flash point is approached, the fire under a kettle must be put out at once and no open flame or other source of ignition should be permitted nearby. Where asphalt is being heated, fire-extinguishing equipment should be within easy reach. For asphalt fires, dry chemical or carbon dioxide types of extinguishers are considered most appropriate. The asphalt spreader and the driver of an asphalt paving machine should be offered half-face respirators with organic vapour cartridges. In addition, to prevent the inadvertent swallowing of toxic materials, workers should not eat, drink or smoke near a kettle.

If molten asphalt strikes the exposed skin, it should be cooled immediately by quenching with cold water or by some other method recommended by medical advisers. An extensive burn should be covered with a sterile dressing and the patient should be taken to a hospital; minor burns should be seen by a physician. Solvents should not be used to remove asphalt from burned flesh. No attempt should be made to remove particles of asphalt from the eyes; instead the victim should be taken to a physician at once.


Classes of bitumens / asphalts

Class 1: Penetration bitumens are classified by their penetration value. They are usually produced from the residue from atmospheric distillation of petroleum crude oil by applying further distillation under vacuum, partial oxidation (air rectification), solvent precipitation or a combination of these processes. In Australia and the United States, bitumens that are approximately equivalent to those described here are called asphalt cements or viscosity-graded asphalts, and are specified on the basis of viscosity measurements at 60°C.

Class 2: Oxidized bitumens are classified by their softening points and penetration values. They are produced by passing air through hot, soft bitumen under controlled temperature conditions. This process alters the characteristics of the bitumen to give reduced temperature susceptibility and greater resistance to different types of imposed stress. In the United States, bitumens produced using air blowing are known as air-blown asphalts or roofing asphalts and are similar to oxidized bitumens.

Class 3: Cutback bitumens are produced by mixing penetration bitumens or oxidized bitumens with suitable volatile diluents from petroleum crudes such as white spirit, kerosene or gas oil, to reduce their viscosity and render them more fluid for ease of handling. When the diluent evaporates, the initial properties of bitumen are recovered. In the United States, cutback bitumens are sometimes referred to as road oils.

Class 4: Hard bitumens are normally classified by their softening point. They are manufactured similarly to penetration bitumens, but have lower penetration values and higher softening points (i.e., they are more brittle).

Class 5: Bitumen emulsions are fine dispersions of droplets of bitumen (from classes 1, 3 or 6) in water. They are manufactured using high-speed shearing devices, such as colloid mills. The bitumen content can range from 30 to 70% by weight. They can be anionic, cationic or non-ionic. In the United States, they are referred to as emulsified asphalts.

Class 6: Blended or fluxed bitumens may be produced by blending bitumens (primarily penetration bitumens) with solvent extracts (aromatic by-products from the refining of base oils), thermally cracked residues or certain heavy petroleum distillates with final boiling points above 350°C.

Class 7: Modified bitumens contain appreciable quantities (typically 3 to 15% by weight) of special additives, such as polymers, elastomers, sulphur and other products used to modify their properties; they are used for specialized applications.

Class 8: Thermal bitumens were produced by extended distillation, at high temperature, of a petroleum residue. Currently, they are not manufactured in Europe or in the United States.

Source: IARC 1985.


 


Friday, 14 January 2011 16:35

Cement and Concrete

Cement

Cement is a hydraulic bonding agent used in building construction and civil engineering. It is a fine powder obtained by grinding the clinker of a clay and limestone mixture calcined at high temperatures. When water is added to cement it becomes a slurry that gradually hardens to a stone-like consistency. It can be mixed with sand and gravel (coarse aggregates) to form mortar and concrete.

There are two types of cement: natural and artificial. The natural cements are obtained from natural materials having a cement-like structure and require only calcining and grinding to yield hydraulic cement powder. Artificial cements are available in large and increasing numbers. Each type has a different composition and mechanical structure and has specific merits and uses. Artificial cements may be classified as portland cement (named after the town of Portland in the United Kingdom) and aluminous cement.

Production

The portland process, which accounts for by far the largest part of world cement production, is illustrated in figure 1. It comprises two stages: clinker manufacture and clinker grinding. The raw materials used for clinker manufacture are calcareous materials such as limestone and argillaceous materials such as clay. The raw materials are blended and ground either dry (dry process) or in water (wet process). The pulverised mixture is calcined in either vertical or inclined rotary kilns at a temperature ranging from 1,400 to 1,450°C. On leaving the kiln, the clinker is cooled rapidly to prevent the conversion of tricalcium silicate, the main ingredient of portland cement, into dicalcium silicate and calcium oxide.
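In simplified notation, the conversion that rapid cooling is intended to suppress can be written as the decomposition of tricalcium silicate (alite) into dicalcium silicate and free lime (a standard cement-chemistry equation, added here for illustration rather than taken from the source):

Ca3SiO5 → Ca2SiO4 + CaO

Because tricalcium silicate is the main strength-giving phase of portland clinker, allowing this reaction to proceed during slow cooling would yield a weaker product; hence the requirement for rapid cooling.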

Figure 1. The manufacture of cement

The lumps of cooled clinker are often mixed with gypsum and various other additives which control the setting time and other properties of the mixture in use. In this way it is possible to obtain a wide range of different cements such as normal portland cement, rapid-setting cement, hydraulic cement, metallurgical cement, trass cement, hydrophobic cement, maritime cement, cements for oil and gas wells, cements for highways or dams, expansive cement, magnesium cement and so on. Finally, the clinker is ground in a mill, screened and stored in silos ready for packaging and shipping. The chemical composition of normal portland cement is:

  • calcium oxide (CaO): 60 to 70%
  • silicon dioxide (SiO2) (including about 5% free SiO2): 19 to 24%
  • aluminium oxide (Al2O3): 4 to 7%
  • ferric oxide (Fe2O3): 2 to 6%
  • magnesium oxide (MgO): less than 5%

 

Aluminous cement produces mortar or concrete with high initial strength. It is made from a mixture of limestone and clay with a high aluminium oxide content (without extenders) which is calcined at about 1,400°C. The chemical composition of aluminous cement is approximately:

  • aluminium oxide (Al2O3): 50%
  • calcium oxide (CaO): 40%
  • ferric oxide (Fe2O3): 6%
  • silicon dioxide (SiO2): 4%

 

Fuel shortages have led to increased production of natural cements, especially those using tuff (volcanic ash). If necessary, the tuff is calcined at 1,200°C, instead of the 1,400 to 1,450°C required for portland cement. The tuff may contain 70 to 80% amorphous free silica and 5 to 10% quartz. With calcination, the amorphous silica is partially transformed into tridymite and cristobalite.

Uses

Cement is used as a binding agent in mortar and in concrete, a mixture of cement, gravel and sand. By varying the processing method or by including additives, different types of concrete may be obtained using a single type of cement (e.g., normal, clay, bituminous, asphalt-tar, rapid-setting, foamed, waterproof, microporous, reinforced, stressed and centrifuged concrete).

Hazards

In the quarries from which the clay, limestone and gypsum for cement are extracted, workers are exposed to the hazards of climatic conditions, dusts produced during drilling and crushing, explosions and falls of rock and earth. Road transport accidents occur during haulage to the cement works.

During cement processing, the main hazard is dust. In the past, dust levels ranging from 26 to 114 mg/m3 have been recorded in quarries and cement works. The following dust levels were reported for individual processes: clay extraction, 41.4 mg/m3; raw-materials crushing and milling, 79.8 mg/m3; sieving, 384 mg/m3; clinker grinding, 140 mg/m3; cement packing, 256.6 mg/m3; and loading and related work, 179 mg/m3. In modern factories using the wet process, upper short-term values of 15 to 20 mg of dust per m3 of air are occasionally recorded. Air pollution in the neighbourhood of cement factories is now around 5 to 10% of the earlier values, thanks in particular to the widespread use of electrostatic filters. The free silica content of the dust usually varies between that of the raw material (clay may contain fine particulate quartz, and sand may be added) and that of the clinker or the cement, from which all the free silica will normally have been eliminated.

Other hazards encountered in cement works include high ambient temperatures, especially near furnace doors and on furnace platforms, radiant heat and high noise levels (120 dB) in the vicinity of the ball mills. Carbon monoxide concentrations ranging from trace quantities up to 50 ppm have been found near limestone kilns.

Health disorders encountered among cement industry workers include diseases of the respiratory system, digestive disorders, skin diseases, rheumatic and nervous conditions, and hearing and visual disorders.

Respiratory tract diseases

Respiratory tract disorders are the most important group of occupational diseases in the cement industry and are the result of inhalation of airborne dust and the effects of macroclimatic and microclimatic conditions in the workplace environment. Chronic bronchitis, often associated with emphysema, has been reported as the most frequent respiratory disease.

Normal portland cement does not cause silicosis because of the absence of free silica. However, workers engaged in cement production may be exposed to raw materials whose free silica content varies greatly. Acid-resistant cements used for refractory plates and bricks contain high amounts of free silica, and exposure to them involves a definite risk of silicosis.

Cement pneumoconiosis has been described as a benign pinhead or reticular pneumoconiosis, which may appear after prolonged exposure, and presents a very slow progression. However, a few cases of severe pneumoconiosis have also been observed, most likely following exposure to materials other than clay and portland cement.

Some cements also contain varying amounts of diatomaceous earth and tuff. It is reported that when heated, diatomaceous earth becomes more toxic due to the transformation of the amorphous silica into cristobalite, a crystalline substance even more pathogenic than quartz. Concomitant tuberculosis may complicate the course of the cement pneumoconiosis.

Digestive disorders

Attention has been drawn to the apparently high incidence of gastroduodenal ulcers in the cement industry. Examination of 269 cement plant workers revealed 13 cases of gastroduodenal ulcer (4.8%). Subsequently, gastric ulcers were induced in both guinea pigs and a dog fed on cement dust. However, a study at a cement works showed a sickness absence rate of 1.48 to 2.69% due to gastroduodenal ulcers. Since ulcers may pass through an acute phase several times a year, these figures are not excessive when compared with those for other occupations.

Skin diseases

Skin diseases are widely reported in the literature and have been said to account for about 25% or more of all occupational skin diseases. Various forms have been observed, including inclusions in the skin, periungual erosions, diffuse eczematous lesions and cutaneous infections (furuncles, abscesses and panaritiums). However, these are more frequent among cement users (e.g., bricklayers and masons) than among cement manufacturing plant workers.

As early as 1947 it was suggested that cement eczema might be due to the presence in the cement of hexavalent chromium (detected by the chromium solution test). The chromium salts probably enter the dermal papillae, combine with proteins and produce a sensitization of an allergic nature. Since the raw materials used for cement manufacture do not usually contain chromium, the following have been listed as the possible sources of the chromium in cement: volcanic rock, the abrasion of the refractory lining of the kiln, the steel balls used in the grinding mills and the different tools used for crushing and grinding the raw materials and the clinker. Sensitization to chromium may be the leading cause of nickel and cobalt sensitivity. The high alkalinity of cement is considered an important factor in cement dermatoses.

Rheumatic and nervous disorders

The wide variations in macroclimatic and microclimatic conditions encountered in the cement industry have been associated with the appearance of various disorders of the locomotor system (e.g., arthritis, rheumatism, spondylitis and various muscular pains) and the peripheral nervous system (e.g., back pain, neuralgia and radiculitis of the sciatic nerves).

Hearing and vision disorders

Moderate cochlear hypoacusia in workers in a cement mill has been reported. The main eye disease is conjunctivitis, which normally requires only ambulatory medical care.

Accidents

Accidents in quarries are due in most cases to falls of earth or rock, or they occur during transportation. In cement works the main types of accidental injuries are bruises, cuts and abrasions which occur during manual handling work.

Safety and health measures

A basic requirement in the prevention of dust hazards in the cement industry is a precise knowledge of the composition and, especially, of the free silica content of all the materials used. Knowledge of the exact composition of newly-developed types of cement is particularly important.

In quarries, excavators should be equipped with closed cabins and ventilation to ensure a pure air supply, and dust suppression measures should be implemented during drilling and crushing. The possibility of poisoning due to carbon monoxide and nitrous gases released during blasting may be countered by ensuring that workers are at a suitable distance during shotfiring and do not return to the blasting point until all fumes have cleared. Suitable protective clothing may be necessary to protect workers against inclement weather.

All dusty processes in cement works (grinding, sieving, transfer by conveyor belts) should be equipped with adequate ventilation systems, and conveyor belts carrying cement or raw materials should be enclosed, with special precautions being taken at conveyor transfer points. Good ventilation is also required on the clinker cooling platform, for clinker grinding and in cement packing plants.

The most difficult dust control problem is that of the clinker kiln stacks, which are usually fitted with electrostatic filters, preceded by bag or other filters. Electrostatic filters may be used also for the sieving and packing processes, where they must be combined with other methods for air pollution control. Ground clinker should be conveyed in enclosed screw conveyors.

Hot work points should be equipped with cold air showers, and adequate thermal screening should be provided. Repairs on clinker kilns should not be undertaken until the kiln has cooled adequately, and then only by young, healthy workers. These workers should be kept under medical supervision to check their cardiac, respiratory and sweat function and prevent the occurrence of thermal shock. Persons working in hot environments should be supplied with salted drinks when appropriate.

Skin disease prevention measures should include the provision of shower baths and barrier creams for use after showering. Desensitization treatment may be applied in cases of eczema: after removal from cement exposure for 3 to 6 months to allow healing, 2 drops of a 1:10,000 aqueous potassium dichromate solution are applied to the skin for 5 minutes, 2 to 3 times per week. In the absence of a local or general reaction, the contact time is normally increased to 15 minutes, followed by an increase in the strength of the solution. This desensitization procedure can also be applied in cases of sensitivity to cobalt, nickel and manganese. It has been found that chrome dermatitis (and even chrome poisoning) may be prevented and treated with ascorbic acid. The mechanism for the inactivation of hexavalent chromium by ascorbic acid involves reduction to trivalent chromium, which has a low toxicity, and subsequent complex formation of the trivalent species.
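The underlying redox chemistry can be sketched as follows (a simplified, illustrative equation based on standard chemistry rather than on the source, with dichromate standing in for the hexavalent chromium species actually present in moist cement): ascorbic acid (C6H8O6) is oxidized to dehydroascorbic acid (C6H6O6) while chromium is reduced from the hexavalent to the trivalent state:

Cr2O7^2- + 3 C6H8O6 + 8 H+ → 2 Cr^3+ + 3 C6H6O6 + 7 H2O

The trivalent chromium formed penetrates the skin far less readily than hexavalent chromium and can be bound in poorly soluble complexes, which is consistent with the preventive effect described above.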

Concrete and Reinforced Concrete Work

To produce concrete, aggregates such as gravel and sand are mixed with cement and water in motor-driven horizontal or vertical mixers of various capacities installed at the construction site. Sometimes, however, it is more economical to have ready-mixed concrete delivered and discharged into a silo on the site. For this purpose, concrete mixing stations are installed on the periphery of towns or near gravel pits. Special rotary-drum lorries are used to avoid separation of the mixed constituents of the concrete, which would lower the strength of the concrete structures.

Tower cranes or hoists are used to transport the ready-mixed concrete from the mixer or silo to the formwork. The size and height of certain structures may also require the use of concrete pumps for conveying and placing the ready-mixed concrete. There are pumps which lift the concrete to heights of up to 100 m. As their capacity is far greater than that of cranes or hoists, they are used in particular for the construction of high piers, towers and silos with the aid of climbing formwork. Concrete pumps are generally mounted on lorries, and the rotary-drum lorries used for transporting ready-mixed concrete are now frequently equipped to deliver the concrete directly to the concrete pump without passing through a silo.

Formwork

Formwork has followed the technical development rendered possible by the availability of larger tower cranes with longer arms and increased capacities, and it is no longer necessary to prepare shuttering in situ.

Prefabricated formwork up to 25 m2 in size is used in particular for making the vertical structures of large residential and industrial buildings, such as facades and dividing walls. These structural-steel formwork elements, which are prefabricated in the site shop or by the industry, are lined with sheet-metal or wooden panels. They are handled by crane and removed after the concrete has set. Depending on the type of building method, prefabricated formwork panels are either lowered to the ground for cleaning or taken to the next wall section ready for pouring.

So-called formwork tables are used to make horizontal structures (i.e., floor slabs for large buildings). These tables are composed of several structural-steel elements and can be assembled to form floors of different surfaces. The upper part of the table (i.e., the actual floor-slab form) is lowered by means of screw jacks or hydraulic jacks after the concrete has set. Special beak-like load-carrying devices have been devised to withdraw the tables, to lift them to the next floor and to insert them there.

Sliding or climbing formwork is used to build towers, silos, bridge piers and similar high structures. A single formwork element is prepared in situ for this purpose; its cross-section corresponds to that of the structure to be erected, and its height may vary between 2 and 4 m. The formwork surfaces in contact with the concrete are lined with steel sheets, and the entire element is linked to jacking devices. Vertical steel bars anchored in the concrete which is poured serve as jacking guides. The sliding form is jacked upwards as the concrete sets, and the reinforcement work and concrete placing continue without interruption. This means that work has to go on around the clock.

Climbing forms differ from sliding ones in that they are anchored in the concrete by means of screw sleeves. As soon as the poured concrete has set to the required strength, the anchor screws are undone, the form is lifted to the height of the next section to be poured, anchored and prepared for receiving the concrete.

So-called form cars are frequently used in civil engineering, in particular for making bridge deck slabs. Especially when long bridges or viaducts are built, a form car replaces the rather complex falsework. The deck forms corresponding to one length of bay are fitted to a structural-steel frame so that the various form elements can be jacked into position and be removed laterally or lowered after the concrete has set. When the bay is finished, the supporting frame is advanced by one bay length, the form elements are again jacked into position, and the next bay is poured.

When a bridge is built using the so-called cantilever technique the form-supporting frame is much shorter than the one described above. It does not rest on the next pier but must be anchored to form a cantilever. This technique, which is generally used for very high bridges, often relies on two such frames which are advanced by stages from piers on both sides of the span.

Prestressed concrete is used particularly for bridges, but also in building especially designed structures. Strands of steel wire wrapped in steel-sheet or plastic sheathing are embedded in the concrete at the same time as the reinforcement. The ends of the strands or tendons are provided with head plates so that the prestressed concrete elements may be pretensioned with the aid of hydraulic jacks before the elements are loaded.

Prefabricated elements

Construction techniques for large residential buildings, bridges and tunnels have been rationalized even further by prefabricating elements such as floor slabs, walls, bridge beams and so on, in a special concrete factory or near the construction site. The prefabricated elements, which are assembled on the site, do away with the erection, displacement and dismantling of complex formwork and falsework, and a great deal of dangerous work at height can be avoided.

Reinforcement

Reinforcement is generally delivered to the site cut and bent according to bar and bending schedules. Only when prefabricating concrete elements on the site or in the factory are the reinforcement bars tied or welded to each other to form cages or mats which are inserted into the forms before the concrete is poured.

Prevention of accidents

Mechanization and rationalization have eliminated many traditional hazards on building sites, but have also created new dangers. For instance, fatalities due to falls from height have considerably diminished thanks to the use of form cars, form-supporting frames in bridge building and other techniques. This is due to the fact that the work platforms and walkways with their guard rails are assembled only once and displaced at the same time as the form car, whereas with traditional formwork the guard rails were often neglected. On the other hand, mechanical hazards are increasing and electrical hazards are particularly serious in wet environments. Health hazards arise from cement itself, from substances added for curing or waterproofing and from lubricants for formwork.

Some important accident prevention measures to be taken for various operations are given below.

Concrete mixing

As concrete is nearly always mixed by machine, special attention should be paid to the design and layout of switchgear and feed-hopper skips. In particular, when concrete mixers are being cleaned, a switch may be unintentionally actuated, starting the drum or the skip and causing injury to the worker. Therefore, switches should be protected and also arranged in such a manner that no confusion is possible. If necessary, they should be interlocked or provided with a lock. The skips should be free from danger zones for the mixer attendant and workers moving on passageways near it. It must also be ensured that workers cleaning the pits beneath feed-hopper skips are not injured by the accidental lowering of the hopper.

Silos for aggregates, especially sand, present a hazard of fatal accidents. For example, workers entering a silo without a standby person and without a safety harness and lifeline may fall and be buried in the loose material. Silos should therefore be equipped with vibrators and platforms from which sticking sand can be poked down, and corresponding warning notices should be displayed. No person should be allowed to enter the silo without another standing by.

Concrete handling and placing

The proper layout of concrete transfer points and their equipment with mirrors and bucket receiving cages obviates the danger of injuring a standby worker who otherwise has to reach out for the crane bucket and guide it to a proper position.

Transfer silos which are jacked up hydraulically must be secured so that they are not suddenly lowered if a pipeline breaks.

Work platforms fitted with guard rails must be provided when placing the concrete in the forms with the aid of buckets suspended from the crane hook or with a concrete pump. The crane operators must be trained for this type of work and must have normal vision. If large distances are covered, two-way telephone communication or walkie-talkies have to be used.

When concrete pumps with pipelines and placer masts are used, special attention should be paid to the stability of the installation. Agitating lorries (cement mixers) with built-in concrete pumps must be equipped with interlocked switches which make it impossible to start the two operations simultaneously. The agitators must be guarded so that the operating personnel cannot come into contact with moving parts. The baskets for collecting the rubber ball that is pressed through the pipeline to clean it after the concrete has been poured are now replaced by two elbows arranged in opposite directions. These elbows absorb almost all the pressure needed to push the ball through the placing line; they not only eliminate the whip effect at the line end, but also prevent the ball from being shot out of the line end.

When agitating lorries are used in combination with placing plant and lifting equipment, special attention has to be paid to overhead electric lines. Unless the overhead lines can be displaced, they must be insulated or guarded by protective scaffolds within the work range to exclude any accidental contact. It is important to contact the power supply station beforehand.

Formwork

Falls are common during the assembly of traditional formwork composed of square timber and boards because the necessary guard rails and toe boards are often neglected for work platforms which are only required for short periods. Nowadays, steel supporting structures are widely used to speed up formwork assembly, but here again the available guard rails and toe boards are frequently not installed on the pretext that they are needed for so short a time.

Plywood form panels, which are increasingly used, offer the advantage of being easy and quick to assemble. However, after being used several times, they are frequently misappropriated as platforms for rapidly required scaffolds, and it is generally forgotten that the distances between the supporting transoms must be considerably reduced in comparison with those for normal scaffold planks. Accidents resulting from the breakage of form panels misused as scaffold platforms are still rather frequent.

Two outstanding hazards must be borne in mind when using prefabricated form elements. These elements must be stored in such a manner that they cannot turn over. Since it is not always feasible to store form elements horizontally, they must be secured by stays. Form elements permanently equipped with platforms, guard rails and toeboards may be attached by slings to the crane hook as well as being assembled and dismantled on the structure under construction. They constitute a safe workplace for the personnel and do away with the provision of work platforms for placing the concrete. Fixed ladders may be added for safer access to platforms. Scaffold and work platforms with guard rails and toe boards permanently attached to the form element should be used in particular with sliding and climbing formwork.

Experience has shown that accidents due to falls are rare when work platforms do not have to be improvised and rapidly assembled. Unfortunately, form elements fitted with guard rails cannot be used everywhere, especially where small residential buildings are being erected.

When the form elements are raised by crane from storage to the structure, lifting tackle of appropriate size and strength, such as slings and spreaders, must be used. If the angle between the sling legs is too large, the form elements must be handled with the aid of spreaders.

The workers cleaning the forms are exposed to a health hazard which is generally overlooked: the use of portable grinders to remove concrete residues adhering to the form surfaces. Dust measurements have shown that the grinding dust contains a high percentage of respirable fractions and silica. Therefore, dust control measures must be taken (e.g., portable grinders with exhaust devices linked to a filter unit, or an enclosed form-board cleaning plant with exhaust ventilation).

Assembly of prefabricated elements

Special lifting equipment should be used in the manufacturing plant so that the elements can be moved and handled safely and without injury to the workers. Anchor bolts embedded in the concrete facilitate their handling not only in the factory but also on the assembly site. To avoid bending of the anchor bolts by oblique loads, large elements must be lifted with the aid of spreaders with short rope slings. If a load is applied to the bolts at an oblique angle, concrete may spill off and the bolts may be torn out. The use of inappropriate lifting tackle has caused serious accidents resulting from falling concrete elements.

Appropriate vehicles must be used for the road transport of prefabricated elements. The elements must be properly secured against overturning or sliding, for example when the driver has to brake the vehicle suddenly. Visibly displayed weight indications on the elements facilitate the task of the crane operator during loading, unloading and assembly on the site.

Lifting equipment on the site should be adequately chosen and operated. Tracks and roads must be kept in good condition in order to avoid overturning of loaded equipment during operation.

Work platforms protecting personnel against falls from height must be provided for the assembly of the elements. All possible means of collective protection, such as scaffolds, safety nets and overhead travelling cranes erected before completion of the building, should be considered before relying on personal protective equipment (PPE). It is, of course, possible to equip the workers with safety harnesses and lifelines, but experience has shown that some workers use this equipment only when they are under constant close supervision. Lifelines are indeed a hindrance when certain tasks are performed, and certain workers take pride in being able to work at great heights without using any protection.

Before starting to design a prefabricated building, the architect, the manufacturer of the prefabricated elements and the building contractor should meet to discuss and study the course and safety of all operations. When it is known beforehand what types of handling and lifting equipment are available on the site, the concrete elements may be provided in the factory with fastening devices for guard rails and toe boards. The façade ends of floor elements, for instance, are then easily fitted with prefabricated guard rails and toe boards before the elements are lifted into place. The wall elements corresponding to the floor slab may thereafter be safely assembled because the workers are protected by guard rails.

For the erection of certain high industrial structures, mobile work platforms are lifted into position by crane and hung from suspension bolts embedded in the structure itself. In such cases it may be safer to transport the workers to the platform by crane (which should have high safety characteristics and be run by a qualified operator) than to use improvised scaffolds or ladders.

When post-tensioning concrete elements, attention should be paid to the design of the post-tensioning recesses, which should enable the tensioning jacks to be applied, operated and removed without any hazard for the personnel. Suspension hooks for tensioning jacks or openings for passing the crane rope must be provided for post-tensioning work beneath bridge decks or in box-type elements. This type of work, too, requires the provision of work platforms with guard rails and toe boards. The platform floor should be sufficiently low to allow for ample work space and safe handling of the jack. No person should be permitted at the rear of the tensioning jack because serious accidents may result from the high energy released in the breakage of an anchoring element or a steel tendon. The workers should also avoid being in front of the anchor plates as long as the mortar pressed into the tendon sheaths has not set. As the mortar pump is connected with hydraulic pipes to the jack, no person should be permitted in the area between pump and jack during tensioning. Continuous communication among the operators and with supervisors is also very important.

Training

Thorough training of plant operators in particular and all construction site personnel in general is becoming more and more important in view of increasing mechanization and the use of many types of machinery, plant and substances. Unskilled labourers or helpers should be employed in exceptional cases only, if the number of construction site accidents is to be reduced.

 



" DISCLAIMER: The ILO does not take responsibility for content presented on this web portal that is presented in any language other than English, which is the language used for the initial production and peer-review of original content. Certain statistics have not been updated since the production of the 4th edition of the Encyclopaedia (1998)."
