28. Epidemiology and Statistics

Chapter Editors:  Franco Merletti, Colin L. Soskolne and Paolo Vineis


Table of Contents

Tables and Figures

Epidemiological Method Applied to Occupational Health and Safety
Franco Merletti, Colin L. Soskolne and Paolo Vineis

Exposure Assessment
M. Gerald Ott

Summary Worklife Exposure Measures
Colin L. Soskolne

Measuring Effects of Exposures
Shelia Hoar Zahm

     Case Study: Measures
     Franco Merletti, Colin L. Soskolne and Paolo Vineis

Options in Study Design
Sven Hernberg

Validity Issues in Study Design
Annie J. Sasco

Impact of Random Measurement Error
Paolo Vineis and Colin L. Soskolne

Statistical Methods
Annibale Biggeri and Mario Braga

Causality Assessment and Ethics in Epidemiological Research
Paolo Vineis

Case Studies Illustrating Methodological Issues in the Surveillance of Occupational Diseases
Jung-Der Wang

Questionnaires in Epidemiological Research
Steven D. Stellman and Colin L. Soskolne

Asbestos Historical Perspective
Lawrence Garfinkel

Tables


1. Five selected summary measures of worklife exposure

2. Measures of disease occurrence

3. Measures of association for a cohort study

4. Measures of association for case-control studies

5. General frequency table layout for cohort data

6. Sample layout of case-control data

7. Layout case-control data - one control per case

8. Hypothetical cohort of 1950 individuals to T2

9. Indices of central tendency & dispersion

10. A binomial experiment & probabilities

11. Possible outcomes of a binomial experiment

12. Binomial distribution, 15 successes/30 trials

13. Binomial distribution, p = 0.25; 30 trials

14. Type II error & power; x = 12, n = 30, α = 0.05

15. Type II error & power; x = 12, n = 40, α = 0.05

16. 632 workers exposed to asbestos 20 years or longer

17. O/E number of deaths among 632 asbestos workers

29. Ergonomics

Chapter Editors: Wolfgang Laurig and Joachim Vedder

Table of Contents 

Tables and Figures

Overview
Wolfgang Laurig and Joachim Vedder

Goals, Principles and Methods

The Nature and Aims of Ergonomics
William T. Singleton

Analysis of Activities, Tasks and Work Systems
Véronique De Keyser

Ergonomics and Standardization
Friedhelm Nachreiner

Checklists
Pranab Kumar Nag

Physical and Physiological Aspects

Anthropometry
Melchiorre Masali

Muscular Work
Juhani Smolander and Veikko Louhevaara

Postures at Work
Ilkka Kuorinka

Biomechanics
Frank Darby

General Fatigue
Étienne Grandjean

Fatigue and Recovery
Rolf Helbig and Walter Rohmert

Psychological Aspects

Mental Workload
Winfried Hacker

Vigilance
Herbert Heuer

Mental Fatigue
Peter Richter

Organizational Aspects of Work

Work Organization
Eberhard Ulich and Gudela Grote

Sleep Deprivation
Kazutaka Kogi

Work Systems Design

Workstations
Roland Kadefors

Tools
T.M. Fraser

Controls, Indicators and Panels
Karl H. E. Kroemer

Information Processing and Design
Andries F. Sanders

Designing for Everyone

Designing for Specific Groups
Joke H. Grady-van den Nieuwboer

     Case Study: The International Classification of Functional Limitation in People

Cultural Differences
Houshang Shahnavaz

Elderly Workers
Antoine Laville and Serge Volkoff

Workers with Special Needs
Joke H. Grady-van den Nieuwboer

Diversity and Importance of Ergonomics - Two Examples

System Design in Diamond Manufacturing
Issachar Gilad

Disregarding Ergonomic Design Principles: Chernobyl
Vladimir M. Munipov 

Tables


1. Basic anthropometric core list

2. Fatigue & recovery dependent on activity levels

3. Rules for combining the effects of two stress factors on strain

4. Differentiation among several negative consequences of mental strain

5. Work-oriented principles for production structuring

6. Participation in organizational context

7. User participation in the technology process

8. Irregular working hours & sleep deprivation

9. Aspects of advance, anchor & retard sleeps

10. Control movements & expected effects

11. Control-effect relations of common hand controls

12. Rules for arrangement of controls

13. Guidelines for labels

32. Record Systems and Surveillance

Chapter Editor: Steven D. Stellman

Table of Contents 

Tables and Figures

Occupational Disease Surveillance and Reporting Systems
Steven B. Markowitz

Occupational Hazard Surveillance
David H. Wegman and Steven D. Stellman

Surveillance in Developing Countries
David Koh and Kee-Seng Chia

Development and Application of an Occupational Injury and Illness Classification System
Elyce Biddle

Risk Analysis of Nonfatal Workplace Injuries and Illnesses
John W. Ruser

Case Study: Worker Protection and Statistics on Accidents and Occupational Diseases - HVBG, Germany
Martin Butz and Burkhard Hoffmann

Case Study: Wismut - A Uranium Exposure Revisited
Heinz Otten and Horst Schulz

Measurement Strategies and Techniques for Occupational Exposure Assessment in Epidemiology
Frank Bochmann and Helmut Blome

Case Study: Occupational Health Surveys in China

Tables


1. Angiosarcoma of the liver - world register

2. Occupational illness, US, 1986 versus 1992

3. US Deaths from pneumoconiosis & pleural mesothelioma

4. Sample list of notifiable occupational diseases

5. Illness & injury reporting code structure, US

6. Nonfatal occupational injuries & illnesses, US 1993

7. Risk of occupational injuries & illnesses

8. Relative risk for repetitive motion conditions

9. Workplace accidents, Germany, 1981-93

10. Grinders in metalworking accidents, Germany, 1984-93

11. Occupational disease, Germany, 1980-93

12. Infectious diseases, Germany, 1980-93

13. Radiation exposure in the Wismut mines

14. Occupational diseases in Wismut uranium mines 1952-90

33. Toxicology

Chapter Editor: Ellen K. Silbergeld


Table of Contents

Tables and Figures

Introduction
Ellen K. Silbergeld, Chapter Editor

General Principles of Toxicology

Definitions and Concepts
Bo Holmberg, Johan Hogberg and Gunnar Johanson

Toxicokinetics
Dušan Djurić

Target Organ And Critical Effects
Marek Jakubowski

Effects Of Age, Sex And Other Factors
Spomenka Telišman

Genetic Determinants Of Toxic Response
Daniel W. Nebert and Ross A. McKinnon

Mechanisms of Toxicity

Introduction And Concepts
Philip G. Watanabe

Cellular Injury And Cellular Death
Benjamin F. Trump and Irene K. Berezesky

Genetic Toxicology
R. Rita Misra and Michael P. Waalkes

Immunotoxicology
Joseph G. Vos and Henk van Loveren

Target Organ Toxicology
Ellen K. Silbergeld

Toxicology Test Methods

Biomarkers
Philippe Grandjean

Genetic Toxicity Assessment
David M. DeMarini and James Huff

In Vitro Toxicity Testing
Joanne Zurlo

Structure Activity Relationships
Ellen K. Silbergeld

Regulatory Toxicology

Toxicology In Health And Safety Regulation
Ellen K. Silbergeld

Principles Of Hazard Identification - The Japanese Approach
Masayuki Ikeda

The United States Approach to Risk Assessment Of Reproductive Toxicants and Neurotoxic Agents
Ellen K. Silbergeld

Approaches To Hazard Identification - IARC
Harri Vainio and Julian Wilbourn

Appendix - Overall Evaluations of Carcinogenicity to Humans: IARC Monographs Volumes 1-69 (836)

Carcinogen Risk Assessment: Other Approaches
Cees A. van der Heijden

Tables 


  1. Examples of critical organs & critical effects
  2. Basic effects of possible multiple interactions of metals
  3. Haemoglobin adducts in workers exposed to aniline & acetanilide
  4. Hereditary, cancer-prone disorders & defects in DNA repair
  5. Examples of chemicals that exhibit genotoxicity in human cells
  6. Classification of tests for immune markers
  7. Examples of biomarkers of exposure
  8. Pros & cons of methods for identifying human cancer risks
  9. Comparison of in vitro systems for hepatotoxicity studies
  10. Comparison of SAR & test data: OECD/NTP analyses
  11. Regulation of chemical substances by laws, Japan
  12. Test items under the Chemical Substance Control Law, Japan
  13. Chemical substances & the Chemical Substances Control Law
  14. Selected major neurotoxicity incidents
  15. Examples of specialized tests to measure neurotoxicity
  16. Endpoints in reproductive toxicology
  17. Comparison of low-dose extrapolation procedures
  18. Frequently cited models in carcinogen risk characterization

Mental Fatigue

Mental strain is a normal consequence of coping with mental workload (MWL). Long-lasting load or a high intensity of job demands can result in short-term consequences of overload (fatigue) and underload (monotony, satiation) and in long-term consequences (e.g., stress symptoms and work-related diseases). Stable regulation of actions under strain can be maintained through changes in one's action style (by varying strategies of information-seeking and decision-making), by lowering the level of need for achievement (by redefining tasks and reducing quality standards) and by a compensatory increase of psychophysiological effort, followed by a decrease of effort during work time.

This understanding of the process of mental strain can be conceptualized as a transactional process of action regulation during the imposition of loading factors, which includes not only the negative components of the strain process but also the positive aspects of learning (accretion, tuning and restructuring) and of motivation (see figure 1).

Figure 1. Components of the process of strain and its consequences


Mental fatigue can be defined as a process of time-reversible decrement of behavioural stability in performance, mood and activity after prolonged working time. This state is temporarily reversible by changing the work demands, the environmental influences or stimulation and is completely reversible by means of sleep.

Mental fatigue is a consequence of performing tasks with a high level of difficulty that involve predominantly information processing and/or are of protracted duration. In contrast with monotony, recovery from the decrements is time-consuming and does not occur suddenly after a change in task conditions. Symptoms of fatigue are identified on several levels of behavioural regulation: dysregulation in the biological homeostasis between environment and organism, dysregulation in the cognitive processes of goal-directed actions and loss of stability in goal-oriented motivation and achievement level.

Symptoms of mental fatigue can be identified in all subsystems of the human information processing system:

  • perception: reduced eye movements, reduced discrimination of signals, threshold deterioration
  • information processing: extension of decision time, action slips, decision uncertainty, blockings, “risky strategies” in action sequences, disturbances in sensorimotor coordination of movements
  • memory functions: prolongation of information in ultrashort-term storages, disturbances in the rehearsal processes in short-term memory, delay in information transmission in long-term memory and in memory searching processes.

Differential Diagnosis of Mental Fatigue

Sufficient criteria exist to differentiate among mental fatigue, monotony, mental satiation and stress (in a narrow sense) (table 1).

Table 1. Differentiation among several negative consequences of mental strain

Criteria | Mental fatigue | Monotony | Satiation | Stress
Key condition | Poor fit in terms of overload preconditions | Poor fit in terms of underload preconditions | Loss of perceived sense of tasks | Goals perceived as threatening
Mood | Tiredness without boredom, exhaustion | Tiredness with boredom | Irritability | Anxiety, threat aversion
Emotional evaluation | Neutral | Neutral | Increased affective aversion | Increased anxiety
Activation | Continuously decreased | Not continuously decreased | Increased | Increased
Recovery | Time-consuming | Suddenly after task alternation | ? | Long-term disturbances in recovery
Prevention | Task design, training, short-break system | Enrichment of job content | Goal-setting programmes and job enrichment | Job redesign, conflict and stress management

Degrees of Mental Fatigue

The well-described phenomenology of mental fatigue (Schmidtke 1965), many valid methods of assessment and the great amount of experimental and field results offer the possibility of an ordinal scaling of degrees of mental fatigue (Hacker and Richter 1994). The scaling is based on the individual’s capacity to cope with behavioural decrements:

Level 1: Optimal and efficient performance: no symptoms of decrement in performance, mood and activation level.

Level 2: Complete compensation, characterized by increased peripheral psychophysiological activation (e.g., as measured by electromyogram of finger muscles), perceived increase of mental effort and increased variability in performance criteria.

Level 3: Labile compensation, additional to that described in level 2: action slips, perceived fatigue, increasing (compensatory) psychophysiological activity in central indicators such as heart rate and blood pressure.

Level 4: Reduced efficiency, additional to that described in level 3: decrease of performance criteria.

Level 5: Yet further functional disturbances: disturbances in social relationships and cooperation in the workplace; symptoms of clinical fatigue such as loss of sleep quality and vital exhaustion.
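
Because each level adds symptoms to those described in the level below, the scale can be read as a cumulative checklist: the highest symptom group present determines the level. A minimal sketch of that reading in Python; the boolean indicator names are illustrative assumptions, not terms from Hacker and Richter (1994):

```python
def fatigue_level(increased_effort: bool, action_slips: bool,
                  reduced_performance: bool, clinical_symptoms: bool) -> int:
    """Map cumulative symptom indicators to the ordinal fatigue scale.

    Each flag marks the symptom group that first appears at levels 2-5;
    because the scale is cumulative, the highest flagged group wins.
    """
    if clinical_symptoms:      # level 5: clinical fatigue, disturbed cooperation
        return 5
    if reduced_performance:    # level 4: decrease of performance criteria
        return 4
    if action_slips:           # level 3: labile compensation
        return 3
    if increased_effort:       # level 2: complete compensation
        return 2
    return 1                   # level 1: optimal and efficient performance

print(fatigue_level(True, True, False, False))  # -> 3
```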

Prevention of Mental Fatigue

The design of task structures, of the work environment and of rest periods during working time, together with sufficient sleep, are the ways to reduce symptoms of mental fatigue so that no clinical consequences occur:

1. Changes in the structure of tasks. Designing of preconditions for adequate learning and task structuring is not only a means of furthering the development of efficient job structures, but is also essential for the prevention of a misfit in terms of mental overload or underload:

    • Information processing burdens can be relieved by developing efficient internal task representations and organization of information. The resulting enlargement of cognitive capacity will match information needs and resources more aptly.
    • Human-centred technologies with high compatibility between the order of information as it is presented and the required task (Norman 1993) will reduce the mental effort necessary for information recoding and, in consequence, relieve symptoms of fatigue and stress.
    • Well-balanced coordination of different levels of regulations (as they apply to skills, rules and knowledge) may reduce effort and, moreover, increase human reliability (Rasmussen 1983).
    • Training workers in goal-directed action sequences in advance of actual problems will lighten their sense of mental effort by making their jobs clearer, more predictable and more evidently under their control. Their psychophysiological activation level will be effectively reduced.

2. Introduction of systems of short-term breaks during work. The positive effects of such breaks depend on the observance of some preconditions: more short breaks are more efficient than fewer long breaks; the effects depend on a fixed, and therefore predictable, time schedule; and the content of the breaks should have a compensatory function relative to the physical and mental job demands.

3. Sufficient relaxation and sleep. Special employee-assistance programmes and stress-management techniques may support the ability to relax and prevent the development of chronic fatigue (Sethi, Caro and Schuler 1987).


In Vitro Toxicity Testing

The emergence of sophisticated technologies in molecular and cellular biology has spurred a relatively rapid evolution in the life sciences, including toxicology. In effect, the focus of toxicology is shifting from whole animals and populations of whole animals to the cells and molecules of individual animals and humans. Since the mid-1980s, toxicologists have begun to employ these new methodologies in assessing the effects of chemicals on living systems. As a logical progression, such methods are being adapted for the purposes of toxicity testing. These scientific advances have worked together with social and economic factors to effect change in the evaluation of product safety and potential risk.

Economic factors are specifically related to the volume of materials that must be tested. A plethora of new cosmetics, pharmaceuticals, pesticides, chemicals and household products is introduced into the market every year. All of these products must be evaluated for their potential toxicity. In addition, there is a backlog of chemicals already in use that have not been adequately tested. The enormous task of obtaining detailed safety information on all of these chemicals using traditional whole animal testing methods would be costly in terms of both money and time, if it could even be accomplished.

There are also societal issues that relate to public health and safety, as well as increasing public concern about the use of animals for product safety testing. With regard to human safety, public interest and environmental advocacy groups have placed significant pressure on government agencies to apply more stringent regulations on chemicals. A recent example of this has been a movement by some environmental groups to ban chlorine and chlorine-containing compounds in the United States. One of the motivations for such an extreme action lies in the fact that most of these compounds have never been adequately tested. From a toxicological perspective, the concept of banning a whole class of diverse chemicals based simply on the presence of chlorine is both scientifically unsound and irresponsible. Yet, it is understandable that from the public's perspective, there must be some assurance that chemicals released into the environment do not pose a significant health risk. Such a situation underscores the need for more efficient and rapid methods to assess toxicity.

The other societal concern that has impacted the area of toxicity testing is animal welfare. A growing number of animal protection groups throughout the world have voiced considerable opposition to the use of whole animals for product safety testing. Active campaigns have been waged against manufacturers of cosmetics, household and personal care products and pharmaceuticals in attempts to stop animal testing. Such efforts in Europe have resulted in the passage of the Sixth Amendment to Directive 76/768/EEC (the Cosmetics Directive). The consequence of this Directive is that cosmetic products or cosmetic ingredients that have been tested in animals after 1 January 1998 cannot be marketed in the European Union; the only exception is where alternative methods have not yet been sufficiently validated. While this Directive has no jurisdiction over the sale of such products in the United States or other countries, it will significantly affect those companies that have international markets that include Europe.

The concept of alternatives, which forms the basis for the development of tests other than those on whole animals, is defined by the three Rs: reduction in the numbers of animals used; refinement of protocols so that animals experience less stress or discomfort; and replacement of current animal tests with in vitro tests (i.e., tests done outside of the living animal), computer models or tests on lower vertebrate or invertebrate species. The three Rs were introduced in a book published in 1959 by two British scientists, W.M.S. Russell and Rex Burch, The Principles of Humane Experimental Technique. Russell and Burch maintained that the only way in which valid scientific results could be obtained was through the humane treatment of animals, and believed that methods should be developed to reduce animal use and ultimately replace it. Interestingly, the principles outlined by Russell and Burch received little attention until the resurgence of the animal welfare movement in the mid-1970s. Today the concept of the three Rs is very much in the forefront with regard to research, testing and education.

In summary, the development of in vitro test methodologies has been influenced by a variety of factors that have converged over the last 10 to 20 years. It is difficult to ascertain if any of these factors alone would have had such a profound effect on toxicity testing strategies.

Concept of In Vitro Toxicity Tests

This section will focus solely on in vitro methods for evaluating toxicity, as one of the alternatives to whole-animal testing. Additional non-animal alternatives such as computer modelling and quantitative structure-activity relationships are discussed in other articles of this chapter.

In vitro studies are generally conducted in animal or human cells or tissues outside of the body. In vitro literally means "in glass", and refers to procedures carried out on living material or components of living material cultured in petri dishes or in test tubes under defined conditions. These may be contrasted with in vivo studies, or those carried out "in the living animal". While it is difficult, if not impossible, to project the effects of a chemical on a complex organism when the observations are confined to a single type of cells in a dish, in vitro studies do provide a significant amount of information about intrinsic toxicity as well as cellular and molecular mechanisms of toxicity. In addition, they offer many advantages over in vivo studies in that they are generally less expensive and they may be conducted under more controlled conditions. Furthermore, despite the fact that small numbers of animals are still needed to obtain cells for in vitro cultures, these methods may be considered reduction alternatives (since many fewer animals are used compared to in vivo studies) and refinement alternatives (because they eliminate the need to subject the animals to the adverse toxic consequences imposed by in vivo experiments).

In order to interpret the results of in vitro toxicity tests, determine their potential usefulness in assessing toxicity and relate them to the overall toxicological process in vivo, it is necessary to understand which part of the toxicological process is being examined. The entire toxicological process consists of events that begin with the organism's exposure to a physical or chemical agent, progress through cellular and molecular interactions and ultimately manifest themselves in the response of the whole organism. In vitro tests are generally limited to the part of the toxicological process that takes place at the cellular and molecular level. The types of information that may be obtained from in vitro studies include pathways of metabolism, interaction of active metabolites with cellular and molecular targets and potentially measurable toxic endpoints that can serve as molecular biomarkers for exposure. In an ideal situation, the mechanism of toxicity of each chemical from exposure to organismal manifestation would be known, such that the information obtained from in vitro tests could be fully interpreted and related to the response of the whole organism. However, this is virtually impossible, since relatively few complete toxicological mechanisms have been elucidated. Thus, toxicologists are faced with a situation in which the results of an in vitro test cannot be used as an entirely accurate prediction of in vivo toxicity because the mechanism is unknown. However, frequently during the process of developing an in vitro test, components of the cellular and molecular mechanism(s) of toxicity are elucidated.

One of the key unresolved issues surrounding the development and implementation of in vitro tests is related to the following consideration: should they be mechanistically based, or is it sufficient for them to be descriptive? It is inarguably better from a scientific perspective to utilize only mechanistically based tests as replacements for in vivo tests. However, in the absence of complete mechanistic knowledge, the prospect of developing in vitro tests to completely replace whole-animal tests in the near future is almost nil. This does not, however, rule out the use of more descriptive types of assays as early screening tools, which is the case presently. These screens have resulted in a significant reduction in animal use. Therefore, until such time as more mechanistic information is generated, it may be necessary to employ, to a more limited extent, tests whose results simply correlate well with those obtained in vivo.

In Vitro Tests for Cytotoxicity

In this section, several in vitro tests that have been developed to assess a chemical's cytotoxic potential will be described. For the most part, these tests are easy to perform and analysis can be automated. One commonly used in vitro test for cytotoxicity is the neutral red assay. This assay is done on cells in culture, and for most applications, the cells can be maintained in culture dishes that contain 96 small wells, each 6.4 mm in diameter. Since each well can be used for a single determination, this arrangement can accommodate multiple concentrations of the test chemical as well as positive and negative controls with a sufficient number of replicates for each. Following treatment of the cells with various concentrations of the test chemical ranging over at least two orders of magnitude (e.g., from 0.01 mM to 1 mM), as well as positive and negative control chemicals, the cells are rinsed and treated with neutral red, a dye that can be taken up and retained only by live cells. The dye may be added upon removal of the test chemical to determine immediate effects, or it may be added at various times after the test chemical is removed to determine cumulative or delayed effects. The intensity of the colour in each well corresponds to the number of live cells in that well. The colour intensity is measured by a spectrophotometer which may be equipped with a plate reader. The plate reader is programmed to provide individual measurements for each of the 96 wells of the culture dish. This automated methodology permits the investigator to rapidly perform a concentration-response experiment and to obtain statistically useful data.

Another relatively simple assay for cytotoxicity is the MTT test. MTT (3[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) is a tetrazolium dye that is reduced by mitochondrial enzymes to a blue colour. Only cells with viable mitochondria will retain the ability to carry out this reaction; therefore the colour intensity is directly related to the degree of mitochondrial integrity. This is a useful test to detect general cytotoxic compounds as well as those agents that specifically target mitochondria.

The measurement of lactate dehydrogenase (LDH) activity is also used as a broad-based assay for cytotoxicity. This enzyme is normally present in the cytoplasm of living cells and is released into the cell culture medium through leaky cell membranes of dead or dying cells that have been adversely affected by a toxic agent. Small amounts of culture medium may be removed at various times after chemical treatment of the cells to measure the amount of LDH released and determine a time course of toxicity. While the LDH release assay is a very general assessment of cytotoxicity, it is useful because it is easy to perform and it may be done in real time.
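
Time-course LDH readings are commonly reduced to a percent-cytotoxicity figure by normalizing each sample between a spontaneous-release (untreated) control and a maximum-release (fully lysed) control. This normalization is a widespread convention rather than a formula given in this article, and the absorbance values below are invented. A minimal Python sketch:

```python
def percent_cytotoxicity(sample: float, spontaneous: float, maximum: float) -> float:
    """Normalize an LDH absorbance reading to percent cytotoxicity."""
    return 100.0 * (sample - spontaneous) / (maximum - spontaneous)

# Hypothetical time course: absorbance at 2, 6 and 24 hours after treatment,
# with untreated (spontaneous) and fully lysed (maximum) control readings.
for hours, reading in [(2, 0.21), (6, 0.48), (24, 0.95)]:
    pct = percent_cytotoxicity(reading, spontaneous=0.15, maximum=1.20)
    print(f"{hours:>2} h: {pct:.0f}% cytotoxicity")
```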

There are many new methods being developed to detect cellular damage. More sophisticated methods employ fluorescent probes to measure a variety of intracellular parameters, such as calcium release and changes in pH and membrane potential. In general, these probes are very sensitive and may detect more subtle cellular changes, thus reducing the need to use cell death as an endpoint. In addition, many of these fluorescent assays may be automated by the use of 96-well plates and fluorescent plate readers.

Once data have been collected on a series of chemicals using one of these tests, their relative toxicities may be determined. The relative toxicity of a chemical, as determined in an in vitro test, may be expressed as the concentration that reduces the endpoint response to 50% of that of untreated cells. This determination is referred to as the EC50 (effective concentration for 50% of the cells) and may be used to compare toxicities of different chemicals in vitro. (A similar term used in evaluating relative toxicity is IC50, indicating the concentration of a chemical that causes a 50% inhibition of a cellular process, e.g., the ability to take up neutral red.) It is not easy to assess whether the relative in vitro toxicity of chemicals is comparable to their relative in vivo toxicities, since there are so many confounding factors in the in vivo system, such as toxicokinetics, metabolism, repair and defence mechanisms. In addition, since most of these assays measure general cytotoxicity endpoints, they are not mechanistically based. Therefore, agreement between in vitro and in vivo relative toxicities is simply correlative. Despite the numerous complexities and difficulties in extrapolating from in vitro to in vivo, these in vitro tests are proving to be very valuable because they are simple and inexpensive to perform and may be used as screens to flag highly toxic drugs or chemicals at early stages of development.
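
In practice the EC50 is read off a fitted concentration-response curve rather than taken from a single well. A sketch of such a fit with a four-parameter logistic (Hill) model using SciPy; the concentrations and viability values are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(conc, bottom, top, ec50, hill):
    """Four-parameter logistic curve for concentration-response data."""
    return bottom + (top - bottom) / (1.0 + (conc / ec50) ** hill)

# Hypothetical neutral red data: concentration (mM) vs. viability (% of control).
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0])
viability = np.array([98.0, 92.0, 61.0, 22.0, 5.0])

params, _ = curve_fit(logistic4, conc, viability, p0=[0.0, 100.0, 0.1, 1.0])
print(f"estimated EC50: {params[2]:.3f} mM")
```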

Target Organ Toxicity

In vitro tests can also be used to assess specific target organ toxicity. There are a number of difficulties associated with designing such tests, the most notable being the inability of in vitro systems to maintain many of the features of the organ in vivo. Frequently, when cells are taken from animals and placed into culture, they tend to degenerate quickly, to dedifferentiate (that is, lose their organ-like functions and become more generic), or both. This presents a problem in that within a short period of time, usually a few days, the cultures are no longer useful for assessing organ-specific effects of a toxin.

Many of these problems are being overcome because of recent advances in molecular and cellular biology. Information that is obtained about the cellular environment in vivo may be utilized in modulating culture conditions in vitro. Since the mid-1980s, new growth factors and cytokines have been discovered, and many of these are now available commercially. Addition of these factors to cells in culture helps to preserve their integrity and may also help to retain more differentiated functions for longer periods of time. Other basic studies have increased the knowledge of the nutritional and hormonal requirements of cells in culture, so that new media may be formulated. Recent advances have also been made in identifying both naturally occurring and artificial extracellular matrices on which cells may be cultured. Culture of cells on these different matrices can have profound effects on both their structure and function. A major advantage derived from this knowledge is the ability to intricately control the environment of cells in culture and individually examine the effects of these factors on basic cell processes and on their responses to different chemical agents. In short, these systems can provide great insight into organ-specific mechanisms of toxicity.

Many target organ toxicity studies are conducted in primary cells, which by definition are freshly isolated from an organ and usually exhibit a finite lifetime in culture. There are many advantages to having primary cultures of a single cell type from an organ for toxicity assessment. From a mechanistic perspective, such cultures are useful for studying specific cellular targets of a chemical. In some instances, two or more cell types from an organ may be cultured together, and this provides the added advantage of being able to look at cell-cell interactions in response to a toxin. Some co-culture systems for skin have been engineered so that they form a three-dimensional structure resembling skin in vivo. It is also possible to co-culture cells from different organs, for example, liver and kidney. This type of culture would be useful in assessing the kidney-specific effects of a chemical that must be bioactivated in the liver.

Molecular biological tools have also played an important role in the development of continuous cell lines that can be useful for target organ toxicity testing. These cell lines are generated by transfecting DNA into primary cells. In the transfection procedure, the cells and the DNA are treated such that the DNA can be taken up by the cells. The DNA is usually from a virus and contains a gene or genes that, when expressed, allow the cells to become immortalized (i.e., able to live and grow for extended periods of time in culture). The DNA can also be engineered so that the immortalizing gene is controlled by an inducible promoter. The advantage of this type of construct is that the cells will divide only when they receive the appropriate chemical stimulus to allow expression of the immortalizing gene. An example of such a construct is the large T antigen gene from Simian Virus 40 (SV40) (the immortalizing gene), preceded by the promoter region of the metallothionein (MT) gene, which is induced by the presence of a metal in the culture medium. Thus, after the gene is transfected into the cells, the cells may be treated with low concentrations of zinc to stimulate the MT promoter and turn on the expression of the T antigen gene. Under these conditions, the cells proliferate. When zinc is removed from the medium, the cells stop dividing and, under ideal conditions, return to a state where they express their tissue-specific functions.

The ability to generate immortalized cells combined with the advances in cell culture technology have greatly contributed to the creation of cell lines from many different organs, including brain, kidney and liver. However, before these cell lines may be used as a surrogate for the bona fide cell types, they must be carefully characterized to determine how "normal" they really are.

Other in vitro systems for studying target organ toxicity involve increasing complexity. As in vitro systems progress in complexity from single cell to whole organ culture, they become more comparable to the in vivo milieu, but at the same time they become much more difficult to control given the increased number of variables. Therefore, what may be gained in moving to a higher level of organization can be lost in the inability of the researcher to control the experimental environment. Table 1 compares some of the characteristics of various in vitro systems that have been used to study hepatotoxicity.

Table 1. Comparison of in vitro systems for hepatotoxicity studies

System | Complexity (level of interaction) | Ability to retain liver-specific functions | Potential duration of culture | Ability to control environment
Immortalized cell lines | some cell-to-cell (varies with cell line) | poor to good (varies with cell line) | indefinite | excellent
Primary hepatocyte cultures | cell-to-cell | fair to excellent (varies with culture conditions) | days to weeks | excellent
Liver cell co-cultures | cell-to-cell (between the same and different cell types) | good to excellent | weeks | excellent
Liver slices | cell-to-cell (among all cell types) | good to excellent | hours to days | good
Isolated, perfused liver | cell-to-cell (among all cell types), and intra-organ | excellent | hours | fair

Precision-cut tissue slices are being used more extensively for toxicological studies. There are new instruments available that enable the researcher to cut uniform tissue slices in a sterile environment. Tissue slices offer some advantage over cell culture systems in that all of the cell types of the organ are present and they maintain their in vivo architecture and intercellular communication. Thus, in vitro studies may be conducted to determine the target cell type within an organ as well as to investigate specific target organ toxicity. A disadvantage of the slices is that they degenerate rapidly after the first 24 hours of culture, mainly due to poor diffusion of oxygen to the cells on the interior of the slices. However, recent studies have indicated that more efficient aeration may be achieved by gentle rotation. This, together with the use of a more complex medium, allows the slices to survive for up to 96 hours.

Tissue explants are similar in concept to tissue slices and may also be used to determine the toxicity of chemicals in specific target organs. Tissue explants are established by removing a small piece of tissue (for teratogenicity studies, an intact embryo) and placing it into culture for further study. Explant cultures have been useful for short-term toxicity studies including irritation and corrosivity in skin, asbestos studies in trachea and neurotoxicity studies in brain tissue.

Isolated perfused organs may also be used to assess target organ toxicity. These systems offer an advantage similar to that of tissue slices and explants in that all cell types are present, but without the stress to the tissue introduced by the manipulations involved in preparing slices. In addition, they allow for the maintenance of intra-organ interactions. A major disadvantage is their short-term viability, which limits their use for in vitro toxicity testing. In terms of serving as an alternative, these cultures may be considered a refinement since the animals do not experience the adverse consequences of in vivo treatment with toxicants. However, their use does not significantly decrease the numbers of animals required.

In summary, there are several types of in vitro systems available for assessing target organ toxicity. It is possible to acquire much information about mechanisms of toxicity using one or more of these techniques. The difficulty remains in knowing how to extrapolate from an in vitro system, which represents a relatively small part of the toxicological process, to the whole process occurring in vivo.

In Vitro Tests for Ocular Irritation

Perhaps the most contentious whole-animal toxicity test from an animal welfare perspective is the Draize test for eye irritation, which is conducted in rabbits. In this test, a small fixed dose of a chemical is placed in one of the rabbit's eyes while the other eye is used as a control. The degree of irritation and inflammation is scored at various times after exposure. A major effort is being made to develop methodologies to replace this test, which has been criticized not only for humane reasons, but also because of the subjectivity of the observations and variability of the results. It is interesting to note that despite the harsh criticism the Draize test has received, it has proven to be remarkably successful in predicting human eye irritants, particularly slightly to moderately irritating substances, that are difficult to identify by other methods. Thus, the demands on in vitro alternatives are great.

The quest for alternatives to the Draize test is a complicated one, albeit one that is predicted to be successful. Numerous in vitro and other alternatives have been developed and in some cases they have been implemented. Refinement alternatives to the Draize test, which, by definition, are less painful or distressful to the animals, include the Low Volume Eye Test, in which smaller amounts of test materials are placed in the rabbits' eyes, not only for humane reasons, but to more closely mimic the amounts to which people may actually be accidentally exposed. Another refinement is that substances with a pH less than 2 or greater than 11.5 are no longer tested in animals, since they are known to be severely irritating to the eye.

Between 1980 and 1989, there was an estimated 87% decline in the number of rabbits used for eye irritation testing of cosmetics. In vitro tests have been incorporated as part of a tier-testing approach to bring about this vast reduction in whole-animal tests. This approach is a multi-step process that begins with a thorough examination of the historical eye irritation data and a physical and chemical analysis of the chemical to be evaluated. If these two processes do not yield enough information, then a battery of in vitro tests is performed. The additional data obtained from the in vitro tests might then be sufficient to assess the safety of the substance. If not, then the final step is to perform limited in vivo tests. It is easy to see how this approach can eliminate or at least drastically reduce the numbers of animals needed to predict the safety of a test substance.
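
The tier-testing strategy just described is, in effect, a decision procedure. A schematic Python sketch of that flow is given below; the two predicate functions are illustrative stubs standing in for judgements the article describes only qualitatively:

```python
def sufficient_existing_data(substance: dict) -> bool:
    # Stub: review of historical eye-irritation data plus physical and
    # chemical analysis of the substance.
    return substance.get("existing_data_adequate", False)

def in_vitro_battery_sufficient(substance: dict) -> bool:
    # Stub: a battery covering, e.g., cytotoxicity, tissue physiology and
    # biochemistry, SAR, inflammation mediators, and recovery and repair.
    return substance.get("in_vitro_results_adequate", False)

def assess_eye_irritation(substance: dict) -> str:
    """Schematic tier-testing flow for eye-irritation safety assessment."""
    if sufficient_existing_data(substance):
        return "assessed from existing data"       # step 1: no new testing
    if in_vitro_battery_sufficient(substance):
        return "assessed from in vitro battery"    # step 2: non-animal tests
    return "limited in vivo testing required"      # step 3: last resort

print(assess_eye_irritation({"in_vitro_results_adequate": True}))
```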

The battery of in vitro tests that is used as part of this tier-testing strategy depends upon the needs of the particular industry. Eye irritation testing is done by a wide variety of industries from cosmetics to pharmaceuticals to industrial chemicals. The type of information required by each industry varies and therefore it is not possible to define a single battery of in vitro tests. A test battery is generally designed to assess five parameters: cytotoxicity, changes in tissue physiology and biochemistry, quantitative structure-activity relationships, inflammation mediators, and recovery and repair. An example of a test for cytotoxicity, which is one possible cause for irritation, is the neutral red assay using cultured cells (see above). Changes in cellular physiology and biochemistry resulting from exposure to a chemical may be assayed in cultures of human corneal epithelial cells. Alternatively, investigators have also used intact or dissected bovine or chicken eyeballs obtained from slaughterhouses. Many of the endpoints measured in these whole organ cultures are the same as those measured in vivo, such as corneal opacity and corneal swelling.

Inflammation is frequently a component of chemical-induced eye injury, and there are a number of assays available to examine this parameter. Various biochemical assays detect the presence of mediators released during the inflammatory process such as arachidonic acid and cytokines. The chorioallantoic membrane (CAM) of the hen's egg may also be used as an indicator of inflammation. In the CAM assay, a small piece of the shell of a ten-to-14-day chick embryo is removed to expose the CAM. The chemical is then applied to the CAM and signs of inflammation, such as vascular hemorrhaging, are scored at various times thereafter.

One of the most difficult in vivo processes to assess in vitro is recovery and repair of ocular injury. A newly developed instrument, the silicon microphysiometer, measures small changes in extracellular pH and can be used to monitor cultured cells in real time. This analysis has been shown to correlate fairly well with in vivo recovery and has been used as an in vitro test for this process. This has been a brief overview of the types of tests being employed as alternatives to the Draize test for ocular irritation. It is likely that within the next several years a complete series of in vitro test batteries will be defined and each will be validated for its specific purpose.

Validation

The key to regulatory acceptance and implementation of in vitro test methodologies is validation, the process by which the credibility of a candidate test is established for a specific purpose. Efforts to define and coordinate the validation process have been made both in the United States and in Europe. The European Union established the European Centre for the Validation of Alternative Methods (ECVAM) in 1993 to coordinate efforts there and to interact with American organizations such as the Johns Hopkins Center for Alternatives to Animal Testing (CAAT), an academic centre in the United States, and the Interagency Coordinating Committee for the Validation of Alternative Methods (ICCVAM), composed of representatives from the National Institutes of Health, the US Environmental Protection Agency, the US Food and Drug Administration and the Consumer Product Safety Commission.

Validation of in vitro tests requires substantial organization and planning. There must be consensus among government regulators and industrial and academic scientists on acceptable procedures, and sufficient oversight by a scientific advisory board to ensure that the protocols meet set standards. The validation studies should be performed in a series of reference laboratories using calibrated sets of chemicals from a chemical bank and cells or tissues from a single source. Both intralaboratory repeatability and interlaboratory reproducibility of a candidate test must be demonstrated and the results subjected to appropriate statistical analysis. Once the results from the different components of the validation studies have been compiled, the scientific advisory board can make recommendations on the validity of the candidate test(s) for a specific purpose. In addition, results of the studies should be published in peer-reviewed journals and placed in a database.
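
Demonstrating intralaboratory repeatability and interlaboratory reproducibility typically amounts to partitioning assay variability within and between laboratories. A minimal NumPy sketch under that reading, using invented EC50 replicates from three hypothetical laboratories:

```python
import numpy as np

# Invented EC50 values (mM): rows = laboratories, columns = replicate runs.
ec50 = np.array([[0.110, 0.118, 0.105],
                 [0.126, 0.131, 0.122],
                 [0.098, 0.104, 0.101]])

lab_means = ec50.mean(axis=1)

# Intralaboratory repeatability: mean within-lab coefficient of variation.
within_cv = (ec50.std(axis=1, ddof=1) / lab_means).mean()

# Interlaboratory reproducibility: spread of the laboratory means.
between_cv = lab_means.std(ddof=1) / lab_means.mean()

print(f"within-lab CV:  {within_cv:.1%}")
print(f"between-lab CV: {between_cv:.1%}")
```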

The definition of the validation process is currently a work in progress. Each new validation study will provide information useful to the design of the next study. International communication and cooperation are essential for the expeditious development of a widely acceptable series of protocols, particularly given the increased urgency imposed by the passage of the EC Cosmetics Directive. This legislation may indeed provide the needed impetus for a serious validation effort to be undertaken. It is only through completion of this process that the acceptance of in vitro methods by the various regulatory communities can commence.

Conclusion

This article has provided a broad overview of the current status of in vitro toxicity testing. The science of in vitro toxicology is relatively young, but it is growing exponentially. The challenge for the years ahead is to incorporate the mechanistic knowledge generated by cellular and molecular studies into the vast inventory of in vivo data to provide a more complete description of toxicological mechanisms as well as to establish a paradigm by which in vitro data may be used to predict toxicity in vivo. It will only be through the concerted efforts of toxicologists and government representatives that the inherent value of these in vitro methods can be realized.


Structure Activity Relationships

Structure-activity relationship (SAR) analysis is the use of information on the molecular structure of chemicals to predict important characteristics related to persistence, distribution, uptake and absorption, and toxicity. SAR is an alternative method of identifying potentially hazardous chemicals, which holds promise of assisting industries and governments in prioritizing substances for further evaluation or for early-stage decision making on new chemicals. Toxicology is an increasingly expensive and resource-intensive undertaking. Increased concerns over the potential for chemicals to cause adverse effects in exposed human populations have prompted regulatory and health agencies to expand the range and sensitivity of tests to detect toxicological hazards. At the same time, the real and perceived burdens of regulation upon industry have provoked concerns about the practicality of toxicity testing methods and data analysis. At present, the determination of chemical carcinogenicity depends upon lifetime testing of at least two species, both sexes, at several doses, with careful histopathological analysis of multiple organs, as well as detection of preneoplastic changes in cells and target organs. In the United States, the cancer bioassay is estimated to cost in excess of $3 million (1995 dollars).

Even with unlimited financial resources, the burden of testing the approximately 70,000 existing chemicals produced in the world today would exceed the available resources of trained toxicologists. Centuries would be required to complete even a first-tier evaluation of these chemicals (NRC 1984). In many countries ethical concerns over the use of animals in toxicity testing have increased, bringing additional pressures upon the uses of standard methods of toxicity testing. SAR has been widely used in the pharmaceutical industry to identify molecules with potential for beneficial use in treatment (Hansch and Zhang 1993). In environmental and occupational health policy, SAR is used to predict the dispersion of compounds in the physical-chemical environment and to screen new chemicals for further evaluation of potential toxicity. Under the US Toxic Substances Control Act (TSCA), the EPA has, since 1979, used an SAR approach as a "first screen" of new chemicals in the premanufacture notification (PMN) process; Australia uses a similar approach as part of its new chemicals notification (NICNAS) procedure. In the US, SAR analysis is an important basis for determining that there is a reasonable basis to conclude that the manufacture, processing, distribution, use or disposal of a substance will present an unreasonable risk of injury to human health or the environment, as required by Section 5(f) of TSCA. On the basis of this finding, EPA can then require actual tests of the substance under Section 6 of TSCA.

Rationale for SAR

The scientific rationale for SAR is based upon the assumption that the molecular structure of a chemical will predict important aspects of its behaviour in physical-chemical and biological systems (Hansch and Leo 1979).

SAR Process

The SAR review process includes identification of the chemical structure, including empirical formulations as well as the pure compound; identification of structurally analogous substances; searching databases and literature for information on structural analogs; and analysis of toxicity and other data on structural analogs. In some rare cases, information on the structure of the compound alone can be sufficient to support some SAR analysis, based upon well-understood mechanisms of toxicity. Several databases on SAR have been compiled, as well as computer-based methods for molecular structure prediction.

With this information, the following endpoints can be estimated with SAR (a sketch of one such estimation follows the list):

• physical-chemical parameters: boiling point, vapour pressure, water solubility, octanol/water partition coefficient
• biological/environmental fate parameters: biodegradation, soil sorption, photodegradation, pharmacokinetics
• toxicity parameters: aquatic organism toxicity, absorption, acute mammalian toxicity (limit test or LD50), dermal, lung and eye irritation, sensitization, subchronic toxicity, mutagenicity.
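
Of these endpoints, the physical-chemical parameters are the most direct to estimate from structure alone. A small sketch using the open-source RDKit toolkit (a modern tool chosen here for illustration, not one named in this article) to compute a predicted octanol/water logP and molecular weight from a SMILES string:

```python
from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors

# Aniline as a SMILES string; substitute any structure of interest.
mol = Chem.MolFromSmiles("c1ccccc1N")

print(f"molecular weight: {Descriptors.MolWt(mol):.1f}")
print(f"predicted logP (octanol/water): {Crippen.MolLogP(mol):.2f}")
```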

       

It should be noted that SAR methods do not exist for such important health endpoints as carcinogenicity, developmental toxicity, reproductive toxicity, neurotoxicity, immunotoxicity or other target organ effects. This is due to three factors: the lack of a large database upon which to test SAR hypotheses, the lack of knowledge of structural determinants of toxic action, and the multiplicity of target cells and mechanisms that are involved in these endpoints (see "The United States approach to risk assessment of reproductive toxicants and neurotoxic agents"). Some limited attempts have been made to utilize SAR for predicting pharmacokinetics, using information on partition coefficients and solubility (Johanson and Naslund 1988). More extensive quantitative SAR has been done to predict P450-dependent metabolism of a range of compounds and the binding of dioxin- and PCB-like molecules to the cytosolic "dioxin" receptor (Hansch and Zhang 1993).

SAR has been shown to have varying predictability for some of the endpoints listed above, as shown in table 1. This table presents data from two comparisons of predicted activity with actual results obtained by empirical measurement or toxicity testing. SAR as conducted by US EPA experts performed more poorly for predicting physical-chemical properties than for predicting biological activity, including biodegradation. For toxicity endpoints, SAR performed best for predicting mutagenicity. Ashby and Tennant (1991), in a more extended study, also found good predictability of short-term genotoxicity in their analysis of NTP chemicals. These findings are not surprising, given current understanding of molecular mechanisms of genotoxicity (see "Genetic toxicology") and the role of electrophilicity in DNA binding. In contrast, SAR tended to underpredict systemic and subchronic toxicity in mammals and to overpredict acute toxicity to aquatic organisms.

Table 1. Comparison of SAR and test data: OECD/NTP analyses

Endpoint | Agreement (%) | Disagreement (%) | Number
Boiling point | 50 | 50 | 30
Vapour pressure | 63 | 37 | 113
Water solubility | 68 | 32 | 133
Partition coefficient | 61 | 39 | 82
Biodegradation | 93 | 7 | 107
Fish toxicity | 77 | 22 | 130
Daphnia toxicity | 67 | 33 | 127
Acute mammalian toxicity (LD50) | 80 | 20 (1) | 142
Skin irritation | 82 | 18 | 144
Eye irritation | 78 | 22 | 144
Skin sensitization | 84 | 16 | 144
Subchronic toxicity | 57 | 32 | 143
Mutagenicity (2) | 88 | 12 | 139
Mutagenicity (3) | 82–94 (4) | 1–10 | 301
Carcinogenicity (3): two-year bioassay | 72–95 (4) | – | 301

Source: Data from OECD, personal communication, C. Auer, US EPA. Only those endpoints for which comparable SAR predictions and actual test data were available were used in this analysis. NTP data are from Ashby and Tennant 1991.

(1) Of concern was the failure by SAR to predict acute toxicity in 12% of the chemicals tested.

(2) OECD data, based on Ames test concordance with SAR.

(3) NTP data, based on genetox assays compared to SAR predictions for several classes of "structurally alerting chemicals".

(4) Concordance varies with class; highest concordance was with aromatic amino/nitro compounds; lowest with "miscellaneous" structures.
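
The agreement figures in table 1 are simple concordance percentages: the share of chemicals for which the SAR prediction matched the measured outcome. A brief illustration in Python with invented paired results:

```python
# Invented example: paired SAR predictions and test outcomes for ten chemicals.
predicted = [True, True, False, True, False, False, True, True, False, True]
observed  = [True, False, False, True, False, True, True, True, False, True]

agreement = sum(p == o for p, o in zip(predicted, observed)) / len(predicted)
print(f"concordance: {agreement:.0%}")  # -> 80%
```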

For other toxic endpoints, as noted above, SAR has less demonstrable utility. Mammalian toxicity predictions are complicated by the lack of SAR for toxicokinetics of complex molecules. Nevertheless, some attempts have been made to propose SAR principles for complex mammalian toxicity endpoints (for instance, see Bernstein (1984) for an SAR analysis of potential male reproductive toxicants). In most cases, the database is too small to permit rigorous testing of structure-based predictions.

At this point it may be concluded that SAR may be useful mainly for prioritizing the investment of toxicity testing resources or for raising early concerns about potential hazard. Only in the case of mutagenicity is it likely that SAR analysis by itself can be utilized with reliability to inform other decisions. For no endpoint is it likely that SAR can provide the type of quantitative information required for risk assessment purposes as discussed elsewhere in this chapter and Encyclopaedia.

      Overview

In the 3rd edition of the ILO’s Encyclopaedia, published in 1983, ergonomics was summarized in one article that was only about four pages long. Since the publication of the 3rd edition, there has been a major change in emphasis and in understanding of interrelationships in safety and health: the world is no longer easily classifiable into medicine, safety and hazard prevention. In the last decade almost every branch of the production and service industries has expended great effort in improving productivity and quality. This restructuring process has yielded practical experience which clearly shows that productivity and quality are directly related to the design of working conditions. One direct economic measure of productivity—the cost of absenteeism through illness—is affected by working conditions. Therefore it should be possible to increase productivity and quality and to avoid absenteeism by paying more attention to the design of working conditions.

      In sum, the simple hypothesis of modern ergonomics can be stated thus: Pain and exhaustion cause health hazards, wasted productivity and reduced quality, which are measures of the costs and benefits of human work.

This simple hypothesis can be contrasted to occupational medicine, which generally restricts itself to establishing the aetiology of occupational diseases. Occupational medicine’s goal is to establish conditions under which the probability of developing such diseases is minimized. Using ergonomic principles, these conditions can be most easily formulated in the form of demands and load limitations. Occupational medicine can be summed up as establishing “limitations through medico-scientific studies”. Traditional ergonomics regards its role as one of formulating the methods by which, through design and work organization, the limitations established by occupational medicine can be put into practice. Traditional ergonomics could then be described as developing “corrections through scientific studies”, where “corrections” are understood to be all work design recommendations that call for attention to be paid to load limits only in order to prevent health hazards. It is a characteristic of such corrective recommendations that practitioners are finally left alone with the problem of applying them—there is no multidisciplinary team effort.

The original aim of ergonomics, as conceived in 1857, stands in contrast to this kind of “ergonomics by correction”:

      ... a scientific approach enabling us to reap, for the benefit of ourselves and others, the best fruits of life’s labour for the minimum effort and maximum satisfaction (Jastrzebowski 1857).

The term “ergonomics” derives from the Greek “ergon”, meaning work, and “nomos”, meaning rule or law. One could propose that ergonomics should develop “rules” for a more forward-looking, prospective concept of design. In contrast to “corrective ergonomics”, the idea of prospective ergonomics is based on applying ergonomic recommendations which simultaneously take into consideration profitability margins (Laurig 1992).

      The basic rules for the development of this approach can be deduced from practical experience and reinforced by the results of occupational hygiene and ergonomics research. In other words, prospective ergonomics means searching for alternatives in work design which prevent fatigue and exhaustion on the part of the working subject in order to promote human productivity (“... for the benefit of ourselves and others”). This comprehensive approach of prospective ergonomics includes workplace and equipment design as well as the design of working conditions determined by an increasing amount of information processing and a changing work organization. Prospective ergonomics is, therefore, an interdisciplinary approach of researchers and practitioners from a wide range of fields united by the same goal, and one part of a general basis for a modern understanding of occupational safety and health (UNESCO 1992).

      Based on this understanding, the Ergonomics chapter in the 4th edition of the ILO Encyclopaedia covers the different clusters of knowledge and experiences oriented toward worker characteristics and capabilities, and aimed at an optimum use of the resource “human work” by making work more “ergonomic”, that is, more humane.

      The choice of topics and the structure of articles in this chapter follows the structure of typical questions in the field as practised in industry. Beginning with the goals, principles and methods of ergonomics, the articles which follow cover fundamental principles from basic sciences, such as physiology and psychology. Based on this foundation, the next articles introduce major aspects of an ergonomic design of working conditions ranging from work organization to product design. “Designing for everyone” puts special emphasis on an ergonomic approach that is based on the characteristics and capabilities of the worker, a concept often overlooked in practice. The importance and diversity of ergonomics is shown in two examples at the end of the chapter and can also be found in the fact that many other chapters in this edition of the ILO Encyclopaedia are directly related to ergonomics, such as Heat and Cold, Noise, Vibration, Visual Display Units, and virtually all chapters in the sections Accident and Safety Management and Management and Policy.

       


      Work Organization

      Design of Production Systems

Many companies invest millions in computer-supported production systems and at the same time fail to make full use of their human resources, whose value can be significantly increased through investments in training. In fact, using the potential of qualified employees instead of highly complex automation can not only, in certain circumstances, significantly reduce investment costs, it can also greatly increase flexibility and system capability.

      Causes of Inefficient Use of Technology

The improvements which investments in modern technology are intended to achieve are frequently not even approximately attained (Strohm, Kuark and Schilling 1993; Ulich 1994). The most important reasons lie in problems relating to technology, organization and employee qualifications.

      Three main causes can be identified for problems with technology:

1. Insufficient technology. Because of the rapidity of technological change, new technology reaching the market has sometimes undergone insufficient usability testing, and unplanned downtime can result.
        2. Unsuitable technology. Technology developed for large companies is often not suitable for smaller companies. When a small firm introduces a production planning and control system developed for a large company, it may deprive itself of the flexibility necessary for its success or even survival.
        3. Excessively complex technology. When designers and developers use their entire planning knowledge to realize what is technically feasible without taking into account the experience of those involved in production, the result can be complex automated systems which are no longer easy to master.

             

            Problems with organization are primarily attributable to continuous attempts at implementing the latest technology in unsuitable organizational structures. For instance, it makes little sense to introduce third, fourth and fifth generation computers into second generation organizations. But this is exactly what many companies do (Savage and Appleton 1988). In many companies, a radical restructuring of the organization is a precondition for the successful use of new technology. This particularly includes an examination of the concepts of production planning and control. Ultimately, local self-control by qualified operators can in certain circumstances be significantly more efficient and economical than a technically highly developed production planning and control system.

Problems with the qualifications of employees primarily arise because a large number of companies do not recognize the need for qualification measures in conjunction with the introduction of computer-supported production systems. In addition, training is too frequently regarded as a cost factor to be controlled and minimized, rather than as a strategic investment. In fact, system downtime and the resulting costs can often be effectively reduced by allowing faults to be diagnosed and remedied on the basis of operators’ competence and system-specific knowledge and experience. This is particularly the case in tightly coupled production facilities (Köhler et al. 1989). The same applies to the introduction of new products or product variants. Many examples of the inefficient use of excessively complex technology testify to such relationships.

            The consequence of the analysis briefly presented here is that the introduction of computer-supported production systems only promises success if it is integrated into an overall concept which seeks to jointly optimize the use of technology, the structure of the organization and the enhancement of staff qualifications.

            From the Task to the Design of Socio-Technical Systems

Work-related psychological concepts of production design are based on the primacy of the task. On the one hand, the task forms the interface between individual and organization (Volpert 1987). On the other hand, the task links the social subsystem with the technical subsystem. “The task must be the point of articulation between the social and technical system—linking the job in the technical system with its correlated role behaviour, in the social system” (Blumberg 1988).

This means that a socio-technical system, for example, a production island, is primarily defined by the task which it has to perform. The distribution of work between human and machine plays a central role, because it decides whether the person “functions” as the long arm of the machine, with a leftover function in an automation “gap”, or whether the machine functions as the long arm of the person, with a tool function supporting human capabilities and competence. We refer to these opposing positions as “technology-oriented” and “work-oriented” (Ulich 1994).

            The Concept of Complete Task

            The principle of complete activity (Hacker 1986) or complete task plays a central role in work-related psychological concepts for defining work tasks and for dividing up tasks between human and machine. Complete tasks are those “over which the individual has considerable personal control” and that “induce strong forces within the individual to complete or to continue them”. Complete tasks contribute to the “development of what has been described ... as ‘task orientation’—that is, a state of affairs in which the individual’s interest is aroused, engaged and directed by the character of the task” (Emery 1959). Figure 1 summarizes characteristics of completeness which must be taken into account for measures geared towards work-oriented design of production systems.

            Figure 1. Characteristics of complete tasks

ERG160T1
            Illustrations of concrete consequences for production design arising from the principle of the complete task are the following:
             
              1. The independent setting of objectives, which can be incorporated into higher-order goals, requires turning away from central planning and control in favour of decentralized shop-floor control, which provides the possibility of making self-determined decisions within defined periods of time.
              2. Self-determined preparation for action, in the sense of carrying out planning functions, requires the integration of work preparation tasks on the shop-floor.
              3. Selecting methods means, for example, allowing a designer to decide whether he or she wishes to use the drawing board instead of an automated system (such as a CAD application) to perform certain subtasks, provided that it is ensured that data required for other parts of the process are entered in the system.
4. Performance functions with process feedback for correcting actions where appropriate require, in the case of encapsulated work processes, “windows to the process” which help to minimize process distance.
              5. Action control with feedback of results means that shop-floor workers take on the function of quality inspection and control.

                       

                      These indications of the consequences arising from realizing the principle of the complete task make two things clear: (1) in many cases—probably even the majority of cases—complete tasks in the sense described in figure 1 can only be structured as group tasks on account of the resulting complexity and the associated scope; (2) restructuring of work tasks—particularly when it is linked to introducing group work—requires their integration into a comprehensive restructuring concept which covers all levels of the company.

                      The structural principles which apply to the various levels are summarized in table 1.

                      Table 1. Work-oriented principles for production structuring

Organizational level      Structural principle

Company                   Decentralization
Organizational unit       Functional integration
Group                     Self-regulation¹
Individual                Skilled production work¹

¹ Taking into account the principle of differential work design.

                      Source: Ulich 1994.

                      Possibilities for realizing the principles for production structuring outlined in table 1 are illustrated by the proposal for restructuring a production company shown in figure 2. This proposal, which was unanimously approved both by those responsible for production and by the project group formed for the purpose of restructuring, also demonstrates a fundamental turning away from Tayloristic concepts of labour and authority divisions. The examples of many companies show that the restructuring of work and organization structures on the basis of such models is able to meet both work psychological criteria of promoting health and personality development and the demand for long-term economic efficiency (see Ulich 1994).

                      Figure 2. Proposal for restructuring a production company

                      ERG160F1

                      The line of argument favoured here—only very briefly outlined for reasons of space—seeks to make three things clear:

                        1. Concepts like the ones mentioned here represent an alternative to “lean production” in the sense described by Womack, Jones and Roos (1990). While in the latter approach “every free space is removed” and extreme breaking down of work activities in the Tayloristic sense is maintained, in the approach being advanced in these pages, complete tasks in groups with wide-ranging self-regulation play a central role.
                        2. Classical career paths for skilled workers are modified and in some cases precluded by the necessary realization of the functional integration principle, that is, with the reintegration on the shop-floor of what are known as indirectly productive functions, such as shop-floor work preparation, maintenance, quality control and so forth. This requires a fundamental reorientation in the sense of replacing the traditional career culture with a competence culture.
                        3. Concepts such as those mentioned here mean a fundamental change to corporate power structures which must find their counterpart in the development of corresponding possibilities for participation.

                             

                            Workers’ Participation

In the previous sections, types of work organization were described whose basic characteristic is democratization at the lower levels of an organization’s hierarchy, through increased autonomy and decision latitude regarding work content as well as working conditions on the shop-floor. In this section, democratization is approached from a different angle by looking at participative decision-making in general. First, a definitional framework for participation is presented, followed by a discussion of research on the effects of participation. Finally, participative systems design is looked at in some detail.

                            Definitional framework for participation

                            Organizational development, leadership, systems design, and labour relations are examples of the variety of tasks and contexts where participation is considered relevant. A common denominator which can be regarded as the core of participation is the opportunity for individuals and groups to promote their interests through influencing the choice between alternative actions in a given situation (Wilpert 1989). In order to describe participation in more detail, a number of dimensions are necessary, however. Frequently suggested dimensions are (a) formal-informal, (b) direct-indirect, (c) degree of influence and (d) content of decision (e.g., Dachler and Wilpert 1978; Locke and Schweiger 1979). Formal participation refers to participation within legally or otherwise prescribed rules (e.g., bargaining procedures, guidelines for project management), while informal participation is based on non-prescribed exchanges, for example, between supervisor and subordinate. Direct participation allows for direct influence by the individuals concerned, whereas indirect participation functions through a system of representation. Degree of influence is usually described by means of a scale ranging from “no information to employees about a decision”, through “advance information to employees” and “consultation with employees” to “common decision of all parties involved”. As regards the giving of advance information without any consultation or common decision-making, some authors argue that this is not a low level of participation at all, but merely a form of “pseudo-participation” (Wall and Lischeron 1977). Finally, the content area for participative decision-making can be specified, for example, technological or organizational change, labour relations, or day-to-day operational decisions.

                            A classification scheme quite different from those derived from the dimensions presented so far was developed by Hornby and Clegg (1992). Based on work by Wall and Lischeron (1977), they distinguish three aspects of participative processes:

                              1. the types and levels of interactions between the parties involved in a decision
                              2. the flow of information between the participants
                              3. the nature and degree of influence the parties exert on each other.

                                   

                                  They then used these aspects to complement a framework suggested by Gowler and Legge (1978), which describes participation as a function of two organizational variables, namely, type of structure (mechanistic versus organic) and type of process (stable versus unstable). As this model includes a number of assumptions about participation and its relationship to organization, it cannot be used to classify general types of participation. It is presented here as one attempt to define participation in a broader context (see table 2). (In the last section of this article, Hornby and Clegg’s study (1992) will be discussed, which also aimed at testing the model’s assumptions.)

                                  Table 2. Participation in organizational context

                                   

                          Organizational structure
Organizational processes  Mechanistic                                  Organic

Stable                    Regulated                                    Open
                          Interaction: vertical/command                Interaction: lateral/consultative
                          Information flow: non-reciprocal             Information flow: reciprocal
                          Influence: asymmetrical                      Influence: asymmetrical

Unstable                  Arbitrary                                    Regulated
                          Interaction: ritualistic/random              Interaction: intensive/random
                          Information flow: non-reciprocal/sporadic    Information flow: reciprocal/interrogative
                          Influence: authoritarian                     Influence: paternalistic

                                  Source: Adapted from Hornby and Clegg 1992.

                                  An important dimension usually not included in classifications for participation is the organizational goal behind choosing a participative strategy (Dachler and Wilpert 1978). Most fundamentally, participation can take place in order to comply with a democratic norm, irrespective of its influence on the effectiveness of the decision-making process and the quality of the decision outcome and implementation. On the other hand, a participative procedure can be chosen to benefit from the knowledge and experience of the individuals involved or to ensure acceptance of a decision. Often it is difficult to identify the objectives behind choosing a participative approach to a decision and often several objectives will be found at the same time, so that this dimension cannot be easily used to classify participation. However, for understanding participative processes it is an important dimension to keep in mind.

                                  Research on the effects of participation

                                  A widely shared assumption holds that satisfaction as well as productivity gains can be achieved by providing the opportunity for direct participation in decision-making. Overall, research has supported this assumption, but the evidence is not unequivocal and many of the studies have been criticized on theoretical and methodological grounds (Cotton et al. 1988; Locke and Schweiger 1979; Wall and Lischeron 1977). Cotton et al. (1988) argued that inconsistent findings are due to differences in the form of participation studied; for instance, informal participation and employee ownership are associated with high productivity and satisfaction whereas short-term participation is ineffective in both respects. Although their conclusions were strongly criticized (Leana, Locke and Schweiger 1990), there is agreement that participation research is generally characterized by a number of deficiencies, ranging from conceptual problems like those mentioned by Cotton et al. (1988) to methodological issues like variations in results based on different operationalizations of the dependent variables (e.g., Wagner and Gooding 1987).

                                  To exemplify the difficulties of participation research, the classic study by Coch and French (1948) is briefly described, followed by the critique of Bartlem and Locke (1981). The focus of the former study was overcoming resistance to change by means of participation. Operators in a textile plant where frequent transfers between work tasks occurred were given the opportunity to participate in the design of their new jobs to varying degrees. One group of operators participated in the decisions (detailed working procedures for new jobs and piece rates) through chosen representatives, that is, several operators of their group. In two smaller groups, all operators participated in those decisions and a fourth group served as control with no participation allowed. Previously it had been found in the plant that most operators resented being transferred and were slower in relearning their new jobs as compared with learning their first job in the plant and that absenteeism and turnover among transferred operators was higher than among operators not recently transferred.

This occurred despite the fact that a transfer bonus was given to compensate for the initial loss in piece-rate earnings after a transfer to a new job. Comparing the three experimental conditions, it was found that the group with no participation remained at a low level of production—which had been set as the group standard—for the first month after the transfer, while the groups with full participation recovered their former productivity within a few days and even exceeded it at the end of the month. The third group, which participated through chosen representatives, did not recover as fast, but reached its former productivity after a month. (They also had insufficient material to work on for the first week, however.) No turnover occurred in the groups with participation and little aggression towards management was observed. The turnover in the group without participation was 17% and the attitude towards management was generally hostile. The group with no participation was broken up after one month and brought together again after another two and one-half months to work on a new job, and this time they were given the opportunity to participate in the design of their job. They then showed the same pattern of recovery and increased productivity as the groups with participation in the first experiment. The results were explained by Coch and French on the basis of a general model of resistance to change derived from work by Lewin (1951, see below).

                                  Bartlem and Locke (1981) argued that these findings could not be interpreted as support for the positive effects of participation because there were important differences between the groups as regards the explanation of the need for changes in the introductory meetings with management, the amount of training received, the way the time studies were carried out to set the piece rate, the amount of work available and group size. They assumed that perceived fairness of pay rates and general trust in management contributed to the better performance of the participation groups, not participation per se.

                                  In addition to the problems associated with research on the effects of participation, very little is known about the processes that lead to these effects (e.g., Wilpert 1989). In a longitudinal study on the effects of participative job design, Baitsch (1985) described in detail processes of competence development in a number of shop-floor employees. His study can be linked to Deci’s (1975) theory of intrinsic motivation based on the need for being competent and self-determining. A theoretical framework focusing on the effects of participation on the resistance to change was suggested by Lewin (1951) who argued that social systems gain a quasi-stationary equilibrium which is disturbed by any attempt at change. For the change to be successfully carried through, forces in favour of the change must be stronger than the resisting forces. Participation helps in reducing the resisting forces as well as in increasing the driving forces because reasons for resistance can be openly discussed and dealt with, and individual concerns and needs can be integrated into the proposed change. Additionally, Lewin assumed that common decisions resulting from participatory change processes provide the link between the motivation for change and the actual changes in behaviour.

                                  Participation in systems design

                                  Given the—albeit not completely consistent—empirical support for the effectiveness of participation, as well as its ethical underpinnings in industrial democracy, there is widespread agreement that for the purposes of systems design a participative strategy should be followed (Greenbaum and Kyng 1991; Majchrzak 1988; Scarbrough and Corbett 1992). Additionally, a number of case studies on participative design processes have demonstrated the specific advantages of participation in systems design, for example, regarding the quality of the resulting design, user satisfaction, and acceptance (i.e., actual use) of the new system (Mumford and Henshall 1979; Spinas 1989; Ulich et al. 1991).

                                  The important question then is not the if, but the how of participation. Scarbrough and Corbett (1992) provided an overview of various types of participation in the various stages of the design process (see table 3). As they point out, user involvement in the actual design of technology is rather rare and often does not extend beyond information distribution. Participation mostly occurs in the latter stages of implementation and optimization of the technical system and during the development of socio-technical design options, that is, options of organizational and job design in combination with options for the use of the technical system.

                                  Table 3. User participation in the technology process

                                   

Phases of the             Type of participation
technology process        Formal                          Informal

Design                    Trade union consultation        User redesign
                          Prototyping

Implementation            New technology agreements       Skills bargaining
                          Collective bargaining           Negotiation
                                                          User cooperation

Use                       Job design                      Informal job redesign
                          Quality circles                 and work practices

Source: Adapted from Scarbrough and Corbett 1992.

Besides resistance among managers and engineers to the involvement of users in the design of technical systems, and potential restrictions embedded in the formal participation structure of a company, an important difficulty concerns the need for methods that allow the discussion and evaluation of systems that do not yet exist (Grote 1994). In software development, usability labs can help to overcome this difficulty, as they provide an opportunity for early testing by future users.

                                  In looking at the process of systems design, including participative processes, Hirschheim and Klein (1989) have stressed the effects of implicit and explicit assumptions of system developers and managers about basic topics such as the nature of social organization, the nature of technology and their own role in the development process. Whether system designers see themselves as experts, catalysts or emancipators will greatly influence the design and implementation process. Also, as mentioned before, the broader organizational context in which participative design takes place has to be taken into account. Hornby and Clegg (1992) provided some evidence for the relationship between general organizational characteristics and the form of participation chosen (or, more precisely, the form evolving in the course of system design and implementation). They studied the introduction of an information system which was carried out within a participative project structure and with explicit commitment to user participation. However, users reported that they had had little information about the changes supposed to take place and low levels of influence over system design and related questions like job design and job security. This finding was interpreted in terms of the mechanistic structure and unstable processes of the organization that fostered “arbitrary” participation instead of the desired open participation (see table 2).

                                  In conclusion, there is sufficient evidence demonstrating the benefits of participative change strategies. However, much still needs to be learned about the underlying processes and influencing factors that bring about, moderate or prevent these positive effects.

                                   


                                  Goals, Definitions and General Information

                                  Work is essential for life, development and personal fulfilment. Unfortunately, indispensable activities such as food production, extraction of raw materials, manufacturing of goods, energy production and services involve processes, operations and materials which can, to a greater or lesser extent, create hazards to the health of workers and those in nearby communities, as well as to the general environment.

                                  However, the generation and release of harmful agents in the work environment can be prevented, through adequate hazard control interventions, which not only protect workers’ health but also limit the damage to the environment often associated with industrialization. If a harmful chemical is eliminated from a work process, it will neither affect the workers nor go beyond, to pollute the environment.

                                  The profession that aims specifically at the prevention and control of hazards arising from work processes is occupational hygiene. The goals of occupational hygiene include the protection and promotion of workers’ health, the protection of the environment and contribution to a safe and sustainable development.

                                  The need for occupational hygiene in the protection of workers’ health cannot be overemphasized. Even when feasible, the diagnosis and the cure of an occupational disease will not prevent further occurrences, if exposure to the aetiological agent does not cease. So long as the unhealthy work environment remains unchanged, its potential to impair health remains. Only the control of health hazards can break the vicious circle illustrated in figure 1.

                                  Figure 1. Interactions between people and the environment

                                  IHY010F1

                                  However, preventive action should start much earlier, not only before the manifestation of any health impairment but even before exposure actually occurs. The work environment should be under continuous surveillance so that hazardous agents and factors can be detected and removed, or controlled, before they cause any ill effects; this is the role of occupational hygiene.

                                  Furthermore, occupational hygiene may also contribute to a safe and sustainable development, that is “to ensure that (development) meets the needs of the present without compromising the ability of the future generations to meet their own needs” (World Commission on Environment and Development 1987). Meeting the needs of the present world population without depleting or damaging the global resource base, and without causing adverse health and environmental consequences, requires knowledge and means to influence action (WHO 1992a); when related to work processes this is closely related to occupational hygiene practice.


                                  Occupational health requires a multidisciplinary approach and involves fundamental disciplines, one of which is occupational hygiene, along with others which include occupational medicine and nursing, ergonomics and work psychology. A schematic representation of the scopes of action for occupational physicians and occupational hygienists is presented in figure 2.

                                  Figure 2. Scopes of action for occupational physicians and occupational hygienists.

                                  IHY010F2

                                  It is important that decision makers, managers and workers themselves, as well as all occupational health professionals, understand the essential role that occupational hygiene plays in the protection of workers’ health and of the environment, as well as the need for specialized professionals in this field. The close link between occupational and environmental health should also be kept in mind, since the prevention of pollution from industrial sources, through the adequate handling and disposal of hazardous effluents and waste, should be started at the workplace level. (See “Evaluation of the work environment”).


                                  Concepts and Definitions

                                  Occupational hygiene

                                  Occupational hygiene is the science of the anticipation, recognition, evaluation and control of hazards arising in or from the workplace, and which could impair the health and well-being of workers, also taking into account the possible impact on the surrounding communities and the general environment.

                                  Definitions of occupational hygiene may be presented in different ways; however, they all have essentially the same meaning and aim at the same fundamental goal of protecting and promoting the health and well-being of workers, as well as protecting the general environment, through preventive actions in the workplace.

                                  Occupational hygiene is not yet universally recognized as a profession; however, in many countries, framework legislation is emerging that will lead to its establishment.


                                  Occupational hygienist

                                   An occupational hygienist is a professional able to:

                                  • anticipate the health hazards that may result from work processes, operations and equipment, and accordingly advise on their planning and design
                                  • recognize and understand, in the work environment, the occurrence (real or potential) of chemical, physical and biological agents and other stresses, and their interactions with other factors, which may affect the health and well-being of workers
                                  • understand the possible routes of agent entry into the human body, and the effects that such agents and other factors may have on health
                                  • assess workers’ exposure to potentially harmful agents and factors and to evaluate the results
• evaluate work processes and methods, from the point of view of the possible generation and release/propagation of potentially harmful agents and other factors, with a view to eliminating exposures, or reducing them to acceptable levels
                                  • design, recommend for adoption, and evaluate the effectiveness of control strategies, alone or in collaboration with other professionals to ensure effective and economical control
                                  • participate in overall risk analysis and management of an agent, process or workplace, and contribute to the establishment of priorities for risk management
                                  • understand the legal framework for occupational hygiene practice in their own country
                                  • educate, train, inform and advise persons at all levels, in all aspects of hazard communication
                                  • work effectively in a multidisciplinary team involving other professionals
                                  • recognize agents and factors that may have environmental impact, and understand the need to integrate occupational hygiene practice with environmental protection.

                                   

                                  It should be kept in mind that a profession consists not only of a body of knowledge, but also of a Code of Ethics; national occupational hygiene associations, as well as the International Occupational Hygiene Association (IOHA), have their own Codes of Ethics (WHO 1992b).  


                                   

                                  Occupational hygiene technician

                                  An occupational hygiene technician is “a person competent to carry out measurements of the work environment” but not “to make the interpretations, judgements, and recommendations required from an occupational hygienist”. The necessary level of competence may be obtained in a comprehensive or limited field (WHO 1992b).

                                  International Occupational Hygiene Association (IOHA)

IOHA was formally established during a meeting in Montreal on June 2, 1987. At present IOHA has the participation of 19 national occupational hygiene associations, with over 19,000 members from 17 countries.

                                  The primary objective of IOHA is to promote and develop occupational hygiene throughout the world, at a high level of professional competence, through means that include the exchange of information among organizations and individuals, the further development of human resources and the promotion of a high standard of ethical practice. IOHA activities include scientific meetings and publication of a newsletter. Members of affiliated associations are automatically members of IOHA; it is also possible to join as an individual member, for those in countries where there is not yet a national association.

                                  Certification

                                  In addition to an accepted definition of occupational hygiene and of the role of the occupational hygienist, there is need for the establishment of certification schemes to ensure acceptable standards of occupational hygiene competence and practice. Certification refers to a formal scheme based on procedures for establishing and maintaining knowledge, skills and competence of professionals (Burdorf 1995).

                                  IOHA has promoted a survey of existing national certification schemes (Burdorf 1995), together with recommendations for the promotion of international cooperation in assuring the quality of professional occupational hygienists, which include the following:

                                  • “the harmonization of standards on the competence and practice of professional occupational hygienists”
                                  • “the establishment of an international body of peers to review the quality of existing certification schemes”.

                                   

                                  Other suggestions in this report include items such as: “reciprocity” and “cross-acceptance of national designations, ultimately aiming at an umbrella scheme with one internationally accepted designation”.

                                  The Practice of Occupational Hygiene

                                  The classical steps in occupational hygiene practice are:

                                  • the recognition of the possible health hazards in the work environment
                                  • the evaluation of hazards, which is the process of assessing exposure and reaching conclusions as to the level of risk to human health
                                  • prevention and control of hazards, which is the process of developing and implementing strategies to eliminate, or reduce to acceptable levels, the occurrence of harmful agents and factors in the workplace, while also accounting for environmental protection.

                                   

                                  The ideal approach to hazard prevention is “anticipated and integrated preventive action”, which should include:

                                  • occupational health and environmental impact assessments, prior to the design and installation of any new workplace
                                  • selection of the safest, least hazardous and least polluting technology (“cleaner production”)
                                  • environmentally appropriate location
                                  • proper design, with adequate layout and appropriate control technology, including for the safe handling and disposal of the resulting effluents and waste
                                  • elaboration of guidelines and regulations for training on the correct operation of processes, including on safe work practices, maintenance and emergency procedures.

                                   

                                  The importance of anticipating and preventing all types of environmental pollution cannot be overemphasized. There is, fortunately, an increasing tendency to consider new technologies from the point of view of the possible negative impacts and their prevention, from the design and installation of the process to the handling of the resulting effluents and waste, in the so-called cradle-to-grave approach. Environmental disasters, which have occurred in both developed and developing countries, could have been avoided by the application of appropriate control strategies and emergency procedures in the workplace.

                                  Economic aspects should be viewed in broader terms than the usual initial cost consideration; more expensive options that offer good health and environmental protection may prove to be more economical in the long run. The protection of workers’ health and of the environment must start much earlier than it usually does. Technical information and advice on occupational and environmental hygiene should always be available to those designing new processes, machinery, equipment and workplaces. Unfortunately such information is often made available much too late, when the only solution is costly and difficult retrofitting, or worse, when consequences have already been disastrous.

                                  Recognition of hazards

                                  Recognition of hazards is a fundamental step in the practice of occupational hygiene, indispensable for the adequate planning of hazard evaluation and control strategies, as well as for the establishment of priorities for action. For the adequate design of control measures, it is also necessary to physically characterize contaminant sources and contaminant propagation paths.

                                  The recognition of hazards leads to the determination of:

                                  • which agents may be present and under which circumstances
                                  • the nature and possible extent of associated adverse effects on health and well-being.

                                   

The identification of hazardous agents, their sources and the conditions of exposure requires extensive knowledge and careful study of work processes and operations, raw materials and chemicals used or generated, final products and eventual by-products, as well as of possibilities for the accidental formation of chemicals, decomposition of materials, combustion of fuels or the presence of impurities. The recognition of the nature and potential magnitude of the biological effects that such agents may cause if overexposure occurs requires knowledge of, and access to, toxicological information. International sources of information in this respect include the International Programme on Chemical Safety (IPCS), the International Agency for Research on Cancer (IARC) and the International Register of Potentially Toxic Chemicals of the United Nations Environment Programme (UNEP-IRPTC).

                                  Agents which pose health hazards in the work environment include airborne contaminants; non-airborne chemicals; physical agents, such as heat and noise; biological agents; ergonomic factors, such as inadequate lifting procedures and working postures; and psychosocial stresses.

                                  Occupational hygiene evaluations

                                  Occupational hygiene evaluations are carried out to assess workers’ exposure, as well as to provide information for the design, or to test the efficiency, of control measures.

                                  Evaluation of workers’ exposure to occupational hazards, such as airborne contaminants, physical and biological agents, is covered elsewhere in this chapter. Nevertheless, some general considerations are provided here for a better understanding of the field of occupational hygiene.

                                  It is important to keep in mind that hazard evaluation is not an end in itself, but must be considered as part of a much broader procedure that starts with the realization that a certain agent, capable of causing health impairment, may be present in the work environment, and concludes with the control of this agent so that it will be prevented from causing harm. Hazard evaluation paves the way to, but does not replace, hazard prevention.

                                  Exposure assessment

                                  Exposure assessment aims at determining how much of an agent workers have been exposed to, how often and for how long. Guidelines in this respect have been established both at the national and international level—for example, EN 689, prepared by the Comité Européen de Normalisation (European Committee for Standardization) (CEN 1994).

                                  In the evaluation of exposure to airborne contaminants, the most usual procedure is the assessment of inhalation exposure, which requires the determination of the air concentration of the agent to which workers are exposed (or, in the case of airborne particles, the air concentration of the relevant fraction, e.g., the “respirable fraction”) and the duration of the exposure. However, if routes other than inhalation contribute appreciably to the uptake of a chemical, an erroneous judgement may be made by looking only at the inhalation exposure. In such cases, total exposure has to be assessed, and a very useful tool for this is biological monitoring.
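As a simple illustration of the inhalation exposure assessment just described, the following sketch computes an 8-hour time-weighted average (TWA) concentration from consecutive task-period samples, the quantity that is typically compared with an occupational exposure limit. All concentrations and durations are hypothetical.

```python
# A minimal sketch of an 8-hour time-weighted average (TWA) exposure
# estimate from consecutive personal samples; all values hypothetical.
samples = [  # (concentration in mg/m3, duration in hours)
    (12.0, 2.0),
    (4.0, 4.5),
    (20.0, 1.5),
]

total_time = sum(t for _, t in samples)            # here: 8.0 h, a full shift
twa = sum(c * t for c, t in samples) / total_time  # time-weighted average
print(f"8-h TWA: {twa:.1f} mg/m3 over {total_time} h sampled")
# If the samples do not cover the whole shift, the unsampled time must be
# accounted for before comparing the result with an 8-h exposure limit.
```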

                                  The practice of occupational hygiene is concerned with three kinds of situations:

                                  • initial studies to assess workers’ exposure
                                  • follow-up monitoring/surveillance
                                  • exposure assessment for epidemiological studies.

                                   

A primary reason for determining whether there is overexposure to a hazardous agent in the work environment is to decide whether interventions are required. This often, but not necessarily, means establishing whether there is compliance with an adopted standard, which is usually expressed in terms of an occupational exposure limit. The determination of the “worst exposure” situation may be enough to fulfil this purpose. Indeed, if exposures are expected to be either very high or very low in relation to accepted limit values, the accuracy and precision of quantitative evaluations can be lower than when the exposures are expected to be closer to the limit values. In fact, when hazards are obvious, it may be wiser to invest resources initially in controls and to carry out more precise environmental evaluations after controls have been implemented.

                                  Follow-up evaluations are often necessary, particularly if the need existed to install or improve control measures or if changes in the processes or materials utilized were foreseen. In these cases, quantitative evaluations have an important surveillance role in:

                                  • evaluating the adequacy, testing the efficiency or disclosing possible failures in the control systems
                                  • detecting whether alterations in the processes, such as operating temperature, or in the raw materials, have altered the exposure situation.

                                   

                                  Whenever an occupational hygiene survey is carried out in connection with an epidemiological study in order to obtain quantitative data on relationships between exposure and health effects, the exposure must be characterized with a high level of accuracy and precision. In this case, all exposure levels must be adequately characterized, since it would not be enough, for example, to characterize only the worst case exposure situation. It would be ideal, although difficult in practice, to always keep precise and accurate exposure assessment records since there may be a future need to have historical exposure data.

In order to ensure that evaluation data are representative of workers’ exposure, and that resources are not wasted, an adequate sampling strategy, accounting for all possible sources of variability, must be designed and followed. Sampling strategies, as well as measurement techniques, are covered in “Evaluation of the work environment”.

                                  Interpretation of results

The degree of uncertainty in the estimation of an exposure parameter, for example, the true average concentration of an airborne contaminant, is determined through statistical treatment of the results from measurements (e.g., sampling and analysis). The level of confidence in the results will depend on the coefficient of variation of the “measuring system” and on the number of measurements. Once there is acceptable confidence, the next step is to consider the health implications of the exposure: what does it mean for the health of the exposed workers now? in the near future? in their working life? will there be an impact on future generations?
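As an illustration of such statistical treatment, the sketch below computes the mean, the coefficient of variation and a 95% confidence interval for a series of repeated concentration measurements. The values are hypothetical, and a normal approximation is assumed for simplicity, although occupational exposure data are often closer to lognormal.

```python
# A minimal sketch: confidence interval for the mean airborne
# concentration from repeated measurements (hypothetical values;
# normal approximation assumed).
import statistics
from scipy import stats

measurements = [8.4, 9.1, 7.6, 10.2, 8.8, 9.5]   # mg/m3
n = len(measurements)
mean = statistics.mean(measurements)
sd = statistics.stdev(measurements)               # sample standard deviation
cv = sd / mean                                    # coefficient of variation

t = stats.t.ppf(0.975, df=n - 1)                  # two-sided 95% quantile
half_width = t * sd / n ** 0.5
print(f"mean = {mean:.2f} mg/m3, CV = {cv:.2f}")
print(f"95% CI: {mean - half_width:.2f} to {mean + half_width:.2f} mg/m3")
```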

                                  The evaluation process is only completed when results from measurements are interpreted in view of data (sometimes referred to as “risk assessment data”) derived from experimental toxicology, epidemiological and clinical studies and, in certain cases, clinical trials. It should be clarified that the term risk assessment has been used in connection with two types of assessments—the assessment of the nature and extent of risk resulting from exposure to chemicals or other agents, in general, and the assessment of risk for a particular worker or group of workers, in a specific workplace situation.

                                  In the practice of occupational hygiene, exposure assessment results are often compared with adopted occupational exposure limits which are intended to provide guidance for hazard evaluation and for setting target levels for control. Exposure in excess of these limits requires immediate remedial action by the improvement of existing control measures or implementation of new ones. In fact, preventive interventions should be made at the “action level”, which varies with the country (e.g., one-half or one-fifth of the occupational exposure limit). A low action level is the best assurance of avoiding future problems.
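The decision logic just described can be summarized in a few lines. In the sketch below, the function, its parameter names and all numerical values are hypothetical; the action-level fraction is a country-specific choice, as noted above.

```python
# A minimal sketch of comparing an exposure result against the
# occupational exposure limit (OEL) and an action level defined as a
# fraction of it; the function and all values are hypothetical.
def classify_exposure(twa: float, oel: float, action_fraction: float = 0.5) -> str:
    """Indicate the response for an 8-h TWA result (same units as the OEL)."""
    if twa > oel:
        return "above OEL: immediate remedial action required"
    if twa > action_fraction * oel:
        return "above action level: preventive intervention indicated"
    return "below action level"

print(classify_exposure(twa=9.0, oel=10.0))                       # action level at 5.0
print(classify_exposure(twa=9.0, oel=10.0, action_fraction=0.2))  # action level at 2.0
```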

                                  Comparison of exposure assessment results with occupational exposure limits is a simplification, since, among other limitations, many factors which influence the uptake of chemicals (e.g., individual susceptibilities, physical activity and body build) are not accounted for by this procedure. Furthermore, in most workplaces there is simultaneous exposure to many agents; hence a very important issue is that of combined exposures and agent interactions, because the health consequences of exposure to a certain agent alone may differ considerably from the consequences of exposure to this same agent in combination with others, particularly if there is synergism or potentiation of effects.

                                  Measurements for control

                                  Measurements with the purpose of investigating the presence of agents and the patterns of exposure parameters in the work environment can be extremely useful for the planning and design of control measures and work practices. The objectives of such measurements include:

                                  • source identification and characterization
                                  • spotting of critical points in closed systems or enclosures (e.g., leaks)
                                  • determination of propagation paths in the work environment
                                  • comparison of different control interventions
                                  • verification that respirable dust has settled together with the coarse visible dust, when using water sprays
                                  • checking that contaminated air is not coming from an adjacent area.

                                   

                                  Direct-reading instruments are extremely useful for control purposes, particularly those which can be used for continuous sampling and reflect what is happening in real time, thus disclosing exposure situations which might not otherwise be detected and which need to be controlled. Examples of such instruments include: photo-ionization detectors, infrared analysers, aerosol meters and detector tubes. When sampling to obtain a picture of the behaviour of contaminants, from the source throughout the work environment, accuracy and precision are not as critical as they would be for exposure assessment.

                                  Recent developments in this type of measurement for control purposes include visualization techniques, one of which is the Picture Mix Exposure—PIMEX (Rosen 1993). This method combines a video image of the worker with a scale showing airborne contaminant concentrations, which are continuously measured, at the breathing zone, with a real-time monitoring instrument, thus making it possible to visualize how the concentration varies while the task is performed. This provides an excellent tool for comparing the relative efficacy of different control measures, such as ventilation and work practices, thus contributing to better design.

Measurements are also needed to assess the efficiency of control measures. In this case, source or area sampling is convenient, alone or in addition to personal sampling, for assessing workers’ exposure. To ensure validity, the locations for “before” and “after” sampling (or measurements) and the techniques used should be the same, or equivalent, in sensitivity, accuracy and precision.
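
As a simple sketch (with hypothetical data), the efficiency of a newly installed control can be estimated from the relative reduction of mean concentrations measured at the same locations before and after the intervention:

    # A minimal sketch: estimating control efficiency from paired
    # "before" and "after" measurements taken at the same locations
    # with the same technique. Data are hypothetical.
    before_mg_m3 = [2.1, 1.8, 2.4, 2.0]  # prior to installing local exhaust ventilation
    after_mg_m3 = [0.5, 0.4, 0.7, 0.5]   # after installation

    mean_before = sum(before_mg_m3) / len(before_mg_m3)
    mean_after = sum(after_mg_m3) / len(after_mg_m3)

    reduction = 1.0 - mean_after / mean_before
    print(f"estimated reduction in mean concentration: {reduction:.0%}")  # about 75%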

                                  Hazard prevention and control

                                  The primary goal of occupational hygiene is the implementation of appropriate hazard prevention and control measures in the work environment. Standards and regulations, if not enforced, are meaningless for the protection of workers’ health, and enforcement usually requires both monitoring and control strategies. The absence of legally established standards should not be an obstacle to the implementation of the necessary measures to prevent harmful exposures or control them to the lowest level feasible. When serious hazards are obvious, control should be recommended, even before quantitative evaluations are carried out. It may sometimes be necessary to change the classical concept of “recognition-evaluation-control” to “recognition-control-evaluation”, or even to “recognition-control”, if capabilities for evaluation of hazards do not exist. Some examples of hazards in obvious need of action without the necessity of prior environmental sampling are electroplating carried out in an unventilated, small room, or using a jackhammer or sand-blasting equipment with no environmental controls or protective equipment. For such recognized health hazards, the immediate need is control, not quantitative evaluation.

                                  Preventive action should in some way interrupt the chain by which the hazardous agent—a chemical, dust, a source of energy—is transmitted from the source to the worker. There are three major groups of control measures: engineering controls, work practices and personal measures.

The most efficient hazard prevention approach is the application of engineering control measures, which prevent occupational exposures by managing the work environment, thus decreasing the need for initiatives on the part of workers or potentially exposed persons. Engineering measures usually require process modifications or mechanical structures, and involve technical measures that eliminate or reduce the use, generation or release of hazardous agents at their source. When source elimination is not possible, engineering measures should be designed to prevent or reduce the spread of hazardous agents into the work environment by:

                                  • containing them
                                  • removing them immediately beyond the source
                                  • interfering with their propagation
                                  • reducing their concentration or intensity.

                                   

                                  Control interventions which involve some modification of the source are the best approach because the harmful agent can be eliminated or reduced in concentration or intensity. Source reduction measures include substitution of materials, substitution/modification of processes or equipment and better maintenance of equipment.

When source modifications are not feasible, or are not sufficient to attain the desired level of control, the release and dissemination of hazardous agents in the work environment should be prevented by interrupting their transmission path, through measures such as isolation (e.g., closed systems, enclosures), local exhaust ventilation, barriers and shields, and isolation of workers.

                                  Other measures aiming at reducing exposures in the work environment include adequate workplace design, dilution or displacement ventilation, good housekeeping and adequate storage. Labelling and warning signs can assist workers in safe work practices. Monitoring and alarm systems may be required in a control programme. Monitors for carbon monoxide around furnaces, for hydrogen sulphide in sewage work, and for oxygen deficiency in closed spaces are some examples.

Work practices are an important part of control. Consider, for example, jobs in which a worker’s posture can affect exposure, such as whether a worker bends over his or her work. The position of the worker may affect the conditions of exposure (e.g., the breathing zone in relation to the contaminant source, the possibility of skin absorption).

                                  Lastly, occupational exposure can be avoided or reduced by placing a protective barrier on the worker, at the critical entry point for the harmful agent in question (mouth, nose, skin, ear)—that is, the use of personal protective devices. It should be pointed out that all other possibilities of control should be explored before considering the use of personal protective equipment, as this is the least satisfactory means for routine control of exposures, particularly to airborne contaminants.

                                  Other personal preventive measures include education and training, personal hygiene and limitation of exposure time.

                                  Continuous evaluations, through environmental monitoring and health surveillance, should be part of any hazard prevention and control strategy.

                                  Appropriate control technology for the work environment must also encompass measures for the prevention of environmental pollution (air, water, soil), including adequate management of hazardous waste.

Although most of the control principles mentioned here apply to airborne contaminants, many are also applicable to other types of hazards. For example, a process can be modified to produce fewer air contaminants, or to produce less noise or less heat. An isolating barrier can separate workers from a source of noise, heat or radiation.

                                  Far too often prevention dwells on the most widely known measures, such as local exhaust ventilation and personal protective equipment, without proper consideration of other valuable control options, such as alternative cleaner technologies, substitution of materials, modification of processes, and good work practices. It often happens that work processes are regarded as unchangeable when, in reality, changes can be made which effectively prevent or at least reduce the associated hazards.

                                  Hazard prevention and control in the work environment requires knowledge and ingenuity. Effective control does not necessarily require very costly and complicated measures. In many cases, hazard control can be achieved through appropriate technology, which can be as simple as a piece of impervious material between the naked shoulder of a dock worker and a bag of toxic material that can be absorbed through the skin. It can also consist of simple improvements such as placing a movable barrier between an ultraviolet source and a worker, or training workers in safe work practices.

Aspects to be considered when selecting appropriate control strategies and technology include the type of hazardous agent (nature, physical state, health effects, routes of entry into the body), the type of source(s), the magnitude and conditions of exposure, the characteristics of the workplace and the relative location of workstations.

                                  The required skills and resources for the correct design, implementation, operation, evaluation and maintenance of control systems must be ensured. Systems such as local exhaust ventilation must be evaluated after installation and routinely checked thereafter. Only regular monitoring and maintenance can ensure continued efficiency, since even well-designed systems may lose their initial performance if neglected.

                                  Control measures should be integrated into hazard prevention and control programmes, with clear objectives and efficient management, involving multidisciplinary teams made up of occupational hygienists and other occupational health and safety staff, production engineers, management and workers. Programmes must also include aspects such as hazard communication, education and training covering safe work practices and emergency procedures.

Health promotion aspects should also be included, since the workplace is an ideal setting for promoting healthy life-styles in general and for alerting workers to the dangers of hazardous non-occupational exposures caused, for example, by shooting without adequate protection, or by smoking.

                                  The Links among Occupational Hygiene, Risk Assessment and Risk Management

                                  Risk assessment

                                  Risk assessment is a methodology that aims at characterizing the types of health effects expected as a result of a certain exposure to a given agent, as well as providing estimates on the probability of occurrence of these health effects, at different levels of exposure. It is also used to characterize specific risk situations. It involves hazard identification, the establishment of exposure-effect relationships, and exposure assessment, leading to risk characterization.

                                  The first step refers to the identification of an agent—for example, a chemical—as causing a harmful health effect (e.g., cancer or systemic poisoning). The second step establishes how much exposure causes how much of a given effect in how many of the exposed persons. This knowledge is essential for the interpretation of exposure assessment data.

                                  Exposure assessment is part of risk assessment, both when obtaining data to characterize a risk situation and when obtaining data for the establishment of exposure-effect relationships from epidemiological studies. In the latter case, the exposure that led to a certain occupational or environmentally caused effect has to be accurately characterized to ensure the validity of the correlation.

                                  Although risk assessment is fundamental to many decisions which are taken in the practice of occupational hygiene, it has limited effect in protecting workers’ health, unless translated into actual preventive action in the workplace.

                                  Risk assessment is a dynamic process, as new knowledge often discloses harmful effects of substances until then considered relatively harmless; therefore the occupational hygienist must have, at all times, access to up-to-date toxicological information. Another implication is that exposures should always be controlled to the lowest feasible level.

                                  Figure 3 is presented as an illustration of different elements of risk assessment.

                                  Figure 3. Elements of risk assessment.

                                  IHY010F3

                                  Risk management in the work environment

                                  It is not always feasible to eliminate all agents that pose occupational health risks because some are inherent to work processes that are indispensable or desirable; however, risks can and must be managed.

                                  Risk assessment provides a basis for risk management. However, while risk assessment is a scientific procedure, risk management is more pragmatic, involving decisions and actions that aim at preventing, or reducing to acceptable levels, the occurrence of agents which may pose hazards to the health of workers, surrounding communities and the environment, also accounting for the socio-economic and public health context.

                                  Risk management takes place at different levels; decisions and actions taken at the national level pave the way for the practice of risk management at the workplace level.

                                  Risk management at the workplace level requires information and knowledge on:

                                  • health hazards and their magnitude, identified and rated according to risk assessment findings
                                  • legal requirements and standards
                                  • technological feasibility, in terms of the available and applicable control technology
• economic aspects, such as the costs to design, implement, operate and maintain control systems, and cost-benefit analysis (control costs versus the financial benefits derived from controlling occupational and environmental hazards)
                                  • human resources (available and required)
                                  • socio-economic and public health context

                                   

                                  to serve as a basis for decisions which include:

                                  • establishment of a target for control
                                  • selection of adequate control strategies and technologies
                                  • establishment of priorities for action in view of the risk situation, as well as of the existing socio-economic and public health context (particularly important in developing countries)

                                   

                                  and which should lead to actions such as:

                                  • identification/search of financial and human resources (if not yet available)
                                  • design of specific control measures, which should be appropriate for the protection of workers’ health and of the environment, as well as safeguarding as much as possible the natural resource base
                                  • implementation of control measures, including provisions for adequate operation, maintenance and emergency procedures
                                  • establishment of a hazard prevention and control programme with adequate management and including routine surveillance.

                                   

                                  Traditionally, the profession responsible for most of these decisions and actions in the workplace is occupational hygiene.

                                  One key decision in risk management, that of acceptable risk (what effect can be accepted, in what percentage of the working population, if any at all?), is usually, but not always, taken at the national policy-making level and followed by the adoption of occupational exposure limits and the promulgation of occupational health regulations and standards. This leads to the establishment of targets for control, usually at the workplace level by the occupational hygienist, who should have knowledge of the legal requirements. However, it may happen that decisions on acceptable risk have to be taken by the occupational hygienist at the workplace level—for example, in situations when standards are not available or do not cover all potential exposures.

                                  All these decisions and actions must be integrated into a realistic plan, which requires multidisciplinary and multisectorial coordination and collaboration. Although risk management involves pragmatic approaches, its efficiency should be scientifically evaluated. Unfortunately risk management actions are, in most cases, a compromise between what should be done to avoid any risk and the best which can be done in practice, in view of financial and other limitations.

                                  Risk management concerning the work environment and the general environment should be well coordinated; not only are there overlapping areas, but, in most situations, the success of one is interlinked with the success of the other.

                                  Occupational Hygiene Programmes and Services

                                  Political will and decision making at the national level will, directly or indirectly, influence the establishment of occupational hygiene programmes or services, either at the governmental or private level. It is beyond the scope of this article to provide detailed models for all types of occupational hygiene programmes and services; however, there are general principles that are applicable to many situations and may contribute to their efficient implementation and operation.

                                  A comprehensive occupational hygiene service should have the capability to carry out adequate preliminary surveys, sampling, measurements and analysis for hazard evaluation and for control purposes, and to recommend control measures, if not to design them.

                                  Key elements of a comprehensive occupational hygiene programme or service are human and financial resources, facilities, equipment and information systems, well organized and coordinated through careful planning, under efficient management, and also involving quality assurance and continuous programme evaluation. Successful occupational hygiene programmes require a policy basis and commitment from top management. The procurement of financial resources is beyond the scope of this article.

                                  Human resources

                                  Adequate human resources constitute the main asset of any programme and should be ensured as a priority. All staff should have clear job descriptions and responsibilities. If needed, provisions for training and education should be made. The basic requirements for occupational hygiene programmes include:

                                  • occupational hygienists—in addition to general knowledge on the recognition, evaluation and control of occupational hazards, occupational hygienists may be specialized in specific areas, such as analytical chemistry or industrial ventilation; the ideal situation is to have a team of well-trained professionals in the comprehensive practice of occupational hygiene and in all required areas of expertise
                                  • laboratory personnel, chemists (depending on the extent of analytical work)
                                  • technicians and assistants, for field surveys and for laboratories, as well as for instrument maintenance and repairs
                                  • information specialists and administrative support.

                                   

One important aspect is professional competence, which must not only be achieved but also maintained. Continuous education, in or outside the programme or service, should cover, for example, legislation updates, new advances and techniques, and gaps in knowledge. Participation in conferences, symposia and workshops also contributes to the maintenance of competence.

                                  Health and safety for staff

                                  Health and safety should be ensured for all staff in field surveys, laboratories and offices. Occupational hygienists may be exposed to serious hazards and should wear the required personal protective equipment. Depending on the type of work, immunization may be required. If rural work is involved, depending on the region, provisions such as antidote for snake bites should be made. Laboratory safety is a specialized field discussed elsewhere in this Encyclopaedia.

                                  Occupational hazards in offices should not be overlooked—for example, work with visual display units and sources of indoor pollution such as laser printers, photocopying machines and air-conditioning systems. Ergonomic and psychosocial factors should also be considered.

                                  Facilities

                                  These include offices and meeting room(s), laboratories and equipment, information systems and library. Facilities should be well designed, accounting for future needs, as later moves and adaptations are usually more costly and time consuming.

                                  Occupational hygiene laboratories and equipment

                                  Occupational hygiene laboratories should have in principle the capability to carry out qualitative and quantitative assessment of exposure to airborne contaminants (chemicals and dusts), physical agents (noise, heat stress, radiation, illumination) and biological agents. In the case of most biological agents, qualitative assessments are enough to recommend controls, thus eliminating the need for the usually difficult quantitative evaluations.

                                  Although some direct-reading instruments for airborne contaminants may have limitations for exposure assessment purposes, these are extremely useful for the recognition of hazards and identification of their sources, the determination of peaks in concentration, the gathering of data for control measures, and for checking on controls such as ventilation systems. In connection with the latter, instruments to check air velocity and static pressure are also needed.

                                  One of the possible structures would comprise the following units:

                                  • field equipment (sampling, direct-reading)
                                  • analytical laboratory
                                  • particles laboratory
                                  • physical agents (noise, thermal environment, illumination and radiation)
                                  • workshop for maintenance and repairs of instrumentation.

                                   

When selecting occupational hygiene equipment, in addition to performance characteristics, practical aspects have to be considered in view of the expected conditions of use, for example, the available infrastructure, climate and location. These aspects include portability, the required source of energy, calibration and maintenance requirements, and the availability of the required expendable supplies.

                                  Equipment should be purchased only if and when:

                                  • there is a real need
                                  • skills for the adequate operation, maintenance and repairs are available
                                  • the complete procedure has been developed, since it is of no use, for example, to purchase sampling pumps without a laboratory to analyse the samples (or an agreement with an outside laboratory).

                                   

Calibration of all types of occupational hygiene measuring, sampling and analytical equipment should be an integral part of any procedure, and the equipment required for calibration should be available.

                                  Maintenance and repairs are essential to prevent equipment from staying idle for long periods of time, and should be ensured by manufacturers, either by direct assistance or by providing training of staff.

                                  If a completely new programme is being developed, only basic equipment should be initially purchased, more items being added as the needs are established and operational capabilities ensured. However, even before equipment and laboratories are available and operational, much can be achieved by inspecting workplaces to qualitatively assess health hazards, and by recommending control measures for recognized hazards. Lack of capability to carry out quantitative exposure assessments should never justify inaction concerning obviously hazardous exposures. This is particularly true for situations where workplace hazards are uncontrolled and heavy exposures are common.

                                  Information

This includes a library (books, periodicals and other publications), databases (e.g., on CD-ROM) and communications.

Whenever possible, personal computers and CD-ROM readers should be provided, as well as connections to the Internet. There are ever-increasing possibilities for on-line networked public information servers (World Wide Web and Gopher sites), which provide access to a wealth of information sources relevant to workers’ health, thus fully justifying investment in computers and communications. Such systems should include e-mail, which opens new horizons for communications and discussions, whether individually or in groups, thus facilitating and promoting the exchange of information throughout the world.

                                  Planning

                                  Timely and careful planning for the implementation, management and periodic evaluation of a programme is essential to ensure that the objectives and goals are achieved, while making the best use of the available resources.

                                  Initially, the following information should be obtained and analysed:

                                  • nature and magnitude of prevailing hazards, in order to establish priorities
                                  • legal requirements (legislation, standards)
                                  • available resources
                                  • infrastructure and support services.

                                   

                                  The planning and organization processes include:

                                  • establishment of the purpose of the programme or service, definition of objectives and the scope of the activities, in view of the expected demand and the available resources
                                  • allocation of resources
                                  • definition of the organizational structure
                                  • profile of the required human resources and plans for their development (if needed)
                                  • clear assignment of responsibilities to units, teams and individuals
                                  • design/adaptation of the facilities
                                  • selection of equipment
                                  • operational requirements
                                  • establishment of mechanisms for communication within and outside the service
                                  • timetable.

                                   

                                  Operational costs should not be underestimated, since lack of resources may seriously hinder the continuity of a programme. Requirements which cannot be overlooked include:

                                  • purchase of expendable supplies (including items such as filters, detector tubes, charcoal tubes, reagents), spare parts for equipment, etc.
                                  • maintenance and repairs of equipment
                                  • transportation (vehicles, fuel, maintenance) and travel
                                  • information update.

                                   

                                  Resources must be optimized through careful study of all elements which should be considered as integral parts of a comprehensive service. A well-balanced allocation of resources to the different units (field measurements, sampling, analytical laboratories, etc.) and all the components (facilities and equipment, personnel, operational aspects) is essential for a successful programme. Moreover, allocation of resources should allow for flexibility, because occupational hygiene services may have to undergo adaptations in order to respond to the real needs, which should be periodically assessed.

                                  Communication, sharing and collaboration are key words for successful teamwork and enhanced individual capabilities. Effective mechanisms for communication, within and outside the programme, are needed to ensure the required multidisciplinary approach for the protection and promotion of workers’ health. There should be close interaction with other occupational health professionals, particularly occupational physicians and nurses, ergonomists and work psychologists, as well as safety professionals. At the workplace level, this should include workers, production personnel and managers.

                                  The implementation of successful programmes is a gradual process. Therefore, at the planning stage, a realistic timetable should be prepared, according to well-established priorities and in view of the available resources.

                                  Management

                                  Management involves decision-making as to the goals to be achieved and actions required to efficiently achieve these goals, with participation of all concerned, as well as foreseeing and avoiding, or recognizing and solving, the problems which may create obstacles to the completion of the required tasks. It should be kept in mind that scientific knowledge is no assurance of the managerial competence required to run an efficient programme.

                                  The importance of implementing and enforcing correct procedures and quality assurance cannot be overemphasized, since there is much difference between work done and work well done. Moreover, the real objectives, not the intermediate steps, should serve as a yardstick; the efficiency of an occupational hygiene programme should be measured not by the number of surveys carried out, but rather by the number of surveys that led to actual action to protect workers’ health.

                                  Good management should be able to distinguish between what is impressive and what is important; very detailed surveys involving sampling and analysis, yielding very accurate and precise results, may be very impressive, but what is really important are the decisions and actions that will be taken afterwards.

                                  Quality assurance

                                  The concept of quality assurance, involving quality control and proficiency testing, refers primarily to activities which involve measurements. Although these concepts have been more often considered in connection with analytical laboratories, their scope has to be extended to also encompass sampling and measurements.

Whenever sampling and analysis are required, the complete procedure should be considered as one from the point of view of quality. Since no chain is stronger than its weakest link, it is a waste of resources to use, for the different steps of the same evaluation procedure, instruments and techniques of unequal levels of quality. The accuracy and precision of a very good analytical balance cannot compensate for a pump sampling at the wrong flowrate.
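
The “weakest link” point can be made numerically: if the sampling and analytical errors are independent, their coefficients of variation combine approximately as the square root of the sum of squares, so the worse component dominates the overall quality. A minimal sketch with hypothetical CVs:

    # A minimal sketch: combined coefficient of variation of a
    # sampling-and-analysis procedure, assuming independent error
    # sources. The CVs are hypothetical.
    import math

    cv_pump = 0.15      # pump running at a poorly controlled flowrate
    cv_analysis = 0.01  # very precise analytical method

    cv_total = math.sqrt(cv_pump**2 + cv_analysis**2)
    print(f"combined CV = {cv_total:.3f}")  # ~0.150: the sampling error dominates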

The performance of laboratories has to be checked so that sources of error can be identified and corrected. There is a need for a systematic approach in order to keep the numerous details involved under control. It is important to establish quality assurance programmes for occupational hygiene laboratories; these cover both internal quality control and external quality assessments (often called “proficiency testing”).

                                  Concerning sampling, or measurements with direct-reading instruments (including for measurement of physical agents), quality involves adequate and correct:

                                  • preliminary studies including the identification of possible hazards and the factors required for the design of the strategy
                                  • design of the sampling (or measurement) strategy
                                  • selection and utilization of methodologies and equipment for sampling or measurements, accounting both for the purpose of the investigation and for quality requirements
                                  • performance of the procedures, including time monitoring
• handling, transport and storage of samples (if applicable).

                                   

                                  Concerning the analytical laboratory, quality involves adequate and correct:

                                  • design and installation of the facilities
                                  • selection and utilization of validated analytical methods (or, if necessary, validation of analytical methods)
                                  • selection and installation of instrumentation
                                  • adequate supplies (reagents, reference samples, etc.).

                                   

                                  For both, it is indispensable to have:

                                  • clear protocols, procedures and written instructions
                                  • routine calibration and maintenance of the equipment
                                  • training and motivation of the staff to adequately perform the required procedures
                                  • adequate management
                                  • internal quality control
                                  • external quality assessment or proficiency testing (if applicable).

                                   

                                  Furthermore, it is essential to have a correct treatment of the obtained data and interpretation of results, as well as accurate reporting and record keeping.

Laboratory accreditation, defined by CEN (EN 45001) as “formal recognition that a testing laboratory is competent to carry out specific tests or specific types of tests”, is a very important control tool and should be promoted. It should cover both the sampling and the analytical procedures.

                                  Programme evaluation

                                  The concept of quality must be applied to all steps of occupational hygiene practice, from the recognition of hazards to the implementation of hazard prevention and control programmes. With this in mind, occupational hygiene programmes and services must be periodically and critically evaluated, aiming at continuous improvement.

                                  Concluding Remarks

                                  Occupational hygiene is essential for the protection of workers’ health and the environment. Its practice involves many steps, which are interlinked and which have no meaning by themselves but must be integrated into a comprehensive approach.

                                   


                                  Toxicology in Health and Safety Regulation

Toxicology plays a major role in the development of regulations and other occupational health policies. In order to prevent occupational injury and illness, decisions are increasingly based upon information obtainable prior to, or in the absence of, the types of human exposures that would yield definitive information on risk, such as epidemiological studies. In addition, toxicological studies, as described in this chapter, can provide precise information on dose and response under the controlled conditions of laboratory research; this information is often difficult to obtain in the uncontrolled setting of occupational exposures. However, this information must be carefully evaluated in order to estimate the likelihood of adverse effects in humans, the nature of these adverse effects, and the quantitative relationship between exposures and effects.

Considerable attention has been given in many countries, since the 1980s, to developing objective methods for utilizing toxicological information in regulatory decision-making. Formal methods, frequently referred to as risk assessment, have been proposed and utilized in these countries by both governmental and non-governmental entities. Risk assessment has been variously defined; fundamentally it is an evaluative process that incorporates toxicology, epidemiology and exposure information to identify and estimate the probability of adverse effects associated with exposures to hazardous substances or conditions. Risk assessment may be qualitative in nature, indicating the nature of an adverse effect and a general estimate of likelihood, or it may be quantitative, with estimates of the numbers of affected persons at specific levels of exposure. In many regulatory systems, risk assessment is undertaken in four stages: hazard identification, the description of the nature of the toxic effect; dose-response evaluation, a semi-quantitative or quantitative analysis of the relationship between exposure (or dose) and the severity or likelihood of the toxic effect; exposure assessment, the evaluation of information on the range of exposures likely to occur for populations in general or for subgroups within populations; and risk characterization, the compilation of all the above information into an expression of the magnitude of risk expected to occur under specified exposure conditions (see NRC 1983 for a statement of these principles).
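
Purely as an illustration of how the four stages feed into one another, the following sketch uses the simplest possible linear, non-threshold model; all names and numbers are invented and do not come from any regulatory assessment:

    # An illustrative sketch of the four risk assessment stages, using a
    # deliberately simple linear, non-threshold model. All names and
    # numbers are hypothetical.

    # 1. Hazard identification: agent and toxic effect of concern.
    agent, effect = "substance X", "excess lifetime cancer risk"

    # 2. Dose-response evaluation: hypothetical slope factor
    #    (risk per mg/kg-day of lifetime average daily dose).
    slope_factor = 0.002

    # 3. Exposure assessment: hypothetical lifetime average daily dose
    #    for the exposed subgroup, in mg/kg-day.
    daily_dose = 0.01

    # 4. Risk characterization: combine the above into a risk estimate.
    risk = slope_factor * daily_dose
    print(f"{agent}: estimated {effect} = {risk:.0e}")  # 2e-05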

                                  In this section, three approaches to risk assessment are presented as illustrative. It is impossible to provide a comprehensive compendium of risk assessment methods used throughout the world, and these selections should not be taken as prescriptive. It should be noted that there are trends towards harmonization of risk assessment methods, partly in response to provisions in the recent GATT accords. Two processes of international harmonization of risk assessment methods are currently underway, through the International Programme on Chemical Safety (IPCS) and the Organization for Economic Cooperation and Development (OECD). These organizations also maintain current information on national approaches to risk assessment.

                                   


The WHO (World Health Organization) introduced in 1980 a classification of functional limitation in people: the ICIDH (International Classification of Impairments, Disabilities and Handicaps). In this classification a distinction is made between illness, impairment, disability and handicap.

This reference model was created to facilitate international communication. The model was presented to offer a reference framework for policy makers on the one hand, and for doctors diagnosing people suffering from the consequences of illness on the other.

Why this reference framework? It arose from the aim of improving and increasing the participation of people with long-term limited abilities. Two aims are mentioned:

                                  • the rehabilitation perspective, i.e., the reintegration of people into society, whether this means work, school, household, etc.
• the prevention of illness and, where possible, of the consequences of illness, e.g., disability and handicap.

                                   

As of 1 January 1994 the classification is official. The activities that have followed are widespread and especially concerned with issues such as information and educational measures for specific groups, regulations for the protection of workers and, for instance, demands that companies should employ at least 5 per cent of workers with a disability. The classification itself leads in the long term to integration and non-discrimination.

                                  Illness

Illness strikes each of us. Certain illnesses can be prevented, others cannot; certain illnesses can be cured, others not. Where possible, illness should be prevented and, if possible, cured.

                                  Impairment

Impairment means any absence or abnormality of a psychological, physiological or anatomical structure or function.

Being born with three fingers instead of five does not have to lead to disability. The capabilities of the individual, and the degree of manipulation possible with the three fingers, will determine whether or not the person is disabled. When, however, a fair amount of signal processing at the central level in the brain is not possible, the impairment will certainly lead to disability, as at present there is no method to “cure” (solve) this problem for the patient.

                                  Disability

Disability describes the functional level of an individual having difficulty in task performance, e.g., difficulty in standing up from a chair. These difficulties are of course related to the impairment, but also to the circumstances surrounding it. A person who uses a wheelchair and lives in a flat country like the Netherlands has more possibilities for self-transportation than the same person living in a mountainous area like Tibet.

                                  Handicap

When the problems are placed at the handicap level, it can be determined in which fields the main problems lie, e.g., immobility or physical dependency. These can affect work performance; for example, the person may not be able to get to work or, once at work, might need assistance with personal hygiene, etc.

A handicap reflects the negative consequences of disability and can only be resolved by taking those negative consequences away.

                                  Summary and conclusions

The above-mentioned classification and the policies based on it offer a well-defined, internationally workable framework. Any discussion on designing for specific groups needs such a framework in order to define our activities and to try to implement these thoughts in design.


                                  Sleep Deprivation

Healthy individuals regularly sleep for several hours every day, normally during the night. They find it most difficult to remain awake during the hours between midnight and early morning, when they normally sleep. If an individual has to remain awake during these hours, either totally or partially, the individual comes to a state of forced sleep loss, or sleep deprivation, which is usually perceived as tiredness. A need for sleep, with fluctuating degrees of sleepiness, is felt, and continues until sufficient sleep is taken. This is why periods of sleep deprivation are often said to cause a person to incur a sleep deficit or sleep debt.

Sleep deprivation presents a particular problem for workers who cannot take sufficient sleep periods because of work schedules (e.g., working at night) or, for that matter, prolonged free-time activities. A worker on a night shift remains sleep-deprived until the opportunity for a sleep period becomes available at the end of the shift. Since sleep taken during daytime hours is usually shorter than needed, the worker cannot recover sufficiently from the condition of sleep loss until a long sleep period, most likely a night sleep, is taken. Until then, the person accumulates a sleep deficit. (A similar condition, jet lag, arises after travelling between time zones that differ by a few hours or more. The traveller tends to be sleep-deprived because the activity periods in the new time zone correspond more closely to the normal sleep period in the originating place.) During periods of sleep loss, workers feel tired and their performance is affected in various ways. Thus various degrees of sleep deprivation are incorporated into the daily life of workers who have to work irregular hours, and it is important to take measures to cope with the unfavourable effects of such sleep deficit. The main conditions of irregular working hours that contribute to sleep deprivation are shown in table 1.

Table 1. Main conditions of irregular working hours which contribute to sleep deprivation of various degrees

Irregular working hours: conditions leading to sleep deprivation

Night duty: no or shortened night-time sleep
Early morning or late evening duty: shortened sleep, disrupted sleep
Long hours of work or working two shifts together: phase displacement of sleep
Straight night or early morning shifts: consecutive phase displacement of sleep
Short between-shift period: short and disrupted sleep
Long interval between days off: accumulation of sleep shortages
Work in a different time zone: no or shortened sleep during the “night” hours in the originating place (jet lag)
Unbalanced free time periods: phase displacement of sleep, short sleep

                                   

                                  In extreme conditions, sleep deprivation may last for more than a day. Then sleepiness and performance changes increase as the period of sleep deprivation is prolonged. Workers, however, normally take some form of sleep before sleep deprivation becomes too protracted. If the sleep thus taken is not sufficient, the effects of sleep shortage still continue. Thus, it is important to know not only the effects of sleep deprivation in various forms but also the ways in which workers can recover from it.

Figure 1. Performance, sleep ratings and physiological variables of a group of subjects exposed to two nights of sleep deprivation

                                  ERG185F1

                                  The complex nature of sleep deprivation is shown by figure 1, which depicts data from laboratory studies on the effects of two days of sleep deprivation (Fröberg 1985). The data show three basic changes resulting from prolonged sleep deprivation:

                                    1. There is a general decreasing trend in both objective performance and subjective ratings of performance efficiency.
  2. The decline in performance is influenced by the time of day. This cyclic decline is correlated with those physiological variables which have a circadian period. Performance is better in the normal activity phase, when, for example, adrenaline excretion and body temperature are higher, than in the period originally assigned to a normal night’s sleep, when the physiological measures are low.
                                    3. Self-ratings of sleepiness increase with time of continuous sleep deprivation, with a clear cyclic component associated with time of day.

                                         

                                        The fact that the effects of sleep deprivation are correlated with physiological circadian rhythms helps us to understand its complex nature (Folkard and Akerstedt 1992). These effects should be viewed as a result of a phase shift of the sleep-wakefulness cycle in one’s daily life.

                                        The effects of continuous work or sleep deprivation thus include not only a reduction in alertness but decreased performance capabilities, increased probability of falling asleep, lowered well-being and morale and impaired safety. When such periods of sleep deprivation are repeated, as in the case of shift workers, their health may be affected (Rutenfranz 1982; Koller 1983; Costa et al. 1990). An important aim of research is thus to determine to what extent sleep deprivation damages the well-being of individuals and how we can best use the recovery function of sleep in reducing such effects.

                                        Effects of Sleep Deprivation

                                        During and after a night of sleep deprivation, the physiological circadian rhythms of the human body seem to remain sustained. For example, the body temperature curve during the first day’s work among night-shift workers tends to keep its basic circadian pattern. During the night hours, the temperature declines towards early morning hours, rebounds to rise during the subsequent daytime and falls again after an afternoon peak. The physiological rhythms are known to get “adjusted” to the reversed sleep-wakefulness cycles of night-shift workers only gradually in the course of several days of repeated night shifts. This means that the effects on performance and sleepiness are more significant during night hours than in the daytime. The effects of sleep deprivation are therefore variably associated with the original circadian rhythms seen in physiological and psychological functions.

The effects of sleep deprivation on performance depend on the type of task to be performed. Different characteristics of the task influence the effects (Fröberg 1985; Folkard and Monk 1985; Folkard and Akerstedt 1992). Generally, a complex task is more vulnerable than a simpler one. Performance of a task involving an increasing number of digits or more complex coding deteriorates more during three days of sleep loss (Fröberg 1985; Wilkinson 1964). Paced tasks that need to be responded to within a certain interval deteriorate more than self-paced tasks. Practical examples of vulnerable tasks include serial reactions to defined stimulations, simple sorting operations, the recording of coded messages, copy typing, display monitoring and continuous inspection. Effects of sleep deprivation on strenuous physical performance are also known. Typical effects of prolonged sleep deprivation on performance (on a visual task) are shown in figure 2 (Dinges 1992). The effects are more pronounced after two nights of sleep loss (40-56 hours) than after one night of sleep loss (16-40 hours).

                                        Figure 2. Regression lines fit to response speed (the reciprocal of response times) on a 10-minute simple, unprepared visual task administered repeatedly to healthy young adults during no sleep loss (5-16 hours), one night of sleep loss (16-40 hours) and two nights of sleep loss (40-56 hours)

                                        ERG185F2

                                        The degree to which the performance of tasks is affected also appears to depend on how it is influenced by the “masking” components of the circadian rhythms. For example, some measures of performance, such as five-target memory search tasks, are found to adjust to night work considerably more quickly than serial reaction time tasks, and hence they may be relatively unimpaired on rapidly rotating shift systems (Folkard et al. 1993). Such differences in the effects of endogenous physiological body clock rhythms and their masking components must be taken into account in considering the safety and accuracy of performance under the influence of sleep deprivation.

                                        One particular effect of sleep deprivation on performance efficiency is the appearance of frequent “lapses” or periods of no response (Wilkinson 1964; Empson 1993). These performance lapses are short periods of lowered alertness or light sleep. This can be traced in records of videotaped performance, eye movements or electroencephalograms (EEGs). A prolonged task (one-half hour or more), especially when the task is replicated, can more easily lead to such lapses. Monotonous tasks such as repetitions of simple reactions or monitoring of infrequent signals are very sensitive in this regard. On the other hand, a novel task is less affected. Performance in changing work situations is also resistant.

While there is evidence of a gradual decrease in arousal during sleep deprivation, one would expect relatively unaffected performance levels between lapses. This explains why results of some performance tests show little influence of sleep loss when the tests are done over a short period of time. In a simple reaction time task, lapses would lead to very long response times, whereas the rest of the measured times would remain unchanged. Caution is thus needed in interpreting test results concerning sleep loss effects in actual situations.

Changes in sleepiness during sleep deprivation obviously relate to physiological circadian rhythms as well as to such lapse periods. Sleepiness sharply increases with time during the first period of night-shift work, but decreases during subsequent daytime hours. If sleep deprivation continues to the second night, sleepiness becomes very pronounced during the night hours (Costa et al. 1990; Matsumoto and Harada 1994). There are moments when the need for sleep is felt to be almost irresistible; these moments correspond to the appearance of lapses, as well as to the appearance of interruptions in cerebral functions as evidenced by EEG records. After a while, sleepiness is felt to be reduced, but there follows another period of lapse effects. If workers are questioned about various fatigue feelings, however, they usually mention increasing levels of fatigue and general tiredness persisting throughout the sleep deprivation period and between lapse periods. A slight recovery of subjective fatigue levels is seen during the daytime following a night of sleep deprivation, but fatigue feelings are markedly increased in the second and subsequent nights of continued sleep deprivation.

                                        During sleep deprivation, sleep pressure from the interaction of prior wakefulness and circadian phase may always be present to some degree, but the lability of state in sleepy subjects is also modulated by context effects (Dinges 1992). Sleepiness is influenced by the amount and type of stimulation, the interest afforded by the environment and the meaning of the stimulation to the subject. Monotonous stimulation or that requiring sustained attention can more easily lead to vigilance decrement and lapses. The greater the physiological sleepiness due to sleep loss, the more the subject is vulnerable to environmental monotony. Motivation and incentive can help override this environmental effect, but only for a limited period.

                                        Effects of Partial Sleep Deprivation and Accumulated Sleep Shortages

If a subject works continuously for a whole night without sleep, many performance functions deteriorate markedly. Going into a second night shift without any sleep, the performance decline is far greater. After the third or fourth night of total sleep deprivation, very few people can stay awake and perform tasks even if highly motivated. In actual life, however, such conditions of total sleep loss rarely occur; usually people take some sleep between subsequent night shifts. But reports from various countries show that sleep taken during the daytime is almost always insufficient to recover from the sleep debt incurred by night work (Knauth and Rutenfranz 1981; Kogi 1981; ILO 1990). As a result, sleep shortages accumulate as shift workers repeat night shifts. Similar sleep shortages result when sleep periods are curtailed by the need to follow shift schedules. Even when night sleep can be taken, restricting sleep by as little as two hours each night is known to leave most persons with an insufficient amount of sleep, and such sleep reduction can lead to impaired performance and alertness (Monk 1991).

Examples of conditions in shift systems that contribute to the accumulation of sleep shortages, or partial sleep deprivation, are given in table 1. In addition to continued night work for two or more days, short between-shift intervals, repeated early starts of morning shifts, frequent night shifts and inappropriate holiday allotment all accelerate the accumulation of sleep shortages.
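The arithmetic of such accumulation can be made concrete with a small sketch. In the following Python fragment the nominal eight-hour nightly sleep need, the per-day sleep durations and the assumption that sleep cannot be banked ahead are all invented, illustrative choices rather than measured values.

# Tally a hypothetical sleep debt across a week of shifts.
# The 8-hour nominal need and the per-day sleep durations are
# illustrative assumptions, not empirical values.

NOMINAL_NEED_H = 8.0
sleep_obtained_h = [7.5, 5.0, 4.5, 4.0, 6.0, 8.0, 8.5]

debt = 0.0
for day, slept in enumerate(sleep_obtained_h, start=1):
    # Recovery sleep repays debt; we assume sleep cannot be banked ahead.
    debt = max(debt + NOMINAL_NEED_H - slept, 0.0)
    print(f"day {day}: slept {slept:.1f} h, cumulative deficit {debt:.1f} h")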

The poor quality of daytime sleep, or its shortened duration, is also important. Daytime sleep is accompanied by more frequent awakenings, less deep and slow-wave sleep and a distribution of REM sleep different from that of normal night-time sleep (Torsvall, Akerstedt and Gillberg 1981; Folkard and Monk 1985; Empson 1993). Thus a daytime sleep may not be as sound as a night sleep even in a favourable environment.

This difficulty of taking good quality sleep owing to the different timing of sleep in a shift system is illustrated by figure 3, which shows the duration of sleep as a function of the time of sleep onset for German and Japanese workers, based on diary records (Knauth and Rutenfranz 1981; Kogi 1985). Owing to circadian influences, daytime sleep is forced to be short. Many workers split their daytime sleep and often add some sleep in the evening where possible.

                                        Figure 3. Mean sleep length as a function of the time of sleep onset. Comparison of data from German and Japanese shift workers.

                                        ERG185F3

In real-life settings, shift workers take a variety of measures to cope with such accumulation of sleep shortages (Wedderburn 1991). For example, many of them try to sleep in advance of a night shift or to have a long sleep after it. Although such efforts are by no means entirely effective in offsetting the effects of sleep deficit, they are made quite deliberately. Social and cultural activities may be restricted as part of these coping measures; outgoing free-time activities, for example, are undertaken less frequently between two night shifts. Sleep timing and duration, as well as the actual accumulation of sleep deficit, thus depend on both job-related and social circumstances.

                                        Recovery from Sleep Deprivation and Health Measures

The only effective means of recovering from sleep deprivation is to sleep. This restorative effect of sleep is well known (Kogi 1982). As recovery by sleep may differ according to its timing and duration (Costa et al. 1990), it is essential to know when and for how long people should sleep. In normal daily life, a full night’s sleep is always best for accelerating recovery from sleep deficit, but shift workers usually try to minimize the deficit by taking sleep on other occasions as a replacement for the normal night sleep of which they have been deprived. Aspects of such replacement sleeps are shown in table 2.

Table 2. Aspects of advance, anchor and retard sleeps taken as replacement of normal night sleep

Occasion
   Advance sleep: before a night shift; between night shifts; before early morning work; late evening naps
   Anchor sleep: intermittent night work; during a night shift; alternate-day work; prolonged free time; naps taken informally
   Retard sleep: after a night shift; between night shifts; after prolonged evening work; daytime naps

Duration
   Advance sleep: usually short
   Anchor sleep: short by definition
   Retard sleep: usually short, but longer after late evening work

Quality
   Advance sleep: longer latency of falling asleep; poor mood on rising; reduced REM sleep; slow-wave sleep dependent on prior wakefulness
   Anchor sleep: short latency; poor mood on rising; sleep stages similar to the initial part of a normal night sleep
   Retard sleep: shorter latency for REM sleep; increased awakenings; increased REM sleep; increased slow-wave sleep after long wakefulness

Interaction with circadian rhythms
   Advance sleep: disrupted rhythms; relatively faster adjustment
   Anchor sleep: conducive to stabilizing original rhythms
   Retard sleep: disrupted rhythms; slow adjustment

To offset a night sleep deficit, the usual effort is to take daytime sleep in “advance” and “retard” phases (i.e., before and after night-shift work). Such sleep coincides with the circadian activity phase and is therefore characterized by longer latency, shortened slow-wave sleep, disrupted REM sleep and disturbances of one’s social life. Social and environmental factors are important in determining the recuperative effect of a sleep. That a complete conversion of circadian rhythms is impossible for a shift worker in a real-life situation should be borne in mind when considering the effectiveness of the recovery functions of sleep.

                                        In this respect, interesting features of a short “anchor sleep” have been reported (Minors and Waterhouse 1981; Kogi 1982; Matsumoto and Harada 1994). When part of the customary daily sleep is taken during the normal night sleep period and the rest at irregular times, the circadian rhythms of rectal temperature and urinary secretion of several electrolytes can retain a 24-hour period. This means that a short night-time sleep taken during the night sleep period can help preserve the original circadian rhythms in subsequent periods.

We may assume that sleeps taken at different periods of the day have complementary effects, given their different recovery functions. An interesting approach for night-shift workers is the use of a night-time nap, usually lasting up to a few hours. Surveys show that such a short sleep taken during a night shift is common among some groups of workers. This anchor-type sleep is effective in reducing night work fatigue (Kogi 1982) and may reduce the need for recovery sleep. Figure 4 compares the subjective feelings of fatigue during two consecutive night shifts and the off-duty recovery period between a nap-taking group and a non-nap group (Matsumoto and Harada 1994). The positive effects of a night-time nap in reducing fatigue were obvious, and they continued for a large part of the recovery period following night work. No significant difference was found between the length of the day sleep of the non-nap group and the total sleeping time (night-time nap plus subsequent day sleep) of the nap group. A night-time nap therefore enables part of the essential sleep to be taken in advance of the day sleep that follows night work. It can thus be suggested that naps taken during night work can, to a certain extent, aid recovery from the fatigue caused by that work and the accompanying sleep deprivation (Sakai et al. 1984; Saito and Matsumoto 1988).

Figure 4. Mean scores for subjective feelings of fatigue during two consecutive night shifts and the off-duty recovery period for nap and no-nap groups

                                        ERG185F4

It must be admitted, however, that no optimal strategy can be worked out that every worker suffering from sleep deficit can apply. This is demonstrated in the development of international labour standards for night work, which recommend a set of measures for workers doing frequent night work (Kogi and Thurman 1993). The varied nature of these measures and the trend towards increasing flexibility in shift systems clearly reflect an effort to develop flexible sleep strategies (Kogi 1991). Age, physical fitness, sleep habits and other individual differences in tolerance may play important roles (Folkard and Monk 1985; Costa et al. 1990; Härmä 1993). Increasing flexibility in work schedules, in combination with better job design, is useful in this regard (Kogi 1991).

Sleep strategies against sleep deprivation should depend on the type of working life and be flexible enough to meet individual situations (Knauth, Rohmert and Rutenfranz 1979; Rutenfranz, Knauth and Angersbach 1981; Wedderburn 1991; Monk 1991). A general conclusion is that night sleep deprivation should be minimized by selecting appropriate work schedules, and that recovery should be facilitated by encouraging individually suitable sleeps, including replacement sleeps and sound night-time sleep in the early period after sleep deprivation. It is important to prevent the accumulation of sleep deficit. The period of night work that deprives workers of sleep during the normal night sleep period should be as short as possible, and between-shift intervals should be long enough to allow a sleep of sufficient length. A better sleep environment and measures to meet social needs are also useful. Social support is thus essential in designing working time arrangements, job design and individual coping strategies that promote the health of workers faced with frequent sleep deficit.

                                         


                                        Recognition of Hazards

A workplace hazard can be defined as any condition that may adversely affect the well-being or health of exposed persons. Recognizing hazards in any occupational activity involves characterizing the workplace by identifying hazardous agents and the groups of workers potentially exposed to them. The hazards might be of chemical, biological or physical origin (see table 1). Some hazards in the work environment are easy to recognize: irritants, for example, have an immediate irritating effect after skin exposure or inhalation. Others are not so easy to recognize: some chemicals are formed accidentally and have no warning properties. Some agents, such as metals (e.g., lead, mercury, cadmium, manganese), which may cause injury after several years of exposure, might be easy to identify if one is aware of the risk. A toxic agent may not constitute a hazard at low concentrations or if no one is exposed. Basic to the recognition of hazards are the identification of possible agents at the workplace, knowledge about the health risks of these agents and awareness of possible exposure situations.

Table 1. Hazards of chemical, biological and physical agents

CHEMICAL HAZARDS

Chemicals enter the body principally through inhalation, skin absorption or ingestion. The toxic effect might be acute, chronic or both.

Corrosion
   Description: Corrosive chemicals cause tissue destruction at the site of contact. The skin, eyes and digestive system are the most commonly affected parts of the body.
   Examples: concentrated acids and alkalis, phosphorus

Irritation
   Description: Irritants cause inflammation of the tissues where they are deposited. Skin irritants may cause reactions such as eczema or dermatitis; severe respiratory irritants might cause shortness of breath, inflammatory responses and oedema.
   Examples: skin: acids, alkalis, solvents, oils; respiratory: aldehydes, alkaline dusts, ammonia, nitrogen dioxide, phosgene, chlorine, bromine, ozone

Allergic reactions
   Description: Chemical allergens or sensitizers can cause skin or respiratory allergic reactions.
   Examples: skin: colophony (rosin), formaldehyde, metals such as chromium or nickel, some organic dyes, epoxy hardeners, turpentine; respiratory: isocyanates, fibre-reactive dyes, formaldehyde, many tropical wood dusts, nickel

Asphyxiation
   Description: Asphyxiants exert their effects by interfering with the oxygenation of the tissues. Simple asphyxiants are inert gases that dilute the available atmospheric oxygen below the level required to support life; oxygen-deficient atmospheres may occur in tanks, holds of ships, silos or mines, and the oxygen concentration in air should never be below 19.5% by volume. Chemical asphyxiants prevent oxygen transport and the normal oxygenation of the blood, or prevent the normal oxygenation of tissues.
   Examples: simple asphyxiants: methane, ethane, hydrogen, helium; chemical asphyxiants: carbon monoxide, nitrobenzene, hydrogen cyanide, hydrogen sulphide

Cancer
   Description: Known human carcinogens are chemicals that have been clearly demonstrated to cause cancer in humans. Probable human carcinogens are chemicals that have been clearly demonstrated to cause cancer in animals, or for which the evidence in humans is not definite. Soot and coal tars were the first chemicals suspected of causing cancer.
   Examples: known: benzene (leukaemia), vinyl chloride (liver angiosarcoma), 2-naphthylamine and benzidine (bladder cancer), asbestos (lung cancer, mesothelioma), hardwood dust (nasal or nasal sinus adenocarcinoma); probable: formaldehyde, carbon tetrachloride, dichromates, beryllium

Reproductive effects
   Description: Reproductive toxicants interfere with the reproductive or sexual functioning of an individual.
   Examples: manganese, carbon disulphide, monomethyl and ethyl ethers of ethylene glycol, mercury

   Description: Developmental toxicants are agents that may cause an adverse effect in the offspring of exposed persons, for example birth defects; embryotoxic or foetotoxic chemicals can cause spontaneous abortions or miscarriages.
   Examples: organic mercury compounds, carbon monoxide, lead, thalidomide, solvents

Systemic poisons
   Description: Systemic poisons are agents that cause injury to particular organs or body systems.
   Examples: brain: solvents, lead, mercury, manganese; peripheral nervous system: n-hexane, lead, arsenic, carbon disulphide; blood-forming system: benzene, ethylene glycol ethers; kidneys: cadmium, lead, mercury, chlorinated hydrocarbons; lungs: silica, asbestos, coal dust (pneumoconiosis)

BIOLOGICAL HAZARDS

Biological hazards can be defined as organic dusts originating from different sources of biological origin, such as viruses, bacteria, fungi, proteins from animals or substances from plants such as degradation products of natural fibres. The aetiological agent might be derived from a viable organism or from contaminants, or constitute a specific component of the dust. Biological hazards are grouped into infectious and non-infectious agents; non-infectious hazards can be further divided into viable organisms, biogenic toxins and biogenic allergens.

Infectious hazards
   Description: Occupational diseases caused by infectious agents are relatively uncommon. Workers at risk include hospital employees, laboratory workers, farmers, slaughterhouse workers, veterinarians, zoo keepers and cooks. Susceptibility is very variable (e.g., persons treated with immunosuppressive drugs are highly sensitive).
   Examples: hepatitis B, tuberculosis, anthrax, brucella, tetanus, Chlamydia psittaci, salmonella

Viable organisms and biogenic toxins
   Description: Viable organisms include fungi and spores; biogenic toxins include endotoxins and mycotoxins such as aflatoxin. The products of bacterial and fungal metabolism are complex and numerous, and are affected by temperature, humidity and the kind of substrate on which the organisms grow. Chemically they might consist of proteins, lipoproteins or mucopolysaccharides; examples are Gram-positive and Gram-negative bacteria and moulds. Workers at risk include cotton mill workers, hemp and flax workers, sewage and sludge treatment workers, and grain silo workers.
   Examples: byssinosis, “grain fever”, Legionnaires’ disease

Biogenic allergens
   Description: Biogenic allergens include fungi, animal-derived proteins, terpenes, storage mites and enzymes. A considerable part of the biogenic allergens in agriculture comes from proteins of animal skin, hair from furs and protein from faecal material and urine. Allergens might be found in many industrial environments, such as fermentation processes, drug production, bakeries, paper production and wood processing (saw mills, production, manufacturing), as well as in biotechnology (enzyme and vaccine production, tissue culture) and spice production. In sensitized persons, exposure to the allergenic agents may induce allergic symptoms such as allergic rhinitis, conjunctivitis or asthma. Allergic alveolitis is characterized by acute respiratory symptoms such as cough, chills, fever, headache and muscle pain, and might lead to chronic lung fibrosis.
   Examples: occupational asthma: wool, furs, wheat grain, flour, red cedar, garlic powder; allergic alveolitis: farmer’s lung, bagassosis, “bird fancier’s disease”, humidifier fever, sequoiosis

PHYSICAL HAZARDS

Noise
   Description: Noise is any unwanted sound that may adversely affect the health and well-being of individuals or populations. Aspects of noise hazards include the total energy of the sound, its frequency distribution, the duration of exposure and whether the noise is impulsive. Hearing acuity is generally affected first, with a loss or dip at 4,000 Hz followed by losses in the frequency range from 2,000 to 6,000 Hz. Noise might cause acute effects such as communication problems, decreased concentration and sleepiness, and consequently interference with job performance. Exposure to high levels of noise (usually above 85 dBA) or to impulsive noise (about 140 dBC) over a significant period of time may cause both temporary and permanent hearing loss. Permanent hearing loss is the most common occupational disease in compensation claims.
   Examples: foundries, woodworking, textile mills, metalworking

Vibration
   Description: Vibration has several parameters in common with noise: frequency, amplitude, duration of exposure and whether it is continuous or intermittent. The method of operation and the skill of the operator seem to play an important role in the development of harmful effects of vibration. Manual work with powered tools is associated with symptoms of peripheral circulatory disturbance known as “Raynaud’s phenomenon” or “vibration-induced white fingers” (VWF). Vibrating tools may also affect the peripheral nervous system and the musculoskeletal system, with reduced grip strength, low back pain and degenerative back disorders.
   Examples: contract machines, mining loaders, fork-lift trucks, pneumatic tools, chain saws

Ionizing radiation
   Description: The most important chronic effect of ionizing radiation is cancer, including leukaemia. Overexposure at comparatively low levels of radiation has been associated with dermatitis of the hand and effects on the haematological system. Processes or activities that might give excessive exposure to ionizing radiation are highly restricted and regulated.
   Examples: nuclear reactors, medical and dental x-ray tubes, particle accelerators, radioisotopes

Non-ionizing radiation
   Description: Non-ionizing radiation comprises ultraviolet radiation, visible radiation, infrared radiation, lasers, electromagnetic fields (microwaves and radio frequency) and extremely low frequency radiation. Infrared radiation might cause cataracts, and high-powered lasers may cause eye and skin damage. There is increasing concern about exposure to low levels of electromagnetic fields as a cause of cancer and as a potential cause of adverse reproductive outcomes among women, especially from exposure to video display units (VDUs). The question of a causal link to cancer is not yet answered, but recent reviews of the available scientific knowledge generally conclude that there is no association between the use of VDUs and adverse reproductive outcome.
   Examples: ultraviolet radiation: arc welding and cutting, UV curing of inks, glues and paints, disinfection, product control; infrared radiation: furnaces, glassblowing; lasers: communications, surgery, construction

                                        Identification and Classification of Hazards

Before any occupational hygiene investigation is performed, its purpose must be clearly defined. The purpose might be to identify possible hazards, to evaluate existing risks at the workplace, to prove compliance with regulatory requirements, to evaluate control measures or to assess exposure with regard to an epidemiological survey. This article is restricted to programmes aimed at the identification and classification of hazards at the workplace. Many models and techniques have been developed to identify and evaluate hazards in the working environment. They differ in complexity, from simple checklists, preliminary industrial hygiene surveys, job-exposure matrices and hazard and operability studies to job exposure profiles and work surveillance programmes (Renes 1978; Gressel and Gideon 1991; Holzner, Hirsh and Perper 1993; Goldberg et al. 1993; Bouyer and Hémon 1993; Panett, Coggon and Acheson 1985; Tait 1992). No single technique is a clear choice for every situation, but all the techniques have parts that are useful in any investigation. The usefulness of the models also depends on the purpose of the investigation, the size of the workplace, the type of production and activity, and the complexity of operations.

                                        Identification and classification of hazards can be divided into three basic elements: workplace characterization, exposure pattern and hazard evaluation.

                                        Workplace characterization

A workplace might have from a few employees up to several thousand, and a variety of activities (e.g., production plants, construction sites, office buildings, hospitals or farms). Within a workplace, different activities can be localized in special areas such as departments or sections. In an industrial process, different stages and operations can be identified as production is followed from raw materials to finished products.

Detailed information should be obtained about the processes, operations or other activities of interest in order to identify the agents used, including raw materials, materials handled or added in the process, primary products, intermediates, final products, reaction products and by-products. It may also be of interest to identify the additives and catalysts used in a process. Raw or added material that has been identified only by trade name must be evaluated in terms of its chemical composition. Information or safety data sheets should be available from the manufacturer or supplier.

Some stages in a process might take place in a closed system where no one is exposed, except during maintenance work or process failures. Such events should be recognized and precautions taken to prevent exposure to hazardous agents. Other processes take place in open systems, with or without local exhaust ventilation. A general description of the ventilation system should be provided, including the local exhaust system.

When possible, hazards should be identified in the planning or design of new plants or processes, when changes can be made at an early stage and hazards can be anticipated and avoided. Conditions and procedures that may deviate from the intended design must be identified and evaluated for the process as actually operated. Recognition of hazards should also include emissions to the external environment and waste materials. Facility locations, operations, emission sources and agents should be grouped together in a systematic way to form recognizable units for the further analysis of potential exposure (one possible record layout is sketched below). In each unit, operations and agents should be grouped according to the health effects of the agents and the estimated amounts emitted to the work environment.
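As a sketch of how such units might be recorded for further analysis, the following hypothetical Python data structures group an operation’s agents with a health-effect category and a rough emission estimate. All class names, fields and example entries are illustrative assumptions, not part of any standard schema.

from dataclasses import dataclass, field

# Hypothetical records for workplace characterization; the fields and
# category labels are illustrative assumptions, not a standard schema.

@dataclass
class Agent:
    name: str
    health_effect: str       # e.g., "irritant", "carcinogen"
    emission_estimate: str   # e.g., "low", "medium", "high"

@dataclass
class Unit:
    location: str
    operation: str
    agents: list[Agent] = field(default_factory=list)

degreasing = Unit(
    location="surface treatment hall",
    operation="vapour-phase degreasing",
    agents=[Agent("trichloroethylene", "carcinogen", "high")],
)
print(degreasing)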

                                        Exposure patterns

The main exposure routes for chemical and biological agents are inhalation and dermal uptake and, incidentally, ingestion. The exposure pattern depends on the frequency of contact with the hazards and on the intensity and duration of exposure. Working tasks have to be examined systematically: it is important not only to study work manuals but to look at what actually happens at the workplace. Workers might be exposed directly as a result of performing tasks, or indirectly because they are located in the same general area or location as the source of exposure. It might be necessary to start by focusing on working tasks with a high potential to cause harm, even if the exposure is of short duration. Non-routine and intermittent operations (e.g., maintenance, cleaning and changes in production cycles) have to be considered, and working tasks and situations might also vary throughout the year.

Within the same job title, exposure or uptake might differ because some workers wear protective equipment and others do not. In large plants, recognition of hazards or a qualitative hazard evaluation can very seldom be performed for every single worker; therefore workers with similar working tasks have to be classified in the same exposure group. Differences in working tasks, work techniques and work time result in considerably different exposures and have to be considered. Persons working outdoors and those working without local exhaust ventilation have been shown to have larger day-to-day variability than groups working indoors with local exhaust ventilation (Kromhout, Symanski and Rappaport 1993). Work processes, the agents applied in a process or job, or different tasks within a job title might be used, instead of the job title itself, to characterize groups with similar exposure. Within the groups, potentially exposed workers must be identified and classified according to hazardous agents, routes of exposure, health effects of the agents, frequency of contact with the hazards, and intensity and duration of exposure. Different exposure groups should then be ranked according to hazardous agents and estimated exposure in order to determine the workers at greatest risk (a minimal ranking sketch follows).
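A minimal sketch of such a ranking in Python, assuming simple ordinal scores: the groups, the 1-to-4 scales for health-effect severity and estimated exposure, and the multiplicative priority score are all invented for illustration.

# Rank hypothetical exposure groups by an ordinal priority score:
# severity (1 = slight ... 4 = severe) times estimated exposure
# (1 = low ... 4 = high). Groups and scores are illustrative only.

groups = {
    "welders without local exhaust": {"severity": 3, "exposure": 4},
    "spray painters in booth":       {"severity": 3, "exposure": 2},
    "office staff in same hall":     {"severity": 2, "exposure": 1},
}

def priority(scores: dict) -> int:
    return scores["severity"] * scores["exposure"]

for name, scores in sorted(groups.items(),
                           key=lambda kv: priority(kv[1]), reverse=True):
    print(f"{name}: priority score {priority(scores)}")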

                                        Qualitative hazard evaluation

The evaluation of the possible health effects of chemical, biological and physical agents present at the workplace should be based on available epidemiological, toxicological, clinical and environmental research. Up-to-date information about the health hazards of products or agents used at the workplace should be obtained from health and safety journals, databases on toxicity and health effects, and the relevant scientific and technical literature.

Material Safety Data Sheets (MSDSs) should be updated as necessary. Data sheets document the percentages of hazardous ingredients, together with the Chemical Abstracts Service identifier (CAS number) and the threshold limit value (TLV), if any. They also contain information about health hazards, protective equipment, preventive actions, the manufacturer or supplier, and so on. Sometimes the ingredients reported are rather rudimentary and have to be supplemented with more detailed information.

Monitored data and records of measurements should be studied. Agents with TLVs provide general guidance in deciding whether a situation is acceptable, although allowance must be made for possible interactions when workers are exposed to several chemicals (see the additive-mixture sketch after figure 1). Within and between exposure groups, workers should be ranked according to the health effects of the agents present and the estimated exposure (e.g., from slight health effects and low exposure to severe health effects and high estimated exposure). Those with the highest ranks deserve the highest priority. Before any prevention activities start, it might be necessary to perform an exposure monitoring programme. All results should be documented and easily retrievable. A working scheme is illustrated in figure 1.

                                        Figure 1. Elements of risk assessment

                                        IHY010F3
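As an illustration of the allowance for combined exposures mentioned above, a common screening rule for agents acting on the same organ system is the additive-mixture formula used with threshold limit values: the ratios of measured concentration to TLV are summed, and a sum above 1 indicates that the combined exposure exceeds the limit. In this minimal Python sketch the concentrations and TLVs are placeholders, not authoritative limit values.

# Additive-mixture screening: sum(C_i / TLV_i) <= 1 is acceptable.
# All concentrations and TLVs below are placeholder values chosen
# for illustration; consult current official limit lists in practice.

measured_twa_ppm = {"toluene": 20.0, "xylene": 30.0}  # hypothetical 8-h TWAs
tlv_twa_ppm = {"toluene": 50.0, "xylene": 100.0}      # placeholder TLVs

index = sum(c / tlv_twa_ppm[agent] for agent, c in measured_twa_ppm.items())
print(f"mixture exposure index = {index:.2f}")
print("acceptable" if index <= 1.0 else "combined exposure exceeds the limit")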

                                        In occupational hygiene investigations the hazards to the outdoor environment (e.g., pollution and greenhouse effects as well as effects on the ozone layer) might also be considered.

                                        Chemical, Biological and Physical Agents

Hazards might be of chemical, biological or physical origin. In this section and in table 1, a brief description of the various hazards is given, together with examples of environments or activities where they may be found (Casarett 1980; International Congress on Occupational Health 1985; Jacobs 1992; Leidel, Busch and Lynch 1977; Olishifski 1988; Rylander 1994). More detailed information can be found elsewhere in this Encyclopaedia.

                                        Chemical agents

                                        Chemicals can be grouped into gases, vapours, liquids and aerosols (dusts, fumes, mists).

                                        Gases

Gases are substances that can be changed to the liquid or solid state only by the combined effects of increased pressure and decreased temperature. Handling gases always implies a risk of exposure unless they are processed in closed systems. Gases in containers or distribution pipes might leak accidentally. In processes with high temperatures (e.g., welding operations and exhaust from engines), gases are formed.

                                        Vapours

Vapours are the gaseous form of substances that are normally in the liquid or solid state at room temperature and normal pressure. When a liquid evaporates, it changes to a gas and mixes with the surrounding air. A vapour can be regarded as a gas whose maximal concentration depends on the temperature and the saturation pressure of the substance. Any process involving combustion generates vapours or gases. Degreasing operations might be performed by vapour-phase degreasing or by soak cleaning with solvents. Work activities such as charging and mixing liquids, painting, spraying, cleaning and dry cleaning might generate harmful vapours.

                                        Liquids

Liquids may consist of a pure substance or a solution of two or more substances (e.g., solvents, acids, alkalis). A liquid stored in an open container will partially evaporate into the gas phase. The concentration in the vapour phase at equilibrium depends on the vapour pressure of the substance, its concentration in the liquid phase and the temperature. Operations or activities with liquids might give rise to splashes or other skin contact, as well as to harmful vapours (the relation between vapour pressure and maximal airborne concentration is sketched below).
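To make that relation explicit: over a pure liquid at equilibrium, the saturation concentration in parts per million is approximately the ratio of the substance’s vapour pressure to the total (atmospheric) pressure, multiplied by 10^6. The following minimal Python sketch uses an approximate vapour pressure for acetone at 20 °C; the exact figure should be taken from a data sheet, and the Raoult’s law note is a first approximation valid for ideal solutions.

# Saturation vapour concentration over a pure liquid at equilibrium:
#   C_sat (ppm) ~ (P_vap / P_atm) * 1e6
# The acetone vapour pressure used below (about 24.6 kPa at 20 °C) is
# an approximate, illustrative figure; consult a data sheet in practice.

P_ATM_KPA = 101.325  # standard atmospheric pressure, kPa

def saturation_ppm(p_vap_kpa: float) -> float:
    """Maximal vapour concentration over a pure liquid, in ppm."""
    return p_vap_kpa / P_ATM_KPA * 1e6

print(f"acetone: about {saturation_ppm(24.6):,.0f} ppm at saturation")

# For a solution, Raoult's law gives a first approximation of the
# partial vapour pressure of component i: p_i = x_i * P_vap_i,
# where x_i is the mole fraction of i in the liquid phase.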

                                        Dusts

                                        Dusts consist of inorganic and organic particles, which can be classified as inhalable, thoracic or respirable, depending on particle size. Most organic dusts have a biological origin. Inorganic dusts will be generated in mechanical processes like grinding, sawing, cutting, crushing, screening or sieving. Dusts may be dispersed when dusty material is handled or whirled up by air movements from traffic. Handling dry materials or powder by weighing, filling, charging, transporting and packing will generate dust, as will activities like insulation and cleaning work.

                                        Fumes

Fumes are solid particles that have been vaporized at high temperature and condensed into small particles. The vaporization is often accompanied by a chemical reaction, such as oxidation. The single particles that make up a fume are extremely fine, usually less than 0.1 μm, and often aggregate into larger units. Examples are fumes from welding, plasma cutting and similar operations.

                                        Mists

                                        Mists are suspended liquid droplets generated by condensation from the gaseous state to the liquid state or by breaking up a liquid into a dispersed state by splashing, foaming or atomizing. Examples are oil mists from cutting and grinding operations, acid mists from electroplating, acid or alkali mists from pickling operations or paint spray mists from spraying operations.

                                         
