27. Biological Monitoring
Chapter Editor: Robert Lauwerys
Table of Contents
Vito Foà and Lorenzo Alessio
Metals and Organometallic Compounds
P. Hoet and Robert Lauwerys
Marco Maroni and Adalberto Ferioli
28. Epidemiology and Statistics
Chapter Editors: Franco Merletti, Colin L. Soskolne and Paolo Vineis
Epidemiological Method Applied to Occupational Health and Safety
Franco Merletti, Colin L. Soskolne and Paolo Vineis
M. Gerald Ott
Summary Worklife Exposure Measures
Colin L. Soskolne
Measuring Effects of Exposures
Shelia Hoar Zahm
Case Study: Measures
Franco Merletti, Colin L. Soskolne and Paolo Vineis
Options in Study Design
Validity Issues in Study Design
Annie J. Sasco
Impact of Random Measurement Error
Paolo Vineis and Colin L. Soskolne
Annibale Biggeri and Mario Braga
Questionnaires in Epidemiological Research
Steven D. Stellman and Colin L. Soskolne
Asbestos Historical Perspective
29. Ergonomics
Chapter Editors: Wolfgang Laurig and Joachim Vedder
Table of Contents
Wolfgang Laurig and Joachim Vedder
The Nature and Aims of Ergonomics
William T. Singleton
Analysis of Activities, Tasks and Work Systems
Véronique De Keyser
Ergonomics and Standardization
Pranab Kumar Nag
Juhani Smolander and Veikko Louhevaara
Postures at Work
Fatigue and Recovery
Rolf Helbig and Walter Rohmert
Eberhard Ulich and Gudela Grote
Controls, Indicators and Panels
Karl H. E. Kroemer
Information Processing and Design
Andries F. Sanders
Designing for Specific Groups
Joke H. Grady-van den Nieuwboer
Antoine Laville and Serge Volkoff
Workers with Special Needs
Joke H. Grady-van den Nieuwboer
System Design in Diamond Manufacturing
Disregarding Ergonomic Design Principles: Chernobyl
Vladimir M. Munipov
30. Occupational Hygiene
Chapter Editor: Robert F. Herrick
Table of Contents
Goals, Definitions and General Information
Berenice I. Ferrari Goelzer
Recognition of Hazards
Evaluation of the Work Environment
Lori A. Todd
The Biological Basis for Exposure Assessment
Occupational Exposure Limits
Dennis J. Paustenbach
31. Personal Protection
Chapter Editor: Robert F. Herrick
Table of Contents
Overview and Philosophy of Personal Protection
Robert F. Herrick
Eye and Face Protectors
Foot and Leg Protection
Isabelle Balty and Alain Mayer
John R. Franks and Elliott H. Berger
S. Zack Mansdorf
Thomas J. Nelson
32. Record Systems and Surveillance
Chapter Editor: Steven D. Stellman
Table of Contents
Occupational Disease Surveillance and Reporting Systems
Steven B. Markowitz
Occupational Hazard Surveillance
David H. Wegman and Steven D. Stellman
Surveillance in Developing Countries
David Koh and Kee-Seng Chia
Case Study: Worker Protection and Statistics on Accidents and Occupational Diseases - HVBG, Germany
Martin Butz and Burkhard Hoffmann
Case Study: Wismut - A Uranium Exposure Revisited
Heinz Otten and Horst Schulz
Measurement Strategies and Techniques for Occupational Exposure Assessment in Epidemiology
Frank Bochmann and Helmut Blome
33. Toxicology
Chapter Editor: Ellen K. Silbergeld
Definitions and Concepts
Bo Holmberg, Johan Hogberg and Gunnar Johanson
Target Organ and Critical Effects
Effects of Age, Sex and Other Factors
Genetic Determinants of Toxic Response
Daniel W. Nebert and Ross A. McKinnon
Introduction and Concepts
Philip G. Watanabe
Cellular Injury and Cellular Death
Benjamin F. Trump and Irene K. Berezesky
R. Rita Misra and Michael P. Waalkes
Joseph G. Vos and Henk van Loveren
Target Organ Toxicology
Ellen K. Silbergeld
Genetic Toxicity Assessment
David M. DeMarini and James Huff
In Vitro Toxicity Testing
Structure-Activity Relationships
Ellen K. Silbergeld
Toxicology in Health and Safety Regulation
Ellen K. Silbergeld
Approaches to Hazard Identification - IARC
Harri Vainio and Julian Wilbourn
Carcinogen Risk Assessment: Other Approaches
Cees A. van der Heijden
Mental strain is a normal consequence of coping with mental workload (MWL). Long-term load or high-intensity job demands can result in the short-term consequences of overload (fatigue) and underload (monotony, satiation) and in long-term consequences (e.g., stress symptoms and work-related diseases). The stable regulation of actions under strain can be maintained through changes in one’s action style (by varying strategies of information-seeking and decision-making), by lowering the level of the need for achievement (by redefining tasks and reducing quality standards) and by means of a compensatory increase of psychophysiological effort followed by a decrease of effort during work time.
This understanding of the process of mental strain can be conceptualized as a transactional process of action regulation during the imposition of loading factors, one that includes not only the negative components of the strain process but also the positive aspects of learning, such as accretion, tuning and restructuring, and motivation (see figure 2).
Mental fatigue can be defined as a process of time-reversible decrement of behavioural stability in performance, mood and activity after prolonged working time. This state is temporarily reversible by changing the work demands, the environmental influences or stimulation and is completely reversible by means of sleep.
Mental fatigue is a consequence of performing tasks of a high level of difficulty that involve predominantly information processing and/or are of protracted duration. In contrast to monotony, recovery from the decrements is time-consuming and does not occur suddenly after a change in task conditions. Symptoms of fatigue are identified on several levels of behavioural regulation: dysregulation of the biological homeostasis between environment and organism, dysregulation of the cognitive processes of goal-directed actions, and loss of stability in goal-oriented motivation and achievement level.
Symptoms of mental fatigue can be identified in all subsystems of the human information-processing system.
Differential Diagnosis of Mental Fatigue
Table 1. Differentiation among several negative consequences of mental strain
Poor fit in terms of overload
Poor fit in terms
Loss of perceived sense of tasks
Increased affective aversion
Suddenly after task alternation
Enrichment of job content
Degrees of Mental Fatigue
The well-described phenomenology of mental fatigue (Schmidtke 1965), many valid methods of assessment and the great amount of experimental and field results offer the possibility of an ordinal scaling of degrees of mental fatigue (Hacker and Richter 1994). The scaling is based on the individual’s capacity to cope with behavioural decrements:
Level 1: Optimal and efficient performance: no symptoms of decrement in performance, mood and activation level.
Level 2: Complete compensation characterized by increased peripheral psycho-physiological activation (e.g., as measured by electromyogram of finger muscles), perceived increase of mental effort, increased variability in performance criteria.
Level 3: Labile compensation additional to that described in level 2: action slips, perceived fatigue, increasing (compensatory) psycho-physiological activity in central indicators such as heart rate and blood pressure.
Level 4: Reduced efficiency additional to that described in level 3: decrease of performance criteria.
Level 5: Yet further functional disturbances: disturbances in social relationships and cooperation at the workplace; symptoms of clinical fatigue such as loss of sleep quality and vital exhaustion.
Prevention of Mental Fatigue
Well-designed task structures and work environments, rest periods during working time and sufficient sleep are the ways to reduce symptoms of mental fatigue so that no clinical consequences occur:
1. Changes in the structure of tasks. Designing preconditions for adequate learning and task structuring is not only a means of furthering the development of efficient job structures, but is also essential for the prevention of a misfit in terms of mental overload or underload.
2. Introduction of systems of short-term breaks during work. The positive effects of such breaks depend on the observance of some preconditions: many short breaks are more efficient than a few long ones; the effects depend on a fixed and therefore predictable time schedule; and the content of the breaks should have a compensatory function with respect to the physical and mental job demands.
3. Sufficient relaxation and sleep. Special employee-assistance programmes and stress-management techniques may support the ability to relax and prevent the development of chronic fatigue (Sethi, Caro and Schuler 1987).
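The break-system preconditions in point 2 above can be sketched as a small scheduling routine. This is only an illustrative sketch: the function name, its parameters and the equal-spacing rule are assumptions, not part of the source text.

```python
def break_schedule(shift_minutes: int, total_break_minutes: int,
                   break_length: int) -> list[int]:
    """Start times (minutes from shift start) of equally spaced short breaks.

    Splits the total break allowance into many short breaks laid out on a
    fixed, predictable timetable, rather than a few long ones.
    """
    n_breaks = total_break_minutes // break_length
    # One more interval than breaks, so no break falls at the very start or end.
    interval = shift_minutes // (n_breaks + 1)
    return [interval * (i + 1) for i in range(n_breaks)]

# An 8-hour shift with 30 minutes of break time taken as 5-minute breaks
# yields six breaks on a fixed, anticipatable schedule.
schedule = break_schedule(480, 30, 5)
```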
The emergence of sophisticated technologies in molecular and cellular biology has spurred a relatively rapid evolution in the life sciences, including toxicology. In effect, the focus of toxicology is shifting from whole animals and populations of whole animals to the cells and molecules of individual animals and humans. Since the mid-1980s, toxicologists have begun to employ these new methodologies in assessing the effects of chemicals on living systems. As a logical progression, such methods are being adapted for the purposes of toxicity testing. These scientific advances have worked together with social and economic factors to effect change in the evaluation of product safety and potential risk.
Economic factors are specifically related to the volume of materials that must be tested. A plethora of new cosmetics, pharmaceuticals, pesticides, chemicals and household products is introduced into the market every year. All of these products must be evaluated for their potential toxicity. In addition, there is a backlog of chemicals already in use that have not been adequately tested. The enormous task of obtaining detailed safety information on all of these chemicals using traditional whole animal testing methods would be costly in terms of both money and time, if it could even be accomplished.
There are also societal issues that relate to public health and safety, as well as increasing public concern about the use of animals for product safety testing. With regard to human safety, public interest and environmental advocacy groups have placed significant pressure on government agencies to apply more stringent regulations on chemicals. A recent example of this has been a movement by some environmental groups to ban chlorine and chlorine-containing compounds in the United States. One of the motivations for such an extreme action lies in the fact that most of these compounds have never been adequately tested. From a toxicological perspective, the concept of banning a whole class of diverse chemicals based simply on the presence of chlorine is both scientifically unsound and irresponsible. Yet, it is understandable that from the public’s perspective, there must be some assurance that chemicals released into the environment do not pose a significant health risk. Such a situation underscores the need for more efficient and rapid methods to assess toxicity.
The other societal concern that has impacted the area of toxicity testing is animal welfare. The growing number of animal protection groups throughout the world have voiced considerable opposition to the use of whole animals for product safety testing. Active campaigns have been waged against manufacturers of cosmetics, household and personal care products and pharmaceuticals in attempts to stop animal testing. Such efforts in Europe have resulted in the passage of the Sixth Amendment to Directive 76/768/EEC (the Cosmetics Directive). The consequence of this Directive is that cosmetic products or cosmetic ingredients that have been tested in animals after 1 January 1998 cannot be marketed in the European Union, except where no sufficiently validated alternative methods exist. While this Directive has no jurisdiction over the sale of such products in the United States or other countries, it will significantly affect those companies with international markets that include Europe.
The concept of alternatives, which forms the basis for the development of tests other than those on whole animals, is defined by the three Rs: reduction in the numbers of animals used; refinement of protocols so that animals experience less stress or discomfort; and replacement of current animal tests with in vitro tests (i.e., tests done outside of the living animal), computer models or tests on lower vertebrate or invertebrate species. The three Rs were introduced in a book published in 1959 by two British scientists, W.M.S. Russell and Rex Burch, The Principles of Humane Experimental Technique. Russell and Burch maintained that the only way in which valid scientific results could be obtained was through the humane treatment of animals, and believed that methods should be developed to reduce animal use and ultimately replace it. Interestingly, the principles outlined by Russell and Burch received little attention until the resurgence of the animal welfare movement in the mid-1970s. Today the concept of the three Rs is very much in the forefront with regard to research, testing and education.
In summary, the development of in vitro test methodologies has been influenced by a variety of factors that have converged over the past 10 to 20 years. It is difficult to ascertain whether any of these factors alone would have had such a profound effect on toxicity testing strategies.
Concept of In Vitro Toxicity Tests
This section will focus solely on in vitro methods for evaluating toxicity, as one of the alternatives to whole-animal testing. Additional non-animal alternatives such as computer modelling and quantitative structure-activity relationships are discussed in other articles of this chapter.
In vitro studies are generally conducted in animal or human cells or tissues outside of the body. In vitro literally means “in glass”, and refers to procedures carried out on living material or components of living material cultured in petri dishes or in test tubes under defined conditions. These may be contrasted with in vivo studies, or those carried out “in the living animal”. While it is difficult, if not impossible, to project the effects of a chemical on a complex organism when the observations are confined to a single type of cells in a dish, in vitro studies do provide a significant amount of information about intrinsic toxicity as well as cellular and molecular mechanisms of toxicity. In addition, they offer many advantages over in vivo studies in that they are generally less expensive and they may be conducted under more controlled conditions. Furthermore, despite the fact that small numbers of animals are still needed to obtain cells for in vitro cultures, these methods may be considered reduction alternatives (since many fewer animals are used compared to in vivo studies) and refinement alternatives (because they eliminate the need to subject the animals to the adverse toxic consequences imposed by in vivo experiments).
In order to interpret the results of in vitro toxicity tests, determine their potential usefulness in assessing toxicity and relate them to the overall toxicological process in vivo, it is necessary to understand which part of the toxicological process is being examined. The entire toxicological process consists of events that begin with the organism’s exposure to a physical or chemical agent, progress through cellular and molecular interactions and ultimately manifest themselves in the response of the whole organism. In vitro tests are generally limited to the part of the toxicological process that takes place at the cellular and molecular level. The types of information that may be obtained from in vitro studies include pathways of metabolism, interaction of active metabolites with cellular and molecular targets and potentially measurable toxic endpoints that can serve as molecular biomarkers for exposure. In an ideal situation, the mechanism of toxicity of each chemical from exposure to organismal manifestation would be known, such that the information obtained from in vitro tests could be fully interpreted and related to the response of the whole organism. However, this is virtually impossible, since relatively few complete toxicological mechanisms have been elucidated. Thus, toxicologists are faced with a situation in which the results of an in vitro test cannot be used as an entirely accurate prediction of in vivo toxicity because the mechanism is unknown. However, frequently during the process of developing an in vitro test, components of the cellular and molecular mechanism(s) of toxicity are elucidated.
One of the key unresolved issues surrounding the development and implementation of in vitro tests is the following: should they be mechanistically based, or is it sufficient for them to be descriptive? It is inarguably better from a scientific perspective to utilize only mechanistically based tests as replacements for in vivo tests. However, in the absence of complete mechanistic knowledge, the prospect of developing in vitro tests that completely replace whole-animal tests in the near future is almost nil. This does not, however, rule out the use of more descriptive types of assays as early screening tools, which is the case presently. These screens have resulted in a significant reduction in animal use. Therefore, until such time as more mechanistic information is generated, it may be necessary to employ, to a more limited extent, tests whose results simply correlate well with those obtained in vivo.
In Vitro Tests for Cytotoxicity
In this section, several in vitro tests that have been developed to assess a chemical’s cytotoxic potential will be described. For the most part, these tests are easy to perform and analysis can be automated. One commonly used in vitro test for cytotoxicity is the neutral red assay. This assay is done on cells in culture, and for most applications, the cells can be maintained in culture dishes that contain 96 small wells, each 6.4 mm in diameter. Since each well can be used for a single determination, this arrangement can accommodate multiple concentrations of the test chemical as well as positive and negative controls with a sufficient number of replicates for each. Following treatment of the cells with various concentrations of the test chemical ranging over at least two orders of magnitude (e.g., from 0.01 mM to 1 mM), as well as with positive and negative control chemicals, the cells are rinsed and treated with neutral red, a dye that is taken up and retained only by live cells. The dye may be added upon removal of the test chemical to determine immediate effects, or it may be added at various times after the test chemical is removed to determine cumulative or delayed effects. The intensity of the colour in each well corresponds to the number of live cells in that well. The colour intensity is measured by a spectrophotometer, which may be equipped with a plate reader. The plate reader is programmed to provide individual measurements for each of the 96 wells of the culture dish. This automated methodology permits the investigator to rapidly perform a concentration-response experiment and to obtain statistically useful data.
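The concentration series spanning two orders of magnitude can be generated as a log-spaced range. The following sketch is illustrative only; the function name and the choice of eight points (one per plate row, leaving the remaining wells for replicates and controls) are assumptions, not part of the assay protocol.

```python
import math

def dilution_series(low_mm: float, high_mm: float, n_points: int) -> list[float]:
    """Log-spaced concentrations (in mM) from low_mm to high_mm inclusive.

    Covers the range in equal steps on the log10 axis, e.g. two orders
    of magnitude from 0.01 mM to 1 mM as in the assay description.
    """
    lo, hi = math.log10(low_mm), math.log10(high_mm)
    step = (hi - lo) / (n_points - 1)
    return [10 ** (lo + i * step) for i in range(n_points)]

# Eight concentrations from 0.01 mM to 1 mM for a concentration-response run.
concs = dilution_series(0.01, 1.0, 8)
```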
Another relatively simple assay for cytotoxicity is the MTT test. MTT (3[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) is a tetrazolium dye that is reduced by mitochondrial enzymes to a blue colour. Only cells with viable mitochondria will retain the ability to carry out this reaction; therefore the colour intensity is directly related to the degree of mitochondrial integrity. This is a useful test to detect general cytotoxic compounds as well as those agents that specifically target mitochondria.
The measurement of lactate dehydrogenase (LDH) activity is also used as a broad-based assay for cytotoxicity. This enzyme is normally present in the cytoplasm of living cells and is released into the cell culture medium through leaky cell membranes of dead or dying cells that have been adversely affected by a toxic agent. Small amounts of culture medium may be removed at various times after chemical treatment of the cells to measure the amount of LDH released and determine a time course of toxicity. While the LDH release assay is a very general assessment of cytotoxicity, it is useful because it is easy to perform and it may be done in real time.
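LDH readings are usually normalised against control wells before time points are compared. A minimal sketch follows; the normalisation formula is a common convention rather than something stated in the text, and the function name is illustrative.

```python
def percent_cytotoxicity(sample: float, spontaneous: float, maximum: float) -> float:
    """Per cent LDH release for one time point.

    Normalised between the spontaneous release of untreated cells and the
    maximum release measured from fully lysed (totally permeabilized) cells.
    """
    return 100.0 * (sample - spontaneous) / (maximum - spontaneous)

# A reading halfway between the untreated and fully lysed controls
# corresponds to 50% cytotoxicity.
```

Sampling the medium at several times after treatment and applying this normalisation to each reading yields the time course of toxicity described above.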
There are many new methods being developed to detect cellular damage. More sophisticated methods employ fluorescent probes to measure a variety of intracellular parameters, such as calcium release and changes in pH and membrane potential. In general, these probes are very sensitive and may detect more subtle cellular changes, thus reducing the need to use cell death as an endpoint. In addition, many of these fluorescent assays may be automated by the use of 96-well plates and fluorescent plate readers.
Once data have been collected on a series of chemicals using one of these tests, their relative toxicities may be determined. The relative toxicity of a chemical, as determined in an in vitro test, may be expressed as the concentration that exerts a 50% effect on the endpoint response relative to untreated cells. This determination is referred to as the EC50 (effective concentration for 50% of the cells) and may be used to compare the toxicities of different chemicals in vitro. (A similar term used in evaluating relative toxicity is the IC50, the concentration of a chemical that causes a 50% inhibition of a cellular process, e.g., the ability to take up neutral red.) It is not easy to assess whether the relative in vitro toxicities of chemicals are comparable to their relative in vivo toxicities, since there are so many confounding factors in the in vivo system, such as toxicokinetics, metabolism, and repair and defence mechanisms. In addition, since most of these assays measure general cytotoxicity endpoints, they are not mechanistically based. Therefore, agreement between in vitro and in vivo relative toxicities is simply correlative. Despite the numerous complexities and difficulties involved in extrapolating from in vitro to in vivo, these in vitro tests are proving to be very valuable because they are simple and inexpensive to perform and may be used as screens to flag highly toxic drugs or chemicals at early stages of development.
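The EC50 just described can be estimated from concentration-response data. As a hedged sketch (real analyses typically fit a full sigmoid model; the function name here is an assumption), linear interpolation on the log-concentration axis between the two points that bracket the 50% response gives a first approximation:

```python
import math

def estimate_ec50(concs: list[float], responses: list[float]) -> float:
    """Concentration at which the response crosses 50% of untreated controls.

    concs: ascending concentrations; responses: endpoint response as a
    percentage of untreated controls (falls as toxicity increases).
    Interpolates linearly on the log10(concentration) axis.
    """
    points = list(zip(concs, responses))
    for (c1, r1), (c2, r2) in zip(points, points[1:]):
        if r1 >= 50.0 >= r2:
            # Fractional position of the 50% crossing between the two points.
            frac = 0.0 if r1 == r2 else (r1 - 50.0) / (r1 - r2)
            return 10 ** (math.log10(c1) + frac * (math.log10(c2) - math.log10(c1)))
    raise ValueError("50% response is not bracketed by the data")
```

The same routine applied to inhibition data (e.g., per cent neutral red uptake) would yield the IC50 mentioned above.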
Target Organ Toxicity
In vitro tests can also be used to assess specific target organ toxicity. There are a number of difficulties associated with designing such tests, the most notable being the inability of in vitro systems to maintain many of the features of the organ in vivo. Frequently, when cells are taken from animals and placed into culture, they tend either to degenerate quickly and/or to dedifferentiate, that is, lose their organ-like functions and become more generic. This presents a problem in that within a short period of time, usually a few days, the cultures are no longer useful for assessing organ-specific effects of a toxin.
Many of these problems are being overcome because of recent advances in molecular and cellular biology. Information that is obtained about the cellular environment in vivo may be utilized in modulating culture conditions in vitro. Since the mid-1980s, new growth factors and cytokines have been discovered, and many of these are now available commercially. Addition of these factors to cells in culture helps to preserve their integrity and may also help to retain more differentiated functions for longer periods of time. Other basic studies have increased the knowledge of the nutritional and hormonal requirements of cells in culture, so that new media may be formulated. Recent advances have also been made in identifying both naturally occurring and artificial extracellular matrices on which cells may be cultured. Culture of cells on these different matrices can have profound effects on both their structure and function. A major advantage derived from this knowledge is the ability to intricately control the environment of cells in culture and individually examine the effects of these factors on basic cell processes and on their responses to different chemical agents. In short, these systems can provide great insight into organ-specific mechanisms of toxicity.
Many target organ toxicity studies are conducted in primary cells, which by definition are freshly isolated from an organ and usually exhibit a finite lifetime in culture. There are many advantages to having primary cultures of a single cell type from an organ for toxicity assessment. From a mechanistic perspective, such cultures are useful for studying specific cellular targets of a chemical. In some instances, two or more cell types from an organ may be cultured together, which provides the added advantage of being able to look at cell-cell interactions in response to a toxin. Some co-culture systems for skin have been engineered so that they form a three-dimensional structure resembling skin in vivo. It is also possible to co-culture cells from different organs, for example, liver and kidney. This type of culture would be useful in assessing the kidney-specific effects of a chemical that must be bioactivated in the liver.
Molecular biological tools have also played an important role in the development of continuous cell lines that can be useful for target organ toxicity testing. These cell lines are generated by transfecting DNA into primary cells. In the transfection procedure, the cells and the DNA are treated such that the DNA can be taken up by the cells. The DNA is usually from a virus and contains a gene or genes that, when expressed, allow the cells to become immortalized (i.e., able to live and grow for extended periods of time in culture). The DNA can also be engineered so that the immortalizing gene is controlled by an inducible promoter. The advantage of this type of construct is that the cells will divide only when they receive the appropriate chemical stimulus to allow expression of the immortalizing gene. An example of such a construct is the large T antigen gene from Simian Virus 40 (SV40) (the immortalizing gene), preceded by the promoter region of the metallothionein gene, which is induced by the presence of a metal in the culture medium. Thus, after the gene is transfected into the cells, the cells may be treated with low concentrations of zinc to stimulate the MT promoter and turn on the expression of the T antigen gene. Under these conditions, the cells proliferate. When zinc is removed from the medium, the cells stop dividing and under ideal conditions return to a state where they express their tissue-specific functions.
The ability to generate immortalized cells combined with the advances in cell culture technology have greatly contributed to the creation of cell lines from many different organs, including brain, kidney and liver. However, before these cell lines may be used as a surrogate for the bona fide cell types, they must be carefully characterized to determine how “normal” they really are.
Other in vitro systems for studying target organ toxicity involve increasing complexity. As in vitro systems progress in complexity from single cell to whole organ culture, they become more comparable to the in vivo milieu, but at the same time they become much more difficult to control given the increased number of variables. Therefore, what may be gained in moving to a higher level of organization can be lost in the inability of the researcher to control the experimental environment. Table 1 compares some of the characteristics of various in vitro systems that have been used to study hepatotoxicity.
Table 1. Comparison of in vitro systems for hepatotoxicity studies
| System | Level of interaction | Ability to retain liver-specific functions | Potential duration of culture | Ability to control environment |
|---|---|---|---|---|
| Immortalized cell lines | some cell-to-cell (varies with cell line) | poor to good (varies with cell line) | indefinite | excellent |
| Primary hepatocyte cultures | cell-to-cell | fair to excellent (varies with culture conditions) | days to weeks | excellent |
| Liver cell co-cultures | cell-to-cell (between the same and different cell types) | good to excellent | weeks | excellent |
| Liver slices | cell-to-cell (among all cell types) | good to excellent | hours to days | good |
| Isolated, perfused liver | cell-to-cell (among all cell types) and intra-organ | excellent | hours | fair |
Precision-cut tissue slices are being used more extensively for toxicological studies. There are new instruments available that enable the researcher to cut uniform tissue slices in a sterile environment. Tissue slices offer some advantage over cell culture systems in that all of the cell types of the organ are present and they maintain their in vivo architecture and intercellular communication. Thus, in vitro studies may be conducted to determine the target cell type within an organ as well as to investigate specific target organ toxicity. A disadvantage of the slices is that they degenerate rapidly after the first 24 hours of culture, mainly due to poor diffusion of oxygen to the cells on the interior of the slices. However, recent studies have indicated that more efficient aeration may be achieved by gentle rotation. This, together with the use of a more complex medium, allows the slices to survive for up to 96 hours.
Tissue explants are similar in concept to tissue slices and may also be used to determine the toxicity of chemicals in specific target organs. Tissue explants are established by removing a small piece of tissue (for teratogenicity studies, an intact embryo) and placing it into culture for further study. Explant cultures have been useful for short-term toxicity studies including irritation and corrosivity in skin, asbestos studies in trachea and neurotoxicity studies in brain tissue.
Isolated perfused organs may also be used to assess target organ toxicity. These systems offer an advantage similar to that of tissue slices and explants in that all cell types are present, but without the stress to the tissue introduced by the manipulations involved in preparing slices. In addition, they allow for the maintenance of intra-organ interactions. A major disadvantage is their short-term viability, which limits their use for in vitro toxicity testing. In terms of serving as an alternative, these cultures may be considered a refinement since the animals do not experience the adverse consequences of in vivo treatment with toxicants. However, their use does not significantly decrease the numbers of animals required.
In summary, there are several types of in vitro systems available for assessing target organ toxicity. It is possible to acquire much information about mechanisms of toxicity using one or more of these techniques. The difficulty remains in knowing how to extrapolate from an in vitro system, which represents a relatively small part of the toxicological process, to the whole process occurring in vivo.
In Vitro Tests for Ocular Irritation
Perhaps the most contentious whole-animal toxicity test from an animal welfare perspective is the Draize test for eye irritation, which is conducted in rabbits. In this test, a small fixed dose of a chemical is placed in one of the rabbit’s eyes while the other eye is used as a control. The degree of irritation and inflammation is scored at various times after exposure. A major effort is being made to develop methodologies to replace this test, which has been criticized not only for humane reasons, but also because of the subjectivity of the observations and the variability of the results. It is interesting to note that despite the harsh criticism the Draize test has received, it has proven to be remarkably successful in predicting human eye irritants, particularly slightly to moderately irritating substances that are difficult to identify by other methods. Thus, the demands on in vitro alternatives are great.
The quest for alternatives to the Draize test is a complicated one, albeit one that is predicted to be successful. Numerous in vitro and other alternatives have been developed and in some cases implemented. Refinement alternatives to the Draize test, which, by definition, are less painful or distressful to the animals, include the Low Volume Eye Test, in which smaller amounts of test materials are placed in the rabbits’ eyes, not only for humane reasons but also to mimic more closely the amounts to which people may actually be accidentally exposed. Another refinement is that substances with a pH less than 2 or greater than 11.5 are no longer tested in animals, since they are known to be severely irritating to the eye.
Between 1980 and 1989, there was an estimated 87% decline in the number of rabbits used for eye irritation testing of cosmetics. In vitro tests have been incorporated as part of a tier-testing approach to bring about this vast reduction in whole-animal tests. This approach is a multi-step process that begins with a thorough examination of the historical eye irritation data and physical and chemical analysis of the chemical to be evaluated. If these two processes do not yield enough information, then a battery of in vitro tests is performed. The additional data obtained from the in vitro tests might then be sufficient to assess the safety of the substance. If not, then the final step would be to perform limited in vivo tests. It is easy to see how this approach can eliminate or at least drastically reduce the numbers of animals needed to predict the safety of a test substance.
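The tier-testing logic described above can be sketched as a simple decision procedure. This is an illustrative outline only: the function name, the data passed in and the in vitro verdict rule are assumptions made for the example, not a regulatory protocol.

```python
# Hypothetical sketch of the tier-testing strategy for eye irritation.
# Step names, inputs and thresholds are illustrative assumptions.

def assess_eye_irritation(historical_data, physchem_flags, in_vitro_battery=None):
    """Return a decision at the earliest tier that yields enough information."""
    # Tier 1: historical eye irritation data, if any exist
    if historical_data is not None:
        return ("tier 1: historical data", historical_data)
    # Tier 2: physical/chemical analysis (extreme pH is treated as severely irritating)
    ph = physchem_flags.get("pH")
    if ph is not None and (ph < 2 or ph > 11.5):
        return ("tier 2: physico-chemical analysis", "severe irritant")
    # Tier 3: battery of in vitro tests (here, any positive assay counts)
    if in_vitro_battery:
        verdict = "irritant" if any(in_vitro_battery.values()) else "non-irritant"
        return ("tier 3: in vitro battery", verdict)
    # Tier 4: limited in vivo testing only as a last resort
    return ("tier 4: limited in vivo test required", None)
```

For instance, a strongly alkaline substance with no prior data would be classified at tier 2 without any animal use: `assess_eye_irritation(None, {"pH": 12.0})`.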
The battery of in vitro tests that is used as part of this tier-testing strategy depends upon the needs of the particular industry. Eye irritation testing is done by a wide variety of industries from cosmetics to pharmaceuticals to industrial chemicals. The type of information required by each industry varies and therefore it is not possible to define a single battery of in vitro tests. A test battery is generally designed to assess five parameters: cytotoxicity, changes in tissue physiology and biochemistry, quantitative structure-activity relationships, inflammation mediators, and recovery and repair. An example of a test for cytotoxicity, which is one possible cause for irritation, is the neutral red assay using cultured cells (see above). Changes in cellular physiology and biochemistry resulting from exposure to a chemical may be assayed in cultures of human corneal epithelial cells. Alternatively, investigators have also used intact or dissected bovine or chicken eyeballs obtained from slaughterhouses. Many of the endpoints measured in these whole organ cultures are the same as those measured in vivo, such as corneal opacity and corneal swelling.
Inflammation is frequently a component of chemical-induced eye injury, and there are a number of assays available to examine this parameter. Various biochemical assays detect the presence of mediators released during the inflammatory process, such as arachidonic acid and cytokines. The chorioallantoic membrane (CAM) of the hen’s egg may also be used as an indicator of inflammation. In the CAM assay, a small piece of the shell of an egg containing a 10- to 14-day-old chick embryo is removed to expose the CAM. The chemical is then applied to the CAM and signs of inflammation, such as vascular haemorrhaging, are scored at various times thereafter.
One of the most difficult in vivo processes to assess in vitro is recovery and repair of ocular injury. A newly developed instrument, the silicon microphysiometer, measures small changes in extracellular pH and can be used to monitor cultured cells in real time. This analysis has been shown to correlate fairly well with in vivo recovery and has been used as an in vitro test for this process. This has been a brief overview of the types of tests being employed as alternatives to the Draize test for ocular irritation. It is likely that within the next several years a complete series of in vitro test batteries will be defined and each will be validated for its specific purpose.
The key to regulatory acceptance and implementation of in vitro test methodologies is validation, the process by which the credibility of a candidate test is established for a specific purpose. Efforts to define and coordinate the validation process have been made both in the United States and in Europe. The European Union established the European Centre for the Validation of Alternative Methods (ECVAM) in 1993 to coordinate efforts there and to interact with American organizations such as the Johns Hopkins Centre for Alternatives to Animal Testing (CAAT), an academic centre in the United States, and the Interagency Coordinating Committee for the Validation of Alternative Methods (ICCVAM), composed of representatives from the National Institutes of Health, the US Environmental Protection Agency, the US Food and Drug Administration and the Consumer Product Safety Commission.
Validation of in vitro tests requires substantial organization and planning. There must be consensus among government regulators and industrial and academic scientists on acceptable procedures, and sufficient oversight by a scientific advisory board to ensure that the protocols meet set standards. The validation studies should be performed in a series of reference laboratories using calibrated sets of chemicals from a chemical bank and cells or tissues from a single source. Both intralaboratory repeatability and interlaboratory reproducibility of a candidate test must be demonstrated and the results subjected to appropriate statistical analysis. Once the results from the different components of the validation studies have been compiled, the scientific advisory board can make recommendations on the validity of the candidate test(s) for a specific purpose. In addition, results of the studies should be published in peer-reviewed journals and placed in a database.
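One simple statistic sometimes used when judging intralaboratory repeatability and interlaboratory reproducibility is the coefficient of variation (CV), computed within each laboratory and across laboratory means. The sketch below illustrates the idea only; the laboratory names and replicate scores are invented, and real validation studies use more elaborate statistical designs.

```python
# Illustrative repeatability/reproducibility check for a validation study.
# All data here are fabricated for the example.
from statistics import mean, stdev

def cv(values):
    """Coefficient of variation as a percentage."""
    return 100.0 * stdev(values) / mean(values)

# Replicate scores for one reference chemical measured in three labs
labs = {
    "lab_A": [0.52, 0.48, 0.50],
    "lab_B": [0.55, 0.57, 0.53],
    "lab_C": [0.47, 0.49, 0.51],
}

# Intralaboratory repeatability: CV of replicates within each lab
within = {lab: cv(vals) for lab, vals in labs.items()}

# Interlaboratory reproducibility: CV of the laboratory means
between = cv([mean(vals) for vals in labs.values()])
```

A low within-lab CV with a much larger between-lab CV would point to a protocol that is repeatable but not yet transferable between reference laboratories.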
The definition of the validation process is currently a work in progress. Each new validation study will provide information useful to the design of the next study. International communication and cooperation are essential for the expeditious development of a widely acceptable series of protocols, particularly given the increased urgency imposed by the passage of the EC Cosmetics Directive. This legislation may indeed provide the needed impetus for a serious validation effort to be undertaken. It is only through completion of this process that the acceptance of in vitro methods by the various regulatory communities can commence.
This article has provided a broad overview of the current status of in vitro toxicity testing. The science of in vitro toxicology is relatively young, but it is growing exponentially. The challenge for the years ahead is to incorporate the mechanistic knowledge generated by cellular and molecular studies into the vast inventory of in vivo data to provide a more complete description of toxicological mechanisms as well as to establish a paradigm by which in vitro data may be used to predict toxicity in vivo. It will only be through the concerted efforts of toxicologists and government representatives that the inherent value of these in vitro methods can be realized.
Structure-activity relationship (SAR) analysis is the use of information on the molecular structure of chemicals to predict important characteristics related to persistence, distribution, uptake and absorption, and toxicity. SAR is an alternative method of identifying potentially hazardous chemicals, which holds promise of assisting industries and governments in prioritizing substances for further evaluation or for early-stage decision making for new chemicals. Toxicology is an increasingly expensive and resource-intensive undertaking. Increased concerns over the potential for chemicals to cause adverse effects in exposed human populations have prompted regulatory and health agencies to expand the range and sensitivity of tests to detect toxicological hazards. At the same time, the real and perceived burdens of regulation upon industry have provoked concerns for the practicality of toxicity testing methods and data analysis. At present, the determination of chemical carcinogenicity depends upon lifetime testing of at least two species, both sexes, at several doses, with careful histopathological analysis of multiple organs, as well as detection of preneoplastic changes in cells and target organs. In the United States, the cancer bioassay is estimated to cost in excess of $3 million (1995 dollars).
Even with unlimited financial resources, the burden of testing the approximately 70,000 existing chemicals produced in the world today would exceed the available resources of trained toxicologists. Centuries would be required to complete even a first-tier evaluation of these chemicals (NRC 1984). In many countries ethical concerns over the use of animals in toxicity testing have increased, bringing additional pressures upon the uses of standard methods of toxicity testing. SAR has been widely used in the pharmaceutical industry to identify molecules with potential for beneficial use in treatment (Hansch and Zhang 1993). In environmental and occupational health policy, SAR is used to predict the dispersion of compounds in the physical-chemical environment and to screen new chemicals for further evaluation of potential toxicity. Under the US Toxic Substances Control Act (TSCA), the EPA has, since 1979, used an SAR approach as a “first screen” of new chemicals in the premanufacture notification (PMN) process; Australia uses a similar approach as part of its new chemicals notification (NICNAS) procedure. In the United States, SAR analysis is an important tool for determining whether there is a reasonable basis to conclude that the manufacture, processing, distribution, use or disposal of a substance will present an unreasonable risk of injury to human health or the environment, as required by Section 5(f) of TSCA. On the basis of this finding, EPA can then require actual tests of the substance under Section 6 of TSCA.
Rationale for SAR
The scientific rationale for SAR is based upon the assumption that the molecular structure of a chemical will predict important aspects of its behaviour in physical-chemical and biological systems (Hansch and Leo 1979).
The SAR review process includes identification of the chemical structure, including empirical formulations as well as the pure compound; identification of structurally analogous substances; searching databases and literature for information on structural analogs; and analysis of toxicity and other data on structural analogs. In some rare cases, information on the structure of the compound alone can be sufficient to support some SAR analysis, based upon well-understood mechanisms of toxicity. Several databases on SAR have been compiled, as well as computer-based methods for molecular structure prediction.
With this information, the following endpoints can be estimated with SAR:
It should be noted that SAR methods do not exist for such important health endpoints as carcinogenicity, developmental toxicity, reproductive toxicity, neurotoxicity, immunotoxicity or other target organ effects. This is due to three factors: the lack of a large database upon which to test SAR hypotheses, lack of knowledge of structural determinants of toxic action, and the multiplicity of target cells and mechanisms that are involved in these endpoints (see “The United States approach to risk assessment of reproductive toxicants and neurotoxic agents”). Some limited attempts have been made to utilize SAR for predicting pharmacokinetics, using information on partition coefficients and solubility (Johanson and Naslund 1988). More extensive quantitative SAR has been done to predict P450-dependent metabolism of a range of compounds and binding of dioxin- and PCB-like molecules to the cytosolic “dioxin” receptor (Hansch and Zhang 1993).
SAR has been shown to have varying predictability for some of the endpoints listed above, as shown in table 1. This table presents data from two comparisons of predicted activity with actual results obtained by empirical measurement or toxicity testing. SAR as conducted by US EPA experts performed more poorly for predicting physical-chemical properties than for predicting biological activity, including biodegradation. For toxicity endpoints, SAR performed best for predicting mutagenicity. Ashby and Tennant (1991) in a more extended study also found good predictability of short-term genotoxicity in their analysis of NTP chemicals. These findings are not surprising, given current understanding of molecular mechanisms of genotoxicity (see “Genetic toxicology”) and the role of electrophilicity in DNA binding. In contrast, SAR tended to underpredict systemic and subchronic toxicity in mammals and to overpredict acute toxicity to aquatic organisms.
Table 1. Comparison of SAR and test data: OECD/NTP analyses
| Endpoint | Agreement (%) | Disagreement (%) | Number |
|---|---|---|---|
| Acute mammalian toxicity (LD50) | 80 | 20¹ | 142 |
| Carcinogenicity³: two-year bioassay | 72–95⁴ | — | 301 |
Source: Data from OECD, personal communication, C. Auer, US EPA. Only those endpoints for which comparable SAR predictions and actual test data were available were used in this analysis. NTP data are from Ashby and Tennant 1991.
1 Of concern was the failure by SAR to predict acute toxicity in 12% of the chemicals tested.
2 OECD data, based on Ames test concordance with SAR
3 NTP data, based on genetox assays compared to SAR predictions for several classes of “structurally alerting chemicals”.
4 Concordance varies with class; highest concordance was with aromatic amino/nitro compounds; lowest with “miscellaneous” structures.
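The agreement figures in table 1 are concordance statistics: the percentage of chemicals for which the SAR prediction matched the empirical test result. A minimal sketch of that calculation follows; the prediction/outcome pairs are fabricated for illustration and are not data from the OECD or NTP analyses.

```python
# Concordance (per-cent agreement) between SAR calls and test outcomes,
# as reported in Table 1. The example data below are invented.

def concordance(predicted, observed):
    """Percentage of chemicals where the SAR call matches the test result."""
    matches = sum(p == o for p, o in zip(predicted, observed))
    return 100.0 * matches / len(predicted)

sar_calls    = ["toxic", "toxic", "nontoxic", "nontoxic", "toxic"]
test_results = ["toxic", "nontoxic", "nontoxic", "nontoxic", "toxic"]

print(concordance(sar_calls, test_results))  # 4 of 5 agree -> 80.0
```

Note that raw concordance, as footnote 1 implies, can mask the direction of the errors: an 80% agreement rate says nothing by itself about whether the misses were over- or underpredictions of toxicity.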
For other toxic endpoints, as noted above, SAR has less demonstrable utility. Mammalian toxicity predictions are complicated by the lack of SAR for toxicokinetics of complex molecules. Nevertheless, some attempts have been made to propose SAR principles for complex mammalian toxicity endpoints (for instance, see Bernstein (1984) for an SAR analysis of potential male reproductive toxicants). In most cases, the database is too small to permit rigorous testing of structure-based predictions.
At this point it may be concluded that SAR may be useful mainly for prioritizing the investment of toxicity testing resources or for raising early concerns about potential hazard. Only in the case of mutagenicity is it likely that SAR analysis by itself can be utilized with reliability to inform other decisions. For no endpoint is it likely that SAR can provide the type of quantitative information required for risk assessment purposes as discussed elsewhere in this chapter and Encyclopaedia.
In the 3rd edition of the ILO’s Encyclopaedia, published in 1983, ergonomics was summarized in one article that was only about four pages long. Since the publication of the 3rd edition, there has been a major change in emphasis and in understanding of interrelationships in safety and health: the world is no longer easily classifiable into medicine, safety and hazard prevention. In the last decade almost every branch in the production and service industries has expended great effort in improving productivity and quality. This restructuring process has yielded practical experience which clearly shows that productivity and quality are directly related to the design of working conditions. One direct economic measure of productivity—the costs of absenteeism through illness—is affected by working conditions. Therefore it should be possible to increase productivity and quality and to avoid absenteeism by paying more attention to the design of working conditions.
In sum, the simple hypothesis of modern ergonomics can be stated thus: Pain and exhaustion cause health hazards, wasted productivity and reduced quality, which are measures of the costs and benefits of human work.
This simple hypothesis can be contrasted to occupational medicine which generally restricts itself to establishing the aetiology of occupational diseases. Occupational medicine’s goal is to establish conditions under which the probability of developing such diseases is minimized. Using ergonomic principles these conditions can be most easily formulated in the form of demands and load limitations. Occupational medicine can be summed up as establishing “limitations through medico-scientific studies”. Traditional ergonomics regards its role as one of formulating the methods where, using design and work organization, the limitations established through occupational medicine can be put into practice. Traditional ergonomics could then be described as developing “corrections through scientific studies”, where “corrections” are understood to be all work design recommendations that call for attention to be paid to load limits only in order to prevent health hazards. It is a characteristic of such corrective recommendations that practitioners are finally left alone with the problem of applying them—there is no multidisciplinary team effort.
The original aim of ergonomics, as formulated at its inception in 1857, stands in contrast to this kind of “ergonomics by correction”:
... a scientific approach enabling us to reap, for the benefit of ourselves and others, the best fruits of life’s labour for the minimum effort and maximum satisfaction (Jastrzebowski 1857).
The root of the term “ergonomics” stems from the Greek “nomos”, meaning rule, and “ergon”, meaning work. One could propose that ergonomics should develop “rules” for a more forward-looking, prospective concept of design. In contrast to “corrective ergonomics”, the idea of prospective ergonomics is based on applying ergonomic recommendations which simultaneously take into consideration profitability margins (Laurig 1992).
The basic rules for the development of this approach can be deduced from practical experience and reinforced by the results of occupational hygiene and ergonomics research. In other words, prospective ergonomics means searching for alternatives in work design which prevent fatigue and exhaustion on the part of the working subject in order to promote human productivity (“... for the benefit of ourselves and others”). This comprehensive approach of prospective ergonomics includes workplace and equipment design as well as the design of working conditions determined by an increasing amount of information processing and a changing work organization. Prospective ergonomics is, therefore, an interdisciplinary approach of researchers and practitioners from a wide range of fields united by the same goal, and one part of a general basis for a modern understanding of occupational safety and health (UNESCO 1992).
Based on this understanding, the Ergonomics chapter in the 4th edition of the ILO Encyclopaedia covers the different clusters of knowledge and experiences oriented toward worker characteristics and capabilities, and aimed at an optimum use of the resource “human work” by making work more “ergonomic”, that is, more humane.
The choice of topics and the structure of articles in this chapter follows the structure of typical questions in the field as practised in industry. Beginning with the goals, principles and methods of ergonomics, the articles which follow cover fundamental principles from basic sciences, such as physiology and psychology. Based on this foundation, the next articles introduce major aspects of an ergonomic design of working conditions ranging from work organization to product design. “Designing for everyone” puts special emphasis on an ergonomic approach that is based on the characteristics and capabilities of the worker, a concept often overlooked in practice. The importance and diversity of ergonomics is shown in two examples at the end of the chapter and can also be found in the fact that many other chapters in this edition of the ILO Encyclopaedia are directly related to ergonomics, such as Heat and Cold, Noise, Vibration, Visual Display Units, and virtually all chapters in the sections Accident and Safety Management and Management and Policy.
Design of Production Systems
Many companies invest millions in computer-supported production systems and at the same time do not make full use of their human resources, whose value can be significantly increased through investments in training. In fact, the use of qualified employee potential instead of highly complex automation can not only, in certain circumstances, significantly reduce investment costs, it can also greatly increase flexibility and system capability.
Causes of Inefficient Use of Technology
The improvements which investments in modern technology are intended to make are frequently not even approximately achieved (Strohm, Kuark and Schilling 1993; Ulich 1994). The most important reasons for this are due to problems in the areas of technology, organization and employee qualifications.
Three main causes can be identified for problems with technology:
Problems with organization are primarily attributable to continuous attempts at implementing the latest technology in unsuitable organizational structures. For instance, it makes little sense to introduce third, fourth and fifth generation computers into second generation organizations. But this is exactly what many companies do (Savage and Appleton 1988). In many companies, a radical restructuring of the organization is a precondition for the successful use of new technology. This particularly includes an examination of the concepts of production planning and control. Ultimately, local self-control by qualified operators can in certain circumstances be significantly more efficient and economical than a technically highly developed production planning and control system.
Problems with the qualifications of employees primarily arise because a large number of companies do not recognize the need for qualification measures in conjunction with the introduction of computer-supported production systems. In addition, training is too frequently regarded as a cost factor to be controlled and minimized, rather than as a strategic investment. In fact, system downtime and the resulting costs can often be effectively reduced by allowing faults to be diagnosed and remedied on the basis of operators’ competence and system-specific knowledge and experience. This is particularly the case in tightly coupled production facilities (Köhler et al. 1989). The same applies to introducing new products or product variants. Many examples of inefficient, excessive use of technology testify to these relationships.
The consequence of the analysis briefly presented here is that the introduction of computer-supported production systems only promises success if it is integrated into an overall concept which seeks to jointly optimize the use of technology, the structure of the organization and the enhancement of staff qualifications.
From the Task to the Design of Socio-Technical Systems
Work-related psychological concepts of production design are based on the primacy of the task. On the one hand, the task forms the interface between individual and organization (Volpert 1987). On the other hand, the task links the social subsystem with the technical subsystem. “The task must be the point of articulation between the social and technical system—linking the job in the technical system with its correlated role behaviour, in the social system” (Blumberg 1988).
This means that a socio-technical system, for example, a production island, is primarily defined by the task which it has to perform. The distribution of work between human and machine plays a central role, because it decides whether the person “functions” as the long arm of the machine with a function leftover in an automation “gap” or whether the machine functions as the long arm of the person, with a tool function supporting human capabilities and competence. We refer to these opposing positions as “technology-oriented” and “work-oriented” (Ulich 1994).
The Concept of Complete Task
The principle of complete activity (Hacker 1986) or complete task plays a central role in work-related psychological concepts for defining work tasks and for dividing up tasks between human and machine. Complete tasks are those “over which the individual has considerable personal control” and that “induce strong forces within the individual to complete or to continue them”. Complete tasks contribute to the “development of what has been described ... as ‘task orientation’—that is, a state of affairs in which the individual’s interest is aroused, engaged and directed by the character of the task” (Emery 1959). Figure 1 summarizes characteristics of completeness which must be taken into account for measures geared towards work-oriented design of production systems.
These indications of the consequences arising from realizing the principle of the complete task make two things clear: (1) in many cases—probably even the majority of cases—complete tasks in the sense described in figure 1 can only be structured as group tasks on account of the resulting complexity and the associated scope; (2) restructuring of work tasks—particularly when it is linked to introducing group work—requires their integration into a comprehensive restructuring concept which covers all levels of the company.
Table 1. Work-oriented principles for production structuring
Skilled production work¹
1 Taking into account the principle of differential work design.
Source: Ulich 1994.
Possibilities for realizing the principles for production structuring outlined in table 1 are illustrated by the proposal for restructuring a production company shown in figure 2. This proposal, which was unanimously approved both by those responsible for production and by the project group formed for the purpose of restructuring, also demonstrates a fundamental turning away from Tayloristic concepts of labour and authority divisions. The examples of many companies show that the restructuring of work and organization structures on the basis of such models is able to meet both work psychological criteria of promoting health and personality development and the demand for long-term economic efficiency (see Ulich 1994).
The line of argument favoured here—only very briefly outlined for reasons of space—seeks to make three things clear:
In the previous sections types of work organization were described that have as one basic characteristic the democratization at lower levels of an organization’s hierarchy through increased autonomy and decision latitude regarding work content as well as working conditions on the shop-floor. In this section, democratization is approached from a different angle by looking at participative decision-making in general. First, a definitional framework for participation is presented, followed by a discussion of research on the effects of participation. Finally, participative systems design is looked at in some detail.
Definitional framework for participation
Organizational development, leadership, systems design, and labour relations are examples of the variety of tasks and contexts where participation is considered relevant. A common denominator which can be regarded as the core of participation is the opportunity for individuals and groups to promote their interests through influencing the choice between alternative actions in a given situation (Wilpert 1989). In order to describe participation in more detail, a number of dimensions are necessary, however. Frequently suggested dimensions are (a) formal-informal, (b) direct-indirect, (c) degree of influence and (d) content of decision (e.g., Dachler and Wilpert 1978; Locke and Schweiger 1979). Formal participation refers to participation within legally or otherwise prescribed rules (e.g., bargaining procedures, guidelines for project management), while informal participation is based on non-prescribed exchanges, for example, between supervisor and subordinate. Direct participation allows for direct influence by the individuals concerned, whereas indirect participation functions through a system of representation. Degree of influence is usually described by means of a scale ranging from “no information to employees about a decision”, through “advance information to employees” and “consultation with employees” to “common decision of all parties involved”. As regards the giving of advance information without any consultation or common decision-making, some authors argue that this is not a low level of participation at all, but merely a form of “pseudo-participation” (Wall and Lischeron 1977). Finally, the content area for participative decision-making can be specified, for example, technological or organizational change, labour relations, or day-to-day operational decisions.
A classification scheme quite different from those derived from the dimensions presented so far was developed by Hornby and Clegg (1992). Based on work by Wall and Lischeron (1977), they distinguish three aspects of participative processes:
They then used these aspects to complement a framework suggested by Gowler and Legge (1978), which describes participation as a function of two organizational variables, namely, type of structure (mechanistic versus organic) and type of process (stable versus unstable). As this model includes a number of assumptions about participation and its relationship to organization, it cannot be used to classify general types of participation. It is presented here as one attempt to define participation in a broader context (see table 2). (In the last section of this article, Hornby and Clegg’s study (1992) will be discussed, which also aimed at testing the model’s assumptions.)
Table 2. Participation in organizational context
Source: Adapted from Hornby and Clegg 1992.
An important dimension usually not included in classifications for participation is the organizational goal behind choosing a participative strategy (Dachler and Wilpert 1978). Most fundamentally, participation can take place in order to comply with a democratic norm, irrespective of its influence on the effectiveness of the decision-making process and the quality of the decision outcome and implementation. On the other hand, a participative procedure can be chosen to benefit from the knowledge and experience of the individuals involved or to ensure acceptance of a decision. Often it is difficult to identify the objectives behind choosing a participative approach to a decision and often several objectives will be found at the same time, so that this dimension cannot be easily used to classify participation. However, for understanding participative processes it is an important dimension to keep in mind.
Research on the effects of participation
A widely shared assumption holds that satisfaction as well as productivity gains can be achieved by providing the opportunity for direct participation in decision-making. Overall, research has supported this assumption, but the evidence is not unequivocal and many of the studies have been criticized on theoretical and methodological grounds (Cotton et al. 1988; Locke and Schweiger 1979; Wall and Lischeron 1977). Cotton et al. (1988) argued that inconsistent findings are due to differences in the form of participation studied; for instance, informal participation and employee ownership are associated with high productivity and satisfaction whereas short-term participation is ineffective in both respects. Although their conclusions were strongly criticized (Leana, Locke and Schweiger 1990), there is agreement that participation research is generally characterized by a number of deficiencies, ranging from conceptual problems like those mentioned by Cotton et al. (1988) to methodological issues like variations in results based on different operationalizations of the dependent variables (e.g., Wagner and Gooding 1987).
To exemplify the difficulties of participation research, the classic study by Coch and French (1948) is briefly described, followed by the critique of Bartlem and Locke (1981). The focus of the former study was overcoming resistance to change by means of participation. Operators in a textile plant where frequent transfers between work tasks occurred were given the opportunity to participate in the design of their new jobs to varying degrees. One group of operators participated in the decisions (detailed working procedures for new jobs and piece rates) through chosen representatives, that is, several operators of their group. In two smaller groups, all operators participated in those decisions and a fourth group served as control with no participation allowed. Previously it had been found in the plant that most operators resented being transferred and were slower in relearning their new jobs as compared with learning their first job in the plant and that absenteeism and turnover among transferred operators was higher than among operators not recently transferred.
This occurred despite the fact that a transfer bonus was given to compensate for the initial loss in piece-rate earnings after a transfer to a new job. Comparing the three experimental conditions, it was found that the group with no participation remained at a low level of production—which had been set as the group standard—for the first month after the transfer, while the groups with full participation recovered their former productivity within a few days and even exceeded it by the end of the month. The third group, which participated through chosen representatives, did not recover as fast, but reached its former productivity after a month. (It also had insufficient material to work on for the first week, however.) No turnover occurred in the groups with participation, and little aggression towards management was observed. Turnover in the group without participation was 17% and the attitude towards management was generally hostile. The group with no participation was broken up after one month and brought together again after another two and one-half months to work on a new job; this time its members were given the opportunity to participate in the design of their job. They then showed the same pattern of recovery and increased productivity as the groups with participation in the first experiment. The results were explained by Coch and French on the basis of a general model of resistance to change derived from work by Lewin (1951, see below).
Bartlem and Locke (1981) argued that these findings could not be interpreted as support for the positive effects of participation because there were important differences between the groups as regards the explanation of the need for changes in the introductory meetings with management, the amount of training received, the way the time studies were carried out to set the piece rate, the amount of work available and group size. They assumed that perceived fairness of pay rates and general trust in management contributed to the better performance of the participation groups, not participation per se.
In addition to the problems associated with research on the effects of participation, very little is known about the processes that lead to these effects (e.g., Wilpert 1989). In a longitudinal study on the effects of participative job design, Baitsch (1985) described in detail processes of competence development in a number of shop-floor employees. His study can be linked to Deci’s (1975) theory of intrinsic motivation based on the need for being competent and self-determining. A theoretical framework focusing on the effects of participation on resistance to change was suggested by Lewin (1951), who argued that social systems attain a quasi-stationary equilibrium which is disturbed by any attempt at change. For the change to be carried through successfully, forces in favour of the change must be stronger than the resisting forces. Participation helps in reducing the resisting forces as well as in increasing the driving forces because reasons for resistance can be openly discussed and dealt with, and individual concerns and needs can be integrated into the proposed change. Additionally, Lewin assumed that common decisions resulting from participatory change processes provide the link between the motivation for change and the actual changes in behaviour.
Participation in systems design
Given the—albeit not completely consistent—empirical support for the effectiveness of participation, as well as its ethical underpinnings in industrial democracy, there is widespread agreement that for the purposes of systems design a participative strategy should be followed (Greenbaum and Kyng 1991; Majchrzak 1988; Scarbrough and Corbett 1992). Additionally, a number of case studies on participative design processes have demonstrated the specific advantages of participation in systems design, for example, regarding the quality of the resulting design, user satisfaction, and acceptance (i.e., actual use) of the new system (Mumford and Henshall 1979; Spinas 1989; Ulich et al. 1991).
The important question then is not the if, but the how of participation. Scarbrough and Corbett (1992) provided an overview of various types of participation in the various stages of the design process (see table 3). As they point out, user involvement in the actual design of technology is rather rare and often does not extend beyond information distribution. Participation mostly occurs in the later stages of implementation and optimization of the technical system and during the development of socio-technical design options, that is, options for organizational and job design in combination with options for the use of the technical system.
Table 3. User participation in the technology process
Type of participation
Phases of technology process
Trade union consultation
New technology agreements
Informal job redesign
Adapted from Scarbrough and Corbett 1992.
Besides resistance in managers and engineers to the involvement of users in the design of technical systems and potential restrictions embedded in the formal participation structure of a company, an important difficulty concerns the need for methods that allow the discussion and evaluation of systems that do not yet exist (Grote 1994). In software development, usability labs can help to overcome this difficulty as they provide an opportunity for early testing by future users.
In looking at the process of systems design, including participative processes, Hirschheim and Klein (1989) have stressed the effects of implicit and explicit assumptions of system developers and managers about basic topics such as the nature of social organization, the nature of technology and their own role in the development process. Whether system designers see themselves as experts, catalysts or emancipators will greatly influence the design and implementation process. Also, as mentioned before, the broader organizational context in which participative design takes place has to be taken into account. Hornby and Clegg (1992) provided some evidence for the relationship between general organizational characteristics and the form of participation chosen (or, more precisely, the form evolving in the course of system design and implementation). They studied the introduction of an information system which was carried out within a participative project structure and with explicit commitment to user participation. However, users reported that they had had little information about the changes supposed to take place and low levels of influence over system design and related questions like job design and job security. This finding was interpreted in terms of the mechanistic structure and unstable processes of the organization that fostered “arbitrary” participation instead of the desired open participation (see table 2).
In conclusion, there is sufficient evidence demonstrating the benefits of participative change strategies. However, much still needs to be learned about the underlying processes and influencing factors that bring about, moderate or prevent these positive effects.
Work is essential for life, development and personal fulfilment. Unfortunately, indispensable activities such as food production, extraction of raw materials, manufacturing of goods, energy production and services involve processes, operations and materials which can, to a greater or lesser extent, create hazards to the health of workers and those in nearby communities, as well as to the general environment.
However, the generation and release of harmful agents in the work environment can be prevented through adequate hazard control interventions, which not only protect workers’ health but also limit the damage to the environment often associated with industrialization. If a harmful chemical is eliminated from a work process, it will neither affect the workers nor spread beyond the workplace to pollute the environment.
The profession that aims specifically at the prevention and control of hazards arising from work processes is occupational hygiene. The goals of occupational hygiene include the protection and promotion of workers’ health, the protection of the environment and contribution to a safe and sustainable development.
The need for occupational hygiene in the protection of workers’ health cannot be overemphasized. Even when feasible, the diagnosis and the cure of an occupational disease will not prevent further occurrences, if exposure to the aetiological agent does not cease. So long as the unhealthy work environment remains unchanged, its potential to impair health remains. Only the control of health hazards can break the vicious circle illustrated in figure 1.
Figure 1. Interactions between people and the environment
However, preventive action should start much earlier, not only before the manifestation of any health impairment but even before exposure actually occurs. The work environment should be under continuous surveillance so that hazardous agents and factors can be detected and removed, or controlled, before they cause any ill effects; this is the role of occupational hygiene.
Furthermore, occupational hygiene may also contribute to a safe and sustainable development, that is “to ensure that (development) meets the needs of the present without compromising the ability of the future generations to meet their own needs” (World Commission on Environment and Development 1987). Meeting the needs of the present world population without depleting or damaging the global resource base, and without causing adverse health and environmental consequences, requires knowledge and means to influence action (WHO 1992a); when related to work processes this is closely related to occupational hygiene practice.
Occupational health requires a multidisciplinary approach and involves fundamental disciplines, one of which is occupational hygiene, along with others which include occupational medicine and nursing, ergonomics and work psychology. A schematic representation of the scopes of action for occupational physicians and occupational hygienists is presented in figure 2.
Figure 2. Scopes of action for occupational physicians and occupational hygienists.
It is important that decision makers, managers and workers themselves, as well as all occupational health professionals, understand the essential role that occupational hygiene plays in the protection of workers’ health and of the environment, as well as the need for specialized professionals in this field. The close link between occupational and environmental health should also be kept in mind, since the prevention of pollution from industrial sources, through the adequate handling and disposal of hazardous effluents and waste, should be started at the workplace level. (See “Evaluation of the work environment”).
Concepts and Definitions
Occupational hygiene is the science of the anticipation, recognition, evaluation and control of hazards arising in or from the workplace, and which could impair the health and well-being of workers, also taking into account the possible impact on the surrounding communities and the general environment.
Definitions of occupational hygiene may be presented in different ways; however, they all have essentially the same meaning and aim at the same fundamental goal of protecting and promoting the health and well-being of workers, as well as protecting the general environment, through preventive actions in the workplace.
Occupational hygiene is not yet universally recognized as a profession; however, in many countries, framework legislation is emerging that will lead to its establishment.
An occupational hygienist is a professional able to:
It should be kept in mind that a profession consists not only of a body of knowledge, but also of a Code of Ethics; national occupational hygiene associations, as well as the International Occupational Hygiene Association (IOHA), have their own Codes of Ethics (WHO 1992b).
Occupational hygiene technician
An occupational hygiene technician is “a person competent to carry out measurements of the work environment” but not “to make the interpretations, judgements, and recommendations required from an occupational hygienist”. The necessary level of competence may be obtained in a comprehensive or limited field (WHO 1992b).
International Occupational Hygiene Association (IOHA)
IOHA was formally established during a meeting in Montreal on June 2, 1987. At present, IOHA comprises 19 national occupational hygiene associations, representing over nineteen thousand members from seventeen countries.
The primary objective of IOHA is to promote and develop occupational hygiene throughout the world, at a high level of professional competence, through means that include the exchange of information among organizations and individuals, the further development of human resources and the promotion of a high standard of ethical practice. IOHA activities include scientific meetings and publication of a newsletter. Members of affiliated associations are automatically members of IOHA; it is also possible to join as an individual member, for those in countries where there is not yet a national association.
In addition to an accepted definition of occupational hygiene and of the role of the occupational hygienist, there is a need for the establishment of certification schemes to ensure acceptable standards of occupational hygiene competence and practice. Certification refers to a formal scheme based on procedures for establishing and maintaining the knowledge, skills and competence of professionals (Burdorf 1995).
IOHA has promoted a survey of existing national certification schemes (Burdorf 1995), together with recommendations for the promotion of international cooperation in assuring the quality of professional occupational hygienists, which include the following:
Other suggestions in this report include items such as: “reciprocity” and “cross-acceptance of national designations, ultimately aiming at an umbrella scheme with one internationally accepted designation”.
The Practice of Occupational Hygiene
The classical steps in occupational hygiene practice are:
The ideal approach to hazard prevention is “anticipated and integrated preventive action”, which should include:
The importance of anticipating and preventing all types of environmental pollution cannot be overemphasized. There is, fortunately, an increasing tendency to consider new technologies from the point of view of the possible negative impacts and their prevention, from the design and installation of the process to the handling of the resulting effluents and waste, in the so-called cradle-to-grave approach. Environmental disasters, which have occurred in both developed and developing countries, could have been avoided by the application of appropriate control strategies and emergency procedures in the workplace.
Economic aspects should be viewed in broader terms than the usual initial cost consideration; more expensive options that offer good health and environmental protection may prove to be more economical in the long run. The protection of workers’ health and of the environment must start much earlier than it usually does. Technical information and advice on occupational and environmental hygiene should always be available to those designing new processes, machinery, equipment and workplaces. Unfortunately such information is often made available much too late, when the only solution is costly and difficult retrofitting, or worse, when consequences have already been disastrous.
Recognition of hazards
Recognition of hazards is a fundamental step in the practice of occupational hygiene, indispensable for the adequate planning of hazard evaluation and control strategies, as well as for the establishment of priorities for action. For the adequate design of control measures, it is also necessary to physically characterize contaminant sources and contaminant propagation paths.
The recognition of hazards leads to the determination of:
The identification of hazardous agents, their sources and the conditions of exposure requires extensive knowledge and careful study of work processes and operations, raw materials and chemicals used or generated, final products and eventual by-products, as well as of possibilities for the accidental formation of chemicals, decomposition of materials, combustion of fuels or the presence of impurities. The recognition of the nature and potential magnitude of the biological effects that such agents may cause if overexposure occurs requires knowledge of and access to toxicological information. International sources of information in this respect include the International Programme on Chemical Safety (IPCS), the International Agency for Research on Cancer (IARC) and the United Nations Environment Programme’s International Register of Potentially Toxic Chemicals (UNEP-IRPTC).
Agents which pose health hazards in the work environment include airborne contaminants; non-airborne chemicals; physical agents, such as heat and noise; biological agents; ergonomic factors, such as inadequate lifting procedures and working postures; and psychosocial stresses.
Occupational hygiene evaluations
Occupational hygiene evaluations are carried out to assess workers’ exposure, as well as to provide information for the design, or to test the efficiency, of control measures.
Evaluation of workers’ exposure to occupational hazards, such as airborne contaminants, physical and biological agents, is covered elsewhere in this chapter. Nevertheless, some general considerations are provided here for a better understanding of the field of occupational hygiene.
It is important to keep in mind that hazard evaluation is not an end in itself, but must be considered as part of a much broader procedure that starts with the realization that a certain agent, capable of causing health impairment, may be present in the work environment, and concludes with the control of this agent so that it will be prevented from causing harm. Hazard evaluation paves the way to, but does not replace, hazard prevention.
Exposure assessment aims at determining how much of an agent workers have been exposed to, how often and for how long. Guidelines in this respect have been established both at the national and international level—for example, EN 689, prepared by the Comité Européen de Normalisation (European Committee for Standardization) (CEN 1994).
In the evaluation of exposure to airborne contaminants, the most usual procedure is the assessment of inhalation exposure, which requires the determination of the air concentration of the agent to which workers are exposed (or, in the case of airborne particles, the air concentration of the relevant fraction, e.g., the “respirable fraction”) and the duration of the exposure. However, if routes other than inhalation contribute appreciably to the uptake of a chemical, an erroneous judgement may be made by looking only at the inhalation exposure. In such cases, total exposure has to be assessed, and a very useful tool for this is biological monitoring.
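The combination of air concentration and exposure duration described above is usually summarized as a time-weighted average (TWA). A minimal sketch of that calculation is given below; the function name, the 8-hour reference period and the sample figures are illustrative assumptions, not values taken from any particular standard.

```python
# Hypothetical illustration: an 8-hour time-weighted average (TWA)
# inhalation exposure computed from task-based air samples.

def twa_exposure(samples, reference_hours=8.0):
    """samples: list of (concentration in mg/m3, duration in hours) pairs.
    Returns the TWA over the reference period; any unsampled time is
    assumed to contribute zero exposure."""
    total = sum(conc * hours for conc, hours in samples)
    return total / reference_hours

# Three task-based samples covering a full shift (illustrative figures)
samples = [(2.0, 4.0), (6.0, 2.0), (1.0, 2.0)]
print(twa_exposure(samples))  # (2*4 + 6*2 + 1*2) / 8 = 2.75 mg/m3
```

A result like 2.75 mg/m³ is what would then be compared with an occupational exposure limit expressed over the same reference period.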
The practice of occupational hygiene is concerned with three kinds of situations:
A primary reason for determining whether there is overexposure to a hazardous agent in the work environment, is to decide whether interventions are required. This often, but not necessarily, means establishing whether there is compliance with an adopted standard, which is usually expressed in terms of an occupational exposure limit. The determination of the “worst exposure” situation may be enough to fulfil this purpose. Indeed, if exposures are expected to be either very high or very low in relation to accepted limit values, the accuracy and precision of quantitative evaluations can be lower than when the exposures are expected to be closer to the limit values. In fact, when hazards are obvious, it may be wiser to invest resources initially on controls and to carry out more precise environmental evaluations after controls have been implemented.
Follow-up evaluations are often necessary, particularly if the need existed to install or improve control measures or if changes in the processes or materials utilized were foreseen. In these cases, quantitative evaluations have an important surveillance role in:
Whenever an occupational hygiene survey is carried out in connection with an epidemiological study in order to obtain quantitative data on relationships between exposure and health effects, the exposure must be characterized with a high level of accuracy and precision. In this case, all exposure levels must be adequately characterized, since it would not be enough, for example, to characterize only the worst case exposure situation. It would be ideal, although difficult in practice, to always keep precise and accurate exposure assessment records since there may be a future need to have historical exposure data.
In order to ensure that evaluation data is representative of workers’ exposure, and that resources are not wasted, an adequate sampling strategy, accounting for all possible sources of variability, must be designed and followed. Sampling strategies, as well as measurement techniques, are covered in “Evaluation of the work environment”.
Interpretation of results
The degree of uncertainty in the estimation of an exposure parameter, for example, the true average concentration of an airborne contaminant, is determined through statistical treatment of the results from measurements (e.g., sampling and analysis). The level of confidence in the results will depend on the coefficient of variation of the “measuring system” and on the number of measurements. Once there is acceptable confidence, the next step is to consider the health implications of the exposure: what does it mean for the health of the exposed workers now? In the near future? Over their working life? Will there be an impact on future generations?
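As a rough sketch of such a statistical treatment, the snippet below estimates the mean of a small set of concentration measurements together with a confidence interval. It uses a simple normal approximation for illustration only; a real evaluation would follow a sampling standard such as EN 689, and workplace concentration data are often treated as lognormal. All figures are hypothetical.

```python
# Illustrative only: normal-approximation confidence interval for the
# true mean airborne concentration, from a small set of measurements.
from statistics import mean, stdev, NormalDist

def mean_with_ci(measurements, confidence=0.95):
    m = mean(measurements)
    s = stdev(measurements)                        # sample standard deviation
    z = NormalDist().inv_cdf(0.5 + confidence / 2) # two-sided critical value
    half_width = z * s / len(measurements) ** 0.5
    return m, (m - half_width, m + half_width)

conc = [1.8, 2.4, 2.1, 2.6, 2.0]  # mg/m3, hypothetical results
m, (lo, hi) = mean_with_ci(conc)
print(f"mean {m:.2f} mg/m3, 95% CI ({lo:.2f}, {hi:.2f})")
```

More measurements, or a less variable measuring system, narrow the interval and hence increase the confidence that can be placed in the estimated exposure.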
The evaluation process is only completed when results from measurements are interpreted in view of data (sometimes referred to as “risk assessment data”) derived from experimental toxicology, epidemiological and clinical studies and, in certain cases, clinical trials. It should be clarified that the term risk assessment has been used in connection with two types of assessments—the assessment of the nature and extent of risk resulting from exposure to chemicals or other agents, in general, and the assessment of risk for a particular worker or group of workers, in a specific workplace situation.
In the practice of occupational hygiene, exposure assessment results are often compared with adopted occupational exposure limits which are intended to provide guidance for hazard evaluation and for setting target levels for control. Exposure in excess of these limits requires immediate remedial action by the improvement of existing control measures or implementation of new ones. In fact, preventive interventions should be made at the “action level”, which varies with the country (e.g., one-half or one-fifth of the occupational exposure limit). A low action level is the best assurance of avoiding future problems.
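The comparison of an exposure result with the limit and with an action level can be sketched as a simple decision rule. The function below is a hypothetical illustration, not a regulatory procedure; the one-half fraction is only one of the values used, as noted above, since the action level varies with the country.

```python
# Hypothetical decision rule: comparing a measured exposure with an
# occupational exposure limit (OEL) and an "action level" defined as a
# fraction of that limit (here one-half; the fraction varies by country).

def classify_exposure(measured, oel, action_fraction=0.5):
    if measured > oel:
        return "above OEL: immediate remedial action required"
    if measured > action_fraction * oel:
        return "above action level: preventive intervention advised"
    return "below action level"

print(classify_exposure(2.75, oel=5.0))  # action level = 2.5 mg/m3
```

With an illustrative OEL of 5.0 mg/m³, a measured 2.75 mg/m³ is below the limit but above the action level, so preventive intervention would already be advised.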
Comparison of exposure assessment results with occupational exposure limits is a simplification, since, among other limitations, many factors which influence the uptake of chemicals (e.g., individual susceptibilities, physical activity and body build) are not accounted for by this procedure. Furthermore, in most workplaces there is simultaneous exposure to many agents; hence a very important issue is that of combined exposures and agent interactions, because the health consequences of exposure to a certain agent alone may differ considerably from the consequences of exposure to this same agent in combination with others, particularly if there is synergism or potentiation of effects.
Measurements for control
Measurements with the purpose of investigating the presence of agents and the patterns of exposure parameters in the work environment can be extremely useful for the planning and design of control measures and work practices. The objectives of such measurements include:
Direct-reading instruments are extremely useful for control purposes, particularly those which can be used for continuous sampling and reflect what is happening in real time, thus disclosing exposure situations which might not otherwise be detected and which need to be controlled. Examples of such instruments include: photo-ionization detectors, infrared analysers, aerosol meters and detector tubes. When sampling to obtain a picture of the behaviour of contaminants, from the source throughout the work environment, accuracy and precision are not as critical as they would be for exposure assessment.
Recent developments in this type of measurement for control purposes include visualization techniques, one of which is the Picture Mix Exposure (PIMEX) method (Rosen 1993). It combines a video image of the worker with a scale showing airborne contaminant concentrations, which are continuously measured at the breathing zone with a real-time monitoring instrument, making it possible to visualize how the concentration varies while the task is performed. This provides an excellent tool for comparing the relative efficacy of different control measures, such as ventilation and work practices, thus contributing to better design.
Measurements are also needed to assess the efficiency of control measures. In this case, source sampling or area sampling are convenient, alone or in addition to personal sampling, for the assessment of workers’ exposure. In order to assure validity, the locations for “before” and “after” sampling (or measurements) and the techniques used should be the same, or equivalent, in sensitivity, accuracy and precision.
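The efficiency assessment described above reduces, in its simplest form, to comparing matched “before” and “after” measurements taken at the same locations with the same technique. A minimal sketch follows; the function name and all figures are assumed for illustration.

```python
# Illustrative calculation: efficiency of a control measure estimated from
# matched "before" and "after" measurements taken at the same locations
# with the same (or an equivalent) technique.

def control_efficiency(before, after):
    """Percent reduction in mean concentration after the control measure."""
    mean_before = sum(before) / len(before)
    mean_after = sum(after) / len(after)
    return 100.0 * (mean_before - mean_after) / mean_before

before = [4.0, 5.0, 6.0]  # mg/m3 before local exhaust ventilation (assumed)
after = [1.0, 1.5, 0.5]   # mg/m3 after installation (assumed)
print(f"{control_efficiency(before, after):.0f}% reduction")
```

Such a before/after comparison is only valid if, as noted above, the sampling locations and techniques are the same or equivalent in sensitivity, accuracy and precision.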
Hazard prevention and control
The primary goal of occupational hygiene is the implementation of appropriate hazard prevention and control measures in the work environment. Standards and regulations, if not enforced, are meaningless for the protection of workers’ health, and enforcement usually requires both monitoring and control strategies. The absence of legally established standards should not be an obstacle to the implementation of the necessary measures to prevent harmful exposures or control them to the lowest level feasible. When serious hazards are obvious, control should be recommended, even before quantitative evaluations are carried out. It may sometimes be necessary to change the classical concept of “recognition-evaluation-control” to “recognition-control-evaluation”, or even to “recognition-control”, if capabilities for evaluation of hazards do not exist. Some examples of hazards in obvious need of action without the necessity of prior environmental sampling are electroplating carried out in an unventilated, small room, or using a jackhammer or sand-blasting equipment with no environmental controls or protective equipment. For such recognized health hazards, the immediate need is control, not quantitative evaluation.
Preventive action should in some way interrupt the chain by which the hazardous agent—a chemical, dust, a source of energy—is transmitted from the source to the worker. There are three major groups of control measures: engineering controls, work practices and personal measures.
The most efficient hazard prevention approach is the application of engineering control measures, which prevent occupational exposures by managing the work environment, thus decreasing the need for initiatives on the part of workers or potentially exposed persons. Engineering measures usually require process modifications or mechanical structures and involve technical measures that eliminate or reduce the use, generation or release of hazardous agents at their source. When source elimination is not possible, engineering measures should be designed to prevent or reduce the spread of hazardous agents into the work environment by:
Control interventions which involve some modification of the source are the best approach because the harmful agent can be eliminated or reduced in concentration or intensity. Source reduction measures include substitution of materials, substitution/modification of processes or equipment and better maintenance of equipment.
When source modifications are not feasible, or are not sufficient to attain the desired level of control, then the release and dissemination of hazardous agents in the work environment should be prevented by interrupting their transmission path through measures such as isolation (e.g., closed systems, enclosures), local exhaust ventilation, barriers and shields, isolation of workers.
Other measures aiming at reducing exposures in the work environment include adequate workplace design, dilution or displacement ventilation, good housekeeping and adequate storage. Labelling and warning signs can assist workers in safe work practices. Monitoring and alarm systems may be required in a control programme. Monitors for carbon monoxide around furnaces, for hydrogen sulphide in sewage work, and for oxygen deficiency in closed spaces are some examples.
Work practices are an important part of control. For example, a worker’s posture—such as whether he or she bends over the work—may affect the conditions of exposure (e.g., the position of the breathing zone in relation to the contaminant source, or the possibility of skin absorption).
Lastly, occupational exposure can be avoided or reduced by placing a protective barrier on the worker, at the critical entry point for the harmful agent in question (mouth, nose, skin, ear)—that is, the use of personal protective devices. It should be pointed out that all other possibilities of control should be explored before considering the use of personal protective equipment, as this is the least satisfactory means for routine control of exposures, particularly to airborne contaminants.
Other personal preventive measures include education and training, personal hygiene and limitation of exposure time.
Continuous evaluations, through environmental monitoring and health surveillance, should be part of any hazard prevention and control strategy.
Appropriate control technology for the work environment must also encompass measures for the prevention of environmental pollution (air, water, soil), including adequate management of hazardous waste.
Although most of the control principles mentioned here apply to airborne contaminants, many are also applicable to other types of hazards. For example, a process can be modified to produce fewer air contaminants, or to produce less noise or less heat. An isolating barrier can separate workers from a source of noise, heat or radiation.
Far too often prevention dwells on the most widely known measures, such as local exhaust ventilation and personal protective equipment, without proper consideration of other valuable control options, such as alternative cleaner technologies, substitution of materials, modification of processes, and good work practices. It often happens that work processes are regarded as unchangeable when, in reality, changes can be made which effectively prevent or at least reduce the associated hazards.
Hazard prevention and control in the work environment requires knowledge and ingenuity. Effective control does not necessarily require very costly and complicated measures. In many cases, hazard control can be achieved through appropriate technology, which can be as simple as a piece of impervious material between the naked shoulder of a dock worker and a bag of toxic material that can be absorbed through the skin. It can also consist of simple improvements such as placing a movable barrier between an ultraviolet source and a worker, or training workers in safe work practices.
Aspects to be considered when selecting appropriate control strategies and technology, include the type of hazardous agent (nature, physical state, health effects, routes of entry into the body), type of source(s), magnitude and conditions of exposure, characteristics of the workplace and relative location of workstations.
The required skills and resources for the correct design, implementation, operation, evaluation and maintenance of control systems must be ensured. Systems such as local exhaust ventilation must be evaluated after installation and routinely checked thereafter. Only regular monitoring and maintenance can ensure continued efficiency, since even well-designed systems may lose their initial performance if neglected.
Control measures should be integrated into hazard prevention and control programmes, with clear objectives and efficient management, involving multidisciplinary teams made up of occupational hygienists and other occupational health and safety staff, production engineers, management and workers. Programmes must also include aspects such as hazard communication, education and training covering safe work practices and emergency procedures.
Health promotion aspects should also be included, since the workplace is an ideal setting for promoting healthy lifestyles in general and for alerting workers to the dangers of hazardous non-occupational exposures caused, for example, by shooting without adequate protection, or by smoking.
The Links among Occupational Hygiene, Risk Assessment and Risk Management
Risk assessment is a methodology that aims at characterizing the types of health effects expected as a result of a certain exposure to a given agent, as well as providing estimates of the probability of occurrence of these health effects at different levels of exposure. It is also used to characterize specific risk situations. It involves hazard identification, the establishment of exposure-effect relationships and exposure assessment, leading to risk characterization.
The first step refers to the identification of an agent—for example, a chemical—as causing a harmful health effect (e.g., cancer or systemic poisoning). The second step establishes how much exposure causes how much of a given effect in how many of the exposed persons. This knowledge is essential for the interpretation of exposure assessment data.
Exposure assessment is part of risk assessment, both when obtaining data to characterize a risk situation and when obtaining data for the establishment of exposure-effect relationships from epidemiological studies. In the latter case, the exposure that led to a certain occupational or environmentally caused effect has to be accurately characterized to ensure the validity of the correlation.
Although risk assessment is fundamental to many decisions which are taken in the practice of occupational hygiene, it has limited effect in protecting workers’ health, unless translated into actual preventive action in the workplace.
Risk assessment is a dynamic process, as new knowledge often discloses harmful effects of substances until then considered relatively harmless; therefore the occupational hygienist must have, at all times, access to up-to-date toxicological information. Another implication is that exposures should always be controlled to the lowest feasible level.
Figure 3. Elements of risk assessment.
Risk management in the work environment
It is not always feasible to eliminate all agents that pose occupational health risks because some are inherent to work processes that are indispensable or desirable; however, risks can and must be managed.
Risk assessment provides a basis for risk management. However, while risk assessment is a scientific procedure, risk management is more pragmatic, involving decisions and actions that aim at preventing, or reducing to acceptable levels, the occurrence of agents which may pose hazards to the health of workers, surrounding communities and the environment, also accounting for the socio-economic and public health context.
Risk management takes place at different levels; decisions and actions taken at the national level pave the way for the practice of risk management at the workplace level.
Risk management at the workplace level requires information and knowledge on:
to serve as a basis for decisions which include:
and which should lead to actions such as:
Traditionally, the profession responsible for most of these decisions and actions in the workplace is occupational hygiene.
One key decision in risk management, that of acceptable risk (what effect can be accepted, in what percentage of the working population, if any at all?), is usually, but not always, taken at the national policy-making level and followed by the adoption of occupational exposure limits and the promulgation of occupational health regulations and standards. This leads to the establishment of targets for control, usually at the workplace level by the occupational hygienist, who should have knowledge of the legal requirements. However, it may happen that decisions on acceptable risk have to be taken by the occupational hygienist at the workplace level—for example, in situations when standards are not available or do not cover all potential exposures.
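The comparison behind such exposure-limit targets can be sketched in a few lines of code. All figures below (the concentrations, durations and the limit itself) are hypothetical and not taken from any standard:

```python
# Illustrative sketch: comparing a measured full-shift exposure with a
# hypothetical occupational exposure limit (OEL). All numbers are invented.

def eight_hour_twa(samples):
    """Time-weighted average over an 8-hour shift.

    samples: list of (concentration_mg_m3, duration_hours) tuples.
    Unsampled time is treated here, conservatively or not depending on
    context, as zero exposure.
    """
    total = sum(c * t for c, t in samples)
    return total / 8.0

# Hypothetical personal-sampling results for one worker:
samples = [(1.2, 3.0), (0.4, 2.0), (2.0, 3.0)]   # mg/m3 over 8 h
twa = eight_hour_twa(samples)                    # (3.6 + 0.8 + 6.0) / 8 = 1.3
oel = 1.0                                        # hypothetical limit, mg/m3

print(f"8-h TWA = {twa:.2f} mg/m3; exceeds OEL: {twa > oel}")
```

In practice the hygienist would also consider short-term excursions and measurement uncertainty, not just the average against the limit.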
All these decisions and actions must be integrated into a realistic plan, which requires multidisciplinary and multisectorial coordination and collaboration. Although risk management involves pragmatic approaches, its efficiency should be scientifically evaluated. Unfortunately risk management actions are, in most cases, a compromise between what should be done to avoid any risk and the best which can be done in practice, in view of financial and other limitations.
Risk management concerning the work environment and the general environment should be well coordinated; not only are there overlapping areas, but, in most situations, the success of one is interlinked with the success of the other.
Occupational Hygiene Programmes and Services
Political will and decision making at the national level will, directly or indirectly, influence the establishment of occupational hygiene programmes or services, either at the governmental or private level. It is beyond the scope of this article to provide detailed models for all types of occupational hygiene programmes and services; however, there are general principles that are applicable to many situations and may contribute to their efficient implementation and operation.
A comprehensive occupational hygiene service should have the capability to carry out adequate preliminary surveys, sampling, measurements and analysis for hazard evaluation and for control purposes, and to recommend control measures, if not to design them.
Key elements of a comprehensive occupational hygiene programme or service are human and financial resources, facilities, equipment and information systems, well organized and coordinated through careful planning, under efficient management, and also involving quality assurance and continuous programme evaluation. Successful occupational hygiene programmes require a policy basis and commitment from top management. The procurement of financial resources is beyond the scope of this article.
Adequate human resources constitute the main asset of any programme and should be ensured as a priority. All staff should have clear job descriptions and responsibilities. If needed, provisions for training and education should be made. The basic requirements for occupational hygiene programmes include:
One important aspect is professional competence, which must not only be achieved but also maintained. Continuous education, in or outside the programme or service, should cover, for example, legislation updates, new advances and techniques, and gaps in knowledge. Participation in conferences, symposia and workshops also contributes to the maintenance of competence.
Health and safety for staff
Health and safety should be ensured for all staff in field surveys, laboratories and offices. Occupational hygienists may be exposed to serious hazards and should wear the required personal protective equipment. Depending on the type of work, immunization may be required. If rural work is involved, depending on the region, provisions such as antidote for snake bites should be made. Laboratory safety is a specialized field discussed elsewhere in this Encyclopaedia.
Occupational hazards in offices should not be overlooked—for example, work with visual display units and sources of indoor pollution such as laser printers, photocopying machines and air-conditioning systems. Ergonomic and psychosocial factors should also be considered.
Facilities include offices and meeting room(s), laboratories and equipment, information systems and a library. They should be well designed, accounting for future needs, as later moves and adaptations are usually more costly and time consuming.
Occupational hygiene laboratories and equipment
Occupational hygiene laboratories should, in principle, have the capability to carry out qualitative and quantitative assessment of exposure to airborne contaminants (chemicals and dusts), physical agents (noise, heat stress, radiation, illumination) and biological agents. In the case of most biological agents, qualitative assessments are enough to recommend controls, thus eliminating the need for the usually difficult quantitative evaluations.
Although some direct-reading instruments for airborne contaminants may have limitations for exposure assessment purposes, they are extremely useful for the recognition of hazards and the identification of their sources, the determination of concentration peaks, the gathering of data for control measures, and for checking on controls such as ventilation systems. In connection with the latter, instruments to check air velocity and static pressure are also needed.
One of the possible structures would comprise the following units:
Whenever selecting occupational hygiene equipment, in addition to performance characteristics, practical aspects have to be considered in view of the expected conditions of use—for example, available infrastructure, climate, location. These aspects include portability, required source of energy, calibration and maintenance requirements, and availability of the required expendable supplies.
Equipment should be purchased only if and when:
Calibration of all occupational hygiene measuring, sampling and analytical equipment should be an integral part of any procedure, and the required calibration equipment should be available.
Maintenance and repairs are essential to prevent equipment from staying idle for long periods of time, and should be ensured by manufacturers, either by direct assistance or by providing training of staff.
If a completely new programme is being developed, only basic equipment should be initially purchased, more items being added as the needs are established and operational capabilities ensured. However, even before equipment and laboratories are available and operational, much can be achieved by inspecting workplaces to qualitatively assess health hazards, and by recommending control measures for recognized hazards. Lack of capability to carry out quantitative exposure assessments should never justify inaction concerning obviously hazardous exposures. This is particularly true for situations where workplace hazards are uncontrolled and heavy exposures are common.
This includes a library (books, periodicals and other publications), databases (e.g., on CD-ROM) and communications.
Whenever possible, personal computers and CD-ROM readers should be provided, as well as connections to the Internet. There are ever-increasing possibilities for on-line networked public information servers (World Wide Web and Gopher sites), which provide access to a wealth of information sources relevant to workers’ health, fully justifying investment in computers and communications. Such systems should include e-mail, which opens new horizons for communications and discussions, either individually or in groups, thus facilitating and promoting the exchange of information throughout the world.
Timely and careful planning for the implementation, management and periodic evaluation of a programme is essential to ensure that the objectives and goals are achieved, while making the best use of the available resources.
Initially, the following information should be obtained and analysed:
The planning and organization processes include:
Operational costs should not be underestimated, since lack of resources may seriously hinder the continuity of a programme. Requirements which cannot be overlooked include:
Resources must be optimized through careful study of all elements which should be considered as integral parts of a comprehensive service. A well-balanced allocation of resources to the different units (field measurements, sampling, analytical laboratories, etc.) and all the components (facilities and equipment, personnel, operational aspects) is essential for a successful programme. Moreover, allocation of resources should allow for flexibility, because occupational hygiene services may have to undergo adaptations in order to respond to the real needs, which should be periodically assessed.
Communication, sharing and collaboration are key words for successful teamwork and enhanced individual capabilities. Effective mechanisms for communication, within and outside the programme, are needed to ensure the required multidisciplinary approach for the protection and promotion of workers’ health. There should be close interaction with other occupational health professionals, particularly occupational physicians and nurses, ergonomists and work psychologists, as well as safety professionals. At the workplace level, this should include workers, production personnel and managers.
The implementation of successful programmes is a gradual process. Therefore, at the planning stage, a realistic timetable should be prepared, according to well-established priorities and in view of the available resources.
Management involves decision-making as to the goals to be achieved and actions required to efficiently achieve these goals, with participation of all concerned, as well as foreseeing and avoiding, or recognizing and solving, the problems which may create obstacles to the completion of the required tasks. It should be kept in mind that scientific knowledge is no assurance of the managerial competence required to run an efficient programme.
The importance of implementing and enforcing correct procedures and quality assurance cannot be overemphasized, since there is much difference between work done and work well done. Moreover, the real objectives, not the intermediate steps, should serve as a yardstick; the efficiency of an occupational hygiene programme should be measured not by the number of surveys carried out, but rather by the number of surveys that led to actual action to protect workers’ health.
Good management should be able to distinguish between what is impressive and what is important; very detailed surveys involving sampling and analysis, yielding very accurate and precise results, may be very impressive, but what is really important are the decisions and actions that will be taken afterwards.
The concept of quality assurance, involving quality control and proficiency testing, refers primarily to activities which involve measurements. Although these concepts have been more often considered in connection with analytical laboratories, their scope has to be extended to also encompass sampling and measurements.
Whenever sampling and analysis are required, the complete procedure should be considered as one from the point of view of quality. Since no chain is stronger than its weakest link, it is a waste of resources to use, for the different steps of the same evaluation procedure, instruments and techniques of unequal levels of quality. The accuracy and precision of a very good analytical balance cannot compensate for a pump sampling at the wrong flow rate.
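The weakest-link point can be illustrated numerically. The estimated air concentration is the collected mass divided by the sampled air volume, C = m / (Q × t), so any relative error in the pump flow rate Q passes straight through to C, however accurate the balance. The numbers below are invented for illustration:

```python
# Illustrative sketch (hypothetical numbers): a 10% flow-rate error swamps
# a 0.1% weighing error, because C = m / (Q * t).

def concentration(mass_mg, flow_l_min, minutes):
    """Concentration in mg/m3 from collected mass and sampled air volume."""
    volume_m3 = flow_l_min * minutes / 1000.0   # litres -> m3
    return mass_mg / volume_m3

true_c = concentration(0.48, 2.0, 480)          # 0.48 mg in 0.96 m3 = 0.5 mg/m3

c_bad_flow = concentration(0.48, 2.2, 480)      # pump actually ran 10% fast
c_bad_mass = concentration(0.48048, 2.0, 480)   # balance off by 0.1%

print(true_c, c_bad_flow, c_bad_mass)           # flow error shifts C ~9%
```

The flow-rate error biases the result by roughly 9 per cent, while the weighing error shifts it by only 0.1 per cent, which is why calibration of the sampling train matters as much as the quality of the laboratory analysis.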
The performance of laboratories has to be checked so that the sources of errors can be identified and corrected. There is need for a systematic approach in order to keep the numerous details involved under control. It is important to establish quality assurance programmes for occupational hygiene laboratories, and this refers both to internal quality control and to external quality assessments (often called “proficiency testing”).
Concerning sampling, or measurements with direct-reading instruments (including for measurement of physical agents), quality involves adequate and correct:
Concerning the analytical laboratory, quality involves adequate and correct:
For both, it is indispensable to have:
Furthermore, it is essential to have a correct treatment of the obtained data and interpretation of results, as well as accurate reporting and record keeping.
Laboratory accreditation, defined by CEN (EN 45001) as “formal recognition that a testing laboratory is competent to carry out specific tests or specific types of tests”, is a very important control tool and should be promoted. It should cover both the sampling and the analytical procedures.
The concept of quality must be applied to all steps of occupational hygiene practice, from the recognition of hazards to the implementation of hazard prevention and control programmes. With this in mind, occupational hygiene programmes and services must be periodically and critically evaluated, aiming at continuous improvement.
Occupational hygiene is essential for the protection of workers’ health and the environment. Its practice involves many steps, which are interlinked and which have no meaning by themselves but must be integrated into a comprehensive approach.
Toxicology plays a major role in the development of regulations and other occupational health policies. In order to prevent occupational injury and illness, decisions are increasingly based upon information obtainable prior to, or in the absence of, the types of human exposures that would yield definitive information on risk, such as epidemiological studies. In addition, toxicological studies, as described in this chapter, can provide precise information on dose and response under the controlled conditions of laboratory research; this information is often difficult to obtain in the uncontrolled setting of occupational exposures. However, this information must be carefully evaluated in order to estimate the likelihood of adverse effects in humans, the nature of these adverse effects, and the quantitative relationship between exposures and effects.
Considerable attention has been given in many countries, since the 1980s, to developing objective methods for utilizing toxicological information in regulatory decision-making. Formal methods, frequently referred to as risk assessment, have been proposed and utilized in these countries by both governmental and non-governmental entities. Risk assessment has been variously defined; fundamentally it is an evaluative process that incorporates toxicology, epidemiology and exposure information to identify and estimate the probability of adverse effects associated with exposures to hazardous substances or conditions. Risk assessment may be qualitative in nature, indicating the nature of an adverse effect and a general estimate of likelihood, or it may be quantitative, with estimates of numbers of affected persons at specific levels of exposure. In many regulatory systems, risk assessment is undertaken in four stages: hazard identification, the description of the nature of the toxic effect; dose-response evaluation, a semi-quantitative or quantitative analysis of the relationship between exposure (or dose) and severity or likelihood of toxic effect; exposure assessment, the evaluation of information on the range of exposures likely to occur for populations in general or for subgroups within populations; and risk characterization, the compilation of all the above information into an expression of the magnitude of risk expected to occur under specified exposure conditions (see NRC 1983 for a statement of these principles).
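As a rough numerical illustration of the four stages, the sketch below applies a linear no-threshold dose-response model, an assumption many regulatory systems make for carcinogens. The agent, slope factor, exposures and population sizes are all invented:

```python
# Minimal sketch of the four risk-assessment stages for a hypothetical
# carcinogen, assuming a linear no-threshold dose-response model.
# Every value here is invented for illustration.

# 1. Hazard identification: the agent is treated as a human carcinogen.
agent = "hypothetical solvent X"

# 2. Dose-response evaluation: slope factor = excess lifetime risk per
#    unit of lifetime-average exposure (here per mg/m3).
slope_factor = 2e-3            # (mg/m3)^-1, hypothetical

# 3. Exposure assessment: lifetime-average exposures for two subgroups.
exposures = {"maintenance crew": 0.5, "office staff": 0.01}   # mg/m3

# 4. Risk characterization: excess risk and expected cases per subgroup.
population = {"maintenance crew": 200, "office staff": 1000}
for group, dose in exposures.items():
    risk = slope_factor * dose
    cases = risk * population[group]
    print(f"{group}: excess lifetime risk {risk:.1e}, ~{cases:.2f} expected cases")
```

Qualitative risk assessment stops after the first stage or two; the quantitative expression in stage four is what feeds decisions on acceptable risk and exposure limits.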
In this section, three approaches to risk assessment are presented as illustrative. It is impossible to provide a comprehensive compendium of risk assessment methods used throughout the world, and these selections should not be taken as prescriptive. It should be noted that there are trends towards harmonization of risk assessment methods, partly in response to provisions in the recent GATT accords. Two processes of international harmonization of risk assessment methods are currently underway, through the International Programme on Chemical Safety (IPCS) and the Organization for Economic Cooperation and Development (OECD). These organizations also maintain current information on national approaches to risk assessment.
The WHO (World Health Organization) introduced in 1980 a classification of functional limitation in people: the ICIDH (International Classification of Impairments, Disabilities and Handicaps). In this classification a distinction is made between illness, impairment, disability and handicap.
This reference model was created to facilitate international communication. It offers a reference framework both for policy makers and for doctors diagnosing people suffering from the consequences of illness.
Why this reference framework? It arose from the aim of improving and increasing the participation of people with long-term limited abilities. Two aims are mentioned:
As of 1 January 1994 the classification is official. The activities that have followed are widespread and especially concerned with issues such as information and educational measures for specific groups, regulations for the protection of workers, and demands that companies employ, for example, at least 5 per cent of workers with a disability. In the long term the classification should lead to integration and non-discrimination.
Illness strikes each of us. Certain illnesses can be prevented, others not. Certain illnesses can be cured, others not. Where possible illness should be prevented and if possible cured.
Impairment means any absence or abnormality of a psychological, physiological or anatomical structure or function.
Being born with three fingers instead of five does not have to lead to disability. The capabilities of the individual, and the degree of manipulation possible with the three fingers, will determine whether or not the person is disabled. When, however, a fair amount of signal processing is not possible at the central level in the brain, the impairment will certainly lead to disability, as at present there is no method to “cure” (solve) this problem for the patient.
Disability describes the functional level of an individual having difficulty in task performance, e.g., difficulty standing up from a chair. These difficulties are of course related to the impairment, but also to the circumstances surrounding it. A person who uses a wheelchair and lives in a flat country like the Netherlands has more possibilities for self-transportation than the same person living in a mountainous area like Tibet.
When the problems are considered at the handicap level, it can be determined in which fields the main problems lie, e.g., immobility or physical dependency. These can affect work performance; for example, the person may not be able to get to work, or, once at work, might need assistance with personal hygiene, etc.
A handicap shows the negative consequences of disability and can only be solved by taking the negative consequences away.
Summary and conclusions
The above-mentioned classification and the policies based on it offer a well-defined, internationally workable framework. Any discussion on designing for specific groups will need such a framework in order to define our activities and try to implement these thoughts in design.
Healthy individuals regularly sleep for several hours every day. Normally they sleep during the night hours. They find it most difficult to remain awake during the hours between midnight and early morning, when they normally sleep. If an individual has to remain awake during these hours either totally or partially, the individual comes to a state of forced sleep loss, or sleep deprivation, that is usually perceived as tiredness. A need for sleep, with fluctuating degrees of sleepiness, is felt which continues until sufficient sleep is taken. This is the reason why periods of sleep deprivation are often said to cause a person to incur sleep deficit or sleep debt.
Sleep deprivation presents a particular problem for workers who cannot take sufficient sleep periods because of work schedules (e.g., working at night) or, for that matter, prolonged free-time activities. A worker on a night shift remains sleep-deprived until the opportunity for a sleep period becomes available at the end of the shift. Since sleep taken during daytime hours is usually shorter than needed, the worker cannot recover from the condition of sleep loss sufficiently until a long sleep period, most likely a night sleep, is taken. Until then, the person accumulates a sleep deficit. (A similar condition, jet lag, arises after travelling between time zones that differ by a few hours or more. The traveller tends to be sleep-deprived because the activity periods in the new time zone correspond more closely to the normal sleep period in the originating place.) During the periods of sleep loss, workers feel tired and their performance is affected in various ways. Thus various degrees of sleep deprivation are incorporated into the daily life of workers having to work irregular hours, and it is important to take measures to cope with the unfavourable effects of such sleep deficit. The main conditions of irregular working hours that contribute to sleep deprivation are shown in table 1.
Table 1. Main conditions of irregular working hours which contribute to sleep deprivation of various degrees
Irregular working hours: conditions leading to sleep deprivation
Night duty: no or shortened night-time sleep
Early morning or late evening duty: shortened sleep, disrupted sleep
Long hours of work or working two shifts together: phase displacement of sleep
Straight night or early morning shifts: consecutive phase displacement of sleep
Short between-shift period: short and disrupted sleep
Long interval between days off: accumulation of sleep shortages
Work in a different time zone: no or shortened sleep during the “night” hours in the originating place (jet lag)
Unbalanced free time periods: phase displacement of sleep, short sleep
In extreme conditions, sleep deprivation may last for more than a day. Then sleepiness and performance changes increase as the period of sleep deprivation is prolonged. Workers, however, normally take some form of sleep before sleep deprivation becomes too protracted. If the sleep thus taken is not sufficient, the effects of sleep shortage still continue. Thus, it is important to know not only the effects of sleep deprivation in various forms but also the ways in which workers can recover from it.
The complex nature of sleep deprivation is shown by figure 1, which depicts data from laboratory studies on the effects of two days of sleep deprivation (Fröberg 1985). The data show three basic changes resulting from prolonged sleep deprivation:
The fact that the effects of sleep deprivation are correlated with physiological circadian rhythms helps us to understand its complex nature (Folkard and Akerstedt 1992). These effects should be viewed as a result of a phase shift of the sleep-wakefulness cycle in one’s daily life.
The effects of continuous work or sleep deprivation thus include not only a reduction in alertness but decreased performance capabilities, increased probability of falling asleep, lowered well-being and morale and impaired safety. When such periods of sleep deprivation are repeated, as in the case of shift workers, their health may be affected (Rutenfranz 1982; Koller 1983; Costa et al. 1990). An important aim of research is thus to determine to what extent sleep deprivation damages the well-being of individuals and how we can best use the recovery function of sleep in reducing such effects.
Effects of Sleep Deprivation
During and after a night of sleep deprivation, the physiological circadian rhythms of the human body largely persist. For example, the body temperature curve during the first day’s work among night-shift workers tends to keep its basic circadian pattern: the temperature declines during the night towards the early morning hours, rises during the subsequent daytime and falls again after an afternoon peak. The physiological rhythms are known to get “adjusted” to the reversed sleep-wakefulness cycles of night-shift workers only gradually, in the course of several days of repeated night shifts. This means that the effects on performance and sleepiness are more marked during night hours than in the daytime. The effects of sleep deprivation are therefore variably associated with the original circadian rhythms seen in physiological and psychological functions.
The effects of sleep deprivation on performance depend on the type of task to be performed. Different characteristics of the task influence the effects (Fröberg 1985; Folkard and Monk 1985; Folkard and Akerstedt 1992). Generally, a complex task is more vulnerable than a simpler one. Performance of a task involving an increasing number of digits or more complex coding deteriorates more during three days of sleep loss (Fröberg 1985; Wilkinson 1964). Paced tasks that need to be responded to within a certain interval deteriorate more than self-paced tasks. Practical examples of vulnerable tasks include serial reactions to defined stimulations, simple sorting operations, the recording of coded messages, copy typing, display monitoring and continuous inspection. Effects of sleep deprivation on strenuous physical performance are also known. Typical effects of prolonged sleep deprivation on performance (on a visual task) are shown in figure 2 (Dinges 1992). The effects are more pronounced after two nights of sleep loss (40-56 hours) than after one night of sleep loss (16-40 hours).
Figure 2. Regression lines fit to response speed (the reciprocal of response times) on a 10-minute simple, unprepared visual task administered repeatedly to healthy young adults during no sleep loss (5-16 hours), one night of sleep loss (16-40 hours) and two nights of sleep loss (40-56 hours)
The degree to which the performance of tasks is affected also appears to depend on how it is influenced by the “masking” components of the circadian rhythms. For example, some measures of performance, such as five-target memory search tasks, are found to adjust to night work considerably more quickly than serial reaction time tasks, and hence they may be relatively unimpaired on rapidly rotating shift systems (Folkard et al. 1993). Such differences in the effects of endogenous physiological body clock rhythms and their masking components must be taken into account in considering the safety and accuracy of performance under the influence of sleep deprivation.
One particular effect of sleep deprivation on performance efficiency is the appearance of frequent “lapses” or periods of no response (Wilkinson 1964; Empson 1993). These performance lapses are short periods of lowered alertness or light sleep. This can be traced in records of videotaped performance, eye movements or electroencephalograms (EEGs). A prolonged task (one-half hour or more), especially when the task is replicated, can more easily lead to such lapses. Monotonous tasks such as repetitions of simple reactions or monitoring of infrequent signals are very sensitive in this regard. On the other hand, a novel task is less affected. Performance in changing work situations is also resistant.
Although there is evidence of a gradual decrease in arousal during sleep deprivation, performance levels between lapses remain relatively unaffected. This explains why the results of some performance tests show little influence of sleep loss when the tests are completed in a short period of time: in a simple reaction time task, lapses produce very long response times, whereas the rest of the measured times remain unchanged. Caution is thus needed in interpreting test results concerning sleep loss effects in actual situations.
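The masking of lapse effects in summary statistics can be simulated. In the invented example below, a small fraction of lapse trials inflates the mean reaction time while leaving the median, which reflects the typical between-lapse response, nearly unchanged:

```python
# Simulation with invented parameters: occasional "lapses" (long periods of
# no response) inflate the mean reaction time, while the median stays close
# to the alert baseline.
import random
import statistics

random.seed(1)

def session(n_trials, lapse_prob):
    """Simulated simple reaction times in ms; a lapse adds a long pause."""
    times = []
    for _ in range(n_trials):
        rt = random.gauss(250, 30)            # normal alert response
        if random.random() < lapse_prob:
            rt += random.uniform(2000, 8000)  # lapse: seconds of no response
        times.append(rt)
    return times

rested = session(300, 0.0)
deprived = session(300, 0.05)                 # ~5% of trials hit a lapse

print("median:", statistics.median(rested), statistics.median(deprived))
print("mean:  ", statistics.mean(rested), statistics.mean(deprived))
```

This is why a short test, or one summarized only by central tendency, can understate sleep-loss effects that matter greatly for safety in real work.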
Changes in sleepiness during sleep deprivation relate to physiological circadian rhythms as well as to such lapse periods. Sleepiness increases sharply during the first period of night-shift work, but decreases during the subsequent daytime hours. If sleep deprivation continues into a second night, sleepiness becomes very pronounced during the night hours (Costa et al. 1990; Matsumoto and Harada 1994). There are moments when the need for sleep feels almost irresistible; these moments correspond to the appearance of lapses, as well as to interruptions in cerebral function evident in EEG records. After a while, sleepiness is felt to be reduced, but another period of lapse effects follows. If workers are questioned about various fatigue feelings, however, they usually report increasing fatigue and general tiredness persisting throughout the sleep deprivation period, including the intervals between lapses. A slight recovery of subjective fatigue is seen during the daytime following a night of sleep deprivation, but fatigue feelings are markedly greater in the second and subsequent nights of continued sleep deprivation.
During sleep deprivation, sleep pressure from the interaction of prior wakefulness and circadian phase may always be present to some degree, but the lability of state in sleepy subjects is also modulated by context effects (Dinges 1992). Sleepiness is influenced by the amount and type of stimulation, the interest afforded by the environment and the meaning of the stimulation to the subject. Monotonous stimulation or that requiring sustained attention can more easily lead to vigilance decrement and lapses. The greater the physiological sleepiness due to sleep loss, the more the subject is vulnerable to environmental monotony. Motivation and incentive can help override this environmental effect, but only for a limited period.
Effects of Partial Sleep Deprivation and Accumulated Sleep Shortages
If a subject works continuously for a whole night without sleep, many performance functions will have deteriorated markedly. If the subject takes on a second night shift without getting any sleep, the performance decline is far more pronounced. After the third or fourth night of total sleep deprivation, very few people can stay awake and perform tasks, even if highly motivated. In actual life, however, such conditions of total sleep loss rarely occur; usually people take some sleep between successive night shifts. But reports from various countries show that sleep taken during the daytime is almost always insufficient to recover from the sleep debt incurred by night work (Knauth and Rutenfranz 1981; Kogi 1981; ILO 1990). As a result, sleep shortages accumulate as shift workers repeat night shifts. Similar shortages result when sleep periods are curtailed by the need to follow shift schedules. Even if night sleep can be taken, restricting sleep by as little as two hours each night is known to leave most persons with an insufficient amount of sleep, and such sleep reduction can lead to impaired performance and alertness (Monk 1991).
Examples of conditions in shift systems which contribute to accumulation of sleep shortages, or partial sleep deprivation, are given in table 1. In addition to continued night work for two or more days, short between-shift periods, repetition of an early start of morning shifts, frequent night shifts and inappropriate holiday allotment accelerate the accumulation of sleep shortages.
The poor quality of daytime sleep or shortened sleep is important, too. Daytime sleep is accompanied by an increased frequency of awakenings, less deep and slow-wave sleep and a distribution of REM sleep different from that of normal night-time sleep (Torsvall, Akerstedt and Gillberg 1981; Folkard and Monk 1985; Empson 1993). Thus a daytime sleep may not be as sound as a night sleep even in a favourable environment.
This difficulty of taking good quality sleep due to different timing of sleep in a shift system is illustrated by figure 3 which shows the duration of sleep as a function of the time of sleep onset for German and Japanese workers based on diary records (Knauth and Rutenfranz 1981; Kogi 1985). Due to circadian influence, daytime sleep is forced to be short. Many workers may have split sleep during the daytime and often add some sleep in the evening where possible.
In real-life settings, shift workers take a variety of measures to cope with such accumulation of sleep shortages (Wedderburn 1991). For example, many of them try to sleep in advance of a night shift or to have a long sleep after it. Although such efforts are by no means entirely effective at offsetting the sleep deficit, they are made quite deliberately. Social and cultural activities may also be restricted as coping measures; outgoing free-time activities, for example, are undertaken less frequently between two night shifts. Sleep timing and duration, as well as the actual accumulation of sleep deficit, thus depend on both job-related and social circumstances.
Recovery from Sleep Deprivation and Health Measures
The only effective means of recovering from sleep deprivation is to sleep. This restorative effect of sleep is well known (Kogi 1982). As recovery by sleep may differ according to its timing and duration (Costa et al. 1990), it is essential to know when and for how long people should sleep. In normal daily life, a full night’s sleep is always the best way to accelerate recovery from sleep deficit; when that is not possible, people usually try to minimize the deficit by taking sleep on other occasions as replacements for the normal night sleep of which they have been deprived. Aspects of such replacement sleeps are shown in table 2.
Table 2. Aspects of advance, anchor and retard sleeps taken as replacement of normal night sleep
|Advance sleep (before a night shift)||Retard sleep (after a night shift)|
|Short by definition||Usually short|
|Longer sleep latency||Shorter sleep latency|
To offset night sleep deficit, the usual effort made is to take daytime sleep in “advance” and “retard” phases (i.e., before and after night-shift work). Such a sleep coincides with the circadian activity phase. Thus the sleep is characterized by longer latency, shortened slow-wave sleep, disrupted REM sleep and disturbances of one’s social life. Social and environmental factors are important in determining the recuperative effect of a sleep. That a complete conversion of circadian rhythms is impossible for a shift worker in a real-life situation should be borne in mind in considering the effectiveness of the recovery functions of sleep.
In this respect, interesting features of a short “anchor sleep” have been reported (Minors and Waterhouse 1981; Kogi 1982; Matsumoto and Harada 1994). When part of the customary daily sleep is taken during the normal night sleep period and the rest at irregular times, the circadian rhythms of rectal temperature and urinary secretion of several electrolytes can retain a 24-hour period. This means that a short night-time sleep taken during the night sleep period can help preserve the original circadian rhythms in subsequent periods.
We may assume that sleeps taken at different periods of the day could have complementary effects, in view of their different recovery functions. An interesting approach for night-shift workers is the use of a night-time nap, which usually lasts up to a few hours. Surveys show that this short sleep taken during a night shift is common among some groups of workers. This anchor-sleep type of sleep is effective in reducing night work fatigue (Kogi 1982) and may reduce the need for recovery sleep. Figure 4 compares the subjective feelings of fatigue during two consecutive night shifts and the off-duty recovery period between a nap-taking group and a non-nap group (Matsumoto and Harada 1994). The positive effects of a night-time nap in reducing fatigue were obvious, and they continued for a large part of the recovery period following night work. No significant difference was found when the length of the day sleep of the non-nap group was compared with the total sleeping time (night-time nap plus subsequent day sleep) of the nap group. A night-time nap therefore enables part of the essential sleep to be taken in advance of the day sleep that follows night work. It can thus be suggested that naps taken during night work can, to a certain extent, aid recovery from the fatigue caused by that work and the accompanying sleep deprivation (Sakai et al. 1984; Saito and Matsumoto 1988).
It must be admitted, however, that it is not possible to work out optimal strategies that each worker suffering from sleep deficit can apply. This is demonstrated in the development of international labour standards for night work that recommend a set of measures for workers doing frequent night work (Kogi and Thurman 1993). The varied nature of these measures and the trend towards increasing flexibility in shift systems clearly reflect an effort to develop flexible sleep strategies (Kogi 1991). Age, physical fitness, sleep habits and other individual differences in tolerance may play important roles (Folkard and Monk 1985; Costa et al. 1990; Härmä 1993). Increasing flexibility in work schedules in combination with better job design is useful in this regard (Kogi 1991).
Sleep strategies against sleep deprivation should depend on the type of working life and be flexible enough to meet individual situations (Knauth, Rohmert and Rutenfranz 1979; Rutenfranz, Knauth and Angersbach 1981; Wedderburn 1991; Monk 1991). A general conclusion is that night sleep deprivation should be minimized by selecting appropriate work schedules, and that recovery should be facilitated by encouraging individually suitable sleeps, including replacement sleeps and sound night-time sleep in the early period after sleep deprivation. It is important to prevent the accumulation of sleep deficit. The period of night work that deprives workers of sleep in the normal night sleep period should be as short as possible, and between-shift intervals should be long enough to allow sleep of sufficient length. A better sleep environment and measures to meet social needs are also useful. Social support is thus essential in designing working time arrangements, job design and individual coping strategies that promote the health of workers faced with frequent sleep deficit.
A workplace hazard can be defined as any condition that may adversely affect the well-being or health of exposed persons. Recognition of hazards in any occupational activity involves characterization of the workplace by identifying hazardous agents and the groups of workers potentially exposed to them. The hazards might be of chemical, biological or physical origin (see table 1). Some hazards in the work environment are easy to recognize—for example, irritants, which have an immediate irritating effect after skin exposure or inhalation. Others are not so easy to recognize—for example, chemicals which are accidentally formed and have no warning properties. Some agents, like metals (e.g., lead, mercury, cadmium, manganese), which may cause injury only after several years of exposure, might be easy to identify if one is aware of the risk. A toxic agent may not constitute a hazard at low concentrations or if no one is exposed. Basic to the recognition of hazards are identification of possible agents at the workplace, knowledge about the health risks of these agents and awareness of possible exposure situations.
Table 1. Hazards of chemical, biological and physical agents.
Type of hazard
Chemicals enter the body principally through inhalation, skin absorption or ingestion. The toxic effect might be acute, chronic or both.
Corrosive chemicals actually cause tissue destruction at the site of contact. Skin, eyes and digestive system are the most commonly affected parts of the body.
Concentrated acids and alkalis, phosphorus
Irritants cause inflammation of tissues where they are deposited. Skin irritants may cause reactions like eczema or dermatitis. Severe respiratory irritants might cause shortness of breath, inflammatory responses and oedema.
Skin: acids, alkalis, solvents, oils. Respiratory: aldehydes, alkaline dusts, ammonia, nitrogen dioxide, phosgene, chlorine, bromine, ozone
Chemical allergens or sensitizers can cause skin or respiratory allergic reactions.
Skin: colophony (rosin), formaldehyde, metals like chromium or nickel, some organic dyes, epoxy hardeners, turpentine
Respiratory: isocyanates, fibre-reactive dyes, formaldehyde, many tropical wood dusts, nickel
Asphyxiants exert their effects by interfering with the oxygenation of the tissues. Simple asphyxiants are inert gases that dilute the available atmospheric oxygen below the level required to support life. Oxygen-deficient atmospheres may occur in tanks, holds of ships, silos or mines; the oxygen concentration in air should never be below 19.5% by volume. Chemical asphyxiants prevent oxygen transport and the normal oxygenation of the blood, or prevent the normal oxygenation of the tissues.
Simple asphyxiants: methane, ethane, hydrogen, helium
Chemical asphyxiants: carbon monoxide, nitrobenzene, hydrogen cyanide, hydrogen sulphide
Known human carcinogens are chemicals that have been clearly demonstrated to cause cancer in humans. Probable human carcinogens are chemicals that have been clearly demonstrated to cause cancer in animals, or for which the evidence in humans is not definite. Soot and coal tars were the first chemicals suspected of causing cancer.
Known: benzene (leukaemia); vinyl chloride (liver angiosarcoma); 2-naphthylamine, benzidine (bladder cancer); asbestos (lung cancer, mesothelioma); hardwood dust (nasal or nasal sinus adenocarcinoma). Probable: formaldehyde, carbon tetrachloride, dichromates, beryllium
Reproductive toxicants interfere with reproductive or sexual functioning of an individual.
Manganese, carbon disulphide, monomethyl and ethyl ethers of ethylene glycol, mercury
Developmental toxicants are agents that may cause an adverse effect in offspring of exposed persons; for example, birth defects. Embryotoxic or foetotoxic chemicals can cause spontaneous abortions or miscarriages.
Organic mercury compounds, carbon monoxide, lead, thalidomide, solvents
Systemic poisons are agents that cause injury to particular organs or body systems.
Brain: solvents, lead, mercury, manganese
Peripheral nervous system: n-hexane, lead, arsenic, carbon disulphide
Blood-forming system: benzene, ethylene glycol ethers
Kidneys: cadmium, lead, mercury, chlorinated hydrocarbons
Lungs: silica, asbestos, coal dust (pneumoconiosis)
Biological hazards can be defined as organic dusts originating from different sources of biological origin, such as viruses, bacteria, fungi, proteins from animals or substances from plants (for example, degradation products of natural fibres). The aetiological agent might be derived from a viable organism, from contaminants, or constitute a specific component of the dust. Biological hazards are grouped into infectious and non-infectious agents; non-infectious hazards can be further divided into viable organisms, biogenic toxins and biogenic allergens.
Occupational diseases from infectious agents are relatively uncommon. Workers at risk include employees at hospitals, laboratory workers, farmers, slaughterhouse workers, veterinarians, zoo keepers and cooks. Susceptibility is very variable (e.g., persons treated with immunodepressing drugs will have a high sensitivity).
Hepatitis B, tuberculosis, anthrax, brucella, tetanus, chlamydia psittaci, salmonella
Viable organisms and biogenic toxins
Viable organisms include bacteria, fungi and their spores; biogenic toxins include endotoxins and mycotoxins such as aflatoxin. The products of bacterial and fungal metabolism are complex and numerous, and are affected by temperature, humidity and the kind of substrate on which the organisms grow. Chemically they might consist of proteins, lipoproteins or mucopolysaccharides. Examples are Gram-positive and Gram-negative bacteria and moulds. Workers at risk include cotton mill workers, hemp and flax workers, sewage and sludge treatment workers and grain silo workers.
Byssinosis, “grain fever”, Legionnaires’ disease
Biogenic allergens include fungi, animal-derived proteins, terpenes, storage mites and enzymes. A considerable part of the biogenic allergens in agriculture comes from proteins from animal skin, hair from furs and protein from the faecal material and urine. Allergens might be found in many industrial environments, such as fermentation processes, drug production, bakeries, paper production, wood processing (saw mills, production, manufacturing) as well as in bio-technology (enzyme and vaccine production, tissue culture) and spice production. In sensitized persons, exposure to the allergic agents may induce allergic symptoms such as allergic rhinitis, conjunctivitis or asthma. Allergic alveolitis is characterized by acute respiratory symptoms like cough, chills, fever, headache and pain in the muscles, which might lead to chronic lung fibrosis.
Occupational asthma: wool, furs, wheat grain, flour, red cedar, garlic powder
Allergic alveolitis: farmer’s disease, bagassosis, “bird fancier’s disease”, humidifier fever, sequoiosis
Noise is any unwanted sound that may adversely affect the health and well-being of individuals or populations. Aspects of noise hazards include the total energy of the sound, its frequency distribution, the duration of exposure and whether the noise is impulsive. Hearing acuity is generally affected first, with a loss or dip at 4000 Hz followed by losses in the frequency range from 2000 to 6000 Hz. Noise may cause acute effects such as communication problems, decreased concentration and sleepiness and, as a consequence, interference with job performance. Exposure to high levels of noise (usually above 85 dBA) or to impulsive noise (about 140 dBC) over a significant period of time may cause both temporary and chronic hearing loss; permanent hearing loss is the most common occupational disease in compensation claims.
Foundries, woodworking, textile mills, metalworking
Vibration has several parameters in common with noise: frequency, amplitude, duration of exposure and whether it is continuous or intermittent. The method of operation and the skilfulness of the operator seem to play an important role in the development of harmful effects of vibration. Manual work using powered tools is associated with symptoms of peripheral circulatory disturbance known as “Raynaud’s phenomenon” or “vibration-induced white fingers” (VWF). Vibrating tools may also affect the peripheral nervous system and the musculoskeletal system, with reduced grip strength, low back pain and degenerative back disorders.
Contract machines, mining loaders, fork-lift trucks, pneumatic tools, chain saws
The most important chronic effect of ionizing radiation is cancer, including leukaemia. Overexposure to comparatively low levels of radiation has been associated with dermatitis of the hand and effects on the haematological system. Processes or activities that might give excessive exposure to ionizing radiation are highly restricted and regulated.
Nuclear reactors, medical and dental x-ray tubes, particle accelerators, radioisotopes
Non-ionizing radiation comprises ultraviolet radiation, visible radiation, infrared radiation, lasers, electromagnetic fields (microwaves and radio frequency) and extremely low frequency fields. Infrared radiation might cause cataracts, and high-powered lasers may damage the eyes and skin. There is increasing concern about exposure to low levels of electromagnetic fields as a cause of cancer and as a potential cause of adverse reproductive outcomes among women, especially from exposure to video display units (VDUs). The question of a causal link to cancer is not yet answered, but recent reviews of available scientific knowledge generally conclude that there is no association between the use of VDUs and adverse reproductive outcome.
Ultraviolet radiation: arc welding and cutting; UV curing of inks, glues, paints, etc.; disinfection; product control
Infrared radiation: furnaces, glassblowing
Lasers: communications, surgery, construction
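Because the hazard posed by noise depends on the total sound energy, as noted in the table above, decibel levels from several sources combine on an energy (logarithmic) basis rather than arithmetically. A minimal illustrative sketch of this standard calculation:

```python
import math

def combine_levels(levels_db):
    """Combine sound pressure levels (dB) on an energy basis:
    L_total = 10 * log10(sum(10**(L/10)))."""
    return 10 * math.log10(sum(10 ** (lv / 10) for lv in levels_db))

# Two machines each producing 85 dB(A) together give about 88 dB(A)
# (a 3 dB increase), not 170 dB(A).
print(round(combine_levels([85, 85]), 1))  # -> 88.0
```

Note how a single dominant source controls the total: adding an 80 dB source to a 90 dB source raises the combined level by only about 0.4 dB.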
Identification and Classification of Hazards
Before any occupational hygiene investigation is performed the purpose must be clearly defined. The purpose of an occupational hygiene investigation might be to identify possible hazards, to evaluate existing risks at the workplace, to prove compliance with regulatory requirements, to evaluate control measures or to assess exposure with regard to an epidemiological survey. This article is restricted to programmes aimed at identification and classification of hazards at the workplace. Many models or techniques have been developed to identify and evaluate hazards in the working environment. They differ in complexity, from simple checklists, preliminary industrial hygiene surveys, job-exposure matrices and hazard and operability studies to job exposure profiles and work surveillance programmes (Renes 1978; Gressel and Gideon 1991; Holzner, Hirsh and Perper 1993; Goldberg et al. 1993; Bouyer and Hémon 1993; Panett, Coggon and Acheson 1985; Tait 1992). No single technique is a clear choice for everyone, but all techniques have parts which are useful in any investigation. The usefulness of the models also depends on the purpose of the investigation, size of workplace, type of production and activity as well as complexity of operations.
Identification and classification of hazards can be divided into three basic elements: workplace characterization, exposure pattern and hazard evaluation.
A workplace might have from a few employees up to several thousands and have different activities (e.g., production plants, construction sites, office buildings, hospitals or farms). At a workplace different activities can be localized to special areas such as departments or sections. In an industrial process, different stages and operations can be identified as production is followed from raw materials to finished products.
Detailed information should be obtained about processes, operations or other activities of interest, to identify agents utilized, including raw materials, materials handled or added in the process, primary products, intermediates, final products, reaction products and by-products. Additives and catalysts in a process might also be of interest to identify. Raw material or added material which has been identified only by trade name must be evaluated by chemical composition. Information or safety data sheets should be available from manufacturer or supplier.
Some stages in a process might take place in a closed system without anyone exposed, except during maintenance work or process failure. These events should be recognized and precautions taken to prevent exposure to hazardous agents. Other processes take place in open systems, which are provided with or without local exhaust ventilation. A general description of the ventilation system should be provided, including local exhaust system.
When possible, hazards should be identified in the planning or design of new plants or processes, when changes can be made at an early stage and hazards might be anticipated and avoided. Conditions and procedures that may deviate from the intended design must be identified and evaluated in the process state. Recognition of hazards should also include emissions to the external environment and waste materials. Facility locations, operations, emission sources and agents should be grouped together in a systematic way to form recognizable units in the further analysis of potential exposure. In each unit, operations and agents should be grouped according to health effects of the agents and estimation of emitted amounts to the work environment.
The main exposure routes for chemical and biological agents are inhalation and dermal uptake or incidentally by ingestion. The exposure pattern depends on frequency of contact with the hazards, intensity of exposure and time of exposure. Working tasks have to be systematically examined. It is important not only to study work manuals but to look at what actually happens at the workplace. Workers might be directly exposed as a result of actually performing tasks, or be indirectly exposed because they are located in the same general area or location as the source of exposure. It might be necessary to start by focusing on working tasks with high potential to cause harm even if the exposure is of short duration. Non-routine and intermittent operations (e.g., maintenance, cleaning and changes in production cycles) have to be considered. Working tasks and situations might also vary throughout the year.
Within the same job title exposure or uptake might differ because some workers wear protective equipment and others do not. In large plants, recognition of hazards or a qualitative hazard evaluation very seldom can be performed for every single worker. Therefore workers with similar working tasks have to be classified in the same exposure group. Differences in working tasks, work techniques and work time will result in considerably different exposure and have to be considered. Persons working outdoors and those working without local exhaust ventilation have been shown to have a larger day-to-day variability than groups working indoors with local exhaust ventilation (Kromhout, Symanski and Rappaport 1993). Work processes, agents applied for that process/job or different tasks within a job title might be used, instead of the job title, to characterize groups with similar exposure. Within the groups, workers potentially exposed must be identified and classified according to hazardous agents, routes of exposure, health effects of the agents, frequency of contact with the hazards, intensity and time of exposure. Different exposure groups should be ranked according to hazardous agents and estimated exposure in order to determine workers at greatest risk.
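The grouping-and-ranking step described above can be sketched in code. This is a hypothetical illustration only: the group names, agents, the 1–4 severity and exposure scales, and the multiplicative score are assumptions made for the example, not part of any cited method.

```python
from dataclasses import dataclass

@dataclass
class ExposureGroup:
    """A 'similar exposure group': workers pooled by task, agent and controls."""
    name: str
    agent: str
    severity: int  # 1 = slight health effects ... 4 = severe (assumed scale)
    exposure: int  # 1 = low estimated exposure ... 4 = high (assumed scale)

    @property
    def rank_score(self) -> int:
        # Simple multiplicative risk-ranking heuristic (one common choice).
        return self.severity * self.exposure

# Hypothetical groups; LEV = local exhaust ventilation.
groups = [
    ExposureGroup("degreasing, no LEV", "trichloroethylene", severity=3, exposure=4),
    ExposureGroup("paint mixing, with LEV", "xylene", severity=2, exposure=2),
    ExposureGroup("packing, sealed line", "xylene", severity=2, exposure=1),
]

# Rank groups so that those at greatest risk get highest priority.
ranked = sorted(groups, key=lambda g: g.rank_score, reverse=True)
for g in ranked:
    print(f"{g.rank_score:2d}  {g.name} ({g.agent})")
```

In practice the scores would be assigned from toxicological data and measured or estimated exposures, and the top-ranked groups would be the first candidates for monitoring and control.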
Qualitative hazard evaluation
Possible health effects of chemical, biological and physical agents present at the workplace should be based on an evaluation of available epidemiological, toxicological, clinical and environmental research. Up-to-date information about health hazards for products or agents used at the workplace should be obtained from health and safety journals, databases on toxicity and health effects, and relevant scientific and technical literature.
Material Safety Data Sheets (MSDSs) should if necessary be updated. Data Sheets document percentages of hazardous ingredients together with the Chemical Abstracts Service chemical identifier, the CAS-number, and threshold limit value (TLV), if any. They also contain information about health hazards, protective equipment, preventive actions, manufacturer or supplier, and so on. Sometimes the ingredients reported are rather rudimentary and have to be supplemented with more detailed information.
Monitored data and records of measurements should be studied. Agents with TLVs provide general guidance in deciding whether the situation is acceptable or not, although there must be allowance for possible interactions when workers are exposed to several chemicals. Within and between different exposure groups, workers should be ranked according to health effects of agents present and estimated exposure (e.g., from slight health effects and low exposure to severe health effects and estimated high exposure). Those with the highest ranks deserve highest priority. Before any prevention activities start it might be necessary to perform an exposure monitoring programme. All results should be documented and easily attainable. A working scheme is illustrated in figure 1.
Figure 1. Elements of risk assessment
In occupational hygiene investigations the hazards to the outdoor environment (e.g., pollution and greenhouse effects as well as effects on the ozone layer) might also be considered.
Chemical, Biological and Physical Agents
Hazards might be of chemical, biological or physical origin. In this section and in table 1 a brief description of the various hazards will be given together with examples of environments or activities where they will be found (Casarett 1980; International Congress on Occupational Health 1985; Jacobs 1992; Leidel, Busch and Lynch 1977; Olishifski 1988; Rylander 1994). More detailed information will be found elsewhere in this Encyclopaedia.
Chemicals can be grouped into gases, vapours, liquids and aerosols (dusts, fumes, mists).
Gases are substances that can be changed to liquid or solid state only by the combined effects of increased pressure and decreased temperature. Handling gases always implies risk of exposure unless they are processed in closed systems. Gases in containers or distribution pipes might accidentally leak. In processes with high temperatures (e.g., welding operations and exhaust from engines) gases will be formed.
Vapours are the gaseous form of substances that are normally in the liquid or solid state at room temperature and normal pressure. When a liquid evaporates, it changes to a gas and mixes with the surrounding air. A vapour behaves as a gas, and the maximum concentration of a vapour depends on the temperature and the saturation vapour pressure of the substance. Any process involving combustion will generate vapours or gases. Degreasing operations may be performed by vapour-phase degreasing or by soak cleaning with solvents. Work activities such as charging and mixing liquids, painting, spraying, cleaning and dry cleaning may generate harmful vapours.
Liquids may consist of a pure substance or a solution of two or more substances (e.g., solvents, acids, alkalis). A liquid stored in an open container will partially evaporate into the gas phase. The concentration in the vapour phase at equilibrium depends on the vapour pressure of the substance, its concentration in the liquid phase, and the temperature. Operations or activities with liquids might give rise to splashes or other skin contact, besides harmful vapours.
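The dependence of the equilibrium vapour concentration on vapour pressure and temperature described above can be made concrete. A minimal sketch, assuming the Antoine equation for saturation vapour pressure; the constants used here are the well-known ones for water (valid roughly 1–100 °C), and for a workplace solvent the constants would be taken from a handbook:

```python
def antoine_pressure_mmhg(A, B, C, temp_c):
    """Saturation vapour pressure via the Antoine equation: log10(P) = A - B/(C + T)."""
    return 10 ** (A - B / (C + temp_c))

def saturation_ppm(p_vap_mmhg, p_total_mmhg=760.0):
    """Maximum vapour concentration in air at equilibrium, in ppm (v/v)."""
    return p_vap_mmhg / p_total_mmhg * 1e6

# Antoine constants for water (mmHg, degrees Celsius).
A, B, C = 8.07131, 1730.63, 233.426
p = antoine_pressure_mmhg(A, B, C, 25.0)
print(f"{p:.1f} mmHg -> {saturation_ppm(p):.0f} ppm at saturation")
```

For a pure liquid in a closed container at 25 °C, this gives roughly 24 mmHg and about 3% by volume in the headspace; for a solution, the vapour concentration is further reduced in proportion to the substance's concentration in the liquid phase, as the paragraph above notes.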
Dusts consist of inorganic and organic particles, which can be classified as inhalable, thoracic or respirable, depending on particle size. Most organic dusts have a biological origin. Inorganic dusts will be generated in mechanical processes like grinding, sawing, cutting, crushing, screening or sieving. Dusts may be dispersed when dusty material is handled or whirled up by air movements from traffic. Handling dry materials or powder by weighing, filling, charging, transporting and packing will generate dust, as will activities like insulation and cleaning work.
Fumes are solid particles vaporized at high temperature and condensed to small particles. The vaporization is often accompanied by a chemical reaction such as oxidation. The single particles that make up a fume are extremely fine, usually less than 0.1 μm, and often aggregate in larger units. Examples are fumes from welding, plasma cutting and similar operations.
Mists are suspended liquid droplets generated by condensation from the gaseous state to the liquid state or by breaking up a liquid into a dispersed state by splashing, foaming or atomizing. Examples are oil mists from cutting and grinding operations, acid mists from electroplating, acid or alkali mists from pickling operations or paint spray mists from spraying operations.
As in many other countries, risk due to exposure to chemicals is regulated in Japan according to the category of the chemicals concerned, as listed in table 1. The governmental ministry or agency in charge varies. In the case of industrial chemicals in general, the major law that applies is the Law Concerning the Examination and Regulation of Manufacture, Etc. of Chemical Substances, or Chemical Substances Control Law (CSCL) for short. The agencies in charge are the Ministry of International Trade and Industry and the Ministry of Health and Welfare. In addition, the Labour Safety and Hygiene Law (administered by the Ministry of Labour) provides that industrial chemicals should be examined for possible mutagenicity and that, if the chemical concerned is found to be mutagenic, workers’ exposure to it should be minimized by enclosure of production facilities, installation of local exhaust systems, use of protective equipment and so on.
Table 1. Regulation of chemical substances by laws, Japan
| Category of chemicals | Law | Ministry or agency in charge |
|---|---|---|
| Food and food additives | Foodstuff Hygiene Law | MHW |
| Narcotics | Narcotics Control Law | MHW |
| Agricultural chemicals | Agricultural Chemicals Control Law | MAFF |
| Industrial chemicals | Chemical Substances Control Law | MHW & MITI |
| All chemicals except for radioactive substances | Law concerning Regulation of Household Products Containing Harmful Substances; Poisonous and Deleterious Substances Control Law; Labour Safety and Hygiene Law | MHW; MHW; MOL |
| Radioactive substances | Law concerning Radioactive Substances | STA |
Abbreviations: MHW—Ministry of Health and Welfare; MAFF—Ministry of Agriculture, Forestry and Fishery; MITI—Ministry of International Trade and Industry; MOL—Ministry of Labour; STA—Science and Technology Agency.
Because hazardous industrial chemicals will be identified primarily by the CSCL, the framework of tests for hazard identification under CSCL will be described in this section.
The Concept of the Chemical Substances Control Law
The original CSCL was passed by the Diet (the parliament of Japan) in 1973 and took effect on 16 April 1974. The basic motivation for the Law was the prevention of environmental pollution, and of the resulting human health effects, by PCBs and PCB-like substances. PCBs are characterized by (1) persistence in the environment (poor biodegradability), (2) increasing concentration up the food chain or food web (bioaccumulation) and (3) chronic toxicity in humans. Accordingly, the Law mandated that each industrial chemical be examined for such characteristics prior to marketing in Japan. In parallel with the passage of the Law, the Diet decided that the Environment Agency should monitor the general environment for possible chemical pollution. The Law was then amended by the Diet in 1986 (the amendment taking effect in 1987) in order to harmonize with OECD actions regarding health and the environment, the lowering of non-tariff barriers in international trade and especially the setting of a minimum premarketing set of data (MPD) and related test guidelines. The amendment also reflected the observation at the time, through environmental monitoring, that chemicals such as trichloroethylene and tetrachloroethylene, which are poorly biodegradable and chronically toxic although not highly bioaccumulating, can pollute the environment; these chemical substances were detected in groundwater nationwide.
The Law classifies industrial chemicals into two categories: existing chemicals and new chemicals. The existing chemicals are those listed in the “Existing Chemicals Inventory” (established with the passage of the original Law) and number about 20,000, the number depending on the way some chemicals are named in the inventory. Chemicals not in the inventory are called new chemicals. The government is responsible for hazard identification of the existing chemicals, whereas the company or other entity that wishes to introduce a new chemical into the market in Japan is responsible for hazard identification of the new chemical. Two governmental ministries, the Ministry of Health and Welfare (MHW) and the Ministry of International Trade and Industry (MITI), are in charge of the Law, and the Environment Agency can express its opinion when necessary. Radioactive substances, specified poisons, stimulants and narcotics are excluded because they are regulated by other laws.
Test System Under CSCL
The flow scheme of examination is depicted in figure 1; it is a stepwise system in principle. All chemicals (for exceptions, see below) are first examined for biodegradability in vitro. If the chemical is readily biodegradable, it is considered “safe”. Otherwise, the chemical is examined for bioaccumulation. If it is found to be “highly accumulating”, full toxicity data are requested, on the basis of which the chemical is classified as a “Class 1 specified chemical substance” when toxicity is confirmed, or as “safe” otherwise. A chemical with no or low accumulation is subject to toxicity screening tests, which consist of mutagenicity tests and 28-day repeated dosing of experimental animals (for details, see table 2). After comprehensive evaluation of the toxicity data, the chemical is classified as a “Designated chemical substance” if the data indicate toxicity; otherwise, it is considered “safe”. When other data suggest a great possibility of environmental pollution with the chemical of concern, full toxicity data are requested, from which the designated chemical is reclassified as a “Class 2 specified chemical substance” when positive; otherwise, it is considered “safe”. Toxicological and ecotoxicological characteristics of “Class 1 specified chemical substances”, “Class 2 specified chemical substances” and “Designated chemical substances” are listed in table 3, together with outlines of regulatory actions.
Table 2. Test items in the examination of chemicals under the Japanese Chemical Substances Control Law

| Test item | Outline |
|---|---|
| Biodegradation | For 2 weeks in principle, in vitro, with activated sludge |
| Bioaccumulation | For 8 weeks in principle, with carp |
| Mutagenicity | Ames test and test with E. coli, ±S9 mix; chromosomal aberration test with CHL cells, etc., ±S9 mix |
| 28-day repeated dosing | Rats, 3 dose levels plus control for NOEL; 2-week recovery test at the highest dose level in addition |
Table 3. Characteristics of classified chemical substances and regulations under the Japanese Chemical Substances Control Law
| Category | Toxicological and ecotoxicological characteristics | Outline of regulation |
|---|---|---|
| Class 1 specified chemical substances | Nonbiodegradability; high bioaccumulation; chronic toxicity | Authorization to manufacture or import necessary¹; restriction in use |
| Class 2 specified chemical substances | Nonbiodegradability; non- or low bioaccumulation; chronic toxicity; suspected environmental pollution | Notification on scheduled manufacturing or import quantity; technical guideline to prevent pollution/health effects |
| Designated chemical substances | Nonbiodegradability; non- or low bioaccumulation; suspected chronic toxicity | Report on manufacturing or import quantity; study and literature survey |

¹ No authorization in practice.
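The stepwise flow scheme described above can be sketched as a simple decision function. This is an illustrative sketch only: the function name, inputs and string labels are assumptions for demonstration, not terms of the Law itself.

```python
# Hypothetical sketch of the CSCL stepwise examination flow.
# Inputs represent the outcomes of the biodegradation, bioaccumulation
# and toxicity examinations described in the text.

def classify_cscl(readily_biodegradable: bool,
                  highly_bioaccumulating: bool,
                  toxic: bool,
                  suspected_environmental_pollution: bool = False) -> str:
    """Return the category suggested by the CSCL flow scheme."""
    if readily_biodegradable:
        return "safe"                        # step 1: biodegradability screen
    if highly_bioaccumulating:
        # full toxicity data are requested
        return "Class 1 specified" if toxic else "safe"
    # non- or low-accumulating: toxicity screening tests
    if not toxic:
        return "safe"
    if suspected_environmental_pollution:
        return "Class 2 specified"           # reclassified from "designated"
    return "Designated"
```

For example, a poorly biodegradable, highly bioaccumulating and chronically toxic chemical (the PCB profile) lands in Class 1, while a poorly biodegradable, low-accumulating toxic solvent detected in the environment (the trichloroethylene profile) lands in Class 2.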
Testing is not required for a new chemical manufactured or imported in limited amounts (i.e., less than 1,000 kg per company per year and less than 1,000 kg per year for all of Japan). Polymers are examined under the high-molecular-weight compound flow scheme, which assumes that absorption into the body is unlikely when a chemical has a molecular weight greater than 1,000 and is stable in the environment.
Results of Classification of Industrial Chemicals, as of 1996
From the time the CSCL took effect in 1974 to the end of 1996, 1,087 existing chemical items were examined under the original and amended CSCL. Among the 1,087, nine items (some identified by generic names) were classified as “Class 1 specified chemical substances”. Among those remaining, 36 were classified as “designated”, of which 23 were later reclassified as “Class 2 specified chemical substances” and the other 13 remained as “designated”. The names of the Class 1 and 2 specified chemical substances are listed in figure 2. It is clear from the list that most of the Class 1 chemicals are organochlorine pesticides, in addition to PCB and its substitute, with the exception of one seaweed killer. A majority of the Class 2 chemicals are seaweed killers, with the exception of three once widely used chlorinated hydrocarbon solvents.
In the same period, up to the end of 1996, 2,335 new chemicals were submitted for approval, of which 221 (about 9.5%) were identified as “designated”, but none as Class 1 or 2 chemicals. The other chemicals were considered “safe” and approved for manufacture or import.
Hazard Surveillance and Survey Methods
Occupational surveillance involves active programmes to anticipate, observe, measure, evaluate and control exposures to potential health hazards in the workplace. Surveillance often involves a team of people that includes an occupational hygienist, occupational physician, occupational health nurse, safety officer, toxicologist and engineer. Depending upon the occupational environment and problem, three surveillance methods can be employed: medical, environmental and biological. Medical surveillance is used to detect the presence or absence of adverse health effects for an individual from occupational exposure to contaminants, by performing medical examinations and appropriate biological tests. Environmental surveillance is used to document potential exposure to contaminants for a group of employees, by measuring the concentration of contaminants in the air, in bulk samples of materials, and on surfaces. Biological surveillance is used to document the absorption of contaminants into the body and correlate with environmental contaminant levels, by measuring the concentration of hazardous substances or their metabolites in the blood, urine or exhaled breath of workers.
Medical surveillance is performed because diseases can be caused or exacerbated by exposure to hazardous substances. It requires an active programme with professionals who are knowledgeable about occupational diseases, diagnoses and treatment. Medical surveillance programmes provide steps to protect, educate, monitor and, in some cases, compensate the employee. It can include pre-employment screening programmes, periodic medical examinations, specialized tests to detect early changes and impairment caused by hazardous substances, medical treatment and extensive record keeping. Pre-employment screening involves the evaluation of occupational and medical history questionnaires and results of physical examinations. Questionnaires provide information concerning past illnesses and chronic diseases (especially asthma, skin, lung and heart diseases) and past occupational exposures. There are ethical and legal implications of pre-employment screening programmes if they are used to determine employment eligibility. However, they are fundamentally important when used to (1) provide a record of previous employment and associated exposures, (2) establish a baseline of health for an employee and (3) test for hypersusceptibility. Medical examinations can include audiometric tests for hearing loss, vision tests, tests of organ function, evaluation of fitness for wearing respiratory protection equipment, and baseline urine and blood tests. Periodic medical examinations are essential for evaluating and detecting trends in the onset of adverse health effects and may include biological monitoring for specific contaminants and the use of other biomarkers.
Environmental and Biological Surveillance
Environmental and biological surveillance starts with an occupational hygiene survey of the work environment to identify potential hazards and contaminant sources, and determine the need for monitoring. For chemical agents, monitoring could involve air, bulk, surface and biological sampling. For physical agents, monitoring could include noise, temperature and radiation measurements. If monitoring is indicated, the occupational hygienist must develop a sampling strategy that includes which employees, processes, equipment or areas to sample, the number of samples, how long to sample, how often to sample, and the sampling method. Industrial hygiene surveys vary in complexity and focus depending upon the purpose of the investigation, type and size of establishment, and nature of the problem.
There are no rigid formulas for performing surveys; however, thorough preparation prior to the on-site inspection significantly increases effectiveness and efficiency. Investigations that are motivated by employee complaints and illnesses have an additional focus of identifying the cause of the health problems. Indoor air quality surveys focus on indoor as well as outdoor sources of contamination. Regardless of the occupational hazard, the overall approach to surveying and sampling workplaces is similar; therefore, this chapter will use chemical agents as a model for the methodology.
Routes of Exposure
The mere presence of occupational stresses in the workplace does not automatically imply that there is a significant potential for exposure; the agent must reach the worker. For chemicals, the liquid or vapour form of the agent must make contact with and/or be absorbed into the body to induce an adverse health effect. If the agent is isolated in an enclosure or captured by a local exhaust ventilation system, the exposure potential will be low, regardless of the chemical’s inherent toxicity.
The route of exposure can impact the type of monitoring performed as well as the hazard potential. For chemical and biological agents, workers are exposed through inhalation, skin contact, ingestion and injection; the most common routes of absorption in the occupational environment are through the respiratory tract and the skin. To assess inhalation, the occupational hygienist observes the potential for chemicals to become airborne as gases, vapours, dusts, fumes or mists.
Skin absorption of chemicals is important primarily when there is direct contact with the skin through splashing, spraying, wetting or immersion with fat-soluble hydrocarbons and other organic solvents. Immersion includes body contact with contaminated clothing, hand contact with contaminated gloves, and hand and arm contact with bulk liquids. For some substances, such as amines and phenols, skin absorption can be as rapid as absorption through the lungs for substances that are inhaled. For some contaminants such as pesticides and benzidine dyes, skin absorption is the primary route of absorption, and inhalation is a secondary route. Such chemicals can readily enter the body through the skin, increase body burden and cause systemic damage. When allergic reactions or repeated washing dries and cracks the skin, there is a dramatic increase in the number and type of chemicals that can be absorbed into the body. Ingestion, an uncommon route of absorption for gases and vapours, can be important for particulates, such as lead. Ingestion can occur from eating contaminated food, eating or smoking with contaminated hands, and coughing and then swallowing previously inhaled particulates.
Injection of materials directly into the bloodstream can occur from hypodermic needles inadvertently puncturing the skin of health care workers in hospitals, and from high-velocity projectiles released from high-pressure sources and directly contacting the skin. Airless paint sprayers and hydraulic systems have pressures high enough to puncture the skin and introduce substances directly into the body.
The Walk-Through Inspection
The purpose of the initial survey, called the walk-through inspection, is to systematically gather information to judge whether a potentially hazardous situation exists and whether monitoring is indicated. An occupational hygienist begins the walk-through survey with an opening meeting that can include representatives of management, employees, supervisors, occupational health nurses and union representatives. The occupational hygienist can powerfully impact the success of the survey and any subsequent monitoring initiatives by creating a team of people who communicate openly and honestly with one another and understand the goals and scope of the inspection. Workers must be involved and informed from the beginning to ensure that cooperation, not fear, dominates the investigation.
During the meeting, requests are made for process flow diagrams, plant layout drawings, past environmental inspection reports, production schedules, equipment maintenance schedules, documentation of personal protection programmes, and statistics concerning the number of employees, shifts and health complaints. All hazardous materials used and produced by an operation are identified and quantified. A chemical inventory of products, by-products, intermediates and impurities is assembled and all associated Material Safety Data Sheets are obtained. Equipment maintenance schedules, age and condition are documented because the use of older equipment may result in higher exposures due to the lack of controls.
After the meeting, the occupational hygienist performs a visual walk-through survey of the workplace, scrutinizing the operations and work practices, with the goal of identifying potential occupational stresses, ranking the potential for exposure, identifying the route of exposure and estimating the duration and frequency of exposure. Examples of occupational stresses are given in figure 1. The occupational hygienist uses the walk-through inspection to observe the workplace and have questions answered. Examples of observations and questions are given in figure 2.
Figure 1. Occupational stresses.
Figure 2. Observations and questions to ask on a walk-through survey.
In addition to the questions shown in figure 2, questions should be asked that uncover what is not immediately obvious. Questions could address:
Non-routine tasks can result in significant peak exposures to chemicals that are difficult to predict and measure during a typical workday. Process changes and chemical substitutions may alter the release of substances into the air and affect subsequent exposure. Changes in the physical layout of a work area can alter the effectiveness of an existing ventilation system. Changes in job functions can result in tasks performed by inexperienced workers and increased exposures. Renovations and repairs may introduce new materials and chemicals into the work environment which off-gas volatile organic chemicals or are irritants.
Indoor Air Quality Surveys
Indoor air quality surveys are distinct from traditional occupational hygiene surveys because they are typically encountered in non-industrial workplaces and may involve exposures to mixtures of trace quantities of chemicals, none of which alone appears capable of causing illness (Ness 1991). The goal of indoor air quality surveys is similar to occupational hygiene surveys in terms of identifying sources of contamination and determining the need for monitoring. However, indoor air quality surveys are always motivated by employee health complaints. In many cases, the employees have a variety of symptoms including headaches, throat irritation, lethargy, coughing, itching, nausea and non-specific hypersensitivity reactions that disappear when they go home. When health complaints do not disappear after the employees leave work, non-occupational exposures should be considered as well. Non-occupational exposures include hobbies, other jobs, urban air pollution, passive smoking and indoor exposures in the home. Indoor air quality surveys frequently use questionnaires to document employee symptoms and complaints and link them to job location or job function within the building. The areas with the highest incidence of symptoms are then targeted for further inspection.
Sources of indoor air contaminants that have been documented in indoor air quality surveys include:
For indoor air quality investigations, the walk-through inspection is essentially a building and environmental inspection to determine potential sources of contamination both inside and outside of the building. Inside building sources include:
Figure 3. Observations and questions for an indoor air quality walk-through survey.
Sampling and Measurement Strategies
Occupational exposure limits
After the walk-through inspection is completed, the occupational hygienist must determine whether sampling is necessary; sampling should be performed only if the purpose is clear. The occupational hygienist must ask, “What will be made of the sampling results and what questions will the results answer?” It is relatively easy to sample and obtain numbers; it is far more difficult to interpret them.
Air and biological sampling data are usually compared to recommended or mandated occupational exposure limits (OELs). Occupational exposure limits have been developed in many countries for inhalation and biological exposure to chemical and physical agents. To date, out of a universe of over 60,000 commercially used chemicals, approximately 600 have been evaluated by a variety of organizations and countries. The philosophical bases for the limits are determined by the organizations that have developed them. The most widely used limits, called threshold limit values (TLVs), are those issued in the United States by the American Conference of Governmental Industrial Hygienists (ACGIH). Most of the OELs used by the Occupational Safety and Health Administration (OSHA) in the United States are based upon the TLVs. However, the National Institute for Occupational Safety and Health (NIOSH) of the US Department of Health and Human Services has suggested its own limits, called recommended exposure limits (RELs).
For airborne exposures, there are three types of TLVs: an eight-hour time-weighted-average exposure, TLV-TWA, to protect against chronic health effects; a fifteen-minute average short-term exposure limit, TLV-STEL, to protect against acute health effects; and an instantaneous ceiling value, TLV-C, to protect against asphyxiants or chemicals that are immediately irritating. Guidelines for biological exposure levels are called biological exposure indices (BEIs). These guidelines represent the concentration of chemicals in the body that would correspond to inhalation exposure of a healthy worker at a specific concentration in air. Outside of the United States as many as 50 countries or groups have established OELs, many of which are identical to the TLVs. In Britain, the limits are called the Health and Safety Executive Occupational Exposure Standards (OES), and in Germany OELs are called Maximum Workplace Concentrations (MAKs).
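The TLV-TWA comparison above can be made concrete with a small calculation. This is a minimal sketch, assuming consecutive integrated samples over a shift and zero exposure for any unsampled time; the function name and values are illustrative.

```python
# Sketch: computing an 8-hour time-weighted average (TWA) from consecutive
# integrated samples, for comparison with a TLV-TWA.

def twa_8h(samples):
    """samples: list of (concentration, duration_hours) pairs.
    Unsampled time within the 8-hour shift is assumed to be zero exposure."""
    total = sum(conc * hours for conc, hours in samples)
    return total / 8.0

# Three consecutive samples covering a full 8-hour shift (e.g., ppm and hours):
samples = [(1.2, 3.0), (0.5, 3.0), (2.0, 2.0)]
print(twa_8h(samples))  # (1.2*3 + 0.5*3 + 2.0*2) / 8 = 1.1375
```

The result would then be compared directly against the chemical's TLV-TWA; a short-term limit (TLV-STEL) would instead be compared against the worst 15-minute average within the shift.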
OELs have been set for airborne exposures to gases, vapours and particulates; they do not exist for airborne exposures to biological agents. Therefore, most investigations of bioaerosol exposure compare indoor with outdoor concentrations. If the indoor/outdoor profile and concentration of organisms is different, an exposure problem may exist. There are no OELs for skin and surface sampling, and each case must be evaluated separately. In the case of surface sampling, concentrations are usually compared with acceptable background concentrations that were measured in other studies or were determined in the current study. For skin sampling, acceptable concentrations are calculated based upon toxicity, rate of absorption, amount absorbed and total dose. In addition, biological monitoring of a worker may be used to investigate skin absorption.
An environmental and biological sampling strategy is an approach to obtaining exposure measurements that fulfils a purpose. A carefully designed and effective strategy is scientifically defensible, optimizes the number of samples obtained, is cost-effective and prioritizes needs. The goal of the sampling strategy guides decisions concerning what to sample (selection of chemical agents), where to sample (personal, area or source sample), whom to sample (which worker or group of workers), sample duration (real-time or integrated), how often to sample (how many days), how many samples, and how to sample (analytical method). Traditionally, sampling performed for regulatory purposes involves brief campaigns (one or two days) that concentrate on worst-case exposures. While this strategy requires a minimum expenditure of resources and time, it often captures the least amount of information and has little applicability to evaluating long-term occupational exposures. To evaluate chronic exposures so that they are useful for occupational physicians and epidemiological studies, sampling strategies must involve repeated sampling over time for large numbers of workers.
The goal of environmental and biological sampling strategies is either to evaluate individual employee exposures or to evaluate contaminant sources. Employee monitoring may be performed to:
Source and ambient air monitoring may be performed to:
When monitoring employees, air sampling provides surrogate measures of dose resulting from inhalation exposure. Biological monitoring can provide the actual dose of a chemical resulting from all absorption routes including inhalation, ingestion, injection and skin. Thus, biological monitoring can more accurately reflect an individual’s total body burden and dose than air monitoring. When the relationship between airborne exposure and internal dose is known, biological monitoring can be used to evaluate past and present chronic exposures.
Figure 4. Goals of biological monitoring.
Biological monitoring has its limitations and should be performed only if it accomplishes goals that cannot be accomplished with air monitoring alone (Fiserova-Bergerova 1987). It is invasive, requiring samples to be taken directly from workers. Blood samples generally provide the most useful biological medium to monitor; however, blood is taken only if non-invasive tests such as urine or exhaled breath are not applicable. For most industrial chemicals, data concerning the fate of chemicals absorbed by the body are incomplete or non-existent; therefore, only a limited number of analytical measurement methods are available, and many are not sensitive or specific.
Biological monitoring results may be highly variable between individuals exposed to the same airborne concentrations of chemicals; age, health, weight, nutritional status, drugs, smoking, alcohol consumption, medication and pregnancy can impact uptake, absorption, distribution, metabolism and elimination of chemicals.
What to sample

Most occupational environments have exposures to multiple contaminants. Chemical agents are evaluated both individually and as multiple simultaneous assaults on workers. Chemical agents can act independently within the body or interact in a way that increases the toxic effect. The question of what to measure and how to interpret the results depends upon the biological mechanism of action of the agents when they are within the body. Agents can be evaluated separately if they act independently on altogether different organ systems, such as an eye irritant and a neurotoxin. If they act on the same organ system, such as two respiratory irritants, their combined effect is important. If the toxic effect of the mixture is the sum of the separate effects of the individual components, it is termed additive. If the toxic effect of the mixture is greater than the sum of the effects of the separate agents, their combined effect is termed synergistic. Exposure to cigarette smoke together with inhalation of asbestos fibres, for example, gives rise to a much greater risk of lung cancer than a simple additive effect would predict.
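For agents acting additively on the same organ system, a conventional screening rule (used, for example, in the ACGIH TLV mixture formula) sums each measured concentration over its own exposure limit; the mixture limit is considered exceeded when the sum is greater than 1. The sketch below illustrates that rule with made-up values.

```python
# Sketch: additive-mixture exposure index for agents acting on the same
# organ system. Concentrations and limits must share the same units.

def additive_mixture_index(exposures):
    """exposures: list of (measured_concentration, OEL) pairs.
    Returns the sum of C_i / OEL_i; a value above 1 indicates the
    additive mixture limit is exceeded."""
    return sum(conc / oel for conc, oel in exposures)

# Three respiratory irritants measured alongside their individual OELs:
index = additive_mixture_index([(40, 100), (30, 200), (20, 50)])
print(index)   # 0.4 + 0.15 + 0.4 = 0.95 -> below the mixture limit
```

Note that this rule applies only to additive effects; it would understate the hazard of a synergistic combination such as smoking plus asbestos.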
Sampling all the chemical agents in a workplace would be both expensive and not necessarily defensible. The occupational hygienist must prioritize the laundry list of potential agents by hazard or risk to determine which agents receive the focus.
Factors involved in ranking chemicals include:
Where to sample

To provide the best estimate of employee exposure, air samples are taken in the breathing zone of the worker (within a 30 cm radius of the head), and are called personal samples. To obtain breathing zone samples, the sampling device is placed directly on the worker for the duration of the sampling. If air samples are taken near the worker, outside of the breathing zone, they are called area samples. Area samples tend to underestimate personal exposures and do not provide good estimates of inhalation exposure. However, area samples are useful for evaluating contaminant sources and measuring ambient levels of contaminants. Area samples can be taken while walking through the workplace with a portable instrument, or with fixed sampling stations. Area sampling is routinely used at asbestos abatement sites for clearance sampling and for indoor air investigations.
Whom to sample
Ideally, to evaluate occupational exposure, each worker would be individually sampled for multiple days over the course of weeks or months. However, unless the workplace is small (<10 employees), it is usually not feasible to sample all the workers. To minimize the sampling burden in terms of equipment and cost, and increase the effectiveness of the sampling programme, a subset of employees from the workplace is sampled, and their monitoring results are used to represent exposures for the larger work force.
To select employees who are representative of the larger work force, one approach is to classify employees into groups with similar expected exposures, called homogeneous exposure groups (HEGs) (Corn 1985). After the HEGs are formed, a subset of workers is randomly selected from each group for sampling. Methods for determining the appropriate sample sizes assume a lognormal distribution of exposures, an estimated mean exposure, and a geometric standard deviation of 2.2 to 2.5. Prior sampling data might allow a smaller geometric standard deviation to be used. To classify employees into distinct HEGs, most occupational hygienists observe workers at their jobs and qualitatively predict exposures.
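The lognormal assumption behind HEG sample-size methods can be made concrete. The sketch below, with illustrative data and hypothetical function names, estimates a group's geometric mean (GM) and geometric standard deviation (GSD) from a handful of personal samples and the fraction of days on which exposure would be expected to exceed an OEL.

```python
# Sketch, assuming lognormally distributed exposures within a HEG:
# estimate GM and GSD from samples, then the expected OEL exceedance fraction.

import math
from statistics import NormalDist

def lognormal_summary(samples):
    """Return (GM, GSD) from a list of positive exposure measurements."""
    logs = [math.log(x) for x in samples]
    n = len(logs)
    mean = sum(logs) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in logs) / (n - 1))
    return math.exp(mean), math.exp(sd)

def exceedance_fraction(gm, gsd, oel):
    """Probability that a single day's exposure exceeds the OEL."""
    z = (math.log(oel) - math.log(gm)) / math.log(gsd)
    return 1.0 - NormalDist().cdf(z)

gm, gsd = lognormal_summary([0.4, 0.7, 1.1, 0.5, 0.9])   # e.g., mg/m3
print(round(exceedance_fraction(gm, gsd, oel=2.0), 3))
```

A GSD near the 2.2 to 2.5 range mentioned above would imply a much larger exceedance fraction than the tight illustrative data here; that sensitivity is precisely why prior sampling data, by supporting a smaller GSD, can reduce the number of samples needed.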
There are many approaches to forming HEGs; generally, workers may be classified by job task similarity or work area similarity. When both job and work area similarity are used, the method of classification is called zoning (see figure 5). Once airborne, chemical and biological agents can have complex and unpredictable spatial and temporal concentration patterns throughout the work environment. Therefore, proximity of the source relative to the employee may not be the best indicator of exposure similarity. Exposure measurements made on workers initially expected to have similar exposures may show that there is more variation between workers than predicted. In these cases, the exposure groups should be reconstructed into smaller sets of workers, and sampling should continue to verify that workers within each group actually have similar exposures (Rappaport 1995).
Figure 5. Factors involved in creating HEGs using zoning.
Exposures can be estimated for all employees, regardless of job title or risk, or only for employees who are assumed to have the highest exposures; the latter is called worst-case sampling. The selection of worst-case sampling employees may be based upon production, proximity to the source, past sampling data, inventory and chemical toxicity. The worst-case method is used for regulatory purposes and does not provide a measure of long-term mean exposure or day-to-day variability. Task-related sampling involves selecting workers whose jobs include similar tasks that occur less than daily.
There are many factors that enter into exposure and can affect the success of HEG classification, including the following:
Sample duration

The concentrations of chemical agents in air samples are either measured directly in the field, giving immediate results (real-time or grab sampling), or are collected over time in the field on sampling media or in sampling bags and measured in a laboratory (integrated sampling) (Lynch 1995). The advantage of real-time sampling is that results are obtained quickly on site and can capture measurements of short-term acute exposures. However, real-time methods are limited because they are not available for all contaminants of concern and may not be analytically sensitive or accurate enough to quantify the targeted contaminants. Real-time sampling may not be applicable when the occupational hygienist is interested in chronic exposures and requires time-weighted-average measurements to compare with OELs.
Real-time sampling is used for emergency evaluations, obtaining crude estimates of concentration, leak detection, ambient air and source monitoring, evaluating engineering controls, monitoring short-term exposures of less than 15 minutes, monitoring episodic exposures, monitoring highly toxic chemicals (e.g., carbon monoxide) and explosive mixtures, and process monitoring. Real-time sampling methods can capture changing concentrations over time and provide immediate qualitative and quantitative information. Integrated air sampling is usually performed for personal monitoring, area sampling and comparing concentrations to time-weighted-average OELs. The advantages of integrated sampling are that methods are available for a wide variety of contaminants; it can be used to identify unknowns; accuracy and specificity are high; and limits of detection are usually very low. Integrated samples that are analysed in a laboratory must contain enough contaminant to meet minimum analytical detection requirements; therefore, samples are collected over a predetermined time period.
In addition to analytical requirements of a sampling method, sample duration should be matched to the sampling purpose. For source sampling, duration is based upon the process or cycle time, or when there are anticipated peaks of concentrations. For peak sampling, samples should be collected at regular intervals throughout the day to minimize bias and identify unpredictable peaks. The sampling period should be short enough to identify peaks while also providing a reflection of the actual exposure period.
For personal sampling, duration is matched to the occupational exposure limit, task duration or anticipated biological effect. Real-time sampling methods are used for assessing acute exposures to irritants, asphyxiants, sensitizers and allergenic agents. Chlorine, carbon monoxide and hydrogen sulphide are examples of chemicals that can exert their effects quickly and at relatively low concentrations.
Chronic disease agents such as lead and mercury are usually sampled for a full shift (seven hours or more per sample), using integrated sampling methods. To evaluate full shift exposures, the occupational hygienist uses either a single sample or a series of consecutive samples that cover the entire shift. The sampling duration for exposures that occur for less than a full shift are usually associated with particular tasks or processes. Construction workers, indoor maintenance personnel and maintenance road crews are examples of jobs with exposures that are tied to tasks.
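The full-shift evaluation described above reduces to a simple weighted average. The sketch below assumes hypothetical concentrations and durations; it is an illustration of the calculation, not a prescribed method.

```python
# Sketch: 8-hour time-weighted average (TWA) from consecutive
# integrated samples covering a full shift. Data are hypothetical.

def twa_mg_m3(samples, period_hours=8.0):
    """samples: list of (concentration in mg/m3, duration in hours).
    Any unsampled time within the period is treated as zero exposure."""
    return sum(c * t for c, t in samples) / period_hours

# Three consecutive samples covering the whole shift (3 + 2 + 3 hours)
shift = [(0.12, 3.0), (0.30, 2.0), (0.05, 3.0)]
print(round(twa_mg_m3(shift), 3))  # mg/m3, to compare with an 8-h OEL
```

The result is directly comparable with an 8-hour time-weighted-average OEL; a single sample spanning the whole shift is the special case of a one-element list.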
How many samples and how often to sample?
Concentrations of contaminants can vary minute to minute, day to day and season to season, and variability can occur between individuals and within an individual. Exposure variability affects both the number of samples and the accuracy of the results. Variations in exposure can arise from different work practices, changes in pollutant emissions, the volume of chemicals used, production quotas, ventilation, temperature changes, worker mobility and task assignments. Most sampling campaigns are performed for only a couple of days in a year; therefore, the measurements obtained are rarely representative of full-period exposure. The period over which samples are collected is very short compared with the unsampled period; the occupational hygienist must extrapolate from the sampled to the unsampled period. For long-term exposure monitoring, each worker selected from a HEG should be sampled multiple times over the course of weeks or months, and exposures should be characterized for all shifts. While the day shift may be the busiest, the night shift may have the least supervision and there may be lapses in work practices.
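Repeated measurements of the kind described above are commonly treated as lognormally distributed, so a set of samples from one worker or HEG is often summarized by a geometric mean and geometric standard deviation. A minimal sketch with hypothetical data:

```python
import math

def geometric_stats(concentrations):
    """Geometric mean (GM) and geometric standard deviation (GSD) of
    repeated exposure measurements (hypothetical data below), treating
    the data as lognormally distributed."""
    logs = [math.log(c) for c in concentrations]
    n = len(logs)
    mean = sum(logs) / n
    var = sum((x - mean) ** 2 for x in logs) / (n - 1)  # sample variance
    return math.exp(mean), math.exp(math.sqrt(var))

# Five full-shift samples (mg/m3) for one worker over several weeks
gm, gsd = geometric_stats([0.5, 1.2, 0.8, 2.1, 0.9])
```

A GSD well above about 2 indicates high day-to-day variability and argues for more sampling days before characterizing the HEG.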
Active and passive sampling
Contaminants are collected on sampling media either by actively pulling an air sample through the media, or by passively allowing the air to reach the media. Active sampling uses a battery-powered pump, and passive sampling uses diffusion or gravity to bring the contaminants to the sampling media. Gases, vapours, particulates and bioaerosols are all collected by active sampling methods; gases and vapours can also be collected by passive diffusion sampling.
For gases, vapours and most particulates, once the sample is collected the mass of the contaminant is measured, and concentration is calculated by dividing the mass by the volume of sampled air. For gases and vapours, concentration is expressed as parts per million (ppm) or mg/m3, and for particulates concentration is expressed as mg/m3 (Dinardi 1995).
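The mass-over-volume calculation, and the conversion between mg/m3 and ppm for gases and vapours, can be sketched as below. The molar volume of 24.45 l/mol applies at 25 °C and 1 atm; the toluene figures are hypothetical.

```python
MOLAR_VOLUME_25C = 24.45  # litres per mole of ideal gas at 25 degC, 1 atm

def conc_mg_m3(mass_mg, air_volume_m3):
    """Concentration is the collected contaminant mass divided by the
    volume of sampled air."""
    return mass_mg / air_volume_m3

def ppm_from_mg_m3(c_mg_m3, molecular_weight):
    """Convert mg/m3 to parts per million for a gas or vapour."""
    return c_mg_m3 * MOLAR_VOLUME_25C / molecular_weight

# Hypothetical: 0.24 mg of toluene (MW 92.14) collected from 0.48 m3 of air
c = conc_mg_m3(0.24, 0.48)                 # 0.5 mg/m3
print(round(ppm_from_mg_m3(c, 92.14), 3))  # roughly 0.13 ppm
```

Particulate results stay in mg/m3; the ppm conversion is meaningful only for gases and vapours.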
In integrated sampling, air sampling pumps are critical components of the sampling system because concentration estimates require knowledge of the volume of sampled air. Pumps are selected based upon desired flowrate, ease of servicing and calibration, size, cost and suitability for hazardous environments. The primary selection criterion is flowrate: low-flow pumps (0.5 to 500 ml/min) are used for sampling gases and vapours; high-flow pumps (500 to 4,500 ml/min) are used for sampling particulates, bioaerosols and gases and vapours. To ensure accurate sample volumes, pumps must be carefully calibrated. Calibration is performed using primary standards such as manual or electronic soap-bubble meters, which directly measure volume, or secondary methods such as wet test meters, dry gas meters and precision rotameters that are calibrated against primary methods.
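The soap-bubble calibration and the resulting sample volume are simple arithmetic; the bubble volume, transit time and sampling duration below are hypothetical.

```python
def flow_ml_min(bubble_volume_ml, transit_time_s):
    """Primary calibration: time a soap bubble over a known volume;
    flowrate is volume divided by time."""
    return bubble_volume_ml * 60.0 / transit_time_s

def sampled_volume_l(flowrate_ml_min, duration_min):
    """Air volume used in the concentration calculation."""
    return flowrate_ml_min * duration_min / 1000.0

flow = flow_ml_min(1000.0, 30.0)       # 1 litre in 30 s -> 2,000 ml/min
vol = sampled_volume_l(flow, 240.0)    # 4-hour sample -> 480 litres
```

In practice the pump is calibrated before and after sampling with the sampling medium in line, and the two flowrates are averaged.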
Gases and vapours: sampling media
Gases and vapours are collected using porous solid sorbent tubes, impingers, passive monitors and bags. Sorbent tubes are hollow glass tubes that have been filled with a granular solid that enables adsorption of chemicals unchanged on its surface. Solid sorbents are specific for groups of compounds; commonly used sorbents include charcoal, silica gel and Tenax. Charcoal sorbent, an amorphous form of carbon, is electrically nonpolar, and preferentially adsorbs organic gases and vapours. Silica gel, an amorphous form of silica, is used to collect polar organic compounds, amines and some inorganic compounds. Because of its affinity for polar compounds, it will adsorb water vapour; therefore, at elevated humidity, water can displace the less polar chemicals of interest from the silica gel. Tenax, a porous polymer, is used for sampling very low concentrations of nonpolar volatile organic compounds.
The ability to capture contaminants in air accurately and avoid contaminant loss depends upon the sampling rate, sampling volume, and the volatility and concentration of the airborne contaminant. Collection efficiency of solid sorbents can be adversely affected by increased temperature, humidity, flowrate, concentration, sorbent particle size and number of competing chemicals. As collection efficiency decreases, chemicals are lost during sampling and concentrations are underestimated. To detect chemical loss, or breakthrough, solid sorbent tubes have two sections of granular material separated by a foam plug. The front section is used for sample collection and the back section is used to determine breakthrough. Breakthrough has occurred when at least 20 to 25% of the contaminant is present in the back section of the tube. Analysis of contaminants from solid sorbents requires extraction of the contaminant from the medium using a solvent. For each batch of sorbent tubes and chemicals collected, the laboratory must determine the desorption efficiency, the efficiency of removal of chemicals from the sorbent by the solvent. For charcoal and silica gel, the most commonly used solvent is carbon disulphide. For Tenax, the chemicals are extracted using thermal desorption directly into a gas chromatograph.
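The breakthrough check and the desorption-efficiency correction can be sketched as follows. The 25% threshold is taken from the 20 to 25% criterion above and is interpreted here relative to the front-section mass (an assumption); the masses and efficiency are hypothetical.

```python
def breakthrough_suspected(front_ug, back_ug, threshold=0.25):
    """Flag breakthrough when the back section of the sorbent tube
    holds at least 20-25% of the mass found on the front section
    (the 0.25 threshold and its basis are assumptions)."""
    return back_ug >= threshold * front_ug

def corrected_mass_ug(front_ug, back_ug, desorption_efficiency):
    """Divide the analysed mass by the laboratory-determined
    desorption efficiency (a fraction between 0 and 1)."""
    return (front_ug + back_ug) / desorption_efficiency

print(breakthrough_suspected(100.0, 30.0))            # True: 30% on the back
print(round(corrected_mass_ug(100.0, 5.0, 0.95), 1))  # mass corrected upward
```

A sample that shows breakthrough is normally reported as a minimum estimate of exposure, since an unknown amount of contaminant has passed through the tube entirely.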
Impingers are usually glass bottles with an inlet tube that allows air to be drawn into the bottle through a solution that collects the gases and vapours by absorption either unchanged in solution or by a chemical reaction. Impingers are used less and less in workplace monitoring, especially for personal sampling, because they can break, and the liquid media can spill onto the employee. There are a variety of types of impingers, including gas wash bottles, spiral absorbers, glass bead columns, midget impingers and fritted bubblers. All impingers can be used to collect area samples; the most commonly used impinger, the midget impinger, can be used for personal sampling as well.
Passive, or diffusion, monitors are small, have no moving parts and are available for both organic and inorganic contaminants. Most organic monitors use activated charcoal as the collection medium. In theory, any compound that can be sampled by a charcoal sorbent tube and pump can be sampled using a passive monitor. Each monitor has a uniquely designed geometry to give an effective sampling rate. Sampling starts when the monitor cover is removed and ends when the cover is replaced. Most diffusion monitors are accurate for eight-hour time-weighted-average exposures and are not appropriate for short-term exposures.
Sampling bags can be used to collect integrated samples of gases and vapours. Their low permeability and low adsorptivity enable storage for a day with minimal sample loss. Bags are made of Teflon (polytetrafluoroethylene) or Tedlar (polyvinyl fluoride).
Sampling media: particulate materials
Occupational sampling for particulate materials, or aerosols, is currently in a state of flux; traditional sampling methods will eventually be replaced by particle size selective (PSS) sampling methods. Traditional sampling methods will be discussed first, followed by PSS methods.
The most commonly used media for collecting aerosols are fibre or membrane filters; aerosol removal from the air stream occurs by collision and attachment of the particles to the surface of the filters. The choice of filter medium depends upon the physical and chemical properties of the aerosols to be sampled, the type of sampler and the type of analysis. When selecting filters, they must be evaluated for collection efficiency, pressure drop, hygroscopicity, background contamination, strength and pore size, which can range from 0.01 to 10 μm. Membrane filters are manufactured in a variety of pore sizes and are usually made from cellulose ester, polyvinylchloride or polytetrafluoroethylene. Particle collection occurs at the surface of the filter; therefore, membrane filters are usually used in applications where microscopy will be performed. Mixed cellulose ester filters can be easily dissolved with acid and are usually used for collection of metals for analysis by atomic absorption. Nuclepore filters (polycarbonate) are very strong and thermally stable, and are used for sampling and analysing asbestos fibres using transmission electron microscopy. Fibre filters are usually made of fibreglass and are used to sample aerosols such as pesticides and lead.
For occupational exposures to aerosols, a known volume of air can be sampled through the filters, the total increase in mass (gravimetric analysis) can be measured (mg/m3 air), the total number of particles can be counted (fibres/cc) or the aerosols can be identified (chemical analysis). For mass calculations, either the total dust that enters the sampler or only the respirable fraction can be measured. For total dust, the increase in mass represents exposure from deposition in all parts of the respiratory tract. Total dust samplers are subject to error from high winds passing across the sampler and from improper orientation of the sampler. High winds and upward-facing filters can result in the collection of extra particles and overestimation of exposure.
For respirable dust sampling, the increase in mass represents exposure from deposition in the gas exchange (alveolar) region of the respiratory tract. To collect only the respirable fraction, a preclassifier called a cyclone is used to alter the distribution of airborne dust presented to the filter. Aerosols are drawn into the cyclone, accelerated and whirled, causing the heavier particles to be thrown out to the edge of the air stream and dropped to a removal section at the bottom of the cyclone. The respirable particles that are less than 10 μm remain in the air stream and are drawn up and collected on the filter for subsequent gravimetric analysis.
Sampling errors encountered when performing total and respirable dust sampling result in measurements that do not accurately reflect exposure or relate to adverse health effects. Therefore, PSS has been proposed to redefine the relationship between particle size, adverse health impact and sampling method. In PSS sampling, the measurement of particles is related to the sizes that are associated with specific health effects. The International Organization for Standardization (ISO) and the ACGIH have proposed three particulate mass fractions: inhalable particulate mass (IPM), thoracic particulate mass (TPM) and respirable particulate mass (RPM). IPM refers to particles that can be expected to enter through the nose and mouth, and would replace the traditional total mass fraction. TPM refers to particles that can penetrate the upper respiratory system past the larynx. RPM refers to particles that are capable of depositing in the gas-exchange region of the lung, and would replace the current respirable mass fraction. The practical adoption of PSS sampling requires the development of new aerosol sampling methods and PSS-specific occupational exposure limits.
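The three PSS fractions are defined as collection-efficiency curves over aerodynamic particle diameter. The sketch below uses constants commonly cited for the ISO/ACGIH conventions (inhalable: 0.5(1 + e^(-0.06d)); respirable: the inhalable curve times the complement of a cumulative lognormal with 4.25 μm median and GSD 1.5, giving a 50% cut near 4 μm); these values should be verified against the current standard before use.

```python
import math

def _cum_normal(x):
    """Cumulative standard normal distribution via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def inhalable_fraction(d_um):
    """Inhalable convention for aerodynamic diameter d (valid to ~100 um);
    constants assumed from the ISO/ACGIH conventions."""
    return 0.5 * (1.0 + math.exp(-0.06 * d_um))

def respirable_fraction(d_um, median_um=4.25, gsd=1.5):
    """Respirable convention: inhalable fraction multiplied by the
    complement of a cumulative lognormal (median and GSD assumed)."""
    x = math.log(d_um / median_um) / math.log(gsd)
    return inhalable_fraction(d_um) * (1.0 - _cum_normal(x))
```

The thoracic convention has the same lognormal form with a larger median (about 11.64 μm), so TPM always lies between IPM and RPM for a given size distribution.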
Sampling media: biological materials
There are few standardized methods for sampling biological material or bioaerosols. Although sampling methods are similar to those used for other airborne particulates, viability of most bioaerosols must be preserved to ensure laboratory culturability. Therefore, they are more difficult to collect, store and analyse. The strategy for sampling bioaerosols involves collection directly on semisolid nutrient agar or plating after collection in fluids, incubation for several days, and identification and quantification of the cells that have grown. The colonies of cells that have multiplied on the agar are counted as colony-forming units (CFU) for viable bacteria or fungi, and plaque-forming units (PFU) for active viruses. With the exception of spores, filters are not recommended for bioaerosol collection because dehydration causes cell damage.
Viable aerosolized micro-organisms are collected using all-glass impingers (AGI-30), slit samplers and inertial impactors. Impingers collect bioaerosols in liquid and the slit sampler collects bioaerosols on glass slides at high volumes and flowrates. The impactor is used with one to six stages, each containing a Petri dish, to allow for separation of particles by size.
Interpretation of sampling results must be done on a case-by-case basis because there are no occupational exposure limits. Evaluation criteria must be determined prior to sampling; for indoor air investigations, in particular, samples taken outside of the building are used as a background reference. A rule of thumb is that concentrations should be ten times background to suspect contamination. When using culture plating techniques, concentrations are probably underestimated because of losses of viability during sampling and incubation.
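The concentration calculation and the ten-times-background rule of thumb are straightforward; the colony count, sampling time and the 28.3 l/min flowrate (typical of Andersen-type impactors, an assumption here) are hypothetical.

```python
def cfu_per_m3(colonies, flow_l_min, minutes):
    """Bioaerosol concentration: colonies counted on the agar divided
    by the volume of air drawn through the sampler."""
    volume_m3 = flow_l_min * minutes / 1000.0
    return colonies / volume_m3

def contamination_suspected(indoor, outdoor, factor=10.0):
    """Rule of thumb from the text: suspect contamination when the
    indoor level is at least ten times the outdoor background."""
    return indoor >= factor * outdoor

c = cfu_per_m3(120, 28.3, 2.0)      # hypothetical 2-minute impactor sample
print(contamination_suspected(c, 150.0))
```

Because culture-based counts underestimate true concentrations, a result just below the ten-times threshold still warrants follow-up sampling.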
Skin and surface sampling
There are no standard methods for evaluating skin exposure to chemicals and predicting dose. Surface sampling is performed primarily to evaluate work practices and identify potential sources of skin absorption and ingestion. Two types of surface sampling methods are used to assess dermal and ingestion potential: direct methods, which involve sampling the skin of a worker, and indirect methods, which involve wipe sampling surfaces.
Direct skin sampling involves placing gauze pads on the skin to absorb chemicals, rinsing the skin with solvents to remove contaminants and using fluorescence to identify skin contamination. Gauze pads are placed on different parts of the body and are either left exposed or are placed under personal protective equipment. At the end of the workday the pads are removed and are analysed in the laboratory; the distribution of concentrations from different parts of the body are used to identify skin exposure areas. This method is inexpensive and easy to perform; however, the results are limited because gauze pads are not good physical models of the absorption and retention properties of skin, and measured concentrations are not necessarily representative of the entire body.
Skin rinses involve wiping the skin with solvents or placing hands in plastic bags filled with solvents to measure the concentration of chemicals on the surface. This method can underestimate dose because only the unabsorbed fraction of chemicals is collected.
Fluorescence monitoring is used to identify skin exposure for chemicals that naturally fluoresce, such as polynuclear aromatics, and to identify exposures for chemicals in which fluorescent compounds have been intentionally added. The skin is scanned with an ultraviolet light to visualize contamination. This visualization provides workers with evidence of the effect of work practices on exposure; research is underway to quantify the fluorescence intensity and relate it to dose.
Indirect wipe sampling methods involve the use of gauze, glass fibre filters or cellulose paper filters, to wipe the insides of gloves or respirators, or the tops of surfaces. Solvents may be added to increase collection efficiency. The gauze or filters are then analysed in the laboratory. To standardize the results and enable comparison between samples, a square template is used to sample a 100 cm2 area.
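Normalizing wipe results to the 100 cm2 template area makes samples comparable; the masses and areas below are hypothetical.

```python
def surface_loading_per_100cm2(mass_ug, area_cm2):
    """Normalize a wipe-sample result to the standard 100 cm2 template
    so that loadings from different surfaces can be compared."""
    return mass_ug * 100.0 / area_cm2

print(surface_loading_per_100cm2(25.0, 100.0))  # standard template area
print(surface_loading_per_100cm2(10.0, 50.0))   # smaller wiped area
```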
Biological monitoring

Blood, urine and exhaled air samples are the most suitable specimens for routine biological monitoring, while hair, milk, saliva and nails are less frequently used. Biological monitoring is performed by collecting bulk blood and urine samples in the workplace and analysing them in the laboratory. Exhaled air samples are collected in Tedlar bags, specially designed glass pipettes or sorbent tubes, and are analysed in the field using direct-reading instruments, or in the laboratory. Blood, urine and exhaled air samples are primarily used to measure the unchanged parent compound (the same chemical that is sampled in workplace air), its metabolite or a biochemical change (intermediate) that has been induced in the body. For example, the parent compound lead is measured in blood to evaluate lead exposure, the metabolite mandelic acid is measured in urine for both styrene and ethyl benzene, and carboxyhaemoglobin is the intermediate measured in blood for both carbon monoxide and methylene chloride exposure. For exposure monitoring, the concentration of an ideal determinant will be highly correlated with intensity of exposure. For medical monitoring, the concentration of an ideal determinant will be highly correlated with target organ concentration.
The timing of specimen collection can impact the usefulness of the measurements; samples should be collected at times which most accurately reflect exposure. Timing is related to the excretion biological half-life of a chemical, which reflects how quickly a chemical is eliminated from the body; this can vary from hours to years. Target organ concentrations of chemicals with short biological half-lives closely follow the environmental concentration; target organ concentrations of chemicals with long biological half-lives fluctuate very little in response to environmental exposures. For chemicals with short biological half-lives, less than three hours, a sample is taken immediately at the end of the workday, before concentrations rapidly decline, to reflect exposure on that day. Samples may be taken at any time for chemicals with long half-lives, such as polychlorinated biphenyls and lead.
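First-order elimination makes the timing argument above concrete; the half-life and sampling delay below are illustrative.

```python
import math

def remaining_fraction(hours_after, half_life_hours):
    """First-order elimination: fraction of a determinant remaining a
    given number of hours after exposure ends."""
    return math.exp(-math.log(2.0) * hours_after / half_life_hours)

# A 3-hour half-life: by the next morning (16 h later) only about 2-3%
# remains, which is why the specimen must be collected immediately at
# the end of the shift.
print(round(remaining_fraction(16.0, 3.0), 3))
```

For a chemical with a half-life of years, such as lead, the same formula shows the level is essentially unchanged overnight, so sample timing is unimportant.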
Direct-reading instruments

Direct-reading instruments provide real-time quantification of contaminants; the sample is analysed within the equipment and does not require off-site laboratory analysis (Maslansky and Maslansky 1993). Compounds can be measured without first collecting them on separate media, then shipping, storing and analysing them. Concentration is read directly from a meter, display, strip chart recorder and data logger, or from a colour change. Direct-reading instruments are primarily used for gases and vapours; a few instruments are available for monitoring particulates. Instruments vary in cost, complexity, reliability, size, sensitivity and specificity. They include simple devices, such as colorimetric tubes, that use a colour change to indicate concentration; dedicated instruments that are specific for a chemical, such as carbon monoxide indicators, combustible gas indicators (explosimeters) and mercury vapour meters; and survey instruments, such as infrared spectrometers, that screen large groups of chemicals. Direct-reading instruments use a variety of physical and chemical methods to analyse gases and vapours, including conductivity, ionization, potentiometry, photometry, radioactive tracers and combustion.
Commonly used portable direct-reading instruments include battery-powered gas chromatographs, organic vapour analysers and infrared spectrometers. Gas chromatographs and organic vapour monitors are primarily used for environmental monitoring at hazardous waste sites and for community ambient air monitoring. Gas chromatographs with appropriate detectors are specific and sensitive, and can quantify chemicals at very low concentrations. Organic vapour analysers are usually used to measure classes of compounds. Portable infrared spectrometers are primarily used for occupational monitoring and leak detection because they are sensitive and specific for a wide range of compounds.
Small direct-reading personal monitors are available for a few common gases (chlorine, hydrogen cyanide, hydrogen sulphide, hydrazine, oxygen, phosgene, sulphur dioxide, nitrogen dioxide and carbon monoxide). They accumulate concentration measurements over the course of the day and can provide a direct readout of time-weighted-average concentration as well as provide a detailed contaminant profile for the day.
Colorimetric tubes (detector tubes) are simple to use, cheap and available for a wide variety of chemicals. They can be used to quickly identify classes of air contaminants and provide rough estimates of concentrations that can be used when determining pump flow rates and volumes. Colorimetric tubes are glass tubes filled with solid granular material which has been impregnated with a chemical agent that can react with a contaminant and create a colour change. After the two sealed ends of a tube are broken open, one end of the tube is placed in a hand pump. The recommended volume of contaminated air is sampled through the tube by using a specified number of pump strokes for a particular chemical. A colour change or stain is produced on the tube, usually within two minutes, and the length of the stain is proportional to concentration. Some colorimetric tubes have been adapted for long duration sampling, and are used with battery-powered pumps that can run for at least eight hours. The colour change produced represents a time-weighted-average concentration. Colorimetric tubes are good for both qualitative and quantitative analysis; however, their specificity and accuracy are limited. The accuracy of colorimetric tubes is not as high as that of laboratory methods or many other real-time instruments. There are hundreds of tubes, many of which have cross-sensitivities and can detect more than one chemical. This can result in interferences that modify the measured concentrations.
Direct-reading aerosol monitors cannot distinguish between contaminants, are usually used for counting or sizing particles, and are primarily used for screening, not to determine TWA or acute exposures. Real-time instruments use optical or electrical properties to determine total and respirable mass, particle count and particle size. Light-scattering aerosol monitors, or aerosol photometers, detect the light scattered by particles as they pass through a volume in the equipment. As the number of particles increases, the amount of scattered light increases and is proportional to mass. Light-scattering aerosol monitors cannot be used to distinguish between particle types; however, if they are used in a workplace where there are a limited number of dusts present, the mass can be attributed to a particular material. Fibrous aerosol monitors are used to measure the airborne concentration of particles such as asbestos. Fibres are aligned in an oscillating electric field and are illuminated with a helium neon laser; the resulting pulses of light are detected by a photomultiplier tube. Light-attenuating photometers measure the extinction of light by particles; the ratio of incident light to measured light is proportional to concentration.
Laboratory analysis

There are many methods available for analysing samples for contaminants in the laboratory. Some of the more commonly used techniques for quantifying gases and vapours in air include gas chromatography, mass spectrometry, atomic absorption, infrared and UV spectroscopy and polarography.
Gas chromatography is a technique used to separate and concentrate chemicals in mixtures for subsequent quantitative analysis. There are three main components to the system: the sample injection system, a column and a detector. A liquid or gaseous sample is injected, using a syringe, into a carrier gas stream that carries the sample through a column where the components are separated. The column is packed with materials that interact differently with different chemicals, slowing down the movement of the chemicals. This differential interaction causes each chemical to travel through the column at a different rate. After separation, the chemicals go directly into a detector, such as a flame ionization detector (FID), photo-ionization detector (PID) or electron capture detector (ECD); a signal proportional to concentration is registered on a chart recorder. The FID is used for almost all organics, including aromatics, straight-chain hydrocarbons, ketones and some chlorinated hydrocarbons. Concentration is measured by the increase in the number of ions produced as a volatile hydrocarbon is burned by a hydrogen flame. The PID is used for organics and some inorganics; it is especially useful for aromatic compounds such as benzene, and it can detect aliphatic, aromatic and halogenated hydrocarbons. Concentration is measured by the increase in the number of ions produced when the sample is bombarded by ultraviolet radiation. The ECD is primarily used for halogen-containing chemicals; it gives a minimal response to hydrocarbons, alcohols and ketones. Concentration is measured by the current flow between two electrodes caused by ionization of the gas by radioactivity.
The mass spectrometer is used to analyse complex mixtures of chemicals present in trace amounts. It is often coupled with a gas chromatograph for the separation and quantification of different contaminants.
Atomic absorption spectroscopy is primarily used for the quantification of metals such as mercury. Atomic absorption is the absorption of light of a particular wavelength by a free, ground-state atom; the quantity of light absorbed is related to concentration. The technique is highly specific, sensitive and fast, and is directly applicable to approximately 68 elements. Detection limits are in the sub-ppb to low-ppm range.
Infrared analysis is a powerful, sensitive, specific and versatile technique. It uses the absorption of infrared energy to measure many inorganic and organic chemicals; the amount of light absorbed is proportional to concentration. The absorption spectrum of a compound provides information enabling its identification and quantification.
UV absorption spectroscopy is used for analysis of aromatic hydrocarbons when interferences are known to be low. The amount of absorption of UV light is directly proportional to concentration.
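The direct proportionality between absorbance and concentration is the Beer-Lambert law; the sketch below uses illustrative values for the molar absorptivity and path length, not data from a real calibration.

```python
def concentration_mol_l(absorbance, molar_absorptivity, path_cm):
    """Beer-Lambert law: A = e * l * c, so c = A / (e * l).
    The molar absorptivity (l/(mol*cm)) and 1 cm path length used
    below are illustrative values."""
    return absorbance / (molar_absorptivity * path_cm)

c = concentration_mol_l(0.42, 14000.0, 1.0)  # mol/l
```

In practice a calibration curve of absorbance against known standards is used rather than a literature absorptivity, which absorbs instrument-specific effects into the slope.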
Polarographic methods are based upon the electrolysis of a sample solution using an easily polarized electrode and a nonpolarizable electrode. They are used for qualitative and quantitative analysis of aldehydes, chlorinated hydrocarbons and metals.
Neurotoxicity and reproductive toxicity are important areas for risk assessment, since the nervous and reproductive systems are highly sensitive to xenobiotic effects. Many agents have been identified as toxic to these systems in humans (Barlow and Sullivan 1982; OTA 1990). Many pesticides are deliberately designed to disrupt reproduction and neurological function in target organisms, such as insects, through interference with hormonal biochemistry and neurotransmission.
It is difficult to identify substances potentially toxic to these systems for three interrelated reasons: first, these are among the most complex biological systems in humans, and animal models of reproductive and neurological function are generally acknowledged to be inadequate for representing such critical events as cognition or early embryofoetal development; second, there are no simple tests for identifying potential reproductive or neurological toxicants; and third, these systems contain multiple cell types and organs, such that no single set of mechanisms of toxicity can be used to infer dose-response relationships or predict structure-activity relationships (SAR). Moreover, it is known that the sensitivity of both the nervous and reproductive systems varies with age, and that exposures at critical periods may have much more severe effects than at other times.
Neurotoxicity Risk Assessment
Neurotoxicity is an important public health problem. As shown in table 1, there have been several episodes of human neurotoxicity involving thousands of workers and other populations exposed through industrial releases, contaminated food, water and other vectors. Occupational exposures to neurotoxins such as lead, mercury, organophosphate insecticides and chlorinated solvents are widespread throughout the world (OTA 1990; Johnson 1978).
Table 1. Selected major neurotoxicity incidents
|400 BC||Rome||Lead||Hippocrates recognizes lead toxicity in the mining industry.|
|1930s||United States (Southeast)||TOCP||Compound often added to lubricating oils contaminates “Ginger Jake,” an alcoholic beverage; more than 5,000 paralyzed, 20,000 to 100,000 affected.|
|1930s||Europe||Apiol (with TOCP)||Abortion-inducing drug containing TOCP causes 60 cases of neuropathy.|
|1932||United States (California)||Thallium||Barley laced with thallium sulphate, used as rodenticide, is stolen and used to make tortillas; 13 family members hospitalized with neurological symptoms; 6 deaths.|
|1937||South Africa||TOCP||60 South Africans develop paralysis after using contaminated cooking oil.|
|1946||—||Tetraethyl lead||More than 25 individuals suffer neurological effects after cleaning gasoline tanks.|
|1950s||Japan (Minimata)||Mercury||Hundreds ingest fish and shellfish contaminated with mercury from chemical plant; 121 poisoned, 46 deaths, many infants with serious nervous system damage.|
|1950s||France||Organotin||Contamination of Stallinon with triethyltin results in more than 100 deaths.|
|1950s||Morocco||Manganese||150 ore miners suffer chronic manganese intoxication involving severe neurobehavioural problems.|
|1950s-1970s||United States||AETT||Component of fragrances found to be neurotoxic; withdrawn from market in 1978; human health effects unknown.|
|1956||—||Endrin||49 persons become ill after eating bakery foods prepared from flour contaminated with the insecticide endrin; convulsions result in some instances.|
|1956||Turkey||HCB||Hexachlorobenzene, a seed grain fungicide, leads to poisoning of 3,000 to 4,000; 10 per cent mortality rate.|
|1956-1977||Japan||Clioquinol||Drug used to treat travellers’ diarrhoea found to cause neuropathy; as many as 10,000 affected over two decades.|
|1959||Morocco||TOCP||Cooking oil contaminated with lubricating oil affects some 10,000 individuals.|
|1960||Iraq||Mercury||Mercury used as fungicide to treat seed grain used in bread; more than 1,000 people affected.|
|1964||Japan||Mercury||Methylmercury affects 646 people.|
|1968||Japan||PCBs||Polychlorinated biphenyls leaked into rice oil; 1,665 people affected.|
|1969||Japan||n-Hexane||93 cases of neuropathy occur following exposure to n-hexane, used to make vinyl sandals.|
|1971||United States||Hexachlorophene||After years of bathing infants in 3 per cent hexachlorophene, the disinfectant is found to be toxic to the nervous system and other systems.|
|1971||Iraq||Mercury||Mercury used as fungicide to treat seed grain is used in bread; more than 5,000 severe poisonings, 450 hospital deaths, effects on many infants exposedprenatally not documented.|
|1973||United States (Ohio)||MIBK||Fabric production plant employees exposed to solvent; more than 80 workers suffer neuropathy, 180 have less severe effects.|
|1974-1975||United States (Hopewell, VA)||Chlordecone (Kepone)||Chemical plant employees exposed to insecticide; more than 20 suffer severe neurologicalproblems, more than 40 have less severe problems.|
|1976||United States (Texas)||Leptophos (Phosvel)||At least 9 employees suffer severe neurological problems following exposure to insecticide during manufacturing process.|
|1977||United States (California)||Dichloropropene (Telone II)||24 individuals hospitalized after exposure to pesticide Telone following traffic accident.|
|1979-1980||United States (Lancaster, TX)||BHMH (Lucel-7)||Seven employees at plastic bathtub manufacturing plant experience serious neurological problems following exposure to BHMH.|
|1980s||United States||MPTP||Impurity in synthesis of illicit drug found to cause symptoms identical to those of Parkinson’s disease.|
|1981||Spain||Contaminated toxic oil||20,000 persons poisoned by toxic substance in oil, resulting in more than 500 deaths; many suffer severe neuropathy.|
|1985||United States and Canada||Aldicarb||More than 1,000 individuals in California and other Western States and British Columbia experience neuromuscular and cardiac problems following ingestion of melons contaminated with the pesticide aldicarb.|
|1987||Canada||Domoic acid||Ingestion of mussels contaminated with domoic acid causes 129 illnesses and 2 deaths; symptoms include memory loss, disorientation and seizures.|
Source: OTA 1990.
Chemicals may affect the nervous system through actions at any of several cellular targets or biochemical processes within the central or peripheral nervous system. Toxic effects on other organs may also affect the nervous system, as in the example of hepatic encephalopathy. The manifestations of neurotoxicity include effects on learning (including memory, cognition and intellectual performance), somatosensory processes (including sensation and proprioception), motor function (including balance, gait and fine movement control), affect (including personality status and emotionality) and autonomic function (nervous control of endocrine function and internal organ systems). The toxic effects of chemicals upon the nervous system often vary in sensitivity and expression with age: during development, the central nervous system may be especially susceptible to toxic insult because of the extended process of cellular differentiation, migration and cell-to-cell contact that takes place in humans (OTA 1990). Moreover, cytotoxic damage to the nervous system may be irreversible because neurons are not replaced after embryogenesis.
While the central nervous system (CNS) is somewhat protected from contact with absorbed compounds through a system of tightly joined cells (the blood-brain barrier, composed of capillary endothelial cells that line the vasculature of the brain), toxic chemicals can gain access to the CNS by three mechanisms: solvents and lipophilic compounds can pass through cell membranes; some compounds can attach to endogenous transporter proteins that serve to supply nutrients and biomolecules to the CNS; and small proteins, if inhaled, can be taken up directly by the olfactory nerve and transported to the brain.
US regulatory authorities
Statutory authority for regulating substances for neurotoxicity is assigned to four agencies in the United States: the Food and Drug Administration (FDA), the Environmental Protection Agency (EPA), the Occupational Safety and Health Administration (OSHA), and the Consumer Product Safety Commission (CPSC). While OSHA generally regulates occupational exposures to neurotoxic (and other) chemicals, the EPA has authority to regulate occupational and nonoccupational exposures to pesticides under the Federal Insecticide, Fungicide and Rodenticide Act (FIFRA). EPA also regulates new chemicals prior to manufacture and marketing, which obligates the agency to consider both occupational and nonoccupational risks.
Agents that adversely affect the physiology, biochemistry, or structural integrity of the nervous system or nervous system function expressed behaviourally are defined as neurotoxic hazards (EPA 1993). The determination of inherent neurotoxicity is a difficult process, owing to the complexity of the nervous system and the multiple expressions of neurotoxicity. Some effects may be delayed in appearance, such as the delayed neurotoxicity of certain organophosphate insecticides. Caution and judgement are required in determining neurotoxic hazard, including consideration of the conditions of exposure, dose, duration and timing.
Hazard identification is usually based upon toxicological studies of intact organisms, in which behavioural, cognitive, motor and somatosensory function is assessed with a range of investigative tools including biochemistry, electrophysiology and morphology (Tilson and Cabe 1978; Spencer and Schaumberg 1980). The importance of careful observation of whole organism behaviour cannot be overemphasized. Hazard identification also requires evaluation of toxicity at different developmental stages, including early life (intrauterine and early neonatal) and senescence. In humans, the identification of neurotoxicity involves clinical evaluation using methods of neurological assessment of motor function, speech fluency, reflexes, sensory function, electrophysiology, neuropsychological testing, and in some cases advanced techniques of brain imaging and quantitative electroencephalography. WHO has developed and validated a neurobehavioural core test battery (NCTB), which contains probes of motor function, hand-eye coordination, reaction time, immediate memory, attention and mood. This battery has been validated internationally by a coordinated process (Johnson 1978).
Hazard identification using animals also depends upon careful observational methods. The US EPA has developed a functional observational battery as a first-tier test designed to detect and quantify major overt neurotoxic effects (Moser 1990). This approach is also incorporated in the OECD subchronic and chronic toxicity testing methods. A typical battery includes the following measures: posture; gait; mobility; general arousal and reactivity; presence or absence of tremor, convulsions, lacrimation, piloerection, salivation, excess urination or defecation, stereotypy, circling, or other bizarre behaviours. Elicited behaviours include response to handling, tail pinch, or clicks; balance, righting reflex, and hind limb grip strength. Some representative tests and agents identified with these tests are shown in table 2.
Table 2. Examples of specialized tests to measure neurotoxicity
|Function||Procedure||Representative agents|
|Weakness||Grip strength; swimming endurance; suspension from rod; discriminative motor function; hind limb splay||n-Hexane, Methyl butyl ketone, Carbaryl|
|Incoordination||Rotorod, gait measurements||3-Acetylpyridine, Ethanol|
|Tremor||Rating scale, spectral analysis||Chlordecone, Type I Pyrethroids, DDT|
|Myoclonia, spasms||Rating scale, spectral analysis||DDT, Type II Pyrethroids|
|Auditory||Discriminant conditioning, reflex modification||Toluene, Trimethyltin|
|Visual toxicity||Discriminant conditioning||Methyl mercury|
|Somatosensory toxicity||Discriminant conditioning||Acrylamide|
|Pain sensitivity||Discriminant conditioning (titration); functional observational battery||Parathion|
|Olfactory toxicity||Discriminant conditioning||3-Methylindole methylbromide|
|Habituation||Startle reflex||Diisopropylfluorophosphate (DFP)|
|Classical conditioning||Nictitating membrane, conditioned flavour aversion, passive avoidance, olfactory conditioning||Aluminium, Carbaryl, Trimethyltin, IDPN, Trimethyltin (neonatal)|
|Operant or instrumental conditioning||One-way avoidance, Two-way avoidance, Y-maze avoidance, Biel water maze, Morris water maze, Radial arm maze, Delayed matching to sample, Repeated acquisition, Visual discrimination learning||Chlordecone, Lead (neonatal), Hypervitaminosis A, Styrene, DFP, Trimethyltin, DFP, Carbaryl, Lead|
Source: EPA 1993.
These tests may be followed by more complex assessments usually reserved for mechanistic studies rather than hazard identification. In vitro methods for neurotoxicity hazard identification are limited since they do not provide indications of effects on complex function, such as learning, but they may be very useful in defining target sites of toxicity and improving the precision of target site dose-response studies (see WHO 1986 and EPA 1993 for comprehensive discussions of principles and methods for identifying potential neurotoxicants).
The relationship between toxicity and dose may be based upon human data when available or upon animal tests, as described above. In the United States, an uncertainty or safety factor approach is generally used for neurotoxicants. This process involves determining a “no observed adverse effect level” (NOAEL) or “lowest observed adverse effect level” (LOAEL) and then dividing this number by uncertainty or safety factors (usually multiples of 10) to allow for such considerations as incompleteness of data, potentially higher sensitivity of humans and variability of human response due to age or other host factors. The resultant number is termed the reference dose (RfD) or reference concentration (RfC). The effect occurring at the lowest dose in the most sensitive animal species and gender is generally used to determine the LOAEL or NOAEL. Conversion of animal dose to human exposure is done by standard methods of cross-species dosimetry, taking into account differences in lifespan and exposure duration.
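The arithmetic of the uncertainty-factor approach is simple enough to sketch. The function below and the NOAEL and factor values it uses are illustrative assumptions for exposition, not regulatory values.

```python
# Sketch of the reference dose (RfD) calculation described in the text:
# divide the NOAEL (or LOAEL) by the product of the uncertainty factors.
# All numerical values here are hypothetical.

def reference_dose(noael_mg_kg_day, uncertainty_factors):
    """RfD = NOAEL / (product of uncertainty or safety factors)."""
    product = 1
    for factor in uncertainty_factors:
        product *= factor
    return noael_mg_kg_day / product

# Example: NOAEL of 10 mg/kg/day, with a factor of 10 each for interspecies
# extrapolation, human variability and incompleteness of the database.
rfd = reference_dose(10.0, [10, 10, 10])
print(f"RfD = {rfd} mg/kg/day")  # RfD = 0.01 mg/kg/day
```

The same pattern applies to reference concentrations (RfCs), with the NOAEL expressed as an air concentration rather than a dose.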
The use of the uncertainty factor approach assumes that there is a threshold, or dose below which no adverse effect is induced. Thresholds for specific neurotoxicants may be difficult to determine experimentally; they are based upon assumptions as to mechanism of action which may or may not hold for all neurotoxicants (Silbergeld 1990).
At this stage, information is evaluated on sources, routes, doses and durations of exposure to the neurotoxicant for human populations, subpopulations or even individuals. This information may be derived from monitoring of environmental media or human sampling, or from estimates based upon standard scenarios (such as workplace conditions and job descriptions) or models of environmental fate and dispersion (see EPA 1992 for general guidelines on exposure assessment methods). In some limited cases, biological markers may be used to validate exposure inferences and estimates; however, there are relatively few usable biomarkers of neurotoxicants.
The combination of hazard identification, dose-response and exposure assessment is used to develop the risk characterization. This process involves assumptions as to the extrapolation of high to low doses, extrapolation from animals to humans, and the appropriateness of threshold assumptions and use of uncertainty factors.
Reproductive Toxicology—Risk Assessment Methods
Reproductive hazards may affect multiple functional endpoints and cellular targets within humans, with consequences for the health of the affected individual and future generations. Reproductive hazards may affect the development of the reproductive system in males or females, reproductive behaviours, hormonal function, the hypothalamus and pituitary, gonads and germ cells, fertility, pregnancy and the duration of reproductive function (OTA 1985). In addition, mutagenic chemicals may also affect reproductive function by damaging the integrity of germ cells (Dixon 1985).
The nature and extent of adverse effects of chemical exposures upon reproductive function in human populations is largely unknown. Relatively little surveillance information is available on such endpoints as fertility of men or women, age of menopause in women, or sperm counts in men. However, both men and women are employed in industries where exposures to reproductive hazards may occur (OTA 1985).
This section does not recapitulate those elements common to both neurotoxicant and reproductive toxicant risk assessment, but focuses upon issues specific to reproductive toxicant risk assessment. As with neurotoxicants, authority to regulate chemicals for reproductive toxicity is placed by statute in the EPA, OSHA, the FDA and the CPSC. Of these agencies, only the EPA has a stated set of guidelines for reproductive toxicity risk assessment. In addition, the state of California has developed methods for reproductive toxicity risk assessment in response to a state law, Proposition 65 (Pease et al. 1991).
Reproductive toxicants, like neurotoxicants, may act by affecting any of a number of target organs or molecular sites of action. Their assessment has additional complexity because of the need to evaluate three distinct organisms separately and together—the male, the female and the offspring (Mattison and Thomford 1989). While an important endpoint of reproductive function is the generation of a healthy child, reproductive biology also plays a role in the health of developing and mature organisms regardless of their involvement in procreation. For instance, loss of ovulatory function through natural depletion or surgical removal of oocytes has substantial effects upon the health of women, involving changes in blood pressure, lipid metabolism and bone physiology. Changes in hormone biochemistry may affect susceptibility to cancer.
The identification of a reproductive hazard may be made on the basis of human or animal data. In general, data from humans are relatively sparse, owing to the need for careful surveillance to detect alterations in reproductive function, such as sperm count or quality, ovulatory frequency and cycle length, or age at puberty. Detecting reproductive hazards through collection of information on fertility rates or data on pregnancy outcome may be confounded by the intentional suppression of fertility exercised by many couples through family-planning measures. Careful monitoring of selected populations indicates that rates of reproductive failure (miscarriage) may be very high, when biomarkers of early pregnancy are assessed (Sweeney et al. 1988).
Testing protocols using experimental animals are widely used to identify reproductive toxicants. In most of these designs, as developed in the United States by the FDA and the EPA and internationally by the OECD test guidelines program, the effects of suspect agents are detected in terms of fertility after male and/or female exposure; observation of sexual behaviours related to mating; and histopathological examination of gonads and accessory sex glands, such as mammary glands (EPA 1994). Often reproductive toxicity studies involve continuous dosing of animals for one or more generations in order to detect effects on the integrated reproductive process as well as to study effects on specific organs of reproduction. Multigenerational studies are recommended because they permit detection of effects that may be induced by exposure during the development of the reproductive system in utero. A special test protocol, the Reproductive Assessment by Continuous Breeding (RACB), has been developed in the United States by the National Toxicology Program. This test provides data on changes in the temporal spacing of pregnancies (reflecting ovulatory function), as well as number and size of litters over the entire test period. When extended to the lifetime of the female, it can yield information on early reproductive failure. Sperm measures can be added to the RACB to detect changes in male reproductive function. A special test to detect pre- or postimplantation loss is the dominant lethal test, designed to detect mutagenic effects in male spermatogenesis.
In vitro tests have also been developed as screens for reproductive (and developmental) toxicity (Heindel and Chapin 1993). These tests are generally used to supplement in vivo test results by providing more information on target site and mechanism of observed effects.
Table 3 shows the three types of endpoints in reproductive toxicity assessment—couple-mediated, female-specific and male-specific. Couple-mediated endpoints include those detectable in multigenerational and single-organism studies. They generally include assessment of offspring as well. It should be noted that fertility measurement in rodents is generally insensitive, as compared to such measurement in humans, and that adverse effects on reproductive function may well occur at lower doses than those that significantly affect fertility (EPA 1994). Male-specific endpoints can include dominant lethality tests as well as histopathological evaluation of organs and sperm, measurement of hormones, and markers of sexual development. Sperm function can also be assessed by in vitro fertilization methods to detect germ cell properties of penetration and capacitation; these tests are valuable because they are directly comparable to in vitro assessments conducted in human fertility clinics, but they do not by themselves provide dose-response information. Female-specific endpoints include, in addition to organ histopathology and hormone measurements, assessment of the sequelae of reproduction, including lactation and offspring growth.
Table 3. Endpoints in reproductive toxicology
|Endpoint class||Representative measures|
|Couple-mediated (multigenerational studies and other reproductive endpoints)||Mating rate, time to mating (time to pregnancy1); litter size (total and live); number of live and dead offspring (foetal death rate1); external malformations and variations1; internal malformations and variations1; postnatal structural and functional development1|
|Male-specific||Organ weights and visual examination and histopathology: testes, epididymides, seminal vesicles, prostate, pituitary; sperm number (count) and quality (morphology, motility); hormone levels: luteinizing hormone, follicle stimulating hormone, testosterone, oestrogen, prolactin; developmental markers: testis descent1, preputial separation, sperm production1, ano-genital distance, normality of external genitalia1|
|Female-specific||Organ weights: ovary, uterus, vagina, pituitary; visual examination and histopathology: ovary, uterus, vagina, pituitary, oviduct, mammary gland; oestrous (menstrual1) cycle normality (vaginal smear cytology); hormone levels: LH, FSH, oestrogen, progesterone, prolactin; developmental markers: normality of external genitalia1, vaginal opening, onset of oestrus behaviour (menstruation1); senescence: vaginal smear cytology, ovarian histology|
1 Endpoints that can be obtained relatively noninvasively with humans.
Source: EPA 1994.
In the United States, the hazard identification concludes with a qualitative evaluation of toxicity data by which chemicals are judged to have either sufficient or insufficient evidence of hazard (EPA 1994). “Sufficient” evidence includes epidemiological data providing convincing evidence of a causal relationship (or lack thereof), based upon case-control or cohort studies, or well-supported case series. Sufficient animal data may be coupled with limited human data to support a finding of a reproductive hazard: to be sufficient, the experimental studies are generally required to utilize EPA’s two-generation test guidelines, and must include a minimum of data demonstrating an adverse reproductive effect in an appropriate, well-conducted study in one test species. Limited human data may or may not be available; such data are not necessary for the purposes of hazard identification. To rule out a potential reproductive hazard, the animal data must include an adequate array of endpoints from more than one study showing no adverse reproductive effect at doses minimally toxic to the animal (EPA 1994).
As with the evaluation of neurotoxicants, the demonstration of dose-related effects is an important part of risk assessment for reproductive toxicants. Two particular difficulties in dose-response analyses arise due to complicated toxicokinetics during pregnancy, and the importance of distinguishing specific reproductive toxicity from general toxicity to the organism. Debilitated animals, or animals with substantial nonspecific toxicity (such as weight loss), may fail to ovulate or mate. Maternal toxicity can affect the viability of pregnancy or support for lactation. These effects, while evidence of toxicity, are not specific to reproduction (Kimmel et al. 1986). Assessing dose response for a specific endpoint, such as fertility, must be done in the context of an overall assessment of reproduction and development. Dose-response relationships for different effects may differ significantly, and these differences can interfere with detection. For instance, agents that reduce litter size may result in no effects upon litter weight because of reduced competition for intrauterine nutrition.
An important component of exposure assessment for reproductive risk assessment relates to information on the timing and duration of exposures. Cumulative exposure measures may be insufficiently precise, depending upon the biological process that is affected. It is known that exposures at different developmental stages in males and females can result in different outcomes in both humans and experimental animals (Gray et al. 1988). The temporal nature of spermatogenesis and ovulation also affects outcome. Effects on spermatogenesis may be reversible if exposures cease; however, oocyte toxicity is not reversible since females have a fixed set of germ cells to draw upon for ovulation (Mattison and Thomford 1989).
As with neurotoxicants, the existence of a threshold is usually assumed for reproductive toxicants. However, the actions of mutagenic compounds on germ cells may be considered an exception to this general assumption. For other endpoints, an RfD or RfC is calculated as with neurotoxicants by determination of the NOAEL or LOAEL and application of appropriate uncertainty factors. The effect used for determining the NOAEL or LOAEL is the most sensitive adverse reproductive endpoint from the most appropriate or most sensitive mammalian species (EPA 1994). Uncertainty factors include consideration of interspecies and intraspecies variation, ability to define a true NOAEL, and sensitivity of the endpoint detected.
Risk characterizations should also be focused upon specific subpopulations at risk, possibly specifying males and females, pregnancy status, and age. Especially sensitive individuals, such as lactating women, women with reduced oocyte numbers or men with reduced sperm counts, and prepubertal adolescents may also be considered.
After a hazard has been recognized and evaluated, the most appropriate interventions (methods of control) for a particular hazard must be determined. Control methods usually fall into three categories: engineering controls, administrative controls and personal protective equipment.
As with any change in work processes, training must be provided to ensure the success of the changes.
Engineering controls are changes to the process or equipment that reduce or eliminate exposures to an agent. Substituting a less toxic chemical in a process and installing exhaust ventilation to remove vapours generated during a process step are examples of engineering controls. In the case of noise control, installing sound-absorbing materials, building enclosures and installing mufflers on air exhaust outlets are examples of engineering controls. Another type of engineering control is changing the process itself, for example, eliminating one or more degreasing steps in a process that originally required three. By removing the need for the task that produced the exposure, the overall exposure for the worker has been controlled. The advantage of engineering controls is the relatively small involvement of the worker, who can go about the job in a more controlled environment when, for instance, contaminants are automatically removed from the air. Contrast this with the situation where the selected method of control is a respirator to be worn by the worker while performing the task in an “uncontrolled” workplace. In addition to the employer actively installing engineering controls on existing equipment, new equipment containing the controls, or other more effective controls, can be purchased. A combination approach has often been effective (for example, installing some engineering controls now and requiring personal protective equipment until new equipment with more effective controls arrives and eliminates the need for personal protective equipment). Common examples of engineering controls include local exhaust ventilation, isolation or enclosure of a process, substitution of less hazardous materials and redesign of the process itself.
The occupational hygienist must be sensitive to the worker’s job tasks and must solicit worker participation when designing or selecting engineering controls. Placing barriers in the workplace, for example, could significantly impair a worker’s ability to perform the job and may encourage “workarounds”. Engineering controls are the most effective methods of reducing exposures; they are also often the most expensive. Because engineering controls are both effective and expensive, it is important to maximize the involvement of the workers in their selection and design. This should result in a greater likelihood that the controls will actually reduce exposures.
Administrative controls involve changes in how a worker accomplishes the necessary job tasks, for example, limiting the time spent in an area where exposures occur, or changing work practices such as improving body positioning to reduce exposures. Administrative controls can add to the effectiveness of an intervention but have several drawbacks: the hazard itself is not removed, their effectiveness depends on continued compliance and supervision, and rotating workers through an exposed task spreads the dose over more people rather than eliminating it.
Personal protective equipment consists of devices provided to the worker and required to be worn while performing certain (or all) job tasks. Examples include respirators, chemical goggles, protective gloves and faceshields. Personal protective equipment is commonly used in cases where engineering controls have not been effective in controlling the exposure to acceptable levels or where engineering controls have not been found to be feasible (for cost or operational reasons). Personal protective equipment can provide significant protection to workers if worn and used correctly. In the case of respiratory protection, protection factors (ratio of concentration outside the respirator to that inside) can be 1,000 or more for positive-pressure supplied air respirators or ten for half-face air-purifying respirators. Gloves (if selected appropriately) can protect hands for hours from solvents. Goggles can provide effective protection from chemical splashes.
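The protection-factor relationship mentioned above can be made concrete with a short calculation. The ambient concentrations used here are hypothetical; actual assigned protection factors come from regulatory tables, not from this sketch.

```python
# Sketch of the respiratory protection factor relationship from the text:
# protection factor = concentration outside the respirator / concentration inside.
# All concentration values below are hypothetical examples.

def concentration_inside(outside_ppm: float, protection_factor: float) -> float:
    """Estimate the in-facepiece concentration implied by a protection factor."""
    return outside_ppm / protection_factor

# Half-face air-purifying respirator (protection factor of about 10)
# in a hypothetical 50 ppm atmosphere:
half_face = concentration_inside(50.0, 10)       # 5.0 ppm
# Positive-pressure supplied-air respirator (protection factor of 1,000 or more):
supplied_air = concentration_inside(50.0, 1000)  # 0.05 ppm
print(half_face, supplied_air)
```

The hundred-fold difference between the two estimates illustrates why respirator selection must be matched to the severity of the exposure.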
Intervention: Factors to Consider
Often a combination of controls is used to reduce the exposures to acceptable levels. Whatever methods are selected, the intervention must reduce the exposure and resulting hazard to an acceptable level. There are, however, many other factors that need to be considered when selecting an intervention. For example:
Effectiveness of controls
Effectiveness of controls is obviously a prime consideration when taking action to reduce exposures. When comparing one type of intervention to another, the level of protection required must be appropriate for the challenge; too much control is a waste of resources. Those resources could be used to reduce other exposures or exposures of other employees. On the other hand, too little control leaves the worker exposed to unhealthy conditions. A useful first step is to rank the interventions according to their effectiveness, then use this ranking to evaluate the significance of the other factors.
Ease of use
For any control to be effective the worker must be able to perform his or her job tasks with the control in place. For example, if the control method selected is substitution, then the worker must know the hazards of the new chemical, be trained in safe handling procedures, understand proper disposal procedures, and so on. If the control is isolation—placing an enclosure around the substance or the worker—the enclosure must allow the worker to do his or her job. If the control measures interfere with the tasks of the job, the worker will be reluctant to use them and may find ways to accomplish the tasks that could result in increased, not decreased, exposures.
Cost of controls
Every organization has limits on resources. The challenge is to maximize the use of those resources. When hazardous exposures are identified and an intervention strategy is being developed, cost must be a factor. The “best buy” will often be neither the lowest- nor the highest-cost solution. Cost becomes a factor only after several viable methods of control have been identified; it can then be used to select the controls that will work best in the particular situation. If cost is the determining factor at the outset, poor or ineffective controls may be selected, or controls that interfere with the process in which the employee is working. It would be unwise to select an inexpensive set of controls that interferes with and slows down a manufacturing process: the process would then have lower throughput and higher cost, and in a very short time the “real” costs of these “low-cost” controls would become enormous. Industrial engineers understand the layout and overall process; production engineers understand the manufacturing steps and processes; financial analysts understand the resource allocation problems. Occupational hygienists can provide a unique insight into these discussions through their understanding of the specific employee’s job tasks, the employee’s interaction with the manufacturing equipment and how the controls will work in a particular setting. This team approach increases the likelihood of selecting the control that is most appropriate from a variety of perspectives.
Adequacy of warning properties
When protecting a worker against an occupational health hazard, the warning properties of the material, such as odour or irritation, must be considered. For example, if a semiconductor worker is working in an area where arsine gas is used, the extreme toxicity of the gas poses a significant potential hazard. The situation is compounded by arsine’s very poor warning properties: workers cannot detect the gas by sight or smell until it is well above acceptable levels. Controls that are only marginally effective at keeping exposures below acceptable levels should therefore not be considered, because excursions above acceptable levels cannot be detected by the workers. Instead, engineering controls should be installed to isolate the worker from the material, and a continuous arsine gas monitor should be installed to warn workers of any failure of the engineering controls. In situations involving high toxicity and poor warning properties, preventive occupational hygiene is practised. The occupational hygienist must be flexible and thoughtful when approaching an exposure problem.
Acceptable level of exposure
If controls are being considered to protect a worker from a substance such as acetone, where the acceptable level of exposure may be in the range of 800 ppm, controlling to a level of 400 ppm or less may be achieved relatively easily. Contrast the example of acetone control to control of 2-ethoxyethanol, where the acceptable level of exposure may be in the range of 0.5 ppm. To obtain the same per cent reduction (0.5 ppm to 0.25 ppm) would probably require different controls. In fact, at these low levels of exposure, isolation of the material may become the primary means of control. At high levels of exposure, ventilation may provide the necessary reduction. Therefore, the acceptable level determined (by the government, company, etc.) for a substance can limit the selection of controls.
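The contrast between the two solvents can be expressed as a required reduction factor. The measured workplace concentrations below are hypothetical; only the acceptable levels are those cited in the text.

```python
# How many-fold an exposure must be reduced to reach an acceptable level.
# Measured concentrations are hypothetical; acceptable levels are from the text.

def required_reduction(measured_ppm: float, acceptable_ppm: float) -> float:
    """Fold-reduction needed to bring a measured exposure to the acceptable level."""
    return measured_ppm / acceptable_ppm

# Acetone: a hypothetical workplace at 1,600 ppm against an 800 ppm level.
print(required_reduction(1600, 800))  # 2.0-fold; ventilation may well suffice
# 2-Ethoxyethanol: a hypothetical 5 ppm against a 0.5 ppm level.
print(required_reduction(5, 0.5))     # 10.0-fold; isolation may become necessary
```

The point of the comparison is that the absolute target, not just the fold-reduction, constrains the choice of control: reaching fractions of a part per million generally demands different measures than reaching hundreds of parts per million.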
Frequency of exposure
When assessing toxicity the classic model uses the following relationship:
TIME x CONCENTRATION = DOSE
Dose, in this case, is the amount of material being made available for absorption. The previous discussion focused on minimizing (lowering) the concentration portion of this relationship. One might also reduce the time spent being exposed (the underlying reason for administrative controls). This would similarly reduce the dose. The issue here is not the employee spending time in a room, but how often an operation (task) is performed. The distinction is important. In the first example, the exposure is controlled by removing the workers when they are exposed to a selected amount of toxicant; the intervention effort is not directed at controlling the amount of toxicant (in many situations there may be a combination approach). In the second case, the frequency of the operation is being used to provide the appropriate controls, not to determine a work schedule. For example, if an operation such as degreasing is performed routinely by an employee, the controls may include ventilation, substitution of a less toxic solvent or even automation of the process. If the operation is performed rarely (e.g., once per quarter) personal protective equipment may be an option (depending on many of the factors described in this section). As these two examples illustrate, the frequency with which an operation is performed can directly affect the selection of controls. Whatever the exposure situation, the frequency with which a worker performs the tasks must be considered and factored into the control selection.
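The time-versus-concentration trade-off described above can be sketched numerically. The shift length and concentrations used here are illustrative assumptions, not values from the text.

```python
# Sketch of the classic TIME x CONCENTRATION = DOSE relationship,
# using illustrative numbers.

def dose_index(concentration_ppm: float, hours: float) -> float:
    """Relative dose index: airborne concentration multiplied by exposure time."""
    return concentration_ppm * hours

baseline = dose_index(100, 8)      # full shift at 100 ppm -> 800 ppm-hours
shorter_time = dose_index(100, 4)  # administrative control: half the time exposed
lower_conc = dose_index(50, 8)     # engineering control: half the concentration
print(baseline, shorter_time, lower_conc)
```

Both controls halve the dose index, but, as the text notes, only the engineering control changes the amount of toxicant in the workplace; the administrative control relies on limiting the worker's time there.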
Route of exposure

The route of exposure will obviously affect the method of control. If a respiratory irritant is present, ventilation, respirators and so on would be considered. The challenge for the occupational hygienist is identifying all routes of exposure. For example, glycol ethers are used as carrier solvents in printing operations. Breathing-zone air concentrations can be measured and controls implemented. Glycol ethers, however, are rapidly absorbed through intact skin. The skin therefore represents a significant route of exposure and must be considered. In fact, if the wrong gloves are chosen, the skin exposure may continue long after the air exposures have decreased (because the employee continues to use gloves that have experienced breakthrough). The hygienist must evaluate the substance (its physical, chemical and toxicological properties, and so on) to determine which routes of exposure are possible and plausible, based on the tasks performed by the employee.
Regulatory requirements

In any discussion of controls, the regulatory requirements must be considered. There may well be codes of practice, regulations and so on that mandate a specific set of controls. The occupational hygienist has flexibility above and beyond the regulatory requirements, but the minimum mandated controls must be installed. Another aspect of the regulatory requirements is that the mandated controls may be less effective than, or may conflict with, the best judgement of the occupational hygienist. The hygienist must be creative in these situations and find solutions that satisfy the regulatory goals as well as the best-practice goals of the organization.
Training and Labelling
Regardless of what form of intervention is eventually selected, training and other forms of notification must be provided to ensure that the workers understand the interventions, why they were selected, what reductions in exposure are expected, and the role of the workers in achieving those reductions. Without the participation and understanding of the workforce, the interventions will likely fail or at least operate at reduced efficiency. Training builds hazard awareness in the workforce. This new awareness can be invaluable to the occupational hygienist in identifying and reducing previously unrecognized exposures or new exposures.
Training, labelling and related activities may be part of a regulatory compliance scheme. It would be prudent to check the local regulations to ensure that whatever type of training or labelling is undertaken satisfies the regulatory as well as operational requirements.
In this short discussion of interventions, some general considerations have been presented to stimulate thought. In practice, these decisions become very complex and often have significant ramifications for employee and company health. The occupational hygienist’s professional judgement is essential in selecting the best controls, and “best” is a term with many different meanings. The occupational hygienist must become adept at working in teams and at soliciting input from the workers, management and technical staff.