28. Epidemiology and Statistics

Chapter Editors:  Franco Merletti, Colin L. Soskolne and Paolo Vineis


Table of Contents

Tables and Figures

Epidemiological Method Applied to Occupational Health and Safety
Franco Merletti, Colin L. Soskolne and Paolo Vineis

Exposure Assessment
M. Gerald Ott

Summary Worklife Exposure Measures
Colin L. Soskolne

Measuring Effects of Exposures
Shelia Hoar Zahm

     Case Study: Measures
     Franco Merletti, Colin L. Soskolne and Paolo Vineis

Options in Study Design
Sven Hernberg

Validity Issues in Study Design
Annie J. Sasco

Impact of Random Measurement Error
Paolo Vineis and Colin L. Soskolne

Statistical Methods
Annibale Biggeri and Mario Braga

Causality Assessment and Ethics in Epidemiological Research
Paolo Vineis

Case Studies Illustrating Methodological Issues in the Surveillance of Occupational Diseases
Jung-Der Wang

Questionnaires in Epidemiological Research
Steven D. Stellman and Colin L. Soskolne

Asbestos Historical Perspective
Lawrence Garfinkel

Tables


1. Five selected summary measures of worklife exposure

2. Measures of disease occurrence

3. Measures of association for a cohort study

4. Measures of association for case-control studies

5. General frequency table layout for cohort data

6. Sample layout of case-control data

7. Layout case-control data - one control per case

8. Hypothetical cohort of 1950 individuals to T2

9. Indices of central tendency & dispersion

10. A binomial experiment & probabilities

11. Possible outcomes of a binomial experiment

12. Binomial distribution, 15 successes/30 trials

13. Binomial distribution, p = 0.25; 30 trials

14. Type II error & power; x = 12, n = 30, α = 0.05

15. Type II error & power; x = 12, n = 40, α = 0.05

16. 632 workers exposed to asbestos 20 years or longer

17. O/E number of deaths among 632 asbestos workers


29. Ergonomics

Chapter Editors:  Wolfgang Laurig and Joachim Vedder

 


 

Table of Contents 

Tables and Figures

Overview
Wolfgang Laurig and Joachim Vedder

Goals, Principles and Methods

The Nature and Aims of Ergonomics
William T. Singleton

Analysis of Activities, Tasks and Work Systems
Véronique De Keyser

Ergonomics and Standardization
Friedhelm Nachreiner

Checklists
Pranab Kumar Nag

Physical and Physiological Aspects

Anthropometry
Melchiorre Masali

Muscular Work
Juhani Smolander and Veikko Louhevaara

Postures at Work
Ilkka Kuorinka

Biomechanics
Frank Darby

General Fatigue
Étienne Grandjean

Fatigue and Recovery
Rolf Helbig and Walter Rohmert

Psychological Aspects

Mental Workload
Winfried Hacker

Vigilance
Herbert Heuer

Mental Fatigue
Peter Richter

Organizational Aspects of Work

Work Organization
Eberhard Ulich and Gudela Grote

Sleep Deprivation
Kazutaka Kogi

Work Systems Design

Workstations
Roland Kadefors

Tools
T.M. Fraser

Controls, Indicators and Panels
Karl H. E. Kroemer

Information Processing and Design
Andries F. Sanders

Designing for Everyone

Designing for Specific Groups
Joke H. Grady-van den Nieuwboer

     Case Study: The International Classification of Functional Limitation in People

Cultural Differences
Houshang Shahnavaz

Elderly Workers
Antoine Laville and Serge Volkoff

Workers with Special Needs
Joke H. Grady-van den Nieuwboer

Diversity and Importance of Ergonomics: Two Examples

System Design in Diamond Manufacturing
Issachar Gilad

Disregarding Ergonomic Design Principles: Chernobyl
Vladimir M. Munipov 

Tables


1. Basic anthropometric core list

2. Fatigue & recovery dependent on activity levels

3. Rules of combination effects of two stress factors on strain

4. Differentiating among several negative consequences of mental strain

5. Work-oriented principles for production structuring

6. Participation in organizational context

7. User participation in the technology process

8. Irregular working hours & sleep deprivation

9. Aspects of advance, anchor & retard sleeps

10. Control movements & expected effects

11. Control-effect relations of common hand controls

12. Rules for arrangement of controls

13. Guidelines for labels


32. Record Systems and Surveillance

Chapter Editor:  Steven D. Stellman

 


 

Table of Contents 

Tables and Figures

Occupational Disease Surveillance and Reporting Systems
Steven B. Markowitz

Occupational Hazard Surveillance
David H. Wegman and Steven D. Stellman

Surveillance in Developing Countries
David Koh and Kee-Seng Chia

Development and Application of an Occupational Injury and Illness Classification System
Elyce Biddle

Risk Analysis of Nonfatal Workplace Injuries and Illnesses
John W. Ruser

Case Study: Worker Protection and Statistics on Accidents and Occupational Diseases - HVBG, Germany
Martin Butz and Burkhard Hoffmann

Case Study: Wismut - A Uranium Exposure Revisited
Heinz Otten and Horst Schulz

Measurement Strategies and Techniques for Occupational Exposure Assessment in Epidemiology
Frank Bochmann and Helmut Blome

Case Study: Occupational Health Surveys in China

Tables


1. Angiosarcoma of the liver - world register

2. Occupational illness, US, 1986 versus 1992

3. US Deaths from pneumoconiosis & pleural mesothelioma

4. Sample list of notifiable occupational diseases

5. Illness & injury reporting code structure, US

6. Nonfatal occupational injuries & illnesses, US 1993

7. Risk of occupational injuries & illnesses

8. Relative risk for repetitive motion conditions

9. Workplace accidents, Germany, 1981-93

10. Grinders in metalworking accidents, Germany, 1984-93

11. Occupational disease, Germany, 1980-93

12. Infectious diseases, Germany, 1980-93

13. Radiation exposure in the Wismut mines

14. Occupational diseases in Wismut uranium mines 1952-90


33. Toxicology

Chapter Editor: Ellen K. Silbergeld


Table of Contents

Tables and Figures

Introduction
Ellen K. Silbergeld, Chapter Editor

General Principles of Toxicology

Definitions and Concepts
Bo Holmberg, Johan Hogberg and Gunnar Johanson

Toxicokinetics
Dušan Djuríc

Target Organ And Critical Effects
Marek Jakubowski

Effects Of Age, Sex And Other Factors
Spomenka Telišman

Genetic Determinants Of Toxic Response
Daniel W. Nebert and Ross A. McKinnon

Mechanisms of Toxicity

Introduction And Concepts
Philip G. Watanabe

Cellular Injury And Cellular Death
Benjamin F. Trump and Irene K. Berezesky

Genetic Toxicology
R. Rita Misra and Michael P. Waalkes

Immunotoxicology
Joseph G. Vos and Henk van Loveren

Target Organ Toxicology
Ellen K. Silbergeld

Toxicology Test Methods

Biomarkers
Philippe Grandjean

Genetic Toxicity Assessment
David M. DeMarini and James Huff

In Vitro Toxicity Testing
Joanne Zurlo

Structure Activity Relationships
Ellen K. Silbergeld

Regulatory Toxicology

Toxicology In Health And Safety Regulation
Ellen K. Silbergeld

Principles Of Hazard Identification - The Japanese Approach
Masayuki Ikeda

The United States Approach to Risk Assessment Of Reproductive Toxicants and Neurotoxic Agents
Ellen K. Silbergeld

Approaches To Hazard Identification - IARC
Harri Vainio and Julian Wilbourn

Appendix - Overall Evaluations of Carcinogenicity to Humans: IARC Monographs Volumes 1-69 (836)

Carcinogen Risk Assessment: Other Approaches
Cees A. van der Heijden

Tables 


  1. Examples of critical organs & critical effects
  2. Basic effects of possible multiple interactions of metals
  3. Haemoglobin adducts in workers exposed to aniline & acetanilide
  4. Hereditary, cancer-prone disorders & defects in DNA repair
  5. Examples of chemicals that exhibit genotoxicity in human cells
  6. Classification of tests for immune markers
  7. Examples of biomarkers of exposure
  8. Pros & cons of methods for identifying human cancer risks
  9. Comparison of in vitro systems for hepatotoxicity studies
  10. Comparison of SAR & test data: OECD/NTP analyses
  11. Regulation of chemical substances by laws, Japan
  12. Test items under the Chemical Substance Control Law, Japan
  13. Chemical substances & the Chemical Substances Control Law
  14. Selected major neurotoxicity incidents
  15. Examples of specialized tests to measure neurotoxicity
  16. Endpoints in reproductive toxicology
  17. Comparison of low-dose extrapolation procedures
  18. Frequently cited models in carcinogen risk characterization


General Principles

Basic Concepts and Definitions

At the worksite, industrial hygiene methods can measure and control only airborne chemicals; other routes by which potentially harmful agents reach workers, such as skin absorption and ingestion, as well as non-work-related exposure, remain undetected and therefore uncontrolled. Biological monitoring helps fill this gap.

Biological monitoring was defined at a 1980 seminar held in Luxembourg, jointly sponsored by the European Economic Community (EEC), the National Institute for Occupational Safety and Health (NIOSH) and the Occupational Safety and Health Administration (OSHA) (Berlin, Yodaiken and Henman 1984), as “the measurement and assessment of agents or their metabolites either in tissues, secreta, excreta, expired air or any combination of these to evaluate exposure and health risk compared to an appropriate reference”. Monitoring is a repetitive, regular and preventive activity designed to lead, if necessary, to corrective actions; it should not be confused with diagnostic procedures.

Biological monitoring is one of the three important tools in the prevention of diseases due to toxic agents in the general or occupational environment, the other two being environmental monitoring and health surveillance.

The sequence in the possible development of such disease may be represented schematically as follows: source → exposure to the chemical agent → internal dose → biochemical or cellular effect (reversible) → health effects → disease. The relationships among environmental, biological and exposure monitoring, and health surveillance, are shown in figure 1.

Figure 1. The relationship between environmental, biological and exposure monitoring, and health surveillance


When a toxic substance (an industrial chemical, for example) is present in the environment, it contaminates air, water, food, or surfaces in contact with the skin; the amount of toxic agent in these media is evaluated via environmental monitoring.

As a result of absorption, distribution, metabolism, and excretion, a certain internal dose of the toxic agent (the net amount of a pollutant absorbed in or passed through the organism over a specific time interval) is effectively delivered to the body, and becomes detectable in body fluids. As a result of its interaction with a receptor in the critical organ (the organ which, under specific conditions of exposure, exhibits the first or the most important adverse effect), biochemical and cellular events occur. Both the internal dose and the elicited biochemical and cellular effects may be measured through biological monitoring.

Health surveillance was defined at the above-mentioned 1980 EEC/NIOSH/OSHA seminar as “the periodic medico-physiological examination of exposed workers with the objective of protecting health and preventing disease”.

Biological monitoring and health surveillance are parts of a continuum that can range from the measurement of agents or their metabolites in the body via evaluation of biochemical and cellular effects, to the detection of signs of early reversible impairment of the critical organ. The detection of established disease is outside the scope of these evaluations.

Goals of Biological Monitoring

Biological monitoring can be divided into (a) monitoring of exposure, and (b) monitoring of effect, for which indicators of internal dose and of effect are used respectively.

The purpose of biological monitoring of exposure is to assess health risk through the evaluation of internal dose, achieving an estimate of the biologically active body burden of the chemical in question. Its rationale is to ensure that worker exposure does not reach levels capable of eliciting adverse effects. An effect is termed “adverse” if there is an impairment of functional capacity, a decreased ability to compensate for additional stress, a decreased ability to maintain homeostasis (a stable state of equilibrium), or an enhanced susceptibility to other environmental influences.

Depending on the chemical and the analysed biological parameter, the term internal dose may have different meanings (Bernard and Lauwerys 1987). First, it may mean the amount of a chemical recently absorbed, for example, during a single workshift. A determination of the pollutant’s concentration in alveolar air or in the blood may be made during the workshift itself, or as late as the next day (samples of blood or alveolar air may be taken up to 16 hours after the end of the exposure period). Second, in the case that the chemical has a long biological half-life—for example, metals in the bloodstream—the internal dose could reflect the amount absorbed over a period of a few months.

Third, the term may also mean the amount of chemical stored. In this case it represents an indicator of accumulation which can provide an estimate of the concentration of the chemical in organs and/or tissues from which, once deposited, it is only slowly released. For example, measurements of DDT or PCB in blood could provide such an estimate.

Finally, an internal dose value may indicate the quantity of the chemical at the site where it exerts its effects, thus providing information about the biologically effective dose. One of the most promising and important uses of this capability, for example, is the determination of adducts formed by toxic chemicals with protein in haemoglobin or with DNA.
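
How much exposure history a single measurement of internal dose integrates depends on the biological half-life of the substance (the second meaning above). The following is a minimal sketch, assuming a simple one-compartment model with first-order elimination; the daily uptake values and half-lives are hypothetical and serve only to illustrate why a rapidly eliminated solvent essentially reflects the last workshift, whereas a slowly eliminated metal reflects months of absorption.

```python
import math

def body_burden(daily_uptakes, half_life_days):
    """Body burden after a series of daily uptakes, assuming simple
    first-order elimination (illustrative one-compartment model)."""
    k = math.log(2) / half_life_days            # elimination rate constant
    burden = 0.0
    for uptake in daily_uptakes:
        burden = burden * math.exp(-k) + uptake  # decay for one day, then add today's uptake
    return burden

uptakes = [1.0] * 90  # hypothetical: one unit absorbed per working day for 90 days

print(round(body_burden(uptakes, half_life_days=0.3), 2))   # solvent-like: reflects roughly one day's uptake
print(round(body_burden(uptakes, half_life_days=60.0), 1))  # metal-like: integrates tens of days of uptake
```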

Biological monitoring of effects is aimed at identifying early and reversible alterations which develop in the critical organ, and which, at the same time, can identify individuals with signs of adverse health effects. In this sense, biological monitoring of effects represents the principal tool for the health surveillance of workers.

Principal Monitoring Methods

Biological monitoring of exposure is based on the determination of indicators of internal dose by measuring:

    • the amount of the chemical, to which the worker is exposed, in blood or urine (rarely in milk, saliva, or fat)
    • the amount of one or more metabolites of the chemical involved in the same body fluids
    • the concentration of volatile organic compounds (solvents) in alveolar air
    • the biologically effective dose of compounds which have formed adducts to DNA or other large molecules and which thus have a potential genotoxic effect.

           

          Factors affecting the concentration of the chemical and its metabolites in blood or urine will be discussed below.

          As far as the concentration in alveolar air is concerned, besides the level of environmental exposure, the most important factors involved are solubility and metabolism of the inhaled substance, alveolar ventilation, cardiac output, and length of exposure (Brugnone et al. 1980).

The use of DNA and haemoglobin adducts in monitoring human exposure to substances with carcinogenic potential is a very promising technique for the measurement of low-level exposures. (It should be noted, however, that not all chemicals that bind to macromolecules in the human organism are genotoxic, i.e., potentially carcinogenic.) Adduct formation is only one step in the complex process of carcinogenesis. Other cellular events, such as DNA repair, promotion and progression, undoubtedly modify the risk of developing a disease such as cancer. Thus, at the present time, the measurement of adducts should be seen as confined to monitoring exposure to chemicals. This is discussed more fully in the article “Genotoxic chemicals” later in this chapter.

          Biological monitoring of effects is performed through the determination of indicators of effect, that is, those that can identify early and reversible alterations. This approach may provide an indirect estimate of the amount of chemical bound to the sites of action and offers the possibility of assessing functional alterations in the critical organ in an early phase.

Unfortunately, we can list only a few examples of the application of this approach, namely, (1) the inhibition of pseudocholinesterase by organophosphate insecticides, (2) the inhibition of δ-aminolaevulinic acid dehydratase (ALA-D) by inorganic lead, and (3) the increased urinary excretion of D-glucaric acid and porphyrins in subjects exposed to chemicals inducing microsomal enzymes and/or to porphyrogenic agents (e.g., chlorinated hydrocarbons).

          Advantages and Limitations of Biological Monitoring

          For substances that exert their toxicity after entering the human organism, biological monitoring provides a more focused and targeted assessment of health risk than does environmental monitoring. A biological parameter reflecting the internal dose brings us one step closer to understanding systemic adverse effects than does any environmental measurement.

          Biological monitoring offers numerous advantages over environmental monitoring and in particular permits assessment of:

            • exposure over an extended time period
            • exposure as a result of worker mobility in the working environment
            • absorption of a substance via various routes, including the skin
            • overall exposure as a result of different sources of pollution, both occupational and non-occupational
            • the quantity of a substance absorbed by the subject depending on factors other than the degree of exposure, such as the physical effort required by the job, ventilation, or climate
            • the quantity of a substance absorbed by a subject depending on individual factors that can influence the toxicokinetics of the toxic agent in the organism; for example, age, sex, genetic features, or functional state of the organs where the toxic substance undergoes biotransformation and elimination.

                       

                      In spite of these advantages, biological monitoring still suffers today from considerable limitations, the most significant of which are the following:

    • The number of possible substances which can be monitored biologically is at present still rather small.
    • In the case of acute exposure, biological monitoring supplies useful information only for exposure to substances that are rapidly metabolized, for example, aromatic solvents.
    • The significance of biological indicators has not been clearly defined; for example, it is not always known whether the levels of a substance measured in biological material reflect current or cumulative exposure (e.g., urinary cadmium and mercury).
    • Generally, biological indicators of internal dose allow assessment of the degree of exposure, but do not furnish data that measure the actual amount present in the critical organ.
    • Often it is not known whether other exogenous substances, to which the organism is simultaneously exposed in the working and general environment, interfere with the metabolism of the substances being monitored.
    • There is not always sufficient knowledge on the relationships existing between the levels of environmental exposure and the levels of the biological indicators on the one hand, and between the levels of the biological indicators and possible health effects on the other.
    • The number of biological indicators for which biological exposure indices (BEIs) exist at present is rather limited. Follow-up information is needed to determine whether a substance, presently identified as not capable of causing an adverse effect, may at a later time be shown to be harmful.
    • A BEI usually represents a level of an agent that is most likely to be observed in a specimen collected from a healthy worker who has been exposed to the chemical to the same extent as a worker with an inhalation exposure to the TLV (threshold limit value) time-weighted average (TWA).

                                       

                                      Information Required for the Development of Methods and Criteria for Selecting Biological Tests

Planning a biological monitoring programme requires the following basic conditions:

                                        • knowledge of the metabolism of an exogenous substance in the human organism (toxicokinetics)
                                        • knowledge of the alterations that occur in the critical organ (toxicodynamics)
                                        • existence of indicators
                                        • existence of sufficiently accurate analytical methods
                                        • possibility of using readily obtainable biological samples on which the indicators can be measured
                                        • existence of dose-effect and dose-response relationships and knowledge of these relationships
                                        • predictive validity of the indicators.

                                                     

                                                    In this context, the validity of a test is the degree to which the parameter under consideration predicts the situation as it really is (i.e., as more accurate measuring instruments would show it to be). Validity is determined by the combination of two properties: sensitivity and specificity. If a test possesses a high sensitivity, this means that it will give few false negatives; if it possesses high specificity, it will give few false positives (CEC 1985-1989).
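
Both properties can be calculated directly from a validation study in which the biological test is compared with a more accurate reference method. The sketch below shows the arithmetic; the counts are invented for illustration.

```python
def sensitivity(true_pos, false_neg):
    """Proportion of truly affected subjects that the test detects;
    high sensitivity means few false negatives."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of truly unaffected subjects that the test clears;
    high specificity means few false positives."""
    return true_neg / (true_neg + false_pos)

# Hypothetical comparison of a biological test against a reference method
print(sensitivity(true_pos=45, false_neg=5))    # 0.90
print(specificity(true_neg=190, false_pos=10))  # 0.95
```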

                                                    Relationship between exposure, internal dose and effects

                                                    The study of the concentration of a substance in the working environment and the simultaneous determination of the indicators of dose and effect in exposed subjects allows information to be obtained on the relationship between occupational exposure and the concentration of the substance in biological samples, and between the latter and the early effects of exposure.

                                                    Knowledge of the relationships between the dose of a substance and the effect it produces is an essential requirement if a programme of biological monitoring is to be put into effect. The evaluation of this dose-effect relationship is based on the analysis of the degree of association existing between the indicator of dose and the indicator of effect and on the study of the quantitative variations of the indicator of effect with every variation of indicator of dose. (See also the chapter Toxicology, for further discussion of dose-related relationships).

                                                    With the study of the dose-effect relationship it is possible to identify the concentration of the toxic substance at which the indicator of effect exceeds the values currently considered not harmful. Furthermore, in this way it may also be possible to examine what the no-effect level might be.

                                                    Since not all the individuals of a group react in the same manner, it is necessary to examine the dose-response relationship, in other words, to study how the group responds to exposure by evaluating the appearance of the effect compared to the internal dose. The term response denotes the percentage of subjects in the group who show a specific quantitative variation of an effect indicator at each dose level.
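
The two relationships can be illustrated numerically. In the sketch below, all measurements are hypothetical: the dose-effect relationship is summarized by the degree of association between the dose indicator and the effect indicator, while the dose-response relationship is expressed as the percentage of subjects in each dose group whose effect indicator exceeds a chosen cut-off.

```python
from statistics import correlation  # Python 3.10+

# Hypothetical paired values of a dose indicator and an effect indicator
dose   = [10, 15, 20, 25, 30, 35, 40, 45, 50, 55]
effect = [ 2,  3,  3,  5,  6,  8,  9, 11, 12, 15]

# Dose-effect: degree of association between the two indicators
print("dose-effect correlation:", round(correlation(dose, effect), 2))

# Dose-response: percentage of "responders" (effect indicator above a
# cut-off currently considered not harmful) at each dose level
cutoff = 7
groups = {"low (<25)":   [e for d, e in zip(dose, effect) if d < 25],
          "mid (25-44)": [e for d, e in zip(dose, effect) if 25 <= d < 45],
          "high (>=45)": [e for d, e in zip(dose, effect) if d >= 45]}
for name, values in groups.items():
    responders = 100 * sum(e > cutoff for e in values) / len(values)
    print(f"{name}: {responders:.0f}% responders")
```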

                                                    Practical Applications of Biological Monitoring

                                                    The practical application of a biological monitoring programme requires information on (1) the behaviour of the indicators used in relation to exposure, especially those relating to degree, continuity and duration of exposure, (2) the time interval between end of exposure and measurement of the indicators, and (3) all physiological and pathological factors other than exposure that can alter the indicator levels.

                                                    In the following articles the behaviour of a number of biological indicators of dose and effect that are used for monitoring occupational exposure to substances widely used in industry will be presented. The practical usefulness and limits will be assessed for each substance, with particular emphasis on time of sampling and interfering factors. Such considerations will be helpful in establishing criteria for selecting a biological test.

                                                    Time of sampling

                                                    In selecting the time of sampling, the different kinetic aspects of the chemical must be kept in mind; in particular it is essential to know how the substance is absorbed via the lung, the gastrointestinal tract and the skin, subsequently distributed to the different compartments of the body, biotransformed, and finally eliminated. It is also important to know whether the chemical may accumulate in the body.

                                                    With respect to exposure to organic substances, the collection time of biological samples becomes all the more important in view of the different velocity of the metabolic processes involved and consequently the more or less rapid excretion of the absorbed dose.

                                                    Interfering Factors

                                                    Correct use of biological indicators requires a thorough knowledge of those factors which, although independent of exposure, may nevertheless affect the biological indicator levels. The following are the most important types of interfering factors (Alessio, Berlin and Foà 1987).

Physiological factors such as diet, sex and age can affect results. Consumption of fish and crustaceans may increase the levels of urinary arsenic and blood mercury. In female subjects with the same blood lead levels as males, erythrocyte protoporphyrin values are significantly higher than in male subjects. The levels of urinary cadmium increase with age.

                                                    Among the personal habits that can distort indicator levels, smoking and alcohol consumption are particularly important. Smoking may cause direct absorption of substances naturally present in tobacco leaves (e.g., cadmium), or of pollutants present in the working environment that have been deposited on the cigarettes (e.g., lead), or of combustion products (e.g., carbon monoxide).

Alcohol consumption may influence biological indicator levels, since substances such as lead are naturally present in alcoholic beverages. Heavy drinkers, for example, show higher blood lead levels than control subjects. Ingestion of alcohol can interfere with the biotransformation and elimination of toxic industrial compounds: with a single dose, alcohol can inhibit the metabolism of many solvents, for example, trichloroethylene, xylene, styrene and toluene, because these solvents compete with ethyl alcohol for the enzymes that are essential for the breakdown of both ethanol and the solvents. Regular alcohol ingestion can also affect the metabolism of solvents in a totally different manner, accelerating solvent metabolism, presumably because of induction of the microsomal oxidizing system. Since ethanol is the most important substance capable of inducing metabolic interference, it is advisable to determine indicators of exposure for solvents only on days when alcohol has not been consumed.

                                                    Less information is available on the possible effects of drugs on the levels of biological indicators. It has been demonstrated that aspirin can interfere with the biological transformation of xylene to methylhippuric acid, and phenylsalicylate, a drug widely used as an analgesic, can significantly increase the levels of urinary phenols. The consumption of aluminium-based antacid preparations can give rise to increased levels of aluminium in plasma and urine.

                                                    Marked differences have been observed in different ethnic groups in the metabolism of widely used solvents such as toluene, xylene, trichloroethylene, tetrachloroethylene, and methylchloroform.

                                                    Acquired pathological states can influence the levels of biological indicators. The critical organ can behave anomalously with respect to biological monitoring tests because of the specific action of the toxic agent as well as for other reasons. An example of situations of the first type is the behaviour of urinary cadmium levels: when tubular disease due to cadmium sets in, urinary excretion increases markedly and the levels of the test no longer reflect the degree of exposure. An example of the second type of situation is the increase in erythrocyte protoporphyrin levels observed in iron-deficient subjects who show no abnormal lead absorption.

                                                    Physiological changes in the biological media—urine, for example—on which determinations of the biological indicators are based, can influence the test values. For practical purposes, only spot urinary samples can be obtained from individuals during work, and the varying density of these samples means that the levels of the indicator can fluctuate widely in the course of a single day.

In order to overcome this difficulty, it is advisable to eliminate over-diluted or over-concentrated samples according to selected specific gravity or creatinine values. In particular, urine with a specific gravity below 1.010 or above 1.030, or with a creatinine concentration lower than 0.5 g/l or greater than 3.0 g/l, should be discarded. Several authors also suggest adjusting the values of the indicators according to specific gravity or expressing the values according to urinary creatinine content.
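
A minimal sketch of these two practices, using the acceptance limits just cited, follows; the function names, the sample values and the expression of the result per gram of creatinine are illustrative assumptions.

```python
def urine_sample_acceptable(specific_gravity, creatinine_g_per_l):
    """Screen out over-diluted or over-concentrated spot samples,
    using the limits cited in the text."""
    return 1.010 <= specific_gravity <= 1.030 and 0.5 <= creatinine_g_per_l <= 3.0

def creatinine_adjusted(analyte_mg_per_l, creatinine_g_per_l):
    """Express the indicator per gram of creatinine rather than per litre
    of urine, one common way of correcting for urinary dilution."""
    return analyte_mg_per_l / creatinine_g_per_l

# Hypothetical spot urine sample
sg, creatinine, analyte = 1.018, 1.2, 0.6   # specific gravity, g/l, mg/l
if urine_sample_acceptable(sg, creatinine):
    print(f"{creatinine_adjusted(analyte, creatinine):.2f} mg/g creatinine")
else:
    print("sample discarded: over-diluted or over-concentrated")
```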

                                                    Pathological changes in the biological media can also considerably influence the values of the biological indicators. For example, in anaemic subjects exposed to metals (mercury, cadmium, lead, etc.) the blood levels of the metal may be lower than would be expected on the basis of exposure; this is due to the low level of red blood cells that transport the toxic metal in the blood circulation.

                                                    Therefore, when determinations of toxic substances or metabolites bound to red blood cells are made on whole blood, it is always advisable to determine the haematocrit, which gives a measure of the percentage of blood cells in whole blood.
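
As an illustration only, a whole-blood level of a red-cell-bound metal can be re-expressed at a reference haematocrit so that anaemic subjects are not systematically underestimated; the proportional correction and the reference haematocrit of 0.45 used below are assumptions made for this example, not a formula prescribed in the text.

```python
def haematocrit_normalized(metal_whole_blood, haematocrit, reference_haematocrit=0.45):
    """Illustrative proportional re-expression of a red-cell-bound metal
    concentration at a reference haematocrit (assumed value 0.45)."""
    return metal_whole_blood * reference_haematocrit / haematocrit

# Hypothetical anaemic worker: whole-blood lead 250 ug/l at haematocrit 0.30
print(haematocrit_normalized(250, 0.30))   # 375.0 ug/l at the reference haematocrit
```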

                                                    Multiple exposure to toxic substances present in the workplace

                                                    In the case of combined exposure to more than one toxic substance present at the workplace, metabolic interferences may occur that can alter the behaviour of the biological indicators and thus create serious problems in interpretation. In human studies, interferences have been demonstrated, for example, in combined exposure to toluene and xylene, xylene and ethylbenzene, toluene and benzene, hexane and methyl ethyl ketone, tetrachloroethylene and trichloroethylene.

                                                    In particular, it should be noted that when biotransformation of a solvent is inhibited, the urinary excretion of its metabolite is reduced (possible underestimation of risk) whereas the levels of the solvent in blood and expired air increase (possible overestimation of risk).

                                                    Thus, in situations in which it is possible to measure simultaneously the substances and their metabolites in order to interpret the degree of inhibitory interference, it would be useful to check whether the levels of the urinary metabolites are lower than expected and at the same time whether the concentration of the solvents in blood and/or expired air is higher.
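
A minimal sketch of that interpretation rule follows; the names, measured values and expected levels are hypothetical.

```python
def suspect_inhibitory_interference(urinary_metabolite, expected_metabolite,
                                    solvent_in_blood, expected_solvent):
    """Flag the pattern described above: urinary metabolite lower than expected
    (possible underestimation of risk) while the unchanged solvent in blood or
    expired air is higher than expected (possible overestimation of risk)."""
    return urinary_metabolite < expected_metabolite and solvent_in_blood > expected_solvent

# Hypothetical combined solvent exposure
print(suspect_inhibitory_interference(urinary_metabolite=0.6, expected_metabolite=1.0,
                                      solvent_in_blood=1.4, expected_solvent=1.0))  # True
```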

                                                    Metabolic interferences have been described for exposures where the single substances are present in levels close to and sometimes below the currently accepted limit values. Interferences, however, do not usually occur when exposure to each substance present in the workplace is low.

                                                    Practical Use of Biological Indicators

                                                    Biological indicators can be used for various purposes in occupational health practice, in particular for (1) periodic control of individual workers, (2) analysis of the exposure of a group of workers, and (3) epidemiological assessments. The tests used should possess the features of precision, accuracy, good sensitivity, and specificity in order to minimize the possible number of false classifications.

                                                    Reference values and reference groups

                                                    A reference value is the level of a biological indicator in the general population not occupationally exposed to the toxic substance under study. It is necessary to refer to these values in order to compare the data obtained through biological monitoring programmes in a population which is presumed to be exposed. Reference values should not be confused with limit values, which generally are the legal limits or guidelines for occupational and environmental exposure (Alessio et al. 1992).

When it is necessary to compare the results of group analyses, the distribution of the values in the reference group and in the group under study must be known, because only then can a statistical comparison be made. In these cases, it is essential to attempt to match the general population (reference group) with the exposed group for similar characteristics such as sex, age, lifestyle and eating habits.
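
A minimal sketch of such a group comparison is given below, assuming SciPy is available and using a nonparametric test because biological indicator values are often skewed; both sets of values are hypothetical.

```python
from scipy.stats import mannwhitneyu  # assumes SciPy is installed

# Hypothetical urinary indicator values (same units) in two matched groups
reference_group = [0.8, 1.1, 0.9, 1.3, 1.0, 0.7, 1.2, 0.9, 1.0, 1.1]
exposed_group   = [1.4, 1.9, 1.1, 2.3, 1.7, 1.5, 2.0, 1.6, 1.8, 1.2]

# One-sided nonparametric comparison of the two distributions
stat, p_value = mannwhitneyu(exposed_group, reference_group, alternative="greater")
print(f"U = {stat}, one-sided p = {p_value:.4f}")
```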

                                                    To obtain reliable reference values one must make sure that the subjects making up the reference group have never been exposed to the toxic substances, either occupationally or due to particular conditions of environmental pollution.

In assessing exposure to toxic substances, one must be careful not to include subjects who, although not directly exposed to the toxic substance in question, work in the same workplace; if these subjects are in fact indirectly exposed, the exposure of the group may as a consequence be underestimated.

                                                    Another practice to avoid, although it is still widespread, is the use for reference purposes of values reported in the literature that are derived from case lists from other countries and may often have been collected in regions where different environmental pollution situations exist.

                                                    Periodic monitoring of individual workers

                                                    Periodic monitoring of individual workers is mandatory when the levels of the toxic substance in the atmosphere of the working environment approach the limit value. Where possible, it is advisable to simultaneously check an indicator of exposure and an indicator of effect. The data thus obtained should be compared with the reference values and the limit values suggested for the substance under study (ACGIH 1993).

                                                    Analysis of a group of workers

                                                    Analysis of a group becomes mandatory when the results of the biological indicators used can be markedly influenced by factors independent of exposure (diet, concentration or dilution of urine, etc.) and for which a wide range of “normal” values exists.

                                                    In order to ensure that the group study will furnish useful results, the group must be sufficiently numerous and homogeneous as regards exposure, sex, and, in the case of some toxic agents, work seniority. The more the exposure levels are constant over time, the more reliable the data will be. An investigation carried out in a workplace where the workers frequently change department or job will have little value. For a correct assessment of a group study it is not sufficient to express the data only as mean values and range. The frequency distribution of the values of the biological indicator in question must also be taken into account.
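
A minimal sketch of a group summary that goes beyond mean and range, also reporting quartiles and a simple frequency distribution, is given below; the values are hypothetical.

```python
from statistics import mean, quantiles
from collections import Counter

# Hypothetical biological indicator values for one homogeneous group of workers
values = [0.9, 1.1, 1.0, 1.4, 2.8, 1.2, 1.3, 0.8, 1.1, 3.5, 1.0, 1.2]

print("mean:", round(mean(values), 2), " range:", (min(values), max(values)))
print("quartiles:", [round(q, 2) for q in quantiles(values, n=4)])

# Frequency distribution in unit-wide classes: a few high values can inflate
# the mean while most of the group remains at low levels
classes = Counter(int(v) for v in values)
for lower in sorted(classes):
    print(f"[{lower}.0 - {lower}.9]: {classes[lower]} workers")
```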

                                                    Epidemiological assessments

                                                    Data obtained from biological monitoring of groups of workers can also be used in cross-sectional or prospective epidemiological studies.

                                                    Cross-sectional studies can be used to compare the situations existing in different departments of the factory or in different industries in order to set up risk maps for manufacturing processes. A difficulty that may be encountered in this type of application depends on the fact that inter-laboratory quality controls are not yet sufficiently widespread; thus it cannot be guaranteed that different laboratories will produce comparable results.

                                                    Prospective studies serve to assess the behaviour over time of the exposure levels so as to check, for example, the efficacy of environmental improvements or to correlate the behaviour of biological indicators over the years with the health status of the subjects being monitored. The results of such long-term studies are very useful in solving problems involving changes over time. At present, biological monitoring is mainly used as a suitable procedure for assessing whether current exposure is judged to be “safe,” but it is as yet not valid for assessing situations over time. A given level of exposure considered safe today may no longer be regarded as such at some point in the future.

                                                    Ethical Aspects

                                                    Some ethical considerations arise in connection with the use of biological monitoring as a tool to assess potential toxicity. One goal of such monitoring is to assemble enough information to decide what level of any given effect constitutes an undesirable effect; in the absence of sufficient data, any perturbation will be considered undesirable. The regulatory and legal implications of this type of information need to be evaluated. Therefore, we should seek societal discussion and consensus as to the ways in which biological indicators should best be used. In other words, education is required of workers, employers, communities and regulatory authorities as to the meaning of the results obtained by biological monitoring so that no one is either unduly alarmed or complacent.

                                                    There must be appropriate communication with the individual upon whom the test has been performed concerning the results and their interpretation. Further, whether or not the use of some indicators is experimental should be clearly conveyed to all participants.

                                                    The International Code of Ethics for Occupational Health Professionals, issued by the International Commission on Occupational Health in 1992, stated that “biological tests and other investigations must be chosen from the point of view of their validity for protection of the health of the worker concerned, with due regard to their sensitivity, their specificity and their predictive value”. Use must not be made of tests “which are not reliable or which do not have a sufficient predictive value in relation to the requirements of the work assignment”. (See the chapter Ethical Issues for further discussion and the text of the Code.)

                                                    Trends in Regulation and Application

                                                    Biological monitoring can be carried out for only a limited number of environmental pollutants on account of the limited availability of appropriate reference data. This imposes important limitations on the use of biological monitoring in evaluating exposure.

The World Health Organization (WHO), for example, has proposed health-based reference values for lead, mercury and cadmium only. These values are defined as levels in blood and urine not linked to any detectable adverse effect. The American Conference of Governmental Industrial Hygienists (ACGIH) has established biological exposure indices (BEIs) for about 26 compounds; BEIs are defined as “values for determinants which are indicators of the degree of integrated exposure to industrial chemicals” (ACGIH 1995).

                                                     


                                                    The Nature and Aims of Ergonomics

                                                    Definition and Scope

                                                    Ergonomics means literally the study or measurement of work. In this context, the term work signifies purposeful human function; it extends beyond the more restricted concept of work as labour for monetary gain to incorporate all activities whereby a rational human operator systematically pursues an objective. Thus it includes sports and other leisure activities, domestic work such as child care and home maintenance, education and training, health and social service, and either controlling engineered systems or adapting to them, for example, as a passenger in a vehicle.

                                                    The human operator, the focus of study, may be a skilled professional operating a complex machine in an artificial environment, a customer who has casually purchased a new piece of equipment for personal use, a child sitting in a classroom or a disabled person in a wheelchair. The human being is highly adaptable but not infinitely so. There are ranges of optimum conditions for any activity. One of the tasks of ergonomics is to define what these ranges are and to explore the undesirable effects which occur if the limits are transgressed—for example if a person is expected to work in conditions of excessive heat, noise or vibration, or if the physical or mental workload is too high or too low.

                                                    Ergonomics examines not only the passive ambient situation but also the unique advantages of the human operator and the contributions that can be made if a work situation is designed to permit and encourage the person to make the best use of his or her abilities. Human abilities may be characterized not only with reference to the generic human operator but also with respect to those more particular abilities that are called upon in specific situations where high performance is essential. For example, an automobile manufacturer will consider the range of physical size and strength of the population of drivers who are expected to use a particular model to ensure that the seats are comfortable, that the controls are readily identifiable and within reach, that there is clear visibility to the front and the rear, and that the internal instruments are easy to read. Ease of entry and egress will also be taken into account. By contrast, the designer of a racing car will assume that the driver is athletic so that ease of getting in and out, for example, is not important and, in fact, design features as a whole as they relate to the driver may well be tailored to the dimensions and preferences of a particular driver to ensure that he or she can exercise his or her full potential and skill as a driver.

                                                    In all situations, activities and tasks the focus is the person or persons involved. It is assumed that the structure, the engineering and any other technology is there to serve the operator, not the other way round.

                                                    History and Status

                                                    About a century ago it was recognized that working hours and conditions in some mines and factories were not tolerable in terms of safety and health, and the need was evident to pass laws to set permissible limits in these respects. The determination and statement of those limits can be regarded as the beginning of ergonomics. They were, incidentally, the beginning of all the activities which now find expression through the work of the International Labour Organization (ILO).

                                                    Research, development and application proceeded slowly until the Second World War. This triggered greatly accelerated development of machines and instrumentation such as vehicles, aircraft, tanks, guns and vastly improved sensing and navigation devices. As technology advanced, greater flexibility was available to allow adaptation to the operator, an adaptation that became the more necessary because human performance was limiting the performance of the system. If a powered vehicle can travel at a speed of only a few kilometres per hour there is no need to worry about the performance of the driver, but when the vehicle’s maximum speed is increased by a factor of ten or a hundred, then the driver has to react more quickly and there is no time to correct mistakes to avert disaster. Similarly, as technology is improved there is less need to worry about mechanical or electrical failure (for instance) and attention is freed to think about the needs of the driver.

                                                    Thus ergonomics, in the sense of adapting engineering technology to the needs of the operator, becomes simultaneously both more necessary and more feasible as engineering advances.

                                                    The term ergonomics came into use about 1950 when the priorities of developing industry were taking over from the priorities of the military. The development of research and application for the following thirty years is described in detail in Singleton (1982). The United Nations agencies, particularly the ILO and the World Health Organization (WHO), became active in this field in the 1960s.

                                                    In immediate postwar industry the overriding objective, shared by ergonomics, was greater productivity. This was a feasible objective for ergonomics because so much industrial productivity was determined directly by the physical effort of the workers involved—speed of assembly and rate of lifting and movement determined the extent of output. Gradually, mechanical power replaced human muscle power. More power, however, leads to more accidents on the simple principle that an accident is the consequence of power in the wrong place at the wrong time. When things are happening faster, the potential for accidents is further increased. Thus the concern of industry and the aim of ergonomics gradually shifted from productivity to safety. This occurred in the 1960s and early 1970s. About and after this time, much of manufacturing industry shifted from batch production to flow and process production. The role of the operator shifted correspondingly from direct participation to monitoring and inspection. This resulted in a lower frequency of accidents because the operator was more remote from the scene of action but sometimes in a greater severity of accidents because of the speed and power inherent in the process.

                                                    When output is determined by the speed at which machines function then productivity becomes a matter of keeping the system running: in other words, reliability is the objective. Thus the operator becomes a monitor, a trouble-shooter and a maintainer rather than a direct manipulator.

                                                    This historical sketch of the postwar changes in manufacturing industry might suggest that the ergonomist has regularly dropped one set of problems and taken up another set but this is not the case for several reasons. As explained earlier, the concerns of ergonomics are much wider than those of manufacturing industry. In addition to production ergonomics, there is product or design ergonomics, that is, adapting the machine or product to the user. In the car industry, for example, ergonomics is important not only to component manufacturing and the production lines but also to the eventual driver, passenger and maintainer. It is now routine in the marketing of cars and in their critical appraisal by others to review the quality of the ergonomics, considering ride, seat comfort, handling, noise and vibration levels, ease of use of controls, visibility inside and outside, and so on.

                                                    It was suggested above that human performance is usually optimized within a tolerance range of a relevant variable. Much of the early ergonomics attempted to reduce both muscle power output and the extent and variety of movement by way of ensuring that such tolerances were not exceeded. The greatest change in the work situation, the advent of computers, has created the opposite problem. Unless it is well designed ergonomically, a computer workspace can induce too fixed a posture, too little bodily movement and too much repetition of particular combinations of joint movements.

This brief historical review is intended to indicate that, although there has been continuous development of ergonomics, it has taken the form of adding more and more problems rather than changing the problems. However, the corpus of knowledge grows and becomes more reliable and valid: energy expenditure norms do not depend on how or why the energy is expended, postural issues are the same in aircraft seats and in front of computer screens, much human activity now involves using video screens, and there are well-established principles based on a mix of laboratory evidence and field studies.

                                                    Ergonomics and Related Disciplines

                                                    The development of a science-based application which is intermediate between the well-established technologies of engineering and medicine inevitably overlaps into many related disciplines. In terms of its scientific basis, much of ergonomic knowledge derives from the human sciences: anatomy, physiology and psychology. The physical sciences also make a contribution, for example, to solving problems of lighting, heating, noise and vibration.

Most of the European pioneers in ergonomics came from the human sciences, and it is for this reason that ergonomics is well balanced between physiology and psychology. A physiological orientation is required as a background to problems such as energy expenditure, posture and the application of forces, including lifting. A psychological orientation is required to study problems such as information presentation and job satisfaction. There are of course many problems which require a mixed human sciences approach, such as stress, fatigue and shift work.

                                                    Most of the American pioneers in this field were involved in either experimental psychology or engineering and it is for this reason that their typical occupational titles—human engineering and human factors—reflect a difference in emphasis (but not in core interests) from European ergonomics. This also explains why occupational hygiene, from its close relationship to medicine, particularly occupational medicine, is regarded in the United States as quite different from human factors or ergonomics. The difference in other parts of the world is less marked. Ergonomics concentrates on the human operator in action, occupational hygiene concentrates on the hazards to the human operator present in the ambient environment. Thus the central interest of the occupational hygienist is toxic hazards, which are outside the scope of the ergonomist. The occupational hygienist is concerned about effects on health, either long-term or short-term; the ergonomist is, of course, concerned about health but he or she is also concerned about other consequences, such as productivity, work design and workspace design. Safety and health are the generic issues which run through ergonomics, occupational hygiene, occupational health and occupational medicine. It is, therefore, not surprising to find that in a large institution of a research, design or production kind, these subjects are often grouped together. This makes possible an approach based on a team of experts in these separate subjects, each making a specialist contribution to the general problem of health, not only of the workers in the institution but also of those affected by its activities and products. By contrast, in institutions concerned with design or provision of services, the ergonomist might be closer to the engineers and other technologists.

                                                    It will be clear from this discussion that because ergonomics is interdisciplinary and still quite new there is an important problem of how it should best be fitted into an existing organization. It overlaps onto so many other fields because it is concerned with people and people are the basic and all-pervading resource of every organization. There are many ways in which it can be fitted in, depending on the history and objectives of the particular organization. The main criteria are that ergonomics objectives are understood and appreciated and that mechanisms for implementation of recommendations are built into the organization.

                                                    Aims of Ergonomics

                                                    It will be clear already that the benefits of ergonomics can appear in many different forms, in productivity and quality, in safety and health, in reliability, in job satisfaction and in personal development.

                                                    The reason for this breadth of scope is that its basic aim is efficiency in purposeful activity—efficiency in the widest sense of achieving the desired result without wasteful input, without error and without damage to the person involved or to others. It is not efficient to expend unnecessary energy or time because insufficient thought has been given to the design of the work, the workspace, the working environment and the working conditions. It is not efficient to achieve the desired result in spite of the situation design rather than with support from it.

                                                    The aim of ergonomics is to ensure that the working situation is in harmony with the activities of the worker. This aim is self-evidently valid but attaining it is far from easy for a variety of reasons. The human operator is flexible and adaptable and there is continuous learning, but there are quite large individual differences. Some differences, such as physical size and strength, are obvious, but others, such as cultural differences and differences in style and in level of skill, are less easy to identify.

                                                    In view of these complexities it might seem that the solution is to provide a flexible situation where the human operator can optimize a specifically appropriate way of doing things. Unfortunately such an approach is sometimes impracticable because the more efficient way is often not obvious, with the result that a worker can go on doing something the wrong way or in the wrong conditions for years.

                                                    Thus it is necessary to adopt a systematic approach: to start from a sound theory, to set measurable objectives and to check success against these objectives. The various possible objectives are considered below.

                                                    Safety and health

                                                    There can be no disagreement about the desirability of safety and health objectives. The difficulty stems from the fact that neither is directly measurable: their achievement is assessed by their absence rather than their presence. The data in question always pertain to departures from safety and health.

                                                    In the case of health, much of the evidence is long-term as it is based on populations rather than individuals. It is, therefore, necessary to maintain careful records over long periods and to adopt an epidemiological approach through which risk factors can be identified and measured. For example, what should be the maximum hours per day or per year required of a worker at a computer workstation? It depends on the design of the workstation, the kind of work and the kind of person (age, vision, abilities and so on). The effects on health can be diverse, from wrist problems to mental apathy, so it is necessary to carry out comprehensive studies covering quite large populations while simultaneously keeping track of differences within the populations.

                                                    Safety is more directly measurable in a negative sense in terms of kinds and frequencies of accidents and damage. There are problems in defining different kinds of accidents and identifying the often multiple causal factors and there is often a distant relationship between the kind of accident and the degree of harm, from none to fatality.

                                                    Nevertheless, an enormous body of evidence concerning safety and health has been accumulated over the past fifty years and consistencies have been discovered which can be related back to theory, to laws and standards and to principles operative in particular kinds of situations.

                                                    Productivity and efficiency

                                                    Productivity is usually defined in terms of output per unit of time, whereas efficiency incorporates other variables, particularly the ratio of output to input. Efficiency incorporates the cost of what is done in relation to achievement, and in human terms this requires the consideration of the penalties to the human operator.

In industrial situations, productivity is relatively easy to measure: the amount produced can be counted and the time taken to produce it is simple to record. Productivity data are often used in before/after comparisons of working methods, situations or conditions. Such comparisons involve assumptions about equivalence of effort and other costs, because they rely on the principle that the human operator will perform as well as is feasible in the circumstances. If the productivity is higher, then the circumstances must be better. There is much to recommend this simple approach provided that it is used with due regard to the many possible complicating factors which can disguise what is really happening. The best safeguard is to try to make sure that nothing has changed between the before and after situations except the aspects being studied.

                                                    Efficiency is a more comprehensive but always a more difficult measure. It usually has to be specifically defined for a particular situation and in assessing the results of any studies the definition should be checked for its relevance and validity in terms of the conclusions being drawn. For example, is bicycling more efficient than walking? Bicycling is much more productive in terms of the distance that can be covered on a road in a given time, and it is more efficient in terms of energy expenditure per unit of distance or, for indoor exercise, because the apparatus required is cheaper and simpler. On the other hand, the purpose of the exercise might be energy expenditure for health reasons or to climb a mountain over difficult terrain; in these circumstances walking will be more efficient. Thus, an efficiency measure has meaning only in a well-defined context.

                                                    Reliability and quality

                                                    As explained above, reliability rather than productivity becomes the key measure in high technology systems (for instance, transport aircraft, oil refining and power generation). The controllers of such systems monitor performance and make their contribution to productivity and to safety by making tuning adjustments to ensure that the automatic machines stay on line and function within limits. All these systems are in their safest states either when they are quiescent or when they are functioning steadily within the designed performance envelope. They become more dangerous when moving or being moved between equilibrium states, for example, when an aircraft is taking off or a process system is being shut down. High reliability is the key characteristic not only for safety reasons but also because unplanned shut-down or stoppage is extremely expensive. Reliability is straightforward to measure after performance but is extremely difficult to predict except by reference to the past performance of similar systems. When or if something goes wrong human error is invariably a contributing cause, but it is not necessarily an error on the part of the controller: human errors can originate at the design stage and during setting up and maintenance. It is now accepted that such complex high-technology systems require a considerable and continuous ergonomics input from design to the assessment of any failures that occur.

                                                    Quality is related to reliability but is very difficult if not impossible to measure. Traditionally, in batch and flow production systems, quality has been checked by inspection after output, but the current established principle is to combine production and quality maintenance. Thus each operator has parallel responsibility as an inspector. This usually proves to be more effective, but it may mean abandoning work incentives based simply on rate of production. In ergonomic terms it makes sense to treat the operator as a responsible person rather than as a kind of robot programmed for repetitive performance.

                                                    Job satisfaction and personal development

                                                    From the principle that the worker or human operator should be recognized as a person and not a robot it follows that consideration should be given to responsibilities, attitudes, beliefs and values. This is not easy because there are many variables, mostly detectable but not quantifiable, and there are large individual and cultural differences. Nevertheless a great deal of effort now goes into the design and management of work with the aim of ensuring that the situation is as satisfactory as is reasonably practicable from the operator’s viewpoint. Some measurement is possible by using survey techniques and some principles are available based on such working features as autonomy and empowerment.

                                                    Even accepting that these efforts take time and cost money, there can still be considerable dividends from listening to the suggestions, opinions and attitudes of the people actually doing the work. Their approach may not be the same as that of the external work designer and not the same as the assumptions made by the work designer or manager. These differences of view are important and can provide a refreshing change in strategy on the part of everyone involved.

                                                    It is well established that the human being is a continuous learner or can be, given the appropriate conditions. The key condition is to provide feedback about past and present performance which can be used to improve future performance. Moreover, such feedback itself acts as an incentive to performance. Thus everyone gains, the performer and those responsible in a wider sense for the performance. It follows that there is much to be gained from performance improvement, including self-development. The principle that personal development should be an aspect of the application of ergonomics requires greater designer and manager skills but, if it can be applied successfully, can improve all the aspects of human performance discussed above.

                                                    Successful application of ergonomics often follows from doing no more than developing the appropriate attitude or point of view. The people involved are inevitably the central factor in any human effort and the systematic consideration of their advantages, limitations, needs and aspirations is inherently important.

                                                    Conclusion

Ergonomics is the systematic study of people at work with the objective of improving the work situation, the working conditions and the tasks performed. The emphasis is on acquiring relevant and reliable evidence on which to base recommendations for changes in specific situations and on developing more general theories, concepts, guidelines and procedures which will contribute to the continually developing expertise available from ergonomics.

                                                     


                                                    Definitions and Concepts

                                                    Exposure, Dose and Response

                                                    Toxicity is the intrinsic capacity of a chemical agent to affect an organism adversely.

Xenobiotic is a term for a “foreign substance”, that is, one foreign to the organism; its opposite is an endogenous compound. Xenobiotics include drugs, industrial chemicals, naturally occurring poisons and environmental pollutants.

                                                    Hazard is the potential for the toxicity to be realized in a specific setting or situation.

Risk is the probability that a specific adverse effect will occur. It is often expressed as the percentage of cases in a given population and during a specific time period. A risk estimate can be based upon actual cases or a projection of future cases, based upon extrapolations.

                                                    Toxicity rating and toxicity classification can be used for regulatory purposes. Toxicity rating is an arbitrary grading of doses or exposure levels causing toxic effects. The grading can be “supertoxic,” “highly toxic,” “moderately toxic” and so on. The most common ratings concern acute toxicity. Toxicity classification concerns the grouping of chemicals into general categories according to their most important toxic effect. Such categories can include allergenic, neurotoxic, carcinogenic and so on. This classification can be of administrative value as a warning and as information.

The dose-effect relationship is the relationship between dose and effect on the individual level. An increase in dose may increase the intensity of an effect, or a more severe effect may result. A dose-effect curve may be obtained at the level of the whole organism, the cell or the target molecule. Some toxic effects, such as death or cancer, are not graded but are “all or none” effects.

                                                    The dose-response relationship is the relationship between dose and the percentage of individuals showing a specific effect. With increasing dose a greater number of individuals in the exposed population will usually be affected.

It is essential to toxicology to establish dose-effect and dose-response relationships. In medical (epidemiological) studies a criterion often used for accepting a causal relationship between an agent and a disease is that effect or response is proportional to dose.

                                                    Several dose-response curves can be drawn for a chemical—one for each type of effect. The dose-response curve for most toxic effects (when studied in large populations) has a sigmoid shape. There is usually a low-dose range where there is no response detected; as dose increases, the response follows an ascending curve that will usually reach a plateau at a 100% response. The dose-response curve reflects the variations among individuals in a population. The slope of the curve varies from chemical to chemical and between different types of effects. For some chemicals with specific effects (carcinogens, initiators, mutagens) the dose-response curve might be linear from dose zero within a certain dose range. This means that no threshold exists and that even small doses represent a risk. Above that dose range, the risk may increase at greater than a linear rate.
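For illustration only, the short Python sketch below generates such a sigmoid curve with a logistic (Hill-type) function; the ED50 and slope values are hypothetical and are chosen simply to show the fraction of responders rising from near zero at low doses towards a plateau at 100%.

# Minimal sketch: a sigmoid dose-response curve using an illustrative
# logistic (Hill-type) form. The parameters ed50 and hill_slope are
# hypothetical and serve only to show the shape of the curve.
def fraction_responding(dose, ed50=10.0, hill_slope=2.0):
    """Fraction of an exposed population showing the effect at a given dose (0 to 1)."""
    if dose <= 0:
        return 0.0
    return 1.0 / (1.0 + (ed50 / dose) ** hill_slope)

for d in (0, 1, 5, 10, 20, 50, 100):
    print(f"dose {d:>3}: fraction responding {fraction_responding(d):.2f}")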

Variation in exposure during the day and the total length of exposure during one’s lifetime may be as important for the outcome (response) as the mean or even the integrated dose level. High peak exposures may be more harmful than a more even exposure level. This is the case for some organic solvents. On the other hand, for some carcinogens, it has been experimentally shown that fractionation of a single dose into several exposures with the same total dose may be more effective in producing tumours.

                                                    A dose is often expressed as the amount of a xenobiotic entering an organism (in units such as mg/kg body weight). The dose may be expressed in different (more or less informative) ways: exposure dose, which is the air concentration of pollutant inhaled during a certain time period (in work hygiene usually eight hours), or the retained or absorbed dose (in industrial hygiene also called the body burden), which is the amount present in the body at a certain time during or after exposure. The tissue dose is the amount of substance in a specific tissue and the target dose is the amount of substance (usually a metabolite) bound to the critical molecule. The target dose can be expressed as mg chemical bound per mg of a specific macromolecule in the tissue. To apply this concept, information on the mechanism of toxic action on the molecular level is needed. The target dose is more exactly associated with the toxic effect. The exposure dose or body burden may be more easily available, but these are less precisely related to the effect.

                                                    In the dose concept a time aspect is often included, even if it is not always expressed. The theoretical dose according to Haber’s law is D = ct, where D is dose, c is concentration of the xenobiotic in the air and t the duration of exposure to the chemical. If this concept is used at the target organ or molecular level, the amount per mg tissue or molecule over a certain time may be used. The time aspect is usually more important for understanding repeated exposures and chronic effects than for single exposures and acute effects.
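As a minimal illustration of Haber’s law with hypothetical values, the sketch below shows that two exposure patterns with the same concentration-time product give the same theoretical dose, even though, as noted above, peak exposures may in reality be more harmful than an even exposure with the same integrated dose.

# Minimal sketch of Haber's law, D = c * t, with hypothetical exposure values.
def haber_dose(concentration_ppm, hours):
    return concentration_ppm * hours  # theoretical dose in ppm.h

print(haber_dose(50, 8))   # 50 ppm over a full 8-hour shift -> 400 ppm.h
print(haber_dose(200, 2))  # 200 ppm over 2 hours            -> 400 ppm.h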

Additive effects occur as a result of exposure to a combination of chemicals, where the individual toxicities are simply added to each other (1 + 1 = 2). When chemicals act via the same mechanism, additivity of their effects is assumed, although this is not always the case in reality. Interaction between chemicals may result in inhibition (antagonism), with a smaller effect than that expected from addition of the effects of the individual chemicals (1 + 1 < 2). Alternatively, a combination of chemicals may produce a more pronounced effect than would be expected by addition (an increased response among individuals or an increase in the frequency of response in a population); this is called synergism (1 + 1 > 2).

                                                    Latency time is the time between first exposure and the appearance of a detectable effect or response. The term is often used for carcinogenic effects, where tumours may appear a long time after the start of exposure and sometimes long after the cessation of exposure.

                                                    A dose threshold is a dose level below which no observable effect occurs. Thresholds are thought to exist for certain effects, like acute toxic effects; but not for others, like carcinogenic effects (by DNA-adduct-forming initiators). The mere absence of a response in a given population should not, however, be taken as evidence for the existence of a threshold. Absence of response could be due to simple statistical phenomena: an adverse effect occurring at low frequency may not be detectable in a small population.

LD50 (lethal dose) is the dose causing 50% lethality in an animal population. The LD50 is often given in older literature as a measure of acute toxicity of chemicals. The higher the LD50, the lower is the acute toxicity. A highly toxic chemical (with a low LD50) is said to be potent. There is no necessary correlation between acute and chronic toxicity. ED50 (effective dose) is the dose causing a specific effect other than lethality in 50% of the animals.

NOEL (NOAEL) means the no observed (adverse) effect level, or the highest dose that does not cause a toxic effect. To establish a NOEL requires multiple doses, a large population and additional information to make sure that absence of a response is not merely a statistical phenomenon. LOEL is the lowest observed effect level, that is, the lowest dose on a dose-response curve that causes an effect.

A safety factor is a formal, arbitrary number with which one divides the NOEL or LOEL derived from animal experiments to obtain a tentative permissible dose for humans. This is often used in the area of food toxicology, but may be used also in occupational toxicology. A safety factor may also be used for extrapolation of data from small populations to larger populations. Safety factors range from 1 to 1,000. A safety factor of two may typically be sufficient to protect from a less serious effect (such as irritation) and a factor as large as 1,000 may be used for very serious effects (such as cancer). The term safety factor could be better replaced by the term protection factor or even uncertainty factor. The use of the latter term reflects scientific uncertainties, such as whether exact dose-response data can be translated from animals to humans for the particular chemical, toxic effect or exposure situation.
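The arithmetic involved is simple. The sketch below uses a hypothetical animal NOAEL and a factor of 100 (often decomposed as 10 for animal-to-human extrapolation times 10 for variability between humans) to show how a tentative permissible dose might be derived; the figures are illustrative only.

# Minimal sketch: tentative permissible dose from a hypothetical animal NOAEL
# divided by an arbitrary safety (uncertainty) factor. Values are illustrative.
noael_mg_per_kg_day = 5.0     # hypothetical NOAEL from an animal study
safety_factor = 100           # e.g. 10 (animal to human) x 10 (human variability)
tentative_permissible_dose = noael_mg_per_kg_day / safety_factor
print(tentative_permissible_dose, "mg/kg body weight per day")  # 0.05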

                                                    Extrapolations are theoretical qualitative or quantitative estimates of toxicity (risk extrapolations) derived from translation of data from one species to another or from one set of dose-response data (typically in the high dose range) to regions of dose-response where no data exist. Extrapolations usually must be made to predict toxic responses outside the observation range. Mathematical modelling is used for extrapolations based upon an understanding of the behaviour of the chemical in the organism (toxicokinetic modelling) or based upon the understanding of statistical probabilities that specific biological events will occur (biologically or mechanistically based models). Some national agencies have developed sophisticated extrapolation models as a formalized method to predict risks for regulatory purposes. (See discussion of risk assessment later in the chapter.)
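As a purely illustrative example of such an extrapolation, the sketch below applies a simple linear, no-threshold extrapolation from a hypothetical point of departure in the observable high-dose range down to a much lower dose; actual regulatory models are considerably more sophisticated than this.

# Minimal sketch: linear no-threshold extrapolation from a hypothetical point
# of departure (10% extra risk observed at 5 mg/kg/day in animals) down to a
# low environmental dose. Illustrative only.
point_of_departure_dose = 5.0    # mg/kg/day, hypothetical
extra_risk_at_pod = 0.10         # extra risk observed at that dose
slope = extra_risk_at_pod / point_of_departure_dose  # extra risk per mg/kg/day

low_dose = 0.001                 # mg/kg/day, hypothetical exposure of interest
print(slope * low_dose)          # extrapolated extra risk, here 2e-05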

                                                    Systemic effects are toxic effects in tissues distant from the route of absorption.

Target organ is the primary or most sensitive organ affected after exposure. Depending on route of exposure, dose, dose rate, sex and species, the same chemical entering the body may affect different target organs. Interaction between chemicals, or between chemicals and other factors, may affect different target organs as well.

                                                    Acute effects occur after limited exposure and shortly (hours, days) after exposure and may be reversible or irreversible.

                                                    Chronic effects occur after prolonged exposure (months, years, decades) and/or persist after exposure has ceased.

                                                    Acute exposure is an exposure of short duration, while chronic exposure is long-term (sometimes life-long) exposure.

Tolerance to a chemical may occur when repeated exposures result in a lower response than what would have been expected without pretreatment.

                                                    Uptake and Disposition

                                                    Transport processes

                                                    Diffusion. In order to enter the organism and reach a site where damage is produced, a foreign substance has to pass several barriers, including cells and their membranes. Most toxic substances pass through membranes passively by diffusion. This may occur for small water-soluble molecules by passage through aqueous channels or, for fat-soluble ones, by dissolution into and diffusion through the lipid part of the membrane. Ethanol, a small molecule that is both water and fat soluble, diffuses rapidly through cell membranes.

                                                    Diffusion of weak acids and bases. Weak acids and bases may readily pass membranes in their non-ionized, fat-soluble form while ionized forms are too polar to pass. The degree of ionization of these substances depends on pH. If a pH gradient exists across a membrane they will therefore accumulate on one side. The urinary excretion of weak acids and bases is highly dependent on urinary pH. Foetal or embryonic pH is somewhat higher than maternal pH, causing a slight accumulation of weak acids in the foetus or embryo.
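The degree of ionization can be estimated from the Henderson-Hasselbalch relationship; the sketch below, with a hypothetical pKa, illustrates why a weak acid is largely non-ionized (and therefore membrane-permeable) at low gastric pH but largely ionized at plasma pH, which favours accumulation on the higher-pH side of a membrane.

# Minimal sketch: fraction of a weak acid in the non-ionized form, from the
# Henderson-Hasselbalch relationship (ionized/non-ionized = 10**(pH - pKa)).
# The pKa value is hypothetical.
def fraction_non_ionized_weak_acid(ph, pka):
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

pka = 3.5  # hypothetical weak acid
for ph in (2.0, 5.5, 7.4):
    print(f"pH {ph}: {fraction_non_ionized_weak_acid(ph, pka):.3f} non-ionized")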

                                                    Facilitated diffusion. The passage of a substance may be facilitated by carriers in the membrane. Facilitated diffusion is similar to enzyme processes in that it is protein mediated, highly selective, and saturable. Other substances may inhibit the facilitated transport of xenobiotics.

                                                    Active transport. Some substances are actively transported across cell membranes. This transport is mediated by carrier proteins in a process analogous to that of enzymes. Active transport is similar to facilitated diffusion, but it may occur against a concentration gradient. It requires energy input and a metabolic inhibitor can block the process. Most environmental pollutants are not transported actively. One exception is the active tubular secretion and reabsorption of acid metabolites in the kidneys.

                                                    Phagocytosis is a process where specialized cells such as macrophages engulf particles for subsequent digestion. This transport process is important, for example, for the removal of particles in the alveoli.

                                                    Bulk flow. Substances are also transported in the body along with the movement of air in the respiratory system during breathing, and the movements of blood, lymph or urine.

                                                    Filtration. Due to hydrostatic or osmotic pressure water flows in bulk through pores in the endothelium. Any solute that is small enough will be filtered together with the water. Filtration occurs to some extent in the capillary bed in all tissues but is particularly important in the formation of primary urine in the kidney glomeruli.

                                                    Absorption

                                                    Absorption is the uptake of a substance from the environment into the organism. The term usually includes not only the entrance into the barrier tissue but also the further transport into circulating blood.

Pulmonary absorption. The lungs are the primary route of deposition and absorption of small airborne particles, gases, vapours and aerosols. For highly water-soluble gases and vapours a significant part of the uptake occurs in the nose and the respiratory tree, but for less soluble substances it primarily takes place in the lung alveoli. The alveoli have a very large surface area (about 100 m² in humans). In addition, the diffusion barrier is extremely small, with only two thin cell layers and a distance in the order of micrometres from alveolar air to the systemic blood circulation. This makes the lungs very efficient not only in the exchange of oxygen and carbon dioxide but also of other gases and vapours. In general, the diffusion across the alveolar wall is so rapid that it does not limit the uptake. The absorption rate is instead dependent on flow (pulmonary ventilation, cardiac output) and solubility (blood:air partition coefficient). Another important factor is metabolic elimination. The relative importance of these factors for pulmonary absorption varies greatly for different substances. Physical activity results in increased pulmonary ventilation and cardiac output, and decreased liver blood flow (and, hence, biotransformation rate). For many inhaled substances this leads to a marked increase in pulmonary absorption.

                                                    Percutaneous absorption. The skin is a very efficient barrier. Apart from its thermoregulatory role, it is designed to protect the organism from micro-organisms, ultraviolet radiation and other deleterious agents, and also against excessive water loss. The diffusion distance in the dermis is on the order of tenths of millimetres. In addition, the keratin layer has a very high resistance to diffusion for most substances. Nevertheless, significant dermal absorption resulting in toxicity may occur for some substances—highly toxic, fat-soluble substances such as organophosphorous insecticides and organic solvents, for example. Significant absorption is likely to occur after exposure to liquid substances. Percutaneous absorption of vapour may be important for solvents with very low vapour pressure and high affinity to water and skin.

                                                    Gastrointestinal absorption occurs after accidental or intentional ingestion. Larger particles originally inhaled and deposited in the respiratory tract may be swallowed after mucociliary transport to the pharynx. Practically all soluble substances are efficiently absorbed in the gastrointestinal tract. The low pH of the gut may facilitate absorption, for instance, of metals.

                                                    Other routes. In toxicity testing and other experiments, special routes of administration are often used for convenience, although these are rare and usually not relevant in the occupational setting. These routes include intravenous (IV), subcutaneous (sc), intraperitoneal (ip) and intramuscular (im) injections. In general, substances are absorbed at a higher rate and more completely by these routes, especially after IV injection. This leads to short-lasting but high concentration peaks that may increase the toxicity of a dose.

                                                    Distribution

                                                    The distribution of a substance within the organism is a dynamic process which depends on uptake and elimination rates, as well as the blood flow to the different tissues and their affinities for the substance. Water-soluble, small, uncharged molecules, univalent cations, and most anions diffuse easily and will eventually reach a relatively even distribution in the body.

                                                    Volume of distribution is the amount of a substance in the body at a given time, divided by the concentration in blood, plasma or serum at that time. The value has no meaning as a physical volume, as many substances are not uniformly distributed in the organism. A volume of distribution of less than one l/kg body weight indicates preferential distribution in the blood (or serum or plasma), whereas a value above one indicates a preference for peripheral tissues such as adipose tissue for fat soluble substances.
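An illustrative calculation with hypothetical figures shows how the value is obtained and read:

# Minimal sketch: apparent volume of distribution. All values are hypothetical.
amount_in_body_mg = 70.0      # amount of the substance in the body at a given time
plasma_conc_mg_per_l = 2.0    # plasma concentration at the same time
body_weight_kg = 70.0

vd_litres = amount_in_body_mg / plasma_conc_mg_per_l    # 35 l
vd_l_per_kg = vd_litres / body_weight_kg                # 0.5 l/kg
print(vd_l_per_kg)  # below 1 l/kg, suggesting preferential distribution in blood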

                                                    Accumulation is the build-up of a substance in a tissue or organ to higher levels than in blood or plasma. It may also refer to a gradual build-up over time in the organism. Many xenobiotics are highly fat soluble and tend to accumulate in adipose tissue, while others have a special affinity for bone. For example, calcium in bone may be exchanged for cations of lead, strontium, barium and radium, and hydroxyl groups in bone may be exchanged for fluoride.

                                                    Barriers. The blood vessels in the brain, testes and placenta have special anatomical features that inhibit passage of large molecules like proteins. These features, often referred to as blood-brain, blood-testes, and blood-placenta barriers, may give the false impression that they prevent passage of any substance. These barriers are of little or no importance for xenobiotics that can diffuse through cell membranes.

                                                    Blood binding. Substances may be bound to red blood cells or plasma components, or occur unbound in blood. Carbon monoxide, arsenic, organic mercury and hexavalent chromium have a high affinity for red blood cells, while inorganic mercury and trivalent chromium show a preference for plasma proteins. A number of other substances also bind to plasma proteins. Only the unbound fraction is available for filtration or diffusion into eliminating organs. Blood binding may therefore increase the residence time in the organism but decrease uptake by target organs.

                                                    Elimination

                                                    Elimination is the disappearance of a substance in the body. Elimination may involve excretion from the body or transformation to other substances not captured by a specific method of measurement. The rate of disappearance may be expressed by the elimination rate constant, biological half-time or clearance.

                                                    Concentration-time curve. The curve of concentration in blood (or plasma) versus time is a convenient way of describing uptake and disposition of a xenobiotic.

                                                    Area under the curve (AUC) is the integral of concentration in blood (plasma) over time. When metabolic saturation and other non-linear processes are absent, AUC is proportional to the absorbed amount of substance.

                                                    Biological half-time (or half-life) is the time needed after the end of exposure to reduce the amount in the organism to one-half. As it is often difficult to assess the total amount of a substance, measurements such as the concentration in blood (plasma) are used. The half-time should be used with caution, as it may change, for example, with dose and length of exposure. In addition, many substances have complex decay curves with several half-times.
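The sketch below, using hypothetical first-order kinetics and sampled concentrations, shows how a half-time follows from an elimination rate constant and how an AUC can be approximated with the trapezoidal rule.

# Minimal sketch: biological half-time from a first-order elimination rate
# constant, and AUC from a sampled concentration-time curve (trapezoidal rule).
# The rate constant and the sampled concentrations are hypothetical.
import math

k_elim_per_h = 0.173                      # hypothetical elimination rate constant
half_time_h = math.log(2) / k_elim_per_h  # about 4 h

times_h = [0, 1, 2, 4, 8, 12]
conc_mg_l = [10.0, 8.4, 7.1, 5.0, 2.5, 1.25]  # hypothetical plasma concentrations

auc = sum((t2 - t1) * (c1 + c2) / 2.0
          for (t1, c1), (t2, c2) in zip(zip(times_h, conc_mg_l),
                                        zip(times_h[1:], conc_mg_l[1:])))
print(round(half_time_h, 1), "h;", round(auc, 1), "mg.h/l")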

                                                    Bioavailability is the fraction of an administered dose entering the systemic circulation. In the absence of presystemic clearance, or first-pass metabolism, the fraction is one. In oral exposure presystemic clearance may be due to metabolism within the gastrointestinal content, gut wall or liver. First-pass metabolism will reduce the systemic absorption of the substance and instead increase the absorption of metabolites. This may lead to a different toxicity pattern.

                                                    Clearance is the volume of blood (plasma) per unit time completely cleared of a substance. To distinguish from renal clearance, for example, the prefix total, metabolic or blood (plasma) is often added.

                                                    Intrinsic clearance is the capacity of endogenous enzymes to transform a substance, and is also expressed in volume per unit time. If the intrinsic clearance in an organ is much lower than the blood flow, the metabolism is said to be capacity limited. Conversely, if the intrinsic clearance is much higher than the blood flow, the metabolism is flow limited.
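One common textbook formulation of this distinction is the so-called well-stirred liver model, in which hepatic clearance equals Q x CL_int / (Q + CL_int), with Q the liver blood flow and CL_int the intrinsic clearance (plasma protein binding is ignored here for simplicity). The sketch below, with hypothetical values, shows the two limiting cases.

# Minimal sketch of capacity-limited versus flow-limited metabolism using the
# "well-stirred" liver model, ignoring protein binding. Values are hypothetical.
def hepatic_clearance(liver_blood_flow_l_min, intrinsic_clearance_l_min):
    q, cl_int = liver_blood_flow_l_min, intrinsic_clearance_l_min
    return q * cl_int / (q + cl_int)

q_liver = 1.5  # approximate liver blood flow in l/min
print(hepatic_clearance(q_liver, 0.1))   # CL_int << Q: about 0.09, capacity limited
print(hepatic_clearance(q_liver, 50.0))  # CL_int >> Q: about 1.46, approaches Q (flow limited)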

                                                    Excretion

                                                    Excretion is the exit of a substance and its biotransformation products from the organism.

                                                    Excretion in urine and bile. The kidneys are the most important excretory organs. Some substances, especially acids with high molecular weights, are excreted with bile. A fraction of biliary excreted substances may be reabsorbed in the intestines. This process, enterohepatic circulation, is common for conjugated substances following intestinal hydrolysis of the conjugate.

                                                    Other routes of excretion. Some substances, such as organic solvents and breakdown products such as acetone, are volatile enough so that a considerable fraction may be excreted by exhalation after inhalation. Small water-soluble molecules as well as fat-soluble ones are readily secreted to the foetus via the placenta, and into milk in mammals. For the mother, lactation can be a quantitatively important excretory pathway for persistent fat-soluble chemicals. The offspring may be secondarily exposed via the mother during pregnancy as well as during lactation. Water-soluble compounds may to some extent be excreted in sweat and saliva. These routes are generally of minor importance. However, as a large volume of saliva is produced and swallowed, saliva excretion may contribute to reabsorption of the compound. Some metals such as mercury are excreted by binding permanently to the sulphydryl groups of the keratin in the hair.

                                                    Toxicokinetic models

Mathematical models are important tools for understanding and describing the uptake and disposition of foreign substances. Most models are compartmental, that is, the organism is represented by one or more compartments. A compartment is a chemically and physically theoretical volume in which the substance is assumed to distribute homogeneously and instantaneously. Simple models may be expressed as a sum of exponential terms, while more complicated ones require numerical procedures on a computer for their solution. Models may be subdivided into two categories, descriptive and physiological.

                                                    In descriptive models, fitting to measured data is performed by changing the numerical values of the model parameters or even the model structure itself. The model structure normally has little to do with the structure of the organism. Advantages of the descriptive approach are that few assumptions are made and that there is no need for additional data. A disadvantage of descriptive models is their limited usefulness for extrapolations.

                                                    Physiological models are constructed from physiological, anatomical and other independent data. The model is then refined and validated by comparison with experimental data. An advantage of physiological models is that they can be used for extrapolation purposes. For example, the influence of physical activity on the uptake and disposition of inhaled substances may be predicted from known physiological adjustments in ventilation and cardiac output. A disadvantage of physiological models is that they require a large amount of independent data.
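As an illustration of a simple descriptive model, the sketch below implements a one-compartment model with first-order uptake and elimination, whose closed-form solution is a sum of two exponential terms as mentioned above; all parameter values are hypothetical.

# Minimal sketch: descriptive one-compartment toxicokinetic model with
# first-order uptake (ka) and elimination (ke). All values are hypothetical.
import math

def concentration(t_h, dose_mg=100.0, vd_l=35.0, ka=1.0, ke=0.173):
    """Blood (plasma) concentration in mg/l at t_h hours after uptake begins."""
    if abs(ka - ke) < 1e-9:
        raise ValueError("ka and ke must differ for this closed-form solution")
    return (dose_mg / vd_l) * ka / (ka - ke) * (math.exp(-ke * t_h) - math.exp(-ka * t_h))

for t in (0, 1, 2, 4, 8, 24):
    print(f"t = {t:>2} h: {concentration(t):.2f} mg/l")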

                                                    Biotransformation

Biotransformation is a process which leads to a metabolic conversion of foreign compounds (xenobiotics) in the body. The process is often referred to as metabolism of xenobiotics. As a general rule metabolism converts lipid-soluble xenobiotics to large, water-soluble metabolites that can be effectively excreted.

                                                    The liver is the main site of biotransformation. All xenobiotics taken up from the intestine are transported to the liver by a single blood vessel (vena porta). If taken up in small quantities a foreign substance may be completely metabolized in the liver before reaching the general circulation and other organs (first pass effect). Inhaled xenobiotics are distributed via the general circulation to the liver. In that case only a fraction of the dose is metabolized in the liver before reaching other organs.

                                                    Liver cells contain several enzymes that oxidize xenobiotics. This oxidation generally activates the compound—it becomes more reactive than the parent molecule. In most cases the oxidized metabolite is further metabolized by other enzymes in a second phase. These enzymes conjugate the metabolite with an endogenous substrate, so that the molecule becomes larger and more polar. This facilitates excretion.

                                                    Enzymes that metabolize xenobiotics are also present in other organs such as the lungs and kidneys. In these organs they may play specific and qualitatively important roles in the metabolism of certain xenobiotics. Metabolites formed in one organ may be further metabolized in a second organ. Bacteria in the intestine may also participate in biotransformation.

                                                    Metabolites of xenobiotics can be excreted by the kidneys or via the bile. They can also be exhaled via the lungs, or bound to endogenous molecules in the body.

                                                    The relationship between biotransformation and toxicity is complex. Biotransformation can be seen as a necessary process for survival. It protects the organism against toxicity by preventing accumulation of harmful substances in the body. However, reactive intermediary metabolites may be formed in biotransformation, and these are potentially harmful. This is called metabolic activation. Thus, biotransformation may also induce toxicity. Oxidized, intermediary metabolites that are not conjugated can bind to and damage cellular structures. If, for example, a xenobiotic metabolite binds to DNA, a mutation can be induced (see “Genetic toxicology”). If the biotransformation system is overloaded, a massive destruction of essential proteins or lipid membranes may occur. This can result in cell death (see “Cellular injury and cellular death”).

                                                    Metabolism is a word often used interchangeably with biotransformation. It denotes chemical breakdown or synthesis reactions catalyzed by enzymes in the body. Nutrients from food, endogenous compounds, and xenobiotics are all metabolized in the body.

                                                    Metabolic activation means that a less reactive compound is converted to a more reactive molecule. This usually occurs during Phase 1 reactions.

                                                    Metabolic inactivation means that an active or toxic molecule is converted to a less active metabolite. This usually occurs during Phase 2 reactions. In certain cases an inactivated metabolite might be reactivated, for example by enzymatic cleavage.

                                                    Phase 1 reaction refers to the first step in xenobiotic metabolism. It usually means that the compound is oxidized. Oxidation usually makes the compound more water soluble and facilitates further reactions.

                                                    Cytochrome P450 enzymes are a group of enzymes that preferentially oxidize xenobiotics in Phase 1 reactions. The different enzymes are specialized for handling specific groups of xenobiotics with certain characteristics. Endogenous molecules are also substrates. Cytochrome P450 enzymes are induced by xenobiotics in a specific fashion. Obtaining induction data on cytochrome P450 can be informative about the nature of previous exposures (see “Genetic determinants of toxic response”).

Phase 2 reaction refers to the second step in xenobiotic metabolism. It usually means that the oxidized compound is conjugated with (coupled to) an endogenous molecule. This reaction increases the water solubility further. Many conjugated metabolites are actively excreted via the kidneys.

                                                    Transferases are a group of enzymes that catalyze Phase 2 reactions. They conjugate xenobiotics with endogenous compounds such as glutathione, amino acids, glucuronic acid or sulphate.

                                                    Glutathione is an endogenous molecule, a tripeptide, that is conjugated with xenobiotics in Phase 2 reactions. It is present in all cells (and in liver cells in high concentrations), and usually protects from activated xenobiotics. When glutathione is depleted, toxic reactions between activated xenobiotic metabolites and proteins, lipids or DNA may occur.

Induction means that enzymes involved in biotransformation are increased (in activity or amount) as a response to xenobiotic exposure. In some cases enzyme activity can be increased several-fold within a few days. Induction is often balanced so that both Phase 1 and Phase 2 reactions are increased simultaneously. This may lead to a more rapid biotransformation and can explain tolerance. In contrast, unbalanced induction may increase toxicity.

                                                    Inhibition of biotransformation can occur if two xenobiotics are metabolized by the same enzyme. The two substrates have to compete, and usually one of the substrates is preferred. In that case the second substrate is not metabolized, or only slowly metabolized. As with induction, inhibition may increase as well as decrease toxicity.

Oxygen activation can be triggered by metabolites of certain xenobiotics. They may auto-oxidize, producing activated oxygen species. These oxygen-derived species, which include superoxide, hydrogen peroxide and the hydroxyl radical, may damage DNA, lipids and proteins in cells. Oxygen activation is also involved in inflammatory processes.

                                                    Genetic variability between individuals is seen in many genes coding for Phase 1 and Phase 2 enzymes. Genetic variability may explain why certain individuals are more susceptible to toxic effects of xenobiotics than others.

                                                     


                                                    Quality assurance

Decisions affecting the health, well-being, and employability of individual workers or an employer’s approach to health and safety issues must be based on data of good quality. This is especially so in the case of biological monitoring data, and it is therefore the responsibility of any laboratory undertaking analytical work on biological specimens from working populations to ensure the reliability, accuracy and precision of its results. This responsibility extends from providing suitable methods and guidance for specimen collection to ensuring that the results are returned to the health professional responsible for the care of the individual worker in a suitable form. All these activities are covered by the expression quality assurance.
                                                    The central activity in a quality assurance programme is the control and maintenance of analytical accuracy and precision. Biological monitoring laboratories have often developed in a clinical environment and have taken quality assurance techniques and philosophies from the discipline of clinical chemistry. Indeed, measurements of toxic chemicals and biological effect indicators in blood and urine are essentially no different from those made in clinical chemistry and in clinical pharmacology service laboratories found in any major hospital.
A quality assurance programme for an individual analyst starts with the selection and establishment of a suitable method. The next stage is the development of an internal quality control procedure to maintain precision; the laboratory then needs to satisfy itself of the accuracy of the analysis, and this may well involve external quality assessment (see below). It is important to recognize, however, that quality assurance includes more than these aspects of analytical quality control.

                                                    Method Selection
                                                    There are several texts presenting analytical methods in biological monitoring. Although these give useful guidance, much needs to be done by the individual analyst before data of suitable quality can be produced. Central to any quality assurance programme is the production of a laboratory protocol that must specify in detail those parts of the method which have the most bearing on its reliability, accuracy, and precision. Indeed, national accreditation of laboratories in clinical chemistry, toxicology, and forensic science is usually dependent on the quality of the laboratory’s protocols. Development of a suitable protocol is usually a time-consuming process. If a laboratory wishes to establish a new method, it is often most cost-effective to obtain from an existing laboratory a protocol that has proved its performance, for example, through validation in an established international quality assurance programme. Should the new laboratory be committed to a specific analytical technique, for example gas chromatography rather than high-performance liquid chromatography, it is often possible to identify a laboratory that has a good performance record and that uses the same analytical approach. Laboratories can often be identified through journal articles or through organizers of various national quality assessment schemes.

                                                    Internal Quality Control
                                                    The quality of analytical results depends on the precision of the method achieved in practice, and this in turn depends on close adherence to a defined protocol. Precision is best assessed by the inclusion of “quality control samples” at regular intervals during an analytical run. For example, for control of blood lead analyses, quality control samples are introduced into the run after every six or eight actual worker samples. More stable analytical methods can be monitored with fewer quality control samples per run. The quality control samples for blood lead analysis are prepared from 500 ml of blood (human or bovine) to which inorganic lead is added; individual aliquots are stored at low temperature (Bullock, Smith and Whitehead 1986). Before each new batch is put into use, 20 aliquots are analysed in separate runs on different occasions to establish the mean result for this batch of quality control samples, as well as its standard deviation (Whitehead 1977). These two figures are used to set up a Shewhart control chart (figure 27.2). The results from the analysis of the quality control samples included in subsequent runs are plotted on the chart. The analyst then uses rules for acceptance or rejection of an analytical run depending on whether the results of these samples fall within two or three standard deviations (SD) of the mean. A sequence of rules, validated by computer modelling, has been suggested by Westgard et al. (1981) for application to control samples. This approach to quality control is described in textbooks of clinical chemistry and a simple approach to the introduction of quality assurance is set forth in Whitehead (1977). It must be emphasized that these techniques of quality control depend on the preparation and analysis of quality control samples separately from the calibration samples that are used on each analytical occasion.

                                                    Figure 27.2 Shewhart control chart for quality control samples

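As a simple illustration of such acceptance rules (not the fuller Westgard multi-rule scheme), the sketch below establishes a mean and standard deviation from 20 hypothetical baseline results for a batch of quality control samples and then classifies later quality control results against 2 SD and 3 SD limits.

# Minimal sketch of simple Shewhart-chart acceptance rules: a run is queried if
# a quality control result falls outside mean +/- 2 SD and rejected if it falls
# outside mean +/- 3 SD. All values are hypothetical.
from statistics import mean, stdev

baseline = [2.10, 2.05, 2.15, 2.08, 2.12, 2.02, 2.18, 2.07, 2.11, 2.09,
            2.14, 2.06, 2.13, 2.04, 2.16, 2.10, 2.08, 2.12, 2.05, 2.11]  # 20 baseline runs
m, sd = mean(baseline), stdev(baseline)

def judge(qc_result):
    if abs(qc_result - m) > 3 * sd:
        return "reject run"
    if abs(qc_result - m) > 2 * sd:
        return "warning - inspect run"
    return "accept run"

for result in (2.11, 2.20, 2.35):
    print(result, judge(result))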

                                                    This approach can be adapted to a range of biological monitoring or biological effect monitoring assays. Batches of blood or urine samples can be prepared by addition of either the toxic material or the metabolite that is to be measured. Similarly, blood, serum, plasma, or urine can be aliquotted and stored deep-frozen or freeze-dried for measurement of enzymes or proteins. However, care has to be taken to avoid infective risk to the analyst from samples based on human blood.
                                                    Careful adherence to a well-defined protocol and to rules for acceptability is an essential first stage in a quality assurance programme. Any laboratory must be prepared to discuss its quality control and quality assessment performance with the health professionals using it and to investigate surprising or unusual findings.

                                                    External Quality Assessment
                                                    Once a laboratory has established that it can produce results with adequate precision, the next stage is to confirm the accuracy (“trueness”) of the measured values, that is, the relationship of the measurements made to the actual amount present. This is a difficult exercise for a laboratory to do on its own but can be achieved by taking part in a regular external quality assessment scheme. These have been an essential part of clinical chemistry practice for some time but have not been widely available for biological monitoring. The exception is blood lead analysis, where schemes have been available since the 1970s (e.g., Bullock, Smith and Whitehead 1986). Comparison of analytical results with those reported from other laboratories analysing samples from the same batch allows assessment of a laboratory’s performance compared with others, as well as a measure of its accuracy. Several national and international quality assessment schemes are available. Many of these schemes welcome new laboratories, as the validity of the mean of the results of an analyte from all the participating laboratories (taken as a measure of the actual concentration) increases with the number of participants. Schemes with many participants are also more able to analyse laboratory performance according to analytical method and thus advise on alternatives to methods with poor performance characteristics. In some countries, participation in such a scheme is an essential part of laboratory accreditation. Guidelines for external quality assessment scheme design and operation have been published by the WHO (1981).
                                                    In the absence of established external quality assessment schemes, accuracy may be checked using certified reference materials which are available on a commercial basis for a limited range of analytes. The advantages of samples circulated by external quality assessment schemes are that (1) the analyst does not have fore-knowledge of the result, (2) a range of concentrations is presented, and (3) as definitive analytical methods do not have to be employed, the materials involved are cheaper.

                                                    Pre-analytical Quality Control
                                                    Effort spent in attaining good laboratory accuracy and precision is wasted if the samples presented to the laboratory have not been taken at the correct time, if they have suffered contamination, have deteriorated during transport, or have been inadequately or incorrectly labelled. It is also bad professional practice to submit individuals to invasive sampling without taking adequate care of the sampled materials. Although sampling is often not under the direct control of the laboratory analyst, a full quality programme of biological monitoring must take these factors into account and the laboratory should ensure that syringes and sample containers provided are free from contamination, with clear instructions about sampling technique and sample storage and transport. The importance of the correct sampling time within the shift or working week and its dependence on the toxicokinetics of the sampled material are now recognized (ACGIH 1993; HSE 1992), and this information should be made available to the health professionals responsible for collecting the samples.

                                                    Post-analytical Quality Control
                                                    High-quality analytical results may be of little use to the individual or health professional if they are not communicated to the professional in an interpretable form and at the right time. Each biological monitoring laboratory should develop reporting procedures for alerting the health care professional submitting the samples to abnormal, unexpected, or puzzling results in time to allow appropriate action to be taken. Interpretation of laboratory results, especially changes in concentration between successive samples, often depends on knowledge of the precision of the assay. As part of total quality management from sample collection to return of results, health professionals should be given information concerning the biological monitoring laboratory’s precision and accuracy, as well as reference ranges and advisory and statutory limits, in order to help them in interpreting the results. 

                                                     


It is difficult to speak of work analysis without putting it in the perspective of recent changes in the industrial world, because the nature of activities and the conditions in which they are carried out have undergone considerable evolution in recent years. The factors giving rise to these changes have been numerous, but two have proved crucial. On the one hand, technological progress with its ever-quickening pace and the upheavals brought about by information technologies have revolutionized jobs (De Keyser 1986). On the other hand, the uncertainty of the economic market has required more flexibility in personnel management and work organization. While workers have gained a wider view of the production process that is less routine-oriented and undoubtedly more systematic, they have at the same time lost their exclusive links with a particular environment, team and production tool. It is difficult to view these changes with serenity, but we have to face the fact that a new industrial landscape has been created, sometimes more enriching for those workers who can find their place in it, but also filled with pitfalls and worries for those who are marginalized or excluded.

One idea, however, is being taken up in firms and has been confirmed by pilot experiments in many countries: it should be possible to guide changes and soften their adverse effects through relevant analyses and by using all available resources for negotiation between the different actors in the workplace. It is within this context that we must place work analyses today: as tools allowing us to describe tasks and activities better in order to guide interventions of different kinds, such as training, the setting up of new organizational modes or the design of tools and work systems. We speak of analyses, and not just one analysis, because a large number of them exist, depending on the theoretical and cultural contexts in which they are developed, the particular goals they pursue, the evidence they collect, and the analyser's concern for either specificity or generality. In this article we will limit ourselves to presenting a few characteristics of work analyses and to emphasizing the importance of collective work. Our conclusions will point to other paths that the limits of this text prevent us from pursuing in greater depth.

                                                    Some Characteristics of Work Analyses

                                                    The context

If the primary goal of any work analysis is to describe what the operator does, or should do, placing that activity more precisely in its context has often seemed indispensable to researchers. They invoke, according to their own views but in a broadly similar manner, the concepts of context, situation, environment, work domain, work world or work environment. The problem lies less in the nuances between these terms than in the selection of the variables that need to be described in order to give them a useful meaning. Indeed, the industrial world is vast and complex, and the characteristics that could be referred to are innumerable. Two tendencies can be noted among authors in the field. The first sees the description of the context as a means of capturing the reader's interest and providing him or her with an adequate semantic framework. The second takes a different theoretical perspective: it attempts to embrace both context and activity, describing only those elements of the context that are capable of influencing the behavior of operators.

                                                    The semantic framework

Context has evocative power. For an informed reader, it is enough to read about an operator in a control room engaged in a continuous process to call up a picture of work carried out through remote commands and monitoring, in which the tasks of detection, diagnosis and regulation predominate. What variables need to be described in order to create a sufficiently meaningful context? It all depends on the reader. Nonetheless, there is a consensus in the literature on a few key variables. The nature of the economic sector, the type of production or service, the size and the geographical location of the site are all useful.

The production processes, the tools or machines used and their level of automation allow certain constraints and certain necessary qualifications to be inferred. The structure of the personnel, together with age and level of qualification and experience, are crucial data whenever the analysis concerns aspects of training or of organizational flexibility. The organization of work that has been established depends more on the firm's philosophy than on technology. Its description includes, notably, work schedules, the degree of centralization of decisions and the types of control exercised over the workers. Other elements may be added in different cases. They are linked to the firm's history and culture, its economic situation, working conditions, and any restructuring, mergers and investments. There exist at least as many systems of classification as there are authors, and numerous descriptive lists are in circulation. In France, a special effort has been made to generalize simple descriptive methods, notably allowing certain factors to be ranked according to whether or not they are satisfactory for the operator (RNUR 1976; Guelaud et al. 1977).

                                                    The description of relevant factors regarding the activity

The taxonomy of complex systems described by Rasmussen, Pejtersen and Schmidts (1990) represents one of the most ambitious attempts to cover at the same time the context and its influence on the operator. Its main idea is to integrate, in a systematic fashion, the different elements of which the work situation is composed and to bring out the degrees of freedom and the constraints within which individual strategies can develop. Its exhaustive scope makes it difficult to apply, but its use of multiple modes of representation, including graphs, to illustrate the constraints has a heuristic value that is bound to be attractive to many readers. Other approaches are more targeted: what their authors seek is the selection of the factors that can influence a specific activity. Hence, with an interest in the control of processes in a changing environment, Brehmer (1990) proposes a series of temporal characteristics of the context which affect the operator's control and anticipation (see figure 1). This typology was developed from "micro-worlds", computerized simulations of dynamic situations, but the author himself, along with many others since, has used it for the continuous-process industries (Van Daele 1992). For certain activities the influence of the environment is well known, and the selection of factors is not too difficult. Thus, if we are interested in heart rate in the work environment, we often limit ourselves to describing air temperatures, the physical constraints of the task or the age and training of the subject, even though we know that by doing so we may leave out relevant elements. For other activities the choice is more difficult. Studies on human error, for example, show that the factors capable of producing them are numerous (Reason 1989). Sometimes, when theoretical knowledge is insufficient, only statistical processing, combining context and activity analysis, allows the relevant contextual factors to be brought out (Fadier 1990).

                                                    Figure 1. The criteria and sub-criteria of the taxonomy of micro-worlds proposed by Brehmer (1990)

                                                    ERG040T1

                                                    The Task or the Activity?

                                                    The task

The task is defined by its objectives, its constraints and the means it requires for its achievement. A function within the firm is generally characterized by a set of tasks. The task actually carried out differs from the task prescribed and scheduled by the firm for a large number of reasons: operators' strategies vary within and among individuals, the environment fluctuates, and random events require responses that often lie outside the prescribed framework. Finally, the task is not always scheduled with correct knowledge of its conditions of execution, hence the need for adaptations in real time. But even if the task is updated during the activity, sometimes to the point of being transformed, it still remains the central reference.

                                                    Questionnaires, inventories, and taxonomies of tasks are numerous, especially in the English-language literature—the reader will find excellent reviews in Fleishman and Quaintance (1984) and in Greuter and Algera (1989). Certain of these instruments are merely lists of elements—for example, the action verbs to illustrate tasks—that are checked off according to the function studied. Others have adopted a hierarchical principle, characterizing a task as interlocking elements, ordered from the global to the particular. These methods are standardized and can be applied to a large number of functions; they are simple to use, and the analytical stage is much shortened. But where it is a question of defining specific work, they are too static and too general to be useful.

                                                    Next, there are those instruments requiring more skill on the part of the researcher; since the elements of analysis are not predefined, it is up to the researcher to characterize them. The already outdated critical incident technique of Flanagan (1954), where the observer describes a function by reference to its difficulties and identifies the incidents which the individual will have to face, belongs to this group.

                                                    It is also the path adopted by cognitive task analysis (Roth and Woods 1988). This technique aims to bring to light the cognitive requirements of a job. One way to do this is to break the job down into goals, constraints and means. Figure 2 shows how the task of an anesthetist, characterized first by a very global goal of patient survival, can be broken down into a series of sub-goals, which can themselves be classified as actions and means to be employed. More than 100 hours of observation in the operating theatre and subsequent interviews with anesthetists were necessary to obtain this synoptic “photograph” of the requirements of the function. This technique, although quite laborious, is nevertheless useful in ergonomics in determining whether all the goals of a task are provided with the means of attaining them. It also allows for an understanding of the complexity of a task (its particular difficulties and conflicting goals, for example) and facilitates the interpretation of certain human errors. But it suffers, as do other methods, from the absence of a descriptive language (Grant and Mayes 1991). Moreover, it does not permit hypotheses to be formulated as to the nature of the cognitive processes brought into play to attain the goals in question.

                                                    Figure 2. Cognitive analysis of the task: general anesthesia

                                                    ERG040F1
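To make the decomposition into goals, constraints and means more concrete, here is a minimal sketch of how such a hierarchy might be represented and queried for goals left without means of attainment, one of the uses mentioned above. The node names and structure are invented and do not reproduce the anesthesia analysis of figure 2.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """One node of a cognitive task analysis: a goal, the means available
    for attaining it, and the sub-goals into which it decomposes."""
    name: str
    means: list = field(default_factory=list)
    subgoals: list = field(default_factory=list)

def goals_without_means(goal, path=""):
    """Yield every goal in the hierarchy that has neither means nor sub-goals,
    i.e., a requirement of the task left without a way of attaining it."""
    here = f"{path}/{goal.name}"
    if not goal.means and not goal.subgoals:
        yield here
    for sub in goal.subgoals:
        yield from goals_without_means(sub, here)

# Invented fragment of a goal hierarchy
task = Goal("keep process within safe limits", subgoals=[
    Goal("detect deviation", means=["alarm display", "trend curves"]),
    Goal("diagnose cause", means=["fault schema"]),
    Goal("compensate disturbance"),   # no means identified yet
])

print(list(goals_without_means(task)))
```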

Other approaches have analyzed the cognitive processes associated with given tasks by drawing up hypotheses about the information processing necessary to accomplish them. A frequently employed cognitive model of this kind is Rasmussen's (1986), which provides, according to the nature of the task and its familiarity for the subject, three possible levels of activity, based either on skill-based habits and reflexes, on acquired rule-based procedures, or on knowledge-based procedures. But other models or theories that reached the height of their popularity during the 1970s remain in use. Hence, the theory of optimal control, which considers the human being as a controller of discrepancies between assigned and observed goals, is sometimes still applied to cognitive processes. And modeling by means of networks of interconnected tasks and flow charts continues to inspire the authors of cognitive task analysis; figure 3 provides a simplified description of the behavioral sequences in an energy-control task, constructing a hypothesis about certain mental operations. All these attempts reflect the concern of researchers to bring together in the same description not only elements of the context but also the task itself and the cognitive processes that underlie it, and to reflect the dynamic character of work as well.

                                                    Figure 3. A simplified description of the determinants of a behavior sequence in energy control tasks: a case of unacceptable consumption of energy

                                                    ERG040F2

Since the arrival of the scientific organization of work, the concept of the prescribed task has been adversely criticized because it has been viewed as imposing on workers tasks that are not only designed without consulting their needs but are often accompanied by a specified performance time, a restriction not welcomed by many workers. Even if the imposition aspect has become rather more flexible today, and even if workers contribute more often to the design of tasks, an assigned time for tasks remains necessary for schedule planning and remains an essential component of work organization. The quantification of time should not always be perceived negatively. It constitutes a valuable indicator of workload. A simple but common method of measuring the time pressure exerted on a worker consists of calculating the quotient of the time necessary for the execution of a task divided by the time available. The closer this quotient is to unity, the greater the pressure (Wickens 1992). Moreover, quantification can be used in flexible but appropriate personnel management. Let us take the case of nurses, where the technique of predictive task analysis has been generalized, for example, in the Canadian regulation Planning of Required Nursing (PRN 80) (Kepenne 1984) or one of its European variants. Thanks to such task lists, accompanied by their mean execution times, one can each morning, taking into account the number of patients and their medical conditions, establish a care schedule and a distribution of personnel. Far from being a constraint, PRN 80 has, in a number of hospitals, demonstrated that a shortage of nursing personnel exists, since the technique allows a difference to be established (see figure 4) between the desired and the observed, that is, between the number of staff necessary and the number available, and even between the tasks planned and the tasks carried out. The calculated times are only averages, and the fluctuations of the situation do not always make them applicable, but this negative aspect is minimized by a flexible organization that accepts adjustments and allows the personnel to take part in making them. A simplified illustration of these calculations is sketched after figure 4.

                                                    Figure 4.  Discrepancies between the numbers of personnel present and required  on the basis of PRN80

                                                    ERG040F3
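The calculations referred to above are elementary; the sketch below illustrates them with invented figures. It computes the time-pressure quotient (time required divided by time available) and a deliberately simplified staffing estimate of the kind a task list with mean execution times makes possible. It is not the actual PRN 80 procedure.

```python
import math

def time_pressure(required_minutes, available_minutes):
    """Quotient of time needed over time available; values near 1 mean high pressure."""
    return required_minutes / available_minutes

def nurses_required(care_minutes_per_patient, minutes_per_nurse_shift):
    """Crude staffing estimate: total planned care time divided by the productive
    time one nurse can give in a shift (illustrative assumption only)."""
    total = sum(care_minutes_per_patient.values())
    return math.ceil(total / minutes_per_nurse_shift)

print(time_pressure(required_minutes=50, available_minutes=60))   # ~0.83: moderate pressure

care_plan = {"patient A": 95, "patient B": 180, "patient C": 240}  # invented mean times
print(nurses_required(care_plan, minutes_per_nurse_shift=420))     # 2 nurses needed
```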

                                                    The activity, the evidence, and the performance

An activity is defined as the set of behaviors and resources used by the operator so that work occurs, that is to say, the transformation or production of goods or the rendering of a service. This activity can be understood through observation in different ways. Faverge (1972) has described four forms of analysis. The first is an analysis in terms of gestures and postures, where the observer locates, within the visible activity of the operator, classes of behavior that are recognizable and repeated during work. These activities are often coupled with a precise response, for example the heart rate, which allows the physical load associated with each activity to be assessed. The second form of analysis is in terms of information uptake. What is discovered, through direct observation or with the aid of cameras or eye-movement recorders, is the set of signals picked up by the operator in the information field surrounding him or her. This analysis is particularly useful in cognitive ergonomics for trying to understand better the information processing carried out by the operator. The third type of analysis is in terms of regulation. The idea is to identify the adjustments of activity carried out by the operator in order to deal either with fluctuations in the environment or with changes in his or her own condition. Here we find the direct intervention of context within the analysis. One of the most frequently cited research projects in this area is that of Sperandio (1972), who studied the activity of air traffic controllers and identified important changes of strategy during increases in air traffic. He interpreted them as an attempt to simplify the activity so as to maintain an acceptable load level while continuing to meet the requirements of the task. The fourth is an analysis in terms of thought processes. This type of analysis has been widely used in the ergonomics of highly automated workstations. Indeed, the design of computerized aids, and notably of intelligent aids for the operator, requires a thorough understanding of the way in which the operator reasons in order to solve certain problems. The reasoning involved in scheduling, anticipation and diagnosis has been the subject of analyses, an example of which can be found in figure 5. However, evidence of mental activity can only be inferred. Apart from certain observable aspects of behavior, such as eye movements and problem-solving time, most of these analyses resort to verbal reports. Particular emphasis has been placed, in recent years, on the knowledge necessary to accomplish certain activities, with researchers trying not to postulate it at the outset but to make it apparent through the analysis itself.

                                                    Figure 5. Analysis of mental activity. Strategies in the control of processes  with long response times: the need for computerized support in diagnosis

                                                    ERG040T2

Such efforts have brought to light the fact that almost identical performances can be obtained with very different levels of knowledge, as long as operators are aware of their limits and apply strategies adapted to their capabilities. Hence, in our study of the start-up of a thermoelectric plant (De Keyser and Housiaux 1989), the start-ups were carried out by both engineers and operators. The theoretical and procedural knowledge that these two groups possessed, which had been elicited by means of interviews and questionnaires, was very different. The operators in particular sometimes had an erroneous understanding of the variables in the functional links of the process. In spite of this, the performances of the two groups were very close. But the operators took more variables into account in order to verify the control of the start-up and made more frequent verifications. Such results were also obtained by Amalberti (1991), who mentioned the existence of metaknowledge allowing experts to manage their own resources.

                                                    What evidence of activity is appropriate to elicit? Its nature, as we have seen, depends closely on the form of analysis planned. Its form varies according to the degree of methodological care exercised by the observer. Provoked evidence is distinguished from spontaneous evidence and concomitant from subsequent evidence. Generally speaking, when the nature of the work allows, concomitant and spontaneous evidence are to be preferred. They are free of various drawbacks such as the unreliability of memory, observer interference, the effect of rationalizing reconstruction on the part of the subject, and so forth. To illustrate these distinctions, we will take the example of verbalizations. Spontaneous verbalizations are verbal exchanges, or monologues expressed spontaneously without being requested by the observer; provoked verbalizations are those made at the specific request of the observer, such as the request made to the subject to “think aloud”, which is well known in the cognitive literature. Both types can be done in real-time, during work, and are thus concomitant.

                                                    They can also be subsequent, as in interviews, or subjects’ verbalizations when they view videotapes of their work. As for the validity of the verbalizations, the reader should not ignore the doubt raised in this regard by the controversy between Nisbett and De Camp Wilson (1977) and White (1988) and the precautions suggested by numerous authors aware of their importance in the study of mental activity in view of the methodological difficulties encountered (Ericson and Simon 1984; Savoyant and Leplat 1983; Caverni 1988; Bainbridge 1986).

The organization of this evidence, its processing and its formalization require descriptive languages and sometimes analyses that go beyond field observation. Those mental activities which are inferred from the evidence, for example, remain hypothetical. Today they are often described using languages derived from artificial intelligence, making use of representations in terms of schemata, production rules and connecting networks. Moreover, the use of computerized simulations, of micro-worlds, to pinpoint certain mental activities has become widespread, even though the validity of the results obtained from such simulations, in view of the complexity of the industrial world, is subject to debate. Finally, we must mention the cognitive models of certain mental activities derived from the field. Among the best known are the diagnosis of the operator of a nuclear power plant, carried out at ISPRA (Decortis and Cacciabue 1990), and the planning of the combat pilot, developed at the Centre d'études et de recherches de médecine aérospatiale (CERMA) (Amalberti et al. 1989).

                                                    Measurement of the discrepancies between the performance of these models and that of real, living operators is a fruitful field in activity analysis. Performance is the outcome of the activity, the final response given by the subject to the requirements of the task. It is expressed at the level of production: productivity, quality, error, incident, accident—and even, at a more global level, absenteeism or turnover. But it must also be identified at the individual level: the subjective expression of satisfaction, stress, fatigue or workload, and many physiological responses are also performance indicators. Only the entire set of data permits interpretation of the activity—that is to say, judging whether or not it furthers the desired goals while remaining within human limits. There exists a set of norms which, up to a certain point, guide the observer. But these norms are not situated—they do not take into account the context, its fluctuations and the condition of the worker. This is why in design ergonomics, even when rules, norms, and models exist, designers are advised to test the product using prototypes as early as possible and to evaluate the users’ activity and performance.

                                                    Individual or Collective Work?

While in the vast majority of cases work is a collective act, most work analyses focus on individual tasks or activities. Nonetheless, the fact is that technological evolution, like work organization itself, today emphasizes distributed work, whether between workers and machines or simply within a group. What paths have authors explored in order to take this distribution into account (Rasmussen, Pejtersen and Schmidts 1990)? They focus on three aspects: structure, the nature of exchanges and structural lability.

                                                    Structure

Whether we view structure in terms of individuals, of services, or even of different branches of a firm working in a network, the description of the links that unite them remains a problem. We are very familiar with the organization charts (organigrams) within firms that indicate the structure of authority and whose various forms reflect the organizational philosophy of the firm: very hierarchical for a Taylor-like structure, or flattened like a rake, even matrix-like, for a more flexible structure. Other descriptions of distributed activities are possible; an example is given in figure 6. More recently, the need for firms to represent their information exchanges at a global level has led to a rethinking of information systems. Thanks to certain descriptive languages, for example design schemas or entity-relationship-attribute matrices, the structure of relations at the collective level can today be described in a very abstract manner and can serve as a springboard for the creation of computerized management systems.

                                                    Figure 6.  Integrated life cycle design

                                                    ERG040F5

                                                    The nature of exchanges

Simply having a description of the links uniting the entities says little about the content of the exchanges themselves; of course the nature of the relation can be specified (movement from place to place, information transfers, hierarchical dependence, and so on), but this is often quite inadequate. The analysis of communications within teams has become a favored means of capturing the very nature of collective work: the subjects discussed, the creation of a common language within a team, the modification of communications when circumstances are critical, and so forth (Tardieu, Nanci and Pascot 1985; Rolland 1986; Navarro 1990; Van Daele 1992; Lacoste 1983; Moray, Sanderson and Vincente 1989). Knowledge of these interactions is particularly useful for the creation of computer tools, notably decision-making aids, and for understanding errors. The different stages and the methodological difficulties linked to the use of this evidence have been well described by Falzon (1991).

                                                    Structural lability

It is the work on activities, rather than on tasks, that has opened up the field of structural lability, that is to say, of the constant reconfigurations of collective work under the influence of contextual factors. Studies such as those of Rogalski (1991), who over a long period analyzed the collective activities involved in dealing with forest fires in France, and of Bourdon and Weill Fassina (1994), who studied the organizational structure set up to deal with railway accidents, are both very informative. They clearly show how the context molds the structure of exchanges, the number and type of actors involved, the nature of the communications and the number of parameters essential to the work. The more this context fluctuates, the further fixed descriptions of the task are removed from reality. Knowledge of this lability, and a better understanding of the phenomena that take place within it, are essential in planning for the unpredictable and in providing better training for those involved in collective work in a crisis.

                                                    Conclusions

The various phases of work analysis described above form an iterative part of any human factors design cycle (see figure 6). In the design of any technical object in which human factors are a consideration, whether a tool, a workstation or a factory, certain information is needed at the appropriate time. In general, the beginning of the design cycle is characterized by the need for data on environmental constraints, on the types of jobs to be carried out and on the various characteristics of the users. This initial information allows the specifications of the object to be drawn up so as to take work requirements into account. But this is, in some sense, only a coarse model compared with the real work situation. This explains why models and prototypes are necessary which, from their inception, allow not the jobs themselves but the activities of the future users to be evaluated. Consequently, while the design of the images on a monitor in a control room can be based on a thorough cognitive analysis of the job to be done, only a data-based analysis of the activity will allow an accurate determination of whether the prototype will actually be of use in the real work situation (Van Daele 1988). Once the finished technical object is put into operation, greater emphasis is placed on the performance of the users and on dysfunctional situations, such as accidents or human error. The gathering of this type of information allows the final corrections to be made that will increase the reliability and usability of the completed object. Both the nuclear industry and the aeronautics industry serve as examples: operational feedback involves reporting every incident that occurs. In this way, the design loop comes full circle.

                                                     



                                                    Toxicokinetics

                                                    The human organism represents a complex biological system on various levels of organization, from the molecular-cellular level to the tissues and organs. The organism is an open system, exchanging matter and energy with the environment through numerous biochemical reactions in a dynamic equilibrium. The environment can be polluted, or contaminated with various toxicants.

                                                    Penetration of molecules or ions of toxicants from the work or living environment into such a strongly coordinated biological system can reversibly or irreversibly disturb normal cellular biochemical processes, or even injure and destroy the cell (see “Cellular injury and cellular death”).

                                                    Penetration of a toxicant from the environment to the sites of its toxic effect inside the organism can be divided into three phases:

1. The exposure phase encompasses all the processes occurring among the various toxicants themselves and/or the influence of environmental factors (light, temperature, humidity, etc.) upon them. Chemical transformations, degradation, biodegradation (by micro-organisms) as well as disintegration of toxicants can occur.
2. The toxicokinetic phase encompasses the absorption of toxicants into the organism and all the processes that follow: transport by body fluids, distribution and accumulation in tissues and organs, biotransformation to metabolites, and elimination (excretion) of toxicants and/or metabolites from the organism.
3. The toxicodynamic phase refers to the interaction of toxicants (molecules, ions, colloids) with specific sites of action on or inside the cells (receptors), ultimately producing a toxic effect.

                                                     

                                                    Here we will focus our attention exclusively on the toxicokinetic processes inside the human organism following exposure to toxicants in the environment.

                                                    The molecules or ions of toxicants present in the environment will penetrate into the organism through the skin and mucosa, or the epithelial cells of the respiratory and gastrointestinal tracts, depending on the point of entry. That means molecules and ions of toxicants must penetrate through cellular membranes of these biological systems, as well as through an intricate system of endomembranes inside the cell.

All toxicokinetic and toxicodynamic processes occur on the molecular-cellular level. Numerous factors influence these processes, and they can be divided into two basic groups:

                                                    • chemical constitution and physicochemical properties of toxicants
• structure of the cell, especially the properties and function of the membranes around the cell and its interior organelles.

                                                     

                                                    Physico-Chemical Properties of Toxicants

In 1854 the Russian toxicologist E.V. Pelikan started studies on the relation between the chemical structure of a substance and its biological activity, the structure-activity relationship (SAR). Chemical structure directly determines physico-chemical properties, some of which are responsible for biological activity.

                                                    To define the chemical structure numerous parameters can be selected as descriptors, which can be divided into various groups:

                                                    1. Physico-chemical:

                                                    • general—melting point, boiling point, vapour pressure, dissociation constant (pKa)
• electric—ionization potential, dielectric constant, dipole moment, mass-to-charge ratio, etc.
                                                    • quantum chemical—atomic charge, bond energy, resonance energy, electron density, molecular reactivity, etc.

                                                     

                                                     2. Steric: molecular volume, shape and surface area, substructure shape, molecular reactivity, etc.
 3. Structural: number of bonds, number of rings (in polycyclic compounds), extent of branching, etc.

                                                     

                                                    For each toxicant it is necessary to select a set of descriptors related to a particular mechanism of activity. However, from the toxicokinetic point of view two parameters are of general importance for all toxicants:

                                                    • The Nernst partition coefficient (P) establishes the solubility of toxicant molecules in the two-phase octanol (oil)-water system, correlating to their lipo- or hydrosolubility. This parameter will greatly influence the distribution and accumulation of toxicant molecules in the organism.
• The dissociation constant (pKa) defines the degree of ionization (electrolytic dissociation) of the molecules of a toxicant into charged cations and anions at a particular pH. This constant represents the pH at which 50% ionization is achieved. Molecules can be lipophilic or hydrophilic, but ions are soluble only in the water of body fluids and tissues. Knowing the pKa, it is possible to calculate the degree of ionization of a substance at each pH using the Henderson-Hasselbalch equation (a small worked sketch follows).
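As an illustration of that calculation, the following sketch computes the ionized fraction of a weak acid or base at a given pH from its pKa using the Henderson-Hasselbalch relationship; the pKa and pH values are invented examples.

```python
def ionized_fraction(pKa, pH, kind="acid"):
    """Fraction of molecules present in ionized form at a given pH.

    For a weak acid:  ionized / total = 1 / (1 + 10**(pKa - pH))
    For a weak base:  ionized / total = 1 / (1 + 10**(pH - pKa))
    """
    exponent = (pKa - pH) if kind == "acid" else (pH - pKa)
    return 1.0 / (1.0 + 10.0 ** exponent)

# A hypothetical weak acid with pKa 4.0:
print(f"{ionized_fraction(4.0, pH=2.0):.3f}")  # ~0.010 at gastric pH: mostly non-ionized
print(f"{ionized_fraction(4.0, pH=7.4):.3f}")  # ~1.000 at plasma pH: almost fully ionized
print(f"{ionized_fraction(4.0, pH=4.0):.3f}")  # 0.500 when pH equals pKa, as stated above
```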

                                                     

For inhaled dusts and aerosols, the particle size, shape, surface area and density also influence their toxicokinetics and toxicodynamics.

                                                    Structure and Properties of Membranes

The eukaryotic cell of human and animal organisms is encircled by a cytoplasmic membrane regulating the transport of substances and maintaining cell homeostasis. The cell organelles (nucleus, mitochondria) possess membranes too. The cell cytoplasm is compartmentalized by intricate membranous structures, the endoplasmic reticulum and Golgi complex (endomembranes). All these membranes are structurally alike, but vary in the content of lipids and proteins.

The structural framework of membranes is a bilayer of lipid molecules (phospholipids, sphingolipids, cholesterol). The backbone of a phospholipid molecule is glycerol, with two of its -OH groups esterified by aliphatic fatty acids with 16 to 18 carbon atoms and the third esterified by a phosphate group and a nitrogenous compound (choline, ethanolamine, serine). In sphingolipids, sphingosine is the base.

The lipid molecule is amphipathic because it consists of a polar hydrophilic “head” (amino alcohol, phosphate, glycerol) and a non-polar twin “tail” (fatty acids). The lipid bilayer is arranged so that the hydrophilic heads constitute the outer and inner surfaces of the membrane and the lipophilic tails are stretched toward the membrane interior, which contains water, various ions and molecules.

                                                    Proteins and glycoproteins are inserted into the lipid bilayer (intrinsic proteins) or attached to the membrane surface (extrinsic proteins). These proteins contribute to the structural integrity of the membrane, but they may also perform as enzymes, carriers, pore walls or receptors.

                                                    The membrane represents a dynamic structure which can be disintegrated and rebuilt with a different proportion of lipids and proteins, according to functional needs.

                                                    Regulation of transport of substances into and out of the cell represents one of the basic functions of outer and inner membranes.

                                                    Some lipophilic molecules pass directly through the lipid bilayer. Hydrophilic molecules and ions are transported via pores. Membranes respond to changing conditions by opening or sealing certain pores of various sizes.

The following processes and mechanisms are involved in the transport of substances, including toxicants, through membranes. Passive processes:

                                                    • diffusion through lipid bilayer
                                                    • diffusion through pores
                                                    • transport by a carrier (facilitated diffusion).

                                                     

                                                    Active processes:

                                                    • active transport by a carrier
                                                    • endocytosis (pinocytosis).

                                                     

                                                    Diffusion

This represents the movement of molecules and ions through the lipid bilayer or pores from a region of high concentration, or high electric potential, to a region of low concentration or potential (“downhill”). The difference in concentration or electric charge is the driving force influencing the intensity of the flux in both directions. In the equilibrium state, influx will be equal to efflux. The rate of diffusion follows Fick's law, which states that it is directly proportional to the available membrane surface, the difference in concentration (or charge) gradient and the characteristic diffusion coefficient, and inversely proportional to the membrane thickness.
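Written out as a formula consistent with these proportionalities (the symbols here are chosen only for illustration):

\[
\frac{dQ}{dt} = \frac{D \, A \, (C_1 - C_2)}{d}
\]

where dQ/dt is the amount transported per unit time, D the diffusion coefficient, A the available membrane surface, C1 − C2 the concentration (or charge) difference across the membrane, and d the membrane thickness.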

                                                    Small lipophilic molecules pass easily through the lipid layer of membrane, according to the Nernst partition coefficient.

Large lipophilic molecules, water-soluble molecules and ions will use aqueous pore channels for their passage. Size and stereoconfiguration will influence the passage of molecules. For ions, besides size, the type of charge will be decisive. The protein molecules of the pore walls can carry a positive or negative charge. Narrow pores tend to be selective: negatively charged ligands will allow passage only for cations, and positively charged ligands will allow passage only for anions. With increasing pore diameter, hydrodynamic flow becomes dominant, allowing free passage of ions and molecules according to Poiseuille's law. This filtration is a consequence of the osmotic gradient. In some cases ions can penetrate through specific complex molecules, ionophores, which can be produced by micro-organisms with antibiotic effects (nonactin, valinomycin, gramicidin, etc.).

                                                    Facilitated or catalyzed diffusion

This requires the presence of a carrier in the membrane, usually a protein molecule (permease). The carrier selectively binds substances, resembling a substrate-enzyme complex. Similar molecules (including toxicants) can compete for the specific carrier until its saturation point is reached. Toxicants can compete for the carrier, and when they are irreversibly bound to it the transport is blocked. The rate of transport is characteristic for each type of carrier. If transport is performed in both directions, it is called exchange diffusion.

                                                    Active transport

                                                    For transport of some substances vital for the cell, a special type of carrier is used, transporting against the concentration gradient or electric potential (“uphill”). The carrier is very stereospecific and can be saturated.

                                                    For uphill transport, energy is required. The necessary energy is obtained by catalytic cleavage of ATP molecules to ADP by the enzyme adenosine triphosphatase (ATP-ase).

                                                    Toxicants can interfere with this transport by competitive or non-competitive inhibition of the carrier or by inhibition of ATP-ase activity.

                                                    Endocytosis

                                                    Endocytosis is defined as a transport mechanism in which the cell membrane encircles material by enfolding to form a vesicle transporting it through the cell. When the material is liquid, the process is termed pinocytosis. In some cases the material is bound to a receptor and this complex is transported by a membrane vesicle. This type of transport is especially used by epithelial cells of the gastrointestinal tract, and cells of the liver and kidneys.

                                                    Absorption of Toxicants

                                                    People are exposed to numerous toxicants present in the work and living environment, which can penetrate into the human organism by three main portals of entry:

                                                    • via the respiratory tract by inhalation of polluted air
                                                    • via the gastrointestinal tract by ingestion of contaminated food, water and drinks
                                                    • through the skin by dermal, cutaneous penetration.

                                                     

In the case of exposure in industry, inhalation represents the dominant route of entry of toxicants, followed by dermal penetration. In agriculture, exposure to pesticides via dermal absorption is almost equal to exposure by combined inhalation and dermal penetration. The general population is mostly exposed by ingestion of contaminated food, water and beverages, then by inhalation and less often by dermal penetration.

                                                    Absorption via the respiratory tract

                                                    Absorption in the lungs represents the main route of uptake for numerous airborne toxicants (gases, vapours, fumes, mists, smokes, dusts, aerosols, etc.).

The respiratory tract (RT) represents an ideal gas-exchange system possessing a membrane with a surface of 30 m2 (expiration) to 100 m2 (deep inspiration), behind which a network of about 2,000 km of capillaries is located. The system, developed through evolution, is accommodated into a relatively small space (chest cavity) protected by ribs.

                                                    Anatomically and physiologically the RT can be divided into three compartments:

• the upper part of the RT, or nasopharyngeal region (NP), starting at the nostrils (nares) and extending to the pharynx and larynx; this part serves as an air-conditioning system
                                                    • the tracheo-bronchial tree (TB), encompassing numerous tubes of various sizes, which bring air to the lungs
                                                    • the pulmonary compartment (P), which consists of millions of alveoli (air-sacs) arranged in grapelike clusters.

                                                     

Hydrophilic toxicants are easily absorbed by the epithelium of the nasopharyngeal region. The whole epithelium of the NP and TB regions is covered by a film of water. Lipophilic toxicants are partially absorbed in the NP and TB regions, but mostly in the alveoli by diffusion through the alveolo-capillary membranes. The absorption rate depends on lung ventilation, cardiac output (blood flow through the lungs), the solubility of the toxicant in blood and its metabolic rate.

                                                    In the alveoli, gas exchange is carried out. The alveolar wall is made up of an epithelium, an interstitial framework of basement membrane, connective tissue and the capillary endothelium. The diffusion of toxicants is very rapid through these layers, which have a thickness of about 0.8 μm. In alveoli, toxicant is transferred from the air phase into the liquid phase (blood). The rate of absorption (air to blood distribution) of a toxicant depends on its concentration in alveolar air and the Nernst partition coefficient for blood (solubility coefficient).

In the blood, the toxicant can be dissolved in the liquid phase by simple physical processes or bound to blood cells and/or plasma constituents according to chemical affinity, or by adsorption. The water content of blood is 75% and, therefore, hydrophilic gases and vapours show a high solubility in plasma (e.g., alcohols). Lipophilic toxicants (e.g., benzene) are usually bound to cells or to macromolecules such as albumin.

                                                    From the very beginning of exposure in the lungs, two opposite processes are occurring: absorption and desorption. The equilibrium between these processes depends on the concentration of toxicant in alveolar air and blood. At the onset of exposure the toxicant concentration in the blood is 0 and retention in blood is almost 100%. With continuation of exposure, an equilibrium between absorption and desorption is attained. Hydrophilic toxicants will rapidly attain equilibrium, and the rate of absorption depends on pulmonary ventilation rather than on blood flow. Lipophilic toxicants need a longer time to achieve equilibrium, and here the flow of unsaturated blood governs the rate of absorption.
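The approach to equilibrium described above can be caricatured with a deliberately simple first-order model; the partition coefficient, rate constant and time values below are invented, and the model is only an illustrative sketch, not a toxicokinetic method taken from the text.

```python
def blood_concentration(c_alveolar, partition_coeff, rate_constant, hours, step=0.01):
    """Crude first-order sketch of uptake of an inhaled toxicant into blood.

    The blood concentration rises toward its equilibrium value
    (partition coefficient x alveolar concentration).  The rate constant is a
    lumped, assumed parameter standing in for the factors named in the text
    (ventilation for hydrophilic, unsaturated blood flow for lipophilic toxicants)."""
    c_eq = partition_coeff * c_alveolar
    c_blood, t = 0.0, 0.0
    while t < hours:
        c_blood += rate_constant * (c_eq - c_blood) * step   # Euler step
        t += step
    return c_blood, c_eq

# Invented numbers: a fast-equilibrating and a slow-equilibrating toxicant
print(blood_concentration(c_alveolar=1.0, partition_coeff=2.0, rate_constant=4.0, hours=1))
print(blood_concentration(c_alveolar=1.0, partition_coeff=20.0, rate_constant=0.3, hours=1))
```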

                                                    Deposition of particles and aerosols in the RT depends on physical and physiological factors, as well as particle size. In short, the smaller the particle the deeper it will penetrate into the RT.

The relatively constant low retention of dust particles in the lungs of persons who are highly exposed (e.g., miners) suggests the existence of a very efficient system for the clearance of particles. In the upper part of the RT (tracheo-bronchial), clearance is performed by a mucociliary blanket. In the pulmonary part, three different mechanisms are at work: (1) the mucociliary blanket, (2) phagocytosis and (3) direct penetration of particles through the alveolar wall.

                                                    The first 17 of the 23 branchings of the tracheo-bronchial tree possess ciliated epithelial cells. By their strokes these cilia constantly move a mucous blanket toward the mouth. Particles deposited on this mucociliary blanket will be swallowed in the mouth (ingestion). A mucous blanket also covers the surface of the alveolar epithelium, moving toward the mucociliary blanket. Additionally the specialized moving cells—phagocytes—engulf particles and micro-organisms in the alveoli and migrate in two possible directions:

                                                    • toward the mucociliary blanket, which transports them to the mouth
                                                    • through the intercellular spaces of the alveolar wall to the lymphatic system of the lungs; also particles can directly penetrate by this route.

                                                     

                                                    Absorption via gastrointestinal tract

                                                    Toxicants can be ingested in the case of accidental swallowing, intake of contaminated food and drinks, or swallowing of particles cleared from the RT.

The entire alimentary channel, from oesophagus to anus, is basically built in the same way. A mucous layer (epithelium) is supported by connective tissue and then by a network of capillaries and smooth muscle. The surface epithelium of the stomach is very wrinkled to increase the absorption/secretion surface area. The intestinal area contains numerous small projections (villi), which are able to absorb material by “pumping in”. The active area for absorption in the intestines is about 100 m2.

                                                    In the gastrointestinal tract (GIT) all absorption processes are very active:

                                                    •  transcellular transport by diffusion through the lipid layer and/or pores of cell membranes, as well as pore filtration
                                                    •  paracellular diffusion through junctions between cells
                                                    •  facilitated diffusion and active transport
                                                    •  endocytosis and the pumping mechanism of the villi.

                                                     

                                                    Some toxic metal ions use specialized transport systems for essential elements: thallium, cobalt and manganese use the iron system, while lead appears to use the calcium system.

                                                    Many factors influence the rate of absorption of toxicants in various parts of the GIT:

                                                    • physico-chemical properties of toxicants, especially the Nernst partition coefficient and the dissociation constant; for particles, particle size is important—the smaller the size, the higher the solubility
                                                    • quantity of food present in the GIT (diluting effect)
• residence time in each part of the GIT (from a few minutes in the mouth, to one hour in the stomach, to many hours in the intestines)
                                                    • the absorption area and absorption capacity of the epithelium
                                                    • local pH, which governs absorption of dissociated toxicants; in the acid pH of the stomach, non-dissociated acidic compounds will be more quickly absorbed
                                                    • peristalsis (movement of intestines by muscles) and local blood flow
• gastric and intestinal secretions, which transform toxicants into more or less soluble products; bile is an emulsifying agent producing more soluble complexes (hydrotropy)
                                                    • combined exposure to other toxicants, which can produce synergistic or antagonistic effects in absorption processes
                                                    • presence of complexing/chelating agents
• the action of the microflora of the GIT (about 1.5 kg), comprising about 60 different bacterial species, which can perform biotransformation of toxicants.

                                                     

                                                    It is also necessary to mention the enterohepatic circulation. Polar toxicants and/or metabolites (glucuronides and other conjugates) are excreted with the bile into the duodenum. Here the enzymes of the microflora perform hydrolysis and liberated products can be reabsorbed and transported by the portal vein into the liver. This mechanism is very dangerous in the case of hepatotoxic substances, enabling their temporary accumulation in the liver.

                                                    In the case of toxicants biotransformed in the liver to less toxic or non-toxic metabolites, ingestion may represent a less dangerous portal of entry. After absorption in the GIT these toxicants will be transported by the portal vein to the liver, and there they can be partially detoxified by biotransformation.

                                                    Absorption through the skin (dermal, percutaneous)

The skin (1.8 m2 of surface area in a human adult), together with the mucous membranes of the body orifices, covers the surface of the body. It represents a barrier against physical, chemical and biological agents, maintaining the body's integrity and homeostasis and performing many other physiological tasks.

Basically the skin consists of three layers: the epidermis, the true skin (dermis) and the subcutaneous tissue (hypodermis). From the toxicological point of view, the epidermis is of most interest here. It is built of many layers of cells. A horny surface of flattened, dead cells (stratum corneum) is the top layer, under which a continuous layer of living cells (stratum corneum compactum) is located, followed by a typical lipid membrane, and then by the stratum lucidum, stratum granulosum and stratum mucosum. The lipid membrane represents a protective barrier, but in hairy parts of the skin both hair follicles and sweat gland channels penetrate through it. Therefore, dermal absorption can occur by the following mechanisms:

                                                    • transepidermal absorption by diffusion through the lipid membrane (barrier), mostly by lipophilic substances (organic solvents, pesticides, etc.) and to a small extent by some hydrophilic substances through pores
                                                    • transfollicular absorption around the hair stalk into the hair follicle, bypassing the membrane barrier; this absorption occurs only in hairy areas of skin
• absorption via the ducts of sweat glands, which have a cross-sectional area of about 0.1 to 1% of the total skin area (relative absorption is in this proportion)
                                                    • absorption through skin when injured mechanically, thermally, chemically or by skin diseases; here the skin layers, including lipid barrier, are disrupted and the way is open for toxicants and harmful agents to enter.

                                                     

                                                    The rate of absorption through the skin will depend on many factors:

                                                    • concentration of toxicant, type of vehicle (medium), presence of other substances
                                                    • water content of skin, pH, temperature, local blood flow, perspiration, surface area of contaminated skin, thickness of skin
                                                    • anatomical and physiological characteristics of the skin due to sex, age, individual variations, differences occurring in various ethnic groups and races, etc.

                                                    Transport of Toxicants by Blood and Lymph

                                                    After absorption by any of these portals of entry, toxicants will reach the blood, lymph or other body fluids. The blood represents the major vehicle for transport of toxicants and their metabolites.

Blood is a fluid circulating organ, transporting necessary oxygen and vital substances to the cells and removing waste products of metabolism. Blood also contains cellular components, hormones and other molecules involved in many physiological functions. Blood flows inside a relatively well-closed, high-pressure circulatory system of blood vessels, driven by the activity of the heart. Owing to the high pressure, some leakage of fluid occurs. The lymphatic system represents the drainage system, in the form of a fine mesh of small, thin-walled lymph capillaries branching through the soft tissues and organs.

                                                    Blood is a mixture of a liquid phase (plasma, 55%) and solid blood cells (45%). Plasma contains proteins (albumins, globulins, fibrinogen), organic acids (lactic, glutamic, citric) and many other substances (lipids, lipoproteins, glycoproteins, enzymes, salts, xenobiotics, etc.). Blood cell elements include erythrocytes (Er), leukocytes, reticulocytes, monocytes, and platelets.

                                                    Toxicants are absorbed as molecules and ions. Some toxicants at blood pH form colloid particles as a third form in this liquid. Molecules, ions and colloids of toxicants have various possibilities for transport in blood:

                                                    •  to be physically or chemically bound to the blood elements, mostly Er
                                                    •  to be physically dissolved in plasma in a free state
                                                    •  to be bound to one or more types of plasma proteins, complexed with the organic acids or attached to other fractions of plasma.

                                                     

                                                    Most of the toxicants in blood exist partially in a free state in plasma and partially bound to erythrocytes and plasma constituents. The distribution depends on the affinity of toxicants to these constituents. All fractions are in a dynamic equilibrium.

                                                    Some toxicants are transported by the blood elements—mostly by erythrocytes, very rarely by leukocytes. Toxicants can be adsorbed on the surface of Er, or can bind to the ligands of stroma. If they penetrate into Er they can bind to the haem (e.g. carbon monoxide and selenium) or to the globin (Sb111, Po210). Some toxicants transported by Er are arsenic, cesium, thorium, radon, lead and sodium. Hexavalent chromium is exclusively bound to the Er and trivalent chromium to the proteins of plasma. For zinc, competition between Er and plasma occurs. About 96% of lead is transported by Er. Organic mercury is mostly bound to Er and inorganic mercury is carried mostly by plasma albumin. Small fractions of beryllium, copper, tellurium and uranium are carried by Er.

                                                    The majority of toxicants are transported by plasma or plasma proteins. Many electrolytes are present as ions in an equilibrium with non-dissociated molecules free or bound to the plasma fractions. This ionic fraction of toxicants is very diffusible, penetrating through the walls of capillaries into tissues and organs. Gases and vapours can be dissolved in the plasma.

Plasma proteins offer a total surface area of about 600 to 800 km2 for the binding of toxicants. Albumin molecules possess about 109 cationic and 120 anionic ligands available to ions. Many ions are partially carried by albumin (e.g., copper, zinc and cadmium), as are such compounds as dinitro- and ortho-cresols, nitro- and halogenated derivatives of aromatic hydrocarbons, and phenols.

                                                    Globulin molecules (alpha and beta) transport small molecules of toxicants as well as some metallic ions (copper, zinc and iron) and colloid particles. Fibrinogen shows affinity for certain small molecules. Many types of bonds can be involved in binding of toxicants to plasma proteins: Van der Waals forces, attraction of charges, association between polar and non-polar groups, hydrogen bridges, covalent bonds.

                                                    Plasma lipoproteins transport lipophilic toxicants such as PCBs. The other plasma fractions serve as a transport vehicle too. The affinity of toxicants for plasma proteins suggests their affinity for proteins in tissues and organs during distribution.

Organic acids (lactic, glutamic, citric) form complexes with some toxicants. Alkaline earths and rare earths, as well as some heavy elements in the form of cations, are also complexed with organic oxy- and amino acids. All these complexes are usually diffusible and easily distributed in tissues and organs.

Physiological chelating agents in plasma, such as transferrin and metallothionein, compete with organic acids and amino acids for cations to form stable chelates.

                                                    Diffusible free ions, some complexes and some free molecules are easily cleared from the blood into tissues and organs. The free fraction of ions and molecules is in a dynamic equilibrium with the bound fraction. The concentration of a toxicant in blood will govern the rate of its distribution into tissues and organs, or its mobilization from them into the blood.

                                                    Distribution of Toxicants in the Organism

The human organism can be divided into the following compartments: (1) internal organs, (2) skin and muscles, (3) adipose tissues, (4) connective tissue and bones. This classification is based mainly on the degree of vascular (blood) perfusion, in decreasing order. For example, the internal organs (including the brain), which represent only 12% of the total body weight, receive about 75% of the total blood volume. On the other hand, connective tissues and bones (15% of total body weight) receive only about one per cent of the total blood volume.
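As a rough worked illustration of these figures, the blood supply per unit of tissue mass differs between the extremes by roughly two orders of magnitude:

\[
\frac{75\%\ \text{of blood volume}}{12\%\ \text{of body weight}} \approx 6.3
\qquad \text{versus} \qquad
\frac{1\%}{15\%} \approx 0.07,
\]

that is, the internal organs receive on the order of ninety times more blood per kilogram of tissue than connective tissue and bone.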

                                                    The well-perfused internal organs generally achieve the highest concentration of toxicants in the shortest time, as well as an equilibrium between blood and this compartment. The uptake of toxicants by less perfused tissues is much slower, but retention is higher and duration of stay much longer (accumulation) due to low perfusion.

                                                    Three components are of major importance for the intracellular distribution of toxicants: content of water, lipids and proteins in the cells of various tissues and organs. The above-mentioned order of compartments also follows closely a decreasing water content in their cells. Hydrophilic toxicants will be more rapidly distributed to the body fluids and cells with high water content, and lipophilic toxicants to cells with higher lipid content (fatty tissue).

                                                    The organism possesses some barriers which impair penetration of some groups of toxicants, mostly hydrophilic, to certain organs and tissues, such as:

                                                    • the blood-brain barrier (cerebrospinal barrier), which restricts penetration of large molecules and hydrophilic toxicants to the brain and CNS; this barrier consists of a closely joined layer of endothelial cells; thus, lipophilic toxicants can penetrate through it
                                                    • the placental barrier, which has a similar effect on penetration of toxicants into the foetus from the blood of the mother
                                                    • the histo-haematologic barrier in the walls of capillaries, which is permeable for small- and intermediate-sized molecules, and for some larger molecules, as well as ions.

                                                     

                                                    As previously noted only the free forms of toxicants in plasma (molecules, ions, colloids) are available for penetration through the capillary walls participating in distribution. This free fraction is in a dynamic equilibrium with the bound fraction. Concentration of toxicants in blood is in a dynamic equilibrium with their concentration in organs and tissues, governing retention (accumulation) or mobilization from them.

                                                    The condition of the organism, functional state of organs (especially neuro-humoral regulation), hormonal balance and other factors play a role in distribution.

                                                    Retention of toxicant in a particular compartment is generally temporary and redistribution into other tissues can occur. Retention and accumulation is based on the difference between the rates of absorption and elimination. The duration of retention in a compartment is expressed by the biological half-life. This is the time interval in which 50% of the toxicant is cleared from the tissue or organ and redistributed, translocated or eliminated from the organism.
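Where clearance from a compartment is approximately first-order, the link between the elimination rate constant and the biological half-life can be sketched as a single-exponential idealization (real tissue data often require several terms):

\[
C(t) = C_0\, e^{-kt}, \qquad t_{1/2} = \frac{\ln 2}{k},
\]

so that 50% of the toxicant remains after one half-life, 25% after two, and so on.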

                                                    Biotransformation processes occur during distribution and retention in various organs and tissues. Biotransformation produces more polar, more hydrophilic metabolites, which are more easily eliminated. A low rate of biotransformation of a lipophilic toxicant will generally cause its accumulation in a compartment.

                                                    The toxicants can be divided into four main groups according to their affinity, predominant retention and accumulation in a particular compartment:

1. Toxicants soluble in the body fluids are uniformly distributed according to the water content of compartments. Many monovalent cations (e.g., lithium, sodium, potassium, rubidium) and some anions (e.g., chloride, bromide) are distributed according to this pattern.
                                                    2. Lipophilic toxicants show a high affinity for lipid-rich organs (CNS) and tissues (fatty, adipose).
3. Toxicants forming colloid particles are trapped by specialized cells of the reticuloendothelial system (RES) of organs and tissues. Tri- and quadrivalent cations (e.g., lanthanum, cerium, hafnium) are distributed in the RES of tissues and organs.
4. Toxicants showing a high affinity for bones and connective tissue (osteotropic elements, bone seekers) include divalent and other cations (e.g., calcium, barium, strontium, radium, beryllium, aluminium, cadmium, lead).

                                                     

                                                    Accumulation in lipid-rich tissues

The “standard man” of 70 kg body weight contains about 15% of body weight in the form of adipose tissue, a proportion that can increase to 50% in obesity. However, this lipid fraction is not uniformly distributed. The brain (CNS) is a lipid-rich organ, and peripheral nerves are wrapped in a lipid-rich myelin sheath and Schwann cells. All these tissues offer possibilities for accumulation of lipophilic toxicants.

                                                    Numerous non-electrolytes and non-polar toxicants with a suitable Nernst partition coefficient will be distributed to this compartment, as well as numerous organic solvents (alcohols, aldehydes, ketones, etc.), chlorinated hydrocarbons (including organochlorine insecticides such as DDT), some inert gases (radon), etc.

                                                    Adipose tissue will accumulate toxicants due to its low vascularization and lower rate of biotransformation. Here accumulation of toxicants may represent a kind of temporary “neutralization” because of lack of targets for toxic effect. However, potential danger for the organism is always present due to the possibility of mobilization of toxicants from this compartment back to the circulation.

                                                    Deposition of toxicants in the brain (CNS) or lipid-rich tissue of the myelin sheath of the peripheral nervous system is very dangerous. The neurotoxicants are deposited here directly next to their targets. Toxicants retained in lipid-rich tissue of the endocrine glands can produce hormonal disturbances. Despite the blood-brain barrier, numerous neurotoxicants of a lipophilic nature reach the brain (CNS): anaesthetics, organic solvents, pesticides, tetraethyl lead, organomercurials, etc.

                                                    Retention in the reticuloendothelial system

In each tissue and organ, a certain percentage of cells is specialized for phagocytic activity: engulfing micro-organisms, particles, colloid particles and so on. This system is called the reticuloendothelial system (RES), and comprises both fixed cells and mobile cells (phagocytes). These cells are normally present in a non-active form; an increase in the above-mentioned microbes and particles activates them, up to a saturation point.

                                                    Toxicants in the form of colloids will be captured by the RES of organs and tissues. Distribution depends on the colloid particle size. For larger particles, retention in the liver will be favoured. With smaller colloid particles, more or less uniform distribution will occur between the spleen, bone marrow and liver. Clearance of colloids from the RES is very slow, although small particles are cleared relatively more quickly.

                                                    Accumulation in bones

                                                    About 60 elements can be identified as osteotropic elements, or bone seekers.

                                                    Osteotropic elements can be divided into three groups:

                                                    1. Elements representing or replacing physiological constituents of the bone. Twenty such elements are present in higher quantities. The others appear in trace quantities. Under conditions of chronic exposure, toxic metals such as lead, aluminium and mercury can also enter the mineral matrix of bone cells.
                                                    2. Alkaline earths and other elements forming cations with an ionic diameter similar to that of calcium are exchangeable with it in bone mineral. Also, some anions are exchangeable with anions (phosphate, hydroxyl) of bone mineral.
                                                    3. Elements forming microcolloids (rare earths) may be adsorbed on the surface of bone mineral.

                                                     

The skeleton of a standard man accounts for 10 to 15% of the total body weight, representing a large potential storage depot for osteotropic toxicants. Bone is a highly specialized tissue consisting by volume of 54% minerals and 38% organic matrix. The mineral matrix of bone is hydroxyapatite, Ca10(PO4)6(OH)2, in which the molar ratio of Ca to P is about 1.67 to one. The surface area of mineral available for adsorption is about 100 m2 per g of bone.
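The Ca-to-P ratio follows directly from the hydroxyapatite formula given above (molar masses Ca ≈ 40.1, P ≈ 31.0):

\[
\frac{n_{\mathrm{Ca}}}{n_{\mathrm{P}}} = \frac{10}{6} \approx 1.67 \ \text{(molar)},
\qquad
\frac{m_{\mathrm{Ca}}}{m_{\mathrm{P}}} = \frac{10 \times 40.1}{6 \times 31.0} \approx 2.2 \ \text{(by mass)}.
\]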

Metabolic activity of the bones of the skeleton can be divided into two categories:

                                                    • active, metabolic bone, in which processes of resorption and new bone formation, or remodelling of existing bone, are very extensive
                                                    • stable bone with a low rate of remodelling or growth.

                                                     

In the fetus, infant and young child, metabolic bone (see “available skeleton”) represents almost 100% of the skeleton. With age this percentage of metabolic bone decreases. Incorporation of toxicants during exposure occurs in the metabolic bone as well as in the more slowly exchanging compartments.

                                                    Incorporation of toxicants into bone occurs in two ways:

                                                    1. For ions, an ion exchange occurs with physiologically present calcium cations, or anions (phosphate, hydroxyl).
                                                    2. For toxicants forming colloid particles, adsorption on the mineral surface occurs.

                                                     

                                                    Ion-exchange reactions

The bone mineral, hydroxyapatite, represents a complex ion-exchange system. Calcium cations can be exchanged for various other cations. The anions present in bone can also be exchanged: phosphate for citrate and carbonate, hydroxyl for fluoride. Ions which are not exchangeable can be adsorbed on the mineral surface. When toxicant ions are incorporated in the mineral, a new layer of mineral can cover the mineral surface, burying the toxicant within the bone structure. Ion exchange is a reversible process, depending on the concentration of ions, pH and fluid volume. Thus, for example, an increase of dietary calcium may decrease the deposition of toxicant ions in the mineral lattice. As noted above, the percentage of metabolic bone decreases with age, although ion exchange continues. With ageing, bone mineral resorption occurs and bone density actually decreases; at this point, toxicants stored in bone (e.g., lead) may be released.

                                                    About 30% of the ions incorporated into bone minerals are loosely bound and can be exchanged, captured by natural chelating agents and excreted, with a biological half-life of 15 days. The other 70% is more firmly bound. Mobilization and excretion of this fraction shows a biological half-life of 2.5 years and more depending on bone type (remodelling processes).
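A minimal sketch, in Python, of the bone retention implied by these two pools (30% with a 15-day half-life and 70% with a half-life taken here as 2.5 years, the lower end of the range quoted above; purely illustrative):

import math

def bone_retention(t_days, fast_frac=0.30, t_half_fast=15.0,
                   slow_frac=0.70, t_half_slow=2.5 * 365.0):
    """Fraction of the initially incorporated toxicant still retained in bone
    after t_days, assuming two independent first-order pools (illustrative)."""
    k_fast = math.log(2) / t_half_fast
    k_slow = math.log(2) / t_half_slow
    return fast_frac * math.exp(-k_fast * t_days) + slow_frac * math.exp(-k_slow * t_days)

for t in (15, 90, 365, 5 * 365):
    print(f"day {t:5d}: {bone_retention(t):.2f} of the initial amount retained")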

Chelating agents (Ca-EDTA, penicillamine, BAL, etc.) can mobilize considerable quantities of some heavy metals and greatly increase their excretion in urine.

                                                    Colloid adsorption

Colloid particles are adsorbed as a film on the mineral surface (about 100 m2 per g) by Van der Waals forces or chemisorption. This layer of colloids on the mineral surfaces is covered by the next layer of newly formed mineral, and the toxicants become buried more deeply in the bone structure. The rate of mobilization and elimination depends on remodelling processes.

                                                    Accumulation in hair and nails

                                                    The hair and nails contain keratin, with sulphydryl groups able to chelate metallic cations such as mercury and lead.

                                                    Distribution of toxicant inside the cell

                                                    Recently the distribution of toxicants, especially some heavy metals, within cells of tissues and organs has become of importance. With ultracentrifugation techniques, various fractions of the cell can be separated to determine their content of metal ions and other toxicants.

                                                    Animal studies have revealed that after penetration into the cell, some metal ions are bound to a specific protein, metallothionein. This low molecular weight protein is present in the cells of liver, kidney and other organs and tissues. Its sulphydryl groups can bind six ions per molecule. Increased presence of metal ions induces the biosynthesis of this protein. Ions of cadmium are the most potent inducer. Metallothionein serves also to maintain homeostasis of vital copper and zinc ions. Metallothionein can bind zinc, copper, cadmium, mercury, bismuth, gold, cobalt and other cations.

                                                    Biotransformation and Elimination of Toxicants

                                                    During retention in cells of various tissues and organs, toxicants are exposed to enzymes which can biotransform (metabolize) them, producing metabolites. There are many pathways for the elimination of toxicants and/or metabolites: by exhaled air via the lungs, by urine via the kidneys, by bile via the GIT, by sweat via the skin, by saliva via the mouth mucosa, by milk via the mammary glands, and by hair and nails via normal growth and cell turnover.

                                                    The elimination of an absorbed toxicant depends on the portal of entry. In the lungs the absorption/desorption process starts immediately and toxicants are partially eliminated by exhaled air. Elimination of toxicants absorbed by other paths of entry is prolonged and starts after transport by blood, eventually being completed after distribution and biotransformation. During absorption an equilibrium exists between the concentrations of a toxicant in the blood and in tissues and organs. Excretion decreases toxicant blood concentration and may induce mobilization of a toxicant from tissues into blood.

                                                    Many factors can influence the elimination rate of toxicants and their metabolites from the body:

                                                    • physico-chemical properties of toxicants, especially the Nernst partition coefficient (P), dissociation constant (pKa), polarity, molecular structure, shape and weight
                                                    • level of exposure and time of post-exposure elimination
                                                    • portal of entry
                                                    • distribution in the body compartments, which differ in exchange rate with the blood and blood perfusion
                                                    • rate of biotransformation of lipophilic toxicants to more hydrophilic metabolites
                                                    • overall health condition of organism and, especially, of excretory organs (lungs, kidneys, GIT, skin, etc.)
                                                    • presence of other toxicants which can interfere with elimination.

                                                     

Here we distinguish two groups of compartments: (1) the rapid-exchange system—in these compartments, tissue concentration of toxicant is similar to that of the blood; and (2) the slow-exchange system, where tissue concentration of toxicant is higher than in blood due to binding and accumulation—adipose tissue, skeleton and kidneys can temporarily retain some toxicants, e.g., arsenic and zinc.

                                                    A toxicant can be excreted simultaneously by two or more excretion routes. However, usually one route is dominant.

Mathematical models describing the excretion of particular toxicants are being developed. These models are based on movement between one or both groups of compartments (exchange systems), on biotransformation, and so on.
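The article does not specify a particular model, but a common minimal form is a two-compartment scheme with first-order transfer between blood and a slowly exchanging tissue pool and first-order excretion from blood. A sketch of this kind of model (all rate constants are hypothetical, chosen only for illustration):

# Illustrative two-compartment toxicokinetic model (hypothetical rate constants).
# Blood exchanges with a slow tissue pool; excretion occurs only from blood.
k_bt, k_tb, k_ex = 0.10, 0.02, 0.05   # per hour (assumed values)
blood, tissue = 1.0, 0.0              # amounts as fractions of the absorbed dose
dt = 0.1                              # time step in hours (simple Euler integration)
for _ in range(int(240 / dt)):        # simulate 10 days
    d_blood = -(k_bt + k_ex) * blood + k_tb * tissue
    d_tissue = k_bt * blood - k_tb * tissue
    blood += d_blood * dt
    tissue += d_tissue * dt
print(f"after 10 days: blood = {blood:.3f}, tissue = {tissue:.3f}")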

                                                    Elimination by exhaled air via lungs

                                                    Elimination via the lungs (desorption) is typical for toxicants with high volatility (e.g., organic solvents). Gases and vapours with low solubility in blood will be quickly eliminated this way, whereas toxicants with high blood solubility will be eliminated by other routes.

                                                    Organic solvents absorbed by the GIT or skin are excreted partially by exhaled air in each passage of blood through the lungs, if they have a sufficient vapour pressure. The Breathalyser test used for suspected drunk drivers is based on this fact. The concentration of CO in exhaled air is in equilibrium with the CO-Hb blood content. The radioactive gas radon appears in exhaled air due to the decay of radium accumulated in the skeleton.

                                                    Elimination of a toxicant by exhaled air in relation to the post-exposure period of time usually is expressed by a three-phase curve. The first phase represents elimination of toxicant from the blood, showing a short half-life. The second, slower phase represents elimination due to exchange of blood with tissues and organs (quick-exchange system). The third, very slow phase is due to exchange of blood with fatty tissue and skeleton. If a toxicant is not accumulated in such compartments, the curve will be two-phase. In some cases a four-phase curve is also possible.
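Such a curve is commonly described as a sum of exponential terms, one per compartment exchanging with the blood, with the coefficients and rate constants fitted to the post-exposure measurements:

\[
C_{\mathrm{exh}}(t) = A_1 e^{-\lambda_1 t} + A_2 e^{-\lambda_2 t} + A_3 e^{-\lambda_3 t},
\qquad \lambda_1 > \lambda_2 > \lambda_3,
\]

where the fastest term reflects washout from the blood, the intermediate term exchange with well-perfused tissues, and the slowest term release from fatty tissue and the skeleton.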

                                                    Determination of gases and vapours in exhaled air in the post-exposure period is sometimes used for evaluation of exposures in workers.

                                                    Renal excretion

                                                    The kidney is an organ specialized in the excretion of numerous water-soluble toxicants and metabolites, maintaining homeostasis of the organism. Each kidney possesses about one million nephrons able to perform excretion. Renal excretion represents a very complex event encompassing three different mechanisms:

                                                    • glomerular filtration by Bowman’s capsule
                                                    • active transport in the proximal tubule
                                                    • passive transport in the distal tubule.

                                                     

                                                    Excretion of a toxicant via the kidneys to urine depends on the Nernst partition coefficient, dissociation constant and pH of urine, molecular size and shape, rate of metabolism to more hydrophilic metabolites, as well as health status of the kidneys.
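The role of urinary pH and the dissociation constant can be made explicit with the Henderson–Hasselbalch relation; for a weak acid, the ionized fraction (which is poorly reabsorbed in the tubule and therefore more readily excreted) is:

\[
\frac{[\mathrm{A}^-]}{[\mathrm{HA}]} = 10^{\,\mathrm{pH}-\mathrm{p}K_a},
\qquad
f_{\mathrm{ionized}} = \frac{1}{1 + 10^{\,\mathrm{p}K_a - \mathrm{pH}}},
\]

so alkalinization of the urine increases the ionized fraction of acidic toxicants and thus their urinary elimination; the converse holds for weak bases.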

                                                    The kinetics of renal excretion of a toxicant or its metabolite can be expressed by a two-, three- or four-phase excretion curve, depending on the distribution of the particular toxicant in various body compartments differing in the rate of exchange with the blood.

                                                    Saliva

                                                    Some drugs and metallic ions can be excreted through the mucosa of the mouth by saliva—for example, lead (“lead line”), mercury, arsenic, copper, as well as bromides, iodides, ethyl alcohol, alkaloids, and so on. The toxicants are then swallowed, reaching the GIT, where they can be reabsorbed or eliminated by faeces.

                                                    Sweat

                                                    Many non-electrolytes can be partially eliminated via skin by sweat: ethyl alcohol, acetone, phenols, carbon disulphide and chlorinated hydrocarbons.

                                                    Milk

                                                    Many metals, organic solvents and some organochlorine pesticides (DDT) are secreted via the mammary gland in mother’s milk. This pathway can represent a danger for nursing infants.

                                                    Hair

                                                    Analysis of hair can be used as an indicator of homeostasis of some physiological substances. Also exposure to some toxicants, especially heavy metals, can be evaluated by this kind of bioassay.

                                                    Elimination of toxicants from the body can be increased by:

                                                    • mechanical translocation via gastric lavage, blood transfusion or dialysis
• creating physiological conditions which mobilize toxicants, such as change of diet, change of hormonal balance, or improvement of renal function by the application of diuretics
• administration of complexing agents (citrates, oxalates, salicylates, phosphates), or chelating agents (Ca-EDTA, BAL, ATA, DMSA, penicillamine); this method is indicated only in persons under strict medical control. Application of chelating agents is often used for elimination of heavy metals from the body of exposed workers in the course of their medical treatment. This method is also used for evaluation of the total body burden and the level of past exposure.

                                                     

                                                    Exposure Determinations

Determination of toxicants and metabolites in blood, exhaled air, urine, sweat, faeces and hair is increasingly used for evaluation of human exposure (exposure tests) and/or evaluation of the degree of intoxication. Therefore biological exposure limits (Biological MAC Values, Biological Exposure Indices—BEI) have recently been established. These bioassays show the “internal exposure” of the organism, that is, the total exposure of the body in both the work and living environments by all portals of entry (see “Toxicology test methods: Biomarkers”).

                                                    Combined Effects Due to Multiple Exposure

People in the work and/or living environment are usually exposed simultaneously or consecutively to various physical and chemical agents. It is also necessary to take into consideration that some persons use medications, smoke, and consume alcohol and food containing additives, and so on. This means that multiple exposure usually occurs. Physical and chemical agents can interact in each step of toxicokinetic and/or toxicodynamic processes, producing three possible effects:

1. Independent. Each agent produces a different effect due to a different mechanism of action.
2. Synergistic. The combined effect is greater than that of each single agent. Here we differentiate two types: (a) additive, where the combined effect is equal to the sum of the effects produced by each agent separately, and (b) potentiating, where the combined effect is greater than additive (see the numerical illustration after this list).
3. Antagonistic. The combined effect is lower than additive.
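As a simple numerical illustration (arbitrary effect units, purely hypothetical values): if agent A alone produces an effect of 2 and agent B alone an effect of 3, then

\[
E_{A+B} = 5 \ \text{(additive)}, \qquad E_{A+B} > 5 \ \text{(potentiating)}, \qquad E_{A+B} < 5 \ \text{(antagonistic)}.
\]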

                                                     

                                                    However, studies on combined effects are rare. This kind of study is very complex due to the combination of various factors and agents.

                                                    We can conclude that when the human organism is exposed to two or more toxicants simultaneously or consecutively, it is necessary to consider the possibility of some combined effects, which can increase or decrease the rate of toxicokinetic processes.

                                                     


                                                    Metals and organometallic compounds

Toxic metals and organometallic compounds such as aluminium, antimony, inorganic arsenic, beryllium, cadmium, chromium, cobalt, lead, alkyl lead, metallic mercury and its salts, organic mercury compounds, nickel, selenium and vanadium have all been recognized for some time as posing potential health risks to exposed persons. In some cases, epidemiological studies have examined the relationship between internal dose and resulting effect/response in occupationally exposed workers, thus permitting the proposal of health-based biological limit values (see table 1).

                                                    Table 1. Metals: Reference values and biological limit values proposed by the American Conference of Governmental Industrial Hygienists (ACGIH), Deutsche Forschungsgemeinschaft (DFG), and Lauwerys and Hoet (L and H)

Metal | Sample | Reference values1* | ACGIH (BEI) limit2 | DFG (BAT) limit3 | L and H limit4 (TMPC)
Aluminium | Serum/plasma | <1 μg/100 ml | – | – | –
Aluminium | Urine | <30 μg/g | – | 200 μg/l (end of shift) | 150 μg/g (end of shift)
Antimony | Urine | <1 μg/g | – | – | 35 μg/g (end of shift)
Arsenic | Urine (sum of inorganic arsenic and methylated metabolites) | <10 μg/g | 50 μg/g (end of workweek) | – | 50 μg/g (if TWA: 0.05 mg/m3); 30 μg/g (if TWA: 0.01 mg/m3) (end of shift)
Beryllium | Urine | <2 μg/g | – | – | –
Cadmium | Blood | <0.5 μg/100 ml | 0.5 μg/100 ml | 1.5 μg/100 ml | 0.5 μg/100 ml
Cadmium | Urine | <2 μg/g | 5 μg/g | 15 μg/l | 5 μg/g
Chromium (soluble compounds) | Serum/plasma | <0.05 μg/100 ml | – | – | –
Chromium (soluble compounds) | Urine | <5 μg/g | 30 μg/g (end of shift, end of workweek); 10 μg/g (increase during shift) | – | 30 μg/g (end of shift)
Cobalt | Serum/plasma | <0.05 μg/100 ml | – | – | –
Cobalt | Blood | <0.2 μg/100 ml | 0.1 μg/100 ml (end of shift, end of workweek) | 0.5 μg/100 ml (EKA)** | –
Cobalt | Urine | <2 μg/g | 15 μg/l (end of shift, end of workweek) | 60 μg/l (EKA)** | 30 μg/g (end of shift, end of workweek)
Lead | Blood (lead) | <25 μg/100 ml | 30 μg/100 ml (not critical) | female <45 years: 30 μg/100 ml; male: 70 μg/100 ml | 40 μg/100 ml
Lead | ZPP in blood | <40 μg/100 ml blood; <2.5 μg/g Hb | – | – | 40 μg/100 ml blood or 3 μg/g Hb
Lead | Urine (lead) | <50 μg/g | – | – | 50 μg/g
Lead | ALA urine | <4.5 mg/g | – | female <45 years: 6 mg/l; male: 15 mg/l | 5 mg/g
Manganese | Blood | <1 μg/100 ml | – | – | –
Manganese | Urine | <3 μg/g | – | – | –
Mercury, inorganic | Blood | <1 μg/100 ml | 1.5 μg/100 ml (end of shift, end of workweek) | 5 μg/100 ml | 2 μg/100 ml (end of shift)
Mercury, inorganic | Urine | <5 μg/g | 35 μg/g (preshift) | 200 μg/l | 50 μg/g (end of shift)
Nickel (soluble compounds) | Serum/plasma | <0.05 μg/100 ml | – | – | –
Nickel (soluble compounds) | Urine | <2 μg/g | – | 45 μg/l (EKA)** | 30 μg/g
Selenium | Serum/plasma | <15 μg/100 ml | – | – | –
Selenium | Urine | <25 μg/g | – | – | –
Vanadium | Serum/plasma | <0.2 μg/100 ml | – | – | –
Vanadium | Blood | <0.1 μg/100 ml | – | – | –
Vanadium | Urine | <1 μg/g | – | 70 μg/g creatinine | 50 μg/g

                                                    * Urine values are per gram of creatinine.
                                                    ** EKA = Exposure equivalents for carcinogenic materials.
                                                    1 Taken with some modifications from Lauwerys and Hoet 1993.
                                                    2 From ACGIH 1996-97.
                                                    3 From DFG 1996.
                                                    4 Tentative maximum permissible concentrations (TMPCs) taken from Lauwerys and Hoet 1993.

                                                    One problem in seeking precise and accurate measurements of metals in biological materials is that the metallic substances of interest are often present in the media at very low levels. When biological monitoring consists of sampling and analyzing urine, as is often the case, it is usually performed on “spot” samples; correction of the results for the dilution of urine is thus usually advisable. Expression of the results per gram of creatinine is the method of standardization most frequently used. Analyses performed on too dilute or too concentrated urine samples are not reliable and should be repeated.
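A minimal sketch of the creatinine correction for spot urine samples; the acceptance window of 0.3 to 3.0 g creatinine per litre used here is a commonly applied screening criterion, stated as an assumption rather than a value taken from this article:

def creatinine_corrected(analyte_ug_per_l, creatinine_g_per_l):
    """Convert a spot-urine concentration (ug/l) to ug/g creatinine.
    Returns None for samples too dilute or too concentrated to interpret."""
    if not 0.3 <= creatinine_g_per_l <= 3.0:   # assumed screening window
        return None                            # sample should be repeated
    return analyte_ug_per_l / creatinine_g_per_l

# Example: 12 ug/l of a metal in urine containing 0.8 g/l creatinine
print(creatinine_corrected(12.0, 0.8))         # -> 15.0 ug/g creatinine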

                                                    Aluminium

                                                    In industry, workers may be exposed to inorganic aluminium compounds by inhalation and possibly also by ingestion of dust containing aluminium. Aluminium is poorly absorbed by the oral route, but its absorption is increased by simultaneous intake of citrates. The rate of absorption of aluminium deposited in the lung is unknown; the bioavailability is probably dependent on the physicochemical characteristics of the particle. Urine is the main route of excretion of the absorbed aluminium. The concentration of aluminium in serum and in urine is determined by both the intensity of a recent exposure and the aluminium body burden. In persons non-occupationally exposed, aluminium concentration in serum is usually below 1 μg/100 ml and in urine rarely exceeds 30 μg/g creatinine. In subjects with normal renal function, urinary excretion of aluminium is a more sensitive indicator of aluminium exposure than its concentration in serum/plasma.

Data on welders suggest that the kinetics of aluminium excretion in urine involves a two-step mechanism, the first step having a biological half-life of about eight hours. In workers who have been exposed for several years, some accumulation of the metal in the body does in fact occur, and aluminium concentrations in serum and in urine are then influenced by the aluminium body burden as well. Aluminium is stored in several compartments of the body and excreted from these compartments at different rates over many years. High accumulation of aluminium in the body (bone, liver, brain) has also been found in patients suffering from renal insufficiency. Patients undergoing dialysis are at risk of bone toxicity and/or encephalopathy when their serum aluminium concentration chronically exceeds 20 μg/100 ml, but it is possible to detect signs of toxicity at even lower concentrations. The Commission of the European Communities has recommended that, in order to prevent aluminium toxicity, the concentration of aluminium in plasma should never exceed 20 μg/100 ml; a level above 10 μg/100 ml should lead to an increased monitoring frequency and health surveillance, and a concentration exceeding 6 μg/100 ml should be considered as evidence of an excessive build-up of the aluminium body burden.

                                                    Antimony

                                                    Inorganic antimony can enter the organism by ingestion or inhalation, but the rate of absorption is unknown. Absorbed pentavalent compounds are primarily excreted with urine and trivalent compounds via faeces. Retention of some antimony compounds is possible after long-term exposure. Normal concentrations of antimony in serum and urine are probably below 0.1 μg/100 ml and 1 μg/g creatinine, respectively.

                                                    A preliminary study on workers exposed to pentavalent antimony indicates that a time-weighted average exposure to 0.5 mg/m3 would lead to an increase in urinary antimony concentration of 35 μg/g creatinine during the shift.

                                                    Inorganic Arsenic

                                                    Inorganic arsenic can enter the organism via the gastrointestinal and respiratory tracts. The absorbed arsenic is mainly eliminated through the kidney either unchanged or after methylation. Inorganic arsenic is also excreted in the bile as a glutathione complex.

                                                    Following a single oral exposure to a low dose of arsenate, 25 and 45% of the administered dose is excreted in urine within one and four days, respectively.

                                                    Following exposure to inorganic trivalent or pentavalent arsenic, the urinary excretion consists of 10 to 20% inorganic arsenic, 10 to 20% monomethylarsonic acid, and 60 to 80% cacodylic acid. Following occupational exposure to inorganic arsenic, the proportion of the arsenical species in urine depends on the time of sampling.

                                                    The organoarsenicals present in marine organisms are also easily absorbed by the gastrointestinal tract but are excreted for the most part unchanged.

                                                    Long-term toxic effects of arsenic (including the toxic effects on genes) result mainly from exposure to inorganic arsenic. Therefore, biological monitoring aims at assessing exposure to inorganic arsenic compounds. For this purpose, the specific determination of inorganic arsenic (Asi), monomethylarsonic acid (MMA), and cacodylic acid (DMA) in urine is the method of choice. However, since seafood consumption might still influence the excretion rate of DMA, the workers being tested should refrain from eating seafood during the 48 hours prior to urine collection.

                                                    In persons non-occupationally exposed to inorganic arsenic and who have not recently consumed a marine organism, the sum of these three arsenical species does not usually exceed 10 μg/g urinary creatinine. Higher values can be found in geographical areas where the drinking water contains significant amounts of arsenic.

                                                    It has been estimated that in the absence of seafood consumption, a time-weighted average exposure to 50 and 200 μg/m3 inorganic arsenic leads to mean urinary concentrations of the sum of the metabolites (Asi, MMA, DMA) in post-shift urine samples of 54 and 88 μg/g creatinine, respectively.

                                                    In the case of exposure to less soluble inorganic arsenic compounds (e.g., gallium arsenide), the determination of arsenic in urine will reflect the amount absorbed but not the total dose delivered to the body (lung, gastrointestinal tract).

                                                    Arsenic in hair is a good indicator of the amount of inorganic arsenic absorbed during the growth period of the hair. Organic arsenic of marine origin does not appear to be taken up in hair to the same degree as inorganic arsenic. Determination of arsenic concentration along the length of the hair may provide valuable information concerning the time of exposure and the length of the exposure period. However, the determination of arsenic in hair is not recommended when the ambient air is contaminated by arsenic, as it will not be possible to distinguish between endogenous arsenic and arsenic externally deposited on the hair. Arsenic levels in hair are usually below 1 mg/kg. Arsenic in nails has the same significance as arsenic in hair.

                                                    As with urine levels, blood arsenic levels may reflect the amount of arsenic recently absorbed, but the relation between the intensity of arsenic exposure and its concentration in blood has not yet been assessed.

                                                    Beryllium

                                                    Inhalation is the primary route of beryllium uptake for occupationally exposed persons. Long-term exposure can result in the storage of appreciable amounts of beryllium in lung tissues and in the skeleton, the ultimate site of storage. Elimination of absorbed beryllium occurs mainly via urine and only to a minor degree in the faeces.

Beryllium levels can be determined in blood and urine, but at present these analyses can be used only as qualitative tests to confirm exposure to the metal, since it is not known to what extent the concentrations of beryllium in blood and urine may be influenced by recent exposure and by the amount already stored in the body. Furthermore, it is difficult to interpret the limited published data on the excretion of beryllium in exposed workers, because usually the external exposure has not been adequately characterized and the analytical methods have different sensitivities and precision. Normal urinary and serum levels of beryllium are probably below 2 μg/g creatinine and 0.03 μg/100 ml, respectively.

                                                    However, the finding of a normal concentration of beryllium in urine is not sufficient evidence to exclude the possibility of past exposure to beryllium. Indeed, an increased urinary excretion of beryllium has not always been found in workers even though they have been exposed to beryllium in the past and have consequently developed pulmonary granulomatosis, a disease characterized by multiple granulomas, that is, nodules of inflammatory tissue, found in the lungs.

                                                    Cadmium

In the occupational setting, absorption of cadmium occurs chiefly through inhalation. However, gastrointestinal absorption may significantly contribute to the internal dose of cadmium. One important characteristic of cadmium is its long biological half-life in the body, exceeding 10 years. In tissues, cadmium is mainly bound to metallothionein. In blood, it is mainly bound to red blood cells. In view of the property of cadmium to accumulate, any biological monitoring programme of population groups chronically exposed to cadmium should attempt to evaluate both the current and the integrated exposure.

                                                    By means of neutron activation, it is currently possible to carry out in vivo measurements of the amounts of cadmium accumulated in the main sites of storage, the kidneys and the liver. However, these techniques are not used routinely. So far, in the health surveillance of workers in industry or in large-scale studies on the general population, exposure to cadmium has usually been evaluated indirectly by measuring the metal in urine and blood.

                                                    The detailed kinetics of the action of cadmium in humans is not yet fully elucidated, but for practical purposes the following conclusions can be formulated regarding the significance of cadmium in blood and urine. In newly exposed workers, the levels of cadmium in blood increase progressively and after four to six months reach a concentration corresponding to the intensity of exposure. In persons with ongoing exposure to cadmium over a long period, the concentration of cadmium in the blood reflects mainly the average intake during recent months. The relative influence of the cadmium body burden on the cadmium level in the blood may be more important in persons who have accumulated a large amount of cadmium and have been removed from exposure. After cessation of exposure, the cadmium level in blood decreases relatively fast, with an initial half-time of two to three months. Depending on the body burden, the level may, however, remain higher than in control subjects. Several studies in humans and animals have indicated that the level of cadmium in urine can be interpreted as follows: in the absence of acute overexposure to cadmium, and as long as the storage capability of the kidney cortex is not exceeded or cadmium-induced nephropathy has not yet occurred, the level of cadmium in urine increases progressively with the amount of cadmium stored in the kidneys. Under such conditions, which prevail mainly in the general population and in workers moderately exposed to cadmium, there is a significant correlation between urinary cadmium and cadmium in the kidneys. If exposure to cadmium has been excessive, the cadmium-binding sites in the organism become progressively saturated and, despite continuous exposure, the cadmium concentration in the renal cortex levels off.

                                                    From this stage on, the absorbed cadmium cannot be further retained in that organ and it is rapidly excreted in the urine. Then at this stage, the concentration of urinary cadmium is influenced by both the body burden and the recent intake. If exposure is continued, some subjects may develop renal damage, which gives rise to a further increase of urinary cadmium as a result of the release of cadmium stored in the kidney and depressed reabsorption of circulating cadmium. However, after an episode of acute exposure, cadmium levels in urine may rapidly and briefly increase without reflecting an increase in the body burden.

                                                    Recent studies indicate that metallothionein in urine has the same biological significance. Good correlations have been observed between the urinary concentration of metallothionein and that of cadmium, independently of the intensity of exposure and the status of renal function.

                                                    The normal levels of cadmium in blood and in urine are usually below 0.5 μg/100 ml and
                                                    2 μg/g creatinine, respectively. They are higher in smokers than in nonsmokers. In workers chronically exposed to cadmium, the risk of renal impairment is negligible when urinary cadmium levels never exceed 10 μg/g creatinine. An accumulation of cadmium in the body which would lead to a urinary excretion exceeding this level should be prevented. However, some data suggest that certain renal markers (whose health significance is still unknown) may become abnormal for urinary cadmium values between 3 and 5 μg/g creatinine, so it seems reasonable to propose a lower biological limit value of 5 μg/g creatinine. For blood, a biological limit of 0.5 μg/100 ml has been proposed for long-term exposure. It is possible, however, that in the case of the general population exposed to cadmium via food or tobacco or in the elderly, who normally suffer a decline of renal function, the critical level in the renal cortex may be lower.
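As a rough illustration of how such figures are used in practice, the sketch below expresses a spot urinary cadmium result per gram of creatinine and compares it with the limit values quoted above; the function name and the hypothetical worker's sample values are invented for the example and are not part of any standard procedure.

# Illustrative sketch only: expressing a spot urinary cadmium result per gram of
# creatinine and comparing it with the limit values quoted above. The function
# name and the sample figures for a hypothetical worker are assumptions.

PROPOSED_LIMIT = 5.0     # μg/g creatinine, proposed lower biological limit value
RENAL_RISK_LEVEL = 10.0  # μg/g creatinine, level that should never be exceeded

def cd_per_g_creatinine(cd_ug_per_l: float, creatinine_g_per_l: float) -> float:
    """Express a urinary cadmium concentration per gram of creatinine."""
    return cd_ug_per_l / creatinine_g_per_l

# Hypothetical spot sample: 6 μg Cd/l in urine containing 1.4 g creatinine/l
result = cd_per_g_creatinine(6.0, 1.4)
print(f"Urinary cadmium: {result:.1f} μg/g creatinine")
if result >= RENAL_RISK_LEVEL:
    print("Exceeds 10 μg/g creatinine: accumulation should have been prevented.")
elif result >= PROPOSED_LIMIT:
    print("Exceeds the proposed biological limit value of 5 μg/g creatinine.")
else:
    print("Below the proposed biological limit value.")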

                                                    Chromium

The toxicity of chromium is attributable chiefly to its hexavalent compounds. Hexavalent compounds are absorbed to a greater extent than trivalent compounds. Elimination occurs mainly via urine.

                                                    In persons non-occupationally exposed to chromium, the concentration of chromium in serum and in urine usually does not exceed 0.05 μg/100 ml and 2 μg/g creatinine, respectively. Recent exposure to soluble hexavalent chromium salts (e.g., in electroplaters and stainless steel welders) can be assessed by monitoring chromium level in urine at the end of the workshift. Studies carried out by several authors suggest the following relation: a TWA exposure of 0.025 or 0.05 mg/m3 hexavalent chromium is associated with an average concentration at the end of the exposure period of 15 or 30 μg/g creatinine, respectively. This relation is valid only on a group basis. Following exposure to 0.025 mg/m3 hexavalent chromium, the lower 95% confidence limit value is approximately 5 μg/g creatinine. Another study among stainless steel welders has found that a urinary chromium concentration on the order of 40 μg/l corresponds to an average exposure to 0.1 mg/m3 chromium trioxide.
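The two group-level figures quoted above imply an approximately proportional relation between the TWA concentration of hexavalent chromium in air and the mean end-of-shift urinary chromium. The following minimal sketch simply encodes that proportionality; it is illustrative only and, as stated above, holds on a group basis rather than for individual workers.

# Minimal sketch of the group-level relation quoted above: a TWA of 0.025 mg/m3
# corresponds to about 15 μg/g creatinine and 0.05 mg/m3 to about 30 μg/g creatinine
# at the end of the shift. The proportionality constant is derived from those two
# points and applies to group means only, not to individual workers.

SLOPE = 15.0 / 0.025  # μg/g creatinine per mg/m3 (= 600)

def expected_end_of_shift_chromium(twa_mg_m3: float) -> float:
    """Rough group-mean end-of-shift urinary chromium (μg/g creatinine)."""
    return SLOPE * twa_mg_m3

for twa in (0.025, 0.05, 0.1):
    print(f"TWA {twa} mg/m3 -> about {expected_end_of_shift_chromium(twa):.0f} μg/g creatinine")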

                                                    Hexavalent chromium readily crosses cell membranes, but once inside the cell, it is reduced to trivalent chromium. The concentration of chromium in erythrocytes might be an indicator of the exposure intensity to hexavalent chromium during the lifetime of the red blood cells, but this does not apply to trivalent chromium.

                                                    To what extent monitoring chromium in urine is useful for health risk estimation remains to be assessed.

                                                    Cobalt

Once absorbed, by inhalation and to some extent via the oral route, cobalt is eliminated mainly with urine, with a biological half-life of a few days. Exposure to soluble cobalt compounds leads to an increase of cobalt concentration in blood and urine.

                                                    The concentrations of cobalt in blood and in urine are influenced chiefly by recent exposure. In non-occupationally exposed subjects, urinary cobalt is usually below 2 μg/g creatinine and serum/plasma cobalt below 0.05 μg/100 ml.

                                                    For TWA exposures of 0.1 mg/m3 and 0.05 mg/m3, mean urinary levels ranging from about 30 to 75 μg/l and 30 to 40 μg/l, respectively, have been reported (using end-of-shift samples). Sampling time is important as there is a progressive increase in the urinary levels of cobalt during the workweek.

                                                    In workers exposed to cobalt oxides, cobalt salts, or cobalt metal powder in a refinery, a TWA of 0.05 mg/m3 has been found to lead to an average cobalt concentration of 33 and 46 μg/g creatinine in the urine collected at the end of the shift on Monday and Friday, respectively.

                                                    Lead

Inorganic lead, a cumulative toxin absorbed by the lungs and the gastrointestinal tract, is clearly the metal that has been most extensively studied; thus, of all the metal contaminants, lead is the one for which biological methods of assessing recent exposure or body burden are most reliable.

                                                    In a steady-state exposure situation, lead in whole blood is considered to be the best indicator of the concentration of lead in soft tissues and hence of recent exposure. However, the increase of blood lead levels (Pb-B) becomes progressively smaller with increasing levels of lead exposure. When occupational exposure has been prolonged, cessation of exposure is not necessarily associated with a return of Pb-B to a pre-exposure (background) value because of the continuous release of lead from tissue depots. The normal blood and urinary lead levels are generally below 20 μg/100 ml and 50 μg/g creatinine, respectively. These levels may be influenced by the dietary habits and the place of residence of the subjects. The WHO has proposed 40 μg/100 ml as the maximal tolerable individual blood lead concentration for adult male workers, and 30 μg/100 ml for women of child-bearing age. In children, lower blood lead concentrations have been associated with adverse effects on the central nervous system. Lead level in urine increases exponentially with increasing Pb-B and under a steady-state situation is mainly a reflection of recent exposure.

The amount of lead excreted in urine after administration of a chelating agent (e.g., CaEDTA) reflects the mobilizable pool of lead. In control subjects, the amount of lead excreted in urine within 24 hours after intravenous administration of one gram of EDTA usually does not exceed 600 μg. It seems that under constant exposure, chelatable lead values reflect mainly the lead pool of blood and soft tissues, with only a small fraction derived from bones.

                                                    An x-ray fluorescence technique has been developed for measuring lead concentration in bones (phalanges, tibia, calcaneus, vertebrae), but presently the limit of detection of the technique restricts its use to occupationally exposed persons.

                                                    Determination of lead in hair has been proposed as a method of evaluating the mobilizable pool of lead. However, in occupational settings, it is difficult to distinguish between lead incorporated endogenously into hair and that simply adsorbed on its surface.

                                                    The determination of lead concentration in the circumpulpal dentine of deciduous teeth (baby teeth) has been used to estimate exposure to lead during early childhood.

                                                    Parameters reflecting the interference of lead with biological processes can also be used for assessing the intensity of exposure to lead. The biological parameters which are currently used are coproporphyrin in urine (COPRO-U), delta-aminolaevulinic acid in urine (ALA-U), erythrocyte protoporphyrin (EP, or zinc protoporphyrin), delta-aminolaevulinic acid dehydratase (ALA-D), and pyrimidine-5’-nucleotidase (P5N) in red blood cells. In steady-state situations, the changes in these parameters are positively (COPRO-U, ALA-U, EP) or negatively (ALA-D, P5N) correlated with lead blood levels. The urinary excretion of COPRO (mostly the III isomer) and ALA starts to increase when the concentration of lead in blood reaches a value of about 40 μg/100 ml. Erythrocyte protoporphyrin starts to increase significantly at levels of lead in blood of about 35 μg/100 ml in males and 25 μg/100 ml in females. After the termination of occupational exposure to lead, the erythrocyte protoporphyrin remains elevated out of proportion to current levels of lead in blood. In this case, the EP level is better correlated with the amount of chelatable lead excreted in urine than with lead in blood.

                                                    Slight iron deficiency will also cause an elevated protoporphyrin concentration in red blood cells. The red blood cell enzymes, ALA-D and P5N, are very sensitive to the inhibitory action of lead. Within the range of blood lead levels of 10 to 40 μg/100 ml, there is a close negative correlation between the activity of both enzymes and blood lead.
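As a purely illustrative summary of the approximate thresholds quoted above, the sketch below returns the biomarker pattern that would typically be expected at a given blood lead level; the thresholds are taken from the text, while the function itself is hypothetical and is not a diagnostic rule.

# Purely illustrative: the approximate blood lead (PbB) thresholds quoted above,
# returned as the biomarker pattern expected at a given PbB. The thresholds come
# from the text; the function itself is hypothetical and not a diagnostic rule.

def expected_biomarker_pattern(pbb_ug_100ml: float, female: bool) -> dict:
    ep_threshold = 25.0 if female else 35.0   # erythrocyte protoporphyrin
    return {
        "ALA-U and COPRO-U increased": pbb_ug_100ml > 40.0,
        "erythrocyte protoporphyrin (EP) increased": pbb_ug_100ml > ep_threshold,
        "ALA-D and P5N activity depressed": pbb_ug_100ml > 10.0,  # inhibition seen within 10-40
    }

print(expected_biomarker_pattern(30.0, female=True))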

                                                    Alkyl Lead

                                                    In some countries, tetraethyllead and tetramethyllead are used as antiknock agents in automobile fuels. Lead in blood is not a good indicator of exposure to tetraalkyllead, whereas lead in urine seems to be useful for evaluating the risk of overexposure.

                                                    Manganese

                                                    In the occupational setting, manganese enters the body mainly through the lungs; absorption via the gastrointestinal tract is low and probably depends on a homeostatic mechanism. Manganese elimination occurs through the bile, with only small amounts excreted with urine.

                                                    The normal concentrations of manganese in urine, blood, and serum or plasma are usually less than 3 μg/g creatinine, 1 μg/100 ml, and 0.1 μg/100 ml, respectively.

It seems that, on an individual basis, neither manganese in blood nor manganese in urine is correlated with external exposure parameters.

                                                    There is apparently no direct relation between manganese concentration in biological material and the severity of chronic manganese poisoning. It is possible that, following occupational exposure to manganese, early adverse central nervous system effects might already be detected at biological levels close to normal values.

                                                    Metallic Mercury and its Inorganic Salts

                                                    Inhalation represents the main route of uptake of metallic mercury. The gastrointestinal absorption of metallic mercury is negligible. Inorganic mercury salts can be absorbed through the lungs (inhalation of inorganic mercury aerosol) as well as the gastrointestinal tract. The cutaneous absorption of metallic mercury and its inorganic salts is possible.

                                                    The biological half-life of mercury is of the order of two months in the kidney but is much longer in the central nervous system.

Inorganic mercury is excreted mainly with the faeces and urine. Small quantities are excreted through the salivary, lacrimal and sweat glands. Mercury can also be detected in expired air during the few hours following exposure to mercury vapour. Under chronic exposure conditions there is, at least on a group basis, a relationship between the intensity of recent exposure to mercury vapour and the concentration of mercury in blood or urine. Early investigations, in which static samples were used to monitor general workroom air, showed that an average mercury concentration in air (Hg–air) of 100 μg/m3 corresponds to average mercury levels in blood (Hg–B) and in urine (Hg–U) of 6 μg Hg/100 ml and 200 to 260 μg/l, respectively. More recent observations, particularly those assessing the contribution of the external micro-environment close to the respiratory tract of the workers, indicate that the air (μg/m3)/urine (μg/g creatinine)/blood (μg/100 ml) mercury relationship is approximately 1/1.2/0.045. Several epidemiological studies on workers exposed to mercury vapour have demonstrated that for long-term exposure, the critical effect levels of Hg–U and Hg–B are approximately 50 μg/g creatinine and 2 μg/100 ml, respectively.

                                                    However, some recent studies seem to indicate that signs of adverse effects on the central nervous system or the kidney can already be observed at a urinary mercury level below 50 μg/g creatinine.

                                                    Normal urinary and blood levels are generally below 5 μg/g creatinine and 1 μg/100 ml, respectively. These values can be influenced by fish consumption and the number of mercury amalgam fillings in the teeth.
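The approximate 1/1.2/0.045 air/urine/blood relationship quoted above can be turned into a small back-of-the-envelope converter, as in the hedged sketch below; the relation holds only on a group basis under steady exposure, and the example air concentration is chosen merely to show that it reproduces the critical effect levels mentioned earlier.

# Hedged sketch of the approximate group-level relationship quoted above:
# air (μg/m3) : urine (μg/g creatinine) : blood (μg/100 ml) is roughly 1 : 1.2 : 0.045.
# It holds on a group basis under steady exposure; individual values scatter widely.

def expected_hg_levels(air_ug_m3: float) -> dict:
    return {
        "urine_ug_per_g_creatinine": 1.2 * air_ug_m3,
        "blood_ug_per_100ml": 0.045 * air_ug_m3,
    }

# About 40 μg/m3 in air reproduces the critical effect levels cited above
# (roughly 50 μg/g creatinine in urine and 2 μg/100 ml in blood).
print(expected_hg_levels(40.0))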

                                                    Organic Mercury Compounds

                                                    The organic mercury compounds are easily absorbed by all the routes. In blood, they are to be found mainly in red blood cells (around 90%). A distinction must be made, however, between the short chain alkyl compounds (mainly methylmercury), which are very stable and are resistant to biotransformation, and the aryl or alkoxyalkyl derivatives, which liberate inorganic mercury in vivo. For the latter compounds, the concentration of mercury in blood, as well as in urine, is probably indicative of the exposure intensity.

                                                    Under steady-state conditions, mercury in whole blood and in hair correlates with methylmercury body burden and with the risk of signs of methylmercury poisoning. In persons chronically exposed to alkyl mercury, the earliest signs of intoxication (paresthesia, sensory disturbances) may occur when the level of mercury in blood and in hair exceeds 20 μg/100 ml and 50 μg/g, respectively.

                                                    Nickel

Nickel is not a cumulative toxin; almost all of the absorbed amount is excreted, mainly via the urine, with a biological half-life of 17 to 39 hours. In non-occupationally exposed subjects, the urine and plasma concentrations of nickel are usually below 2 μg/g creatinine and 0.05 μg/100 ml, respectively.

                                                    The concentrations of nickel in plasma and in urine are good indicators of recent exposure to metallic nickel and its soluble compounds (e.g., during nickel electroplating or nickel battery production). Values within normal ranges usually indicate nonsignificant exposure and increased values are indicative of overexposure.

                                                    For workers exposed to soluble nickel compounds, a biological limit value of 30 μg/g creatinine (end of shift) has been tentatively proposed for nickel in urine.

                                                    In workers exposed to slightly soluble or insoluble nickel compounds, increased levels in body fluids generally indicate significant absorption or progressive release from the amount stored in the lungs; however, significant amounts of nickel may be deposited in the respiratory tract (nasal cavities, lungs) without any significant elevation of its plasma or urine concentration. Therefore, “normal” values have to be interpreted cautiously and do not necessarily indicate absence of health risk.

                                                    Selenium

Selenium is an essential trace element. Soluble selenium compounds seem to be easily absorbed through the lungs and the gastrointestinal tract. Selenium is mainly excreted in urine, but when exposure is very high it can also be excreted in exhaled air as dimethylselenide vapour. Normal selenium concentrations in serum and urine depend on daily intake, which may vary considerably in different parts of the world; they are usually below 15 μg/100 ml and 25 μg/g creatinine, respectively. The concentration of selenium in urine is mainly a reflection of recent exposure. The relationship between the intensity of exposure and selenium concentration in urine has not yet been established.

                                                    It seems that the concentration in plasma (or serum) and urine mainly reflects short-term exposure, whereas the selenium content of erythrocytes reflects more long-term exposure.

Measuring selenium in blood or urine gives some information on selenium status. Currently it is used more often to detect a deficiency than an overexposure. Since the available data concerning the health risk of long-term exposure to selenium and the relationship between potential health risk and levels in biological media are too limited, no biological threshold value can be proposed.

                                                    Vanadium

                                                    In industry, vanadium is absorbed mainly via the pulmonary route. Oral absorption seems low (less than 1%). Vanadium is excreted in urine with a biological half-life of about 20 to 40 hours, and to a minor degree in faeces. Urinary vanadium seems to be a good indicator of recent exposure, but the relationship between uptake and vanadium levels in urine has not yet been sufficiently established. It has been suggested that the difference between post-shift and pre-shift urinary concentrations of vanadium permits the assessment of exposure during the workday, whereas urinary vanadium two days after cessation of exposure (Monday morning) would reflect accumulation of the metal in the body. In non-occupationally exposed persons, vanadium concentration in urine is usually below 1 μg/g creatinine. A tentative biological limit value of 50 μg/g creatinine (end of shift) has been proposed for vanadium in urine.
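As an illustration of the two urinary indices suggested above, the following sketch computes the post-shift minus pre-shift difference as a workday exposure index and treats a Monday-morning sample as an indicator of accumulation; all sample values for the hypothetical worker are assumptions.

# Illustrative sketch of the two urinary vanadium indices suggested above.
# All sample values for the hypothetical worker are assumptions, not measured data.

TENTATIVE_LIMIT = 50.0    # μg/g creatinine, end of shift
NORMAL_BACKGROUND = 1.0   # μg/g creatinine, non-occupationally exposed persons

def workday_exposure_index(pre_shift: float, post_shift: float) -> float:
    """Post-shift minus pre-shift urinary vanadium (μg/g creatinine)."""
    return post_shift - pre_shift

pre_shift, post_shift = 8.0, 35.0   # same workday
monday_morning = 6.0                # two days after exposure stopped: reflects accumulation

print("Workday exposure index:", workday_exposure_index(pre_shift, post_shift), "μg/g creatinine")
print("Accumulation indicator (Monday morning):", monday_morning, "μg/g creatinine")
print("End-of-shift value below tentative limit:", post_shift < TENTATIVE_LIMIT)
print("Monday-morning value above normal background:", monday_morning > NORMAL_BACKGROUND)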

                                                     


                                                    Ergonomics and Standardization

                                                    Origins

Standardization in the field of ergonomics has a relatively short history. It began in the early 1970s, when the first committees were founded at the national level (e.g., in Germany within the standardization institute DIN), and it continued at the international level after the foundation of ISO (International Organization for Standardization) TC (Technical Committee) 159 “Ergonomics” in 1975. Ergonomics standardization now takes place at regional levels as well, for example, at the European level within CEN (Comité européen de normalisation), which established its TC 122 “Ergonomics” in 1987.

The existence of the latter committee underscores the fact that one of the important reasons for establishing committees for the standardization of ergonomics knowledge and principles lies in legal (and quasi-legal) regulations, especially with respect to safety and health, which require the application of ergonomics principles and findings in the design of products and work systems. National laws requiring the application of well-established ergonomics findings were the reason for the establishment of the German ergonomics committee in 1970, and European Directives, especially the Machinery Directive (relating to safety standards), were responsible for establishing an ergonomics committee at the European level. Since legal regulations usually are not, cannot and should not be very specific, the task of specifying which ergonomics principles and findings should be applied was given to or taken up by ergonomics standardization committees. Especially at the European level, ergonomics standardization can contribute to providing broad and comparable conditions of machinery safety, thus removing barriers to the free trade of machinery within the continent itself.

                                                    Perspectives

Ergonomics standardization thus started from a strongly protective, albeit preventive, perspective, with ergonomics standards being developed with the aim of protecting workers against adverse effects at different levels of health protection. Ergonomics standards were thus prepared with the following intentions in view:

                                                    • to ensure that assigned tasks do not exceed the limits of the performance capacities of the worker
• to prevent injury or any detrimental effect to the health of the worker, whether permanent or transient, in the short or the long run, even if the tasks in question can, for a short time, be performed without negative effects
                                                    • to provide that tasks and working conditions will not lead to impairments, even if recuperation is possible with time.

                                                     

International standardization, which was not so closely coupled to legislation, has on the other hand always tried to go further, producing standards that go beyond the prevention of and protection against adverse effects (e.g., by specifying minimal/maximal values) and instead proactively provide for optimal working conditions, promoting the well-being and personal development of the worker as well as the effectiveness, efficiency, reliability and productivity of the work system.

                                                    This is a point where it becomes evident that ergonomics, and especially ergonomics standardization, has very distinct social and political dimensions. Whereas the protective approach with respect to safety and health is generally accepted and agreed upon among the parties involved (employers, unions, administration and ergonomics experts) for all levels of standardization, the proactive approach is not equally accepted by all parties in the same way. This might be due to the fact that, especially where legislation requires the application of ergonomics principles (and thus either explicitly or implicitly the application of ergonomics standards), some parties feel that such standards might limit their freedom of action or negotiation. Since international standards are less compelling (transferring them into the body of national standards is at the discretion of the national standardization committees) the proactive approach has been developed furthest at the international level of ergonomics standardization.

                                                    The fact that certain regulations would indeed restrict the discretion of those to whom they applied served to discourage standardization in certain areas, for example in connection with the European Directives under Article 118a of the Single European Act, relating to safety and health in the use and operation of machinery at the workplace, and in the design of work systems and workplace design. On the other hand, under the Directives issued under Article 100a, relating to safety and health in the design of machinery with regard to the free trade of this machinery within the European Union (EU), European ergonomics standardization is mandated by the European Commission.

                                                    From an ergonomics point of view, however, it is difficult to understand why ergonomics in the design of machinery should be different from that in the use and operation of machinery within a work system. It is thus to be hoped that the distinction will be given up in the future, since it seems to be more detrimental than beneficial to the development of a consistent body of ergonomics standards.

                                                    Types of Ergonomics Standards

The first international ergonomics standard to have been developed (based on a German DIN national standard) is ISO 6385, “Ergonomic principles in the design of work systems”, published in 1981. It is the basic standard of the ergonomics standards series and set the stage for the standards which followed by defining the basic concepts and stating the general principles of the ergonomic design of work systems, including tasks, tools, machinery, workstations, work space, work environment and work organization. This international standard, which is now undergoing revision, is a guideline standard, and as such provides guidelines to be followed. It does not, however, provide technical or physical specifications which have to be met. These can be found in a different type of standard, namely specification standards, for example, those on anthropometry or thermal conditions. The two types of standards fulfil different functions. While guideline standards intend to show their users “what to do and how to do it” and indicate those principles that must or should be observed, for example, with respect to mental workload, specification standards provide users with detailed information about, for example, safety distances or measurement procedures that have to be met, and compliance with these prescriptions can be tested by specified procedures. This is not always possible with guideline standards, although despite their relative lack of specificity it can usually be demonstrated when and where guidelines have been violated. A subset of specification standards are “database” standards, which provide the user with relevant ergonomics data, for example, body dimensions.

                                                    CEN standards are classified as A-, B- and C-type standards, depending on their scope and field of application. A-type standards are general, basic standards which apply to all kinds of applications, B-type standards are specific for an area of application (which means that most of the ergonomics standards within the CEN will be of this type), and C-type standards are specific for a certain kind of machinery, for example, hand-held drilling machines.

                                                    Standardization Committees

Ergonomics standards, like other standards, are produced in the appropriate technical committees (TCs), their subcommittees (SCs) or working groups (WGs). For the ISO this is TC 159, for the CEN it is TC 122, and on the national level, the respective national committees. Besides the ergonomics committees, ergonomics is also dealt with in TCs working on machine safety (e.g., CEN TC 114 and ISO TC 199), with which liaison and close cooperation are maintained. Liaisons are also established with other committees for which ergonomics might be of relevance. Responsibility for ergonomics standards, however, is reserved to the ergonomics committees themselves.

A number of other organizations are engaged in the production of ergonomics standards, such as the IEC (International Electrotechnical Commission); CENELEC, or the respective national committees in the electrotechnical field; the CCITT (Comité consultatif international télégraphique et téléphonique) or ETSI (European Telecommunication Standards Institute) in the field of telecommunications; ECMA (European Computer Manufacturers Association) in the field of computer systems; and CAMAC (Computer Assisted Measurement and Control Association) in the field of new technologies in manufacturing, to name only a few. With some of these the ergonomics committees maintain liaisons in order to avoid duplication of work or inconsistent specifications; with some organizations (e.g., the IEC), joint technical committees are even established for cooperation in areas of mutual interest. With other committees, however, there is no coordination or cooperation at all. The main purpose of these committees is to produce (ergonomics) standards that are specific to their field of activity. Since the number of such organizations at the different levels is rather large, a complete overview of ergonomics standardization becomes quite complicated (if not impossible). The present review will therefore be restricted to ergonomics standardization in the international and European ergonomics committees.

                                                    Structure of Standardization Committees

                                                    Ergonomics standardization committees are quite similar to one another in structure. Usually one TC within a standardization organization is responsible for ergonomics. This committee (e.g., ISO TC 159) mainly has to do with decisions about what should be standardized (e.g., work items) and how to organize and coordinate the standardization within the committee, but usually no standards are prepared at this level. Below the TC level are other committees. For example, the ISO has subcommittees (SCs), which are responsible for a defined field of standardization: SC 1 for general ergonomic guiding principles, SC 3 for anthropometry and biomechanics, SC 4 for human-system interaction and SC 5 for the physical work environment. CEN TC 122 has working groups (WGs) below the TC level which are so constituted as to deal with specified fields within ergonomics standardization. SCs within ISO TC 159 operate as steering committees for their field of responsibility and do the first voting, but usually they do not also prepare standards. This is done in their WGs, which are composed of experts nominated by their national committees, whereas SC and TC meetings are attended by national delegations representing national points of view. Within the CEN, duties are not sharply distinguished at the WG level; WGs operate both as steering and production committees, although a good deal of work is accomplished in ad hoc groups, which are composed of members of the WG (nominated by their national committees) and established to prepare the drafts for a standard. WGs within an ISO SC are established to do the practical standardization work, that is, prepare drafts, work on comments, identify needs for standardization, and prepare proposals to the SC and TC, which will then take the appropriate decisions or actions.

                                                    Preparation of Ergonomics Standards

The preparation of ergonomics standards has changed quite markedly over recent years in view of the stronger emphasis now being placed on European and other international developments. In the beginning, national standards, which had been prepared by experts from one country in their national committee and agreed upon by the interested parties among the general public of that country in a specified voting procedure, were transferred as input to the responsible SC and WG of ISO TC 159, after a formal vote had been taken at the TC level that such an international standard should be prepared. The working group, composed of ergonomics experts (and experts from politically interested parties) from all participating member bodies (i.e., the national standardization organizations) of TC 159 who were willing to cooperate in this work project, would then work on any inputs and prepare a working draft (WD). After this draft proposal is agreed upon in the WG, it becomes a committee draft (CD), which is distributed to the member bodies of the SC for approval and comments. If the draft receives substantial support from the SC member bodies (i.e., if at least two-thirds vote in favour), and after comments by the national committees have been incorporated by the WG in the improved version, a Draft International Standard (DIS) is submitted for voting to all members of TC 159. If substantial support is achieved at this step from the member bodies of the TC (and perhaps after incorporating editorial changes), this version will then be published as an International Standard (IS) by the ISO. Voting of the member bodies at the TC and SC level is based on voting at the national level, and comments can be supplied through the member bodies by experts or interested parties in each country. The procedure is roughly equivalent in CEN TC 122, with the exception that there are no SCs below the TC level and that voting takes place with weighted votes (according to the size of the country), whereas within the ISO the rule is one country, one vote. If a draft fails at any step, and unless the WG decides that an agreeable revision cannot be achieved, it has to be revised and then must pass through the voting procedure again.
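A much-simplified model of this drafting sequence (WD to CD to DIS to IS) is sketched below, applying a two-thirds approval rule at each vote; real ISO and CEN procedures contain further rules (abstention handling, negative-vote limits, editorial loops) that are deliberately omitted, so this illustrates the flow only and is not a reproduction of the ISO directives.

# A much-simplified model of the drafting sequence described above
# (WD -> CD -> DIS -> IS), with a two-thirds approval rule applied at each vote.
# Real ISO/CEN procedures include further rules (abstentions, negative-vote limits,
# editorial loops) that are deliberately left out; this only illustrates the flow.

STAGES = ["WD", "CD", "DIS", "IS"]

def two_thirds_approved(votes_for: int, votes_cast: int) -> bool:
    return votes_cast > 0 and votes_for / votes_cast >= 2 / 3

def advance(stage: str, votes_for: int, votes_cast: int) -> str:
    """Move to the next stage on a successful vote; otherwise stay for revision."""
    i = STAGES.index(stage)
    if i == len(STAGES) - 1:
        return stage  # already published as an International Standard
    return STAGES[i + 1] if two_thirds_approved(votes_for, votes_cast) else stage

stage = "WD"
for votes_for, votes_cast in [(14, 18), (11, 18), (16, 18), (17, 18)]:
    stage = advance(stage, votes_for, votes_cast)
    print(f"{votes_for}/{votes_cast} in favour -> stage is now {stage}")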

                                                    International standards are then transferred into national standards if the national committees vote accordingly. By contrast, European Standards (ENs) have to be transferred into national standards by the CEN members and conflicting national standards have to be withdrawn. That means that harmonized ENs will be effective in all CEN countries (and, due to their influence on trade, will be relevant to manufacturers in all other countries who intend to sell goods to a customer in a CEN country).

                                                    ISO-CEN Cooperation

                                                    In order to avoid conflicting standards and duplication of work and to allow non-CEN members to take part in developments in the CEN, a cooperative agreement between the ISO and the CEN has been achieved (the so-called Vienna Agreement) which regulates the formalities and provides for a so-called parallel voting procedure, which allows the same drafts to be voted upon in the CEN and the ISO in parallel, if the responsible committees agree to do so. Among the ergonomics committees the tendency is quite clear: avoid duplication of work (manpower and financial resources are too limited), avoid conflicting specifications, and try to achieve a consistent body of ergonomics standards based on a division of labour. Whereas CEN TC 122 is bound by the decisions of the EU administration and gets mandated work items to stipulate the specifications of European directives, ISO TC 159 is free to standardize whatever it thinks necessary or appropriate in the field of ergonomics. This has led to shifts in the emphasis of both committees, with the CEN concentrating on machinery and safety-related topics and the ISO concentrating on areas where broader market interests than Europe are concerned (e.g., work with VDUs and control-room design for process and related industries); on areas where the operation of machinery is concerned, as in work system design; and on such areas as work environment and work organization as well. The intention, however, is to transfer work results from the CEN to the ISO, and vice versa, in order to build up a body of consistent ergonomics standards which in fact are effective all over the world.

                                                    The formal procedure of producing standards is still the same today. But since the emphasis has shifted more and more to the international or the European level, more and more activities are being transferred to these committees. Drafts are now usually worked out directly in these committees and are no longer based on existing national standards. After the decision has been made that a standard should be developed, work directly starts at one of these supranational levels, based on whatever input there may be available, sometimes starting from zero. This changes the role of the national ergonomics committees quite dramatically. While heretofore they formally developed their own national standards according to their national rules, they now have the task of observing and influencing standardization on the supranational levels—via the experts who work out the standards or via comments made at the different steps of voting (within the CEN, a national standardization project will be halted if a comparable project is being simultaneously worked on at the CEN level). This makes the task still more complicated, since this influence can only be exerted indirectly and since the preparation of ergonomics standards is not just a matter of pure science but a matter of bargaining, consensus and agreement (not least due to the political implications which the standard might have). This, of course, is one of the reasons why the process of producing an international or European ergonomics standard usually takes several years and why ergonomics standards cannot reflect the latest state of the art in ergonomics. International ergonomics standards thus have to be examined every five years, and, if necessary, undergo revision.

                                                    Fields of Ergonomics Standardization

                                                    International ergonomics standardization started with guidelines on the general principles of ergonomics in the design of work systems; they were laid down in ISO 6385, which is now under revision in order to incorporate new developments. The CEN has produced a similar basic standard (EN 614, Part 1, 1994)—this is oriented more to machinery and safety—and is preparing a standard with guidelines on task design as a second part of this basic standard. The CEN thus emphasizes the importance of operator tasks in the design of machinery or work systems, for which appropriate tools or machinery have to be designed.

                                                    Another area where concepts and guidelines have been laid down in standards is the field of mental workload. ISO 10075, Part 1, defines terms and concepts (e.g., fatigue, monotony, reduced vigilance), and Part 2 (at the stage of a DIS in the latter half of the 1990s) provides guidelines for the design of work systems with respect to mental workload in order to avoid impairments.

                                                    SC 3 of ISO TC 159 and WG 1 of CEN TC 122 produce standards on anthropometry and biomechanics, covering, among other topics, methods of anthropometric measurements, body dimensions, safety distances and access dimensions, the evaluation of working postures and the design of workplaces in relation to machinery, recommended limits of physical strength and problems of manual handling.

SC 4 of ISO TC 159 shows how technological and social changes affect ergonomics standardization and the programme of such a subcommittee. SC 4 started as “Signals and Controls” by standardizing principles for displaying information and designing control actuators, with one of its work items being the visual display unit (VDU), used for office tasks. It soon became apparent, however, that standardizing the ergonomics of VDUs would not be sufficient, and that standardization “around” this workstation (in the sense of a work system) was required, covering areas such as hardware (e.g., the VDU itself, including displays, keyboards, non-keyboard input devices, workstations), work environment (e.g., lighting), work organization (e.g., task requirements) and software (e.g., dialogue principles, menu and direct manipulation dialogues). This led to a multipart standard (ISO 9241) covering “ergonomic requirements for office work with VDUs”, which at present comprises 17 parts, 3 of which have already reached the status of an IS. This standard will be transferred to the CEN (as EN 29241), which will specify requirements for the VDU directive (90/270/EEC) of the EU, although this is a directive under Article 118a of the Single European Act. This series of standards provides guidelines as well as specifications, depending on the subject of the given part of the standard, and introduces a new concept of standardization, the user performance approach, which might help to solve some of the problems in ergonomics standardization. It is described more fully in the chapter Visual Display Units.

The user performance approach is based on the idea that the aim of standardization is to prevent impairment and to provide for optimal working conditions for the operator, not to establish technical specifications per se. Specification is thus regarded only as a means to the end of unimpaired, optimal user performance. The important thing is to achieve this unimpaired performance of the operator, regardless of whether a certain physical specification is met. This requires, first, that the unimpaired operator performance to be achieved, for example, reading performance on a VDU, be specified, and second, that technical specifications be developed which will enable the desired performance to be achieved, based on the available evidence. The manufacturer is then free to follow these technical specifications, which will ensure that the product complies with the ergonomics requirements. Alternatively, the manufacturer may demonstrate, by comparison with a product that is known to fulfil the requirements (either through compliance with the technical specifications of the standard or through proven performance), that the new product meets the performance requirements as well as or better than the reference product, with or without compliance with the technical specifications of the standard. A test procedure to be followed for demonstrating conformance with the user performance requirements of the standard is specified in the standard.
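This conformance logic, conformance either by meeting the technical specifications or by demonstrating performance at least as good as that of a reference product, can be sketched as follows; the data structure, the performance measures and the example values are illustrative assumptions and are not taken from ISO 9241.

# Sketch of the conformance logic described above: a product conforms either by
# meeting the technical specifications or by demonstrating user performance at
# least as good as a reference product known to conform. The data structure, the
# performance measures and the example values are illustrative assumptions,
# not part of ISO 9241.

from dataclasses import dataclass

@dataclass
class TestResult:
    meets_technical_specs: bool
    reading_speed: float       # e.g., words per minute in the specified test procedure
    error_rate: float          # proportion of reading errors
    discomfort_rating: float   # subjective rating, lower is better

def conforms(product: TestResult, reference: TestResult) -> bool:
    if product.meets_technical_specs:
        return True
    # Performance route: as good as or better than the reference on every measure,
    # including comfort and effort, not just speed and accuracy.
    return (product.reading_speed >= reference.reading_speed
            and product.error_rate <= reference.error_rate
            and product.discomfort_rating <= reference.discomfort_rating)

reference_display = TestResult(True, 180.0, 0.02, 2.0)
new_display = TestResult(False, 195.0, 0.015, 1.8)
print("New display conforms:", conforms(new_display, reference_display))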

                                                    This approach helps to overcome two problems. Standards, by virtue of their specifications, which are based on the state of the art (and technology) at the time of preparation of the standard, can restrict new developments. Specifications that are based on a certain technology (e.g., cathode-ray tubes) may be inappropriate for other technologies. Independently of technology, however, the user of a display device (for instance) should be able to read and understand the information displayed effectively and efficiently without any impairments, irrespective of whatever technique may be used. Performance in this case must, however, not be restricted to the pure output (as measured in terms of speed or accuracy) but must include considerations of comfort and effort as well.

                                                    The second problem that can be dealt with by this approach is the problem of interactions between conditions. Physical specification usually is unidimensional, leaving other conditions out of consideration. In the case of interactive effects, however, this can be misleading or even wrong. By specifying performance requirements, on the other hand, and leaving the means to achieve these to the manufacturer, any solution that satisfies these performance requirements will be acceptable. Treating specification as a means to an end thus represents a genuine ergonomic perspective.

                                                    Another standard with a work system approach is under preparation in SC 4, which relates to the design of control rooms, for instance, for process industries or power stations. A multipart standard (ISO 11064) is expected to be prepared as a result, with the different parts dealing with such aspects of control-room design as layout, operator workstation design, and the design of displays and input devices for process control. Because these work items and the approach taken clearly exceed problems of the design of “displays and controls”, SC 4 has been renamed “Human-System Interaction”.

                                                    Environmental problems, especially those relating to thermal conditions and communication in noisy environments, are dealt with in SC 5, where standards have been or are being prepared on measurement methods, methods for the estimation of heat stress, conditions of thermal comfort, metabolic heat production, and on auditory and visual danger signals, speech interference level and the assessment of speech communication.

                                                    CEN TC 122 covers roughly the same fields of ergonomics standardization, although with a different emphasis and a different structure of its working groups. It is intended, however, that by a division of labour between the ergonomics committees, and mutual acceptance of work results, a general and usable set of ergonomics standards will be developed.

                                                     


                                                    Target Organ and Critical Effects

                                                    The priority objective of occupational and environmental toxicology is to improve the prevention or substantial limitation of health effects of exposure to hazardous agents in the general and occupational environments. To this end systems have been developed for quantitative risk assessment related to a given exposure (see the section “Regulatory toxicology”).

                                                    The effects of a chemical on particular systems and organs are related to the magnitude of exposure and whether exposure is acute or chronic. In view of the diversity of toxic effects even within one system or organ, a uniform philosophy concerning the critical organ and critical effect has been proposed for the purpose of risk assessment and development of health-based recommended concentration limits of toxic substances in different environmental media.

                                                    From the point of view of preventive medicine, it is of particular importance to identify early adverse effects, based on the general assumption that preventing or limiting early effects may prevent more severe health effects from developing.

Such an approach has been applied to heavy metals. Although heavy metals such as lead, cadmium and mercury belong to a specific group of toxic substances whose chronic toxic effects depend on their accumulation in the organs, the definitions presented below were published by the Task Group on Metal Toxicity (Nordberg 1976).

                                                    The definition of the critical organ as proposed by the Task Group on Metal Toxicity has been adopted with a slight modification: the word metal has been replaced with the expression potentially toxic substance (Duffus 1993).

Whether a given organ or system is regarded as critical depends not only on the mechanism of toxicity of the hazardous agent but also on the route of absorption and the exposed population.

                                                    • Critical concentration for a cell: the concentration at which adverse functional changes, reversible or irreversible, occur in the cell.
                                                    • Critical organ concentration: the mean concentration in the organ at the time at which the most sensitive type of cells in the organ reach critical concentration.
                                                    • Critical organ: that particular organ which first attains the critical concentration of metal under specified circumstances of exposure and for a given population.
• Critical effect: a defined point in the relationship between dose and effect in the individual, namely the point at which an adverse effect occurs in cellular function of the critical organ. At an exposure level lower than that giving a critical concentration of metal in the critical organ, some effects may occur that do not impair cellular function per se, yet are detectable by means of biochemical and other tests. Such effects are defined as subcritical effects.

                                                     

The biological meaning of a subcritical effect is sometimes not known; it may represent a biomarker of exposure, an index of adaptation or a precursor of the critical effect (see “Toxicology test methods: Biomarkers”). The latter possibility can be particularly significant for prophylactic activities.

Table 1 displays examples of critical organs and effects for different chemicals. In chronic environmental exposure to cadmium, where the route of absorption is of minor importance (cadmium air concentrations range from 10 to 20 μg/m3 in urban areas and from 1 to 2 μg/m3 in rural areas), the critical organ is the kidney. In the occupational setting, where the TLV reaches 50 μg/m3 and inhalation constitutes the main route of exposure, two organs, the lung and the kidney, are regarded as critical.

                                                    Table 1. Examples of critical organs and critical effects

Substance (critical organ in chronic exposure): critical effect

Cadmium
  Lungs: non-threshold effect, lung cancer (unit risk 4.6 x 10⁻³)
  Kidney: threshold effect, increased excretion of low-molecular-weight proteins (β2-M, RBP) in urine
  Lungs: emphysema, slight function changes

Lead (adults)
  Haematopoietic system: increased delta-aminolevulinic acid excretion in urine (ALA-U); increased concentration of free erythrocyte protoporphyrin (FEP) in erythrocytes
  Peripheral nervous system: slowing of the conduction velocities of the slower nerve fibres

Mercury, elemental (young children)
  Central nervous system: decrease in IQ and other subtle effects; mercurial tremor (fingers, lips, eyelids)

Mercury, mercuric
  Kidney: proteinuria

Manganese (adults)
  Central nervous system: impairment of psychomotor functions

Manganese (children)
  Lungs: respiratory symptoms
  Central nervous system: impairment of psychomotor functions

Toluene
  Mucous membranes: irritation

Vinyl chloride
  Liver: cancer (angiosarcoma, unit risk 1 x 10⁻⁶)

Ethyl acetate
  Mucous membranes: irritation

                                                     

For lead, the critical organs in adults are the haematopoietic and peripheral nervous systems, where the critical effects (e.g., elevated free erythrocyte protoporphyrin concentration (FEP), increased excretion of delta-aminolevulinic acid in urine, or impaired peripheral nerve conduction) manifest when the blood lead level (an index of lead absorption in the system) approaches 200 to 300 μg/l. In small children the critical organ is the central nervous system (CNS), and symptoms of dysfunction detected with the use of a psychological test battery have been found to appear in the examined populations even at blood lead concentrations in the range of about 100 μg/l.

A number of other definitions have been formulated which may better reflect the meaning of the notion. According to the WHO (1989), the critical effect is defined as “the first adverse effect which appears when the threshold (critical) concentration or dose is reached in the critical organ. Adverse effects, such as cancer, with no defined threshold concentration are often regarded as critical. Decision on whether an effect is critical is a matter of expert judgement.” In the International Programme on Chemical Safety (IPCS) guidelines for developing Environmental Health Criteria Documents, the critical effect is described as “the adverse effect judged to be most appropriate for determining the tolerable intake”. The latter definition was formulated directly for the purpose of evaluating health-based exposure limits in the general environment. In this context, the most essential question seems to be which effect can be regarded as an adverse effect. Following current terminology, an adverse effect is a “change in morphology, physiology, growth, development or lifespan of an organism which results in impairment of the capacity to compensate for additional stress or increase in susceptibility to the harmful effects of other environmental influences. Decision on whether or not any effect is adverse requires expert judgement.”

Figure 1 displays hypothetical dose-response curves for different effects. In the case of exposure to lead, A can represent a subcritical effect (inhibition of erythrocyte ALA-dehydratase), B the critical effect (an increase in erythrocyte zinc protoporphyrin or an increase in the excretion of delta-aminolevulinic acid), C the clinical effect (anaemia) and D the fatal effect (death). For lead exposure there is abundant evidence illustrating how particular effects of exposure depend on lead concentration in blood (the practical counterpart of the dose), either in the form of the dose-response relationship or in relation to different variables (sex, age, etc.). Determining the critical effects and the dose-response relationship for such effects in humans makes it possible to predict the frequency of a given effect for a given dose or its counterpart (concentration in biological material) in a certain population.

                                                    Figure 1. Hypothetical dose-response curves for various effects

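To make the last point concrete, the sketch below shows how a fitted dose-response curve would be used to predict the expected frequency of an effect at a given blood lead concentration; the logistic form and the parameter values are invented for illustration and do not represent a fitted model of real lead data.

# Minimal sketch of how a fitted dose-response curve is used to predict the expected
# frequency of an effect at a given dose (here, blood lead in μg/l). The logistic
# form and the parameter values are invented for illustration; they are not a
# fitted model of real lead data.

import math

def response_frequency(pbb_ug_l: float, ed50: float = 350.0, slope: float = 0.02) -> float:
    """Predicted proportion of a population showing the effect at a given blood lead level."""
    return 1.0 / (1.0 + math.exp(-slope * (pbb_ug_l - ed50)))

for pbb in (100, 200, 300, 400):
    print(f"PbB {pbb} μg/l -> predicted frequency {response_frequency(pbb):.2f}")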

The critical effects can be of two types: those considered to have a threshold and those for which there may be some risk at any exposure level (non-threshold, genotoxic carcinogens and germ mutagens). Whenever possible, appropriate human data should be used as a basis for the risk assessment. In order to determine the threshold effects for the general population, assumptions concerning the exposure level (tolerable intake, biomarkers of exposure) have to be made such that the frequency of the critical effect in the population exposed to a given hazardous agent corresponds to the frequency of that effect in the general population. In lead exposure, the maximum recommended blood lead concentration for the general population (200 μg/l, median below 100 μg/l) (WHO 1987) is practically below the threshold value for the assumed critical effect, the elevated free erythrocyte protoporphyrin level, although it is not below the level associated with effects on the CNS in children or blood pressure in adults. In general, if data from well-conducted human population studies defining a no observed adverse effect level are the basis for safety evaluation, then an uncertainty factor of ten has been considered appropriate. In the case of occupational exposure the critical effects may refer to a certain part of the population (e.g., 10%). Accordingly, in occupational lead exposure the recommended health-based concentration of lead in blood has been adopted as 400 μg/l in men, where a 10% response level for ALA-U of 5 mg/l occurred at PbB concentrations of about 300 to 400 μg/l. For occupational exposure to cadmium (assuming the increased urinary excretion of low-weight proteins to be the critical effect), the level of 200 ppm cadmium in the renal cortex has been regarded as the admissible value, for this effect has been observed in 10% of the exposed population. Both of these values are under consideration for lowering in many countries at the present time (i.e., 1996).

There is no clear consensus on the appropriate methodology for risk assessment of chemicals for which the critical effect may not have a threshold, such as genotoxic carcinogens. A number of approaches, based largely on characterization of the dose-response relationship, have been adopted for the assessment of such effects. Because health risk caused by carcinogens does not enjoy socio-political acceptance, documents such as the Air Quality Guidelines for Europe (WHO 1987) present, for non-threshold effects, only values such as the unit lifetime risk (i.e., the risk associated with lifetime exposure to 1 μg/m3 of the hazardous agent) (see “Regulatory toxicology”).
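The unit lifetime risk lends itself to a simple calculation: under a linear non-threshold assumption, the excess lifetime risk is the unit risk multiplied by the lifetime exposure concentration. The short sketch below uses invented numbers purely for illustration; neither the unit risk nor the ambient level refers to any specific agent in the WHO guidelines.

```python
def excess_lifetime_risk(concentration_ug_m3, unit_risk_per_ug_m3):
    """Excess lifetime risk under a linear non-threshold assumption:
    unit lifetime risk (risk per 1 ug/m3 of lifetime exposure) times
    the lifetime exposure concentration."""
    return unit_risk_per_ug_m3 * concentration_ug_m3

# Hypothetical unit risk of 1e-5 per ug/m3 and an ambient level of 0.5 ug/m3:
print(excess_lifetime_risk(0.5, 1e-5))  # 5e-06, i.e., 5 excess cases per million
```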

At present, the basic step in risk assessment is determining the critical organ and the critical effects. The definitions of both the critical and the adverse effect reflect the responsibility of deciding which of the effects within a given organ or system should be regarded as critical, and this is directly related to the subsequent determination of recommended values for a given chemical in the general environment, for example, the Air Quality Guidelines for Europe (WHO 1987), or of health-based limits in occupational exposure (WHO 1980). Choosing the critical effect from within the range of subcritical effects may lead to recommended limits on toxic chemical concentrations in the general or occupational environment that are impossible to maintain in practice. Conversely, regarding as critical an effect that overlaps the early clinical effects may bring about the adoption of values at which adverse effects may develop in some part of the population. The decision on whether or not a given effect should be considered critical remains the responsibility of expert groups specializing in toxicity and risk assessment.

                                                     


                                                    Organic Solvents

                                                    Introduction

Organic solvents are volatile and generally lipophilic (soluble in body fat), although some, such as methanol and acetone, are also hydrophilic (water soluble). They have been used extensively not only in industry but also in consumer products such as paints, inks, thinners, degreasers, dry-cleaning agents, spot removers and repellents. Although biological monitoring can be applied to detect health effects (for example, effects on the liver and kidney) for the health surveillance of workers occupationally exposed to organic solvents, it is better used for exposure monitoring, because this approach is sensitive enough to give warnings well before any health effects occur. Screening workers for high sensitivity to solvent toxicity may also contribute to protecting their health.

                                                    Summary of Toxicokinetics

                                                    Organic solvents are generally volatile under standard conditions, although the volatility varies from solvent to solvent. Thus, the leading route of exposure in industrial settings is through inhalation. The rate of absorption through the alveolar wall of the lungs is much higher than that through the digestive tract wall, and a lung absorption rate of about 50% is considered typical for many common solvents such as toluene. Some solvents, for example, carbon disulphide and N,N-dimethylformamide in the liquid state, can penetrate intact human skin in amounts large enough to be toxic.

                                                    When these solvents are absorbed, a portion is exhaled in the breath without any biotransformation, but the greater part is distributed in organs and tissues rich in lipids as a result of their lipophilicity. Biotransformation takes place primarily in the liver (and also in other organs to a minor extent), and the solvent molecule becomes more hydrophilic, typically by a process of oxidation followed by conjugation, to be excreted via the kidney into the urine as metabolite(s). A small portion may be eliminated unchanged in the urine.

Thus, from a practical viewpoint, three biological materials (urine, blood and exhaled breath) are available for monitoring exposure to solvents. Another important factor in selecting biological materials for exposure monitoring is the speed of disappearance of the absorbed substance, for which the biological half-life, the time needed for a substance to diminish to one-half its original concentration, is a quantitative parameter. For example, solvents disappear from exhaled breath much more rapidly than the corresponding metabolites disappear from urine; that is, they have a much shorter half-life. Among urinary metabolites, the biological half-life varies depending on how quickly the parent compound is metabolised, so that sampling time in relation to exposure is often of critical importance (see below). A third consideration in choosing a biological material is the specificity of the target chemical to be analysed in relation to the exposure. For example, hippuric acid has long been used as a marker of exposure to toluene, but it is formed naturally by the body and can also be derived from non-occupational sources such as some food additives; it is therefore no longer considered a reliable marker when toluene exposure is low (less than 50 cm3/m3). Generally speaking, urinary metabolites have been most widely used as indicators of exposure to various organic solvents. Solvent in blood is analysed as a qualitative measure of exposure because it usually remains in the blood a shorter time and is more reflective of acute exposure, whereas solvent in exhaled breath is difficult to use for estimating average exposure because the concentration in breath declines very rapidly after cessation of exposure. Solvent in urine is a promising candidate as a measure of exposure, but it needs further validation.
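The role of the biological half-life can be illustrated with a simple first-order elimination model: the fraction of the original concentration remaining after time t is 0.5^(t / half-life). The sketch below compares hypothetical half-lives for a solvent in exhaled breath and for its urinary metabolite; the values are illustrative only and are not taken from WHO 1996.

```python
def remaining_fraction(hours_since_exposure, half_life_hours):
    """Fraction of the original concentration remaining after a given time,
    assuming simple first-order (exponential) elimination."""
    return 0.5 ** (hours_since_exposure / half_life_hours)

# Hypothetical half-lives: a solvent in exhaled breath (~1 h) versus its
# urinary metabolite (~12 h).
for hours in (1, 4, 16):
    breath = remaining_fraction(hours, 1.0)
    urine = remaining_fraction(hours, 12.0)
    print(hours, round(breath, 3), round(urine, 3))
```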

                                                    Biological Exposure Tests for Organic Solvents

In applying biological monitoring for solvent exposure, sampling time is important, as indicated above. Table 1 shows recommended sampling times for common solvents in the monitoring of everyday occupational exposure. When the solvent itself is to be analysed, attention should be paid to preventing possible loss (e.g., evaporation into room air) as well as contamination (e.g., dissolution of solvent from room air into the sample) during sample handling. If the samples need to be transported to a distant laboratory or stored before analysis, care should be taken to prevent loss. Freezing is recommended for metabolites, whereas refrigeration (but not freezing) in an airtight container without an air space (or, more preferably, in a headspace vial) is recommended for analysis of the solvent itself. In chemical analysis, quality control is essential for reliable results (for details, see the article “Quality assurance” in this chapter). In reporting the results, ethics should be respected (see the chapter Ethical Issues elsewhere in the Encyclopaedia).

Table 1. Some examples of target chemicals for biological monitoring and sampling time

Solvent | Target chemical | Urine/blood | Sampling time1
Carbon disulphide | 2-Thiothiazolidine-4-carboxylic acid | Urine | Th F
N,N-Dimethylformamide | N-Methylformamide | Urine | M Tu W Th F
2-Ethoxyethanol and its acetate | Ethoxyacetic acid | Urine | Th F (end of last workshift)
Hexane | 2,4-Hexanedione | Urine | M Tu W Th F
Hexane | Hexane | Blood | Confirmation of exposure
Methanol | Methanol | Urine | M Tu W Th F
Styrene | Mandelic acid | Urine | Th F
Styrene | Phenylglyoxylic acid | Urine | Th F
Styrene | Styrene | Blood | Confirmation of exposure
Toluene | Hippuric acid | Urine | Tu W Th F
Toluene | o-Cresol | Urine | Tu W Th F
Toluene | Toluene | Blood | Confirmation of exposure
Toluene | Toluene | Urine | Tu W Th F
Trichloroethylene | Trichloroacetic acid (TCA) | Urine | Th F
Trichloroethylene | Total trichloro-compounds (sum of TCA and free and conjugated trichloroethanol) | Urine | Th F
Trichloroethylene | Trichloroethylene | Blood | Confirmation of exposure
Xylenes2 | Methylhippuric acids | Urine | Tu W Th F
Xylenes2 | Xylenes | Blood | Tu W Th F

1 End of workshift unless otherwise noted; days of the week indicate preferred sampling days.
2 Three isomers, either separately or in any combination.

Source: Summarized from WHO 1996.

                                                     

A number of analytical procedures have been established for many solvents. Methods vary depending on the target chemical, but most recently developed methods use gas chromatography (GC) or high-performance liquid chromatography (HPLC) for separation. Use of an autosampler and a data processor is recommended for good quality control in chemical analysis. When the solvent itself in blood or in urine is to be analysed, application of the headspace technique in GC (headspace GC) is very convenient, especially when the solvent is sufficiently volatile. Table 2 outlines some examples of the methods established for common solvents.

Table 2. Some examples of analytical methods for biological monitoring of exposure to organic solvents

Solvent | Target chemical | Blood/urine | Analytical method
Carbon disulphide | 2-Thiothiazolidine-4-carboxylic acid | Urine | High-performance liquid chromatography with ultraviolet detection (UV-HPLC)
N,N-Dimethylformamide | N-Methylformamide | Urine | Gas chromatography with flame thermionic detection (FTD-GC)
2-Ethoxyethanol and its acetate | Ethoxyacetic acid | Urine | Extraction, derivatization and gas chromatography with flame ionization detection (FID-GC)
Hexane | 2,4-Hexanedione | Urine | Extraction, (hydrolysis) and FID-GC
Hexane | Hexane | Blood | Headspace FID-GC
Methanol | Methanol | Urine | Headspace FID-GC
Styrene | Mandelic acid | Urine | Desalting and UV-HPLC
Styrene | Phenylglyoxylic acid | Urine | Desalting and UV-HPLC
Styrene | Styrene | Blood | Headspace FID-GC
Toluene | Hippuric acid | Urine | Desalting and UV-HPLC
Toluene | o-Cresol | Urine | Hydrolysis, extraction and FID-GC
Toluene | Toluene | Blood | Headspace FID-GC
Toluene | Toluene | Urine | Headspace FID-GC
Trichloroethylene | Trichloroacetic acid (TCA) | Urine | Colorimetry, or esterification and gas chromatography with electron capture detection (ECD-GC)
Trichloroethylene | Total trichloro-compounds (sum of TCA and free and conjugated trichloroethanol) | Urine | Oxidation and colorimetry, or hydrolysis, oxidation, esterification and ECD-GC
Trichloroethylene | Trichloroethylene | Blood | Headspace ECD-GC
Xylenes | Methylhippuric acids (three isomers, either separately or in combination) | Urine | Headspace FID-GC

Source: Summarized from WHO 1996.

                                                    Evaluation

                                                    A linear relationship of the exposure indicators (listed in table 2) with the intensity of exposure to corresponding solvents may be established either through a survey of workers occupationally exposed to solvents, or by experimental exposure of human volunteers. Accordingly, the ACGIH (1994) and the DFG (1994), for example, have established the biological exposure index (BEI) and the biological tolerance value (BAT), respectively, as the values in the biological samples which are equivalent to the occupational exposure limit for airborne chemicals—that is, threshold limit value (TLV) and maximum workplace concentration (MAK), respectively. It is known, however, that the level of the target chemical in samples obtained from non-exposed people may vary, reflecting, for example, local customs (e.g., food), and that ethnic differences may exist in solvent metabolism. It is therefore desirable to establish limit values through the study of the local population of concern.
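As a rough illustration of how such a linear relationship might be used, and not the ACGIH or DFG procedure itself, the sketch below fits a straight line to invented survey data relating airborne exposure to a urinary indicator and reads off the indicator value corresponding to a hypothetical exposure limit. All figures are assumptions made for the example.

```python
# A minimal sketch, assuming hypothetical survey data and a hypothetical
# occupational exposure limit; not an established BEI or BAT derivation.
from statistics import linear_regression  # Python 3.10+

airborne_ppm = [5, 10, 20, 40, 60, 80]              # hypothetical exposure levels
urinary_indicator = [0.3, 0.6, 1.1, 2.3, 3.2, 4.1]  # hypothetical indicator values (g/l)

slope, intercept = linear_regression(airborne_ppm, urinary_indicator)

tlv_ppm = 50  # hypothetical exposure limit for the airborne chemical
equivalent_biological_value = slope * tlv_ppm + intercept
print(round(equivalent_biological_value, 2))  # indicator level equivalent to the limit
```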

In evaluating the results, non-occupational exposure to the solvent (e.g., through use of solvent-containing consumer products or intentional inhalation) and exposure to chemicals which give rise to the same metabolites (e.g., some food additives) should be carefully excluded. Where there is a wide gap between the intensity of vapour exposure and the biological monitoring results, the difference may indicate the possibility of skin absorption. Cigarette smoking will suppress the metabolism of some solvents (e.g., toluene), whereas acute ethanol intake may suppress methanol metabolism in a competitive manner.

                                                     
