Biomarkers

The word biomarker is short for biological marker, a term that refers to a measurable event occurring in a biological system, such as the human body. This event is then interpreted as a reflection, or marker, of a more general state of the organism or of life expectancy. In occupational health, a biomarker is generally used as an indicator of health status or disease risk.

Biomarkers are used for in vitro as well as in vivo studies that may include humans. Usually, three specific types of biological markers are identified. Although a few biomarkers may be difficult to classify, usually they are separated into biomarkers of exposure, biomarkers of effect or biomarkers of susceptibility (see table 1).

Table 1. Examples of biomarkers of exposure or biomarkers of effect that are used in toxicological studies in occupational health

Sample              Measurement            Purpose
Exposure biomarkers
Adipose tissue      Dioxin                 Dioxin exposure
Blood               Lead                   Lead exposure
Bone                Aluminium              Aluminium exposure
Exhaled breath      Toluene                Toluene exposure
Hair                Mercury                Methylmercury exposure
Serum               Benzene                Benzene exposure
Urine               Phenol                 Benzene exposure
Effect biomarkers
Blood               Carboxyhaemoglobin     Carbon monoxide exposure
Red blood cells     Zinc-protoporphyrin    Lead exposure
Serum               Cholinesterase         Organophosphate exposure
Urine               Microglobulins         Nephrotoxic exposure
White blood cells   DNA adducts            Mutagen exposure

 

Given an acceptable degree of validity, biomarkers may be employed for several purposes. On an individual basis, a biomarker may be used to support or refute a diagnosis of a particular type of poisoning or other chemically-induced adverse effect. In a healthy subject, a biomarker may also reflect individual hypersusceptibility to specific chemical exposures and may therefore serve as a basis for risk prediction and counselling. In groups of exposed workers, some exposure biomarkers can be applied to assess the extent of compliance with pollution abatement regulations or the effectiveness of preventive efforts in general.

Biomarkers of Exposure

An exposure biomarker may be an exogenous compound (or a metabolite) within the body, an interactive product between the compound (or metabolite) and an endogenous component, or another event related to the exposure. Most commonly, biomarkers of exposures to stable compounds, such as metals, comprise measurements of the metal concentrations in appropriate samples, such as blood, serum or urine. With volatile chemicals, their concentration in exhaled breath (after inhalation of contamination-free air) may be assessed. If the compound is metabolized in the body, one or more metabolites may be chosen as a biomarker of the exposure; metabolites are often determined in urine samples.

Modern methods of analysis may allow separation of isomers or congeners of organic compounds, and determination of the speciation of metal compounds or isotopic ratios of certain elements. Sophisticated analyses allow determination of changes in the structure of DNA or other macromolecules caused by binding with reactive chemicals. Such advanced techniques will no doubt gain considerably in importance for applications in biomarker studies, and lower detection limits and better analytical validity are likely to make these biomarkers even more useful.

Particularly promising developments have occurred with biomarkers of exposure to mutagenic chemicals. These compounds are reactive and may form adducts with macromolecules, such as proteins or DNA. DNA adducts may be detected in white blood cells or tissue biopsies, and specific DNA fragments may be excreted in the urine. For example, exposure to ethylene oxide results in reactions with DNA bases, and, after excision of the damaged base, N-7-(2-hydroxyethyl)guanine will be eliminated in the urine. Some adducts may not refer directly to a particular exposure. For example, 8-hydroxy-2′-deoxyguanosine reflects oxidative damage to DNA, and this reaction may be triggered by several chemical compounds, most of which also induce lipid peroxidation.

Other macromolecules may also be changed by adduct formation or oxidation. Of special interest, such reactive compounds may generate haemoglobin adducts that can be determined as biomarkers of exposure to the compounds. The advantage is that ample amounts of haemoglobin can be obtained from a blood sample, and, given the four-month lifetime of red blood cells, the adducts formed with the amino acids of the protein will indicate the total exposure during this period.
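
The integrating property described here can be made concrete with a small numerical sketch. The model below is a deliberate simplification and is not drawn from the text: it assumes a fixed 120-day erythrocyte lifespan, equal numbers of circulating cells of every age, adduct formation proportional to the exposure each cell has experienced, and no adduct loss; the binding constant and the exposure series are invented for illustration.

```python
# Minimal sketch: haemoglobin adducts as a time-integrated measure of exposure.
# Assumptions (not from the text): red cells live exactly 120 days, the circulating
# population contains equal numbers of cells of every age, each cell accumulates
# adducts in proportion to the exposure it has experienced, and no repair occurs.

RBC_LIFESPAN_DAYS = 120
BINDING_CONSTANT = 0.01   # hypothetical adduct units formed per unit of exposure

def measured_adduct_level(daily_exposure, day):
    """Average adduct burden over cells aged 0..119 days on a given day."""
    total = 0.0
    for age in range(RBC_LIFESPAN_DAYS):
        birth = day - age
        # adducts accumulated by a cell "born" on day `birth`, up to `day`
        lifetime_exposure = sum(daily_exposure[max(0, birth):day + 1])
        total += BINDING_CONSTANT * lifetime_exposure
    return total / RBC_LIFESPAN_DAYS

# A ten-day exposure episode remains visible for months, because the cells that
# formed adducts during the episode stay in circulation until they are replaced.
exposure = [0.0] * 400
for d in range(30, 40):
    exposure[d] = 5.0

print(measured_adduct_level(exposure, 45))    # shortly after the episode
print(measured_adduct_level(exposure, 120))   # about three months later: still elevated
print(measured_adduct_level(exposure, 300))   # exposed cells replaced: back to ~0
```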

Adducts may be determined by sensitive techniques such as high-performance liquid chromatography, and some immunological methods are also available. In general, the analytical methods are new, expensive and in need of further development and validation. Better sensitivity can be obtained with the ³²P post-labelling assay, although it provides only a nonspecific indication that DNA damage has taken place. All of these techniques are potentially useful for biological monitoring and have been applied in a growing number of studies. However, simpler and more sensitive analytical methods are needed. Given the limited specificity of some methods at low-level exposures, tobacco smoking or other factors may affect the measurement results significantly, thus causing difficulties in interpretation.

Exposure to mutagenic compounds, or to compounds which are metabolized into mutagens, may also be determined by assessing the mutagenicity of the urine from an exposed individual. The urine sample is incubated with a strain of bacteria in which a specific point mutation is expressed in a way that can be easily measured. If mutagenic chemicals are present in the urine sample, then an increased rate of mutations will occur in the bacteria.

Exposure biomarkers must be evaluated with regard to temporal variation in exposure and the relation to different compartments. Thus, the time frame(s) represented by the biomarker, that is, the extent to which the biomarker measurement reflects past exposure(s) and/or accumulated body burden, must be determined from toxicokinetic data in order to interpret the result. In particular, the degree to which the biomarker indicates retention in specific target organs should be considered. Although blood samples are often used for biomarker studies, peripheral blood is generally not regarded as a compartment as such, although it acts as a transport medium between compartments. The degree to which the concentration in the blood reflects levels in different organs varies widely between different chemicals, and usually also depends upon the length of the exposure as well as time since exposure.

Sometimes this type of evidence is used to classify a biomarker as an indicator of (total) absorbed dose or an indicator of effective dose (i.e., the amount that has reached the target tissue). For example, exposure to a particular solvent may be evaluated from data on the actual concentration of the solvent in the blood at a particular time following the exposure. This measurement will reflect the amount of the solvent that has been absorbed into the body. Some of the absorbed amount will be exhaled due to the vapour pressure of the solvent. While circulating in the blood, the solvent will interact with various components of the body, and it will eventually become subject to breakdown by enzymes. The outcome of the metabolic processes can be assessed by determining specific mercapturic acids produced by conjugation with glutathione. The cumulative excretion of mercapturic acids may better reflect the effective dose than will the blood concentration.
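
The contrast drawn here between a momentary blood concentration and the cumulative excretion of a metabolite can be illustrated with a one-compartment, first-order sketch. The half-life, the fraction recovered as metabolite and the exposure patterns below are illustrative assumptions only, not values from the text.

```python
import math

# One-compartment sketch (illustrative parameters, not from the text):
# the absorbed dose is taken up continuously during the working day, cleared
# from blood with first-order kinetics, and a fixed fraction of the cleared
# amount is assumed to appear in urine as a mercapturic-acid conjugate.

HALF_LIFE_DAYS = 0.5          # hypothetical blood half-life of the solvent
METABOLIC_YIELD = 0.6         # hypothetical fraction recovered as metabolite
k = math.log(2) / HALF_LIFE_DAYS

def simulate(daily_doses, steps_per_day=24):
    """Return (blood level per time step, cumulative metabolite excreted)."""
    dt = 1.0 / steps_per_day
    blood, cumulative, blood_series = 0.0, 0.0, []
    for day_dose in daily_doses:
        for _ in range(steps_per_day):
            blood += day_dose * dt                      # uptake during the day
            eliminated = blood * (1 - math.exp(-k * dt))
            blood -= eliminated
            cumulative += METABOLIC_YIELD * eliminated  # excreted as metabolite
            blood_series.append(blood)
    return blood_series, cumulative

# Two workers with the same total weekly dose but different exposure patterns:
steady = [2.0] * 5 + [0.0] * 2          # 2 units/day, Monday to Friday
peaky  = [10.0] + [0.0] * 6             # the whole weekly dose on Monday

for label, pattern in [("steady", steady), ("peaky", peaky)]:
    blood, excreted = simulate(pattern)
    print(label, "end-of-week blood level:", round(blood[-1], 3),
          "cumulative metabolite:", round(excreted, 2))
```

In this toy example, an end-of-week blood sample misses the exposure almost entirely in both cases, whereas the cumulative metabolite recovers roughly the same total for both patterns, which is the point made above about effective dose.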

Life events, such as reproduction and senescence, may affect the distribution of a chemical. The distribution of chemicals within the body is significantly affected by pregnancy, and many chemicals may pass the placental barrier, thus causing exposure of the foetus. Lactation may result in excretion of lipid-soluble chemicals, thus leading to a decreased retention in the mother along with an increased uptake by the infant. During weight loss or development of osteoporosis, stored chemicals may be released, which can then result in a renewed and protracted “endogenous” exposure of target organs. Other factors may affect individual absorption, metabolism, retention and distribution of chemical compounds, and some biomarkers of susceptibility are available (see below).

Biomarkers of Effect

A marker of effect may be an endogenous component, or a measure of the functional capacity, or some other indicator of the state or balance of the body or organ system, as affected by the exposure. Such effect markers are generally preclinical indicators of abnormalities.

These biomarkers may be specific or non-specific. The specific biomarkers are useful because they indicate a biological effect of a particular exposure, thus providing evidence that can potentially be used for preventive purposes. The non-specific biomarkers do not point to an individual cause of the effect, but they may reflect the total, integrated effect due to a mixed exposure. Both types of biomarkers may therefore be of considerable use in occupational health.

There is not a clear distinction between exposure biomarkers and effect biomarkers. For example, adduct formation could be said to reflect an effect rather than the exposure. However, effect biomarkers usually indicate changes in the functions of cells, tissues or the total body. Some researchers include gross changes, such as an increase in liver weight of exposed laboratory animals or decreased growth in children, as biomarkers of effect. For the purpose of occupational health, effect biomarkers should be restricted to those that indicate subclinical or reversible biochemical changes, such as inhibition of enzymes. The most frequently used effect biomarker is probably inhibition of cholinesterase caused by certain insecticides, that is, organophosphates and carbamates. In most cases, this effect is entirely reversible, and the enzyme inhibition reflects the total exposure to this particular group of insecticides.

Some exposures do not result in enzyme inhibition but rather in increased activity of an enzyme. This is the case with several enzymes that belong to the P450 family (see “Genetic determinants of toxic response”). They may be induced by exposures to certain solvents and polyaromatic hydrocarbons (PAHs). Since these enzymes are mainly expressed in tissues from which a biopsy may be difficult to obtain, the enzyme activity is determined indirectly in vivo by administering a compound that is metabolized by that particular enzyme, and then the breakdown product is measured in urine or plasma.

Other exposures may induce the synthesis of a protective protein in the body. The best example is probably metallothionein, which binds cadmium and promotes the excretion of this metal; cadmium exposure is one of the factors that result in increased expression of the metallothionein gene. Similar protective proteins may exist but have not yet been explored sufficiently to become accepted as biomarkers. Among the candidates for possible use as biomarkers are the so-called stress proteins, originally referred to as heat shock proteins. These proteins are generated by a range of different organisms in response to a variety of adverse exposures.

Oxidative damage may be assessed by determining the concentration of malondialdehyde in serum or the exhalation of ethane. Similarly, the urinary excretion of proteins with a small molecular weight, such as albumin, may be used as a biomarker of early kidney damage. Several parameters routinely used in clinical practice (for example, serum hormone or enzyme levels) may also be useful as biomarkers. However, many of these parameters may not be sufficiently sensitive to detect early impairment.

Another group of effect parameters relates to genotoxic effects (changes in the structure of chromosomes). Such effects may be detected by microscopy of white blood cells that undergo cell division. Serious damage to the chromosomes (chromosomal aberrations or formation of micronuclei) can be seen under a microscope. Damage may also be revealed by adding a dye to the cells during cell division. Exposure to a genotoxic agent can then be visualized as an increased exchange of the dye between the two chromatids of each chromosome (sister chromatid exchange). Chromosomal aberrations are related to an increased risk of developing cancer, but the significance of an increased rate of sister chromatid exchange is less clear.

More sophisticated assessment of genotoxicity is based on particular point mutations in somatic cells, that is, white blood cells or epithelial cells obtained from the oral mucosa. A mutation at a specific locus may make the cells capable of growing in a culture that contains a chemical that is otherwise toxic (such as 6-thioguanine). Alternatively, a specific gene product can be assessed (e.g., serum or tissue concentrations of oncoproteins encoded by particular oncogenes). Obviously, these mutations reflect the total genotoxic damage incurred and do not necessarily indicate anything about the causative exposure. These methods are not yet ready for practical use in occupational health, but rapid progress in this line of research would suggest that such methods will become available within a few years.

Biomarkers of Susceptibility

A marker of susceptibility, whether inherited or induced, is an indicator that the individual is particularly sensitive to the effect of a xenobiotic or to the effects of a group of such compounds. Most attention has been focused on genetic susceptibility, although other factors may be at least as important. Hypersusceptibility may be due to an inherited trait, the constitution of the individual, or environmental factors.

The ability to metabolize certain chemicals is variable and is genetically determined (see “Genetic determinants of toxic response”). Several relevant enzymes appear to be controlled by a single gene. For example, oxidation of foreign chemicals is mainly carried out by enzymes of the P450 family. Other enzymes make the metabolites more water soluble by conjugation (e.g., N-acetyltransferase and the mu-class glutathione S-transferase). The activity of these enzymes is genetically controlled and varies considerably. As mentioned above, the activity can be determined by administering a small dose of a drug and then determining the amount of the metabolite in the urine. Some of the genes have now been characterized, and techniques are available to determine the genotype. Important studies suggest that the risk of developing certain forms of cancer is related to the capability of metabolizing foreign compounds. Many questions still remain unanswered, thus at this time limiting the use of these potential susceptibility biomarkers in occupational health.

Other inherited traits, such as alpha1-antitrypsin deficiency or glucose-6-phosphate dehydrogenase deficiency, also result in deficient defence mechanisms in the body, thereby causing hypersusceptibility to certain exposures.

Most research related to susceptibility has dealt with genetic predisposition. Other factors play a role as well and have been partly neglected. For example, individuals with a chronic disease may be more sensitive to an occupational exposure. Also, if a disease process or previous exposure to toxic chemicals has caused some subclinical organ damage, then the capacity to withstand a new toxic exposure is likely to be less. Biochemical indicators of organ function may in this case be used as susceptibility biomarkers. Perhaps the best example regarding hypersusceptibility relates to allergic responses. If an individual has become sensitized to a particular exposure, then specific antibodies can be detected in serum. Even if the individual has not become sensitized, other current or past exposures may add to the risk of developing an adverse effect related to an occupational exposure.

A major problem is to determine the joint effect of mixed exposures at work. In addition, personal habits and drug use may result in an increased susceptibility. For example, tobacco smoke usually contains a considerable amount of cadmium. Thus, with occupational exposure to cadmium, a heavy smoker who has accumulated substantial amounts of this metal in the body will be at increased risk of developing cadmium-related kidney disease.

Application in Occupational Health

Biomarkers are extremely useful in toxicological research, and many may be applicable in biological monitoring. Nonetheless, the limitations must also be recognized. Many biomarkers have so far been studied only in laboratory animals. Toxicokinetic patterns in other species may not necessarily reflect the situation in human beings, and extrapolation may require confirmatory studies in human volunteers. Also, account must be taken of individual variations due to genetic or constitutional factors.

In some cases, exposure biomarkers may not at all be feasible (e.g., for chemicals which are short-lived in vivo). Other chemicals may be stored in, or may affect, organs which cannot be accessed by routine procedures, such as the nervous system. The route of exposure may also affect the distribution pattern and therefore also the biomarker measurement and its interpretation. For example, direct exposure of the brain via the olfactory nerve is likely to escape detection by measurement of exposure biomarkers. As to effect biomarkers, many of them are not at all specific, and the change can be due to a variety of causes, including lifestyle factors. Perhaps in particular with the susceptibility biomarkers, interpretation must be very cautious at the moment, as many uncertainties remain about the overall health significance of individual genotypes.

In occupational health, the ideal biomarker should satisfy several requirements. First of all, sample collection and analysis must be simple and reliable. For optimal analytical quality, standardization is needed, but the specific requirements vary considerably. Major areas of concern include: preparation of the individual, sampling procedure and sample handling, and measurement procedure; the latter encompasses technical factors, such as calibration and quality assurance procedures, and individual-related factors, such as education and training of operators.

For documentation of analytical validity and traceability, reference materials should be based on relevant matrices and should contain the toxic substances or relevant metabolites at appropriate concentrations. For biomarkers to be used for biological monitoring or for diagnostic purposes, the responsible laboratories must have well-documented analytical procedures with defined performance characteristics, and accessible records to allow verification of the results. At the same time, nonetheless, the economics of characterizing and using reference materials to supplement quality assurance procedures in general must be considered. Thus, the achievable quality of results, and the uses to which they are put, have to be balanced against the added costs of quality assurance, including reference materials, manpower and instrumentation.

Another requirement is that the biomarker should be specific, at least under the circumstances of the study, for a particular type of exposure, with a clear-cut relationship to the degree of exposure. Otherwise, the result of the biomarker measurement may be too difficult to interpret. For proper interpretation of the measurement result of an exposure biomarker, the diagnostic validity must be known (i.e., the translation of the biomarker value into the magnitude of possible health risks). In this area, metals serve as a paradigm for biomarker research. Recent research has demonstrated the complexity and subtlety of dose-response relationships, with considerable difficulty in identifying no-effect levels and therefore also in defining tolerable exposures. However, this kind of research has also illustrated the types of investigation and the refinement that are necessary to uncover the relevant information. For most organic compounds, quantitative associations between exposures and the corresponding adverse health effects are not yet available; in many cases, even the primary target organs are not known for sure. In addition, evaluation of toxicity data and biomarker concentrations is often complicated by exposure to mixtures of substances, rather than exposure to a single compound at a time.

Before the biomarker is applied for occupational health purposes, some additional considerations are necessary. First, the biomarker must reflect a subclinical and reversible change only. Second, if the biomarker results can be interpreted with regard to health risks, preventive efforts should be available and should be considered realistic in case the biomarker data suggest a need to reduce the exposure. Third, the practical use of the biomarker must be generally regarded as ethically acceptable.

Industrial hygiene measurements may be compared with applicable exposure limits. Likewise, results on exposure biomarkers or effect biomarkers may be compared to biological action limits, sometimes referred to as biological exposure indices. Such limits should be based on the best advice of clinicians and scientists from appropriate disciplines, and responsible administrators as “risk managers” should then take into account relevant ethical, social, cultural and economic factors. The scientific basis should, if possible, include dose-response relationships supplemented by information on variations in susceptibility within the population at risk. In some countries, workers and members of the general public are involved in the standard-setting process and provide important input, particularly when scientific uncertainty is considerable. One of the major uncertainties is how to define an adverse health effect that should be prevented—for example, whether adduct formation as an exposure biomarker by itself represents an adverse effect (i.e., effect biomarker) that should be prevented. Difficult questions are likely to arise when deciding whether it is ethically defensible, for the same compound, to have different limits for adventitious exposure, on the one hand, and occupational exposure, on the other.

The information generated by the use of biomarkers should generally be conveyed to the individuals examined within the physician-patient relationship. Ethical concerns must in particular be considered in connection with highly experimental biomarker analyses that cannot currently be interpreted in detail in terms of actual health risks. For the general population, for example, limited guidance exists at present with regard to interpretation of exposure biomarkers other than the blood-lead concentration. Also of importance is the confidence in the data generated (i.e., whether appropriate sampling has been done, and whether sound quality assurance procedures have been utilized in the laboratory involved). An additional area of special worry relates to individual hypersusceptibility. These issues must be taken into account when providing the feedback from the study.

All sectors of society affected by, or concerned with carrying out, a biomarker study need to be involved in the decision-making process on how to handle the information generated by the study. Specific procedures to prevent or overcome inevitable ethical conflicts should be developed within the legal and social frameworks of the region or country. However, each situation represents a different set of questions and pitfalls, and no single procedure for public involvement can be developed to cover all applications of exposure biomarkers.

 


Target Organ Toxicology

The study and characterization of chemicals and other agents for toxic properties is often undertaken on the basis of specific organs and organ systems. In this chapter, two targets have been selected for in-depth discussion: the immune system and the gene. These examples were chosen to represent a complex target organ system and a molecular target within cells. For more comprehensive discussion of the toxicology of target organs, the reader is referred to standard toxicology texts such as Casarett and Doull, and Hayes. The International Programme on Chemical Safety (IPCS) has also published several criteria documents on target organ toxicology, by organ system.

Target organ toxicology studies are usually undertaken on the basis of information indicating the potential for specific toxic effects of a substance, either from epidemiological data or from general acute or chronic toxicity studies, or on the basis of special concerns to protect certain organ functions, such as reproduction or foetal development. In some cases, specific target organ toxicity tests are expressly mandated by statutory authorities, such as neurotoxicity testing under the US pesticides law (see “The United States approach to risk assessment of reproductive toxicants and neurotoxic agents”) and mutagenicity testing under the Japanese Chemical Substance Control Law (see “Principles of hazard identification: The Japanese approach”).

As discussed in “Target organ and critical effects,” the identification of a critical organ is based upon the detection of the organ or organ system which first responds adversely or to the lowest doses or exposures. This information is then used to design specific toxicology investigations or more defined toxicity tests that are designed to elicit more sensitive indications of intoxication in the target organ. Target organ toxicology studies may also be used to determine mechanisms of action, of use in risk assessment (see “The United States approach to risk assessment of reproductive toxicants and neurotoxic agents”).

Methods of Target Organ Toxicity Studies

Target organs may be studied by exposure of intact organisms and detailed analysis of function and histopathology in the target organ, or by in vitro exposure of cells, tissue slices, or whole organs maintained for short or long term periods in culture (see “Mechanisms of toxicology: Introduction and concepts”). In some cases, tissues from human subjects may also be available for target organ toxicity studies, and these may provide opportunities to validate assumptions of cross-species extrapolation. However, it must be kept in mind that such studies do not provide information on relative toxicokinetics.

In general, target organ toxicity studies share the following common characteristics: detailed histopathological examination of the target organ, including post mortem examination, tissue weight, and examination of fixed tissues; biochemical studies of critical pathways in the target organ, such as important enzyme systems; functional studies of the ability of the organ and cellular constituents to perform expected metabolic and other functions; and analysis of biomarkers of exposure and early effects in target organ cells.

Detailed knowledge of target organ physiology, biochemistry and molecular biology may be incorporated in target organ studies. For instance, because the synthesis and secretion of small-molecular-weight proteins is an important aspect of renal function, nephrotoxicity studies often include special attention to these parameters (IPCS 1991). Because cell-to-cell communication is a fundamental process of nervous system function, target organ studies in neurotoxicity may include detailed neurochemical and biophysical measurements of neurotransmitter synthesis, uptake, storage, release and receptor binding, as well as electrophysiological measurement of changes in membrane potential associated with these events.

A high degree of emphasis is being placed upon the development of in vitro methods for target organ toxicity, to replace or reduce the use of whole animals. Substantial advances in these methods have been achieved for reproductive toxicants (Heindel and Chapin 1993).

In summary, target organ toxicity studies are generally undertaken as a higher order test for determining toxicity. The selection of specific target organs for further evaluation depends upon the results of screening level tests, such as the acute or subchronic tests used by OECD and the European Union; some target organs and organ systems may be a priori candidates for special investigation because of concerns to prevent certain types of adverse health effects.

 


Immunotoxicology

The functions of the immune system are to protect the body from invading infectious agents and to provide immune surveillance against arising tumour cells. It has a first line of defence that is non-specific and that can initiate effector reactions itself, and an acquired specific branch, in which lymphocytes and antibodies carry the specificity of recognition and subsequent reactivity towards the antigen.

Immunotoxicology has been defined as “the discipline concerned with the study of the events that can lead to undesired effects as a result of interaction of xenobiotics with the immune system. These undesired events may result as a consequence of (1) a direct and/or indirect effect of the xenobiotic (and/or its biotransformation product) on the immune system, or (2) an immunologically based host response to the compound and/or its metabolite(s), or host antigens modified by the compound or its metabolites” (Berlin et al. 1987).

When the immune system acts as a passive target of chemical insults, the result can be decreased resistance to infection and certain forms of neoplasia, or immune dysregulation/stimulation that can exacerbate allergy or autoimmunity. When the immune system responds to the antigenic specificity of the xenobiotic or of a host antigen modified by the compound, toxicity can become manifest as allergies or autoimmune diseases.

Animal models to investigate chemical-induced immune suppression have been developed, and a number of these methods are validated (Burleson, Munson, and Dean 1995; IPCS 1996). For testing purposes, a tiered approach is followed to make an adequate selection from the overwhelming number of assays available. Generally, the objective of the first tier is to identify potential immunotoxicants. If potential immunotoxicity is identified, a second tier of testing is performed to confirm and characterize further the changes observed. Third-tier investigations include special studies on the mechanism of action of the compound. Several xenobiotics have been identified as immunotoxicants causing immunosuppression in such studies with laboratory animals.

The database on immune function disturbances in humans by environmental chemicals is limited (Descotes 1986; NRC Subcommittee on Immunotoxicology 1992). The use of markers of immunotoxicity has received little attention in clinical and epidemiological studies to investigate the effect of these chemicals on human health. Such studies have not been performed frequently, and their interpretation often does not permit unequivocal conclusions to be drawn, due for instance to the uncontrolled nature of exposure. Therefore, at present, immunotoxicity assessment in rodents, with subsequent extrapolation to man, forms the basis of decisions regarding hazard and risk.

Hypersensitivity reactions, notably allergic asthma and contact dermatitis, are important occupational health problems in industrialized countries (Vos, Younes and Smith 1995). The phenomenon of contact sensitization was investigated first in the guinea pig (Andersen and Maibach 1985). Until recently this has been the species of choice for predictive testing. Many guinea pig test methods are available, the most frequently employed being the guinea pig maximization test and the occluded patch test of Buehler. Guinea pig tests and newer approaches developed in mice, such as ear swelling tests and the local lymph node assay, provide the toxicologist with the tools to assess skin sensitization hazard. The situation with respect to sensitization of the respiratory tract is very different. There are, as yet, no well-validated or widely accepted methods available for the identification of chemical respiratory allergens although progress in the development of animal models for the investigation of chemical respiratory allergy has been achieved in the guinea pig and mouse.

Human data show that chemical agents, in particular drugs, can cause autoimmune diseases (Kammüller, Bloksma and Seinen 1989). There are a number of experimental animal models of human autoimmune diseases. These comprise both spontaneous pathology (for example, systemic lupus erythematosus in New Zealand Black mice) and autoimmune phenomena induced by experimental immunization with a cross-reactive autoantigen (for example, H37Ra adjuvant-induced arthritis in Lewis strain rats). These models are applied in the preclinical evaluation of immunosuppressive drugs. Very few studies have addressed the potential of these models for assessing whether a xenobiotic exacerbates induced or congenital autoimmunity. Animal models that are suitable to investigate the ability of chemicals to induce autoimmune diseases are virtually lacking. One model that is used to a limited extent is the popliteal lymph node assay in mice. As in humans, genetic factors play a crucial role in the development of autoimmune disease (AD) in laboratory animals, which will limit the predictive value of such tests.

The Immune System

The major function of the immune system is defence against bacteria, viruses, parasites, fungi and neoplastic cells. This is achieved by the actions of various cell types and their soluble mediators in a finely tuned concert. The host defence can be roughly divided into non-specific or innate resistance and specific or acquired immunity mediated by lymphocytes (Roitt, Brostoff and Male 1989).

Components of the immune system are present throughout the body (Jones et al. 1990). The lymphocyte compartment is found within lymphoid organs (figure 1). The bone marrow and thymus are classified as primary or central lymphoid organs; the secondary or peripheral lymphoid organs include lymph nodes, spleen and lymphoid tissue along secretory surfaces such as the gastrointestinal and respiratory tracts, the so-called mucosa-associated lymphoid tissue (MALT). About half of the body’s lymphocytes are located at any one time in MALT. In addition the skin is an important organ for the induction of immune responses to antigens present on the skin. Important in this process are epidermal Langerhans cells that have an antigen-presenting function.

Figure 1. Primary and secondary lymphoid organs and tissues


Phagocytic cells of the monocyte/macrophage lineage, called the mononuclear phagocyte system (MPS), occur in lymphoid organs and also at extranodal sites; the extranodal phagocytes include Kupffer cells in the liver, alveolar macrophages in the lung, mesangial macrophages in the kidney and glial cells in the brain. Polymorphonuclear leukocytes (PMNs) are present mainly in blood and bone marrow, but accumulate at sites of inflammation.

Non-specific defence

A first line of defence against micro-organisms is provided by physical and chemical barriers, such as those at the skin, the respiratory tract and the alimentary tract. These barriers are supported by non-specific protective mechanisms, including phagocytic cells, such as macrophages and polymorphonuclear leukocytes, which are able to kill pathogens, and natural killer cells, which can lyse tumour cells and virus-infected cells. The complement system and certain microbial inhibitors (e.g., lysozyme) also take part in the non-specific response.

Specific immunity

After initial contact of the host with the pathogen, specific immune responses are induced. The hallmark of this second line of defence is specific recognition of determinants, so-called antigens or epitopes, of the pathogens by receptors on the cell surface of B- and T-lymphocytes. Following interaction with the specific antigen, the receptor-bearing cell is stimulated to undergo proliferation and differentiation, producing a clone of progeny cells that are specific for the eliciting antigen. The specific immune responses support the non-specific defence against the pathogens by enhancing the efficacy of the non-specific responses. A fundamental characteristic of specific immunity is that memory develops. Secondary contact with the same antigen provokes a faster and more vigorous but well-regulated response.

The genome does not have the capacity to carry the codes of an array of antigen receptors sufficient to recognize the number of antigens that can be encountered. The repertoire of specificity develops by a process of gene rearrangements. This is a random process, during which various specificities are brought about. This includes specificities for self components, which are undesirable. A selection process that takes place in the thymus (T cells), or bone marrow (B cells) operates to delete these undesirable specificities.

Normal immune effector function and homeostatic regulation of the immune response is dependent upon a variety of soluble products, known collectively as cytokines, which are synthesized and secreted by lymphocytes and by other cell types. Cytokines have pleiotropic effects on immune and inflammatory responses. Cooperation between different cell populations is required for the immune response—the regulation of antibody responses, the accumulation of immune cells and molecules at inflammatory sites, the initiation of acute phase responses, the control of macrophage cytotoxic function and many other processes central to host resistance. These are influenced by, and in many cases are dependent upon, cytokines acting individually or in concert.

Two arms of specific immunity are recognized—humoral immunity and cell-mediated or cellular immunity:

Humoral immunity. In the humoral arm B-lymphocytes are stimulated following recognition of antigen by cell-surface receptors. Antigen receptors on B-lymphocytes are immunoglobulins (Ig). Mature B cells (plasma cells) start the production of antigen-specific immunoglobulins that act as antibodies in serum or along mucosal surfaces. There are five major classes of immunoglobulins: (1) IgM, pentameric Ig with optimal agglutinating capacity, which is the first produced after antigenic stimulation; (2) IgG, the main Ig in the circulation, which can pass the placenta; (3) IgA, secretory Ig for the protection of mucosal surfaces; (4) IgE, Ig fixing to mast cells or basophilic granulocytes and involved in immediate hypersensitivity reactions; and (5) IgD, whose major function is as a receptor on B-lymphocytes.

Cell-mediated immunity. The cellular arm of the specific immune system is mediated by T-lymphocytes. These cells also have antigen receptors on their membranes. They recognize antigen if presented by antigen presenting cells in the context of histocompatibility antigens. Hence, these cells have a restriction in addition to the antigen specificity. T cells function as helper cells for various (including humoral) immune responses, mediate recruitment of inflammatory cells, and can, as cytotoxic T cells, kill target cells after antigen-specific recognition.

Mechanisms of Immunotoxicity

Immunosuppression

Effective host resistance is dependent upon the functional integrity of the immune system, which in turn requires that the component cells and molecules which orchestrate immune responses are available in sufficient numbers and in an operational form. Congenital immunodeficiencies in humans are often characterized by defects in certain stem cell lines, resulting in impaired or absent production of immune cells. By analogy with congenital and acquired human immunodeficiency diseases, chemical-induced immunosuppression may result simply from a reduced number of functional cells (IPCS 1996). The absence, or reduced numbers, of lymphocytes may have more or less profound effects on immune status. Some immunodeficiency states and severe immunosuppression, as can occur in transplantation or cytostatic therapy, have been associated in particular with increased incidences of opportunistic infections and of certain neoplastic diseases. The infections can be bacterial, viral, fungal or protozoan, and the predominant type of infection depends on the associated immunodeficiency. Exposure to immunosuppressive environmental chemicals may be expected to result in more subtle forms of immunosuppression, which may be difficult to detect. These may lead, for example, to an increased incidence of infections such as influenza or the common cold.

In view of the complexity of the immune system, with the wide variety of cells, mediators and functions that form a complicated and interactive network, immunotoxic compounds have numerous opportunities to exert an effect. Although the nature of the initial lesions induced by many immunotoxic chemicals has not yet been elucidated, there is increasing information available, mostly derived from studies in laboratory animals, regarding the immunobiological changes that result in depression of immune function (Dean et al. 1994). Toxic effects might occur at the following critical functions (with some examples of immunotoxic compounds affecting these functions):

  •  development and expansion of different stem cell populations (benzene exerts immunotoxic effects at the stem cell level, causing lymphocytopenia)
  •  proliferation of various lymphoid and myeloid cells as well as supportive tissues in which these cells mature and function (immunotoxic organotin compounds suppress the proliferative activity of lymphocytes in the thymic cortex through direct cytotoxicity; the thymotoxic action of 2,3,7,8-tetrachloro-dibenzo-p-dioxin (TCDD) and related compounds is likely due to an impaired function of thymic epithelial cells, rather than to direct toxicity for thymocytes)
  •  antigen uptake, processing and presentation by macrophages and other antigen-presenting cells (one of the targets of 7,12-dimethylbenz(a)anthracene (DMBA) and of lead is antigen presentation by macrophages; a target of ultraviolet radiation is the antigen-presenting Langerhans cell)
  •  regulatory function of T-helper and T-suppressor cells (T-helper cell function is impaired by organotins, aldicarb, polychlorinated biphenyls (PCBs), TCDD and DMBA; T-suppressor cell function is reduced by low-dose cyclophosphamide treatment)
  •  production of various cytokines or interleukins (benzo(a)pyrene (BP) suppresses interleukin-1 production; ultraviolet radiation alters production of cytokines by keratinocytes)
  •  synthesis of various classes of immunoglobulins (IgM and IgG synthesis is suppressed following PCB and tributyltin oxide (TBT) treatment, and increased after hexachlorobenzene (HCB) exposure)
  •  complement regulation and activation (affected by TCDD)
  •  cytotoxic T cell function (3-methylcholanthrene (3-MC), DMBA, and TCDD suppress cytotoxic T cell activity)
  •  natural killer (NK) cell function (pulmonary NK activity is suppressed by ozone; splenic NK activity is impaired by nickel)
  •  macrophage and polymorphonuclear leukocyte chemotaxis and cytotoxic functions (ozone and nitrogen dioxide impair the phagocytic activity of alveolar macrophages).

 

Allergy

Allergy may be defined as the adverse health effects which result from the induction and elicitation of specific immune responses. When hypersensitivity reactions occur without involvement of the immune system, the term pseudo-allergy is used. In the context of immunotoxicology, allergy results from a specific immune response to chemicals and drugs that are of interest. The ability of a chemical to sensitize individuals is generally related to its ability to bind covalently to body proteins. Allergic reactions may take a variety of forms, and these differ with respect to both the underlying immunological mechanism and the speed of the reaction. Four major types of allergic reactions have been recognized. Type I hypersensitivity reactions are mediated by IgE antibody, and symptoms are manifest within minutes of exposure of the sensitized individual. Type II hypersensitivity reactions result from the damage or destruction of host cells by antibody; in this case symptoms become apparent within hours. Type III hypersensitivity, or Arthus, reactions are also antibody mediated, but against soluble antigen, and result from the local or systemic action of immune complexes. Type IV, or delayed-type hypersensitivity, reactions are mediated by T-lymphocytes, and symptoms normally develop 24 to 48 hours following exposure of the sensitized individual.

The two types of chemical allergy of greatest relevance to occupational health are contact sensitivity or skin allergy and allergy of the respiratory tract.

Contact hypersensitivity. A large number of chemicals are able to cause skin sensitization. Following topical exposure of a susceptible individual to a chemical allergen, a T-lymphocyte response is induced in the draining lymph nodes. In the skin the allergen interacts directly or indirectly with epidermal Langerhans cells, which transport the chemical to the lymph nodes and present it in an immunogenic form to responsive T-lymphocytes. Allergen-activated T-lymphocytes proliferate, resulting in clonal expansion. The individual is now sensitized and will respond to a second dermal exposure to the same chemical with a more aggressive immune response, resulting in allergic contact dermatitis. The cutaneous inflammatory reaction which characterizes allergic contact dermatitis is secondary to the recognition of the allergen in the skin by specific T-lymphocytes. These lymphocytes become activated, release cytokines and cause the local accumulation of other mononuclear leukocytes. Symptoms develop some 24 to 48 hours following exposure of the sensitized individual, and allergic contact dermatitis therefore represents a form of delayed-type hypersensitivity. Common causes of allergic contact dermatitis include organic chemicals (such as 2,4-dinitrochlorobenzene), metals (such as nickel and chromium) and plant products (such as urushiol from poison ivy).

Respiratory hypersensitivity. Respiratory hypersensitivity is usually considered to be a Type I hypersensitivity reaction. However, late phase reactions and the more chronic symptoms associated with asthma may involve cell-mediated (Type IV) immune processes. The acute symptoms associated with respiratory allergy are effected by IgE antibody, the production of which is provoked following exposure of the susceptible individual to the inducing chemical allergen. The IgE antibody distributes systemically and binds, via membrane receptors, to mast cells which are found in vascularized tissues, including the respiratory tract. Following inhalation of the same chemical a respiratory hypersensitivity reaction will be elicited. Allergen associates with protein and binds to, and cross-links, IgE antibody bound to mast cells. This in turn causes the degranulation of mast cells and the release of inflammatory mediators such as histamine and leukotrienes. Such mediators cause bronchoconstriction and vasodilation, resulting in the symptoms of respiratory allergy: asthma and/or rhinitis. Chemicals known to cause respiratory hypersensitivity in man include acid anhydrides (such as trimellitic anhydride), some diisocyanates (such as toluene diisocyanate), platinum salts and some reactive dyes. Also, chronic exposure to beryllium is known to cause hypersensitivity lung disease.

Autoimmunity

Autoimmunity can be defined as the stimulation of specific immune responses directed against endogenous “self” antigens. Induced autoimmunity can result either from alterations in the balance of regulatory T-lymphocytes or from the association of a xenobiotic with normal tissue components such as to render them immunogenic (“altered self”). Drugs and chemicals known to incidentally induce or exacerbate effects like those of autoimmune disease (AD) in susceptible individuals are low-molecular-weight compounds (molecular weight 100 to 500) that are generally not considered immunogenic in themselves. The mechanism by which chemical exposure induces AD is mostly unknown. Disease can be produced directly by means of circulating antibody, indirectly through the formation of immune complexes, or as a consequence of cell-mediated immunity, but likely occurs through a combination of mechanisms. The pathogenesis is best known in immune haemolytic disorders induced by drugs:

  •  The drug can attach to the red-cell membrane and interact with a drug-specific antibody.
  •  The drug can alter the red-cell membrane so that the immune system regards the cell as foreign.
  •  The drug and its specific antibody form immune complexes that adhere to the red-cell membrane to produce injury.
  •  Red-cell sensitization occurs due to the production of red-cell autoantibody.

 

A variety of chemicals and drugs, in particular the latter, have been found to induce autoimmune-like responses (Kammüller, Bloksma and Seinen 1989). Occupational exposure to chemicals may incidentally lead to AD-like syndromes. Exposure to monomeric vinyl chloride, trichloroethylene, perchloroethylene, epoxy resins and silica dust may induce scleroderma-like syndromes. A syndrome similar to systemic lupus erythematosus (SLE) has been described after exposure to hydrazine. Exposure to toluene diisocyanate has been associated with the induction of thrombocytopenic purpura. Heavy metals such as mercury have been implicated in some cases of immune complex glomerulonephritis.

Human Risk Assessment

The assessment of human immune status is performed mainly using peripheral blood for analysis of humoral substances like immunoglobulins and complement, and of blood leukocytes for subset composition and functionality of subpopulations. These methods are usually the same as those used to investigate humoral and cell-mediated immunity as well as nonspecific resistance of patients with suspected congenital immunodeficiency disease. For epidemiological studies (e.g., of occupationally exposed populations) parameters should be selected on the basis of their predictive value in human populations, validated animal models, and the underlying biology of the markers (see table 1). The strategy in screening for immunotoxic effects after (accidental) exposure to environmental pollutants or other toxicants is much dependent on circumstances, such as type of immunodeficiency to be expected, time between exposure and immune status assessment, degree of exposure and number of exposed individuals. The process of assessing the immunotoxic risk of a particular xenobiotic in humans is extremely difficult and often impossible, due largely to the presence of various confounding factors of endogenous or exogenous origin that influence the response of individuals to toxic damage. This is particularly true for studies which investigate the role of chemical exposure in autoimmune diseases, where genetic factors play a crucial role.

Table 1. Classification of tests for immune markers

Test category: Basic-general (should be included with general panels)
Characteristics: Indicators of general health and organ system status
Specific tests: Blood urea nitrogen, blood glucose, etc.

Test category: Basic-immune (should be included with general panels)
Characteristics: General indicators of immune status; relatively low cost; assay methods are standardized among laboratories; results outside reference ranges are clinically interpretable
Specific tests: Complete blood counts; serum IgG, IgA, IgM levels; surface marker phenotypes for major lymphocyte subsets

Test category: Focused/reflex (should be included when indicated by clinical findings, suspected exposures, or prior test results)
Characteristics: Indicators of specific immune functions/events; cost varies; assay methods are standardized among laboratories; results outside reference ranges are clinically interpretable
Specific tests: Histocompatibility genotype; antibodies to infectious agents; total serum IgE; allergen-specific IgE; autoantibodies; skin tests for hypersensitivity; granulocyte oxidative burst; histopathology (tissue biopsy)

Test category: Research (should be included only with control populations and careful study design)
Characteristics: Indicators of general or specific immune functions/events; cost varies, often expensive; assay methods are usually not standardized among laboratories; results outside reference ranges are often not clinically interpretable
Specific tests: In vitro stimulation assays; cell activation surface markers; cytokine serum concentrations; clonality assays (antibody, cellular, genetic); cytotoxicity tests

 

As adequate human data are seldom available, the assessment of risk for chemical-induced immunosuppression in humans is in the majority of cases based upon animal studies. The identification of potential immunotoxic xenobiotics is undertaken primarily in controlled studies in rodents. In vivo exposure studies present, in this regard, the optimal approach to estimating the immunotoxic potential of a compound, owing to the multifactorial and complex nature of the immune system and of immune responses. In vitro studies are of increasing value in the elucidation of mechanisms of immunotoxicity. In addition, by investigating the effects of the compound using cells of animal and human origin, data can be generated for species comparison, which can be used in the “parallelogram” approach to improve the risk assessment process. If data are available for three cornerstones of the parallelogram (in vivo animal, and in vitro animal and human), it may be easier to predict the outcome at the remaining cornerstone, that is, the risk in humans.
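
One way to read the parallelogram logic is as a simple proportionality: the in vitro potency ratio between human and animal cells is assumed to carry over to the in vivo situation. The sketch below encodes that reading with invented numbers; real assessments weigh far more than a single ratio.

```python
# Sketch of the "parallelogram" extrapolation: estimate the missing corner
# (human in vivo sensitivity) from the three measured corners, assuming the
# in vitro potency ratio between species carries over to the in vivo situation.
# All values are invented effect doses/concentrations for illustration.

def estimate_human_in_vivo(animal_in_vivo, animal_in_vitro, human_in_vitro):
    """Scale the animal in vivo result by the human/animal in vitro ratio.
    For effect doses or concentrations a lower value means higher sensitivity,
    so the estimate is animal_in_vivo * (human_in_vitro / animal_in_vitro)."""
    return animal_in_vivo * (human_in_vitro / animal_in_vitro)

animal_in_vivo  = 50.0   # e.g., dose suppressing an antibody response in rodents
animal_in_vitro = 10.0   # concentration affecting rodent lymphocytes in culture
human_in_vitro  = 2.0    # concentration affecting human lymphocytes in culture

print(estimate_human_in_vivo(animal_in_vivo, animal_in_vitro, human_in_vitro))
# -> 10.0: human cells appear 5x more sensitive in vitro, so the in vivo
#    estimate is scaled down by the same factor.
```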

When assessment of risk for chemical-induced immunosuppression has to rely solely upon data from animal studies, an approach can be followed in the extrapolation to man by the application of uncertainty factors to the no observed adverse effect level (NOAEL). This level can be based on parameters determined in relevant models, such as host resistance assays and in vivo assessment of hypersensitivity reactions and antibody production. Ideally, the relevance of this approach to risk assessment requires confirmation by studies in humans. Such studies should combine the identification and measurement of the toxicant, epidemiological data and immune status assessments.
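
The extrapolation step described above is, at its core, an arithmetic one: the NOAEL is divided by a product of uncertainty factors. The sketch below uses the commonly cited default factors of 10 for interspecies and interindividual variation; the NOAEL value and the extra factor are invented for illustration.

```python
# Sketch of deriving a human guidance value from an animal NOAEL by applying
# uncertainty factors. The NOAEL and the choice of factors are illustrative
# assumptions; real assessments justify each factor case by case.

def guidance_value(noael_mg_per_kg_day, interspecies=10.0, intraspecies=10.0,
                   extra=1.0):
    """Divide the NOAEL by the product of the uncertainty factors."""
    return noael_mg_per_kg_day / (interspecies * intraspecies * extra)

noael = 5.0   # hypothetical NOAEL from a rodent immune-function study, mg/kg/day
print(guidance_value(noael))             # default 100-fold factor -> 0.05 mg/kg/day
print(guidance_value(noael, extra=3.0))  # extra factor, e.g., for database gaps
```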

To predict contact hypersensitivity, guinea pig models are available and have been used in risk assessment since the 1970s. Although sensitive and reproducible, these tests have limitations, as they depend on subjective evaluation; this can be overcome by newer and more quantitative methods developed in the mouse. Regarding chemical hypersensitivity induced by inhalation or ingestion of allergens, tests should be developed and evaluated in terms of their predictive value in man. When it comes to setting safe occupational exposure levels of potential allergens, consideration has to be given to the biphasic nature of allergy: the sensitization phase and the elicitation phase. The concentration required to elicit an allergic reaction in a previously sensitized individual is considerably lower than the concentration necessary to induce sensitization in the immunologically naïve but susceptible individual.

As animal models to predict chemical-induced autoimmunity are virtually lacking, emphasis should be given to the development of such models. For the development of such models, our knowledge of chemical-induced autoimmunity in humans should be advanced, including the study of genetic and immune system markers to identify susceptible individuals. Humans that are exposed to drugs that induce autoimmunity offer such an opportunity.

 


Genetic Toxicology

Genetic toxicology, by definition, is the study of how chemical or physical agents affect the intricate process of heredity. Genotoxic chemicals are defined as compounds that are capable of modifying the hereditary material of living cells. The probability that a particular chemical will cause genetic damage inevitably depends on several variables, including the organism’s level of exposure to the chemical, the distribution and retention of the chemical once it enters the body, the efficiency of metabolic activation and/or detoxification systems in target tissues, and the reactivity of the chemical or its metabolites with critical macromolecules within cells. The probability that genetic damage will cause disease ultimately depends on the nature of the damage, the cell’s ability to repair or amplify genetic damage, the opportunity for expressing whatever alteration has been induced, and the ability of the body to recognize and suppress the multiplication of aberrant cells.

In higher organisms, hereditary information is organized in chromosomes. Chromosomes consist of tightly condensed strands of protein-associated DNA. Within a single chromosome, each DNA molecule exists as a pair of long, unbranched chains of nucleotide subunits linked together by phosphodiester bonds that join the 5′ carbon of one deoxyribose moiety to the 3′ carbon of the next (figure 1). In addition, one of four different nucleotide bases (adenine, cytosine, guanine or thymine) is attached to each deoxyribose subunit like beads on a string. Three-dimensionally, each pair of DNA strands forms a double helix with all of the bases oriented toward the inside of the spiral. Within the helix, each base is associated with its complementary base on the opposite DNA strand; hydrogen bonding dictates strong, noncovalent pairing of adenine with thymine and guanine with cytosine (figure 1). Since the sequence of nucleotide bases is complementary throughout the entire length of the duplex DNA molecule, both strands carry essentially the same genetic information. In fact, during DNA replication each strand serves as a template for the production of a new partner strand.
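
Because base pairing is strictly complementary (adenine with thymine, guanine with cytosine), the sequence of one strand fully determines its partner. The following minimal Python sketch, using a hypothetical sequence, illustrates this template relationship by deriving the partner strand from a given strand:

# Minimal illustration of strand complementarity: A pairs with T, G pairs with C.
# The input sequence is a hypothetical example.

PAIRING = {"A": "T", "T": "A", "G": "C", "C": "G"}

def partner_strand(sequence):
    """Return the complementary strand, read in the opposite (antiparallel) direction."""
    return "".join(PAIRING[base] for base in reversed(sequence))

print(partner_strand("ATGCCGTA"))  # TACGGCAT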

Figure 1. The (a) primary, (b) secondary and (c) tertiary organization of human hereditary information

TOX090F1

Using RNA and an array of different proteins, the cell ultimately deciphers the information encoded by the linear sequence of bases within specific regions of DNA (genes) and produces proteins that are essential for basic cell survival as well as normal growth and differentiation. In essence, the nucleotides function like a biological alphabet which is used to code for amino acids, the building blocks of proteins.

When incorrect nucleotides are inserted or nucleotides are lost, or when unnecessary nucleotides are added during DNA synthesis, the mistake is called a mutation. It has been estimated that less than one mutation occurs for every 10⁹ nucleotides incorporated during the normal replication of cells. Although mutations are not necessarily harmful, alterations causing inactivation or overexpression of important genes can result in a variety of disorders, including cancer, hereditary disease, developmental abnormalities, infertility and embryonic or perinatal death. Very rarely, a mutation can lead to enhanced survival; such occurrences are the basis of natural selection.
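
A rough, purely illustrative calculation conveys what this error rate implies per cell division; the diploid genome size used below is an approximate, commonly cited figure and is not taken from the text:

# Illustrative estimate: expected misincorporations per cell division at the
# error rate quoted above (less than 1 per 1e9 nucleotides). The diploid genome
# size is an approximate, commonly cited figure.

error_rate_per_nucleotide = 1e-9   # upper bound from the text
nucleotides_replicated = 6e9       # approx. size of the human diploid genome

expected_errors = error_rate_per_nucleotide * nucleotides_replicated
print(expected_errors)  # about 6 at most per division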

Although some chemicals react directly with DNA, most require metabolic activation. In the latter case, electrophilic intermediates such as epoxides or carbonium ions are ultimately responsible for inducing lesions at a variety of nucleophilic sites within the genetic material (figure 2). In other instances, genotoxicity is mediated by by-products of compound interaction with intracellular lipids, proteins, or oxygen.

Figure 2. Bioactivation of: a) benzo(a)pyrene; and b) N-nitrosodimethylamine

TOX090F2

Because of their relative abundance in cells, proteins are the most frequent target of toxicant interaction. However, modification of DNA is of greater concern due to the central role of this molecule in regulating growth and differentiation through multiple generations of cells.

At the molecular level, electrophilic compounds tend to attack oxygen and nitrogen in DNA. The sites that are most prone to modification are illustrated in figure 3. Although oxygens within phosphate groups in the DNA backbone are also targets for chemical modification, damage to bases is thought to be biologically more relevant since these groups are considered to be the primary informational elements in the DNA molecule.

Figure 3. Primary sites of chemically-induced DNA damage

TOX090F3

Compounds that contain one electrophilic moiety typically exert genotoxicity by producing mono-adducts in DNA. Similarly, compounds that contain two or more reactive moieties can react with two different nucleophilic centres and thereby produce intra- or inter-molecular crosslinks in genetic material (figure 4). Interstrand DNA-DNA and DNA-protein crosslinks can be particularly cytotoxic since they can form complete blocks to DNA replication. For obvious reasons, the death of a cell eliminates the possibility that it will be mutated or neoplastically transformed. Genotoxic agents can also act by inducing breaks in the phosphodiester backbone, or between bases and sugars (producing abasic sites) in DNA. Such breaks may be a direct result of chemical reactivity at the damage site, or may occur during the repair of one of the aforementioned types of DNA lesion.

Figure 4. Various types of damage to the protein-DNA complex

TOX090F4

Over the past thirty to forty years, a variety of techniques have been developed to monitor the type of genetic damage induced by various chemicals. Such assays are described in detail elsewhere in this chapter and Encyclopaedia.

Misreplication of “microlesions” such as mono-adducts, abasic sites or single-strand breaks may ultimately result in nucleotide base-pair substitutions, or the insertion or deletion of short polynucleotide fragments in chromosomal DNA. In contrast, “macrolesions,” such as bulky adducts, crosslinks, or double-strand breaks may trigger the gain, loss or rearrangement of relatively large pieces of chromosomes. In any case, the consequences can be devastating to the organism since any one of these events can lead to cell death, loss of function or malignant transformation of cells. Exactly how DNA damage causes cancer is largely unknown. It is currently believed the process may involve inappropriate activation of proto-oncogenes such as myc and ras, and/or inactivation of recently identified tumour suppressor genes such as p53. Abnormal expression of either type of gene abrogates normal cellular mechanisms for controlling cell proliferation and/or differentiation.

The preponderance of experimental evidence indicates that the development of cancer following exposure to electrophilic compounds is a relatively rare event. This can be explained, in part, by the cell’s intrinsic ability to recognize and repair damaged DNA or the failure of cells with damaged DNA to survive. During repair, the damaged base, nucleotide or short stretch of nucleotides surrounding the damage site is removed and (using the opposite strand as a template) a new piece of DNA is synthesized and spliced into place. To be effective, DNA repair must occur with great accuracy prior to cell division, before the damage has an opportunity to be propagated as a mutation.

Clinical studies have shown that people with inherited defects in the ability to repair damaged DNA frequently develop cancer and/or developmental abnormalities at an early age (table 1). Such examples provide strong evidence linking accumulation of DNA damage to human disease. Similarly, agents that promote cell proliferation (such as tetradecanoylphorbol acetate) often enhance carcinogenesis. For these compounds, the increased likelihood of neoplastic transformation may be a direct consequence of a decrease in the time available for the cell to carry out adequate DNA repair.

Table 1. Hereditary, cancer-prone disorders that appear to involve defects in DNA repair

Syndrome: Ataxia telangiectasia
Symptoms: Neurological deterioration; immunodeficiency; high incidence of lymphoma
Cellular phenotype: Hypersensitivity to ionizing radiation and certain alkylating agents; dysregulated replication of damaged DNA (may indicate shortened time for DNA repair)

Syndrome: Bloom’s syndrome
Symptoms: Developmental abnormalities; lesions on exposed skin; high incidence of tumours of the immune system and gastrointestinal tract
Cellular phenotype: High frequency of chromosomal aberrations; defective ligation of breaks associated with DNA repair

Syndrome: Fanconi’s anaemia
Symptoms: Growth retardation; high incidence of leukaemia
Cellular phenotype: Hypersensitivity to crosslinking agents; high frequency of chromosomal aberrations; defective repair of crosslinks in DNA

Syndrome: Hereditary nonpolyposis colon cancer
Symptoms: High incidence of colon cancer
Cellular phenotype: Defect in DNA mismatch repair (when insertion of a wrong nucleotide occurs during replication)

Syndrome: Xeroderma pigmentosum
Symptoms: High incidence of epithelioma on exposed areas of skin; neurological impairment (in many cases)
Cellular phenotype: Hypersensitivity to UV light and many chemical carcinogens; defects in excision repair and/or replication of damaged DNA

 

The earliest theories on how chemicals interact with DNA can be traced back to studies conducted during the development of mustard gas for use in warfare. Further understanding grew out of efforts to identify anticancer agents that would selectively arrest the replication of rapidly dividing tumour cells. Increased public concern over hazards in our environment has prompted additional research into the mechanisms and consequences of chemical interaction with the genetic material. Examples of various types of chemicals which exert genotoxicity are presented in table 2.

Table 2. Examples of chemicals that exhibit genotoxicity in human cells

Class of chemical | Example | Source of exposure | Probable genotoxic lesion
Aflatoxins | Aflatoxin B1 | Contaminated food | Bulky DNA adducts
Aromatic amines | 2-Acetylaminofluorene | Environmental | Bulky DNA adducts
Aziridine quinones | Mitomycin C | Cancer chemotherapy | Mono-adducts, interstrand crosslinks and single-strand breaks in DNA
Chlorinated hydrocarbons | Vinyl chloride | Environmental | Mono-adducts in DNA
Metals and metal compounds | Cisplatin | Cancer chemotherapy | Both intra- and inter-strand crosslinks in DNA
Metals and metal compounds | Nickel compounds | Environmental | Mono-adducts and single-strand breaks in DNA
Nitrogen mustards | Cyclophosphamide | Cancer chemotherapy | Mono-adducts and interstrand crosslinks in DNA
Nitrosamines | N-Nitrosodimethylamine | Contaminated food | Mono-adducts in DNA
Polycyclic aromatic hydrocarbons | Benzo(a)pyrene | Environmental | Bulky DNA adducts

 


Sunday, 16 January 2011 16:29

Cellular Injury and Cellular Death

Virtually all of medicine is devoted to either preventing cell death, in diseases such as myocardial infarction, stroke, trauma and shock, or causing it, as in the case of infectious diseases and cancer. It is, therefore, essential to understand the nature and mechanisms involved. Cell death has been classified as “accidental”, that is, caused by toxic agents, ischaemia and so on, or “programmed”, as occurs during embryological development, including formation of digits, and resorption of the tadpole tail.

Cell injury and cell death are, therefore, important both in physiology and in pathophysiology. Physiological cell death is extremely important during embryogenesis and development. The study of cell death during development has led to important and new information on the molecular genetics involved, especially through the study of development in invertebrate animals. In these animals, the precise location and the significance of cells that are destined to undergo cell death have been carefully studied and, with the use of classic mutagenesis techniques, several involved genes have now been identified. In adult organs, the balance between cell death and cell proliferation controls organ size. In some organs, such as the skin and the intestine, there is a continual turnover of cells. In the skin, for example, cells differentiate as they reach the surface, and finally undergo terminal differentiation and cell death as keratinization proceeds with the formation of crosslinked envelopes.

Many classes of toxic chemicals are capable of inducing acute cell injury followed by death. These include anoxia and ischaemia and their chemical analogues, such as potassium cyanide; chemical carcinogens, which form electrophiles that covalently bind to proteins and nucleic acids; oxidant chemicals, resulting in free radical formation and oxidant injury; activators of complement; and a variety of calcium ionophores. Cell death is also an important component of chemical carcinogenesis; many complete chemical carcinogens, at carcinogenic doses, produce acute necrosis and inflammation followed by regeneration and preneoplasia.

Definitions

Cell injury

Cell injury is defined as an event or stimulus, such as a toxic chemical, that perturbs the normal homeostasis of the cell, thus causing a number of events to occur (figure 1). The principal targets of lethal injury illustrated are inhibition of ATP synthesis, disruption of plasma membrane integrity or withdrawal of essential growth factors.

Figure 1. Cell injury

TOX060F1

Lethal injuries result in the death of a cell after a variable period of time, depending on temperature, cell type and the stimulus; or they can be sublethal or chronic—that is, the injury results in an altered homeostatic state which, though abnormal, does not result in cell death (Trump and Arstila 1971; Trump and Berezesky 1992; Trump and Berezesky 1995; Trump, Berezesky and Osornio-Vargas 1981). In the case of a lethal injury, there is a phase prior to the time of cell death during which the cell will recover if the injury is removed; however, after a particular point in time (the “point of no return” or point of cell death), removal of the injury does not result in recovery and the cell instead undergoes degradation and hydrolysis, ultimately reaching physical-chemical equilibrium with the environment. This is the phase known as necrosis. During the prelethal phase, several principal types of change occur, depending on the cell and the type of injury. These are known as apoptosis and oncosis.


Apoptosis

Apoptosis is derived from the Greek words apo, meaning away from, and ptosis, meaning to fall. The term “falling away from” derives from the fact that, during this type of prelethal change, the cells shrink and undergo marked blebbing at the periphery. The blebs then detach and float away. Apoptosis occurs in a variety of cell types following various types of toxic injury (Wyllie, Kerr and Currie 1980). It is especially prominent in lymphocytes, where it is the predominant mechanism for turnover of lymphocyte clones. The resulting fragments appear as the basophilic bodies seen within macrophages in lymph nodes. In other organs, apoptosis typically occurs in single cells which are rapidly cleared away before and following death by phagocytosis of the fragments by adjacent parenchymal cells or by macrophages. Apoptosis occurring in single cells with subsequent phagocytosis typically does not result in inflammation. Prior to death, apoptotic cells show a very dense cytosol with normal or condensed mitochondria. The endoplasmic reticulum (ER) is normal or only slightly dilated. The nuclear chromatin is markedly clumped along the nuclear envelope and around the nucleolus. The nuclear contour is also irregular and nuclear fragmentation occurs. The chromatin condensation is associated with DNA fragmentation which, in many instances, occurs between nucleosomes, giving a characteristic ladder appearance on electrophoresis.

In apoptosis, increased [Ca2+]i may stimulate K+ efflux resulting in cell shrinkage, which probably requires ATP. Injuries that totally inhibit ATP synthesis are therefore more likely to result in oncosis. A sustained increase of [Ca2+]i has a number of deleterious effects including activation of proteases, endonucleases and phospholipases. Endonuclease activation results in single and double DNA strand breaks which, in turn, stimulate increased levels of p53 and poly-ADP ribosylation of nuclear proteins which are essential in DNA repair. Activation of proteases modifies a number of substrates including actin and related proteins, leading to bleb formation. Another important substrate is poly(ADP-ribose) polymerase (PARP), whose cleavage impairs DNA repair. Increased [Ca2+]i is also associated with activation of a number of protein kinases, such as MAP kinase, calmodulin kinase and others. Such kinases are involved in activation of transcription factors which initiate transcription of immediate-early genes, for example, c-fos, c-jun and c-myc, and in activation of phospholipase A2, which results in permeabilization of the plasma membrane and of intracellular membranes such as the inner membrane of mitochondria.

Oncosis

Oncosis, derived from the Greek word onkos, meaning swelling, is so named because in this type of prelethal change the cell begins to swell almost immediately following the injury (Majno and Joris 1995). The reason for the swelling is an increase in cations in the water within the cell. The principal cation responsible is sodium, which is normally regulated to maintain cell volume. However, in the absence of ATP, or if the Na+-ATPase of the plasmalemma is inhibited, volume control is lost because of the osmotic effect of intracellular protein, and sodium and water within the cell continue to increase. Among the early events in oncosis are, therefore, increased [Na+]i, which leads to cellular swelling, and increased [Ca2+]i, resulting either from influx from the extracellular space or release from intracellular stores. This results in swelling of the cytosol, swelling of the endoplasmic reticulum and Golgi apparatus, and the formation of watery blebs around the cell surface. The mitochondria initially undergo condensation, but later they too show high-amplitude swelling because of damage to the inner mitochondrial membrane. In this type of prelethal change, the chromatin undergoes condensation and ultimately degradation; however, the characteristic ladder pattern of apoptosis is not seen.

Necrosis

Necrosis refers to the series of changes that occur following cell death when the cell is converted to debris which is typically removed by the inflammatory response. Two types can be distinguished: oncotic necrosis and apoptotic necrosis. Oncotic necrosis typically occurs in large zones, for example, in a myocardial infarct or regionally in an organ after chemical toxicity, such as the renal proximal tubule following administration of HgCl2. Broad zones of an organ are involved and the necrotic cells rapidly incite an inflammatory reaction, first acute and then chronic. In the event that the organism survives, in many organs necrosis is followed by clearing away of the dead cells and regeneration, for example, in the liver or kidney following chemical toxicity. In contrast, apoptotic necrosis typically occurs on a single-cell basis and the necrotic debris is formed within phagocytes (macrophages or adjacent parenchymal cells). The earliest characteristics of necrotic cells include interruptions in plasma membrane continuity and the appearance of flocculent densities, representing denatured proteins, within the mitochondrial matrix. In some forms of injury that do not initially interfere with mitochondrial calcium accumulation, calcium phosphate deposits can be seen within the mitochondria. Other membrane systems, such as the ER, the lysosomes and the Golgi apparatus, similarly fragment. Ultimately, the nuclear chromatin undergoes lysis, resulting from attack by lysosomal hydrolases. Following cell death, lysosomal hydrolases such as cathepsins, nucleases and lipases play an important part in clearing away debris, since these enzymes have an acid pH optimum and can survive the low pH of necrotic cells while other cellular enzymes are denatured and inactivated.

Mechanisms

Initial stimulus

In the case of lethal injuries, the most common initial interactions resulting in injury leading to cell death are interference with energy metabolism, such as anoxia, ischaemia or inhibitors of respiration and glycolysis (for example, potassium cyanide, carbon monoxide and iodoacetate). As mentioned above, high doses of compounds that inhibit energy metabolism typically result in oncosis. The other common type of initial injury resulting in acute cell death is modification of the function of the plasma membrane (Trump and Arstila 1971; Trump, Berezesky and Osornio-Vargas 1981). This can be direct damage and permeabilization, as in the case of trauma or activation of the C5b-C9 complex of complement; mechanical damage to the cell membrane; or inhibition of the sodium-potassium (Na+-K+) pump with glycosides such as ouabain. Calcium ionophores such as ionomycin or A23187, which rapidly carry [Ca2+] down the gradient into the cell, also cause acute lethal injury. In some cases, the pattern in the prelethal change is apoptosis; in others, it is oncosis.

Signalling pathways

With many types of injury, mitochondrial respiration and oxidative phosphorylation are rapidly affected. In some cells, this stimulates anaerobic glycolysis, which is capable of maintaining ATP, but with many injuries this is inhibited. The lack of ATP results in failure to energize a number of important homeostatic processes, in particular, control of intracellular ion homeostasis (Trump and Berezesky 1992; Trump, Berezesky and Osornio-Vargas 1981). This results in rapid increases of [Ca2+]i, while increased [Na+] and [Cl-] results in cell swelling. Increases in [Ca2+]i result in the activation of a number of other signalling mechanisms discussed below, including a series of kinases, which can result in increased immediate-early gene transcription. Increased [Ca2+]i also modifies cytoskeletal function, in part resulting in bleb formation and in the activation of endonucleases, proteases and phospholipases. These seem to trigger many of the important effects discussed above, such as membrane damage through protease and lipase activation, direct degradation of DNA from endonuclease activation, and activation of kinases such as MAP kinase and calmodulin kinase, which in turn activate transcription factors.

Through extensive work on development in the invertebrates C. elegans and Drosophila, as well as in human and animal cells, a series of pro-death genes have been identified. Some of these invertebrate genes have been found to have mammalian counterparts. For example, the ced-3 gene, which is essential for programmed cell death in C. elegans, has protease activity and a strong homology with the mammalian interleukin-1β converting enzyme (ICE). A closely related gene called apopain or prICE has recently been identified with even closer homology (Nicholson et al. 1995). In Drosophila, the reaper gene seems to be involved in a signal that leads to programmed cell death. Other pro-death genes include the Fas membrane protein and the important tumour-suppressor gene, p53, which is widely conserved. p53 is induced at the protein level following DNA damage and, when phosphorylated, acts as a transcription factor for other genes such as gadd45 and waf-1, which are involved in cell death signalling. Other immediate-early genes such as c-fos, c-jun and c-myc also seem to be involved in some systems.

At the same time, there are anti-death genes which appear to counteract the pro-death genes. The first of these to be identified was ced-9 from C. elegans, which is homologous to bcl-2 in humans. These genes act in an as yet unknown way to prevent cell killing by either genetic or chemical toxins. Some recent evidence indicates that bcl-2 may act as an antioxidant. Currently, there is much effort underway to develop an understanding of the genes involved and to develop ways to activate or inhibit these genes, depending on the situation.

 


Sunday, 16 January 2011 16:18

Introduction and Concepts

Mechanistic toxicology is the study of how chemical or physical agents interact with living organisms to cause toxicity. Knowledge of the mechanism of toxicity of a substance enhances the ability to prevent toxicity and design more desirable chemicals; it constitutes the basis for therapy upon overexposure, and frequently enables a further understanding of fundamental biological processes. For purposes of this Encyclopaedia the emphasis will be placed on the use of animal studies to predict human toxicity. Different areas of toxicology include mechanistic, descriptive, regulatory, forensic and environmental toxicology (Klaassen, Amdur and Doull 1991). All of these benefit from understanding the fundamental mechanisms of toxicity.

Why Understand Mechanisms of Toxicity?

Understanding the mechanism by which a substance causes toxicity enhances different areas of toxicology in different ways. Mechanistic understanding helps the governmental regulator to establish legally binding safe limits for human exposure. It helps toxicologists in recommending courses of action regarding clean-up or remediation of contaminated sites and, along with physical and chemical properties of the substance or mixture, can be used to select the degree of protective equipment required. Mechanistic knowledge is also useful in forming the basis for therapy and the design of new drugs for treatment of human disease. For the forensic toxicologist the mechanism of toxicity often provides insight as to how a chemical or physical agent can cause death or incapacitation.

If the mechanism of toxicity is understood, descriptive toxicology becomes useful in predicting the toxic effects of related chemicals. It is important to understand, however, that a lack of mechanistic information does not deter health professionals from protecting human health. Prudent decisions based on animal studies and human experience are used to establish safe exposure levels. Traditionally, a margin of safety was established by taking the no observed adverse effect level (NOAEL), or the lowest observed adverse effect level (LOAEL), from animal studies (using repeated-exposure designs) and dividing that level by a factor of 100 for occupational exposure or 1,000 for other human environmental exposure. The success of this process is evident from the few incidents of adverse health effects attributed to chemical exposure in workers where appropriate exposure limits had been set and adhered to. In addition, the human lifespan continues to increase, as does the quality of life. Overall, the use of toxicity data has led to effective regulatory and voluntary control. Detailed knowledge of toxic mechanisms will enhance the predictability of newer risk models currently being developed and will result in continuous improvement.
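
A minimal Python sketch of this traditional calculation is shown below, using a hypothetical NOAEL value; it simply divides the animal-study level by 100 for occupational exposure or by 1,000 for other environmental exposure, as described above:

# Traditional margin-of-safety calculation as described in the text.
# The NOAEL value is a hypothetical example.

def acceptable_exposure(noael, occupational=True):
    """Divide the animal NOAEL by 100 (occupational) or 1,000 (environmental)."""
    safety_factor = 100 if occupational else 1000
    return noael / safety_factor

noael_from_animal_study = 50.0  # hypothetical, mg/kg/day
print(acceptable_exposure(noael_from_animal_study, occupational=True))   # 0.5
print(acceptable_exposure(noael_from_animal_study, occupational=False))  # 0.05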

Understanding environmental mechanisms is complex and presumes a knowledge of ecosystem disruption and homeostasis (balance). While not discussed in this article, an enhanced understanding of toxic mechanisms and their ultimate consequences in an ecosystem would help scientists to make prudent decisions regarding the handling of municipal and industrial waste material. Waste management is a growing area of research and will continue to be very important in the future.

Techniques for Studying Mechanisms of Toxicity

The majority of mechanistic studies start with a descriptive toxicological study in animals or clinical observations in humans. Ideally, animal studies include careful behavioural and clinical observations, careful biochemical examination of elements of the blood and urine for signs of adverse function of major biological systems in the body, and a post-mortem evaluation of all organ systems by microscopic examination to check for injury (see OECD test guidelines; EC directives on chemical evaluation; US EPA test rules; Japan chemicals regulations). This is analogous to a thorough human physical examination that would take place in a hospital over a two- to three-day time period except for the post-mortem examination.

Understanding mechanisms of toxicity is the art and science of observation, creativity in the selection of techniques to test various hypotheses, and innovative integration of signs and symptoms into a causal relationship. Mechanistic studies start with exposure, follow the time-related distribution and fate in the body (pharmacokinetics), and measure the resulting toxic effect at some level of the system and at some dose level. Different substances can act at different levels of the biological system in causing toxicity.

Exposure

The route of exposure in mechanistic studies is usually the same as for human exposure. Route is important because there can be effects that occur locally at the site of exposure in addition to systemic effects after the chemical has been absorbed into the blood and distributed throughout the body. A simple yet cogent example of a local effect would be irritation and eventual corrosion of the skin following application of strong acid or alkaline solutions designed for cleaning hard surfaces. Similarly, irritation and cellular death can occur in cells lining the nose and/or lungs following exposure to irritant vapours or gases such as oxides of nitrogen or ozone. (Both are constituents of air pollution, or smog). Following absorption of a chemical into blood through the skin, lungs or gastrointestinal tract, the concentration in any organ or tissue is controlled by many factors which determine the pharmacokinetics of the chemical in the body. The body has the ability to activate as well as detoxify various chemicals as noted below.

Role of Pharmacokinetics in Toxicity

Pharmacokinetics describes the time relationships for chemical absorption, distribution, metabolism (biochemical alterations in the body) and elimination or excretion from the body. Relative to mechanisms of toxicity, these pharmacokinetic variables can be very important and in some instances determine whether toxicity will or will not occur. For instance, if a material is not absorbed in a sufficient amount, systemic toxicity (inside the body) will not occur. Conversely, a highly reactive chemical that is detoxified quickly (in seconds or minutes) by digestive or liver enzymes may not have the time to cause toxicity. Some polycyclic halogenated substances and mixtures, as well as certain metals like lead, would cause little toxicity if excretion were rapid; it is their accumulation to sufficiently high levels that determines their toxicity, because excretion is slow (retention is sometimes measured in years). Fortunately, most chemicals do not have such long retention in the body. Accumulation of an innocuous material still would not induce toxicity. The rate of elimination and detoxication is frequently expressed as the half-life of the chemical, which is the time for 50% of the chemical to be excreted or altered to a non-toxic form.
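
Where elimination is approximately first order (a common simplifying assumption), the fraction of an absorbed dose remaining in the body can be sketched directly from the half-life; the values in the following minimal Python example are hypothetical:

# Fraction of a chemical remaining under first-order elimination, expressed in
# terms of its half-life. All values are hypothetical.

def fraction_remaining(time_elapsed, half_life):
    """Return the fraction of the absorbed dose still present after time_elapsed."""
    return 0.5 ** (time_elapsed / half_life)

# Example: a chemical with a 6-hour half-life, 24 hours after exposure.
print(fraction_remaining(24.0, 6.0))  # 0.0625, i.e., about 6% remains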

However, if a chemical accumulates in a particular cell or organ, that may signal a reason to further examine its potential toxicity in that organ. More recently, mathematical models have been developed to extrapolate pharmacokinetic variables from animals to humans. These pharmacokinetic models are extremely useful in generating hypotheses and testing whether the experimental animal may be a good representation for humans. Numerous chapters and texts have been written on this subject (Gehring et al. 1976; Reitz et al. 1987; Nolan et al. 1995). A simplified example of a physiological model is depicted in figure 1.

Figure 1. A simplified pharmacokinetic model

TOX210F1
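
The kind of simplified model referred to above can be roughly illustrated by a minimal one-compartment Python sketch with first-order absorption and elimination; all parameter values below are hypothetical, and real physiologically based models contain many more compartments and flows:

# Minimal one-compartment pharmacokinetic sketch with first-order absorption (ka)
# and first-order elimination (ke), integrated with a simple Euler step.
# All parameter values are hypothetical illustrations only.

def simulate(dose=100.0, ka=1.0, ke=0.2, hours=24.0, dt=0.01):
    steps = round(hours / dt)
    record_every = round(4.0 / dt)        # record roughly every 4 hours
    gut, body = dose, 0.0                 # amount at absorption site and in the body
    history = []
    for step in range(steps + 1):
        if step % record_every == 0:
            history.append((step * dt, body))
        absorbed = ka * gut * dt          # transfer from the gut into the body
        eliminated = ke * body * dt       # first-order loss from the body
        gut -= absorbed
        body += absorbed - eliminated
    return history

for t, amount in simulate():
    print(f"t = {t:5.2f} h, amount in body = {amount:6.2f}")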

Different Levels and Systems Can Be Adversely Affected

Toxicity can be described at different biological levels. Injury can be evaluated in the whole person (or animal), the organ system, the cell or the molecule. Organ systems include the immune, respiratory, cardiovascular, renal, endocrine, digestive, musculoskeletal, blood, reproductive and central nervous systems. Some key organs include the liver, kidney, lung, brain, skin, eyes, heart, testes or ovaries, and other major organs. At the cellular/biochemical level, adverse effects include interference with normal protein function, endocrine receptor function and metabolic energy production, or xenobiotic (foreign substance) enzyme inhibition or induction. Adverse effects at the molecular level include alteration of the normal function of DNA-RNA transcription, of specific cytoplasmic and nuclear receptor binding, and of genes or gene products. Ultimately, dysfunction in a major organ system is likely caused by a molecular alteration in a particular target cell within that organ. However, it is not always possible to trace a mechanism back to a molecular origin of causation, nor is it necessary. Intervention and therapy can be designed without a complete understanding of the molecular target. However, knowledge about the specific mechanism of toxicity increases the predictive value and accuracy of extrapolation to other chemicals. Figure 2 is a diagrammatic representation of the various levels at which interference with normal physiological processes can be detected. The arrows indicate that the consequences to an individual can be determined from the top down (exposure, pharmacokinetics to system/organ toxicity) or from the bottom up (molecular change, cellular/biochemical effect to system/organ toxicity).

Figure 2. Representation of mechanisms of toxicity

TOX210F2

Examples of Mechanisms of Toxicity

Mechanisms of toxicity can be straightforward or very complex. Frequently, there is a difference among the type of toxicity, the mechanism of toxicity, and the level of effect, related to whether the adverse effects are due to a single, acute high dose (like an accidental poisoning), or a lower-dose repeated exposure (from occupational or environmental exposure). Classically, for testing purposes, an acute, single high dose is given by direct intubation into the stomach of a rodent or exposure to an atmosphere of a gas or vapour for two to four hours, whichever best resembles the human exposure. The animals are observed over a two-week period following exposure and then the major external and internal organs are examined for injury. Repeated-dose testing ranges from months to years. For rodent species, two years is considered a chronic (lifetime) study sufficient to evaluate toxicity and carcinogenicity, whereas for non-human primates, two years would be considered a subchronic (less than lifetime) study to evaluate repeated dose toxicity. Following exposure a complete examination of all tissues, organs and fluids is conducted to determine any adverse effects.

Acute Toxicity Mechanisms

The following examples are specific to high-dose, acute effects which can lead to death or severe incapacitation. However, in some cases, intervention will result in transient and fully reversible effects. The dose or severity of exposure will determine the result.

Simple asphyxiants. The mechanism of toxicity for inert gases and some other non-reactive substances is lack of oxygen (anoxia). These chemicals, which cause deprivation of oxygen to the central nervous system (CNS), are termed simple asphyxiants. If a person enters a closed space that contains nitrogen without sufficient oxygen, immediate oxygen depletion occurs in the brain and leads to unconsciousness and eventual death if the person is not rapidly removed. In extreme cases (near zero oxygen) unconsciousness can occur in a few seconds. Rescue depends on rapid removal to an oxygenated environment. If rescue is delayed, the person may survive but with irreversible brain damage because of the death of neurons, which cannot regenerate.

Chemical asphyxiants. Carbon monoxide (CO) competes with oxygen for binding to haemoglobin (in red blood cells) and therefore deprives tissues of oxygen for energy metabolism; cellular death can result. Intervention includes removal from the source of CO and treatment with oxygen. The direct use of oxygen is based on the toxic action of CO. Another potent chemical asphyxiant is cyanide. The cyanide ion interferes with cellular metabolism and the utilization of oxygen for energy. Treatment with sodium nitrite converts haemoglobin in red blood cells to methaemoglobin. Methaemoglobin has a greater binding affinity for the cyanide ion than does the cellular target of cyanide. Consequently, the methaemoglobin binds the cyanide and keeps it away from the target cells. This forms the basis for antidotal therapy.

Central nervous system (CNS) depressants. Acute toxicity is characterized by sedation or unconsciousness for a number of materials like solvents which are not reactive or which are transformed to reactive intermediates. It is hypothesized that sedation/anaesthesia is due to an interaction of the solvent with the membranes of cells in the CNS, which impairs their ability to transmit electrical and chemical signals. While sedation may seem a mild form of toxicity and was the basis for development of the early anaesthetics, “the dose still makes the poison”. If a sufficient dose is administered by ingestion or inhalation, the animal can die of respiratory arrest. If anaesthetic death does not occur, this type of toxicity is usually readily reversible when the subject is removed from the environment or the chemical is redistributed or eliminated from the body.

Skin effects. Adverse effects to the skin can range from irritation to corrosion, depending on the substance encountered. Strong acids and alkaline solutions are incompatible with living tissue and are corrosive, causing chemical burns and possible scarring. Scarring is due to death of the dermal, deep skin cells responsible for regeneration. Lower concentrations may just cause irritation of the first layer of skin.

Another specific toxic mechanism of the skin is chemical sensitization. As an example, sensitization occurs when 2,4-dinitrochlorobenzene binds with natural proteins in the skin and the immune system recognizes the altered protein-bound complex as a foreign material. In responding to this foreign material, the immune system activates special cells to eliminate the foreign substance by release of mediators (cytokines) which cause a rash or dermatitis (see “Immunotoxicology”). This is the same immune system reaction that occurs on exposure to poison ivy. Immune sensitization is very specific to the particular chemical and takes at least two exposures before a response is elicited. The first exposure sensitizes (sets up the cells to recognize the chemical), and subsequent exposures trigger the immune system response. Removal from contact and symptomatic therapy with steroid-containing anti-inflammatory creams are usually effective in treating sensitized individuals. In serious or refractory cases a systemically acting immunosuppressant like prednisone is used in conjunction with topical treatment.

Lung sensitization. An immune sensitization response is elicited by toluene diisocyanate (TDI), but the target site is the lungs. TDI over-exposure in susceptible individuals causes lung oedema (fluid build-up), bronchial constriction and impaired breathing. This is a serious condition and requires removing the individual from potential subsequent exposures. Treatment is primarily symptomatic. Skin and lung sensitization follow a dose response. Exceeding the level set for occupational exposure can cause adverse effects.

Eye effects. Injury to the eye ranges from reddening of the outer layer (swimming-pool redness) to cataract formation in the lens to damage to the iris (the coloured part of the eye). Eye irritation tests are conducted when it is believed serious injury will not occur. Many of the mechanisms causing skin corrosion can also cause injury to the eyes. Materials corrosive to the skin, like strong acids (pH less than 2) and alkalis (pH greater than 11.5), are not tested in the eyes of animals because most will cause corrosion and blindness due to a mechanism similar to that which causes skin corrosion. In addition, surface-active agents like detergents and surfactants can cause eye injury ranging from irritation to corrosion. A group of materials that requires caution is the positively charged (cationic) surfactants, which can cause burns, permanent opacity of the cornea and vascularization (formation of blood vessels). Another chemical, dinitrophenol, has a specific effect of cataract formation. This appears to be related to the concentration of this chemical in the eye, which is an example of pharmacokinetic distributional specificity.

While the listing above is far from exhaustive, it is designed to give the reader an appreciation for various acute toxicity mechanisms.

Subchronic and Chronic Toxicity Mechanisms

When given as a single high dose, some chemicals do not have the same mechanism of toxicity as when given repeatedly as a lower but still toxic dose. When a single high dose is given, there is always the possibility of exceeding the person’s ability to detoxify or excrete the chemical, and this can lead to a different toxic response than when lower repetitive doses are given. Alcohol is a good example. High doses of alcohol lead to primary central nervous system effects, while lower repetitive doses result in liver injury.

Anticholinesterase inhibition. Most organophosphate pesticides, for example, have little mammalian toxicity until they are metabolically activated, primarily in the liver. The primary mechanism of action of organophosphates is the inhibition of acetylcholinesterase (AChE) in the brain and peripheral nervous system. AChE is the normal enzyme that terminates the stimulation of the neurotransmitter acetylcholine. Slight inhibition of AChE over an extended period has not been associated with adverse effects. At high levels of exposure, inability to terminate this neuronal stimulation results in overstimulation of the cholinergic nervous system. Cholinergic overstimulation ultimately results in a host of symptoms, including respiratory arrest, followed by death if not treated. The primary treatment is the administration of atropine, which blocks the effects of acetylcholine, and the administration of pralidoxime chloride, which reactivates the inhibited AChE. Therefore, both the cause and the treatment of organophosphate toxicity are addressed by understanding the biochemical basis of toxicity.

Metabolic activation. Many chemicals, including carbon tetrachloride, chloroform, acetylaminofluorene, nitrosamines, and paraquat are metabolically activated to free radicals or other reactive intermediates which inhibit and interfere with normal cellular function. At high levels of exposure this results in cell death (see “Cellular injury and cellular death”). While the specific interactions and cellular targets remain unknown, the organ systems which have the capability to activate these chemicals, like the liver, kidney and lung, are all potential targets for injury. Specifically, particular cells within an organ have a greater or lesser capacity to activate or detoxify these intermediates, and this capacity determines the intracellular susceptibility within an organ. Metabolism is one reason why an understanding of pharmacokinetics, which describes these types of transformations and the distribution and elimination of these intermediates, is important in recognizing the mechanism of action of these chemicals.

Cancer mechanisms. Cancer is a multiplicity of diseases, and while the understanding of certain types of cancer is increasing rapidly due to the many molecular biological techniques that have been developed since 1980, there is still much to learn. However, it is clear that cancer development is a multi-stage process, and critical genes are key to different types of cancer. Alterations in DNA (somatic mutations) in a number of these critical genes can cause increased susceptibility or cancerous lesions (see “Genetic toxicology”). Exposure to natural chemicals (in cooked foods like beef and fish), synthetic chemicals (like benzidine, used as a dye) or physical agents (ultraviolet light from the sun, radon from soil, gamma radiation from medical procedures or industrial activity) can all contribute to somatic gene mutations. However, there are natural and synthetic substances (such as anti-oxidants) and DNA repair processes which are protective and maintain homeostasis. It is clear that genetics is an important factor in cancer, since genetic disease syndromes such as xeroderma pigmentosum, in which normal DNA repair is lacking, dramatically increase susceptibility to skin cancer from exposure to ultraviolet light from the sun.

Reproductive mechanisms. As with cancer, many mechanisms of reproductive and/or developmental toxicity are known, but much remains to be learned. It is known that certain viruses (such as rubella), bacterial infections and drugs (such as thalidomide and vitamin A) will adversely affect development. Recently, work by Khera (1991), reviewed by Carney (1994), shows good evidence that the abnormal developmental effects seen in animal tests with ethylene glycol are attributable to acidic metabolites formed by the mother. This occurs when ethylene glycol is metabolized to acid metabolites, including glycolic and oxalic acid. The subsequent effects on the placenta and foetus appear to be due to this metabolic toxication process.

Conclusion

The intent of this article is to give a perspective on several known mechanisms of toxicity and the need for future study. It is important to understand that mechanistic knowledge is not absolutely necessary to protect human or environmental health. This knowledge will enhance the professional’s ability to better predict and manage toxicity. The actual techniques used in elucidating any particular mechanism depend upon the collective knowledge of the scientists and the thinking of those who make decisions regarding human health.

 


Any organization which seeks to establish and maintain the best state of mental, physical and social wellbeing of its employees needs to have policies and procedures which comprehensively address health and safety. These policies will include a mental health policy with procedures to manage stress based on the needs of the organization and its employees. These will be regularly reviewed and evaluated.

There are a number of options to consider in looking at the prevention of stress, which can be termed as primary, secondary and tertiary levels of prevention and address different stages in the stress process (Cooper and Cartwright 1994). Primary prevention is concerned with taking action to reduce or eliminate stressors (i.e., sources of stress), and positively promoting a supportive and healthy work environment. Secondary prevention is concerned with the prompt detection and management of depression and anxiety by increasing self-awareness and improving stress management skills. Tertiary prevention is concerned with the rehabilitation and recovery process of those individuals who have suffered or are suffering from serious ill health as a result of stress.

To develop an effective and comprehensive organizational policy on stress, employers need to integrate these three approaches (Cooper, Liukkonen and Cartwright 1996).

Primary Prevention

First, the most effective way of tackling stress is to eliminate it at its source. This may involve changes in personnel policies, improving communication systems, redesigning jobs, or allowing more decision making and autonomy at lower levels. Obviously, as the type of action required by an organization will vary according to the kinds of stressor operating, any intervention needs to be guided by some prior diagnosis or stress audit to identify what these stressors are and whom they are affecting.

Stress audits typically take the form of a self-report questionnaire administered to employees on an organization-wide, site or departmental basis. In addition to identifying the sources of stress at work and those individuals most vulnerable to stress, the questionnaire usually measures levels of employee job satisfaction, coping behaviour, and physical and psychological health compared with similar occupational groups and industries. Stress audits are an extremely effective way of directing organizational resources into areas where they are most needed. Audits also provide a means of regularly monitoring stress levels and employee health over time, and provide a baseline against which subsequent interventions can be evaluated.

Diagnostic instruments, such as the Occupational Stress Indicator (Cooper, Sloan and Williams 1988), are increasingly being used by organizations for this purpose. They are usually administered through occupational health and/or personnel/human resource departments in consultation with a psychologist. In smaller companies, there may be the opportunity to hold employee discussion groups or develop checklists which can be administered on a more informal basis. The agenda for such discussions/checklists should address the following issues:

  • job content and work scheduling
  • physical working conditions
  • employment terms and expectations of different employee groups within the organization
  • relationships at work
  • communication systems and reporting arrangements.

 

Another alternative is to ask employees to keep a stress diary for a few weeks in which they record any stressful events they encounter during the course of the day. Pooling this information on a group/departmental basis can be useful in identifying universal and persistent sources of stress.

Creating healthy and supportive networks/environments

Another key factor in primary prevention is the development of the kind of supportive organizational climate in which stress is recognized as a feature of modern industrial life and not interpreted as a sign of weakness or incompetence. Mental ill health is indiscriminate—it can affect anyone irrespective of their age, social status or job function. Therefore, employees should not feel awkward about admitting to any difficulties they encounter.

Organizations need to take explicit steps to remove the stigma often attached to those with emotional problems and maximize the support available to staff (Cooper and Williams 1994). Some of the formal ways in which this can be done include:

  • informing employees of existing sources of support and advice within the organization, like occupational health
  • specifically incorporating self-development issues within appraisal systems
  • extending and improving the “people” skills of managers and supervisors so that they convey a supportive attitude and can more comfortably handle employee problems.

 

Most importantly, there has to be demonstrable commitment to the issue of stress and mental health at work from both senior management and unions. This may require a move to more open communication and the dismantling of cultural norms within the organization which inherently promote stress among employees (e.g., cultural norms which encourage employees to work excessively long hours and feel guilty about leaving “on time”). Organizations with a supportive organizational climate will also be proactive in anticipating additional or new stressors which may be introduced as a result of proposed changes, such as restructuring or the introduction of new technology, and will take steps to address them, perhaps through training initiatives or greater employee involvement. Regular communication and increased employee involvement and participation play a key role in reducing stress in the context of organizational change.

Secondary Prevention

Initiatives which fall into this category are generally focused on training and education, and involve awareness activities and skill-training programmes.

Stress education and stress management courses serve a useful function in helping individuals to recognize the symptoms of stress in themselves and others and to extend and develop their coping skills and resilience to stress.

The form and content of this kind of training can vary immensely but often includes simple relaxation techniques, lifestyle advice and planning, basic training in time management, assertiveness and problem-solving skills. The aim of these programmes is to help employees to review the psychological effects of stress and to develop a personal stress-control plan (Cooper 1996).

This kind of programme can be beneficial to all levels of staff and is particularly useful in training managers to recognize stress in their subordinates and be aware of their own managerial style and its impact on those they manage. This can be of great benefit if carried out following a stress audit.

Health screening/health enhancement programmes

Organizations, with the cooperation of occupational health personnel, can also introduce initiatives which directly promote positive health behaviours in the workplace. Again, health promotion activities can take a variety of forms. They may include:

  • the introduction of regular medical check-ups and health screening
  • the design of “healthy” canteen menus
  • the provision of on-site fitness facilities and exercise classes
  • corporate membership or concessionary rates at local health and fitness clubs
  • the introduction of cardiovascular fitness programmes
  • advice on alcohol and dietary control (particularly cutting down on cholesterol, salt and sugar)
  • smoking-cessation programmes
  • advice on lifestyle management, more generally.

 

For organizations without the facilities of an occupational health department, there are external agencies that can provide a range of health-promotion programmes. Established health-promotion programmes in the United States have produced some impressive results (Karasek and Theorell 1990). For example, the New York Telephone Company’s Wellness Programme, designed to improve cardiovascular fitness, saved the organization $2.7 million in absence and treatment costs in one year alone.

Stress management/lifestyle programmes can be particularly useful in helping individuals to cope with environmental stressors which may have been identified by the organization, but which cannot be changed, e.g., job insecurity.

Tertiary Prevention

An important part of health promotion in the workplace is the detection of mental health problems as soon as they arise and the prompt referral of these problems for specialist treatment. The majority of those who develop mental illness make a complete recovery and are able to return to work. It is usually far more costly to retire a person early on medical grounds and re-recruit and train a successor than it is to spend time easing a person back to work. There are two aspects of tertiary prevention which organizations can consider:

Counselling

Organizations can provide access to confidential professional counselling services for employees who are experiencing problems in the workplace or personal setting (Swanson and Murphy 1991). Such services can be provided either by in-house counsellors or outside agencies in the form of an Employee Assistance Programme (EAP).

EAPs provide counselling, information and/or referral to appropriate counselling treatment and support services. Such services are confidential and usually provide a 24-hour contact line. Charges are normally made on a per capita basis calculated on the total number of employees and the number of counselling hours provided by the programme.

Counselling is a highly skilled business and requires extensive training. It is important to ensure that counsellors have received recognized counselling skills training and have access to a suitable environment which allows them to conduct this activity in an ethical and confidential manner.

Again, the provision of counselling services is likely to be particularly effective in dealing with stress as a result of stressors operating within the organization which cannot be changed (e.g., job loss) or stress caused by non-work related problems (e.g., bereavement, marital breakdown), but which nevertheless tend to spill over into work life. It is also useful in directing employees to the most appropriate sources of help for their problems.

Facilitating the return to work

For those employees who are absent from work as a result of stress, it has to be recognized that the return to work itself is likely to be a “stressful” experience. It is important that organizations are sympathetic and understanding in these circumstances. A “return to work” interview should be conducted to establish whether the individual concerned is ready and happy to return to all aspects of their job. Negotiations should involve careful liaison between the employee, line manager and doctor. Once the individual has made a partial or complete return to his or her duties, a series of follow-up interviews are likely to be useful to monitor their progress and rehabilitation. Again, the occupational health department can play an important role in the rehabilitation process.

The options outlined above should not be regarded as mutually exclusive but rather as potentially complementary. Stress-management training, health-promotion activities and counselling services are useful in extending the physical and psychological resources of individuals to help them modify their appraisal of a stressful situation and cope better with experienced distress (Berridge, Cooper and Highley 1997). However, there are many potential and persistent sources of stress that the individual is likely to perceive himself or herself as lacking the resources or positional power to change (e.g., the structure, management style or culture of the organization). Such stressors require organizational-level intervention if their long-term dysfunctional impact on employee health is to be overcome satisfactorily. They can only be identified by a stress audit.



Friday, 14 January 2011 19:54

Burnout

Burnout is a type of prolonged response to chronic emotional and interpersonal stressors on the job. It has been conceptualized as an individual stress experience embedded in a context of complex social relationships, and it involves the person’s conception of both self and others. As such, it has been an issue of particular concern for human services occupations where: (a) the relationship between providers and recipients is central to the job; and (b) the provision of service, care, treatment or education can be a highly emotional experience. There are several types of occupations that meet these criteria, including health care, social services, mental health, criminal justice and education. Even though these occupations vary in the nature of the contact between providers and recipients, they are similar in having a structured caregiving relationship centred around the recipient’s current problems (psychological, social and/or physical). Not only is the provider’s work on these problems likely to be emotionally charged, but solutions may not be easily forthcoming, thus adding to the frustration and ambiguity of the work situation. The person who works continuously with people under such circumstances is at greater risk from burnout.

The operational definition (and the corresponding research measure) that is most widely used in burnout research is a three-component model in which burnout is conceptualized in terms of emotional exhaustion, depersonalization and reduced personal accomplishment (Maslach 1993; Maslach and Jackson 1981/1986). Emotional exhaustion refers to feelings of being emotionally overextended and depleted of one’s emotional resources. Depersonalization refers to a negative, callous or excessively detached response to the people who are usually the recipients of one’s service or care. Reduced personal accomplishment refers to a decline in one’s feelings of competence and successful achievement in one’s work.
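As an illustration of how this three-component structure translates into measurement, the sketch below scores a set of hypothetical questionnaire responses as three separate subscales rather than a single burnout total. The item labels, response values and simple averaging rule are invented for illustration only; they are not the items or scoring procedure of the Maslach Burnout Inventory or any other published instrument.

```python
# Hypothetical illustration of scoring a three-component burnout measure.
# Items, responses and the averaging rule are invented for this sketch;
# they are not the actual items or scoring of the Maslach Burnout Inventory.
from statistics import mean

# Responses on a 0-6 frequency scale, keyed by made-up item labels.
responses = {
    "ee1": 5, "ee2": 4, "ee3": 6,         # emotional exhaustion items
    "dp1": 3, "dp2": 2,                   # depersonalization items
    "pa1": 2, "pa2": 1, "pa3": 2,         # personal accomplishment items
}

subscales = {
    "emotional_exhaustion":    ["ee1", "ee2", "ee3"],
    "depersonalization":       ["dp1", "dp2"],
    "personal_accomplishment": ["pa1", "pa2", "pa3"],
}

# Each component is scored separately; high exhaustion and depersonalization
# combined with low personal accomplishment is the pattern described above.
scores = {name: mean(responses[item] for item in items)
          for name, items in subscales.items()}

print(scores)
```

Keeping the three scores separate, rather than collapsing them into one total, is what allows interventions to be targeted at a specific component of burnout, as discussed below.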

This multidimensional model of burnout has important theoretical and practical implications. It provides a more complete understanding of this form of job stress by locating it within its social context and by identifying the variety of psychological reactions that different workers can experience. Such differential responses may not be simply a function of individual factors (such as personality), but may reflect the differential impact of situational factors on the three burnout dimensions. For example, certain job characteristics may influence the sources of emotional stress (and thus emotional exhaustion), or the resources available to handle the job successfully (and thus personal accomplishment). This multidimensional approach also implies that interventions to reduce burnout should be planned and designed in terms of the particular component of burnout that needs to be addressed. That is, it may be more effective to consider how to reduce the likelihood of emotional exhaustion, or to prevent the tendency to depersonalize, or to enhance one’s sense of accomplishment, rather than to use a more unfocused approach.

Consistent with this social framework, the empirical research on burnout has focused primarily on situational and job factors. Thus, studies have included such variables as relationships on the job (clients, colleagues, supervisors) and at home (family), job satisfaction, role conflict and role ambiguity, job withdrawal (turnover, absenteeism), expectations, workload, type of position and job tenure, institutional policy and so forth. The personal factors that have been studied are most often demographic variables (sex, age, marital status, etc.). In addition, some attention has been given to personality variables, personal health, relations with family and friends (social support at home), and personal values and commitment. In general, job factors are more strongly related to burnout than are biographical or personal factors. In terms of antecedents of burnout, the three factors of role conflict, lack of control or autonomy, and lack of social support on the job, seem to be most important. The effects of burnout are seen most consistently in various forms of job withdrawal and dissatisfaction, with the implication of a deterioration in the quality of care or service provided to clients or patients. Burnout seems to be correlated with various self-reported indices of personal dysfunction, including health problems, increased use of alcohol and drugs, and marital and family conflicts. The level of burnout seems fairly stable over time, underscoring the notion that its nature is more chronic than acute (see Kleiber and Enzmann 1990; Schaufeli, Maslach and Marek 1993 for reviews of the field).

An issue for future research concerns possible diagnostic criteria for burnout. Burnout has often been described in terms of dysphoric symptoms such as exhaustion, fatigue, loss of self-esteem and depression. However, depression is considered to be context-free and pervasive across all situations, whereas burnout is regarded as job-related and situation-specific. Other symptoms include problems in concentration, irritability and negativism, as well as a significant decrease in work performance over a period of several months. It is usually assumed that burnout symptoms manifest themselves in “normal” persons who do not suffer from prior psychopathology or an identifiable organic illness. The implication of these ideas about possible distinctive symptoms of burnout is that burnout could be diagnosed and treated at the individual level.

However, given the evidence for the situational aetiology of burnout, more attention has been given to social, rather than personal, interventions. Social support, particularly from one’s peers, seems to be effective in reducing the risk of burnout. Adequate job training that includes preparation for difficult and stressful work-related situations helps develop people’s sense of self-efficacy and mastery in their work roles. Involvement in a larger community or action-oriented group can also counteract the helplessness and pessimism that are commonly evoked by the absence of long-term solutions to the problems with which the worker is dealing. Accentuating the positive aspects of the job and finding ways to make ordinary tasks more meaningful are additional methods for gaining greater self-efficacy and control.

There is a growing tendency to view burnout as a dynamic process, rather than a static state, and this has important implications for the proposal of developmental models and process measures. The research to be expected from this newer perspective should yield increasingly sophisticated knowledge about the experience of burnout and should enable both individuals and institutions to deal with this social problem more effectively.


Back

Friday, 14 January 2011 19:53

Mental Illness

Carles Muntaner and William W. Eaton

Introduction

Mental illness is one of the chronic outcomes of work stress that inflicts a major social and economic burden on communities (Jenkins and Coney 1992; Miller and Kelman 1992). Two disciplines, psychiatric epidemiology and mental health sociology (Aneshensel, Rutter and Lachenbruch 1991), have studied the effects of psychosocial and organizational factors of work on mental illness. These studies can be classified according to four different theoretical and methodological approaches: (1) studies of only a single occupation; (2) studies of broad occupational categories as indicators of social stratification; (3) comparative studies of occupational categories; and (4) studies of specific psychosocial and organizational risk factors. We review each of these approaches and discuss their implications for research and prevention.

Studies of a Single Occupation

There are numerous studies in which the focus has been a single occupation. Depression has been the focus of interest in recent studies of secretaries (Garrison and Eaton 1992), professionals and managers (Phelan et al. 1991; Bromet et al. 1990), computer workers (Mino et al. 1993), fire-fighters (Guidotti 1992), teachers (Schonfeld 1992), and “maquiladora” workers (Guendelman and Silberg 1993). Alcoholism and drug abuse and dependence have recently been related to mortality among bus drivers (Michaels and Zoloth 1991) and to managerial and professional occupations (Bromet et al. 1990). Symptoms of anxiety and depression which are indicative of psychiatric disorder have been found among garment workers, nurses, teachers, social workers, offshore oil industry workers and young physicians (Brisson, Vezina and Vinet 1992; Firth-Cozens 1987; Fletcher 1988; McGrath, Reid and Boore 1989; Parkes 1992). The lack of a comparison group makes it difficult to determine the significance of this type of study.

Studies of Broad Occupational Categories as Indicators of Social Stratification

The use of occupations as indicators of social stratification has a long tradition in mental health research (Liberatos, Link and Kelsey 1988). Workers in unskilled manual jobs and lower-grade civil servants have shown high prevalence rates of minor psychiatric disorders in England (Rodgers 1991; Stansfeld and Marmot 1992). Alcoholism has been found to be prevalent among blue-collar workers in Sweden (Ojesjo 1980) and even more prevalent among managers in Japan (Kawakami et al. 1992). A serious weakness of this type of study is the failure to differentiate conceptually between the effects of occupations per se and “lifestyle” factors associated with occupational strata. Occupation is also an indicator of social stratification in a sense different from social class, in that the latter implies control over productive assets (Kohn et al. 1990; Muntaner et al. 1994). However, there have not been empirical studies of mental illness using this conceptualization.

Comparative Studies of Occupational Categories

Census categories for occupations constitute a readily available source of information that allows one to explore associations between occupations and mental illness (Eaton et al. 1990). Epidemiological Catchment Area (ECA) study analyses of comprehensive occupational categories have yielded findings of a high prevalence of depression for professional, administrative support and household services occupations (Roberts and Lee 1993). In another major epidemiological study, the Alameda County Study, high rates of depression were found among workers in blue-collar occupations (Kaplan et al. 1991). High 12-month prevalence rates of alcohol dependence among workers in the United States have been found in craft occupations (15.6%) and labourers (15.2%) among men, and in farming, forestry and fishing occupations (7.5%) and unskilled service occupations (7.2%) among women (Harford et al. 1992). ECA rates of alcohol abuse and dependence yielded high prevalence among transportation, craft and labourer occupations (Roberts and Lee 1993). Workers in the service sector, drivers and unskilled workers showed high rates of alcoholism in a study of the Swedish population (Agren and Romelsjo 1992). Twelve-month prevalence of drug abuse or dependence in the ECA study was higher among farming (6%), craft (4.7%), and operator, transportation and labourer (3.3%) occupations (Roberts and Lee 1993). The ECA analysis of combined prevalence for all psychoactive substance abuse or dependence syndromes (Anthony et al. 1992) yielded higher prevalence rates for construction labourers, carpenters, construction trades as a whole, waiters, waitresses and transportation and moving occupations. In another ECA analysis (Muntaner et al. 1991), as compared to managerial occupations, greater risk of schizophrenia was found among private household workers, while artists and workers in the construction trades were found to be at higher risk of schizophrenia (delusions and hallucinations) according to criterion A of the Diagnostic and Statistical Manual of Mental Disorders (DSM-III) (APA 1980).

Several ECA studies have been conducted with more specific occupational categories. In addition to specifying occupational environments more closely, they adjust for sociodemographic factors which might have led to spurious results in uncontrolled studies. High 12-month prevalence rates of major depression, above the 3 to 5% found in the general population (Robins and Regier 1990), have been reported for data entry keyers and computer equipment operators (13%) and for typists, lawyers, special education teachers and counsellors (10%) (Eaton et al. 1990). After adjustment for sociodemographic factors, lawyers, teachers and counsellors had significantly elevated rates when compared to the employed population (Eaton et al. 1990). In a detailed analysis of 104 occupations, construction labourers, skilled construction trades, heavy truck drivers and material movers showed high rates of alcohol abuse or dependence (Mandell et al. 1992).

Comparative studies of occupational categories suffer from the same flaws as social stratification studies. Thus, a problem with occupational categories is that specific risk factors are bound to be missed. In addition, “lifestyle” factors associated with occupational categories remain a potent explanation for results.

Studies of Specific Psychosocial and Organizational Risk Factors

Most studies of work stress and mental illness have been conducted with scales from Karasek’s Demand/Control model (Karasek and Theorell 1990) or with measures derived from the Dictionary of Occupational Titles (DOT) (Cain and Treiman 1981). In spite of the methodological and theoretical differences underlying these systems, they measure similar psychosocial dimensions (control, substantive complexity and job demands) (Muntaner et al. 1993). Job demands have been associated with major depressive disorder among male power-plant workers (Bromet 1988). Occupations involving lack of direction, control or planning have been shown to mediate the relation between socioeconomic status and depression (Link et al. 1993). However, in one study the relationship between low control and depression was not found (Guendelman and Silberg 1993). The number of negative work-related effects, lack of intrinsic job rewards and organizational stressors such as role conflict and ambiguity have also been associated with major depression (Phelan et al. 1991). Heavy alcohol drinking and alcohol-related problems have been linked to working overtime and to lack of intrinsic job rewards among men and to job insecurity among women in Japan (Kawakami et al. 1993), and to high demands and low control among males in the United States (Bromet 1988). Also among US males, high psychological or physical demands and low control were predictive of alcohol abuse or dependence (Crum et al. 1995). In another ECA analysis, high physical demands and low skill discretion were predictive of drug dependence (Muntaner et al. 1995). Physical demands and job hazards were predictors of schizophrenia or delusions or hallucinations in three US studies (Muntaner et al. 1991; Link et al. 1986; Muntaner et al. 1993). Physical demands have also been associated with psychiatric disease in the Swedish population (Lundberg 1991). These investigations have the potential for prevention because specific, potentially malleable risk factors are the focus of study.
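As a concrete illustration of how studies based on Karasek’s Demand/Control model typically operationalize exposure, the sketch below classifies workers into the familiar demand/control quadrants using median splits. The scale values and cut-off rule are assumptions made for illustration; published studies use validated questionnaire scales and a variety of cut-off conventions.

```python
# Illustrative sketch of the demand/control ("job strain") quadrant
# classification often used with Karasek's model. Scores and the median-split
# rule are assumptions for this example, not a standardized scoring procedure.
from statistics import median

workers = [
    {"id": 1, "demands": 30, "control": 60},
    {"id": 2, "demands": 42, "control": 35},
    {"id": 3, "demands": 25, "control": 70},
    {"id": 4, "demands": 45, "control": 40},
]

demand_cut = median(w["demands"] for w in workers)
control_cut = median(w["control"] for w in workers)

def quadrant(worker):
    """Assign a worker to one of the four demand/control quadrants."""
    high_demands = worker["demands"] > demand_cut
    high_control = worker["control"] > control_cut
    if high_demands and not high_control:
        return "high strain"   # the combination hypothesized to carry most risk
    if high_demands and high_control:
        return "active"
    if not high_demands and high_control:
        return "low strain"
    return "passive"

for w in workers:
    print(w["id"], quadrant(w))
```

In analyses of the kind cited above, membership in the high-strain quadrant (high demands combined with low control) is then related to outcomes such as depression or alcohol problems.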

Implications for Research and Prevention

Future studies might benefit from studying the demographic and sociological characteristics of workers in order to sharpen their focus on the occupations proper (Mandell et al. 1992). When occupation is considered an indicator of social stratification, adjustment for non-work stressors should be attempted. The effects of chronic exposure to lack of democracy in the workplace need to be investigated (Johnson and Johansson 1991). A major initiative for the prevention of work-related psychological disorders has emphasized improving working conditions, services, research and surveillance (Keita and Sauter 1992; Sauter, Murphy and Hurrell 1990).

While some researchers maintain that job redesign can improve both productivity and workers’ health (Karasek and Theorell 1990), others have argued that a firm’s profit maximization goals and workers’ mental health are in conflict (Phelan et al. 1991; Muntaner and O’Campo 1993; Ralph 1983).



Friday, 14 January 2011 19:46

Musculoskeletal Disorders

There is growing evidence in the occupational health literature that psychosocial work factors may influence the development of musculoskeletal problems, including both low back and upper extremity disorders (Bongers et al. 1993). Psychosocial work factors are defined as aspects of the work environment (such as work roles, work pressure, relationships at work) that can contribute to the experience of stress in individuals (Lim and Carayon 1994; ILO 1986). This paper provides a synopsis of the evidence and underlying mechanisms linking psychosocial work factors and musculoskeletal problems with the emphasis on studies of upper extremity disorders among office workers. Directions for future research are also discussed.

An impressive array of studies from 1985 to 1995 linked workplace psychosocial factors to upper extremity musculoskeletal problems in the office work environment (see Moon and Sauter 1996 for an extensive review). In the United States, this relationship was first suggested in exploratory research by the National Institute for Occupational Safety and Health (NIOSH) (Smith et al. 1981). Results of this research indicated that video display unit (VDU) operators who reported less autonomy and role clarity and greater work pressure and management control over their work processes also reported more musculoskeletal problems than their counterparts who did not work with VDUs (Smith et al. 1981).

Recent studies employing more powerful inferential statistical techniques point more strongly to an effect of psychosocial work factors on upper extremity musculoskeletal disorders among office workers. For example, Lim and Carayon (1994) used structural analysis methods to examine the relationship between psychosocial work factors and upper extremity musculoskeletal discomfort in a sample of 129 office workers. Results showed that psychosocial factors such as work pressure, task control and production quotas were important predictors of upper extremity musculoskeletal discomfort, especially in the neck and shoulder regions. Demographic factors (age, gender, tenure with employer, hours of computer use per day) and other confounding factors (self-reports of medical conditions, hobbies and keyboard use outside work) were controlled for in the study and were not related to any of these problems.

Confirmatory findings were reported by Hales et al. (1994) in a NIOSH study of musculoskeletal disorders among 533 telecommunications workers in three different metropolitan areas. Two types of musculoskeletal outcomes were investigated: (1) upper extremity musculoskeletal symptoms determined by questionnaire alone; and (2) potential work-related upper extremity musculoskeletal disorders determined by physical examination in addition to the questionnaire. Using regression techniques, the study found that factors such as work pressure and little decision-making opportunity were associated both with intensified musculoskeletal symptoms and with increased physical evidence of disease. Similar relationships have been observed in the industrial environment, but mainly for back pain (Bongers et al. 1993).
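To make the analytic approach of such studies more concrete, the hypothetical sketch below fits an ordinary least squares regression of a self-reported discomfort score on two psychosocial factors while controlling for demographic variables, broadly in the spirit of the analyses described above. The variable names, data values and choice of OLS are assumptions for illustration; they do not reproduce the models or data of Lim and Carayon (1994) or Hales et al. (1994).

```python
# Hypothetical sketch of regressing a musculoskeletal discomfort score on
# psychosocial work factors with demographic controls. Data and variable
# names are invented; this is not the model or data of the cited studies.
import pandas as pd
import statsmodels.formula.api as smf

# One row per worker in a small, made-up survey sample.
data = pd.DataFrame({
    "discomfort":     [2, 5, 3, 7, 4, 6, 1, 8],    # self-reported score
    "work_pressure":  [1, 4, 2, 5, 3, 5, 1, 5],    # psychosocial factor
    "task_control":   [5, 3, 4, 1, 4, 2, 5, 2],    # psychosocial factor
    "age":            [25, 40, 33, 50, 29, 45, 31, 38],
    "computer_hours": [4, 7, 5, 8, 6, 7, 3, 8],    # daily computer use
})

# Ordinary least squares: discomfort as a function of psychosocial factors,
# adjusting for age and daily hours of computer use.
model = smf.ols(
    "discomfort ~ work_pressure + task_control + age + computer_hours",
    data=data,
).fit()

print(model.summary())
```

In studies of the kind cited, the outcome is often measured with validated symptom questionnaires or physical examinations, and logistic rather than linear regression is used when the outcome is the presence or absence of a disorder.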

Researchers have suggested a variety of mechanisms underlying the relationship between psychosocial factors and musculoskeletal problems (Sauter and Swanson 1996; Smith and Carayon 1996; Lim 1994; Bongers et al. 1993). These mechanisms can be classified into four categories:

  1. psychophysiological
  2. behavioural
  3. physical
  4. perceptual.

 

Psychophysiological mechanisms

It has been demonstrated that individuals subject to stressful psychosocial working conditions also exhibit increased autonomic arousal (e.g., increased catecholamine secretion, increased heart rate and blood pressure, increased muscle tension, etc.) (Frankenhaeuser and Gardell 1976). This is a normal and adaptive psychophysiological response which prepares the individual for action. However, prolonged exposure to stress may have a deleterious effect on musculoskeletal function as well as on health in general. For example, stress-related muscle tension may increase the static loading of muscles, thereby accelerating muscle fatigue and associated discomfort (Westgaard and Bjorklund 1987; Grandjean 1986).

Behavioural mechanisms

Individuals who are under stress may alter their work behaviour in a way that increases musculoskeletal strain. For example, psychological stress may result in greater application of force than necessary during typing or other manual tasks, leading to increased wear and tear on the musculoskeletal system.

Physical mechanisms

Psychosocial factors may influence the physical (ergonomic) demands of the job directly. For example, an increase in time pressure is likely to lead to an increase in work pace (i.e., increased repetition) and increased strain. Alternatively, workers who are given more control over their tasks may be able to adjust their tasks in ways that lead to reduced repetitiveness (Lim and Carayon 1994).

Perceptual mechanisms

Sauter and Swanson (1996) suggest that the relationship between biomechanical stressors (e.g., ergonomic factors) and the development of musculoskeletal problems is mediated by perceptual processes which are influenced by workplace psychosocial factors. For example, symptoms might become more evident in dull, routine jobs than in more engrossing tasks which more fully occupy the attention of the worker (Pennebaker and Hall 1982).

Additional research is needed to assess the relative importance of each of these mechanisms and their possible interactions. Further, our understanding of causal relationships between psychosocial work factors and musculoskeletal disorders would benefit from: (1) increased use of longitudinal study designs; (2) improved methods for assessing and disentangling psychosocial and physical exposures; and (3) improved measurement of musculoskeletal outcomes.

Still, the current evidence linking psychosocial factors and musculoskeletal disorders is impressive and suggests that psychosocial interventions probably play an important role in preventing musculoskeletal problems in the workplace. In this regard, several publications (NIOSH 1988; ILO 1986) provide directions for optimizing the psychosocial environment at work. As suggested by Bongers et al. (1993), special attention should be given to providing a supportive work environment, manageable workloads and increased worker autonomy. Positive effects of such variables were evident in a case study by Westin (1990) of the Federal Express Corporation. According to Westin, a programme of work reorganization to provide an “employee-supportive” work environment, improve communications and reduce work and time pressures was associated with minimal evidence of musculoskeletal health problems.




" DISCLAIMER: The ILO does not take responsibility for content presented on this web portal that is presented in any language other than English, which is the language used for the initial production and peer-review of original content. Certain statistics have not been updated since the production of the 4th edition of the Encyclopaedia (1998)."
